Artificial intelligence is entering a new phase. The conversation is shifting from model innovation to operational reality. Organizations are discovering that building AI models is often the easiest part of the journey. Running those models reliably, securely, and at scale across enterprise environments is where the complexity emerges.
Many AI initiatives slow down not because teams lack GPUs, data, or talent, but because there is no unified operating pattern that safely connects all of those components into production. AI systems aren't single applications. They're distributed ecosystems of data pipelines, inference services, orchestration layers, and, increasingly, autonomous agents interacting with enterprise systems in real time.
Cisco Secure AI Factory with NVIDIA is built around a simple but transformative idea: AI must be treated as an end-to-end system. Performance, data readiness, cloud-native operations, and security can't be designed separately. They must be engineered together from the start.
At VAST Forward 2026, we're demonstrating how that principle translates into a working secure AI data platform. This isn't a future concept or hypothetical architecture. It's a real, deployable reference implementation built using NVIDIA accelerated computing infrastructure and software, VAST data services, Cisco infrastructure, the Isovalent Enterprise Platform based on Cilium and Tetragon, and Cisco AI Defense. It reflects a repeatable approach to operationalizing AI today while continuing to evolve toward deeper integration over time.
The new reality of enterprise AI
The rise of retrieval-augmented generation (RAG) and agent-driven applications is fundamentally reshaping how organizations interact with their data. AI systems are no longer isolated workloads. They continuously retrieve information, exchange context between services, and execute automated actions across enterprise environments.
This transformation introduces a new kind of operational challenge. The attack surface expands dramatically as AI workloads generate constant east-west traffic inside Kubernetes clusters. Runtime behavior becomes more dynamic as containers load libraries, execute helper processes, and interact with external services. At the same time, models and agents introduce risks that traditional security tools were never designed to address, including prompt injection, sensitive data leakage, and uncontrolled tool execution.
Enterprise leaders aren't asking whether these risks exist. They're asking whether AI can be trusted to deliver measurable outcomes without exposing the organization to unacceptable operational or regulatory risk. The answer lies in designing AI platforms where security is inseparable from performance and scalability.
Building the platform from the data outward
Every effective AI system starts with data that is accessible, consistent, and immediately usable. The VAST Data Platform and VAST InsightEngine transform enterprise data into an active participant in AI workflows rather than a passive storage layer. By automating ingestion, indexing, and retrieval pipelines, the platform enables enterprise data to become reliable context for AI systems without the fragile, complex data engineering pipelines that typically slow innovation.
Running this data intelligence layer on Cisco UCS and NVIDIA accelerated computing, software, and networking allows the platform to move beyond experimental deployments. It creates a repeatable building block that organizations can deploy across environments with consistent performance and lifecycle management. Production AI requires this level of operational discipline. Without it, scaling AI becomes unpredictable and difficult to govern.
Where security must live in modern AI platforms
The most significant shift in AI security is one of location. Security can no longer focus solely on defending the network perimeter or scanning container images before deployment. In AI data platforms, the majority of risk now lives inside Kubernetes clusters and within AI application interactions themselves.
The first critical challenge is controlling east-west traffic. AI microservices communicate continuously as retrieval pipelines, embedding services, and inference engines exchange data. Without strong segmentation, unintended service reachability can emerge as clusters scale, allowing lateral movement across workloads.
The Isovalent Enterprise Platform based on Cilium addresses this challenge by enforcing identity-based network policies directly inside Kubernetes. Instead of relying on fragile, IP-based rules, policies follow workload identity as services scale, migrate, or restart. This ensures that only authorized services communicate with one another while maintaining high performance through eBPF-accelerated networking. The result is consistent enforcement of least-privileged communication across the cluster.
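As a minimal sketch of what identity-based segmentation looks like in practice, the following CiliumNetworkPolicy allows only a retrieval-pipeline workload to reach an inference service on its serving port. The label names, policy name, and port are illustrative assumptions, not taken from the demonstrated deployment:

```yaml
# Hypothetical example: restrict ingress to the inference service so that
# only pods carrying the retrieval-pipeline identity can reach it.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-retrieval-to-inference   # illustrative name
spec:
  endpointSelector:
    matchLabels:
      app: inference-engine            # assumed workload label
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: retrieval-pipeline    # assumed workload label
      toPorts:
        - ports:
            - port: "8080"             # assumed serving port
              protocol: TCP
```

Because the policy selects on labels rather than IP addresses, it continues to apply as pods are rescheduled or scaled, which is the core of the least-privilege model described above.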
However, network segmentation alone cannot detect unexpected behavior inside containers. AI workloads frequently execute processes, access sensitive data, and dynamically load tools and libraries. Even when network communication is restricted, compromised workloads can still behave unpredictably at runtime.
Isovalent Enterprise Runtime Security, built on Tetragon, addresses this second layer of risk. By providing kernel-level observability of process execution and file activity, it allows operators to understand what workloads are doing inside containers. Suspicious behavior can be identified early, helping organizations investigate and respond before issues escalate.
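To illustrate the kind of kernel-level visibility involved, here is a sketch of a Tetragon TracingPolicy in the style of the file-monitoring examples from the Tetragon documentation. It observes file-permission checks on paths under /etc; the policy name and monitored path are illustrative assumptions, and a production policy would be tuned to the workload:

```yaml
# Hypothetical example: observe file access to /etc from inside containers
# via a kernel probe, so unexpected reads by AI workloads become visible.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: monitor-etc-access            # illustrative name
spec:
  kprobes:
    - call: "security_file_permission"
      syscall: false
      args:
        - index: 0
          type: "file"                # the file being accessed
        - index: 1
          type: "int"                 # requested access mode
      selectors:
        - matchArgs:
            - index: 0
              operator: "Prefix"
              values:
                - "/etc"              # assumed sensitive path prefix
```

Events matching this policy surface process, container, and file context together, which is what makes early identification of suspicious behavior practical.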
Together, these capabilities create a meaningful and enforceable Kubernetes security posture. They control how services communicate and provide visibility into how workloads behave during execution.
Extending security to the AI layer itself
The fastest-growing risk surface in AI environments sits at the model boundary. Models and agents operate in dynamic environments where user prompts, enterprise data, and external tools intersect. Traditional security tools weren't built to detect manipulation of AI interactions or unsafe agent behavior.
Cisco AI Defense brings security directly into the AI application layer. It helps organizations analyze model components for vulnerabilities, apply runtime guardrails to prompts and responses, and monitor how models interact with tools and data sources. This provides visibility into how AI systems behave and helps reduce the risk of enterprise data or automated agent actions creating unintended exposure.
With this layer in place, security spans the full lifecycle of AI workloads, from infrastructure and data to Kubernetes operations and AI application behavior.
Demonstrating the secure AI data platform in action
At VAST Forward 2026, we're showing this architecture running as a complete, functional solution. Enterprise data is transformed into AI-ready context through the VAST pipeline. The platform runs on Cisco infrastructure aligned to Cisco Secure AI Factory with NVIDIA design principles. Kubernetes east-west traffic is segmented using the Isovalent Enterprise Platform based on Cilium, while runtime behavior is monitored using Isovalent Enterprise Runtime Security built on Tetragon. The AI interaction layer is protected using Cisco AI Defense.
This isn't a theoretical blueprint. It's a live, deployable reference architecture that customers can implement today while continuing to evolve toward deeper integration and automation.
The shift toward secure AI outcomes
The most important lesson emerging from enterprise AI adoption is that security can't be measured by the number of controls deployed. It must be measured by the ability to operate AI safely and confidently at scale.
A secure AI data platform enables organizations to deliver that outcome by ensuring:
- AI pipelines remain isolated across teams and workloads
- East-west traffic inside Kubernetes is controlled and observable
- Runtime behavior inside containers is monitored and understood
- Model and agent interactions are protected from emerging AI-specific threats
When these elements are designed together, organizations gain the confidence to scale AI initiatives across departments, applications, and business units.
The future of responsible AI operations
Cisco Secure AI Factory with NVIDIA represents a blueprint for how enterprise AI will be built going forward. It brings performance, data intelligence, cloud-native operations, and AI-native security together in a unified operational pattern.
Organizations no longer need to choose between speed and safety. They can deploy AI systems that are both innovative and trustworthy, allowing them to move from experimental projects to production AI services that deliver real business impact.
If you're attending VAST Forward 2026, we invite you to experience this solution firsthand and explore what it means to build AI systems designed for production from day one.
