Thanks to all the contributors of the State of AI Safety 2026, together with Amy Chang, Tiffany Saade, Emile Antone, and the broader Cisco AI analysis staff.
As synthetic intelligence (AI) expertise and enterprise AI adoption advance at a speedy tempothe safety panorama round it’s increasing sooner, leaving many defenders struggling to maintain up. Final 12 months, we launched our inaugural State of AI Safety report to assist safety professionals, enterprise leaders, policymakers, and the broader group make sense of this novel and sophisticated subject—and put together for what comes subsequent.
Actually, quite a bit can change in a 12 months.
At the moment, we’re proud to share the State of AI Safety 2026, our flagship report that builds upon the foundational evaluation coated in final 12 months’s version.
This publication sheds gentle on the AI risk panorama, a snapshot in time, however one which marks the beginnings of a serious paradigm shift in AI safety. The confluence of speedy AI adoption, untested boundaries and limits of AI, non-existent norms of habits round AI safety and security, and present cybersecurity threat requires a elementary change to how corporations strategy digital safety. Because the report particulars, AI vulnerabilities and exploits conceptualized inside the confines of a analysis lab have materialized, evidenced by quite a few stories of AI compromise and AI-enabled malicious campaigns from the second half of 2025. Different notable developments—the proliferation of agentic AI, adjustments in authorities regulation, and rising attacker curiosity in AI, for instance—have additional sophisticated the scenario.
Like its predecessor, the State of AI Safety 2026 explores new and notable developments throughout AI risk intelligence, international AI coverage, and AI safety analysis. On this weblog, we present a preview of a few of the areas coated in our newest report.
Threats to AI functions and agentic programs
On the outset of 2025, the trade was characterised by a profound dissonance between AI adoption and AI readiness. Whereas 83 p.c of organizations we surveyed had deliberate to deploy agentic AI capabilities into their enterprise features, solely 29 p.c of organizations felt they have been actually able to leverage these applied sciences securely. Organizations that rushed to combine LLMs into crucial workflows might have bypassed conventional safety vetting processes in favor of velocity, sowing a fertile floor for safety lapses and opening the door for adversarial campaigns.
At the moment, AI capabilities exceed the conceptual boundaries of beforehand out there programs. Generative AI is accelerating quickly, usually with out correct testing and analysis, provide chains are rising in complexity, usually with out correct controls and governance, and highly effective, autonomous AI brokers are proliferating throughout crucial workflows, usually with out accountability being ensured. The potential for immense worth in these programs comes with an equally large threat floor for organizations to take care of.
The State of AI Safety 2026 dives into the evolution of immediate injection assaults and jailbreaks of AI programs. It additionally examines the fragility of the fashionable AI provide chain, highlighting vulnerabilities that may be present in datasets, open-source fashions, instruments, and numerous different AI elements. We additionally take a look at the rising threat floor of Mannequin Context Protocol (MCP) agentic AI and be aware how adversaries can use brokers to execute assault campaigns with tireless effectivity.
An innovation-first strategy for international AI coverage
In opposition to the backdrop of an evolving risk panorama, and as agentic and generative AI applied sciences introduce new safety complexities, the State of AI Safety 2026 report additionally examines three main AI gamers’ approaches to AI coverage: america, European Union, and the Folks’s Republic of China. The trajectory of AI governance in 2025 represented a definitive shift, with previous years outlined by a stronger emphasis on AI security—non-binding agreements and regulation that have been supposed to guard constitutional or elementary rights. In 2025, we witnessed a international repositioning in direction of innovation and funding in AI improvement whereas nonetheless contending with the inherent safety and security issues that generative AI might pose via misaligned mannequin habits or malicious exercise similar to growing deepfakes for social engineering.
The US, beneath a brand new administration, is centered on fostering an setting that encourages innovation over regulation, pivoting away from extra stringent security frameworks and counting on present legal guidelines. Within the European Union (EU), following the ratification of the EU AI Act, there was broad political consensus for the necessity to simplify guidelines and stimulate AI investing, together with via public funding. China has pursued a dual-track technique of deeply integrating AI by way of state planning whereas concurrently erecting a classy digital equipment to handle the social dangers of anthropomorphic and emotional AI. As our report explores, every of those three regulatory blocs has adopted a definite national-level strategy to AI improvement reflecting political programs, financial priorities, and normative values.
AI safety analysis and tooling at Cisco
During the last 12 months, the Cisco AI Menace Intelligence & Safety Analysis staff has each pioneered and contributed to risk analysis and open-source fashions and instruments. These initiatives map on to a few of the most crucial up to date AI safety challenges, together with AI provide chain vulnerability, agentic AI threat, and the weaponization of AI by attackers.
The State of AI Safety 2026 report provides a succinct overview of a few of the newest releases by our staff. These embrace analysis into open-weight mannequin vulnerabilities, which sheds gentle on how numerous fashions stay inclined to jailbreaks and immediate injections, particularly over lengthier conversations. It additionally covers 4 open-source initiatives: a structure-aware pickle fuzzer that generates adversarial pickle recordsdata and scanners for MCP, A2A, and agentic ability recordsdata to assist safe the AI provide chain.
Get the report
Able to learn the total State of AI Safety report for 2026? Test it out right here.
