Artificial intelligence is steadily becoming an independent actor shaping the global economy. It is no longer merely a technical tool, but a transformative core restructuring economic systems from within. As AI evolves into an autonomous force capable of decision-making and execution, concepts of production, responsibility, and governance are being reshaped, and new, highly complex patterns of risk are emerging.
These risks cannot be reduced to technical malfunctions or operational vulnerabilities. They now reach deep into the foundations of institutional stability, through their entanglement with digital supply chains, decision-making structures, and systems of trust that govern modern organisations.
In this context, the report titled “Six Predictions for the AI Economy: The New Rules of Cybersecurity for 2026”, issued by Palo Alto Networks, offers a revealing entry point for understanding this structural shift.
The issue at stake is not merely the development of protective tools or the enhancement of digital defences, but the redefinition of the nature of risk in an economy where intelligent systems operate as independent actors within institutional frameworks. This, in turn, requires a comprehensive reassessment of concepts of security, governance, and accountability.
The year 2025 marked the peak of an unprecedented wave of cyber disruption, with the threat landscape reaching extreme levels of speed and complexity, driven by the use of artificial intelligence in offensive operations and by accumulated fragility in digital supply chains.
Data shows that 84% of major cyber incidents investigated during that year resulted in operational shutdowns, reputational damage, or direct financial losses. This confirms that attacks are no longer isolated events that can be contained through ad hoc responses, but have become full-scale operational crises striking at the core of business continuity.
This reality clearly exposed the limits of the traditional security model based on post-incident response, and highlighted the need for a fundamentally different approach, one that is grounded not in the concept of breach, but in the notion of structural exposure.
As 2026 approaches, a qualitative transition is taking shape from a logic of disruption to a logic of defence, not in its classical sense of fortification and pursuit, but through redesigning security architectures to match an economy led by non-human entities.
In modern operational environments, the number of automated identities and agents exceeds that of humans by an estimated ratio of 82 to 1. This quantitative shift reflects a deeper qualitative transformation in the structure of the digital workforce.
With this numerical imbalance, humans are no longer the sole decision makers nor the primary targets of attacks. They have become part of a hybrid system in which machines and humans share authority and responsibility. In this context, security no longer means protecting networks or systems alone, but safeguarding operational logic, decision-making processes, and the chains of trust upon which institutions rely.
Cybersecurity in the Age of Autonomous Agents: Emerging Risks
Identity emerges here as the central arena of confrontation. In a world where automated identities proliferate and the ability to distinguish between authentic and synthetic actors diminishes due to real-time deepfake technologies, institutions enter a structural crisis of trust.
The threat is no longer confined to data theft or service disruption. It now includes the possibility of steering entire chains of automated decisions through forged commands or compromised identities. This makes identity security a fundamental condition for the stability of institutional decision-making itself, not merely a technical preventive measure.
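To make this concrete, the sketch below is not drawn from the report; the agent name, helper functions, and key handling are purely illustrative assumptions. It shows one way an institution might require automated agents to sign every command they issue, so that downstream systems can reject instructions from forged or tampered identities.

```python
# Illustrative sketch only: signed commands from an automated agent, so that
# downstream systems can reject forged or tampered instructions.
# Uses the 'cryptography' package (pip install cryptography).
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Hypothetical agent identity: in practice the private key would live in an HSM
# or workload-identity system, and the public key in a central registry.
agent_key = Ed25519PrivateKey.generate()
agent_public_key = agent_key.public_key()

def issue_command(action: str, target: str) -> dict:
    """Agent side: serialise a command and attach a signature."""
    payload = json.dumps({"agent": "inventory-bot", "action": action,
                          "target": target}).encode()
    return {"payload": payload, "signature": agent_key.sign(payload)}

def verify_command(command: dict) -> bool:
    """Receiving system: accept the command only if the signature checks out."""
    try:
        agent_public_key.verify(command["signature"], command["payload"])
        return True
    except InvalidSignature:
        return False

cmd = issue_command("restart", "billing-service")
assert verify_command(cmd)          # authentic command passes
cmd["payload"] = cmd["payload"].replace(b"restart", b"delete")
assert not verify_command(cmd)      # tampered command is rejected
```

The design choice in this sketch is that trust attaches to a verifiable key rather than to the agent's claimed name, which is precisely the distinction that real-time deepfakes erode.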
This shift intersects with a deeper paradox related to AI agents themselves. These agents are relied upon to bridge the global cybersecurity skills gap, estimated at around 4.8 million specialists, to reduce the fatigue of security teams, and to accelerate incident response through continuous operation.
Yet granting them broad authority and implicit trust simultaneously turns them into the most valuable assets within institutions, and therefore the most attractive targets for attack. Here, the concept of the insider threat undergoes a radical transformation: it is no longer tied to human behaviour, but to the possibility that a trusted intelligent agent is turned into an adversarial element, operating with a speed and precision that surpass any conventional breach.
In parallel, the focus of attacks has shifted to a more concealed level, namely the data itself. Rather than targeting systems after deployment, attackers undermine trust by manipulating training data at the source, producing AI models that appear sound on the surface but are biased or unreliable in their decisions.
This type of attack does not cause immediate collapse. Instead, it implants a structural flaw whose impact accumulates over time, undermining reliance on artificial intelligence as a pillar of the new economy. Data thus shifts from being an operational resource to a sovereign asset requiring unified governance across the entire AI lifecycle.
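One small illustration of such lifecycle governance, offered here as an assumption rather than anything prescribed by the report, is to record a cryptographic fingerprint of a training set at the moment it is approved and to refuse to train on data that no longer matches that record; the file layout and function names below are hypothetical.

```python
# Illustrative sketch only: a simple provenance check for training data,
# so silent tampering at the source is detected before a model is trained.
import hashlib
import json
from pathlib import Path

def fingerprint(dataset_dir: str) -> str:
    """Hash every file in the dataset directory into one stable digest."""
    digest = hashlib.sha256()
    for path in sorted(Path(dataset_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()

def record_manifest(dataset_dir: str, manifest_path: str = "data_manifest.json") -> None:
    """Run once, when the dataset is reviewed and approved."""
    Path(manifest_path).write_text(json.dumps({"sha256": fingerprint(dataset_dir)}))

def verify_before_training(dataset_dir: str, manifest_path: str = "data_manifest.json") -> None:
    """Run before every training job; refuse to train on unverified data."""
    expected = json.loads(Path(manifest_path).read_text())["sha256"]
    if fingerprint(dataset_dir) != expected:
        raise RuntimeError("Training data no longer matches the approved manifest.")
```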
The legal and institutional dimension is no less serious than the technical one. The gap between the rapid adoption of artificial intelligence and the limited number of institutions with mature AI security strategies, estimated at no more than 6%, elevates risks from the technical level to the level of institutional sovereignty.
Unregulated behaviour by intelligent systems is no longer a neutral technical failure. It has become a set of decisions with legal consequences and direct liabilities borne by executive leadership and boards of directors, redrawing the boundaries of governance in the digital age.
In the background looms a quieter yet more inevitable threat: quantum computing. The “harvest now, decrypt later” scenario means that data stolen today, even if encrypted, may become a strategic liability in the near future, as the timeline for practical quantum computing shrinks from a full decade to only a few potential years. This necessitates moving beyond temporary updates toward building sustainable cryptographic resilience capable of continuous adaptation in a threat environment changing at an unprecedented pace.
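What such cryptographic resilience might look like in practice is sketched below, again as an assumption rather than the report's prescription: every encrypted record carries an explicit algorithm tag, so that data protected under retiring algorithms can be located and re-encrypted once post-quantum schemes are standardised. The envelope format and function names are illustrative, and AES-256-GCM stands in for whatever algorithm an institution currently approves.

```python
# Illustrative sketch only: algorithm-tagged envelopes as a basis for
# cryptographic agility. A post-quantum scheme would slot in the same way
# once standardised implementations are available.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

APPROVED_ALG = "AES-256-GCM"  # assumed policy value, rotated by governance

def encrypt_record(key: bytes, plaintext: bytes) -> dict:
    """Wrap the ciphertext in an envelope that names the algorithm used."""
    nonce = os.urandom(12)
    return {
        "alg": APPROVED_ALG,
        "nonce": nonce,
        "ciphertext": AESGCM(key).encrypt(nonce, plaintext, None),
    }

def migrate_record(envelope: dict, old_key: bytes, new_key: bytes) -> dict:
    """Re-encrypt any record whose algorithm tag no longer matches policy."""
    if envelope["alg"] == APPROVED_ALG:
        return envelope
    # Decryption of the retiring algorithm is simplified here for brevity.
    plaintext = AESGCM(old_key).decrypt(envelope["nonce"], envelope["ciphertext"], None)
    return encrypt_record(new_key, plaintext)

key = AESGCM.generate_key(bit_length=256)
record = encrypt_record(key, b"customer ledger entry")  # carries an explicit "alg" tag
```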
At the level of daily practice, the web browser has shifted from a display tool to a central execution environment, through which applications, data, and AI agents are managed.
With the sharp rise in the use of generative AI applications, the browser has become a convergence point between humans, machines, and data, while also constituting one of the broadest and least controlled attack surfaces. This requires relocating security controls to the final point of execution, rather than relying solely on protecting backend system layers.
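As a rough illustration of controls at the point of execution, and purely as an assumption about how such a check might look, the sketch below inspects an outbound prompt to a generative AI application before it leaves the browser or gateway and blocks content matching simple sensitive-data patterns; real deployments would rely on far richer classification.

```python
# Illustrative sketch only: a policy check at the point of execution,
# inspecting outbound prompts to generative AI apps before they leave
# the browser or gateway. Patterns and names are assumptions.
import re

SENSITIVE_PATTERNS = {
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api key":      re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def check_outbound_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); block prompts that carry sensitive data."""
    hits = [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return (not hits, hits)

allowed, reasons = check_outbound_prompt("Summarise this card: 4111 1111 1111 1111")
if not allowed:
    print("Blocked at the point of execution:", ", ".join(reasons))
```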
In sum, the ongoing transformation is not about introducing more advanced security tools, but about redefining the relationship between artificial intelligence, decision-making, and trust. In an economy led by autonomous systems, cyber risks become a mirror of risks related to governance, leadership, and accountability.
As 2026 approaches, the real challenge will not lie in the speed of AI adoption, but in the ability to control, govern, and secure it as an independent economic actor, without allowing it to shift from a driver of growth into a permanent source of uncertainty. The question facing humanity today is no longer whether to adopt artificial intelligence, but how to live and manage institutions in a world where AI has become an unavoidable force shaping our economic and institutional destiny.