Artificial intelligence is steadily improving at assisting, automating, and making decisions on behalf of humans. However, as its power grows, so do the opportunities for its decisions to be manipulated, whether from within or through techniques that current monitoring and auditing approaches struggle to detect.
A tool designed for progress can become a critical vulnerability if its security is not ensured by design and by default. The question is not whether AI is a prime target for cybercriminals (it clearly is), but whether organizations are truly prepared to protect it.
A New Playing Field for Cyberattacks
Every time a company integrates AI into its daily operations, it expands its attack surface with new vectors, many of which fall outside its control because the technology and its ecosystem are not yet well understood.
A clear example is the heavy use of LLMs and AI agents in advisory or remediation applications, which can expose confidential information publicly or feed it back into the model during training. This opens the door to:
- Unauthorized inferences about critical operations.
- Exfiltration of sensitive data.
- Future risks in algorithm training.
Another key point is the interaction between data sources and AI. The interoperability ecosystem generates vulnerabilities such as:
- Data poisoning to degrade training.
- Model theft or inference.
- Creation of unwanted biases.
- Prompt injection.
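Prompt injection, the last vector above, can be partially screened at the input boundary. The following is a naive sketch using an assumed deny-list of phrases; it is not a robust defense (attackers paraphrase easily), and real mitigations combine input and output filtering, privilege separation, and human review.

```python
import re

# Naive deny-list of phrases commonly seen in prompt-injection attempts.
# An illustrative first layer only; trivially bypassed by paraphrasing.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system )?prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

Flagged inputs would typically be blocked or routed to review rather than passed to the model with full privileges.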
Supply chain attacks on AI pipelines are also notable, exacerbated by dependency on technology partners and the lack of auditability in some processes. These threats are critical because they can alter results without leaving the evidence a traditional cyber incident analysis would detect.
Moreover, AI-enabled attacks have transformed the speed, scale, and sophistication of threats. An alarming trend is the drastic reduction in data exfiltration times:
- 2021: 9 days.
- 2024: 2 days.
- 2025: less than 5 hours.
Beyond Protecting Data: Safeguarding the Model
Today, the asset to protect is not just the data at rest or in transit, but the AI model itself and its processing.
Unlike traditional software, AI requires:
- Supervision, observability, and explainability at every stage.
- Special attention during training and retraining, where risks of inference or result alteration are concentrated.
- Protection of the technology supply chain, highly sensitive to changes in computational processing.
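One concrete way to harden that supply chain is to fingerprint model artifacts and verify them before deployment or retraining. This is a minimal sketch assuming artifacts live on local disk and a JSON manifest is an acceptable trust anchor; real pipelines would use signed manifests and protected storage.

```python
import hashlib
import json
from pathlib import Path

def fingerprint(path: str) -> str:
    """SHA-256 digest of a model or dataset artifact on disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(paths, manifest="model_manifest.json"):
    """Record trusted digests once, e.g. at release time."""
    Path(manifest).write_text(
        json.dumps({p: fingerprint(p) for p in paths}, indent=2)
    )

def verify(manifest="model_manifest.json"):
    """Re-hash each artifact and return the paths that changed."""
    trusted = json.loads(Path(manifest).read_text())
    return [p for p, digest in trusted.items() if fingerprint(p) != digest]
```

Any non-empty result from `verify` means an artifact was altered after it was trusted, which is exactly the tampering a traditional perimeter audit would miss.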
New Risks Require New Strategies
AI models are constantly evolving: they are retrained, generate new insights, and produce valuable new data. Classical cybersecurity practices are not enough. Specific methodologies are needed to address the full lifecycle of an AI system: design, training, and production.
Key reference frameworks include:
- NIST Cybersecurity Framework
- OWASP AI Exchange
- MIT AI Risk Repository
- European Artificial Intelligence Act (AI Act)
A common auditing mistake is limiting testing to the model's visible layer. Without reviewing data provenance or the complete training cycle, long-horizon attacks, such as dataset poisoning or subtle result manipulation, can go unnoticed.
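Auditing the data itself can start very simply. The sketch below flags numeric training values that sit far from the rest of the distribution, a crude first-pass screen for corrupted or poisoned points; the threshold `k` is an assumption, and competent poisoning is often subtle enough to evade this kind of statistic entirely, which is why full-cycle review still matters.

```python
from statistics import mean, stdev

def flag_outliers(values, k=3.0):
    """Return indices of values more than k standard deviations from
    the mean: a crude screen for corrupted or poisoned training data.
    Subtle poisoning will pass this check; it is a first filter only."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) > k * sigma]
```

Flagged indices would be routed back to a data owner for review before the retraining cycle continues.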
A Paradigm Shift in Cybersecurity
The challenge is not just about applying standards or investing in technology. True protection begins with a mindset shift.
Organizations must assume that AI security is not an add-on; it is a design requirement. This requires:
- Training developers.
- Integrating mixed teams of cybersecurity and data science experts.
- Managing risk in alignment with the evolving reality of intelligent systems.
Beyond technical measures, protecting AI means building a culture of shared responsibility. Providers, clients, regulators, and employees must recognize that every phase of an AI system’s lifecycle can become an open door if not properly managed.
Want to Learn More?
On our AI & Data landing page, you can explore how we approach the security of intelligent systems from both practical and strategic perspectives.
You can also expand this vision in the article published in Escudo Digital: “Shielding AI Starts with Protecting Its Foundations,” featured in La Tribuna de Roger.