Cisco’s recommended security measures—open-source scanning, vulnerability testing, application firewalls, and AI-focused data loss prevention—address different aspects of this risk, forming the foundation of a comprehensive AI security strategy.
As AI adoption continues to expand across the Middle East, spanning sectors such as government, financial services, energy, and critical infrastructure, organizations face increasing pressure to secure AI applications throughout their lifecycle. From the data used to train models to the deployment of the models themselves, CISOs and IT leaders must manage emerging risks while maintaining digital trust. In response, Cisco has highlighted four priority focus areas organizations should consider to secure AI applications as they scale adoption. The guidance outlines how security teams can adapt established application security practices to AI, helping organizations reduce risk without slowing innovation.
The first focus area is open-source scanning. AI application development often relies on components such as open-source models, public datasets, and third-party libraries. While these resources accelerate development, they may contain vulnerabilities or malicious insertions that could compromise the entire system. Regular scanning of these components helps identify and mitigate risks early in the development process.
The second focus area is vulnerability testing. Static testing involves validating all components of an AI application—including binaries, datasets, and models—to detect potential vulnerabilities, such as backdoors or poisoned data. Dynamic testing evaluates how models perform under various scenarios in production. Cisco also recommends algorithmic red-teaming, which simulates a broad range of adversarial techniques without requiring manual testing, to strengthen model resilience.
Third, application firewalls are emerging to address the unique safety and security risks posed by generative AI, particularly large language models (LLMs). These AI-specific firewalls act as model-agnostic guardrails, monitoring AI application traffic in transit to prevent failures and enforce policies. They help mitigate threats such as prompt injection, denial of service (DoS) attacks, and the leakage of personally identifiable information (PII).
Finally, data loss prevention (DLP) for AI applications is critical, given the dynamic nature of natural language content. Traditional DLP approaches are insufficient, so AI-focused DLP examines both inputs and outputs to prevent sensitive data leakage. Input DLP can restrict file uploads, block copy-paste functionalities, or limit access to unapproved AI tools. Output DLP leverages guardrail filters to ensure model responses do not contain PII, intellectual property, or other sensitive information.
“As AI adoption accelerates across the region, organizations are moving quickly from pilots to production, and that shift changes the risk profile. Securing AI applications requires looking beyond traditional application controls to protect the full AI lifecycle, from the data and third-party components feeding models to how those models behave in real-world use. By applying familiar security principles in AI-specific ways, organizations in the Middle East can scale innovation with confidence while reducing risks such as prompt injection and sensitive data leakage.”
– Fady Younes, Managing Director for Cybersecurity, Cisco Middle East, Africa, Türkiye, Romania, and CIS
In summary, risk exists at virtually every stage of the AI lifecycle, from sourcing supply chain components through development and deployment. Cisco’s recommended security measures—open-source scanning, vulnerability testing, application firewalls, and AI-focused data loss prevention—address different aspects of this risk, forming the foundation of a comprehensive AI security strategy. Together, they enable organizations to innovate safely while mitigating potential threats in AI adoption.

