NIST’s CAISI Issues Request for Information About Securing AI Agent Systems
Jan. 13, 2026 — The Center for AI Standards and Innovation (CAISI) at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has published a Request for Information (RFI) seeking insights from industry, academia, and the security community regarding the secure development and deployment of AI agent systems.
Credit: Grandbrothers/Shutterstock
AI agent systems are capable of planning and taking autonomous actions that impact real-world systems or environments. While these systems promise significant benefits for productivity and innovation, they present unique security challenges.
AI agent systems face a range of security threats and risks. Some risks overlap with other software systems, such as exploitable authentication or memory management vulnerabilities. This RFI, however, focuses on distinct risks that arise when combining AI model outputs with the functionality of software systems. This includes risks from models interacting with adversarial data (such as in indirect prompt injection), risks from the use of insecure models (such as models that have been subject to data poisoning), and risks that models may take actions that harm security even in the absence of adversarial inputs (such as models that exhibit specification gaming or otherwise pursue misaligned objectives). These security challenges not only hinder adoption today but may also pose risks for public safety and national security as AI agent systems become more widely deployed.
The RFI poses questions on topics including:
Unique security threats affecting AI agent systems, and how these threats may change over time.
Methods for improving the security of AI agent systems in development and deployment.
Promise of and possible gaps in existing cybersecurity approaches when applied to AI agent systems.
Methods for measuring the security of AI agent systems and approaches to anticipating risks during development.
Interventions in deployment environments to address security risks affecting AI agent systems, including methods to constrain and monitor the extent of agent access in the deployment environment.
Input from AI agent deployers, developers, and computer security researchers, among others, will inform future work on voluntary guidelines and best practices related to AI agent security. It will also contribute to CAISI’s ongoing research and evaluations of agent security. Respondents are encouraged to provide concrete examples, best practices, case studies and actionable recommendations based on their experience with AI agent systems. The full RFI can be found here.
The comment period closes on March 9, 2026, at 11:59 PM Eastern Time. Comments can be submitted online at www.regulations.gov, under docket no. NIST-2025-0035.
Source: NIST
Related
Berkeley Lab Materials Project Surpasses 650,000 Users as Demand for AI-Ready Data Grows
Jan. 13, 2026 — In 2011, a small team at the Department of Energy’s Lawrence…
DDN Report Reveals 65% of Organizations Are Struggling to Achieve AI Success
CHATSWORTH, Calif., Jan. 13, 2026 — As enterprises race to adopt and deploy artificial intelligence…
Thomson Reuters Convenes Global AI Leaders to Advance Trust in the Age of Intelligent Systems
Anthropic, AWS, Google Cloud, and OpenAI join Thomson Reuters Labs in the Trust in AI…
Owkin Launches Agentic AI Infrastructure for Biology at JPM Healthcare Conference
SAN FRANCISCO, Jan. 12, 2026 — Owkin today announced the launch of its agentic infrastructure for…
NVIDIA and Lilly Announce Co-Innovation AI Lab for Drug Discovery
SAN FRANCISCO, Jan. 12, 2026 — NVIDIA and Eli Lilly and Company today announced a…
Owkin’s Biological AI Agent Launches with Anthropic’s Claude for Healthcare and Life Sciences
SAN FRANCISCO, Jan. 12, 2026 — Owkin, an AI company on a mission to solve the…
Electronics Giants Tap into Industrial Automation with NVIDIA Metropolis for Factories
May 30, 2023 — The $46 trillion global electronics manufacturing industry spans more than 10…
WPP Partners with NVIDIA to Build Generative AI-Enabled Content Engine for Digital Advertising
TAIPEI, Taiwan, May 30, 2023 — NVIDIA and WPP have announced they are developing a…
MediaTek Partners with NVIDIA to Transform Automobiles with AI and Accelerated Computing
May 30, 2023 — MediaTek, a leading innovator in connectivity and multimedia, is teaming with…
HPE Reports Fiscal 2023 2nd Quarter Results
HOUSTON, May 31, 2023 — Hewlett Packard Enterprise has announced financial results for the second quarter…
Syslogic Introduces Rugged Computer Based on NVIDIA Jetson AGX Orin Industrial
BADEN, Switzerland, May 31, 2023 — Syslogic has introduced the first embedded system based on…
Industry Leaders Launch RISE to Accelerate the Development of Open Source Software for RISC-V
BRUSSELS, May 31, 2023 — The RISC-V Software Ecosystem (RISE) Project is a new collaborative…
NIST’s CAISI Issues Request for Information About Securing AI Agent Systems
Jan. 13, 2026 — The Center for AI Standards and Innovation (CAISI) at the U.S. Department…
Berkeley Lab Materials Project Surpasses 650,000 Users as Demand for AI-Ready Data Grows
Jan. 13, 2026 — In 2011, a small team at the Department of Energy’s Lawrence…
DDN Report Reveals 65% of Organizations Are Struggling to Achieve AI Success
CHATSWORTH, Calif., Jan. 13, 2026 — As enterprises race to adopt and deploy artificial intelligence…
Thomson Reuters Convenes Global AI Leaders to Advance Trust in the Age of Intelligent Systems
Anthropic, AWS, Google Cloud, and OpenAI join Thomson Reuters Labs in the Trust in AI…
Owkin Launches Agentic AI Infrastructure for Biology at JPM Healthcare Conference
SAN FRANCISCO, Jan. 12, 2026 — Owkin today announced the launch of its agentic infrastructure for…
NVIDIA and Lilly Announce Co-Innovation AI Lab for Drug Discovery
SAN FRANCISCO, Jan. 12, 2026 — NVIDIA and Eli Lilly and Company today announced a…