Welcome to this edition of the AI Security Newsletter. This issue tracks the rapid spread of agentic systems across infrastructure, commerce, and enterprise workflows, while also highlighting the security and governance questions that come with them. NVIDIA appears repeatedly, with updates spanning reasoning models, the reported NemoClaw platform, and broader enterprise agent ambitions. The edition also covers agent hijacking risks, DeepSeek’s mobile and multimodal systems, and new guidance on AI-assisted authorship from the U.S. Copyright Office.
Risks & Security
Agent Hijacking Moves From Theory to Active Security Problem
NIST describes agent hijacking as attacks that hide malicious instructions in emails, files, or websites to derail AI agents into harmful actions. That concern is now operational rather than hypothetical: NIST opened a 2026 request for input on securing agents, and CAISI found DeepSeek agents notably more vulnerable to hijacking in simulated tests.
DeepSeek iOS App Exposed Multiple Mobile Security Risks
NowSecure found that DeepSeek’s iOS app transmitted some data without encryption, used weak hardcoded 3DES keys, and stored sensitive information insecurely on-device. Follow-on reporting said some traffic also reached ByteDance-controlled infrastructure, raising the risk profile for enterprise and government use.
OpenAI Planned Venado Deployment for National Lab Research
OpenAI said it would work with Microsoft to deploy an o-series model on Los Alamos’ Venado supercomputer for shared use across U.S. national labs. The effort was framed around scientific research, cybersecurity, disease work, and nuclear-security-related tasks, and later DOE reporting indicated the deployment became operational.
Technology & Tools
NVIDIA Frames GTC 2026 Around Agentic Reasoning and Safety
NVIDIA’s recent GTC messaging centers on agent infrastructure, open reasoning models, and enterprise deployment safeguards. Its March 11, 2026 Nemotron 3 Super release adds an open hybrid Mamba-Transformer MoE model aimed at long-running agentic reasoning, while related announcements reinforce NVIDIA’s broader push into safety-aware AI platforms.
NemoClaw Aims to Become an Operating Layer for AI Agents
NemoClaw is being described as NVIDIA’s attempt to control the orchestration layer for enterprise agents, not just the chips underneath them. Reporting points to a stack designed for dispatching agents across internal workflows with security guardrails, though its exact release status and implementation details were still unclear in public sources as of March 18, 2026.
MCP Continues Expanding as a Cross-Platform Agent Standard
Model Context Protocol appears to be gaining adoption rather than fading. Its official documentation now positions MCP as an open standard supported across major AI clients and developer tools, while newer coverage shows both deeper product integration and the kind of security scrutiny that usually comes with real ecosystem growth.
DeepSeek-VL2 Expanded Open Multimodal MoE Models
DeepSeek released VL2 as a family of open vision-language models with tiny, small, and full variants using 1.0B, 2.8B, and 4.5B activated parameters. The project targets OCR, visual grounding, and document, table, and chart understanding, and the authors position it as competitive with other open-source multimodal models at similar or lower activated size.
Business & Products
NVIDIA Pushes NemoClaw as an Enterprise Agent Platform
Recent reporting says NVIDIA has been pitching NemoClaw as an open-source, hardware-agnostic platform for enterprise agents with built-in privacy, security, and governance controls. The positioning is consistent across sources, although public details still appear to rely more on reporting than on a complete official product specification.
Agentic Commerce Infrastructure Starts Taking Shape
The idea of “robot money” is becoming more concrete through payment and cloud infrastructure for AI agents. Visa and AWS have outlined systems for agentic commerce with identity, authorization, and workflow support, while Mastercard is also moving to shape the trust and standards layer for AI-driven checkout.
Regulation & Policy
Copyright Office Reaffirmed Human Authorship in AI-Assisted Works
The U.S. Copyright Office said copyright protection still depends on human authorship, not on whether AI tools were used during creation. Its January 2025 report draws a line between human-directed creative contributions, which can remain protectable, and outputs generated primarily by a model, which generally cannot.

Leave a comment