cybersecurity
-
Welcome to this edition of our AI Security Newsletter. We’re taking a close look at the intersection of AI, security, and innovation. Expect to explore updated security practices for AI agents, address vulnerabilities within Model Context Protocols, and examine significant threats like RCE in widely-used servers. We also provide insights on new AI tools and…
-
This issue of the AI Security Newsletter addresses several pressing topics in AI security. It highlights the vulnerabilities in Model Context Protocol tools and discusses the urgent need for robust safeguards against AI-related data breaches and malware. Furthermore, it emphasizes the challenges of governance in AI adoption and data leakage within organizations. The newsletter also…
-
Welcome to our latest edition of the AI Security Newsletter, where we dive into the dynamic world of AI security developments. This issue unwraps the new AI Security Shared Responsibility Framework, setting the stage for secure AI deployments. We spotlight the unveiling of SlowMist’s MCP Security Checklist and Tencent’s innovative AI-Infra-Guard solution. Exciting advances such…
-
This issue of AI Security newsletter highlights advancements in large language models (LLMs) and AI-written ransomware. It notes how persuasion techniques can influence AI compliance and recognizes the emergence of AI-generated threats like ESET’s PromptLock ransomware. Encouraging interdisciplinary collaboration, the importance of robust cybersecurity is emphasized in combating these evolving threats. Additionally, the newsletter covers…
-
This issue of the AI Security Newsletter highlights key developments and challenges in AI security. It reviews the AI Agent Security Summit 2025, focusing on new frameworks for managing AI risks. There’s an exploration of “AgentHopper,” a hypothetical AI virus highlighting vulnerabilities in coding agents. The edition also covers Meta’s innovative data access strategies with…
-
This issue of AI Security Newsletter unravels a sneaky AI-driven malware attack on Solana users, explores growing trust issues with AI coding tools among developers, and unveils cutting-edge insights into evolving cybersecurity tactics. We also highlight a significant security breach with Amazon’s developer tool and spotlight the vital role quality data plays in enhancing AI…
-
In “Risks & Security,” of this issue of AI Security Newsletter, we highlights emerging threats such as the security vulnerabilities of Model Context Protocol (MCP), the GPUHammer attack degrading AI model integrity, AI impersonation scams targeting U.S. diplomats, privacy concerns in AI training sets, and new technologies to protect artists from AI scraping. “Technology &…
-
AI attacks academic paper reviews, now perpetrated by the authors themselves. Research from 14 universities involved embedding AI prompts within papers to secure favorable reviews, representing a typical prompt injection attack. Due to prompt injection, any input to an AI model can be exploited as a cyber attack, and anyone submitting data can be an…
-
We just witnessed XBOW became the first autonomous penetration tester to top HackerOne’s US leaderboard. XBOW’s rise to the top of the leaderboard was accomplished through rigorous benchmarking, discovering zero-day vulnerabilities, and participating in bug bounty programs without shortcuts. This achievement underscores the great potential for autonomous AI in cybersecurity, or more generally the potential…
-
Microsoft has open-sourced an AI red teaming lab course on GitHub. The labs are designed to teach security professionals how to evaluate AI systems through hands-on adversarial and Responsible AI challenges, making it an excellent resource for those looking to enhance their skills in AI security, particularly in attack scenarios. Google has published a comprehensive…
