AI
-
This issue of AI Security newsletter highlights advancements in large language models (LLMs) and AI-written ransomware. It notes how persuasion techniques can influence AI compliance and recognizes the emergence of AI-generated threats like ESET’s PromptLock ransomware. Encouraging interdisciplinary collaboration, the importance of robust cybersecurity is emphasized in combating these evolving threats. Additionally, the newsletter covers…
-
This issue of the AI Security Newsletter highlights key developments and challenges in AI security. It reviews the AI Agent Security Summit 2025, focusing on new frameworks for managing AI risks. There’s an exploration of “AgentHopper,” a hypothetical AI virus highlighting vulnerabilities in coding agents. The edition also covers Meta’s innovative data access strategies with…
-
This issue of AI Security Newsletter unravels a sneaky AI-driven malware attack on Solana users, explores growing trust issues with AI coding tools among developers, and unveils cutting-edge insights into evolving cybersecurity tactics. We also highlight a significant security breach with Amazon’s developer tool and spotlight the vital role quality data plays in enhancing AI…
-
In “Risks & Security,” of this issue of AI Security Newsletter, we highlights emerging threats such as the security vulnerabilities of Model Context Protocol (MCP), the GPUHammer attack degrading AI model integrity, AI impersonation scams targeting U.S. diplomats, privacy concerns in AI training sets, and new technologies to protect artists from AI scraping. “Technology &…
-
AI attacks academic paper reviews, now perpetrated by the authors themselves. Research from 14 universities involved embedding AI prompts within papers to secure favorable reviews, representing a typical prompt injection attack. Due to prompt injection, any input to an AI model can be exploited as a cyber attack, and anyone submitting data can be an…
-
We just witnessed XBOW became the first autonomous penetration tester to top HackerOne’s US leaderboard. XBOW’s rise to the top of the leaderboard was accomplished through rigorous benchmarking, discovering zero-day vulnerabilities, and participating in bug bounty programs without shortcuts. This achievement underscores the great potential for autonomous AI in cybersecurity, or more generally the potential…
-
Microsoft has open-sourced an AI red teaming lab course on GitHub. The labs are designed to teach security professionals how to evaluate AI systems through hands-on adversarial and Responsible AI challenges, making it an excellent resource for those looking to enhance their skills in AI security, particularly in attack scenarios. Google has published a comprehensive…
-
Aim Labs discovered a vulnerability in Microsoft 365 Copilot named “EchoLeak,” which enables unauthorized data extraction through zero-click AI exploitation. This attack leverages the victim’s copilot to create URLs using sensitive data as query parameters, and utilizes markdown image auto-rendering for data extraction without user involvement. A very smart and dangerous tactic. Anthropic shared insights…
-
OpenAI has released a report detailing efforts to combat malicious AI activities through case studies, emphasizing the urgency of protective measures and global collaboration to prevent AI abuse. Fascinating examples and narratives are included (Combating AI Misuse: A Global Effort). Yoshua Bengio, a leading figure in AI and machine learning research, appears to be shifting…
-
MCP represents a cutting-edge architecture for AI agents but also introduces new vulnerabilities. Invariant Labs has identified a method that could allow access to a user’s private repository via the GitHub MCP server, constituting a variation of a prompt injection attack. It’s crucial to recognize that anything an AI model is exposed to can be…
