AI

  • Aim Labs discovered a vulnerability in Microsoft 365 Copilot named “EchoLeak,” which enables unauthorized data extraction through zero-click AI exploitation. This attack leverages the victim’s copilot to create URLs using sensitive data as query parameters, and utilizes markdown image auto-rendering for data extraction without user involvement. A very smart and dangerous tactic. Anthropic shared insights…

  • OpenAI has released a report detailing efforts to combat malicious AI activities through case studies, emphasizing the urgency of protective measures and global collaboration to prevent AI abuse. Fascinating examples and narratives are included (Combating AI Misuse: A Global Effort). Yoshua Bengio, a leading figure in AI and machine learning research, appears to be shifting…

  • MCP represents a cutting-edge architecture for AI agents but also introduces new vulnerabilities. Invariant Labs has identified a method that could allow access to a user’s private repository via the GitHub MCP server, constituting a variation of a prompt injection attack. It’s crucial to recognize that anything an AI model is exposed to can be…

  • After I put together the first version of the Fabric extension (my previous post about the first version), it really helped me use those Fabric patterns on web pages. But there were some downsides: it couldn’t read PDFs on the web, nor did it work with YouTube video pages. This was a bit of a…

  • Anthropic has released new models, Cloude Opus 4 and Sonnet 4, claiming exceptional coding and reasoning capabilities. Are they as impressive as advertised? We include a post that evaluated Claude 4 Opus in this issue. The results show promise, though some persistent issues remain. It’s also encouraging to see Stripe’s AI efforts improving fraud detection…

  • In this article, Rohit Krishnan explores the challenges and considerations of working with large language models (LLMs). Having developed several LLM applications from the ground up, I couldn’t agree more with his key observations: achieving perfect verifiability of LLM output is unattainable, increased AI usage in applications leads to more hallucinations, and trial and error…

  • This issue of AI newsletter includes Meta’s LlamaFirewall for AI security, WhatsApp’s Private Processing for enhanced privacy, and OpenAI’s retraction of the sycophantic GPT-4o update. Concerns over AI reliability pitfalls and privacy issues with ChatGPT’s location identification are also highlighted. On the technology front, we cover DARPA’s AI Cyber Challenge and advancements in jailbreaking resistance…

  • As Agentic AI becomes ubiquitous across industries, ensuring cybersecurity amidst the rise of AI and non-human identities is crucial. In addition, while it becomes very likely that companies might begin hiring virtal AI employees soon, how do we make sure that those fully autonomous virtual employees are safe and secure? This week, we delve into…

  • Last week was my kids’ Spring Break, so I paused the newsletter to take a short family vacation. I hope you all enjoyed the Spring weather too. One of this week’s intriguing analyses concerns the concept of Intelligence Explosion. This idea suggests that, thanks to AI, the same amount of technical advancement between 1925 and…

  • Why Bother? I use the open-source Fabric project almost daily to analyze and generate text content – it’s become an essential tool in my workflow. For those unfamiliar, Fabric has two main features: First, it’s a collection of high-quality prompts (called “patterns”) for different tasks. For example, the “extract_wisdom” pattern is great for pulling key…