cybersecurity
-
In my view, the standout article in this issue is by top hacker Joseph Thacker, who provides a thorough guide on hacking AI applications. The guide covers essential topics such as understanding AI models, mastering system prompts, and exploring attack scenarios. While the content about Language Model Mechanics (LLM) is at a high level, the…
-
The standout news in AI and technology last week was Microsoft’s Majorana 1 chip. Microsoft says that this chip leverages a new state of matter called topological superconductivity, potentially enabling the creation of qubits that are more stable and less susceptible to errors than those in current quantum computers, addressing a critical challenge in the…
-
Cisco researchers recently evaluated the DeepSeek R1 model using the HarmBench dataset and reported a 100% attack success rate. Looks like DeepSeek R1 has serious security issues, doesn’t it? However, Meta’s LLama 3.1 model also performed poorly, with a 96% success rate in the same test, while OpenAI’s closed-source model o1 had a 25% success…
-
One of the most talked-about topics in AI recently is DeepSeek and its newly launched R-1 model. Its innovative methodology, low operational cost, and high performance have created a substantial impact on the AI community and even affected the U.S. economy. Notably, major AI companies, including Nvidia, experienced significant stock price declines after the announcement.…
-
A study by Anthropic shows that language models, such as Claude 3 Opus, can fake alignment with training objectives to disguise their actual behaviors. Simply put, if you inform the model that it’s being trained and non-compliance will lead to modification, there’s about 15% chance it will act as instructed to avoid changes. This study…
-
Happy New Year! The AI Security Newsletter was on a two-week pause while I vacationed with family in China. I hope all my readers enjoyed the holiday season. Now, I’m excited to return and share the latest AI security news with you. As we enter another thrilling year in the AI era, MIT Technology Review…
-
As the leading LLM service provider, OpenAI faces significant challenges in safeguarding its AI models. A recent blog outlines their use of external and internal red teams for testing. One linked white paper details how they select and collaborate with external red teams, while another explores the automated testing techniques they employ—fascinating insights for AI…
-
Happy Thanksgiving to our US readers! 🦃 If you’re interested in discovering vulnerabilities in AI models like me, don’t miss the article on automated red-teaming techniques against OpenAI’s o1 model. It lists some advanced technical methods employed by Haize Labs, which secured testing contracts from OpenAI and Anthropic. In a recent blog, DryRun Security shared…
-
In this issue, I want to spotlight OWASP’s recent developments in GenAI security guidance. This is an extension of the OWASP Top 10 for LLM Application Security Project. The new guidance provides practical resources for addressing deepfake threats, creating AI Security Centers of Excellence, and navigating the AI Security Solution Landscape. It serves as a…
-
Several big players have unveiled new products or features: Apple launched iOS 18.1 with Apple Intelligence enhancements, OpenAI upgraded ChatGPT with web search capabilities, and Cohere introduced Embed 3 for multimodal AI search. I am particularly excited about ChatGPT’s new search feature. Many of my AI tasks require finding latest and most accurate information, and…
