AI Security Newsletter (09-02-2024)

Technology & Tools

Revolutionizing Multimodal Learning with 4M Framework

The 4M framework, spotlighted at NeurIPS 2023 and detailed in an arXiv 2024 paper, represents a significant leap in multimodal and multitask model training. By employing a unified Transformer encoder-decoder across a broad spectrum of modalities—from text and images to geometric shapes—4M achieves remarkable versatility. Its ability to generate and edit across modalities, coupled with enhanced text understanding and multimodal retrieval, showcases its potential as a scalable foundation model for future vision tasks. The open-sourced framework invites further exploration and adaptation across diverse applications.

(I think this technology also matters for the cybersecurity field, where security situational awareness can be presented in many modalities: alert descriptions, network topologies, attack graphs, heatmaps, database table entries, and so on. A model trained with this kind of framework would make it much easier to feed in different types of data and obtain outputs tailored to the needs of particular security tasks.)
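To make the training idea concrete, here is a minimal PyTorch sketch of the masked multimodal objective described above: each modality is tokenized into a shared discrete vocabulary, a random subset of tokens is held out, and an encoder-decoder learns to predict the held-out tokens from the visible ones. The vocabulary size, shapes, and model here are illustrative assumptions, not the released 4M code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 4096       # shared vocabulary after per-modality tokenization (illustrative)
D_MODEL = 256
NUM_MASKED = 16    # number of masked positions the decoder must predict

class MaskedMultimodalSketch(nn.Module):
    """Toy encoder-decoder that predicts masked tokens from visible ones."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        # Learned queries stand in for the masked target positions.
        self.mask_queries = nn.Parameter(torch.randn(NUM_MASKED, D_MODEL))
        self.backbone = nn.Transformer(d_model=D_MODEL, nhead=8, batch_first=True)
        self.head = nn.Linear(D_MODEL, VOCAB)

    def forward(self, visible_tokens: torch.Tensor) -> torch.Tensor:
        # visible_tokens: (batch, n_visible) token IDs pooled from all modalities
        src = self.embed(visible_tokens)
        tgt = self.mask_queries.expand(visible_tokens.size(0), -1, -1)
        return self.head(self.backbone(src, tgt))  # (batch, NUM_MASKED, VOCAB)

# Toy training step: because image, text, and geometry tokens share one
# vocabulary, a single objective covers every modality combination.
model = MaskedMultimodalSketch()
visible = torch.randint(0, VOCAB, (2, 32))          # unmasked input tokens
targets = torch.randint(0, VOCAB, (2, NUM_MASKED))  # held-out (masked) tokens
loss = F.cross_entropy(model(visible).flatten(0, 1), targets.flatten())
loss.backward()
```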

https://4m.epfl.ch/

Unifying AI and Databases: Introducing TAG for Advanced Query Answering

Researchers propose Table-Augmented Generation (TAG), a novel approach that overcomes the limitations of Text2SQL and RAG by integrating language models with databases to answer complex natural language queries. TAG’s three-step process (query synthesis, query execution, and answer generation) enables it to handle queries requiring deep reasoning and world knowledge, significantly outperforming standard methods in accuracy. This advancement opens new avenues for AI and database interaction, promising stronger query-answering capabilities.

(This research has implications for every industry that relies on databases, including cybersecurity. Security teams store large amounts of data in databases, and querying those databases is a daily task. More accurate queries will help security analysts find the right information faster.)
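To illustrate the pipeline, here is a minimal Python sketch of those three steps. The llm() helper is a hypothetical placeholder for any chat-model call, and the SQLite database and schema are whatever you supply; this is a sketch of the idea, not the authors’ reference implementation.

```python
import sqlite3

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model (API or local)."""
    raise NotImplementedError

def tag_answer(question: str, db_path: str, schema: str) -> str:
    # Step 1: query synthesis -- translate the question into SQL over the schema.
    sql = llm(f"Schema:\n{schema}\n\nWrite one SQL query that answers: {question}")

    # Step 2: query execution -- run the synthesized query against the database.
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()

    # Step 3: answer generation -- let the model reason over the retrieved rows,
    # adding world knowledge the database itself does not contain.
    return llm(
        f"Question: {question}\nQuery results: {rows}\n"
        "Answer the question in one or two sentences."
    )
```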

https://arxiv.org/abs/2408.14717
https://github.com/TAG-Research/TAG-Bench

Comprehensive Guide to AI Talks at Hacker Summer Camp 2024
Clint Gibler condenses over 60 AI-related talks from BSidesLV, Black Hat, and DEF CON 2024 into a digestible format, offering short and long summaries categorized by themes such as Securing AI, Attacking AI, and Public Policy. This resource aims to keep cybersecurity professionals updated on AI advancements with minimal time investment, including links to videos, slides, papers, and tools where available.

https://tldrsec.com/p/tldr-every-ai-talk-bsideslv-blackhat-defcon-2024

Risks & Vulnerabilities

Growing Concern Over Hardware Supply Chain Attacks

A recent HP Wolf Security study reveals a significant rise in concerns over nation-state actors targeting hardware supply chains, with 19% of businesses reporting impacts from such attacks. The global survey of 800 IT and security decision-makers underscores the urgency for enhanced device hardware and firmware integrity, as 91% anticipate future nation-state attacks on physical supply chains. The findings stress the necessity for organizations to adopt robust measures to verify device integrity and protect against tampering, highlighting the evolving threat landscape in device security.

https://www.hp.com/us-en/newsroom/press-releases/2024/hp-wolf-security-study-supply-chains.html

Revolutionizing BOLA Vulnerability Detection with AI

Researchers have developed “BOLABuster,” an innovative AI-driven methodology leveraging large language models (LLMs) to automate the detection of Broken Object Level Authorization (BOLA) vulnerabilities in web applications and APIs. This approach addresses the challenges of manual BOLA detection by understanding application logic, identifying endpoint dependencies, and generating test cases at scale. Early applications of BOLABuster have successfully identified numerous BOLA vulnerabilities in open-source projects, including Grafana, Harbor, and Easy!Appointments, showcasing the potential of AI to significantly enhance cybersecurity efforts.
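For context, a single BOLA test case reduces to a probe like the hand-written one below (not BOLABuster’s actual code): authenticate as one user, request an object that belongs to another, and flag the endpoint if the data comes back. The base URL, endpoint path, token, and object ID are hypothetical placeholders.

```python
import requests

BASE_URL = "https://api.example.com"   # hypothetical target
TOKEN_USER_A = "token-for-user-a"      # placeholder credential for user A
VICTIM_OBJECT_ID = "12345"             # an object ID owned by user B

def probe_bola(endpoint: str, object_id: str, token: str) -> bool:
    """Return True if user A can read an object belonging to user B."""
    resp = requests.get(
        f"{BASE_URL}{endpoint}/{object_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    # A 2xx response carrying another user's data indicates a likely BOLA;
    # a correct API would answer 401, 403, or 404.
    return resp.status_code == 200

if __name__ == "__main__":
    if probe_bola("/api/v1/appointments", VICTIM_OBJECT_ID, TOKEN_USER_A):
        print("Potential BOLA: user A can read user B's object")
```

What the AI adds is scale: an LLM that understands the application logic can work out which endpoints depend on which object IDs and generate such test cases automatically.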

https://unit42.paloaltonetworks.com/automated-bola-detection-and-ai/

Uruguayan Scientists Develop New Eavesdropping Technique via HDMI

Uruguayan researchers have advanced the field of side-channel attacks by demonstrating a novel method, dubbed Deep-TEMPEST, to reconstruct images from the radio emissions of HDMI cables using machine learning. This technique, which significantly improves upon older methods by utilizing modern digital interfaces and advanced neural networks, allows for the extraction of text from seemingly indecipherable signals. Despite its technical prowess, the practical applications remain limited, with the researchers suggesting its relevance mainly in scenarios involving highly sensitive data. The study also highlights the evolving nature of digital security threats and the continuous need for adaptive countermeasures.
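As background, the classic TEMPEST step this work builds on can be sketched as follows: fold the magnitude of a radio capture into a 2-D raster using the display’s known video timing, producing the noisy image that a trained image-to-image network would then clean up into readable text. The constants below, and the assumption that the capture has been resampled to one sample per pixel, are illustrative; this is not the project’s actual code.

```python
import numpy as np

H_TOTAL, V_TOTAL = 2200, 1125   # 1080p60 raster size, including blanking intervals

def fold_capture(iq: np.ndarray) -> np.ndarray:
    """Fold a 1-D complex capture (one sample per pixel) into a raster image."""
    mag = np.abs(iq)                       # emission strength per pixel period
    frame_len = H_TOTAL * V_TOTAL
    n_frames = len(mag) // frame_len
    frames = mag[: n_frames * frame_len].reshape(n_frames, V_TOTAL, H_TOTAL)
    # Averaging repeated frames raises the signal-to-noise ratio before the
    # neural network attempts to recover the on-screen content.
    return frames.mean(axis=0)
```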

https://www.kaspersky.com/blog/deep-tempest-side-channel-hdmi/52058/

AI-Induced False Memories in Crime Witness Interviews

A study by the MIT Media Lab reveals that conversational AI, particularly generative chatbots powered by large language models, significantly amplifies the formation of false memories in simulated crime witness interviews. Through suggestive questioning, these chatbots induced over three times more immediate false memories compared to control groups and maintained a higher confidence in these inaccuracies over time. This underscores the ethical risks of deploying advanced AI in sensitive areas like legal testimonies, highlighting the need for careful consideration of AI’s influence on human recollection and perception.

https://www.media.mit.edu/projects/ai-false-memories/overview/

Regulation & Policy

US AI Safety Institute Advances with New Agreements on AI Safety Research
The US AI Safety Institute has signed agreements with Anthropic and OpenAI to collaborate on AI safety research, testing, and evaluation, including access to major new models before and after their public release. These collaborations aim to improve the understanding and management of AI risks, supporting a safer integration of AI into society.

https://www.nist.gov/news-events/news/2024/08/us-ai-safety-institute-signs-agreements-regarding-ai-safety-research

California Legislature Approves AI “Kill Switch” Bill Amidst Debate
The California State Assembly has passed SB-1047, a bill mandating “kill switches” for large AI models to address potential public safety threats; the legislation now awaits Governor Gavin Newsom’s decision. The bill has sparked controversy over its focus on hypothetical future AI risks and its potential impact on innovation and academic research, drawing mixed reactions from the AI community: endorsements from some industry leaders, and criticism that it could stifle open-source collaboration and impose high compliance costs on developers. Governor Newsom faces pressure from both sides as he weighs regulation against fostering technological advancement.

https://arstechnica.com/ai/2024/08/as-contentious-california-ai-safety-bill-passes-critics-push-governor-for-veto/

