Israeli AI access control company Knostic published research this week revealing a new method of cyberattack on AI search engines, one that takes advantage of an unexpected trait – impulsiveness. The researchers demonstrated how AI-powered chatbots such as ChatGPT and Microsoft's Copilot can be made to expose sensitive data by bypassing their security mechanisms.
The method, called Flowbreaking, exploits an architectural gap in systems built around large language models (LLMs): in certain situations the system "spits out" data before the security layer has had time to verify it, then retracts the answer, like a person who regrets what he just said. Although the data is erased within a fraction of a second, a user who takes a screenshot can document it.
"LLM systems are built from multiple components, and it is possible to attack the interfaces between those components," said Knostic co-founder and CEO Gadi Evron, who previously founded Cymmetria. The researchers demonstrated two vulnerabilities that the new method exploits: the first, called "Second Thoughts," causes the LLM to send an answer to the user before it undergoes security screening, and the second, called "Stop and Roll," takes advantage of the stop button to receive an answer before it is filtered.
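The gap described above can be illustrated with a minimal Python sketch. This is not Knostic's code and does not reflect any real product's architecture; it is a toy model, under the assumption that the guardrail inspects the output asynchronously and lags a few tokens behind the stream, while each token is rendered to the client the moment it is generated. The function names, the token lag parameter, and the example tokens are all invented for illustration.

```python
def stream_with_post_hoc_guardrail(tokens, is_sensitive, moderation_lag=3):
    """Toy model of a streaming LLM pipeline whose guardrail runs
    behind the stream: each token is delivered to the client first,
    and the moderation check only sees tokens emitted `moderation_lag`
    steps earlier. Returns (tokens the client saw, whether the answer
    was eventually retracted)."""
    delivered = []
    for i, tok in enumerate(tokens):
        # The guardrail has so far inspected only the older tokens.
        inspected = tokens[:max(0, i - moderation_lag)]
        if any(is_sensitive(t) for t in inspected):
            # Retraction fires, but the sensitive token may already
            # have been rendered (and screenshotted) client-side.
            return delivered, True
        delivered.append(tok)  # token reaches the UI immediately
    return delivered, any(is_sensitive(t) for t in tokens)


def stream_with_inline_guardrail(tokens, is_sensitive):
    """Contrast: the check runs on each token *before* it is sent,
    so the sensitive token never reaches the client."""
    delivered = []
    for tok in tokens:
        if is_sensitive(tok):
            return delivered, True
        delivered.append(tok)
    return delivered, False


if __name__ == "__main__":
    tokens = ["The", "secret", "code", "is", "1234", "."]
    flag = lambda t: t == "1234"

    leaked, retracted = stream_with_post_hoc_guardrail(tokens, flag)
    print("post-hoc:", leaked, "retracted:", retracted)
    # The sensitive token is in `leaked` even though retraction occurred.

    safe, blocked = stream_with_inline_guardrail(tokens, flag)
    print("inline:  ", safe, "blocked:", blocked)
    # The sensitive token never appears in `safe`.
```

The contrast between the two functions mirrors the article's point: the vulnerability is not in the model itself but in the ordering of components around it, i.e. whether filtering happens before or after tokens are pushed to the user interface.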
Published by Globes, Israel Business News – en.globes.co.il – on November 26, 2024.
© Copyright Globes Publisher Itonut (1983) Ltd., 2024.
Knostic founders Gadi Evron and Sounil Yu. Credit: Knostic