This week, Israeli AI access control company Knostic published research revealing a new cyberattack technique against AI search engines that exploits an unexpected trait: impulsivity. The researchers demonstrated how AI chatbots such as ChatGPT and Microsoft's Copilot can be made to bypass their security mechanisms and reveal sensitive data.
The technique, called flow-breaking, exploits an architectural gap in large language model (LLM) systems: in certain situations the system "spits out" data before its security layer has had time to check it, and then erases the answer as if it regrets what it said. The data disappears within seconds, but it can be documented if the user captures a screenshot.
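To make the mechanism concrete, here is a minimal, hypothetical Python sketch of the race described above: the backend streams tokens to the user as they are generated and only runs its guardrail once the answer is complete, so a failing check can do no more than retract text the user has already seen. All names here (generate_tokens, guardrail_check, handle_request) are illustrative assumptions, not taken from any real product or from Knostic's write-up.

```python
import asyncio


async def generate_tokens(prompt: str):
    """Stand-in for an LLM that yields its answer token by token."""
    for token in f"Confidential answer to: {prompt}".split():
        await asyncio.sleep(0.05)   # simulate generation latency
        yield token


def guardrail_check(full_answer: str) -> bool:
    """Post-hoc content filter; it only sees the completed answer."""
    return "Confidential" not in full_answer


async def handle_request(prompt: str, send_to_client):
    shown = []
    # Tokens are pushed to the user's screen as soon as they exist...
    async for token in generate_tokens(prompt):
        shown.append(token)
        await send_to_client(token)
    # ...and only then does the guardrail inspect the complete answer.
    if not guardrail_check(" ".join(shown)):
        # Too late: the client has already rendered (and can screenshot) the text.
        await send_to_client("[message retracted]")


async def main():
    async def send_to_client(chunk: str):
        print(chunk, end=" ", flush=True)

    await handle_request("internal salary data", send_to_client)
    print()


asyncio.run(main())
```

Running this prints the full answer followed by "[message retracted]", mirroring the on-screen behavior the researchers describe.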
"LLM systems are built from multiple components, and it is possible to attack the interfaces between those components," said Gadi Evron, co-founder and CEO of Knostic, who previously founded Cymmetria. The researchers demonstrated two vulnerabilities that exploit the new technique. In the first, called "Second Thoughts," the LLM sends its answer to the user before it has passed the security check, then retracts it. In the second, called "Stop and Roll," the user presses the stop button to receive the answer before it is filtered.
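The stop-button variant can be sketched in the same hypothetical style: if the user cancels the stream partway through, the post-generation guardrail is never reached, yet the partial answer has already been delivered. Again, the function names and timing values are assumptions for illustration only, not Knostic's code or any vendor's API.

```python
import asyncio


async def stream_answer(send, answer_tokens):
    """Stand-in for an LLM streaming its answer token by token."""
    for token in answer_tokens:
        await asyncio.sleep(0.05)   # simulate generation latency
        await send(token)


async def handle_with_stop(send, stop_after: float):
    tokens = "Confidential answer the filter would normally block".split()
    task = asyncio.create_task(stream_answer(send, tokens))
    try:
        # The user presses "stop" partway through the stream.
        await asyncio.wait_for(task, timeout=stop_after)
    except asyncio.TimeoutError:
        # Generation is cancelled, so the guardrail below is never reached,
        # but every token streamed so far is already on the user's screen.
        return
    # Guardrail runs only when generation completes normally.
    if "Confidential" in " ".join(tokens):
        await send("[message retracted]")


async def main():
    async def send(chunk: str):
        print(chunk, end=" ", flush=True)

    await handle_with_stop(send, stop_after=0.12)
    print()


asyncio.run(main())
```

In this sketch the cancellation simply returns before the filter runs, which is the essence of the second vulnerability: the check is skipped rather than raced.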
Published by Globes, Israel Business News – en.globes.co.il – on November 26, 2024.
© Globes Publisher Itonut (1983) Ltd., Copyright 2024.
Knostic founders Gadi Evron and Sounil Yu. Credit: Knostic