

A dangerous new jailbreak for AI chatbots was just discovered

[Image: the side of a Microsoft building. Credit: Wikimedia Commons]

Microsoft has released more details about a troubling new generative AI jailbreak technique it has discovered, called “Skeleton Key.” Using this prompt injection method, malicious users can effectively bypass a chatbot’s safety guardrails, the security features that keep ChatGPT from going full Tay.

Skeleton Key is an example of a prompt injection or prompt engineering attack. It’s a multi-turn strategy designed to essentially convince an AI model to ignore its ingrained safety guardrails, “[causing] the system to violate its operators’ policies, make decisions unduly influenced by a user, or execute malicious instructions,” Mark Russinovich, CTO of Microsoft Azure, wrote in the announcement.


A compromised model could also be tricked into revealing harmful or dangerous information: say, how to build improvised nail bombs or the most efficient method of dismembering a corpse.

[Image: an example of a Skeleton Key attack. Credit: Microsoft]

The attack works by first asking the model to augment its guardrails rather than outright change them: to issue warnings in response to forbidden requests rather than refuse them. Once the jailbreak is accepted, the system acknowledges the update to its guardrails and will follow the user’s instructions to produce any content requested, regardless of topic. The research team successfully tested the exploit across a variety of subjects including explosives, bioweapons, politics, racism, drugs, self-harm, graphic sex, and violence.

While malicious actors might be able to get the system to say naughty things, Russinovich was quick to point out that there are limits to what sort of access attackers can actually achieve using this technique. “Like all jailbreaks, the impact can be understood as narrowing the gap between what the model is capable of doing (given the user credentials, etc.) and what it is willing to do,” he explained. “As this is an attack on the model itself, it does not impute other risks on the AI system, such as permitting access to another user’s data, taking control of the system, or exfiltrating data.”

As part of its study, Microsoft tested the Skeleton Key technique on a variety of leading AI models, including Meta’s Llama 3 70B Instruct, Google’s Gemini Pro, OpenAI’s GPT-3.5 Turbo and GPT-4, Mistral Large, Anthropic’s Claude 3 Opus, and Cohere’s Command R+. The company has already disclosed the vulnerability to those developers and has implemented Prompt Shields to detect and block the jailbreak in its Azure-managed AI models, including Copilot.

Andrew Tarantola
Former Digital Trends Contributor