
AI can now steal your passwords with almost 100% accuracy — here’s how


Researchers have demonstrated a new way for AI tools to steal your data: keystrokes. A new research paper, published on Cornell University's arXiv preprint server, details an AI-driven attack that can steal passwords with up to 95% accuracy by listening to what you type on your keyboard.

The researchers accomplished this by training an AI model on the sound of keystrokes and deploying it on a nearby phone. The integrated microphone listened for keystrokes on a MacBook Pro and was able to reproduce them with 95% accuracy — the highest accuracy the researchers have seen without the use of a large language model.


The team also tested accuracy over a Zoom call, with the keystrokes recorded by the laptop’s microphone during a meeting. In that test, the AI reproduced the keystrokes with 93% accuracy. Over Skype, the model was 91.7% accurate.

Before you throw away your loud mechanical keyboard, it’s worth noting that the volume of the keyboard had little to do with the accuracy of the attack. Instead, the AI model was trained on the waveform, intensity, and timing of each keystroke to identify them. For instance, you may press one key a fraction of a second later than others due to your typing style, and the model takes that into account.
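To make that concrete, here is a minimal, hypothetical sketch (in Python, not the researchers’ code) of how individual keystrokes could be isolated from a recording by their energy spikes, and how the gaps between presses, one of those typing-style cues, could be measured. The function name, threshold, and window size are illustrative assumptions.

```python
# Hypothetical sketch: find keystroke onsets in a recording by short-time energy,
# then measure the timing between presses. All thresholds here are made up.
import numpy as np

def find_keystrokes(signal: np.ndarray, sample_rate: int,
                    window_ms: float = 10.0, threshold: float = 0.02) -> np.ndarray:
    """Return approximate start times (in seconds) of energy bursts that look like key presses."""
    window = int(sample_rate * window_ms / 1000)
    n_windows = len(signal) // window
    # Mean energy over non-overlapping windows.
    energy = np.square(signal[: n_windows * window]).reshape(n_windows, window).mean(axis=1)
    # A keystroke candidate is a window whose energy crosses the threshold
    # while the previous window was below it (a rising edge).
    above = energy > threshold
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets * window / sample_rate

# Example: inter-keystroke gaps, a typing-rhythm cue a model can learn from.
# times = find_keystrokes(recording, 44_100)
# gaps = np.diff(times)
```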

In the wild, this attack would take the form of malware installed on your phone or another nearby device with a microphone. From there, it only needs to listen through the microphone, capture the sound of your keystrokes, and feed the recordings into an AI model. The researchers used CoAtNet, an AI image classifier, for the attack, training the model on 36 keys on a MacBook Pro, each pressed 25 times.
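Roughly, the pipeline works by turning each keystroke clip into a spectrogram “image” and handing it to an image classifier. The sketch below is an illustrative assumption of how that could be wired together, not the paper’s implementation: a small CNN stands in for CoAtNet, and the sample rate, class count, and names are hypothetical.

```python
# Hypothetical sketch of an acoustic keystroke classifier: audio clip -> mel-spectrogram
# "image" -> image classifier. A tiny CNN stands in for the CoAtNet model from the paper.
import torch
import torch.nn as nn
import torchaudio

SAMPLE_RATE = 44_100   # assumed recording rate
N_KEYS = 36            # the study's 36 keys, each pressed 25 times

class KeystrokeClassifier(nn.Module):
    """Minimal CNN standing in for the CoAtNet image classifier."""
    def __init__(self, n_keys: int = N_KEYS):
        super().__init__()
        # Convert a raw keystroke clip into a 64-band mel-spectrogram "image".
        self.to_spec = torchaudio.transforms.MelSpectrogram(
            sample_rate=SAMPLE_RATE, n_mels=64
        )
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_keys)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, samples) of audio, one keystroke per clip
        spec = self.to_spec(clip).unsqueeze(1)   # (batch, 1, mels, frames)
        spec = torch.log1p(spec)                 # compress the dynamic range
        return self.head(self.features(spec).flatten(1))

# Example: score a single 0.3-second keystroke recording (random data stands in here).
model = KeystrokeClassifier()
fake_keystroke = torch.randn(1, int(0.3 * SAMPLE_RATE))
predicted_key = model(fake_keystroke).argmax(dim=1)
```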

There are some ways around this kind of attack, as reported by Bleeping Computer. The first is to avoid typing your password in at all by leveraging features like Windows Hello and Touch ID. You can also invest in a good password manager, which not only avoids the threat of typing in your password but also allows you to use random passwords for all of your accounts.
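The point of the password-manager advice is that a random, autofilled password is never typed, so there is nothing for a microphone to hear. As a simple illustration (not tied to any particular password manager), here is how a strong random password can be generated with Python’s standard-library secrets module.

```python
# Generate a cryptographically strong random password, the kind a password
# manager would create and autofill so it is never typed by hand.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
```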

What won’t help is a new keyboard. Because the attack relies on the waveform and timing of each press rather than its volume, even the best keyboards are vulnerable, and quieter switches won’t make a difference.

Unfortunately, this is just the latest in a string of new attack vectors enabled by AI tools, including ChatGPT. Just a week ago, the FBI warned about the dangers of ChatGPT and how it’s being used to launch criminal campaigns. Security researchers have also seen new challenges, such as adaptive malware that can rapidly rewrite itself using tools like ChatGPT.

Jacob Roach
Lead Reporter, PC Hardware
Jacob Roach is the lead reporter for PC hardware at Digital Trends. In addition to covering the latest PC components, from…