As the public grows more comfortable trusting artificial intelligence, hackers have found a perfect environment for tricking internet users into downloading malware.
The latest lure is the Google Bard chatbot, which is being used as a decoy to get people to unknowingly click ads laced with malicious code. The ads are styled as if they are promoting Google Bard, making them seem safe. Once clicked, however, they direct users to a malware-ridden webpage rather than an official Google page.
Security researchers at ESET first observed the discrepancies in the ads, which include several grammar and spelling errors in the copy, as well as a writing style that is not up to par with Google’s standard, according to TechRadar.
The ad directs users to the webpage of a Dublin-based firm called rebrand.ly rather than a Google-hosted domain where you could actually learn more about the Bard chatbot. Researchers have not confirmed any data theft, but they warn that accessing such pages while logged into browser accounts could leave your private data susceptible to being hacked.
Additionally, the ad includes a download button that delivers a file posing as a personal Google Drive space; in reality, it is confirmed malware named GoogleAIUpdate.rar.
ESET researcher Thomas Uhlemann noted that, as of Monday, the “campaign was still visible in different variations.”
He added that this is one of the larger cyberattack campaigns of its kind he has seen, with some variations featuring fake ads for Meta AI or other knockoff Google AI marketing.
Bard is currently the biggest competitor to OpenAI’s ChatGPT chatbot. ChatGPT experienced a similar cyberattack in late February, when security researcher Dominic Alvieri observed an info-stealing malware called RedLine. The malware was hosted on the website chat-gpt-pc.online, which featured ChatGPT branding and was advertised on a Facebook page as a legitimate OpenAI link to persuade people into accessing the infected site.
Alvieri also found fake ChatGPT apps on Google Play and various other third-party Android app stores, which could send malware to devices if downloaded.
ChatGPT has been a major target of bad actors, especially since it introduced its $20-per-month ChatGPT Plus tier in early February. Bad actors have even gone as far as using the chatbot itself to create malware — though what they used was a rigged version of OpenAI’s GPT-3 API programmed to generate malicious content, such as text for phishing emails and malware scripts.