DuckDuckGo’s new AI service keeps your chatbot conversations private

DuckDuckGo released its new AI Chat service on Thursday, letting users anonymously access popular chatbots like GPT-3.5 and Claude 3 Haiku without sharing their personal information, while also preventing those companies from training their AIs on the conversations. AI Chat essentially works by inserting itself between the user and the model, like a high-tech game of telephone.

From the AI Chat home screen, users can select which chat model they want to use — Meta’s Llama 3 70B model and Mixtral 8x7B are available in addition to GPT-3.5 and Claude 3 Haiku — then begin conversing with it as they normally would. DuckDuckGo connects to that chat model as an intermediary, substituting the user’s IP address with one of its own. “This way it looks like the requests are coming from us and not you,” the company wrote in a blog post.
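
DuckDuckGo hasn’t published its implementation, but the intermediary pattern it describes is straightforward to sketch. The snippet below is a minimal, hypothetical relay: it accepts a chat request, keeps only the message content, and forwards it to a model provider from the relay’s own address, so the provider never sees the user’s IP or browser metadata. The Flask route, the PROVIDER_URL endpoint, and the payload shape are illustrative assumptions, not DuckDuckGo’s actual code.

```python
# Hypothetical sketch of the "intermediary" pattern DuckDuckGo describes.
# The relay receives a chat request, strips anything that could identify
# the user, and forwards it to the model provider from its own IP address.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)

# Placeholder standing in for a model provider's chat API endpoint.
PROVIDER_URL = "https://api.example-model-provider.com/v1/chat"

@app.route("/chat", methods=["POST"])
def relay_chat():
    # Keep only the prompt itself; drop cookies, user agent, and any
    # other client metadata that arrived with the incoming request.
    payload = {"messages": request.get_json().get("messages", [])}

    # The outbound call is made with fresh headers from this server, so
    # the provider sees the relay's IP address, not the user's.
    resp = requests.post(
        PROVIDER_URL,
        json=payload,
        headers={"Content-Type": "application/json"},
        timeout=30,
    )

    # Return the model's reply without logging or storing the conversation.
    return jsonify(resp.json()), resp.status_code

if __name__ == "__main__":
    app.run(port=8080)
```

Because the outbound request originates from the relay rather than the browser, the model provider only ever sees traffic from the intermediary, which is the effect DuckDuckGo describes in its blog post.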

As with the company’s anonymized search feature, all metadata is stripped from user queries, so even though DuckDuckGo warns that “the underlying model providers may store chats temporarily,” there’s no way to personally identify users based on those chats. And, as The Verge notes, DuckDuckGo also has agreements in place with those AI companies that prevent them from using chat prompts and outputs to train their models and require them to delete any saved data within 30 days.

Data privacy is a growing concern in the AI community, even as the number of people using AI tools both personally and at work continues to rise. A Pew Research study from October found that roughly eight in 10 of those familiar with AI say its use by companies “will lead to people’s personal information being used in ways they won’t be comfortable with.” While most chatbots already let users opt out of having their data collected, those options are often buried in layers of menus, with the onus on the user to find and select them.

AI Chat is available at both duck.ai and duckduckgo.com/chat. It’s free to use “within a daily limit,” though the company is currently considering a more expansive paid option with higher usage limits and access to more advanced models. This new service follows last year’s release of DuckDuckGo’s DuckAssist, which provides anonymized, AI-generated synopses of search results, akin to Google’s SGE.
