
DeepSeek AI draws ire of spy agency over data hoarding and bot bias

DeepSeek AI chatbot running on an iPhone.
Nadeem Sarwar / Digital Trends

The privacy and safety troubles continue to pile up for buzzy Chinese AI upstart DeepSeek. After being blocked for lawmakers and federal employees in multiple countries, and raising alarms over its censorship and safeguards, the company has now drawn an official notice from South Korea’s spy agency.

The country’s National Intelligence Service (NIS) has flagged the AI company over excessive data collection and questionable responses to topics sensitive to Korean heritage, as per Reuters.


“Unlike other generative AI services, it has been confirmed that chat records are transferable as it includes a function to collect keyboard input patterns that can identify individuals and communicate with Chinese companies’ servers such as volceapplog.com,” the agency was quoted as saying.

This comes after a government notice asking agencies and ministries to block employee access to DeepSeek over security concerns. Australia and Taiwan have already put such restrictions in place, and more countries are expected to follow suit.

Homepage of DeepSeek's mobile AI app.
Nadeem Sarwar / Digital Trends

The core issue is that DeepSeek reportedly offers its ad partners open access to user data, which the Chinese government can also access under local laws. According to The Korea Herald, the chatbot was also returning controversial answers to queries about culturally sensitive and contentious geopolitical topics.

Notably, the chatbot delivers different answers when asked the same question in Korean and in Chinese. According to The Korea Times, the agency will conduct further tests in the near future to assess the app’s safety and security.

While security concerns have made the biggest headlines, experts are also worried about the responses DeepSeek can generate. In an analysis by The Wall Street Journal, the AI produced worrying output, including instructions for making bioweapons, a manifesto defending the Nazis, and encouragement of self-harm.

Mobile users experience censorship bias with DeepSeek AI.
DeepSeek’s censorial behavior mirrors that of the Great Firewall on China’s internet. Nadeem Sarwar / Digital Trends

Anthropic CEO Dario Amodei, meanwhile, said that in his company’s tests, DeepSeek proved to be the worst AI model at blocking the generation of extremely disturbing information, such as instructions for creating bioweapons.

Just over a week ago, researchers at Cisco also tested it with jailbreaking prompts across six different categories, and it failed to block a single one of the attacks. In another round of tests by Qualys, the AI managed only a 47% jailbreak pass rate.

Then there are the concerns about leaking sensitive data. Cybersecurity researchers at Wiz recently discovered over a million lines of chat history containing sensitive information, all of it publicly accessible.

DeepSeek plugged the flaw, but its suitability for commercial use remains hotly debated. In the US, NASA and the US Navy have already banned employees from using DeepSeek, and a bill seeking to ban it on federal devices is also on the table.

Nadeem Sarwar