
Microsoft already has its legal crosshairs set on DeepSeek

The home page chat interface of DeepSeek AI, running on an iPhone. Nadeem Sarwar / Digital Trends

Microsoft, a primary investor in OpenAI, is now exploring whether the Chinese company DeepSeek used nefarious methods to train its reasoning models. According to Bloomberg Law, the company believes DeepSeek may have violated OpenAI’s terms of service by using its application programming interface (API) to train its recently announced R1 model.

The news comes not long after White House AI and crypto czar David Sacks told Fox News in an interview on Tuesday that it was “possible” DeepSeek “stole intellectual property from the United States.”


“There’s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s models,” Sacks told the outlet.

The AI industry has been raving about DeepSeek’s ability to train AI models quickly and cost-effectively, reportedly in about a year and with just $5.6 million. One possible explanation for that efficiency is that the company used another company’s model as its baseline.

DeepSeek may have used a process called distillation, in which two models work in a teacher-student dynamic: a smaller “student” model is trained on the outputs of a larger “teacher” model, effectively extracting its knowledge. If so, that could explain the company’s low costs and its ability to get by with less powerful Nvidia H800 chips. DeepSeek may now need to show that it did not act unlawfully when developing its models.
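For illustration, here is a minimal sketch of how distillation is commonly implemented in practice. This is a generic, hypothetical PyTorch example, not DeepSeek’s or OpenAI’s actual training code; the temperature value and tensor shapes are placeholders:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label distillation: train a student to match the teacher's
    softened output distribution (temperature is an illustrative value)."""
    # Teacher probabilities (soft targets), softened by the temperature.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    # Student log-probabilities at the same temperature.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL divergence between the two distributions, scaled by T^2
    # as in the standard distillation formulation.
    return F.kl_div(student_log_probs, soft_targets,
                    reduction="batchmean") * temperature ** 2

# Toy example: a batch of 4 samples over a 10-token vocabulary.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow only into the student model
```

The key point is that the student never needs the teacher’s weights or training data, only its outputs, which is why API access alone can be enough to transfer capability.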

Before this recent development, industry experts had speculated that DeepSeek likely used reverse engineering to train its models. That process involves analyzing a model’s behavior to identify its patterns and biases, which can then inform the development of future models. Reverse engineering is a common practice among open-source developers and is generally considered legal.

Security researchers working for Microsoft have already pieced together that DeepSeek may have extracted a considerable amount of data from OpenAI’s API during the fall of 2024. Microsoft reportedly made OpenAI aware of the activity at the time. The R1 model was announced last week, bringing attention to the Chinese AI company and its associated parties.

DeepSeek has also been lauded as an open-source AI model that anyone can build on. Much of the excitement surrounding the platform stems from that openness, along with its performance against top tools such as ChatGPT and Google Gemini. OpenAI is not an open-source service, but anyone can sign up to access its API. The company does make clear in its terms of service that other entities cannot use its output to train competing AI models, TechCrunch noted.

An OpenAI spokesperson told Reuters that, regardless of regulations, it has become common for international companies to try to copy the models of well-known U.S. firms.

“We engage in counter-measures to protect our IP, including a careful process for which frontier capabilities to include in released models, and believe as we go forward that it is critically important that we are working closely with the U.S. government to best protect the most capable models from efforts by adversaries and competitors to take U.S. technology,” the spokesperson said.

DeepSeek’s censorship is a warning shot — and a wake-up call

The AI industry is abuzz with chatter about a new large language model that is taking the fight to the industry’s top dogs like OpenAI and Anthropic. But not without its generous share of surprises. The name is DeepSeek.

It comes out of China. It is open source. Most importantly, it is said to have been developed at a fraction of the cost of what current industry leaders OpenAI, Meta, and Google have burned.

How DeepSeek flipped the tech world on its head overnight

DeepSeek, the chatbot made by a Chinese startup that seemingly dethroned ChatGPT, is taking the world by storm. It's currently the number one topic all over the news, and a lot has happened in the past 24 hours. Among other highlights, Nvidia's stock plummeted in response to DeepSeek, President Donald Trump commented on the new AI, and Mark Zuckerberg is assembling a team to find an answer to DeepSeek. Below, we'll cover all the latest news you need to know about DeepSeek.
Nvidia gets hit by the rise of DeepSeek

Although ChatGPT is the chatbot that quickly lost its public-favorite status with the rise of DeepSeek, Nvidia is the company that suffered the greatest losses. In fact, Nvidia's market loss following the launch of DeepSeek's large language model (LLM) marks the largest one-day loss in value for a single company in stock market history, says Forbes. Nvidia shed nearly $600 billion in market value after the Chinese company behind DeepSeek revealed just how cheap the new LLM was to develop in comparison to rivals from Anthropic, Meta, and OpenAI.

DeepSeek: everything you need to know about the AI that dethroned ChatGPT

A year-old startup out of China is taking the AI industry by storm after releasing a chatbot that rivals the performance of ChatGPT while using a fraction of the power, cooling, and training expense that OpenAI's, Google's, and Anthropic's systems demand. Here's everything you need to know about DeepSeek's V3 and R1 models and why the company could fundamentally upend America's AI ambitions.
What is DeepSeek?
DeepSeek (technically, "Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd.") is a Chinese AI startup that was originally founded as an AI lab for its parent company, High-Flyer, in April 2023. That May, DeepSeek was spun off into its own company (with High-Flyer remaining on as an investor) and later released its DeepSeek-V2 model. V2 offered performance on par with other leading Chinese AI firms, such as ByteDance, Tencent, and Baidu, but at a much lower operating cost.

The company followed up with the release of V3 in December 2024. V3 is a 671-billion-parameter model that reportedly took less than two months to train. What's more, according to a recent analysis from Jefferies, DeepSeek had a “training cost of only US$5.6m (assuming $2/H800 hour rental cost). That is less than 10% of the cost of Meta’s Llama.” That's a tiny fraction of the hundreds of millions to billions of dollars that US firms like Google, Microsoft, xAI, and OpenAI have spent training their models.
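As a rough sanity check on that figure, the arithmetic behind the Jefferies estimate can be sketched using only the numbers quoted above; the implied GPU-hour total below is simply derived from them, not an official disclosure:

```python
# Back-of-the-envelope check using only the figures quoted above.
training_cost_usd = 5.6e6   # Jefferies' quoted training cost (~US$5.6M)
h800_rate_per_hour = 2.0    # assumed rental cost per H800 GPU-hour

implied_gpu_hours = training_cost_usd / h800_rate_per_hour
print(f"Implied compute budget: {implied_gpu_hours:,.0f} H800 GPU-hours")
# -> Implied compute budget: 2,800,000 H800 GPU-hours
```

In other words, the headline cost is a compute-rental estimate: roughly 2.8 million H800 GPU-hours priced at the assumed $2 hourly rate, not a figure that includes research staff, data, or infrastructure.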
