
DeepSeek invites users behind the curtain of its open-source AI code

Phone running DeepSeek on a laptop keyboard. (Image: Reuters)

The Chinese startup DeepSeek plans to become even more transparent about the technology behind its open-source AI models, such as its R1 reasoning model.

The company said in a post on X on Friday that it will make several code repositories available to the public starting next week, giving developers and researchers a deeper understanding of key parts of DeepSeek's code. It is an especially bold move for a tech company, though bold moves are already par for the course for DeepSeek, which entered the AI space as an industry disruptor. The startup has stood out because its models have performed as well as, if not better than, those from many of the industry's top AI brands, such as OpenAI and Meta, which rely on proprietary technologies.


🚀 Day 0: Warming up for #OpenSourceWeek!

We're a tiny team @deepseek_ai exploring AGI. Starting next week, we'll be open-sourcing 5 repos, sharing our small but sincere progress with full transparency.

These humble building blocks in our online service have been documented,…

— DeepSeek (@deepseek_ai) February 21, 2025

“We’re a tiny team exploring AGI. Starting next week, we’ll be open-sourcing 5 repos, sharing our small but sincere progress with full transparency,” DeepSeek said on X.

By making its AI models open source, DeepSeek has already made its code freely available for others to build on. Now the company is letting the public look behind the veil of the original code that took the world by storm. The move has the potential to make DeepSeek's AI models even more popular by making knowledge about the brand and its technologies more widely available and by dispelling lingering concerns. The company said it plans to continue revealing more data after the initial code repository launch.

The public will be able to see how "every line of code, configuration file, and piece of data lives there together," Cryptopolitan noted.

According to Bloomberg, DeepSeek's push for greater transparency may also help the company quell security concerns raised by several government entities, including those in the U.S., South Korea, Australia, and Taiwan.

Since DeepSeek's arrival in the AI space, several companies have either adopted or recommitted to open-source development for their AI technology. The Chinese brand aims to continue its current strategy.

“As part of the open-source community, we believe that every line shared becomes collective momentum that accelerates the journey…No ivory towers – just pure garage-energy and community-driven innovation,” DeepSeek said.
