ChatGPT may have more paid subscribers than this popular streaming service

OpenAI CEO Sam Altman standing on stage at a product event.
Andrew Martonik / Digital Trends

OpenAI’s steamrolling of its rivals continued apace this week, and a new study estimates just how much success it’s had in winning over paid subscribers through ChatGPT Plus.

According to a report published by Futuresearch this week, OpenAI’s products are far and away the most popular — and profitable — in the AI space. Per the study, OpenAI has an estimated annual recurring revenue (ARR) of $3.4 billion.

a graph showing OpenAI's estimated ARR for 2024
Futuresearch

Some 55% of that, or $1.9 billion, comes from its 7.7 million ChatGPT Plus subscribers, who pay $20 a month for the service. Another 21%, or $714 million, comes from the company’s 1.2 million ChatGPT Enterprise subscribers at $50 a month. Just 15%, or $510 million, is generated by access to OpenAI’s API, while the remaining 8%, or $290 million, comes from its 980,000 ChatGPT Teams subscribers, who pay $25 a month. In all, OpenAI is estimated to have some 9.88 million monthly subscribers.
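For the curious, the subscription math behind those figures roughly checks out. Here is a minimal sketch (in Python, using only the estimates quoted above; the API share is usage-based rather than subscription-based, so it isn't modeled):

```python
# Back-of-the-envelope check of Futuresearch's estimates. All figures come from
# the report as quoted above, not from OpenAI; prices are assumed flat year-round.
tiers = {
    "Plus":       {"subscribers": 7_700_000, "monthly_price": 20},
    "Enterprise": {"subscribers": 1_200_000, "monthly_price": 50},
    "Teams":      {"subscribers":   980_000, "monthly_price": 25},
}
total_arr = 3.4e9  # Futuresearch's overall ARR estimate

for name, tier in tiers.items():
    arr = tier["subscribers"] * tier["monthly_price"] * 12  # annualized revenue
    print(f"{name}: ~${arr / 1e9:.2f}B ARR ({arr / total_arr:.0%} of $3.4B)")

print(f"Total subscribers: {sum(t['subscribers'] for t in tiers.values()):,}")
```

Running it reproduces the rounded figures above: roughly $1.85 billion, $720 million, and $290 million respectively, and 9.88 million subscribers in total.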

That’s nearly 2 million more than the roughly 8 million subscribers that YouTube TV, the nation’s fourth-largest pay-TV provider, reportedly enjoys; though to be fair, Disney+ saw more than 10 million sign-ups for its streaming service on its opening day. Still, it’s quite an achievement, especially at $20 per month.

The startling revenue raises the question: What’s the company doing with all this money? Well, another piece of news today ties directly into the answer.

Per a report from Bloomberg Thursday, OpenAI has developed a five-tier scale for measuring the capabilities of its AI systems as the company seeks to achieve AGI within the next decade. The company shared its scale internally with employees and investors earlier in the week.

OpenAI’s charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work,” and states that the company’s mission is to ensure such systems benefit all of humanity. The company says it will “attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

The scale starts at Level 1, which describes AI that can interact with people conversationally — essentially your run-of-the-mill chatbot. Level 2, which the company believes it is currently approaching, covers “Reasoners”: AI that can solve problems in the same way (and as well as) a person with a doctorate-level education could. We’re already seeing hints of this in how often AI models pass state bar exams and medical licensing exams these days.

Level 3 describes “Agents”: AI that can operate on a user’s behalf across multiple days and systems — think Apple Intelligence, but far more capable. Level 4, “Innovators,” would be AI that can devise novel solutions to a given problem or task, while Level 5, “Organizations,” describes AI that can do the work of an entire company’s human workforce. The company was quick to point out that the categorization is still preliminary and could be adjusted in the future.
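Laid out as a simple data structure, the reported scale is easier to take in at a glance. The sketch below is purely illustrative, following Bloomberg's description; the names are not an official OpenAI artifact or API:

```python
# The five capability tiers as reported by Bloomberg (illustrative only).
from enum import IntEnum

class CapabilityLevel(IntEnum):
    CHATBOTS = 1       # Conversational AI -- today's run-of-the-mill chatbots
    REASONERS = 2      # Human-level problem solving, on par with a doctorate-educated person
    AGENTS = 3         # Systems that act on a user's behalf across days and services
    INNOVATORS = 4     # Systems that devise novel solutions to open-ended problems
    ORGANIZATIONS = 5  # Systems that can do the work of an entire company

for level in CapabilityLevel:
    print(level.value, level.name.title())
```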

The notion of interacting with an artificial intelligence as smart and capable as the people who built it has been around nearly as long as computers themselves, though the requisite breakthroughs have always seemed to remain “a few years” out of reach. The release of ChatGPT in 2022, however, has drastically shortened the estimated time frame for achieving that goal. Shane Legg, co-founder of Google’s DeepMind and the company’s lead AGI researcher, told Time last year that he estimates a 50-50 chance of developing AGI by 2028. Anthropic CEO Dario Amodei, for his part, believes AGI will be achieved within the next 24 months.

OpenAI certainly appears to be in position to achieve that goal.

Andrew Tarantola
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…
Chatbots are going to Washington with ChatGPT Gov

In an X post Monday commenting on DeepSeek's sudden success, OpenAI CEO Sam Altman promised to "pull up some releases," and it appears he has done so. OpenAI unveiled its newest product on Tuesday: a "tailored version of ChatGPT designed to provide U.S. government agencies with an additional way to access OpenAI’s frontier models," per the announcement post. ChatGPT Gov will reportedly offer even tighter data security measures than ChatGPT Enterprise, but how will it handle the hallucinations that plague the company's other models?

According to OpenAI, more than 90,000 federal, state, and local government employees across 3,500 agencies have queried ChatGPT more than 18 million times since the start of 2024. The new platform will let government agencies enter “non-public, sensitive information” into ChatGPT while it runs within their own secure hosting environments (specifically, the Microsoft Azure commercial cloud or the Azure Government community cloud) and under cybersecurity frameworks such as IL5 or CJIS. This enables each agency to "manage their own security, privacy and compliance requirements,” Felipe Millon, Government Sales lead at OpenAI, told reporters on the press call Tuesday.

DeepSeek: everything you need to know about the AI that dethroned ChatGPT

A year-old startup out of China is taking the AI industry by storm after releasing a chatbot that rivals the performance of ChatGPT while using a fraction of the power, cooling, and training expense that systems from OpenAI, Google, and Anthropic demand. Here's everything you need to know about DeepSeek's V3 and R1 models and why the company could fundamentally upend America's AI ambitions.
What is DeepSeek?
DeepSeek (formally, "Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd.") is a Chinese AI startup that was originally founded as an AI lab for its parent company, High-Flyer, in April 2023. That May, DeepSeek was spun off into its own company, with High-Flyer remaining on as an investor; it later released its DeepSeek-V2 model in May 2024. V2 offered performance on par with models from other leading Chinese AI firms, such as ByteDance, Tencent, and Baidu, but at a much lower operating cost.

The company followed up with the release of V3 in December 2024. V3 is a 671-billion-parameter model that reportedly took less than two months to train. What's more, a recent analysis from Jefferies noted DeepSeek's “training cost of only US$5.6m (assuming $2/H800 hour rental cost). That is less than 10% of the cost of Meta’s Llama.” That's a tiny fraction of the hundreds of millions to billions of dollars that U.S. firms like Google, Microsoft, xAI, and OpenAI have spent training their models.
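To put that quoted figure in perspective, the arithmetic is simple: at the assumed $2-per-hour H800 rental rate, $5.6 million buys roughly 2.8 million GPU-hours. A quick sketch using only the numbers from the Jefferies quote (the rental rate is their stated assumption, not a confirmed DeepSeek figure):

```python
# How many H800 GPU-hours does the quoted training budget imply?
# Figures are taken from the Jefferies quote above; actual training details may differ.
training_cost_usd = 5.6e6   # quoted V3 training cost
h800_rate_per_hour = 2.0    # assumed H800 rental rate from the same quote

implied_gpu_hours = training_cost_usd / h800_rate_per_hour
print(f"Implied H800 GPU-hours: {implied_gpu_hours:,.0f}")  # ~2,800,000

# For contrast, a $500M training run at the same rental rate:
print(f"A $500M run would imply {500e6 / h800_rate_per_hour:,.0f} GPU-hours")
```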

Sam Altman confirms ChatGPT’s latest model is free for all users

Earlier this week, OpenAI CEO Sam Altman declared the company's newest reasoning model, o3, ready for public consumption after it passed its external safety testing, and announced that it would arrive as both an API and ChatGPT model option in the coming weeks. On Thursday, Altman took to social media to confirm that the lightweight version, o3-mini, won't just be made available to paid subscribers at the Plus, Teams, and Pro tiers, but to free-tier users as well.

https://x.com/sama/status/1882478782059327666
