
Meta’s next AI model to require nearly 10 times the power to train


Facebook parent company Meta will continue to invest heavily in its artificial intelligence research efforts, despite expecting the nascent technology to require years of work before becoming profitable, executives explained on the company’s Q2 earnings call Wednesday.

Meta is “planning for the compute clusters and data we’ll need for the next several years,” CEO Mark Zuckerberg said on the call. Meta will need an “amount of compute… almost 10 times more than what we used to train Llama 3,” he said, adding that Llama 4 will “be the most advanced [model] in the industry next year.” For reference, the Llama 3 model was trained on a cluster of 16,384 Nvidia H100 80GB GPUs.


The company is no stranger to writing checks for aspirational research and development projects. Meta’s Q2 financials show the company expects to spend $37 billion to $40 billion on capital expenditures in 2024, and executives expect a “significant” increase in that spending next year. “It’s hard to predict how this will trend multiple generations out into the future,” Zuckerberg remarked. “But at this point, I’d rather risk building capacity before it is needed rather than too late, given the long lead times for spinning up new inference projects.”

And it’s not like Meta doesn’t have the money to burn. With an estimated 3.27 billion people using at least one Meta app daily, the company made just over $39 billion in revenue in Q2, a 22% increase from the previous year. Out of that, the company earned around $13.5 billion in profit, a 73% year-over-year increase.

But just because Meta is making a profit doesn’t mean its AI efforts are profitable. CFO Susan Li conceded that Meta’s generative AI work will not generate revenue this year, and reiterated that returns from those investments will “come in over a longer period of time.” Still, the company is “continuing to build our AI infrastructure with fungibility in mind, so that we can flex capacity where we think it will be put to best use.”

Li also noted that the existing training clusters can be easily reworked to perform inference tasks, which are expected to constitute a majority of compute demand as the technology matures and more people begin using these models on a daily basis.

“As we scale generative AI training capacity to advance our foundation models, we’ll continue to build our infrastructure in a way that provides us with flexibility in how we use it over time. This will allow us to direct training capacity to gen AI inference or to our core ranking and recommendation work, when we expect that doing so would be more valuable,” she said during the earnings call.
