
Meta’s next AI model to require nearly 10 times the power to train

Mark Zuckerberg discussing the Quest 3 and Vision Pro. (Meta)

Facebook parent company Meta will continue to invest heavily in its artificial intelligence research efforts, despite expecting the nascent technology to require years of work before becoming profitable, executives explained on the company’s Q2 earnings call Wednesday.

Meta is “planning for the compute clusters and data we’ll need for the next several years,” CEO Mark Zuckerberg said on the call. Meta will need an “amount of compute… almost 10 times more than what we used to train Llama 3,” he said, adding that Llama 4 will “be the most advanced [model] in the industry next year.” For reference, the Llama 3 model was trained on a cluster of 16,384 Nvidia H100 80GB GPUs.
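To put that figure in rough perspective, here is a back-of-envelope sketch. It assumes the extra compute would come entirely from a larger H100-class cluster running for a comparable amount of time, which is a simplifying assumption on our part; in practice, newer accelerators or longer training runs could supply part of that increase.

```python
# Back-of-envelope estimate of what "almost 10 times" the Llama 3 compute
# could mean if delivered purely by a larger H100-class cluster running for
# a similar length of time (an assumption; newer chips or longer runs could
# provide the same compute with fewer GPUs).

llama3_cluster_gpus = 16_384   # H100 80GB GPUs cited for Llama 3 training
compute_multiplier = 10        # "almost 10 times more" compute for Llama 4

equivalent_gpus = llama3_cluster_gpus * compute_multiplier
print(f"Roughly {equivalent_gpus:,} H100-equivalent GPUs")  # ~163,840
```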


The company is no stranger to writing checks for aspirational research and development projects. Meta’s Q2 financials show the company expects to spend $37 billion to $40 billion on capital expenditures in 2024, and executives expect a “significant” increase in that spending next year. “It’s hard to predict how this will trend multiple generations out into the future,” Zuckerberg remarked. “But at this point, I’d rather risk building capacity before it is needed rather than too late, given the long lead times for spinning up new infra projects.”

And it’s not like Meta doesn’t have the money to burn. With an estimated 3.27 billion people using at least one Meta app daily, the company made just over $39 billion in revenue in Q2, a 22% increase from the previous year. Out of that, the company earned around $13.5 billion in profit, a 73% year-over-year increase.

But just because Meta is making a profit doesn’t mean its AI efforts are profitable. CFO Susan Li conceded that Meta’s generative AI will not generate revenue this year, and reiterated that revenue from those investments will “come in over a longer period of time.” Still, the company is “continuing to build our AI infrastructure with fungibility in mind, so that we can flex capacity where we think it will be put to best use.”

Li also noted that the existing training clusters can be easily reworked to perform inference tasks, which are expected to constitute a majority of compute demand as the technology matures and more people begin using these models on a daily basis.

“As we scale generative AI training capacity to advance our foundation models, we’ll continue to build our infrastructure in a way that provides us with flexibility in how we use it over time. This will allow us to direct training capacity to gen AI inference or to our core ranking and recommendation work, when we expect that doing so would be more valuable,” she said during the earnings call.

Andrew Tarantola, Former Computing Writer
Ray-Ban Meta AI glasses go high fashion with Coperni limited edition

Meta delivered an unexpected runaway success with its Ray-Ban Stories smart glasses, and now it is headed to the runway for its latest take. At Paris Fashion Week, the company lifted the covers off the Ray-Ban Meta x Coperni Limited Edition Glasses.

Revealed as part of Coperni’s Fall Winter 25 collection, the glasses mark the company’s “first-ever fashion-branded collaboration.” The collaboration product borrows Ray-Ban’s iconic Wayfarer look and gives it a translucent twist atop a black-grey frame.

Meta rolls out its AI chatbot to nearly a dozen Middle Eastern nations

Millions of Facebook, Instagram, WhatsApp, and Messenger users throughout the Middle East now enjoy access to Meta's self-named AI chatbot platform, the company announced on Monday. The chatbot is rolling out to users in Algeria, Egypt, Iraq, Jordan, Libya, Morocco, Saudi Arabia, Tunisia, United Arab Emirates, and Yemen.

"AI just got even more accessible than ever before, as we officially launched Meta AI in the Middle East and North Africa with Arabic capabilities," Meta wrote in its announcement blog post. At launch, these users will have access to only some of Meta AI's generative capabilities -- specifically, text and image generation, as well as image animation. The company plans to expand those offerings to include simultaneous dubbing for Reels, AI image editing, and the "Imagine Me" feature (which generates a user's portrait based on uploaded reference photos) in the near future.

xAI’s Grok-3 is free for a short time. I tried it, and I’m impressed

xAI launched its Grok-3 AI chatbot just a few days ago, but locked it behind a $40-per-month paywall. Now, the company is offering free access to it, but only for a limited time. xAI chief Elon Musk says the free access will only be available for a “short time,” so it’s anyone’s guess how long that window will last.

For now, the only two features available to play around with are Think and DeepSearch. Think is the feature that adds reasoning capabilities to Grok-3 interactions, in the same vein as DeepThink on DeepSeek, Google’s Gemini 2.0 Flash Thinking Experimental, and OpenAI’s o-series models.
