Musk promises to deliver ‘the world’s most powerful AI’ by the end of this year

Tesla CEO and Twitter/X owner Elon Musk announced Monday that his AI startup, xAI, has officially begun training on its Memphis supercomputer, which he describes as “the most powerful AI training cluster in the world.”

Once fully operational, Musk plans to use it to build the “world’s most powerful AI by every metric by December of this year,” which presumably will be Grok 3.

Nice work by @xAI team, @X team, @Nvidia & supporting companies getting Memphis Supercluster training started at ~4:20am local time.

With 100k liquid-cooled H100s on a single RDMA fabric, it’s the most powerful AI training cluster in the world!

— Elon Musk (@elonmusk) July 22, 2024

xAI’s “Gigafactory of Compute,” where the supercomputer is housed, occupies a former Electrolux production facility in Memphis, Tennessee, and was announced just last month. Per Musk, the training cluster will use 100,000 of Nvidia’s H100 GPUs, which are based on the Hopper microarchitecture, in a network roughly four times larger than today’s state-of-the-art clusters. Those include the 60,000-Intel-GPU Aurora at Argonne National Laboratory, the roughly 38,000-AMD-GPU Frontier at Oak Ridge, and Microsoft’s Eagle, which runs 14,400 Nvidia H100 GPUs.

Opening this training facility constitutes the largest capital investment by a new-to-market company in Memphis’ history, according to Greater Memphis Chamber president and CEO Ted Townsend. The supercomputer will be used “to fuel and fund the AI space for all of his [Musk’s] companies first, obviously with Tesla and SpaceX,” he said. “If you can imagine the computational power necessary to place humans on the surface of Mars, that is going to happen here in Memphis.”

However, despite the multibillion-dollar investment by xAI, the facility is only expected to generate a few hundred local jobs. What’s more, the “[Tennessee Valley Authority] does not have a contract in place with xAI,” per a report from WREG.

The authority says it is “working with xAI and our partners at [Memphis Light, Gas and Water] on the details of the proposal and electricity demand needs.” The TVA also noted that any project over 100 megawatts (MW) needs its approval to connect to the state’s power grid. MLGW president Doug McGowen estimates that Musk’s facility could draw up to 150MW at peak usage.

Andrew Tarantola
Former Computing Writer
Andrew Tarantola is a journalist with more than a decade reporting on emerging technologies ranging from robotics and machine…