
xAI’s Grok-3 is free for a short time. I tried it, and I’m impressed

Grok-3 access option in the X mobile app.
Nadeem Sarwar / Digital Trends

xAI launched its Grok-3 AI chatbot just a few days ago, but locked it behind a $40-per-month paywall. Now, the company is offering free access to it, but only for a limited time. xAI chief Elon Musk says the free access will only be available for a “short time,” so it’s anyone’s guess how long that window will be.

For now, the only two features available to play around with are Think and DeepSearch. Think is the feature that adds reasoning capabilities to Grok-3 interactions, in the same vein as DeepThink on DeepSeek, Google’s Gemini 2.0 Flash Thinking Experimental, and OpenAI’s o-series models.


Thinking and reasoning models show their train of thought, laying out how they break down and eventually process a user’s query. The outcome, according to experts, is better performance at tasks such as science, coding, and mathematical problems.

For a short time, Grok 3 is available for free to all! https://t.co/r5iLXi2pBm

— Elon Musk (@elonmusk) February 20, 2025

DeepSearch, on the other hand, is xAI’s equivalent of the Deep Research tools now available to Perplexity, Gemini, and ChatGPT users. Grok-3 is only the second mainstream AI product out there offering free access to a compute-intensive process like DeepSearch or Deep Research.

This is one of the most promising agentic use cases of an AI model, as it takes the knowledge-gathering process to a whole new level. Once users submit their question, it is broken down and a research plan is presented, including details of which sources the answers should be drawn from.

The AI goes through all the relevant repositories of knowledge, reasons through the data compiled in real time, and presents it in the form of a comprehensive report.
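To make that flow concrete, here is a minimal, purely illustrative Python sketch of how an agentic deep-research loop of this kind is commonly structured: the query is decomposed into a plan, each step gathers sources, and the findings are reasoned over and assembled into a report. The function names, the stubbed search and reasoning helpers, and the plan structure are all hypothetical assumptions for illustration and are not taken from xAI’s actual DeepSearch implementation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    step: str
    sources: list[str]
    summary: str

def plan_research(question: str) -> list[str]:
    # Hypothetical: a real agent would ask the model itself to decompose the question.
    return [
        f"Define the key terms in: {question}",
        f"Collect recent studies relevant to: {question}",
        f"Compare findings and note disagreements about: {question}",
    ]

def search_web(step: str) -> list[str]:
    # Stub standing in for a real search/retrieval call.
    return [f"https://example.org/source/{abs(hash(step)) % 1000}"]

def reason_over(step: str, sources: list[str]) -> Finding:
    # Stub standing in for a model call that reads the sources and summarizes them.
    return Finding(step=step, sources=sources,
                   summary=f"Summary of the evidence gathered for: {step}")

def deep_research(question: str) -> str:
    # Plan -> search -> reason over each step -> assemble a report.
    findings = [reason_over(s, search_web(s)) for s in plan_research(question)]
    report = [f"Report: {question}"]
    for f in findings:
        report.append(f"{f.step}\n{f.summary}\nSources: {', '.join(f.sources)}")
    return "\n\n".join(report)

if __name__ == "__main__":
    print(deep_research("How does screen time affect young minds?"))
```

The key design point, as the article describes, is that the plan and per-step progress are surfaced to the user rather than hidden, which is what makes a tool like DeepSearch feel transparent.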

Grok-3 beta dashboard on mobile.
Nadeem Sarwar / Digital Trends

If you want to access Grok-3 on mobile, open the X app and tap the Grok icon in the bottom bar. On the web, it is accessible directly through the social media platform at x.com/i/grok.

There is substance to all the hype

In my brief time with DeepSearch, I have found it to be quite impressive. Unlike Gemini Deep Research, it doesn’t ask for approval of the research methodology and gets straight to work once you submit your question. Compared to Gemini, it’s also faster.

For a deep research query about the state of scientific research on the impact of screen time on young minds, Grok-3 delivered a report more quickly than Gemini. It is also more transparent: you can see the detailed thinking process that went into collecting and finding the answers.

Seeking answers from xAI’s Grok-3 AI model.
Nadeem Sarwar / Digital Trends

You can also see a stage-wise breakdown of the whole process, progressing in real time and saved as its own dataset above the actual answer. On the flip side, you cannot edit the research plan before the agentic search begins.

Gemini, on the other hand, takes a more opaque approach. You can edit the research plan, but you can’t see a process breakdown or the thinking-and-reasoning flow. As far as quality goes, Gemini referenced material from 37 websites and provided them all as footnoted citations, while Grok-3’s DeepSearch listed only six key citations, even though the answers it provided were no less useful.

When I tried the less intensive Think mode, Grok-3 once again proved to be the quicker of the two. I asked about the relevance of Microsoft’s new quantum computing chip and noticed two crucial differences.

Comparing responses generated by Gemini and Grok-3
Nadeem Sarwar / Digital Trends

Grok-3 took a more holistic approach to answering, focusing not only on the scientific applications and benefits, but also on the risks that come with it, in a dedicated section of its own. Another difference is that you can see the chain of reasoning at any given time.

Gemini 2.0 Flash Thinking Experimental is, once again, opaque, though its segment-wise breakdown was more comprehensive. Another crucial difference is that Gemini’s answers appear more restrained and academically inclined, while Grok-3 takes a more libertarian approach to explaining complex terms.

It’s hard to declare which AI model is superior, but according to benchmarks shared by xAI, Grok-3 has topped Google, OpenAI, DeepSeek, and Anthropic’s AI models on multiple evaluation charts.
