
What is Grok? Elon Musk’s controversial ChatGPT competitor explained

Grok! It might not roll off the tongue like ChatGPT or Windows Copilot, but it's a large language model chatbot all the same. Developed by xAI, the AI startup Elon Musk founded after his purchase of X (formerly known as Twitter), Grok is designed to compete directly with OpenAI's GPT-4 models, Google's Bard, and a range of other public-facing chatbots.

Launched in November 2023, Grok is designed to be a chatbot with less of a filter than other AIs. It's said to answer questions with "a bit of wit" and to have "a rebellious streak."

It’s only for X Premium users

Grok AI sign up page.
Jon Martindale / DigitalTrends

Want to play around with Grok yourself? You can, if you're an X Premium+ subscriber and reside in the U.S. (or use a VPN). Even so, there's a waiting list to try out the prototype, so there's no guarantee you'll get in.

While you might expect it to open up to a wider audience when it’s out of beta, Musk has suggested that Grok will remain a feature exclusive to Premium+ X subscribers for the foreseeable future.

It’s a chatbot

Just weeks after discussing how dangerous he thought the recent advances in AI were, Musk launched Grok in beta form. Though he claimed it would be more open in its communication style, with less of a topic filter, it's ultimately a chatbot like any of the others.

That's because it was built in much the same way. It's a large language model AI, which means it was built using lots of training data, some from X, some from the web at large. We're told it's running on the Grok-1 large language model (LLM) with some 33 billion parameters. That would put it behind Meta's LLaMA 2 and OpenAI's GPT-4, which reportedly have 70 billion and 1.76 trillion parameters, respectively. It was also trained for only a couple of months, whereas other models have taken years to put together.

That may be why Grok appears to be just as susceptible (if not more so) to hallucinations as other AIs like ChatGPT. It often makes up facts or cites nonexistent sources when asked for factual responses.

It uses data from X

One of the big selling points of Grok is that it draws on live data from X. Where other language models might use Google or Bing to find up-to-date information to augment their answers to user prompts, Grok can base its responses on up-to-date information from X. That gives it access to information other chatbots don't have, but it also makes it potentially more susceptible to misinformation, given how widely misinformation has spread on X since Musk's takeover.

It also means Grok has access to Musk's own X account, which hasn't stopped it from disagreeing with him from time to time.

It’s supposed to be anti-woke, but it isn’t

A major reason for Musk's takeover of X and the launch of Grok has been his attempt to fight back against what he considers "wokeness." Grok was supposed to be an AI that eschewed societal politeness in favor of a particular brand of humor, as well as a propensity to lean right in its political biases. However, that hasn't turned out to be the case. Indeed, many of Musk and Grok's initial fans are now complaining that the AI has been captured by "woke programmers."

Has Grok been captured by woke programmers?

I am extremely concerned here. 😮

— Wall Street Silver (@WallStreetSilv) December 8, 2023

In reality, Grok was trained on much the same data as every other AI: the humans who interact with each other online. That made it likely to gravitate toward sounding like the other AIs out there.

In its default "fun" mode, most researchers found that Grok would respond with an overtly neutral answer when asked about topics seen as controversial by right-wing commenters, such as gun control, climate change, or debunked conspiracy theories. Switch from "fun" mode to its regular mode, however, and it will provide even more measured, reasoned responses.

It’s borrowing from ChatGPT

Grok has at times been found to respond as if it were ChatGPT. That's not because it was built on the same technology as ChatGPT, but because it was trained on web data gathered since ChatGPT's release. That led it to hoover up AI-written content, including, in one instance, an OpenAI privacy policy, which it proceeded to feed back to its users as if it had written it itself.

This is a problem that has been cited by a number of experts and may worsen with all LLM AIs if it isn’t addressed. In a world where AIs can and do create mountains of recycled content every day, AI developers need a way to differentiate between AI and human content so that they don’t just train their future models on AI-generated content.

The name comes from a Heinlein novel

If you're wondering where the bizarre name came from, "Grok" is a neologism coined in Robert Heinlein's novel Stranger in a Strange Land. The original meaning was to "understand intuitively or by empathy, to establish rapport with." That sounds like a lofty goal for an AI, but the name is likely employed here in the sense it took on within the programming community in the decades after Heinlein's work: to understand something so fully that it becomes part of your identity.

Grok is supposed to understand us so well it is us. We don’t think it’s quite there yet.
