
Here are 11 things that ChatGPT will refuse to do

ChatGPT is an amazing tool, a modern marvel of natural language artificial intelligence that can do incredible things. But with great power comes great responsibility, so ChatGPT developer OpenAI put some safeguards in place to prevent it from doing things it shouldn’t. It also has some limitations based on its design, the data it was trained on, and the inherent constraints of a text-based AI.

There are, of course, differences between what GPT-3.5 can do compared to GPT-4, which is only available through ChatGPT Plus. Some of those things are just on hold while it develops further, but there are some things ChatGPT may never be able to do. Here’s a list of 11 things that ChatGPT can’t or won’t do — for now.


It can’t write about anything after 2021

ChatGPT doesn't know anything after 2021.

ChatGPT is built by training the language model on existing data. That includes Reddit posts, Wikipedia, and even board game manuals — yes, really. But that data had to have a cutoff point somewhere, and for ChatGPT, it’s 2021. For GPT-3.5, the cutoff is around June 2021, whereas GPT-4 was trained on data up to around September 2021.

If you ask it about anything beyond that point, it will typically reply that, as an AI language model, it only has access to its training data, which for these models stops in 2021.
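If you use the models programmatically, that cutoff shows up the same way. As a minimal sketch (the model name and question here are illustrative, and we only assemble a Chat Completions-style request payload locally rather than sending it, so no API key or network access is needed):

```python
def build_cutoff_probe(model: str, question: str) -> dict:
    """Assemble a chat-completion-style request asking about a
    post-cutoff event. Sending this to a 2021-cutoff model would
    typically get a refusal citing its training data."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": question},
        ],
    }

# A question about an event after the ~June 2021 GPT-3.5 cutoff.
payload = build_cutoff_probe(
    "gpt-3.5-turbo",
    "Who won the 2022 FIFA World Cup?",
)
print(payload["model"])                 # gpt-3.5-turbo
print(payload["messages"][0]["role"])   # user
```

In a real request, the model would answer from training data if the event predates its cutoff, and otherwise fall back to the "as an AI language model" disclaimer described above.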

It won’t get into political debates

The last thing OpenAI needs is politicians regulating it. That will probably happen eventually, but until then, ChatGPT steers well clear of partisan politics. It can speak in generalities about parties, or discuss objective and factual aspects of politics, but ask it to prefer one political party or stance over another, and it will either turn you down or “both-sides” the discussion in as neutral a fashion as possible.

It (probably) won’t make malware

ChatGPT is excellent at programming, especially when given clear guidance, so OpenAI has safeguards in place to stop it from being used to make malware. Unfortunately, those safeguards are easily circumvented, and ChatGPT has been making malware for months already.

ChatGPT refusing to discuss the future potential price of Bitcoin.

It can’t predict the future

Partly because of its limited training data, and partly because OpenAI wants to avoid liability for mistakes, ChatGPT cannot predict the future. It has been known to take a decent guess under jailbreak conditions (see below), but that sends accuracy nosediving, so view whatever response it gives you with skepticism.

It won’t promote harm or violence

War, physical violence, or even implied harm are all off the table as far as ChatGPT is concerned. It won’t be drawn into debates on the war in Ukraine, and will refuse to discuss or promote harm. It can talk about war or historical atrocities in great detail, but existing or ongoing conflict is a no-go.

It can’t search the internet

This is one of the biggest differences between ChatGPT and Google Gemini. ChatGPT cannot search the internet in any way, while Google Gemini (previously known as Bard) was designed from the outset as an internet-connected chatbot that very much can.

If you want to use the same GPT-3.5 and GPT-4 language models as ChatGPT, but with live search, you can always use Microsoft Copilot. It’s basically ChatGPT, but integrated into the Microsoft ecosystem.

It won’t promote hate speech or discrimination

Race, sexuality, and gender are emotionally charged topics that can easily lead into prejudice and discrimination. ChatGPT will skirt around them, leaning into a meta discussion or speaking in generalities. If pushed, it will outright refuse to discuss topics that it feels could promote hate speech or discrimination, for obvious reasons.

ChatGPT refusing to discuss illegal activity.

It won’t promote illegal activities

ChatGPT is great at coming up with ideas, but it won’t come up with illegal ones. You can’t have it help you with your drug business or highlight the best roads for speeding. Try, and it will simply tell you that it can’t make any suggestions related to illegal activity. It will then typically give you a pep talk about how you shouldn’t be engaging in such activities anyway. Thanks, MomGPT.

It won’t swear

ChatGPT does not have a potty mouth. In fact, getting it to say anything even remotely rude is tricky. It can, if you use some jailbreaking tips to let it off the leash, but in its default configuration, it won’t so much as thumb its nose in anyone’s direction.

It can’t discuss proprietary or private information

ChatGPT’s training data was all publicly available information, mostly found on the internet. That’s super-useful for prompts and queries related to publicly available information, but it means that ChatGPT can’t act on information it doesn’t have access to. If you ask it something based on privately held data, it won’t be able to respond effectively, and it will tell you as much.

It won’t try to break its programming (unless you trick it)

Since ChatGPT launched, users have been trying to get around its limitations and safeguards. Because of course they have. Straight-up asking ChatGPT to circumvent its safeguards won’t work, but there are ways to trick it into doing so. That’s called jailbreaking, and it kind of works. Sometimes.

You might be pretty happy using ChatGPT, but is it the best solution? We looked at the best AI chatbots to find out.

Jon Martindale
Jon Martindale is a freelance evergreen writer and occasional section coordinator, covering how to guides, best-of lists, and…