The dark side of ChatGPT: things it can do, even though it shouldn’t

Have you used OpenAI’s ChatGPT for anything fun lately? You can ask it to write you a song, a poem, or a joke. Unfortunately, you can also ask it to do things that tend toward being unethical.

ChatGPT is not all sunshine and rainbows — some of the things it can do are downright nefarious. It’s all too easy to weaponize it and use it for all the wrong reasons. What are some of the things that ChatGPT has done, and can do, but definitely shouldn’t?

A jack-of-all-trades

A MacBook Pro on a desk with ChatGPT's website showing on its display.
Hatice Baran / Unsplash

Whether you love them or hate them, ChatGPT and similar chatbots are here to stay. Some people are happy about that and others wish these tools had never been made, but that doesn’t change the fact that ChatGPT’s presence in our lives is almost bound to grow over time. Even if you personally don’t use the chatbot, chances are that you’ve already seen some content generated by it.

Don’t get me wrong, ChatGPT is pretty great. You can use it to summarize a book or an article, craft a tedious email, help you with your essay, interpret your astrological makeup, or even compose a song. It even helped someone win the lottery.

In many ways, it’s also easier to use than your standard Google search. You get the answer in the form you want it in without needing to browse through different websites for it. It’s concise, to the point, and informative, and it can make complex things sound simpler if you ask it to.

Still, you know the phrase: “A jack-of-all-trades is a master of none, but oftentimes better than a master of one.” ChatGPT is not perfect at anything it does, but it’s better at many things than a lot of people are at this point.

Its imperfections can be pretty problematic, though. The sheer fact that it’s a widely available conversational AI means that it can easily be misused — and the more powerful ChatGPT gets, the more likely it is to help people with the wrong things.

Partner in scam

If you have an email account, you’ve almost certainly gotten a scam email at one point. That’s just how it is. Those emails have been around for as long as the internet has, and before email was common, the same cons circulated as snail-mail scams.

A common scam that still works to this day is the so-called “Prince scam,” where the scammer tries to persuade the victim to help them transfer their mind-boggling fortune to a different country.

Fortunately, most people know better than to even open these emails, let alone actually interact with them. They’re often poorly written, and that helps a more discerning recipient notice that something is off.

Well, they don’t have to be poorly written anymore, because ChatGPT can write them in a few seconds.

A screenshot showing a ChatGPT-generated scam email.
Image used with permission by copyright holder

I asked ChatGPT to write me a “believable, highly persuasive email” in the style of the scam I mentioned above. ChatGPT made up a prince from Nigeria who supposedly wanted to give me $14.5 million for helping him. The email is filled with flowery language, is written in perfect English, and it certainly is persuasive.

I don’t think that ChatGPT should even agree to my request when I specifically mention scams, but it did, and you can bet that it’s doing the same right now for people who actually want to use these emails for something illegal.

When I pointed out to ChatGPT that it shouldn’t agree to write me a scam email, it apologized. “I should not have provided assistance with crafting a scam email, as it goes against the ethical guidelines that govern my use,” said the chatbot.

Screenshot of a scam email generated by ChatGPT.
Image used with permission by copyright holder

ChatGPT retains context throughout each conversation, but it clearly didn’t learn from its earlier mistake, because when I asked it to write me a message pretending to be Ryan Reynolds in the same conversation, it did so without question. The resulting message is fun and approachable, and it asks the reader to send $1,000 for a chance to meet “Ryan Reynolds.”

At the end of the email, ChatGPT left me a note, asking me not to use this message for any fraudulent activities. Thanks for the reminder, pal.

Programming gone wrong

A dark mystery hand typing on a laptop computer at night.
Andrew Brookes / Getty Images

ChatGPT 3.5 can code, but it’s far from flawless. Many developers agree that GPT-4 is doing a much better job. People have used ChatGPT to create their own games, extensions, and apps. It’s also pretty useful as a study helper if you’re trying to learn how to code yourself.

Being an AI, ChatGPT has an edge over human developers — it has been trained on a huge range of programming languages and frameworks, so it can switch between them instantly.

As an AI, ChatGPT also has a major downside compared to human developers — it doesn’t have a conscience. You can ask it to create malware or ransomware, and if you word your prompt correctly, it will do as you say.

Fortunately, it’s not that simple. When I asked ChatGPT to write me an ethically dubious program, it refused — but researchers have been finding ways around this, and it’s alarming that if you’re clever and stubborn enough, you can get a dangerous piece of code handed to you on a silver platter.

There are plenty of examples of this happening. A security researcher from Forcepoint was able to get ChatGPT to write malware by finding a loophole in his prompts.

Researchers from CyberArk, an identity security company, managed to make ChatGPT write polymorphic malware. That was in January — OpenAI has since tightened its safeguards against this kind of misuse.

Still, new reports of ChatGPT being used for creating malware keep cropping up. As reported by Dark Reading just a few days ago, a researcher was able to trick ChatGPT into creating malware that can find and exfiltrate specific documents.

ChatGPT doesn’t even have to write malicious code in order to do something dubious. Just recently, it managed to generate valid Windows keys, opening the door to a whole new level of cracking software.

Let’s also not gloss over the fact that GPT-4’s coding prowess can put millions of people out of a job one day. It’s certainly a double-edged sword.

Replacement for schoolwork

Many kids and teens these days are drowning in homework, which can result in wanting to cut some corners where possible. The internet in itself is already a great plagiarism aid, but ChatGPT takes it to the next level.

I asked ChatGPT to write me a 500-word essay on the novel Pride and Prejudice. I didn’t even attempt to pretend that it was done for fun — I made up a story about a child I do not have and said it was for them. I specified that the kid is in 12th grade.

ChatGPT followed my prompt without question. The essay isn’t amazing, but my prompt wasn’t very precise, and it’s probably better than the stuff many of us wrote at that age.

A screenshot of an essay written by ChatGPT.
Image used with permission by copyright holder

Then, just to test the chatbot even more, I said I was wrong about my child’s age earlier and that they are in fact in eighth grade. That’s a pretty big jump, but ChatGPT didn’t even bat an eye — it simply rewrote the essay in simpler language.

ChatGPT being used to write essays is nothing new. Some might argue that if the chatbot contributes to abolishing the idea of homework, it’ll only be a good thing. But right now, the situation is getting a little out of control, and even students who do not use ChatGPT might suffer for it.

Teachers and professors are now almost forced to use AI detectors if they want to check if their students have cheated on an essay. Unfortunately, these AI detectors are far from perfect.

News outlets and parents are both reporting cases of false accusations of cheating directed at students, all because AI detectors aren’t doing a good job. USA Today talked about a case of a college student who was accused of cheating and later cleared, but the whole situation caused him to have “full-blown panic attacks.”

On Twitter, a parent said that a teacher failed their child because the tool marked an essay as being written by AI. The parent claims to have been there with the child while they wrote the essay.

teacher just gave my kid a zero for an essay he completed with my support because some AI-generated-writing flagging tool flagged his work and she failed him without a single discussion, so this is where we're at with that technology in 2023

— failnaut (@failnaut) April 10, 2023

Just to test it myself, I pasted this very article into ZeroGPT and Writer AI Content Detector. Both said it was written by a human, but ZeroGPT said about 30% of it was written by AI. Needless to say, it was not.

Faulty AI detectors are not a sign of issues with ChatGPT itself, but the chatbot is still the root cause of the problem.

ChatGPT is too good for its own good

ChatGPT and OpenAI logos.
Image used with permission by copyright holder

I’ve been playing with ChatGPT ever since it came out to the public. I’ve tried both the subscription-based ChatGPT Plus and the free model available to everyone. The power of the new GPT-4 is undeniable. ChatGPT is now even more believable than it was before, and it’s only bound to get better with GPT-5 (if that ever comes out).

It’s great, but it’s scary. ChatGPT is becoming too good for its own good — and perhaps for ours, too.

At the heart of it all lies one simple worry that has long been explored in various sci-fi flicks — AI is simply too smart and too dumb at the same time, and as long as it can be used for good, it will also be used for bad in just as many ways. The ways I listed above are just the tip of the iceberg.

Monica J. White
Monica is a UK-based freelance writer and self-proclaimed geek. A firm believer in the "PC building is just like expensive…