Don’t roll your eyes — AI isn’t just another doomed tech fad

Stop me if you’ve heard this one before: “This new technology will change everything!”

It’s a phrase regurgitated endlessly by analysts and tech executives with the current buzzword of the moment plugged in. And in 2023, that buzzword is AI. ChatGPT has taken the world by storm, Microsoft redesigned its Edge browser around an AI chatbot, and Google is rushing to integrate its AI model deeply into search.

I don’t blame you if you think AI is just another fad. I understand the skepticism (and frankly, the cynicism) around claiming any technology is some revolution when so many aren’t. But where augmented reality, the metaverse, and NFTs have faded into relative obscurity, AI isn’t going anywhere — for better and worse.

This isn’t new


Let’s be clear here: AI impacting everyday life isn’t new; tech companies are just finally bragging about it. It has been powering things you use behind the scenes for years.

For instance, anyone who’s interacted with Google search (read: everyone) has experienced a dozen or more AI models at play with only a single query. In 2020, Google introduced an update that leveraged AI to correct spelling, identify critical passages in articles, and generate highlights from YouTube videos.

It’s not just Google, either. Netflix and Amazon use AI to generate watching and shopping recommendations. Dozens of AI support chat programs power customer service from Target to your regional internet provider. Navigation programs like Google Maps use AI to identify roadblocks, speed traps, and traffic congestion.


Those are just a few high-level examples. Most things that could previously be done with a static algorithm — if ‘this,’ then ‘that’ — can be done now with AI, and almost always with better results. AI is even designing the chips that power most electronics today (and doing a better job than human designers).
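To make that "if 'this,' then 'that'" contrast concrete, here's a toy sketch (all names and data invented for illustration): a hardcoded rule can only match what its author anticipated, while even a trivially "trained" model adapts its threshold to labeled examples.

```python
# Hypothetical illustration: a static "if this, then that" rule versus a
# model that learns a threshold from examples. Names and data are invented.

def static_rule(message: str) -> bool:
    # Fixed logic: flag a message only if it contains one hardcoded phrase.
    return "free money" in message.lower()

def train_threshold(examples):
    # "Learn" a cutoff on the count of suspicious words that separates the
    # labeled spam examples from the labeled non-spam examples.
    suspicious = {"free", "money", "winner", "urgent"}
    def score(msg):
        return sum(word in suspicious for word in msg.lower().split())
    spam_scores = [score(m) for m, label in examples if label]
    ham_scores = [score(m) for m, label in examples if not label]
    return (min(spam_scores) + max(ham_scores)) / 2, score

examples = [
    ("free money winner", True),
    ("urgent free offer money", True),
    ("lunch at noon", False),
    ("project update attached", False),
]
threshold, score = train_threshold(examples)

print(static_rule("FREE MONEY inside"))             # True: exact keyword match
print(static_rule("urgent winner act now"))         # False: the rule can't adapt
print(score("urgent winner act now") > threshold)   # True: the learned cutoff catches it
```

The point isn't the (deliberately simplistic) spam filter; it's that the second approach gets its behavior from data rather than from a programmer enumerating every case, which is why it tends to produce better results.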

Companies like Google and Microsoft are simply pulling back the curtain on the AI that’s been powering their services for several years. That’s the critical difference between AI and the endless barrage of tech fads we see every year.

Better over time


AI’s staying power hinges on the fact that we’re all already using it, but there’s another important element here. AI doesn’t require an investment from you. It absolutely requires a ton of money and power, but that burden rests on the dozens of companies caught up in the AI arms race, not on the end user.

It’s a fundamental difference. Metaverse hype tells you that you need to buy an expensive headset like the Meta Quest Pro to participate, and NFTs want you to cough up cold cash for code. AI just asks whether you want the tasks you’re already performing to be easier and more effective. That’s a hell of a lot different.

AI doesn’t have the growing pains of this emerging (soon-to-be-dead) tech, either. It has problems of its own, which I’ll dig into next, but the basis of generative AI has already been refined to a point that it’s ready for primetime. You don’t have to hassle with expensive, half-baked tech that doesn’t have many practical applications.

It also holds a promise. AI models like the ones now powering search engines and web browsers use reinforcement learning. They’ll get things wrong, but every one of those missteps is fed back into a feedback loop that improves the AI as time goes on. Again, I understand the skepticism around believing that AI will magically get better, but I trust that logic much more than I trust a tech CEO telling me a buzzword is going to change the world.
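That feedback loop can be sketched in a few lines. This is a deliberately toy model (the scores, answers, and update rule are all invented, not how any real system works): the system starts out preferring a bad answer, users rate it unhelpful, and those ratings steadily steer it toward the better one.

```python
# Toy sketch of learning from user feedback. Everything here is invented
# for illustration; real reinforcement learning is far more involved.

# Learned preference scores; the "bad answer" starts out favored.
answers = {"bad answer": 1.0, "good answer": 0.0}

def pick():
    # Serve the answer with the highest learned score.
    return max(answers, key=answers.get)

def feedback(answer, helpful):
    # Reinforce answers users rate as helpful; penalize the rest.
    answers[answer] += 1.0 if helpful else -1.0

# Simulate the loop: users consistently rate only "good answer" as helpful.
for _ in range(10):
    choice = pick()
    feedback(choice, helpful=(choice == "good answer"))

print(pick())  # after enough feedback, the system settles on "good answer"
```

Every misstep (serving the bad answer) lowers that answer's score, so the mistakes themselves are what drive the improvement over time.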

A warning sign


Don’t get it twisted; this is not a resounding endorsement of AI. For all the positives it can bring, AI also brings some sobering realities.

First and most obviously: AI is wrong a lot of the time. Google’s first demo of its Bard AI showed an answer that was disproven by the first search result. Microsoft’s ChatGPT-powered Bing has also proven that complex, technical questions often throw the AI off, resulting in a copy-paste job from whatever website is the first result in the search engine.

That seems tame enough, but a constantly learning machine can perpetuate problems we already have online — and develop an understanding that those problems aren’t valid. For instance, graphics card and processor brand AMD recently announced in an earnings call that it was “undershipping” chips, which led many outlets to initially report the company was price fixing. That isn’t the case. The term simply refers to the number of products AMD is shipping to retailers and signifies that demand is lower. Will an AI understand that context? Or will it run with the same misunderstanding that usually trusted sources are already erroneously repeating?

It’s not hard to see a negative feedback loop of misinformation around these complex topics, nor how these AIs can learn to reinforce negative stereotypes. Studies from Johns Hopkins show the often racist and sexist bias present in AI models, and as the study reads: “Stereotypes, bias, and discrimination have been extensively documented in machine learning methods.”


Safeguards are in place to protect against this type of bias, but you can still skirt these guardrails and reveal what the AI believes underneath. I won’t link to the examples to avoid perpetuating these stereotypes, but Steven Piantadosi, a professor and researcher of cognitive computer science at UC Berkeley, revealed half a dozen inputs that would produce racist, sexist responses within ChatGPT just a couple of months ago — and none of them were particularly hard to come up with.

It’s true that AI can be prodded into submission on these fronts, but it hasn’t been yet. Meanwhile, Google and Microsoft are caught up in an arms race to debut their rival AIs first, all carrying the same underpinnings that have been present in AI models for years. Even with protections, it’s a matter of when, not if, these models will deteriorate into the same rotten core that we’ve seen from AIs since their inception.

I’m not saying this bias is intentional, and I’m confident Microsoft and Google are working to remove as much of it as possible. But the momentum behind AI right now pushes these concerns into the background and ignores the implications they could have. After all, the AI revolution is upon us, and it won’t quickly fade into obscurity like another tech fad. My only hope is that the never-ending need for competition isn’t enough to uproot the necessity for responsibility.

Jacob Roach
Senior Staff Writer, Computing