Don’t roll your eyes — AI isn’t just another doomed tech fad

Stop me if you’ve heard this one before: “This new technology will change everything!”

It’s a phrase regurgitated endlessly by analysts and tech executives with the current buzzword of the moment plugged in. And in 2023, that buzzword is AI. ChatGPT has taken the world by storm, Microsoft redesigned its Edge browser around an AI chatbot, and Google is rushing to integrate its AI model deeply into search.

I don’t blame you if you think AI is just another fad. I understand the skepticism (and frankly, the cynicism) around claiming any technology is some revolution when so many aren’t. But where augmented reality, the metaverse, and NFTs have faded into relative obscurity, AI isn’t going anywhere — for better and worse.

This isn’t new


Let’s be clear here: AI impacting everyday life isn’t new; tech companies are just finally bragging about it. It has been powering things you use behind the scenes for years.

For instance, anyone who’s interacted with Google search (read: everyone) has experienced a dozen or more AI models at play with only a single query. In 2020, Google introduced an update that leveraged AI to correct spelling, identify critical passages in articles, and generate highlights from YouTube videos.

It’s not just Google, either. Netflix and Amazon use AI to generate watching and shopping recommendations. Dozens of AI support chat programs power customer service from Target to your regional internet provider. Navigation programs like Google Maps use AI to identify roadblocks, speed traps, and traffic congestion.


Those are just a few high-level examples. Most things that could previously be done with a static algorithm — if ‘this,’ then ‘that’ — can be done now with AI, and almost always with better results. AI is even designing the chips that power most electronics today (and doing a better job than human designers).

Companies like Google and Microsoft are simply pulling back the curtain on the AI that’s been powering their services for several years. That’s the critical difference between AI and the endless barrage of tech fads we see every year.

Better over time


AI’s staying power hinges on the fact that we’re all already using it, but there’s another important element here. AI doesn’t require an investment from you. It absolutely requires a ton of money and power, but that burden rests on the dozens of companies caught up in the AI arms race, not on the end user.

It’s a fundamental difference. Metaverse hype tells you that you need to buy an expensive headset like the Meta Quest Pro to participate, and NFTs want you to cough up cold cash for code. AI just asks whether you want the tasks you’re already performing to be easier and more effective. That’s a hell of a lot different.

AI doesn’t have the growing pains of this emerging (soon-to-be-dead) tech, either. It has problems of its own, which I’ll dig into next, but the basis of generative AI has already been refined to a point that it’s ready for primetime. You don’t have to hassle with expensive, half-baked tech that doesn’t have many practical applications.

It also holds a promise. AI models like the ones now powering search engines and web browsers use reinforcement learning. They’ll get things wrong, but every one of those missteps is fed back into a feedback loop that improves the AI over time. Again, I understand the skepticism around believing that AI will magically get better, but I trust that logic much more than I trust a tech CEO telling me a buzzword is going to change the world.

A warning sign


Don’t get it twisted; this is not a resounding endorsement of AI. For all the positives it can bring, AI also brings some sobering realities.

First and most obviously: AI is wrong a lot of the time. Google’s first demo of its Bard AI showed an answer that was disproven by the first search result. Microsoft’s ChatGPT-powered Bing has also proven that complex, technical questions often throw the AI off, resulting in a copy-paste job from whatever website is the first result in the search engine.

That seems tame enough, but a constantly learning machine can perpetuate problems we already have online — and fail to recognize that those problems are problems. For instance, graphics card and processor brand AMD recently announced in an earnings call that it was “undershipping” chips, which led many outlets to initially report that the company was price fixing. That isn’t the case. The term simply refers to the number of products AMD is shipping to retailers, and it signals that demand is lower. Will an AI understand that context? Or will it run with the same misunderstanding that usually trusted sources are already erroneously repeating?

It’s not hard to see a negative feedback loop of misinformation forming around these complex topics, nor how these AIs can learn to reinforce negative stereotypes. Research from Johns Hopkins shows the often racist and sexist bias present in AI models, and as the study reads: “Stereotypes, bias, and discrimination have been extensively documented in machine learning methods.”


Safeguards are in place to protect against this type of bias, but you can still skirt these guardrails and reveal what the AI believes underneath. I won’t link to the examples to avoid perpetuating these stereotypes, but Steven Piantadosi, a professor and researcher of cognitive computer science at UC Berkeley, revealed half a dozen prompts that would produce racist, sexist responses from ChatGPT just a couple of months ago — and none of them were particularly hard to come up with.

It’s true that AI can be reined in on these fronts, but it hasn’t been yet. Meanwhile, Google and Microsoft are caught up in an arms race to debut their rival AIs first, all carrying the same underpinnings that have been present in AI models for years. Even with protections, it’s a matter of when, not if, these models will deteriorate into the same rotten core we’ve seen in AIs since their inception.

I’m not saying this bias is intentional, and I’m confident Microsoft and Google are working to remove as much of it as possible. But the momentum behind AI right now pushes these concerns into the background and ignores the implications they could have. After all, the AI revolution is upon us, and it won’t quickly fade into obscurity like another tech fad. My only hope is that the never-ending need for competition isn’t enough to uproot the necessity for responsibility.

Jacob Roach
Lead Reporter, PC Hardware