
Do humans make computers smarter?

As machine learning makes computers smarter than us in some important ways, does adding a human to the mix make the overall system smarter? Does human plus machine always beat the machine by itself?

The question is easy when we think about using computers to do, say, long division. Would it really help to have a human hovering over the machine reminding it to carry the one? The issue is becoming less clear, and more important, as autonomous cars start to roam our streets.

Siri, you can drive my car

Many wary citizens assume that, for safety’s sake, an autonomous car ought to have a steering wheel and brakes that a human can use to override the car’s computer in an emergency. They assume – correctly, for now – that humans are better drivers: so far, autonomous cars have been in more accidents, though mainly minor ones caused by human-driven cars. But I’m willing to bet that as the percentage of driverless cars increases, and as they get smarter, the accident rate for cars without human overrides will be significantly lower than for cars with them.


After all, autonomous cars have a 360-degree view of their surroundings, while humans are lucky to have half that. Autonomous cars react at electronic speeds. Humans react at the speed of neurochemicals, contradictory impulses, and second thoughts. Humans often make decisions that preserve their own lives above all others, while autonomous cars, especially once they’re networked, can make decisions that minimize the sum total of bad consequences. (Maybe. Mercedes has announced that its autonomous cars will prioritize passengers over pedestrians.)

In short, why would we think that cars would be safer if we put a self-interested, fear-driven, lethargic, poorly informed animal in charge?

A game of Go

But take a case where reaction time doesn’t matter, and where machines have access to the same information as humans. For example, imagine a computer playing a game of Go against a human. Surely adding a highly-skilled player to the computer’s side — or, put anthropocentrically, providing a computer to assist a highly-skilled human — would only make the computer better.

Actually, no. AlphaGo, Google’s system that beat one of the world’s top-ranked human players, makes its moves based on its analysis of 30 million moves from 160,000 games, processed through multiple layers of artificial neural networks that implement a type of machine learning called deep learning.

AlphaGo’s analysis assigns weights to potential moves and calculates the one most likely to lead to victory. The network of weighted moves is so large and complex that a human being simply could not comprehend the data and their relations, or predict their outcome.
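The core idea – scoring candidate moves and playing the one with the highest estimated chance of victory – can be illustrated with a toy sketch. To be clear, this is not AlphaGo’s actual code, and the move names and scores below are entirely made up; in the real system those scores emerge from vast neural networks rather than a hand-written table.

```python
# Toy illustration only: pick the candidate move whose estimated
# win probability is highest. In AlphaGo, these probabilities come
# from deep neural networks trained on millions of moves; here they
# are invented for the sake of the example.

def best_move(move_scores):
    """Return the move with the highest estimated win probability."""
    return max(move_scores, key=move_scores.get)

# Hypothetical win-probability estimates for three candidate moves
scores = {"D4": 0.52, "Q16": 0.61, "K10": 0.58}
print(best_move(scores))  # prints "Q16"
```

The point of the sketch is the gap it hides: a human can read this three-entry table at a glance, but the network of weighted possibilities behind AlphaGo’s real scores is far too large for any person to inspect or explain.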


The process is far more complex than this, of course, and includes algorithms to winnow searches and to learn from successful projected behaviors. Another caveat: Recent news from MIT suggests we may be getting better at enabling neural nets to explain themselves.

Still, imagine that we gave AlphaGo a highly-ranked human partner and had that team play against an unassisted human. AlphaGo comes up with a move. Its human partner thinks it’s crazy. AlphaGo literally cannot explain why it disagrees, for the explanation is that vast network of weighted possibilities that surpasses the capacity of the human brain.

But maybe good old human intuition is better than the cold analysis of a machine. Maybe we should let the human’s judgment override the machine’s calculations.

Maybe, but nah. In the situation we’ve described, the machine wants to make one move, and the human wants to make another. Whose move is better? For any particular move, we can’t know, but we could set up some trials of AlphaGo playing with and without a human partner. We could then see which configuration wins more games.

The proof is in the results

But we don’t even need to do that to get our answer. When a human partner disagrees with AlphaGo’s recommendation, the human is in effect playing against AlphaGo: Each is coming up with its own moves. So far, evidence suggests that when humans do that, they usually lose to the computer.


Now, of course there are situations where humans plus machines are likely to do better than machines on their own, at least for the foreseeable future. A machine might get good at recommending which greeting card to send to a coworker, but the human will still need to make the judgment about whether the recommended card is too snarky, too informal, or overly saccharine. Likewise, we may like getting recommendations from Amazon about the next book to read, but we are going to continue to want to be given a selection, rather than having Amazon automatically purchase for us the book it predicts we’ll like most.

We are also a big cultural leap away from letting computers arrange our marriages, even though they may well be better at it than we are, since our 40 to 50 percent divorce rate is evidence that we suck at it.

In AI we trust

As we get used to the ability of deep learning to come to conclusions more reliable than the ones our human brains come up with, the fields we preserve for sovereign human judgment will narrow. After all, the computer may well know more about our coworker than we do, and thus will correctly steer us away from the card with the adorable cats because one of our coworker’s cats just died, or because, well, the neural network may not be able to tell us why. And if we find we always enjoy Amazon’s top recommendations, we might find it reasonable to stop looking at its second choices, much less at its explanation of its choices for us.

After all, we don’t ask our calculators to show us their work.

David Weinberger
Dr. Weinberger is a senior researcher at the Berkman Center. He has been a philosophy professor, journalist, strategic…