
Do humans make computers smarter?

As machine learning makes computers smarter than us in some important ways, does adding a human to the mix make the overall system smarter? Does human plus machine always beat the machine by itself?

The question is easy when we think about using computers to do, say, long division. Would it really help to have a human hovering over the machine reminding it to carry the one? The issue is becoming less clear, and more important, as autonomous cars start to roam our streets.


Siri, you can drive my car

Many wary citizens assume that, for safety's sake, an autonomous car ought to have a steering wheel and brakes that a human can use to override the car's computer in an emergency. They assume, correctly for now, that humans are better drivers: so far, autonomous cars have had more accidents, though mostly minor ones caused by human-driven cars. But I'm willing to bet that as the percentage of driverless cars increases, and as they get smarter, the accident rate for cars without human overrides will be significantly lower than for cars with them.


After all, autonomous cars have a 360-degree view of their surroundings, while humans are lucky to have half that. Autonomous cars react at the speed of light. Humans react at the speed of neuro-chemicals, contradictory impulses, and second thoughts. Humans often make decisions that preserve their own lives above all others, while autonomous cars, especially once they're networked, can make decisions that minimize the sum total of bad consequences. (Maybe. Mercedes has announced that its autonomous cars will save passengers over pedestrians.)

In short, why would we think that cars would be safer if we put a self-interested, fear-driven, lethargic, poorly informed animal in charge?

A game of Go

But take a case where reaction time doesn't matter, and where machines have access to the same information as humans. For example, imagine a computer playing a game of Go against a human. Surely adding a highly skilled player to the computer's side — or, put anthropocentrically, providing a computer to assist a highly skilled human — would only make the computer better.

Actually not. AlphaGo, Google’s system that beat the third-ranked human player, makes its moves based on its analysis of 30 million moves in 160,000 games, processed through multiple levels of artificial neural networks that implement a type of machine learning called deep learning.

AlphaGo’s analysis assigns weights to potential moves and calculates the one most likely to lead to victory. The network of weighted moves is so large and complex that a human being simply could not comprehend the data and their relations, or predict their outcome.
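Stripped of the neural networks and tree search, the selection step the article describes reduces to "pick the candidate with the highest estimated win probability." Here's a minimal sketch in Python; the move names and weights are made-up placeholders, not real AlphaGo data or its actual API.

```python
# Minimal sketch of the final selection step the article describes:
# among candidate moves with learned weights (estimated win probabilities),
# play the one most likely to lead to victory. Real AlphaGo combines
# policy/value networks with Monte Carlo tree search; this is only the
# last-mile argmax over already-computed weights.

def best_move(weighted_moves):
    """Return the move with the highest estimated win probability."""
    return max(weighted_moves, key=weighted_moves.get)

# Hypothetical candidate moves and weights, purely for illustration.
candidates = {"D4": 0.61, "Q16": 0.58, "C3": 0.44}
print(best_move(candidates))
```

The point the article makes is that the interesting part isn't this one-line argmax, it's the vast network of weights behind it, which no human can inspect or explain.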

AlphaGo (Photo: Google)

The process is far more complex than this, of course, and includes algorithms to winnow searches and to learn from successful projected behaviors. Another caveat: Recent news from MIT suggests we may be getting better at enabling neural nets to explain themselves.

Still, imagine that we gave AlphaGo a highly ranked human partner and had that team play against an unassisted human. AlphaGo comes up with a move. Its human partner thinks it's crazy. AlphaGo literally cannot explain why it disagrees, for the explanation is that vast network of weighted possibilities that surpasses the capacity of the human brain.

But maybe good old human intuition is better than the cold analysis of a machine. Maybe we should let the human’s judgment override the machine’s calculations.

Maybe, but nah. In the situation we’ve described, the machine wants to make one move, and the human wants to make another. Whose move is better? For any particular move, we can’t know, but we could set up some trials of AlphaGo playing with and without a human partner. We could then see which configuration wins more games.
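The experiment the article proposes is just a comparison of win rates across two configurations. Here's a toy simulation of that setup; the per-game win probabilities are invented placeholders (there is no real data behind them), but it shows how you'd tally the trials and compare.

```python
import random

# Toy sketch of the proposed experiment: play many games in each
# configuration (machine alone vs. machine with a human who can override),
# then compare win rates. The probabilities below are assumptions made up
# for illustration, not measured AlphaGo results.

def win_rate(p_win, n_games, rng):
    """Simulate n_games Bernoulli trials and return the observed win rate."""
    wins = sum(rng.random() < p_win for _ in range(n_games))
    return wins / n_games

rng = random.Random(0)          # fixed seed for reproducibility
solo = win_rate(0.75, 1000, rng)  # assumed: machine plays its own moves
team = win_rate(0.68, 1000, rng)  # assumed: human sometimes overrides
print(f"machine alone: {solo:.3f}, machine + human: {team:.3f}")
```

With enough trials, whichever configuration wins more games answers the question empirically, without anyone needing to understand why any particular move was better.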

The proof is in the results

But we don’t even need to do that to get our answer. When a human partner disagrees with AlphaGo’s recommendation, the human is in effect playing against AlphaGo: Each is coming up with its own moves. So far, evidence suggests that when humans do that, they usually lose to the computer.


Now, of course there are situations where humans plus machines are likely to do better than machines on their own, at least for the foreseeable future. A machine might get good at recommending which greeting card to send to a coworker, but the human will still need to make the judgment about whether the recommended card is too snarky, too informal, or overly saccharine. Likewise, we may like getting recommendations from Amazon about the next book to read, but we are going to continue to want to be given a selection, rather than having Amazon automatically purchase for us the book it predicts we’ll like most.

We are also a big cultural leap away from letting computers arrange our marriages, even though they may well be better at it than we are, since our 40 to 50 percent divorce rate is evidence that we suck at it.

In AI we trust

As we get used to the ability of deep learning to come to conclusions more reliable than the ones our human brains come up with, the fields we preserve for sovereign human judgment will narrow. After all, the computer may well know more about our coworker than we do, and thus will correctly steer us away from the card with the adorable cats because one of our coworker’s cats just died, or because, well, the neural network may not be able to tell us why. And if we find we always enjoy Amazon’s top recommendations, we might find it reasonable to stop looking at its second choices, much less at its explanation of its choices for us.

After all, we don’t ask our calculators to show us their work.

David Weinberger
Former Digital Trends Contributor
Dr. Weinberger is a senior researcher at the Berkman Center. He has been a philosophy professor, journalist, strategic…