Don’t be fooled by dystopian sci-fi stories: A.I. is becoming a force for good

[Image: Pepper the robot. Tomohiro Ohsumi/Getty Images]

One of the most famous sayings about technology is the “law” laid out by the late American historian Melvin Kranzberg: “Technology is neither good nor bad; nor is it neutral.”

It’s a great saying: brief, but packed with instruction, like a beautifully poetic line of code. If I understand it correctly, it means that technology isn’t inherently good or bad, but that it will certainly affect us in some way, which means its effects are not neutral. A similarly brilliant observation came from the French cultural theorist Paul Virilio: “The invention of the ship was also the invention of the shipwreck.”


To adopt that last image, artificial intelligence (A.I.) is the mother of all ships. It promises to be as significant a transformation for the world as the arrival of electricity was in the nineteenth and twentieth centuries. But while many of us will coo excitedly over the latest demonstration of DeepMind’s astonishing neural networks, a lot of the discussion surrounding A.I. is decidedly negative. We fret about robots stealing jobs, autonomous weapons threatening the world’s wellbeing, and the creeping privacy issues of data-munching giants. Heck, if the dream of artificial general intelligence is ever achieved, some pessimists seem to think the only debate is whether we’re obliterated by Terminator-style robots or turned into grey goo by nanobots.

While some of this technophobia is arguably misplaced, it’s not hard to see critics’ point. Tech giants like Google and Facebook have hired some of the greatest minds of our generation, and put them to work not curing disease or rethinking the economy, but coming up with better ways to target us with ads. The Human Genome Project, this ain’t! Shouldn’t a world-changing technology like A.I. be doing a bit more… world changing?

A course in moral A.I.?

2018 may be the year when things start to change. The seeds are still small, but there is growing evidence that the idea of making A.I. into a true force for good is gaining momentum. For example, starting this semester, the School of Computer Science at Carnegie Mellon University (CMU) will be teaching a new class titled “Artificial Intelligence for Social Good.” It touches on many of the topics you’d expect from a graduate- and undergraduate-level class (optimization, game theory, machine learning, and sequential decision making) and will look at each through the lens of how it will impact society. The course will also challenge students to build their own ethical A.I. projects, giving them real-world experience creating potentially life-changing A.I.


“A.I. is the blooming field with tremendous commercial success, and most people benefit from the advances of A.I. in their daily lives,” Professor Fei Fang told Digital Trends. “At the same time, people also have various concerns, ranging from potential job loss to privacy and safety issues to ethical issues and biases. However, not enough awareness has been raised regarding how A.I. can help address societal challenges.”

Fang describes this new course as “one of the pioneering courses focusing on this topic,” but CMU isn’t the only institution to offer one. It joins a similar “A.I. for Social Good” course offered at the University of Southern California, which started last year. At CMU, Fang’s course is listed as a core course for a Societal Computing Ph.D. program.


During the new CMU course, Fang and a variety of guest lecturers will discuss a number of ways A.I. can help solve big social problems: machine learning and game theory used to protect wildlife from poaching, A.I. used to design efficient matching algorithms for kidney exchanges, and A.I. used to help prevent HIV among homeless young people by selecting peer leaders to spread health-related information.
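To make the kidney exchange example concrete: at its simplest, pairing up incompatible patient-donor pairs can be framed as a matching problem on a graph. The sketch below is a toy illustration of that framing, not the course’s actual material; it assumes the open-source networkx library, and real clearinghouse algorithms also handle longer cycles and altruistic donor chains.

```python
# Toy two-way kidney exchange: each node is an incompatible patient-donor
# pair, and an edge means the two pairs could swap donors with each other.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("pair_A", "pair_B"),  # hypothetical mutual compatibility
    ("pair_B", "pair_C"),
    ("pair_C", "pair_A"),
    ("pair_A", "pair_D"),
])

# A maximum-cardinality matching arranges as many simultaneous two-way
# swaps as possible; each matched edge is one organ exchange.
swaps = nx.max_weight_matching(G, maxcardinality=True)
print(swaps)  # e.g., {('pair_C', 'pair_B'), ('pair_A', 'pair_D')}
```

In practice, researchers weight the edges by medical compatibility and fairness criteria rather than treating every swap as equal, but the core idea of using optimization to help more patients is the same.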

“The most important takeaway is that A.I. can be used to address pressing societal challenges, and can benefit society now and in the near future,” Fang said. “And it relies on the students to identify these challenges, to formulate them into clearly defined problems, and to develop A.I. methods to help address them.”

Challenges with modern A.I.

Professor Fang’s class isn’t the first place the ethics of A.I. has been discussed, but it does represent (and certainly coincides with) a renewed interest in the field. A.I. ethics is going mainstream.

This month, Microsoft published a book called “The Future Computed: Artificial intelligence and its role in society.” Like Fang’s class, it runs through some of the scenarios in which A.I. can help people today: letting those with limited vision hear the world described to them by a wearable device, and using smart sensors to let farmers increase their yield and be more productive.


There are plenty more examples of this kind. Here at Digital Trends, we’ve covered A.I. that can help develop new pharmaceutical drugs, A.I. that can help people avoid shelling out for a high-priced lawyer, A.I. that can diagnose disease, and A.I. and robotics projects that can help reduce backbreaking work, either by teaching humans how to perform it more safely or by taking them out of the loop altogether.

All of these are positive examples of how A.I. can be used for social good. But for it to really become a force for positive change in the world, artificial intelligence needs to go beyond simply good applications. It also needs to be created in a way that is considered positive by society. As Fang says, the possibility of algorithms reflecting bias is a significant problem, and one that’s still not well understood.


Several years ago, the African-American Harvard researcher Latanya Sweeney showed that Google’s search algorithms were inadvertently racist, linking names more commonly given to black people with ads relating to arrest records. Sweeney, who had never been arrested, found that searches for her own name were shown ads asking “Have you been arrested?” while searches for her white colleagues’ names were not. Similar studies have found that image recognition systems are more likely to associate a picture of a kitchen with women and one of sports coaching with men. In these cases, the bias wasn’t the fault of any one programmer; it stemmed from discriminatory patterns hidden in the large data sets the algorithms were trained on.
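To see how this kind of bias can be surfaced, here is a deliberately simplified sketch with made-up numbers (not Sweeney’s actual methodology): it measures how often a harmful ad appears for each group in a hypothetical serving log.

```python
# Hypothetical ad-serving log: (name_group, arrest_ad_shown) entries whose
# skew stands in for discriminatory patterns hidden in real training data.
ad_log = ([("group_a", True)] * 60 + [("group_a", False)] * 40 +
          [("group_b", True)] * 25 + [("group_b", False)] * 75)

def ad_rate(log, group):
    """Fraction of impressions for `group` where the arrest ad appeared."""
    hits = [shown for g, shown in log if g == group]
    return sum(hits) / len(hits)

rate_a = ad_rate(ad_log, "group_a")  # 0.60
rate_b = ad_rate(ad_log, "group_b")  # 0.25

# A ratio far from 1.0 means one group sees the harmful ad far more often:
# a red flag worth auditing, even if no programmer intended the skew.
print(f"group_a/group_b exposure ratio: {rate_a / rate_b:.1f}")  # 2.4
```

Audits like this are simple to run, which is part of the argument for making ad-serving and search systems more open to outside scrutiny.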

The same is true of the “black boxing” of algorithms, which can make them inscrutable even to their own creators. In Microsoft’s new book, the authors suggest that A.I. should be built around an ethical framework, a bit like science fiction writer Isaac Asimov’s “Three Laws of Robotics” for the “woke” generation. Its six principles state that A.I. systems should be fair; reliable and safe; private and secure; inclusive; transparent; and accountable.
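One partial remedy for that opacity is to probe a model from the outside. The sketch below shows permutation importance, a common model-agnostic transparency technique; the model and data here are synthetic stand-ins of our own, not anything from Microsoft’s book.

```python
# Permutation importance: shuffle one feature at a time and measure the
# accuracy drop. Large drops reveal which inputs drive a black-box model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic features
y = (X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)  # feature 0 dominates

def black_box(data):
    # Stand-in for an opaque model we can only query, never inspect.
    return (data[:, 0] + 0.1 * data[:, 2] > 0).astype(int)

baseline = (black_box(X) == y).mean()          # 1.0 on this toy data
for j in range(X.shape[1]):
    shuffled = X.copy()
    shuffled[:, j] = rng.permutation(shuffled[:, j])  # destroy feature j
    drop = baseline - (black_box(shuffled) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")
```

Even without opening the box, an audit like this shows which inputs actually drive a system’s decisions: a small step toward the “transparent” and “accountable” principles.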

“If designed properly, A.I. can help make decisions that are fairer because computers are purely logical and, in theory, are not subject to the conscious and unconscious biases that inevitably influence human decision-making,” Microsoft’s authors write.

More work to be done

Ultimately, this is going to be easier said than done. By most accounts, A.I. research in the private sector far outstrips work done in the public sector, which raises a problem of accountability in a world where algorithms are guarded as closely as missile launch codes. Companies also have little incentive to solve big societal problems if doing so won’t benefit their bottom line (or score them some brownie points that might help them avoid regulation). It would be naive to think that every initiative launched by profit-driven companies is going to be altruistic, no matter how much they might suggest otherwise.

For broader discussions about the use of A.I. for public good, something is going to have to change. Does it mean recognizing the power of artificial intelligence and putting in place more regulations that allow for scrutiny? Does it mean companies forming ethics boards, as Google DeepMind has done, as part of their research into cutting-edge A.I.? Does it mean waiting for a market-driven change, or backlash, that demands tech giants offer more information about the systems that govern our lives? Is it, as Bill Gates has suggested, a robot tax that curtails the use of A.I. or robotics in some situations by taxing companies for replacing their workers? None of these solutions is perfect.

And the biggest question of all remains: Who exactly defines ‘good’? Debates about how A.I. can be a force for good in our society will involve a great many users, policy makers, activists, technologists, and other interested parties working out what kind of world we want to create, and how technology can best help us achieve it.

As DeepMind co-founder Mustafa Suleyman told Wired: “Getting these things right is not purely a matter of having good intentions. We need to do the hard, practical and messy work of finding out what ethical A.I. really means. If we manage to get A.I. to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.”

Courses like Professor Fang’s aren’t the final destination, by any means. But they are a very good start.
