UN told to ban killer robots before they become a reality

Human Rights Watch and Harvard Law School’s International Human Rights Clinic have a strong and eye-catching message for the U.N.: “Ban killer robots.” The two groups take up the cause against fully autonomous weapons in a 38-page report released ahead of an international meeting about said weapons starting April 13.

“Fully autonomous weapons, also known as ‘killer robots,’ raise serious moral and legal concerns because they would possess the ability to select and engage their targets without meaningful human control,” begins the report, titled Mind the Gap: The Lack of Accountability for Killer Robots. Human Rights Watch and Harvard Law School lay out a list of concerns about fully autonomous weapons, including doubts about their ability to distinguish civilian from military targets, the possibility of an arms race, and proliferation to militaries with little regard for the law.

All of those concerns are compounded by the accountability gap for “unlawful harm caused by fully autonomous weapons,” according to the report. Under current laws, parties associated with the use or production of killer robots (e.g., operators, commanders, programmers, manufacturers) would not be held liable in the case of harm caused by the robots. The ultimate solution proposed by the report is to adopt an international ban on fully autonomous weapons.

On Monday, a weeklong international meeting about autonomous weapons systems will begin at the U.N. in Geneva. The agenda will cover additions to the Convention on Certain Conventional Weapons.

“Also known as the inhumane weapons convention, the treaty has been regularly reinforced by new protocols on emerging military technology,” according to The Guardian. “Blinding laser weapons were pre-emptively outlawed in 1995 and combatant nations since 2006 have been required to remove unexploded cluster bombs.”

The report is an early discussion of a hypothetical future, and its authors admit as much: “Fully autonomous weapons do not yet exist, but technology is moving in their direction, and precursors are already in use or development.” The examples listed in the report all respond to threats automatically, putting them a step beyond drones, which require a human to control them from a remote location.

“No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party,” said Bonnie Docherty, senior Arms Division researcher at Human Rights Watch and the report’s lead author. “The many obstacles to justice for potential victims show why we urgently need to ban fully autonomous weapons.”

In November 2013, an op-ed in The Wall Street Journal co-authored by two professors disputed the notion that fully autonomous weapons need to be banned. Malicious actors already disposed to abusing such weapons would not respect a ban, argued Kenneth Anderson and Matthew Waxman. “Moreover, because the automation of weapons will happen gradually, it would be nearly impossible to design or enforce such a ban.”

Anderson and Waxman also suggested that autonomous weapons could reduce suffering and protect human lives rather than the opposite. Nevertheless, the co-authors said careful regulation is warranted.

“Autonomous weapons are not inherently unlawful or unethical,” they concluded. “If we adapt legal and ethical norms to address robotic weapons, they can be used responsibly and effectively on the battlefield.”

Jason Hahn
Jason Hahn is a part-time freelance writer based in New Jersey. He earned his master's degree in journalism at Northwestern…