The U.S. Navy’s Office of Naval Research is working on a way to turn the United States military fleet invisible. Well, to a computer, at least.
The project involves so-called adversarial objects, which exploit a weakness in computer-recognition systems, either prompting them to fail to recognize an object entirely or else to classify it incorrectly. A famous example was a demonstration in which such a system was fooled into classifying a 3D-printed toy turtle as a rifle. In another instance, researchers created special glasses that caused facial-recognition software to misidentify wearers.
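The underlying trick in attacks like these is to nudge an input in the direction that most changes a model's output. The sketch below illustrates the idea with the fast gradient sign method against a toy linear "image" classifier; the weights, labels, and sizes are illustrative assumptions, not any real recognition system.

```python
import numpy as np

# Toy linear classifier over an "image" of d pixels, each in [0, 1].
# These weights are illustrative, not a real model.
rng = np.random.default_rng(0)
d = 1000                      # number of "pixels"
w = rng.normal(size=d)        # classifier weights
b = -0.5 * w.sum()            # center the decision boundary at x = 0.5

def predict(x):
    """Positive score -> 'turtle', negative -> 'rifle'."""
    return "turtle" if w @ x + b > 0 else "rifle"

# An input the model confidently calls a turtle: each pixel is nudged
# 0.5% of the pixel range in the direction that raises the turtle score.
x = 0.5 + 0.005 * np.sign(w)

# Fast gradient sign method: shift every pixel a tiny, fixed step
# *against* the gradient of the score. For a linear model that
# gradient is simply w.
epsilon = 0.01                # 1% of the pixel range, visually negligible
x_adv = x - epsilon * np.sign(w)

print(predict(x))     # -> turtle
print(predict(x_adv)) # -> rifle
```

Because every pixel contributes a little, a perturbation far too small to notice in any single pixel can add up across thousands of them and flip the predicted label, which is what an adversarial sticker or paint pattern exploits at the scale of a whole vehicle.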
According to New Scientist, the U.S. Navy has yet to reveal many details of the project. However, it has awarded contracts to three companies that will develop it in two stages. In the first, the companies will carry out initial background research on the concept. In the second, they will develop so-called "foolkits" for camouflaging aircraft and vehicles, taking the form of special stickers or paint templates that could be applied to them. A document produced by the U.S. Navy describes how the technology could trick enemy surveillance systems into thinking that tanks are ordinary cars, or vice versa, baffling the enemy about what is actually on the battlefield.
The approach would, of course, only trick A.I. systems, not actual people. Nonetheless, in scenarios where areas are monitored solely by machine intelligence, it could prove incredibly useful. To counteract it, enemy forces would have to either develop more robust A.I. systems or expend resources replacing automated surveillance with flesh-and-blood humans to carry out monitoring duties.
Things don’t just work one way, though. While the U.S. Navy is interested in the offensive possibilities inherent in this work, it is also keen to explore the technology for defensive reasons. In other words, it hopes the research will yield insights that help the Navy’s own image-recognition systems avoid being fooled in the same way.