2018 has generated no shortage of news, and the fast-changing worlds of artificial intelligence and robotics are no exception.
While there were too many exciting developments to name them all, here are some of the biggest A.I. and robot game changers we saw this year.
Atlas goes parkour
For the past few years, if you wanted to either dazzle or freak out a person who asked how far robots have advanced this century, Googling “Boston Dynamics” was the surefire way to elicit a reaction. 2018 didn’t disappoint us in that regard: this was the year the company showed its humanoid Atlas robot running, jumping over obstacles, and bounding up staggered platforms parkour-style.
Okay, so video footage of a lab demo doesn’t mean it’s ready to flawlessly execute such an impressive routine in the real world just yet. But as an attention-grabbing reminder that robots are well and truly on the way? Yep, that’ll do it!
Quadruped robots in the workplace
Boston Dynamics is the company behind the world’s most famous canine-inspired quadruped robot. But in 2018 ANYmal, a similar robot created by Swiss robotics startup ANYbotics, actually beat it into the workplace.
This year, the Swiss quadruped robot (quadrobot?) underwent a one-week trial carrying out inspection tasks on one of the world’s largest offshore power-distribution platforms in the North Sea. The job involved covering a total of 16 inspection points, including checking gauges, levers, oil and water levels, and assorted other visual and thermal measurements.
At this point, counting the number of robots which could potentially steal away jobs from humans is as unmanageable as keeping a tally of angry people on Twitter. But this impressive showcase reminds us that such things are no longer just hypothetical.
A.I. artwork sells at auction
We’re used to A.I. proving its dominance across fields that require massive amounts of number crunching and statistical prowess. But we still get a bit funny when artificial intelligence starts entering fields we view as being profoundly human.
That’s what happened in October when a painting co-created by an A.I. went up for auction at Christie’s. The portrait, showing a rotund man in a dark frock coat and white collar, was created using a type of A.I. called a generative adversarial network (GAN).
“Portrait of Edmond de Belamy” was estimated to sell for $7,000 to $10,000. It wound up selling for $432,000. Apparently people are big into art painted by a robot. Maybe they think they’ll be spared when the rise of the machines eventually happens!
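For the curious, the adversarial idea behind that portrait can be shown in miniature. The following is a toy, hypothetical sketch in plain NumPy, nothing like the actual network that produced the painting: a simple linear generator learns to mimic “real” data (numbers drawn from a Gaussian centered at 3) by trying to fool a logistic discriminator, while the discriminator simultaneously learns to tell real from fake.

```python
# Toy 1-D GAN sketch (illustrative only; parameters and setup are invented).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator D(x) = sigmoid(w*x + c); generator G(z) = a*z + b.
w, c = 0.1, 0.0   # discriminator parameters
a, b = 1.0, 0.0   # generator parameters
lr, batch = 0.05, 64

for _ in range(3000):
    real = rng.normal(3.0, 1.0, batch)   # "real" training data
    z = rng.normal(0.0, 1.0, batch)      # noise fed to the generator
    fake = a * z + b                     # generated samples

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    grad_w = -np.mean((1 - d_real) * real) + np.mean(d_fake * fake)
    grad_c = -np.mean(1 - d_real) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * fake + c)
    grad_a = -np.mean((1 - d_fake) * w * z)
    grad_b = -np.mean((1 - d_fake) * w)
    a -= lr * grad_a
    b -= lr * grad_b

# After training, the generator's offset b has drifted toward the
# real data's mean, so generated numbers resemble the real ones.
```

The same push-and-pull, scaled up from a single number to image pixels and trained on thousands of historical portraits, is what produced the Belamy painting.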
Nvidia generates a city
One day we’ll all live in cities designed by artificial intelligence bots for maximum efficiency of travel, convenience, and enjoyment. That day is still a distance off, but 2018 showed that A.I. techniques are perfectly capable of whipping up a three-dimensional city model when required.
Shown off by Nvidia at the recent NeurIPS artificial intelligence conference in Montreal, this impressive tech demo took data gathered from the dash cams of self-driving cars. Using some supercomputing A.I. wizardry, it then transformed this data into a fully realized virtual environment.
That could prove useful for everything from training autonomous cars to reducing the workload on game designers, who currently have to spend thousands of person-hours creating the 3D city representations seen in games like those in the Grand Theft Auto franchise.
Google Duplex has a conversation
Thanks to tools like the iPhone’s Siri, we’ve been “talking” to A.I. assistants for the best part of a decade by this point. But that didn’t prepare us for the showcase that was Google Duplex. Unveiled by Google during its summer Google I/O 2018 event, Duplex is capable of having natural-sounding conversations with people to perform tasks like making restaurant reservations.
The kicker? It’s so convincing that the human on the other end of the line isn’t even aware that they are speaking with a robot. To enhance the effect, Duplex incorporates filler words such as “hmmm” and “uh” into its speech.
At present, Google Duplex remains a tech demo rather than an actual product, but it nonetheless represents a big advance in A.I.’s ability to naturally converse with us puny humans.
Forget fake news, here are “deep fakes”
2018 was the year of “deep fake” technology: A.I.-augmented videos able to superimpose one person’s face onto another person’s body — with predictably worrying results.
There were too many demonstrations of this for us to rattle through all of them, but this Recycle-GAN system developed by Carnegie Mellon researchers shows how Barack Obama’s words can be made to come from the mouth of Donald Trump, or a John Oliver monologue can be transferred to Stephen Colbert. Add in some scarily accurate voice synthesis, and fake news is looking like it’ll be even harder to spot in 2019 than in years past.
Hey, at least there are researchers working on ways to spot said deep fakes.
Here come the delivery robots
The once-science-fiction notion of robots carrying out deliveries really gained traction in 2018. Leading the charge was autonomous robot manufacturer Starship Technologies, which raked in giant piles of investor cash to make its robot delivery dream a reality.
Right now, an army of the company’s wheeled delivery bots is carrying out “last mile” deliveries to customers’ front doors in select cities. Using an app, recipients can say when and where they want their package delivered, as well as keep a watchful eye on the robot’s progress in real time. The future, it seems, is not only here — it’s super convenient, too.