Intel showcased the extendable, rideable robot, dubbed the Segway Robot for now, alongside forthcoming consumer drones and a chipset built specifically for wearables, among other announcements. It rolled onto the stage wearing an adorable expression that could rival a newborn's, courtesy of Intel's RealSense RGB-D camera, which gives the self-balancing device a sense of spatial depth for tracking and mapping. An Intel Atom processor makes it all possible, as do the hardware's GPU acceleration and embedded vision algorithms.
Intel wasn't alone in the effort, though. The robot represents a partnership between the chip giant and Ninebot, the Xiaomi-backed company that acquired Segway last year and manufactures its self-balancing scooters. The collaboration draws on a wealth of technology, including voice interaction, a livestreaming camera, and facial recognition, which together allowed the robot to navigate Intel's mock living room and chat with its inventor like something out of the waste-covered world of WALL-E.
Segway reportedly plans to make the Segway Robot commercially available sometime next year, and a developer kit is slated to launch in the second half of 2016 at an undisclosed price. The kit will give developers access to the robot's open-source SDK, letting them build new applications before the hardware is widely available. Unfortunately, we doubt many developers will be able to tackle the challenges that come with a pair of arms boasting the mobility of a Lego figurine's. I guess the future will have to wait.