Think your house is smart now? Here’s a peek at what it’ll be like with AR

In Plato’s Allegory of the Cave, the influential Greek philosopher asks us to imagine a group of prisoners living their entire lives inside a cave. All they can see of the real world are the shadows that appear on the cave walls. Eventually, a prisoner escapes and realizes that his or her previous view of existence was based on a low-resolution, flat understanding of how the world actually operates.

A slightly pretentious way of starting an article on augmented reality? Perhaps. But the broad idea is the same: Right now, in the pre-AR world, our visual perspective contains only the surface details of the things around us. AR, a technology that has attracted growing attention in recent years, promises to let us go deeper.

Imagine walking down the street and having landmarks, store opening hours, Uber rider credentials, and other (useful) contextual information overlaid on top of your everyday perspective. Or walking around your home and being able to determine, for instance, the live power draw of a power strip simply by looking at it. Or how much battery life is remaining on your smoke detector. Or the Wi-Fi details of your router. Or any number of other useful “at a glance” details you might want to know.

Like the shift in perception described in Plato’s Cave, this won’t be an occasional “nice to have” supplement to the way we view the world. Augmented reality will, its biggest boosters claim, fundamentally alter our perception of real, physical places, permanently changing the way we view and experience reality and the possibilities the real world has to offer.

The future of AR interfaces?

Right now, it’s not yet at that point. AR is still all about games and, if we’re lucky, the opportunity to pick and place virtual Ikea furniture in our apartments to show us how much better our life might be if we owned a minimalist Scandinavian bookshelf or a handwoven rug. There’s still much progress to be made, and lots of infrastructure to be laid down before the world around us can be rewritten in AR’s image.

One group working hard to achieve this vision is the Future Interfaces Group at Carnegie Mellon University. The group has previously created futuristic technology ranging from conductive paint that turns walls into giant touchpads to a software update that lets smartwatches know exactly what your hands are doing and respond accordingly. In other words, FIG anticipates the way we’ll be interfacing with technology and the world around us tomorrow (or, well, maybe the day after that).

LightAnchors: Appropriating Point Lights for Spatially-Anchored Augmented Reality Interfaces

In its latest work, the group has developed something called LightAnchors, a technique for spatially anchoring data in augmented reality. In essence, it’s a prototype tagging system that precisely places labels on top of everyday scenes, marking up the real world like a neat, user-friendly schematic. That’s important. After all, to “augment” means to make something better by adding to it, not to crowd it with unclear, messy pop-ups and banner ads like a 1998 website. Augmented reality needs something like this if it’s ever going to live up to its promise.

“LightAnchors is sort of the AR equivalent of barcodes or QR Codes, which are everywhere,” Chris Harrison, head of Carnegie Mellon’s Future Interfaces Group, told Digital Trends. “Of course, barcodes don’t do a whole lot other than providing a unique ID for looking up price [and things like that.] LightAnchors can be so much more, allowing devices to not only say who and what they are, but also share live information and even interfaces. Being able to embed information right into the world is very powerful.”

How LightAnchors work

LightAnchors work by looking for light sources, such as status LEDs, that are blinked by a microprocessor. Many devices already contain microprocessors used for things like controlling status lights, and according to the Carnegie Mellon researchers, these could be made LightAnchor-enabled with a simple firmware update. For objects that don’t already have a blinkable light, an inexpensive microcontroller and LED could be added for just a couple of bucks.

As part of their proof of concept, the researchers showed how a glue gun could be made to transmit its live temperature, or a ride-share’s headlights made to emit a unique ID to help passengers find the right vehicle.
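
To get a feel for how simple the transmitting side could be, here’s a rough sketch written for a MicroPython-capable microcontroller. The pin number, bit rate, preamble, and payload format are all illustrative assumptions on our part, not the team’s actual firmware.

# Illustrative sketch only: one way a cheap microcontroller might blink a
# status LED to broadcast a fixed preamble followed by a small data payload.
# Assumes MicroPython; the pin, bit rate, and framing are invented for
# illustration and are not the researchers' actual firmware.
from machine import Pin
import time

LED = Pin(2, Pin.OUT)          # status LED pin (assumed)
BIT_PERIOD = 0.033             # ~30 bits per second, roughly one bit per video frame
PREAMBLE = [1, 0, 1, 0, 1, 1]  # fixed pattern the camera looks for (assumed)

def send_bits(bits):
    # Blink the LED once per bit: on = 1, off = 0.
    for bit in bits:
        LED.value(bit)
        time.sleep(BIT_PERIOD)

def broadcast(value, width=8):
    # Send the preamble, then the payload as a fixed-width binary number.
    payload = [(value >> i) & 1 for i in range(width - 1, -1, -1)]
    send_bits(PREAMBLE + payload)

while True:
    broadcast(192)             # e.g. a glue gun's live temperature reading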

To find those lights, the LightAnchors system scours incoming video frames for bright pixels surrounded by darker ones; each such spot becomes a candidate point at which to anchor a label.
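
Purely as an illustration of that search (not the team’s actual implementation), a minimal version in Python and NumPy might look like this; the brightness thresholds and window size are assumed values.

# Minimal sketch of the "bright pixel surrounded by darker pixels" search on a
# grayscale frame. Thresholds and window size are assumptions, not the values
# used by the LightAnchors researchers.
import numpy as np

def find_candidate_anchors(gray_frame, bright=200, dark=80, radius=3):
    # Return (row, col) points that are bright but whose surroundings are dark.
    candidates = []
    rows, cols = gray_frame.shape
    for r in range(radius, rows - radius):
        for c in range(radius, cols - radius):
            if gray_frame[r, c] < bright:
                continue                        # center pixel isn't bright enough
            window = gray_frame[r - radius:r + radius + 1,
                                c - radius:c + radius + 1].astype(float)
            window[radius, radius] = np.nan     # ignore the center pixel itself
            if np.nanmean(window) < dark:       # surroundings are dark
                candidates.append((r, c))
    return candidates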

“These candidate anchors are then tracked across time, looking for a blinked binary pattern,” Karan Ahuja, one of the researchers on the project, told Digital Trends. “Only candidates with the correct preamble are accepted, after which their data payloads can be decoded. LightAnchors allow ‘dumb’ devices to become smarter through AR with minimal extra cost. [For example,] a security camera can broadcast its privacy policy using the in-built LED.”
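
Again, only as a simplified illustration: once a candidate pixel’s brightness has been sampled over successive frames, the decoding step Ahuja describes might be sketched like this. The preamble, payload width, and threshold here are our own assumptions rather than the values the researchers use.

# Simplified decoding sketch: turn per-frame brightness samples into bits,
# check for the preamble, then read a fixed-width payload. The preamble,
# payload width, and threshold are illustrative assumptions.
PREAMBLE = [1, 0, 1, 0, 1, 1]    # must match whatever the device blinks (assumed)
PAYLOAD_BITS = 8

def brightness_to_bits(samples, threshold=128):
    # One brightness sample per video frame -> one bit per frame.
    return [1 if s >= threshold else 0 for s in samples]

def decode_payload(samples):
    # Return the payload value if a valid preamble is found, else None.
    bits = brightness_to_bits(samples)
    for start in range(len(bits) - len(PREAMBLE) - PAYLOAD_BITS + 1):
        if bits[start:start + len(PREAMBLE)] == PREAMBLE:
            payload = bits[start + len(PREAMBLE):start + len(PREAMBLE) + PAYLOAD_BITS]
            return int("".join(map(str, payload)), 2)
    return None                  # no valid preamble: reject this candidate

# Example: a candidate that blinked the preamble followed by the value 192
samples = [255, 0, 255, 0, 255, 255] + [255, 255, 0, 0, 0, 0, 0, 0]
print(decode_payload(samples))   # -> 192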

Right now, it’s still a concept that has yet to be commercialized. Implemented right, however, this could be one way to let users navigate and access the dense ecosystems of smart devices popping up with increasing regularity in the real world. “At present, there are no low cost and aesthetically pleasing methods to give appliances an outlet in the AR world,” Ahuja said. “AprilTags or QR codes are inexpensive, but visually obtrusive.”

Could LightAnchors be the answer? It’s certainly an exciting concept to explore. Suddenly we’re feeling more than ready for AR glasses to take off in a big way!
