
Amazon designs an A.I. camera to teach computer vision to developers

Amazon's DeepLens camera is available in the U.S. for $249


From Google to Snapchat, artificial intelligence is expanding the camera’s prowess, and Amazon wants to give developers the chance to learn about deep learning and computer vision. On November 29, 2017, Amazon Web Services announced AWS DeepLens, a video camera designed to teach developers how to program A.I. functions ranging from artistic style transfer to recognizing a hot dog. Now, as of June 14, 2018, DeepLens is available in the U.S. for $249.

DeepLens is less a camera and more a learning tool. The device comes preloaded with several deep learning frameworks and sample projects, and teaches developers how to use the technology, together with AWS infrastructure, inside their own apps. It ships with AWS Greengrass Core and a device-optimized version of Apache MXNet, and users can also add other frameworks such as TensorFlow.

The learning camera looks rather unlike other cameras on the market; instead, it more closely resembles an action camera mounted on top of an external hard drive. The camera component houses a 4-megapixel sensor capable of shooting standard 1080p HD video, while a 2D microphone array captures sound.

But of course, a 4-megapixel sensor isn’t what DeepLens is all about. The camera system uses an Intel Atom processor fast enough to run deep learning inference on about 10 frames per second. Its 8GB of memory houses both the preloaded sample code and developers’ custom models. Wi-Fi also opens up the possibility of offloading to cloud computing any models too large to run on the internal hardware.
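
To make that concrete, here is a minimal sketch of the kind of per-frame inference loop the hardware is sized for. It is not AWS sample code: it uses OpenCV to grab frames and a small pretrained ImageNet model from MXNet’s Gluon model zoo, and the camera index and input handling are assumptions.

    # Minimal sketch (not AWS sample code): classify webcam frames with a small
    # pretrained MXNet model, roughly the workload DeepLens runs at ~10 fps.
    import cv2
    import mxnet as mx
    from mxnet.gluon.model_zoo import vision

    net = vision.squeezenet1_1(pretrained=True)  # small net suited to an Atom-class CPU

    cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize to the 224x224 RGB input the model expects and scale to [0, 1]
        # (full ImageNet mean/std normalization is omitted for brevity).
        img = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        x = mx.nd.transpose(mx.nd.array(img), axes=(2, 0, 1))
        x = x.expand_dims(axis=0).astype('float32') / 255.0
        probs = net(x).softmax()
        print('top class id:', int(probs.argmax(axis=1).asscalar()))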

Using the AWS DeepLens console and a computer, users can choose from project templates for a more guided learning experience or design their own software from scratch. Each template or sample project walks developers through how it works, giving them hands-on experience integrating deep learning into their own projects.
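
For a sense of what those sample projects look like on the device, here is a compressed sketch of the pattern the DeepLens object-detection samples follow, using the on-device awscam library. The model path, input size, and confidence threshold below are illustrative assumptions, not exact sample code.

    # Sketch of the DeepLens sample-project pattern: a function running on the
    # device pulls camera frames and runs an optimized model against them.
    import cv2
    import awscam  # DeepLens on-device inference library

    MODEL_PATH = '/opt/awscam/artifacts/my-model.xml'  # hypothetical artifact path
    model = awscam.Model(MODEL_PATH, {'GPU': 1})       # load onto the on-board accelerator

    while True:
        ret, frame = awscam.getLastFrame()             # latest frame from the 4MP camera
        if not ret:
            raise RuntimeError('Failed to read a frame from the camera')
        # The object-detection samples resize to the network's input size first.
        result = model.doInference(cv2.resize(frame, (300, 300)))
        detections = model.parseResult('ssd', result)['ssd']
        confident = [d for d in detections if d['prob'] > 0.5]  # threshold is illustrative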

Deep learning is a form of machine learning that requires less hand-engineering from developers than more traditional A.I. techniques. It is commonly used for computer vision, the ability to recognize objects or patterns in images.

While the camera is designed primarily for developers, that hands-on access could also allow smaller app companies to integrate these advanced features into their own products.

Updated on June 14, 2018: DeepLens is now available in the U.S.
