
Analog A.I.? It sounds crazy, but it might be the future

Forget digital. The future of A.I. is … analog? At least, that’s the assertion of Mythic, an A.I. chip company that, in its own words, is taking “a leap forward in performance in power” by going back in time. Sort of.

Before ENIAC, the world’s first room-sized, programmable, electronic, general-purpose digital computer, buzzed to life in 1945, arguably all computers were analog, and had been for as long as such machines had existed.

Analog computers are a bit like stereo amps: they represent values with continuously variable physical quantities. In an analog computer, numbers are represented by currents or voltages rather than by the zeroes and ones used in a digital computer. Although ENIAC marked the beginning of the end for analog computers, analog machines stuck around in some form until the 1950s and 1960s, when digital transistors won out.

“Digital kind of replaced analog computing,” Tim Vehling, senior vice president of product and business development at Mythic, told Digital Trends. “It was cheaper, faster, more powerful, and so forth. [As a result], analog went away for a while.”

To adapt a famous quotation often attributed to Mark Twain, though, reports of the death of analog computing may have been greatly exaggerated. If the triumph of the digital transistor looked like the end for analog computers, it may in fact have been only the end of the beginning.

Building the next great A.I. processor


Mythic isn’t building purposely retro tech, though. This isn’t some steampunk startup operating out of a vintage clock tower headquarters filled with Tesla coils; it’s a well-funded tech company, based in Redwood City, California, and Austin, Texas, that’s building Mythic Analog Matrix Processors (Mythic AMP), which promise advances in power, performance, and cost by using a unique analog compute architecture that diverges significantly from conventional digital architectures.

Products like the M1076, a single-chip analog compute device the company has already announced, purport to usher in an age of compute-heavy processing at impressively low power.

“There’s definitely a lot of interest in making the next great A.I. processor,” said Vehling. “There’s a lot of investment and venture capital money going into this space, for sure. There’s no question about that.”

The analog approach isn’t just a marketing gimmick, either. Mythic sees trouble ahead for Moore’s Law, the famous observation made by Intel co-founder Gordon Moore in 1965 that the number of transistors that can be squeezed onto an integrated circuit doubles roughly every two years. That trend has delivered sustained exponential improvement in computing power over the past six decades, helping support the amazing advances A.I. research has made during that same period.

But Moore’s Law is running into challenges of the physics variety: progress has slowed as the industry bumps up against the physical limits of shrinking components ever further. Approaches like optical and quantum computing offer possible ways around this. Mythic’s analog approach, meanwhile, builds compute-in-memory elements that function like tunable resistors, taking inputs as voltages and collecting the outputs as currents. The idea is that the company’s chips can then handle the matrix multiplication needed for artificial neural networks to function, but in a fundamentally different way from conventional digital hardware.
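To make that idea concrete, here is a minimal sketch, in Python, of how a single compute-in-memory multiply-accumulate can be modeled. It is purely illustrative and assumes nothing about Mythic’s actual circuit design; the function and the values are hypothetical.

```python
# Purely illustrative model of a compute-in-memory multiply-accumulate:
# each stored weight acts like a tunable resistor (a conductance G),
# each input arrives as a voltage V, Ohm's law gives a current I = G * V
# per cell, and the currents summing on a shared output wire perform the
# accumulation. None of this reflects Mythic's real circuit values.

def analog_mac(voltages, conductances):
    """Sum of G_i * V_i: one output 'neuron' computed physically."""
    assert len(voltages) == len(conductances)
    currents = [g * v for g, v in zip(conductances, voltages)]  # Ohm's law per cell
    return sum(currents)  # currents add on the shared wire (Kirchhoff's current law)

# Example: three inputs (voltages) against three stored weights (conductances).
print(analog_mac([0.8, 0.1, 0.5], [0.2, 0.9, 0.4]))  # -> 0.45
```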

As the company explains: “We use analog computing for our core neural network matrix operations, where we are multiplying an input vector by a weight matrix. Analog computing provides several key advantages. First, it is amazingly efficient; it eliminates memory movement for the neural network weights since they are used in place as resistors. Second, it is high performance; there are hundreds of thousands of multiply-accumulate operations occurring in parallel when we perform one of these vector operations.”
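In digital terms, that vector-by-weight-matrix operation is the familiar matrix product sketched below. The layer sizes are hypothetical, and the NumPy call is simply a stand-in for what the analog array does in one physical step.

```python
import numpy as np

# Hypothetical layer sizes; real models vary widely.
activations = np.random.rand(1024)        # inputs, applied to the array as voltages
weights = np.random.rand(1024, 512)       # weights, stored in place as conductances

# One vector-by-matrix operation. Digitally, this is 1024 x 512 = 524,288
# multiply-accumulates; in the analog array, each weight cell multiplies
# locally and the resulting currents sum on shared lines in parallel.
outputs = activations @ weights

print(outputs.shape)  # (512,) -- one output per column of the weight array
```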

“There’s a lot of ways to tackle the problem of A.I. computation,” Vehling said, referring to the various approaches being explored by different hardware companies. “There’s no wrong way. But we do fundamentally believe that the keep-throwing-more-transistors-at-it, keep-making-the-process-nodes-smaller — basically the Moore’s Law approach — is not viable anymore. It’s starting to prove out already. So whether you do analog computers or not, companies will have to find a different approach to make next-generation products that are high computation, low power, [et cetera].”

The future of A.I.


If this problem is not addressed, it is going to have a big impact on the further advancement of A.I., especially A.I. that runs locally on devices. Right now, much of the A.I. we rely on daily combines on-device processing with the cloud. Think of it like an employee who is able to make decisions up to a certain level, but must then call their boss for advice.

This is the model used by, for instance, smart speakers, which carry out tasks like keyword spotting (“OK, Google”) locally, but then outsource the actual spoken-word queries to the cloud, letting household devices harness the power of supercomputers housed in massive data centers thousands of miles away.
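The division of labor looks roughly like the sketch below. The function names are hypothetical stand-ins, not any vendor’s real API; the point is simply that only a tiny always-on model runs locally, while the heavy lifting happens remotely.

```python
# Hypothetical sketch of the edge/cloud split described above; the functions
# are stubs, not a real smart-speaker API.

def detect_wake_word(utterance: str) -> bool:
    """Tiny always-on model that runs entirely on the device (stubbed here)."""
    return utterance.lower().startswith("ok google")

def send_to_cloud(query: str) -> str:
    """Full speech recognition and question answering, done in a data center (stubbed)."""
    return f"(cloud-generated answer to: {query!r})"

def handle_utterance(utterance: str):
    # Step 1: cheap keyword spotting happens locally, with no network round trip.
    if not detect_wake_word(utterance):
        return None
    # Step 2: only after the wake word fires does the query leave the device.
    query = utterance.split(" ", 2)[-1]
    return send_to_cloud(query)

print(handle_utterance("OK Google what's the weather tomorrow"))
```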

That’s all well and good, although some tasks require instant responses. And, as A.I. gets smarter, we’ll expect more and more of it. “We see a lot of what we call Edge A.I., which is not relying on the cloud, when it comes to industrial applications, machine vision applications, drones, in video surveillance,” Vehling said. “[For example], you may want to have a camera trying to identify somebody and take action immediately. There are a lot of applications that do need immediate application on a result.”

A.I. chips need to keep pace with other breakthroughs in hardware. Cameras, for instance, are getting better all the time. Picture resolution has increased dramatically over the past decades, meaning that deep A.I. models for image recognition must parse ever larger amounts of image data to carry out their analysis.
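Some back-of-the-envelope arithmetic shows why. The figures below are illustrative assumptions (a single 3x3 convolution layer with made-up channel counts), not numbers from Mythic, but they show how quickly the multiply-accumulate workload grows with camera resolution.

```python
# Illustrative only: how the multiply-accumulate (MAC) count of one 3x3
# convolution layer scales with input resolution. Channel counts are made up.

def conv_macs(width, height, in_ch=3, out_ch=32, kernel=3):
    # One MAC per kernel weight, per input channel, per output channel,
    # per output pixel (padding and stride details ignored for simplicity).
    return width * height * in_ch * out_ch * kernel * kernel

for name, (w, h) in {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    print(f"{name}: {conv_macs(w, h):,} MACs for a single layer")
```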

Add onto this the growing expectations for what people believe should be extractable from an image — whether that’s mapping objects in real-time, identifying multiple objects at once, figuring out the three-dimensional context of a scene — and you realize the immense challenge that A.I. systems face.

Whether it’s offering more processing power while keeping devices small, or meeting privacy demands that require local processing rather than outsourcing to the cloud, Mythic believes its compact chips have plenty to offer.

The roll-out


“We’re [currently] in the early commercialization stages,” said Vehling. “We’ve announced a couple of products. So far we have a number of customers that are evaluating [our technology] for use in their own products… Hopefully by late this year, early next year, we’ll start seeing companies utilizing our technology in their products.”

Initially, he said, this is likely to be in enterprise and industrial applications, such as video surveillance, high-end drones, industrial automation, and more. Don’t expect consumer applications to lag too far behind, though.

“Beyond 2022 — [2023] going into ’24 — we’ll start seeing consumer tech companies [adopt our technology] as well,” he said.

If analog computing turns out to be the innovation that powers the augmented and virtual reality needed for the metaverse to function … well, isn’t that about the most perfect meeting point of steampunk and cyberpunk you could hope for?

Hopefully, Mythic’s chips prove less imaginary and unreal than the company’s chosen name would have us believe.
