Forget digital. The future of A.I. is … analog? At least, that’s the assertion of Mythic, an A.I. chip company that, in its own words, is taking “a leap forward in performance in power” by going back in time. Sort of.
Before ENIAC, the world’s first room-sized programmable, electronic, general-purpose digital computer, buzzed to life in 1945, arguably all computers were analog — and had been for as long as computers have been around.
Analog computers are a bit like stereo amps, using a continuously variable physical quantity to represent values. In an analog computer, numbers are represented by currents or voltages rather than by the zeroes and ones used in a digital computer. While ENIAC marked the beginning of the end for analog computers, analog machines stuck around in some form until the 1950s or 1960s, when digital transistors won out.
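The distinction can be sketched in a few lines of code. This is an illustration only, with arbitrary example numbers: the same value rendered the digital way, as a pattern of bits, and the analog way, as a continuously variable physical quantity such as a voltage.

```python
# Illustrative sketch only (no real hardware involved): the same value
# represented the digital way and the analog way.
value = 0.37

# Digital: quantize to an 8-bit integer, i.e. a pattern of zeroes and ones.
digital_code = round(value * 255)          # 94
bit_pattern = format(digital_code, "08b")  # "01011110"

# Analog: the value simply *is* a physical quantity, here a voltage on a
# 0-to-1 V scale, continuously variable rather than a discrete code.
analog_volts = value * 1.0

print(bit_pattern, analog_volts)
```

Note that the digital version must round the value to the nearest representable code, while the analog version can, in principle, take any value in its range.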
“Digital kind of replaced analog computing,” Tim Vehling, senior vice president of product and business development at Mythic, told Digital Trends. “It was cheaper, faster, more powerful, and so forth. [As a result], analog went away for a while.”
In fact, to alter a famous quotation often attributed to Mark Twain, reports of the death of analog computing may have been greatly exaggerated. If the triumph of the digital transistor represented the beginning of the end for analog computers, it may only have been the beginning of the end of the beginning.
Mythic isn’t building purposely retro tech, though. This isn’t some steampunk startup operating out of a vintage clock tower headquarters filled with Tesla coils; it’s a well-funded tech company, based in Redwood City, California, and Austin, Texas, that builds Mythic Analog Matrix Processors (Mythic AMPs), which promise advances in power, performance, and cost through a unique analog compute architecture that diverges significantly from conventional digital architectures.
Devices like its M1076, a single-chip analog computation device, purport to usher in an age of compute-heavy processing at impressively low power.
“There’s definitely a lot of interest in making the next great A.I. processor,” said Vehling. “There’s a lot of investment and venture capital money going into this space, for sure. There’s no question about that.”
The analog approach isn’t just a marketing gimmick, either. Mythic sees problems ahead for Moore’s Law, the famous observation made by Intel co-founder Gordon Moore in 1965 that the number of transistors that can be squeezed onto an integrated circuit doubles at a steady rate (roughly every two years, in its commonly cited form). This observation has underpinned a period of sustained exponential improvement in computers over the past 60 years, supporting the amazing advances A.I. research has made during that same period.
But Moore’s Law is running into challenges of the physics variety. Advances have slowed as a result of the physical limits of continually shrinking components. Approaches like optical and quantum computing offer possible ways around this. Mythic’s analog approach, meanwhile, seeks to create compute-in-memory elements that function like tunable resistors, taking inputs as voltages and collecting outputs as currents. The idea is that the company’s chips can capably handle the matrix multiplication that artificial neural networks need in order to function.
As the company explains: “We use analog computing for our core neural network matrix operations, where we are multiplying an input vector by a weight matrix. Analog computing provides several key advantages. First, it is amazingly efficient; it eliminates memory movement for the neural network weights since they are used in place as resistors. Second, it is high performance; there are hundreds of thousands of multiply-accumulate operations occurring in parallel when we perform one of these vector operations.”
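The in-memory multiply the company describes can be sketched in ordinary code. The model below is an idealized illustration, not Mythic's actual design: each neural-network weight is stored as a conductance, the input vector is applied as voltages, and each output line sums the currents from its cells, so the matrix-vector multiply falls out of Ohm's law and Kirchhoff's current law, with every multiply-accumulate happening in parallel in the physics.

```python
import numpy as np

# Hypothetical, idealized crossbar (not Mythic's implementation): a
# compute-in-memory array does a matrix-vector multiply with physics.
# Each weight is programmed as a conductance G = 1/R (in siemens);
# inputs arrive as voltages; Ohm's law gives per-cell currents I = G * V,
# and Kirchhoff's current law sums them along each output line.

def crossbar_matvec(conductances, voltages):
    """Ideal crossbar: output currents (amps) = G @ V."""
    return conductances @ voltages

rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(3, 4))   # weight matrix stored as conductances
V = rng.uniform(0.0, 1.0, size=4)          # input vector applied as voltages

I = crossbar_matvec(G, V)                  # all multiply-accumulates occur "at once"
print(I)                                   # one output current per weight row
```

Because the weights sit in place as resistive elements, no memory traffic is needed to fetch them, which is the efficiency win the company's statement points to; a real device would also have to contend with noise, device variation, and digital-to-analog conversion at the edges, which this sketch ignores.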
“There’s a lot of ways to tackle the problem of A.I. computation,” Vehling said, referring to the various approaches being explored by different hardware companies. “There’s no wrong way. But we do fundamentally believe that the keep-throwing-more-transistors-at-it, keep-making-the-process-nodes-smaller — basically the Moore’s Law approach — is not viable anymore. It’s starting to prove out already. So whether you do analog computers or not, companies will have to find a different approach to make next-generation products that are high computation, low power, [et cetera].”
If this problem isn’t solved, it’s going to have a big impact on the further advancement of A.I., especially A.I. that runs locally on devices. Right now, much of the A.I. we rely on every day combines on-device processing with the cloud. Think of it like an employee who can make decisions up to a certain level, but must then call their boss to ask advice.
This is the model used by, for instance, smart speakers, which carry out tasks like keyword spotting (“OK, Google”) locally, but then outsource the actual spoken-word queries to the cloud, letting household devices harness the power of supercomputers housed in massive data centers thousands of miles away.
That’s all well and good, although some tasks require instant responses. And, as A.I. gets smarter, we’ll expect more and more of it. “We see a lot of what we call Edge A.I., which is not relying on the cloud, when it comes to industrial applications, machine vision applications, drones, in video surveillance,” Vehling said. “[For example], you may want to have a camera trying to identify somebody and take action immediately. There are a lot of applications that do need immediate application on a result.”
A.I. chips need to keep pace with other breakthroughs in hardware. Cameras, for instance, are getting better all the time. Picture resolution has increased dramatically over the past decades, meaning that deep A.I. models for image recognition must parse ever-larger amounts of image data to carry out analytics.
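Some back-of-envelope arithmetic shows how steep that growth is. The resolutions below are standard video formats chosen for illustration, not figures from the article:

```python
# Back-of-envelope arithmetic (standard video resolutions, used here
# purely for illustration): how the raw pixel count a vision model must
# ingest per frame grows as camera resolution improves.
resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}

pixels = {name: w * h for name, (w, h) in resolutions.items()}
for name, count in pixels.items():
    print(f"{name}: {count:,} pixels per frame")

# A 4K frame carries 9x the pixels of a 720p frame, so a model that looks
# at every pixel sees its per-frame workload scale by roughly that factor.
print(pixels["4K"] / pixels["720p"])  # 9.0
```

Multiply that per-frame factor by 30 or 60 frames per second, and by every layer of a deep network, and the pressure on chip designers becomes clear.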
Add to this the growing expectations of what people believe should be extractable from an image — whether that’s mapping objects in real time, identifying multiple objects at once, or figuring out the three-dimensional context of a scene — and you realize the immense challenge that A.I. systems face.
Whether it’s for offering more processing power while keeping devices small, or the privacy demands that require local processing instead of outsourcing, Mythic believes its compact chips have plenty to offer.
“We’re [currently] in the early commercialization stages,” said Vehling. “We’ve announced a couple of products. So far we have a number of customers that are evaluating [our technology] for use in their own products… Hopefully by late this year, early next year, we’ll start seeing companies utilizing our technology in their products.”
Initially, he said, this is likely to be in enterprise and industrial applications, such as video surveillance, high-end drone manufacturers, automation companies, and more. Don’t expect that consumer applications will lag too far behind, though.
“Beyond 2022 — going into ’24 — we’ll start seeing consumer tech companies [adopt our technology] as well,” he said.
If analog computing turns out to be the innovation that powers the augmented and virtual reality needed for the metaverse to function … well, isn’t that about the most perfect meeting point of steampunk and cyberpunk you could hope for?
Hopefully, Mythic’s chips prove less imaginary and unreal than the company’s chosen name would have us believe.