Analog Computing: The Underdog AI Hero That Refuses to Go Digital

Generative AI has the potential to benefit humankind in various ways. It can assist creative industries by generating innovative works of art, and support personalized healthcare by analyzing medical data and creating customized treatment plans. It can also help reduce environmental waste, create more engaging and personalized learning experiences, and make technology more accessible for people with disabilities.

For us, as semiconductor circuit design and verification engineers, Generative AI can provide new tools to improve the efficiency and accuracy of the design process. It can optimize designs, automate the design flow, detect errors, and verify correctness. Generative AI can significantly reduce the time and effort required from engineers, while also improving the reliability and efficiency of the resulting designs.

Generative AI is a diverse field that includes various models and networks for generating new data. Variational autoencoders (VAEs) generate new data by learning the underlying distribution of the original data set. Generative adversarial networks (GANs) pair a generator with a discriminator to produce realistic images, videos, and 3D models. One of the most popular approaches currently is the large language model, exemplified by ChatGPT. Such a model is capable of understanding and generating human-like language, which has numerous applications in natural language processing, text generation, and more.

However, training these models requires a significant amount of computational resources: training ChatGPT used 10,000 Nvidia GPUs*. As the size of the model increases, so does the amount of compute needed. This has led to the development of advanced digital compute hardware specifically designed to handle the demands of training large language models.

At the same time, the computational demands of inference, the process of using a trained model to generate language, are comparatively lower. This has led to the exploration of analog compute hardware, which is more energy-efficient and has the potential to accelerate the inference process while reducing power consumption.

One company working in this space is Mythic IC. It has developed a unique analog compute chip designed to perform inference for AI models. Mythic uses analog compute hardware to perform calculations with significantly lower power consumption than digital compute hardware. This is achieved with a novel computing architecture that takes advantage of Compute-In-Memory (CIM) techniques.
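Conceptually, an analog CIM array stores each weight as a programmable conductance and computes a matrix-vector product in place: input voltages are applied to the rows, Ohm's law turns each voltage-conductance pair into a cell current, and Kirchhoff's current law sums the currents on each shared column wire. The following NumPy sketch illustrates that idea only; the array sizes, values, and structure are illustrative assumptions, not Mythic's actual architecture.

```python
import numpy as np

# Illustrative sketch of analog compute-in-memory (not any vendor's real design):
# each weight is stored as a conductance G (siemens); inputs arrive as voltages V.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1e-6, size=(3, 4))   # 3 input rows x 4 output columns
V = np.array([0.2, 0.5, 0.8])             # input voltages applied to the rows

# Ohm's law per cell (I = V * G), then Kirchhoff's current law per column
# (cell currents sum on the shared column wire): together this performs a
# matrix-vector product "in memory".
I_cell = V[:, None] * G                   # current through each memory cell
I_col = I_cell.sum(axis=0)                # summed current on each column

# The analog array computes the same product a digital MAC unit would.
assert np.allclose(I_col, V @ G)
```

The point of the sketch is that the multiply-accumulate happens in the physics of the array itself, which is why the analog approach can be far more energy-efficient than shuttling weights through a digital datapath.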

In Mythic’s CIM architecture, input/output is digital, while the weights and computation are analog and involve high voltages. This combination creates significant challenges for analog/mixed-signal verification and simulation. One of the key steps in verifying such a huge mixed-signal system is keeping it tractable: digital logic, analog circuits, and non-volatile memory must be simulated together, and the communication between them must also be simulated while maintaining high accuracy.
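The digital/analog boundary described above is where DACs and ADCs sit: digital codes are converted to analog quantities on the way in, and analog results are quantized back to codes on the way out. A minimal sketch of that round trip, using idealized converters (the function names, resolution, and reference voltage here are assumptions for illustration, not part of any real chip's interface):

```python
import numpy as np

def dac(codes, vref=1.0, bits=8):
    """Ideal DAC: map digital codes to analog voltages in [0, vref]."""
    return codes / (2**bits - 1) * vref

def adc(values, vref=1.0, bits=8):
    """Ideal ADC: clip analog values to [0, vref] and quantize to codes."""
    codes = np.round(np.clip(values, 0.0, vref) / vref * (2**bits - 1))
    return codes.astype(int)

# Round-trip a digital input through the (idealized) analog domain.
codes_in = np.array([0, 17, 128, 255])
voltages = dac(codes_in)
codes_out = adc(voltages)

# With ideal converters the round trip is exact; real converters add
# quantization noise, offset, and gain error, which is precisely what
# mixed-signal verification must model accurately.
assert np.array_equal(codes_in, codes_out)
```

In a real verification flow these ideal models would be replaced by behavioral or transistor-level converter models, and the accuracy of the digital-analog handoff is exactly what the co-simulation has to preserve.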

By using Symphony together with a methodology of tiling and partitioning the design, it is possible to verify thousands of ADCs efficiently while addressing both performance and functional verification needs. The choice of tiling and partition boundaries in the chip hierarchy is driven mainly by verification decisions, in order to meet performance and functional coverage goals.

To learn more about how Symphony is used for analog AI processor verification, please read our white paper.

*According to UBS research

This article first appeared on the Siemens Digital Industries Software blog at