Thought Leadership

Understanding AI-assisted Chip Design Podcast – Transcript

Chip design is one of the most complex and challenging tasks in the world, requiring specialized tools and knowledge far beyond what is needed in other fields. High-level synthesis (HLS) is a key tool for addressing that complexity and achieving efficient, optimized designs, which are essential to both modern smart products and cutting-edge AI. HLS and AI have strong synergies in improving ease of use, speed, and efficiency.

Check out the full podcast here or keep reading for a transcript of that conversation.

Spencer Acain:

Hello, and welcome to the AI Spectrum podcast. I’m your host, Spencer Acain. In this series, we explore a wide range of AI topics from all across Siemens and how they apply to different technologies. Today, I’m joined by Russell Klein, program director for Siemens EDA’s High-Level Synthesis team. Welcome, Russ.

Russell Klein:

Thank you.

Spencer Acain:

Before we jump into our main AI topics here, it would be great if you could just give us a quick rundown of what high-level synthesis is and why it is important for the modern chip design process.

Russell Klein:

Okay. Well, when people are designing circuits in the traditional fashion, the designer is going to describe in either Verilog or VHDL, one of the hardware description languages … they’re actually going to describe every single wire, every single register, every single operator that’s going to be used within that design. So they’re touching every single element. You can imagine trying to do a billion-gate design is going to be really challenging if you’ve got to go in and touch each one of those individual elements that’s going to be pulled together.

What high-level synthesis does is raise the level of abstraction. So we’re no longer describing every single wire, every single register, every single operator. We describe things at a much more algorithmic level, and this is typically done in C++ with our high-level synthesis tools. Other high-level synthesis tools could use SystemC or Python, and it’s creeping into other languages.
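To give a sense of what "algorithmic level" means here, the sketch below shows the kind of C++ an HLS tool might take as input. This is an illustrative example only, not code from the podcast: a plain 8-tap FIR-style multiply-accumulate, where the tool, not the designer, would decide how to map the loop onto registers, multipliers, and adders.

```cpp
#include <cstdint>

// Hypothetical HLS input: an 8-tap FIR-style accumulation written at the
// algorithmic level. The designer describes only the math; an HLS tool
// would infer the wires, registers, and operators from this description.
constexpr int TAPS = 8;

int32_t fir(const int16_t sample[TAPS], const int16_t coeff[TAPS]) {
    int32_t acc = 0;
    for (int i = 0; i < TAPS; ++i) {
        // One multiply-accumulate per tap; the tool chooses whether these
        // run in parallel or share a single multiplier over several cycles.
        acc += static_cast<int32_t>(sample[i]) * coeff[i];
    }
    return acc;
}
```

The same function can be compiled and tested as ordinary software first, which is one of the verification benefits Russ describes: the algorithm is validated before any hardware detail exists.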

But the entire concept is just to move the level of abstraction up so that the engineer doesn’t need to worry about all the very low-level details in pulling that design together. They can think of things at a higher level of abstraction and let the tool, let the compiler, handle all of those lower-level details. So they become more productive. They’re able to get their designs not only done faster, but verified faster as well. And operating at that higher level of abstraction introduces a lot of efficiencies into the design process.

Now, not every designer today is using high-level synthesis. Even though it’s been around for a long time, it hasn’t fully penetrated the market, so there are a lot of folks still doing it the old-fashioned way.

Spencer Acain:

You mentioned the efficiencies in there, and I think that’s kind of an interesting term, because when we talk AI, energy efficiency and being energy hungry is a big … it’s a big kind of detractor from the technology, really. And is HLS something that can be used to help address that? Is all that efficiency gain just in the productivity of the chip designer and their ability to make a design faster? Or is it actually helping them make a better, more efficient chip as well?

Russell Klein:

It actually does both. So one of the things that you can do, if you’re able to design … create a design in a shorter period of time, you can try that design out and see how it works. And if you realize it’s being inefficient, you can undo some steps and redo them in a different way. So you can look at different design alternatives.

And the challenge in creating hardware for the AI space is that it’s all relatively new. If you’re creating GPUs or you’re creating CPUs, we’ve got 50 or 60 years of really understanding what works well in a CPU architecture. In terms of AI accelerators, it’s a lot closer to 5 or 10 years, and the underlying algorithms are changing very rapidly. So we don’t know ahead of time, designers don’t know ahead of time, exactly what the best architecture is going to be at really the microarchitecture level, all the details and decisions that they’ve got to make that are going to impact both performance and efficiency.

And so having a rapid design methodology, having a design methodology where we can create an implementation quickly, see if it works well, and then course correct, works out a lot better than a long, slow implementation process. So, again, if we’re doing a traditional register-transfer level design, we’re going to be touching every single register, every single operator, and that means we’re going to spend a long time building our first implementation.

Once we’re done with that, management usually comes in and says, “Do you have something that’s working? Let’s run with it.” People just don’t have the resources to try three or four different implementations with that design methodology. With high-level synthesis, you’re going a lot quicker, and all of that low-level work is being done by the computer. You fire it off, tell it to build a new version of the design where we’ve changed something from a parallel access to a serial access. What impact did that have? You come back the next morning, it’s designed, everything at the low level has been built, and you can evaluate it. Just something that’s not practical in a traditional design flow.

Spencer Acain:

Well, I mean, it sounds like HLS has a lot to offer to AI then, because, like you say, the AI space is still really new, it’s still developing. We don’t really have a set understanding of what the algorithms are going to look like yet. But would you say that this is also kind of a two-way street? I don’t think it’d be an exaggeration to say, of course, that chip design is the most complicated discipline of engineering in the world right now. So, is AI helping make that process easier as well? Are HLS tools or chip design tools being made smarter, easier to use, or better by incorporating AI into them?

Russell Klein:

Indeed, they are. I think all of the design tools across not just electronic design, but software design, mechanical design and so forth, we’re going to be seeing a lot of AI come into those tools and make them easier to use and make engineers more productive with those.

In the electronic design space, the compilers and the tools that we’ve been using have been getting smarter over the years. If you take a look at Verilog from 20 years ago, you didn’t declare a variable. You declared something to be either a wire or a register, and there were specific rules on when you could use a wire and when you could use a register and could you connect it to an output or an input, and it was all very arcane. But if you wanted to build hardware, you had to learn all of that.

Now, the compilers have gotten more capable. Today, you don’t describe it as a wire or a register. You say it’s a logic element, right? So we have all these logic elements, and then the compiler goes in and sorts all of that out. With AI, it’s going to be able to get a lot smarter. It’s going to be able to look at the circuits and figure out what we were trying to do and figure out new organizations that are going to increase performance and efficiency. So, in creating the RTL and going from the RTL to the gate level, going from the gate level to the layout, the GDSII for creating the masks and manufacturing the chips, all of those steps, all of the tools are going to start to get smarter.

Specifically in high-level synthesis, what we’re doing is infusing AI in a number of different aspects of the tool. So one is, of course, generative AI, where rather than writing the algorithmic code that’s going in, we can describe what it is that we want to do, and the generative AI, much like a coding copilot, is going to create the code that we can then feed to the high-level synthesis tool. And if we train that coding copilot on the good coding styles to use and the directives to feed to the compiler and so forth, it can become an expert at writing high-level synthesis code. That means the learning curve for new adopters, people coming to the technology fresh, is shorter; they don’t have to learn as much complicated stuff in order to get the benefits from high-level synthesis.

But in addition to that, in the tool itself, if we look at any compiler or compilation type technology, what we’ll find is that the compiler has to make decisions about how it’s going to implement the design. And those decisions today are generally based on what we call heuristics, and a heuristic is a way of making a decision when there are two algorithms that will work, but one’s going to be better than the other. And as we write these heuristics manually, as we have done in the past, you’ve got a very limited view of what’s going on in the design. So you sort of have your decision space where you’re looking at an if-then-else construct or a greater-than-less-than construct and trying to make a good decision.

You know method A will work, method B will work. You don’t know which is best. And so you’d use some simple metric to figure it out. And it turns out that a lot of times that simple metric doesn’t work. So what we’ll do is ask the user to give us more information if they want better results. And that means we’ve got a big manual of all these different decision points and how to put flags and how to put pragmas in the code and how to annotate it. And it all becomes really complex.

Now, as we put AI into the tool, it just becomes smarter, and those heuristics are going to be right more often because they’re going to be the result of looking at thousands of designs. What went in, what came out? How do we solve this problem the best? Let me give you a rather concrete example of that to help illustrate this.

So when we take an array of data and we’re going to implement it as data in a design that we’re running through high-level synthesis, we’re just going to declare it as an array of, say, 50 elements of data. Now, one of the things the high-level synthesis tool has to decide is, are we going to take those 50 elements and put them into registers? Registers take up a lot of area and a lot of power, but it means that other parts of the hardware design can get access to all of those data elements concurrently. We’ve got parallel access to everything. Alternatively, we could put it into a memory, and in the memory it’s going to be much smaller, it’s going to use much less power, but we can only access a limited number of those 50 values at a time.

So the heuristic that’s been in our high-level synthesis tool and other high-level synthesis tools has been, well, if it’s a really big array, put it in a memory. And if it’s a really small array, put it into registers. It’s kind of a blunt instrument, and a lot of times it’s going to make the wrong decision.

So, we told the users, “Well, you can change the threshold where we put it into registers or put it into memory. So we could say, if it’s more than 256 or more than 512 or some number, we can then put it in one or the other.” But what AI’s going to do is it’s going to look at how often is this data accessed? Do I need to access it in parallel? Am I going to speed things up by putting it into individual registers? Or could I put it into, say, 10 different memories where I could get access to 10 elements at once, but I don’t need to put it all into registers? That decision now gets made by the tool, and it’s right a lot more than that very simple heuristic that we implemented.
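The size-threshold heuristic Russ describes can be sketched in a few lines of C++. This is a hedged, illustrative model only; the function name, threshold default, and return labels are assumptions for the sake of the example, not actual Catapult internals.

```cpp
#include <string>

// Hypothetical model of the simple heuristic described above: small
// arrays map to individual registers, large arrays map to a memory.
// Real HLS tools expose directives/pragmas so the user can override the
// threshold; an AI-driven heuristic would also weigh access patterns,
// e.g. how often and how widely the array is read in parallel.
std::string map_array(int elements, int threshold = 256) {
    return (elements > threshold) ? "memory" : "registers";
}
```

The 50-element array from the earlier example falls below the default threshold and would land in registers, even when a banked memory (say, 10 memories with 5 elements each) might have been the better choice, which is exactly the kind of wrong call a smarter, trained heuristic could avoid.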

Now, if you look at a tool like a high-level synthesis tool like Catapult, or a compiler like GCC, there are literally thousands of these heuristics built in throughout the entire compiler. And we’re going to make each one of those decision points smarter and smarter. The user doesn’t need to learn as much about how to drive the tool to get great results. Over time, as the tool gets more experience and gets smarter, as it gets trained on larger datasets, it’s going to be able to make better implementations with less user input, and we end up being more productive and creating better designs.

Spencer Acain:

It really feels like that would be just kind of a perfect use for AI, the way you’re describing it here, because before you had what’s basically a dumb decision tree helping to make these decisions. But now you’re almost replacing it with a super intelligent person sitting there who can flip all these switches and decide when and where is the best place to use these different elements that you have access to, with context. And I think that’s kind of the big word here for AI in industries, bringing that industrial context into your AI and into your decision process. And it seems like, almost at a macro level, that’s what you’re doing here as well, bringing the context of the design problem itself into the decision-making of building these chips.

Russell Klein:

Exactly.

Spencer Acain:

But that’s all the time we have for this episode. So, once again, I have been your host, Spencer Acain, joined by Russ Klein on the AI Spectrum podcast. Tune in again next time as we continue our discussion on the impact of AI in the EDA space.


Siemens Digital Industries Software helps organizations of all sizes digitally transform using software, hardware and services from the Siemens Xcelerator business platform. Siemens’ software and the comprehensive digital twin enable companies to optimize their design, engineering and manufacturing processes to turn today’s ideas into the sustainable products of the future. From chips to entire systems, from product to process, across all industries. Siemens Digital Industries Software – Accelerating transformation.

Spencer Acain


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/thought-leadership/understanding-ai-assisted-chip-design-podcast-transcript/