
Using Data And AI More Effectively In EDA

The semiconductor industry is at a pivotal moment, with the increasing complexity of designs demanding innovative approaches to efficiency and productivity. A central theme in this evolution is the strategic integration of AI with the vast amounts of data generated by EDA tools. While EDA produces a wealth of information, the critical question remains: how effectively is this data being leveraged by AI, and what more can be done to maximize its potential? Sathishkumar Balasubramanian, Head of Products at Siemens EDA, shared valuable insights on this subject in Brian Bailey’s recently published SemiEngineering.com article.

Let’s dive into a summary of his key points and explore how we are using data and AI more effectively in EDA.   

Unlocking the full potential of data and AI in EDA

Ever wonder how much of the vast amount of data generated by our EDA tools is truly being put to work by AI? It’s an important question, and one that the semiconductor industry, including Siemens EDA, is actively exploring.

The truth is, while our EDA tools generate enormous volumes of data, much of it has traditionally been designed for human eyes, with what is sometimes called “weak semantics.” This means it’s not always in the most AI-friendly format.

The data dilemma: from human-friendly to AI-ready

Engineers are incredibly skilled at creating and verifying designs using the data we have today. However, to truly supercharge our processes with AI, we need to get smarter about how we handle “control data.”

As Sathishkumar Balasubramanian puts it: “We have enough data that we can make it work, although there needs to be some improvement in the way we are formulating data to be inferred. The key thing is how you label the data in the data lake, how you vectorize the database, and how you’re connecting to all the relevant sources and keeping your data, or a data lake, current with what teams are supposed to do. When we build the data lake, we create what we call signal and label and origin, and everything else on where it should be used, where it cannot be used, what version of the software it can be attached to. There are a lot of things that you can label and attach to a given data when you’re starting to open it up to a data lake.” Think of it as giving AI a super-detailed map instead of just a general idea!
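To make the labeling idea concrete, here is a minimal sketch in Python of what attaching signal, label, origin, and version metadata to a data lake entry might look like. The field names and the `usable_for` check are illustrative assumptions; the article describes the concept but not an actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class DataLakeEntry:
    """One artifact pushed into the data lake, with AI-oriented labels.

    Field names are illustrative -- the idea is that each piece of data
    carries its signal, label, origin, valid tool version, and usage scope.
    """
    signal: str                 # what the data represents, e.g. "regression_log"
    label: str                  # retrieval tag for the agent / RAG layer
    origin: str                 # which tool or run produced it
    tool_version: str           # software version the data is attached to
    allowed_uses: list = field(default_factory=list)  # where it may be used
    payload: dict = field(default_factory=dict)

def usable_for(entry: DataLakeEntry, task: str, tool_version: str) -> bool:
    """Consult the labels before handing an entry to an AI agent."""
    return task in entry.allowed_uses and entry.tool_version == tool_version

entry = DataLakeEntry(
    signal="regression_log",
    label="block_a_timing",
    origin="simulation_run_42",
    tool_version="2026.1",
    allowed_uses=["triage", "parameter_tuning"],
)
print(usable_for(entry, "triage", "2026.1"))   # True
print(usable_for(entry, "signoff", "2026.1"))  # False
```

The point of the check is exactly the "where it should be used, where it cannot be used" filtering described above: the labels, not the raw payload, decide whether the AI gets to see the data.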

Enter the agents: orchestrating smarter EDA workflows

Imagine AI agents as autonomous orchestrators, poring over the data from our tools, sometimes across multiple runs. Their mission? To extract actionable insights that can lead to design improvements or optimized tool parameters.

These agents are all about boosting efficiency. As Sathishkumar explains: “Knowledge transfers between runs, for different versions of a given problem, are going to be very key. Once we know that we have a self-verifying check loop that doesn’t compromise on accuracy, think about the amount of savings you get, both in terms of time in your computer resource and licenses, and you can get to the answer very fast. This is going to be an order of magnitude better once we get the agentic flow to work. When you have a working agentic flow that is tuned to a certain task, the agent will know about the previous version. It will know how to structure the regression, because this is the fastest way for throughput, and that’s what it is going to do for the next run.”
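The two ideas in that quote, knowledge transfer between runs and a self-verifying check loop, can be sketched in a few lines of Python. Everything here is hypothetical: the function names, the test ordering heuristic, and the `check` callback are stand-ins for whatever a real agentic flow would use, not a Siemens EDA API.

```python
def order_regression(tests, previous_failures):
    """Knowledge transfer between runs, sketched minimally: tests that
    failed in the previous version of the problem are scheduled first."""
    return sorted(tests, key=lambda t: t not in previous_failures)

def run_with_check(tests, run_test, check):
    """Self-verifying loop: an independent check() re-confirms each
    suspected failure, so the agent can stop early without giving up
    accuracy -- saving the remaining compute time and licenses."""
    executed = []
    for t in tests:
        executed.append(t)
        passed = run_test(t)
        if not passed and check(t):   # failure independently verified
            return t, executed        # early exit on first confirmed failure
    return None, executed

tests = ["t1", "t2", "t3", "t4"]
ordered = order_regression(tests, previous_failures={"t3"})
failing, ran = run_with_check(
    ordered,
    run_test=lambda t: t != "t3",   # stand-in: t3 fails again this run
    check=lambda t: True,           # stand-in for the verifying check
)
print(failing, len(ran))  # t3 1 -- found in one run instead of three
```

Because the agent "knows about the previous version," the likely failure runs first and the loop terminates after a single test; with a naive ordering the same failure would not surface until the third run.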

The Model Context Protocol (MCP): a smarter API for AI-powered EDA

For a long time, the EDA industry has relied on APIs to access internal data. But what if we could make this access even smarter for AI? That’s where the Model Context Protocol (MCP) comes in.

Sathishkumar highlights that “The most important thing that I believe needs to happen, that both industry and customer are asking for, is MCP-compliant ways of building the product. You define an MCP server, which is an AI way of saying an API layer, but it is much smarter. For each of the products there is a server, and then you open up and define all your commands and everything else. With the agents, with the LLM, and with the RAG infrastructure, each of the use case flows can be built together so that they are self-optimizing, to reach the end goal.”
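To ground the "MCP server as a smarter API layer" idea, here is a toy sketch of one server for one product: a registry of tool commands dispatched through JSON-RPC-style `tools/list` and `tools/call` messages, which mirrors the shape of the Model Context Protocol. This is a schematic in plain Python, not the official MCP SDK, and the product and tool names are invented for illustration.

```python
import json

class ProductMCPServer:
    """Toy per-product server: register commands as tools, then serve
    JSON-RPC-style requests an agent/LLM orchestrator could send."""

    def __init__(self, product: str):
        self.product = product
        self.tools = {}

    def tool(self, name: str, description: str):
        """Decorator that opens up a product command as a named tool."""
        def register(fn):
            self.tools[name] = {"description": description, "fn": fn}
            return fn
        return register

    def handle(self, request: str) -> str:
        req = json.loads(request)
        if req["method"] == "tools/list":
            result = [{"name": n, "description": t["description"]}
                      for n, t in self.tools.items()]
        elif req["method"] == "tools/call":
            tool = self.tools[req["params"]["name"]]
            result = tool["fn"](**req["params"].get("arguments", {}))
        else:
            raise ValueError(f"unknown method: {req['method']}")
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

# One server per product, as the quote describes.
server = ProductMCPServer("hypothetical_simulator")

@server.tool("run_regression", "Run a named regression suite")
def run_regression(suite: str):
    return {"suite": suite, "status": "queued"}  # stub action for the sketch

reply = server.handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "run_regression", "arguments": {"suite": "nightly"}},
}))
print(reply)
```

An MCP orchestrator would then sit above several such per-product servers, routing each step of a flow to the right one, which is the coordination role described next.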

Of course, the quality of the MCP implementation matters a lot. Sathishkumar is direct on this point. “It is easy to write an MCP server for a product.  But it may not work, or it might work for 20% of the time. Each product must have a fully owned, very clear way of doing an MCP server, and then having an MCP orchestrator that’s much more efficient in managing all these MCPs for a given flow. That’s going to be very critical. We have already seen some customers put them together, but they come back and say, for your product, I created an MCP server, but somehow it doesn’t work. That is because you don’t know everything about the product. If it comes from us, we can make an efficient MCP server. MCP compatibility is going to be very key. It’s already happening.”

What does this mean for EDA?

This shift towards more effective data utilization and AI integration in EDA represents a significant step forward. It means:

  • Smarter data: Moving towards data that’s not just abundant, but also structured and meaningful for AI is an essential foundation.
  • Automated efficiency: AI agents will help automate repetitive tasks and optimize EDA workflows, freeing up engineers for more complex challenges.
  • Connected tools: The Model Context Protocol is enabling more intelligent and interconnected EDA tools.

The work is already underway, and the direction is clear. If you would like to explore the full conversation behind these ideas, the complete article is available here: Using Data And AI More Effectively In EDA

Mary Rayburn


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/cicv/2026/03/12/using-data-and-ai-more-effectively-in-eda/