
Importance of early planning for interconnect verification in 3D IC physical design workflows

By Heather George

In our last podcast on 3D IC architecture workflows, we discussed how system and microarchitecture decisions determine how to partition a device’s functionality. Today, we will discuss what a 3D IC physical design workflow looks like, from prototyping and planning through system technology co-optimization (STCO) to substrate routing and design verification.

There has been much growth and development in advanced heterogeneous packaging in the past several years. After defining the architecture, what are the typical design flow steps recommended or used for multi-die and chiplet heterogeneously integrated assemblies?

Watch the 3D IC podcast episode: Early planning for 3D IC physical design workflows

If you prefer video to text, watch this 20-minute video about what a 3D IC physical design workflow might look like, the challenges associated with 3D IC physical design and how a Siemens solution enables design teams to use data received in multiple, diverse formats.

We know that everyone has their own preferred learning style, and we want to make sure you can get the most out of this content in the way that works best for you. So, whether you prefer to read the copy below or watch the full episode above, we hope you find this information helpful!

Managing complex interconnect design verification challenges

For discussion purposes, “substrate” represents the range of interconnect systems that fall under the IC packaging domain. The challenges of designing organic substrates differ significantly from those of designing silicon interposers. At a macro level, however, the problem is similar: defining and implementing the desired connectivity, regardless of the interconnect technology used.

Engagement with customers is typically at two points:

  • Very early: when the design process is being planned and connectivity is defined
  • Late: when customers are working to verify a complex 2.5D or 3D IC system

As we all know, the days of managing a single-die package with a spreadsheet are long gone. Customer concern with connectivity grows alongside increasing artificial intelligence (AI) and high-performance compute silicon interposer designs with two to eight high-bandwidth memories (HBMs), fan-out wafer-level packaging, embedded bridge technology and system-in-package type designs. Engaging early to identify a solution is critical to help manage these complex interconnect challenges.

Interconnect system considerations in the design flow

The interconnect between the various devices is also an important consideration, as is a physical representation of the multiple chiplets or die involved, along with the platform these devices will sit on. Each discipline uses its own tool and data format. The silicon team uses a set of place-and-route tools or a custom layout tool to design the die, which appears somewhat like a black box to the rest of the system. But that black box has to interface with a package or an interposer. The interposer team needs to know what the interface looks like, both from a netlist perspective and as a physical bump layout. Tools can consume the information in whatever format it may be available in, but the challenge comes when things change – how does each tool get updated? Some formats pass back and forth between tools more easily than others, which is an important consideration when thinking about design flows.
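
A common pain point here is simply knowing what changed between two drops of the die interface data. As a rough illustration only, the sketch below diffs two snapshots of a die bump map kept in a simple CSV; the file names and columns (bump_name, net, x_um, y_um) are invented for this example and do not correspond to any particular tool’s exchange format.

```python
# Hypothetical sketch: report what changed between two snapshots of a die
# bump-map CSV so the package/interposer tool knows which pins need updating.
# File names and column names are illustrative, not a vendor format.
import csv

def load_bumps(path):
    """Return {bump_name: (net, x_um, y_um)} from a simple bump-map CSV."""
    bumps = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            bumps[row["bump_name"]] = (row["net"], float(row["x_um"]), float(row["y_um"]))
    return bumps

def diff_bumps(old, new):
    """Added, removed, and moved/renamed bumps between two snapshots."""
    added   = sorted(set(new) - set(old))
    removed = sorted(set(old) - set(new))
    changed = sorted(b for b in set(old) & set(new) if old[b] != new[b])
    return added, removed, changed

if __name__ == "__main__":
    added, removed, changed = diff_bumps(load_bumps("die_rev1.csv"),
                                         load_bumps("die_rev2.csv"))
    print(f"added: {added}\nremoved: {removed}\nchanged: {changed}")
```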

Managing disparate data sources in chiplet solutions

Disparate data sources are a growing challenge in interposer design as it relates to silicon or chip design. A company using chiplets and designing its own silicon may have inputs in multiple formats, such as GDS from the chiplet provider, Verilog and LEF/DEF for its own silicon, and CSV for the interposer ball-out or C4 bump plan. How does the team combine that data, evaluate it, and use it to manage connectivity?

Being able to take design elements in their native formats and use them as they are, without reimplementing, redesigning, or recapturing them, becomes a critical part of the design process. This matters most to teams that address it early in the design process rather than late in the verification stage.

So, the substrate design team needs a solution that can quickly and accurately handle the multiple data formats, without translating the data into something else, and bring them together into a single cohesive system representation and netlist.
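
To make that idea concrete, here is a minimal sketch, under assumed inputs, of folding two differently formatted sources, a Verilog port list for the die and a CSV ball-out for the package, into one net-keyed view so connectivity questions can be asked in one place. The crude port extraction, file names, and column names are illustrative assumptions, not part of any Siemens tool.

```python
# Hypothetical sketch: merge a die port list (Verilog) and a package ball-out
# (CSV) into one net-keyed system view without translating either source.
import csv
import re
from collections import defaultdict

def die_ports_from_verilog(path):
    """Very rough extraction of (direction, port) pairs from a module header."""
    text = open(path).read()
    return re.findall(r"\b(input|output|inout)\s+(?:wire\s+|reg\s+)?(?:\[[^\]]*\]\s*)?(\w+)", text)

def balls_from_csv(path):
    """Yield (ball, net) pairs from a simple ball-out CSV."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            yield row["ball"], row["net"]

system = defaultdict(list)                  # net name -> list of (domain, pin)
for direction, port in die_ports_from_verilog("die_top.v"):
    system[port].append(("die", port))
for ball, net in balls_from_csv("package_ballout.csv"):
    system[net].append(("package", ball))

# Nets seen in only one domain are candidates for review before implementation.
dangling = [n for n, pins in system.items() if len({d for d, _ in pins}) < 2]
print("nets present in only one domain:", dangling)
```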

Early planning for design verification of multi-die systems

Design verification teams for multi-die systems work with the silicon team to verify the various interfaces, whether to memory or to standard interfaces such as USB or PCIe. The verification team has a Verilog representation of the design that has been verified with a comprehensive test bench for those interfaces. Design teams want to import that netlist and use it as is, so having a solution that works with the input as-is is crucial. Teams can then take the standard parts such as memory from their vendors, the chip design from the chip team, and the netlist that brings it all together from the verification team.

Verifying the independent pieces only at the end of the design cycle leaves the design team working with fixed data that they must adapt to figure out whether it connects correctly. While that can be done, it often uncovers issues very late in the design cycle. So, whenever possible, teams should plan for that final design verification from the start and use a planning tool to manage the data throughout the design. The golden netlist, as we often call it, is no longer kept in a spreadsheet but rather in a planning tool that replaces the spreadsheet.

Combining all these pieces enables early analysis and collaboration, often referred to as system technology co-optimization (STCO), which supports early analysis and trade-offs. A centralized planning tool and the integrations available with it make it easy for design teams to make intelligent decisions early in the design cycle. And as the design matures, questions about correctness can be answered as part of the design cycle rather than waiting until the end.

Physical verification of individual pieces and the total solution

One of the other aspects of the 3D IC design flow is physical verification, and the difference between verifying the individual pieces and verifying the total solution. Teams can verify the interposer or the package on its own, but what about when they are all put together? Are they connected and lined up correctly?

For example, when teams are doing a stacked substrate design, there is almost always a horror story: part of the layout is complete, but someone interpreted one of the specifications as pins up or pins down instead of the correct orientation, and they ended up building and manufacturing a piece mirrored from the die or the interposer. How do you prevent these design errors from happening?

Teams need comprehensive verification of the entire assembly:

  • the die
  • the interposer
  • the package
  • the PCB (if it’s available)

After being put together in a single environment, the assembly needs verification to ensure all of the pieces line up.
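
As a minimal sketch of what such an assembly check might look like, the example below places die bumps into package coordinates and confirms every bump lands on exactly one interposer pad, with nothing missing and nothing extra. The transform, tolerance, and data are invented for illustration; real assembly verification handles far more, such as rotation, scaling, and multiple dies.

```python
# Hypothetical assembly-level check: every die bump must land on exactly one
# interposer pad (within tolerance) once the die is placed in package
# coordinates. Names, units and the transform are illustrative assumptions.
TOL_UM = 2.0  # allowed centre-to-centre offset, in microns

def place(point, offset, mirror_x=False):
    """Map a die-local bump location into package coordinates."""
    x, y = point
    if mirror_x:
        x = -x
    return (x + offset[0], y + offset[1])

def check_assembly(die_bumps, interposer_pads, offset, mirror_x=False):
    """die_bumps/interposer_pads: {name: (x_um, y_um)} in their own frames."""
    unmatched_pads = dict(interposer_pads)
    floating = []
    for name, pt in die_bumps.items():
        px, py = place(pt, offset, mirror_x)
        hit = next((p for p, (qx, qy) in unmatched_pads.items()
                    if abs(px - qx) <= TOL_UM and abs(py - qy) <= TOL_UM), None)
        if hit is None:
            floating.append(name)       # die bump with no pad underneath
        else:
            del unmatched_pads[hit]     # each pad may be used only once
    return floating, list(unmatched_pads)

die = {"A1": (0.0, 0.0), "A2": (100.0, 0.0)}
pads = {"P1": (500.0, 500.0), "P2": (600.0, 500.0)}
print(check_assembly(die, pads, offset=(500.0, 500.0)))  # -> ([], [])
```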

Assembly verification and system-level LVS

How do companies account for die scaling and the package bump compensation factor so that everything works when it is put together and manufactured?
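
A tiny worked example makes the question concrete. Using the 0.9995 die shrink mentioned in the episode and a purely hypothetical package-side compensation factor, a bump’s as-manufactured location drifts by more than a micron from its nominal coordinates, which is exactly the kind of offset an alignment check has to account for.

```python
# Minimal worked example, under assumed numbers: apply the die shrink and a
# hypothetical package bump compensation before comparing bump locations.
DIE_SCALE = 0.9995         # example die shrink cited in the transcript
BUMP_COMP = 1.0002         # hypothetical package-side compensation factor

def as_manufactured(x_um, y_um):
    """Nominal die-bump location -> location expected on the finished part."""
    s = DIE_SCALE * BUMP_COMP
    return x_um * s, y_um * s

# A bump nominally at (4000, 3000) um ends up more than a micron away once
# both factors are applied, so checks cannot compare nominal coordinates.
print(as_manufactured(4000.0, 3000.0))   # -> (~3998.8, ~2999.1)
```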

This is currently the most active topic in 2.5D and 3D IC design. A comprehensive set of planning tools enables teams to ensure proper bump alignment and verify that net names are correct, avoiding connectivity shorts when the pieces are connected. Referred to as a system layout versus schematic (LVS) flow, this involves running LVS on the entire system to check across the various design domains. The package design may call a clock net CLK, the interposer team may use similar but not identical names for the same clock signals, and the die may call them something else again.

How do teams validate that the clock connection from the package BGA bump, through the package, to the interposer C4 bumps, and on to the die is correct through all of the net name transitions and the interconnect on the various package and interposer layers?
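
The sketch below illustrates the naming side of that question under invented names: a declared alias table maps each domain’s local net name to one logical net, so a script can confirm the BGA-to-die clock path never changes logical nets despite the name transitions. This is only a toy model of the idea, not how the Siemens system LVS flow is implemented.

```python
# Hypothetical sketch of the system-LVS naming problem: the same physical
# clock is called CLK on the package, clk_top on the interposer and u_clk on
# the die. All pin and net names below are invented for illustration.
ALIASES = {"CLK": "sys_clk", "clk_top": "sys_clk", "u_clk": "sys_clk"}

# (domain, pin, local net name) as each design database reports it
CONNECTIONS = [
    ("package_bga", "BGA_A7",   "CLK"),
    ("package_c4",  "C4_118",   "CLK"),
    ("interposer",  "C4_118",   "clk_top"),
    ("interposer",  "ubump_92", "clk_top"),
    ("die",         "ubump_92", "u_clk"),
]

def trace(logical_net):
    """Return the pins on one logical net; flag names with no alias entry."""
    path, unknown = [], []
    for domain, pin, local in CONNECTIONS:
        if local not in ALIASES:
            unknown.append((domain, local))
        elif ALIASES[local] == logical_net:
            path.append((domain, pin))
    return path, unknown

path, unknown = trace("sys_clk")
print("pins on sys_clk:", path)           # all five pins, BGA through die
print("unmapped local names:", unknown)   # [] -> no naming surprises
```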

The Siemens planning and verification platform allows you to do that. Finding these types of problems in a multi-substrate, multi-die system before you fabricate it is imperative because it is costly when you don’t.

Where to focus first: assembly verification

The most accessible place to start is assembly verification: the ability to take the package netlist and the silicon die, or multiple silicon dies, and verify that the system is correct. These verifications are typically straightforward and enable teams to find mistakes. When used early in the design cycle, teams don’t need the complete package implemented to find value in the verification because they can uncover simple “pins up, pins down” types of mistakes. By running an overlaps check between the silicon and package data, teams can verify that the orientation is correct and that all the bumps are in the right spot. These checks are quick, high value-add and easy to do early in the design cycle. Setting them up early makes it easy to run them every time there’s an update from the silicon team. As the design matures and the implementation is done, adding LVS to the verification becomes very simple later in the design process. Assembly verification is an excellent place to start and allows teams to grow from there.
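
As a closing illustration of the “pins up, pins down” check, the toy example below compares how many die bumps land on a package pad with the footprint as given versus X-mirrored; a mirrored capture scores dramatically better in the flipped orientation, flagging the error long before full LVS is possible. All coordinates and the tolerance are invented.

```python
# Hypothetical overlaps check: if the package footprint was captured mirrored,
# few die bumps land on a pad, so comparing the hit rate of the as-given vs.
# X-mirrored pattern exposes the error early. Data below is invented.
TOL_UM = 2.0

def hit_rate(bumps, pads, mirror_x):
    """Fraction of die bumps that land on some package pad."""
    hits = 0
    for x, y in bumps:
        if mirror_x:
            x = -x
        if any(abs(x - px) <= TOL_UM and abs(y - py) <= TOL_UM for px, py in pads):
            hits += 1
    return hits / len(bumps)

die_bumps = [(0, 0), (100, 0), (200, 50)]
pkg_pads  = [(0, 0), (-100, 0), (-200, 50)]       # captured with X flipped

normal   = hit_rate(die_bumps, pkg_pads, mirror_x=False)
mirrored = hit_rate(die_bumps, pkg_pads, mirror_x=True)
if mirrored > normal:
    print(f"likely mirrored footprint: {mirrored:.0%} vs {normal:.0%} bumps aligned")
```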

Want to learn more about the impact of 3D IC on physical design workflows? Listen to the podcast now, available on your favorite podcast platform.

View the episode transcript

[00:10] John McMillan: Welcome to the Siemens EDA podcast series on 3D IC design brought to you by the Siemens Thought Leadership team. I’m your host, John McMillan. In our last podcast on 3D IC architecture workflows, we talked about how system and microarchitecture decisions determine how to partition a device’s functionality. Today, we will discuss what a 3D IC physical design workflow looks like, from prototyping and planning through system technology co-optimization to substrate routing and verification. I’m pleased to introduce my special guest, Mike Walsh, who is the Technical Applications Engineering Director of the Technical Solutions Sales Team here at Siemens Digital EDA. Welcome, Mike, and thank you for taking the time to talk with me today about what a 3D IC physical design workflow might look like. And before we dive into the discussion, would you mind giving our listeners a brief description of your current role and background?

[01:05] Mike Walsh: Hi, John. Thanks for inviting me on to the podcast. My name is Mike Walsh, and I manage Siemens EDA’s worldwide team of IC packaging product specialists. We have a team of roughly a dozen specialists spread across the globe, supporting our leading-edge customers. I’ve been with Siemens EDA or Mentor Graphics since 1992 and have been focused on the IC packaging domain for the past 10 years or so, working with leading-edge customers on their complex IC packaging design and verification problems.

[01:33] John McMillan: Thanks, Mike. Given there has been a great deal of growth and development in the area of advanced heterogeneous packaging in the past several years. After the architecture has been defined, what are the typical design flow steps recommended, or used, for these multi-die and chiplet heterogeneously integrated assemblies?

[01:53] Mike Walsh: For the purposes of this discussion, I will use the term “substrate” to represent the range of interconnect solutions, which fall under the IC packaging domain. Here at Siemens, we recognize the challenges of designing an organic substrate are very different than designing a silicon interposer. At a macro level, there is quite a bit of similarity; how do I define and implement the desired connectivity regardless of the interconnect technology that’s being used? So, we tend to engage with customers at one of two points in their substrate design process; either very early when the design process is first being planned out and connectivity is being defined, or we engage late in the design cycle when the customers worry about how to verify their complex 2.5D or 3D IC system that they’re building. We approach this from two different directions, and we’ll talk about both. The days of managing a single die package within Excel spreadsheets are long gone, and that was the methodology used by many customers for many, many years. But with the explosion of AI or high-performance compute silicon interposer designs with two, four, sometimes even as many as eight High-Bandwidth Memories or HBMs; the emergence of fan-out wafer-level packaging, embedded bridge technology, and the growth of system in package type designs, which have hundreds, sometimes even thousands of components; the connectivity problem is becoming a focal point for many customers, and one that they’re really worried about. With those customers, we engage early and help them put together a solution to manage this interconnect challenge that has really become a problem of complexity that’s just not well-suited towards a spreadsheet.

[03:38] Mike Walsh: One of the problems we see in particular with interposer design, in its relation to silicon or chip design, which is compounded by the emergence of chiplet solutions, and we see this just growing, is the various disparate data sources. So, a company that’s using chiplets and designing their own silicon to work with them may have inputs in multiple formats; such as GDS from their chiplet provider, they may be working with Verilog and LEF/DEF for their own silicon, they may be working with CSV for the ball out or the C4 bump plan for their interposer. How do they bring all of these various pieces of data together and make sense out of it into something that they can use to manage their connectivity? So, this challenge of not having to reimplement, redesign, recapture the design elements from their native format and use them as they are becomes a real critical part of the design process, especially for those who are worried about this early in the design process and not late in the verification stage. So, a substrate design team needs a solution which can deal with multiple data formats quickly and accurately without having to translate the data into something else and bring it into a single cohesive system representation and netlist.

[05:03] Mike Walsh: So, for an example, multi-die systems – often there’ll be a verification team that’s working with the silicon team to verify the various interfaces between memory or standard interfaces like USB or PCIE or things like that. So, this verification team has got a Verilog representation of the design that’s been verified with a comprehensive test bench for all the various interfaces, working with the actual RTL from the silicon team. It’s well-understood and known good. There’s no reason to recapture that netlist to put a system together for the implementation of an interposer or a package. Design teams want to just import that and use it as is. So, having a solution or a platform that can deal with that sort of input, where you get the standard parts like memory from those vendors and the chip design from the chip team, and then the netlist from the verification team that brings it all together, and be able to import those pieces and use them as-is is really critical. And that’s what Siemens is working on with many of our customers.

[06:08] Mike Walsh: When we have the opportunity, we encourage our customers to think about how they’re going to verify their design and to plan for that from the beginning, and not worry about it until the end. When verification of the independent pieces is left to the very end of the design cycle, often design teams are working with fixed data that they have to adapt to figure out whether it connects correctly. And while we can do that, it does often find issues very late in the design cycle. So, whenever possible, we try to encourage the customers to think about and design their system with that end verification in mind, and using a planning tool to manage all of these data can make the design team’s life significantly easier. And we often will refer to this as a golden netlist, and it’s no longer kept in a spreadsheet somewhere, but kept in a planning tool that replaces the spreadsheet. The Siemens connectivity management solution is a cockpit of sorts. And by bringing all these pieces together, it enables early analysis and collaboration, often referred to as STCO, or system technology co-optimization, by doing early analysis and trade-offs. Doing this from a planning tool and the integrations that that tool offers makes it easy for design teams to make intelligent decisions early in the design cycle. And as the design matures, the questions about correctness can be answered as part of the design cycle and not waiting until the end. So, that’s how we see the design flows maturing with the modern 3D IC type designs that we’re working with customers on these days, John.

[07:55] John McMillan: Well, that’s a pretty comprehensive design flow you described there. Can you give the audience an overview of each step and the relevant, important points of each?

[08:06] Mike Walsh: Sure, I can do my best to highlight some key areas. For planning purposes, obviously, the interconnect between the various devices is important. A physical representation of the various chiplets or the die involved is also needed, along with what the platform is that these devices will sit on. So, there are tools for each of those disciplines, and they each have their own format – the silicon team will use a set of place-and-route tools or a custom layout tool to design the die. And to the rest of the system, that die appears somewhat like a black box. But that black box has to interface with a package or an interposer. So, the interposer design team needs to know what that interface looks like both from a netlist perspective – what’s connected to what – but also the physical bump layout. So, that information can be shared in any number of formats. There are some industry standards such as LEF/DEF; there’s GDS, which is widely used; and often just simple XY reports in Excel or CSV file format are used. And then the other tools will need to consume them. So, the challenge is when things change, how does each tool get updated? Now, formats like LEF/DEF can be passed back and forth between the interposer or package planning tool and the silicon tool; those exchanges are possible and they work well when all the pieces are available to both design teams. And we do have customers that are using the optimization aspects of our planning tools to move bumps around to make the package design simpler and passing that proposed bump change back to the silicon team via DEF files. And then the IC team can accept them, and they become part of the solution.

[10:04] Mike Walsh: At the same time, you have input from a board team, where often a PCB design has influence on the package design. But you don’t have quite the level of industry-standard interchange format between the board and the package or interposer teams, like you do from the silicon world; the silicon data formats are far more standardized and common than the PCB format. Although we have found that ODB++ is an excellent interchange format for bringing in PCB data or package data into the planning environment for collaboration and optimization. So, it is possible and we have many customers working with that, and they’re able to make trade-offs of both the package and the board, and communicate those recommended or suggested changes back to the other design team in formats that are native to those tools, both Siemens and in some of our competitive tools. As for the various implementation solutions, the technology dictates what sort of implementation tool the substrate team will use. And often, it also dictates which implementation team will do it. For example, if a design requires a silicon interposer, often that will be done by a chip design team using an IC place-and-route type tool. If it’s an organic package or a fan-out package, these type of designs are more commonly done with packaging solutions, using a different set of package layout tools, whether from Siemens, from Cadence, or some other providers. All of these solutions can work together in a comprehensive planning environment. So, the Siemens solution is not a full Siemens-only flow, it works well with place-and-route tools from some of our competitors, as well as packaging tools from some of our competitors. They work best with a full Siemens solution, of course, but it’s not dictated to that.

[12:06] Mike Walsh: One of the other aspects of the 3D IC design flow is the physical verification and the differences between physical verification of individual pieces and physical verification of the total solution. What this means is I may be able to verify my interposer by itself or my package by itself. We may assume that the silicon team performed DRC and LVS at the die level, so they’re known good. But how do you verify all these pieces when you put them together? Are they connected correctly? Do they line up correctly? Often you meet with teams that are doing some sort of stacked substrate design – interposer on package or multiple die on a package – there’s almost always a horror story where some part of the layout was done and somebody interpreted one of the specifications as pins up or pins down instead of the correct orientation. And they ended up building and manufacturing a piece of it mirrored from the die or from the interposer. How do you prevent those types of things from happening, those types of design errors from happening? You need to do a comprehensive verification of what we call the entire assembly. And that is to take all of the pieces – the die, the interposer, the package, even the PCB if it’s available – and put it all together in a single environment and verify that all of the pieces line up. How do you account for die scaling? How do you account for flipping and rotation and making sure that all of the die bumps or the interposer bumps in the package all line up and all the things [13:45 inaudible] together?

[13:47] Mike Walsh: Now, with these newer processes, almost everything involves some sort of die scaling. So, you’ve got scale factors. If the die is scaled at 0.9995 and there’s a bump compensation factor on the package, how do you account for all of these things so that when you put everything together and manufacture it, it works? Well, we have a comprehensive set of tools that are driven out of our planning tool to do what we call assembly verification. So, this allows you to make sure that all the bumps are aligned, all the bumps have a corresponding bump on the mating side, whether it’s the die mating to the interposer or the interposer mating to the package; there’s nothing left out and there’s nothing extra. And all the net names are correct across the tools so that we don’t create shorts when they’re connected together; everything may be nice and DRC clean and LVS clean independently, but when you put the pieces together you can introduce connectivity shorts. We have a solution to account for that, which is probably the area that is most active for Siemens in the 2.5D and 3D IC design. People also refer to this as system LVS, and that is kind of doing LVS of the whole entire system to make sure it’s correct across the various design domains. The package design may call a clock net CLK, the interposer team has the same clock signals under similar names, but not exactly the same. And then the die calls them yet again something else. How do you validate the connection from the package BGA bump through the package to the interposer C4 to the die, that that clock connection from the BGA to the die is correct through all the net name transitions, all the interconnect from the various layers of the package, the various layers of interposer? And how do you do all that? The Siemens planning and verification platform allows you to do that. Those type of problems, finding them when you have a multi-substrate, multi-die system; finding those before you fabricate it is really, really important because it’s very expensive when you don’t.

[15:59] Mike Walsh: One of the areas that we are seeing some emergence recently is an area of logical equivalence checking, and this is mostly coming from silicon teams who have been using logical equivalence checking at the chip level for years now, to verify that the netlist, the RTL implementation matches the synthesis and implementation of a physical layout. Similarly, there are questions, “Okay, if I’ve got a verification netlist of my entire system, then I’ve now got a package that maybe wasn’t imported from the verification netlist, maybe it was hand created by somebody maintaining a spreadsheet. How do I know that that netlist is correct?” So, I want to verify that the Excel file that the project lead put together and the netlist that goes into the package implementation tool is the same as what the verification team and design team [16:55 inaudible] it’s equivalent. So, the ability to produce a Verilog netlist of a subset or all of the complex package assembly, the 2.5D or 3D assembly is becoming important to answer that question. Are these two things the same? Yes, they are the same, by teaching equivalence points or defining equivalence points from the package physical netlist against the verification logical netlist. And then similarly, the desire to do actual functional simulation of the package design using the real RTL models that the verification team worked with. So, yet again, verify that the system is, from a physical perspective, implemented correctly, the netlist is correct and functions as expected.

[17:40] John McMillan: Well, thanks, Mike. That was a lot of great information to take in. For listeners who are new to 3D IC design but knowledgeable of traditional semiconductor package design, what areas should they focus on embracing first?

[17:56] Mike Walsh: I think the easiest place to get started is with what we refer to as the assembly verification, that is the ability to take the package netlist and the silicon die or multiple silicon dies and verify that that system is correct, using the same solution we used for multi-die, multi-substrate verification. It can be used for a single die or a single substrate. While those types of verifications are fairly straightforward, we still find customers finding them valuable for identifying mistakes. Especially early in the design cycle, you don’t have to have the complete package implemented in order to find value in this sort of verification because simple things like the “pins up, pins down” type mistake I mentioned earlier, that can be found very early by doing an overlaps check between the data, the silicon, and the package; just to verify we got the orientation correct, we’re working in the correct orientation as the silicon team, and we’ve got all the bumps in the right spot and the silicon team hasn’t changed any of the bump locations since we got their last snapshot. So, those types of checks are quick to run, they’re high value-add, and they’re easy to do early in the design cycle. So, that’s where we recommend getting started. And by setting that up early, it makes it very easy to run those checks every time there’s an update from the silicon team. Or as the design matures and the implementation is done, adding LVS to that type of verification becomes very simple later in the design process and it almost becomes push button. So, that’s where we recommend starting and then growing from there.

[19:40] John McMillan: Thanks, Mike, for this highly informative discussion in this fifth podcast on 3D IC. And thank you, our listeners, for listening and subscribing to this podcast. Be sure to join us for future episodes as we continue to discuss 3D IC and other emerging integration technologies.


About the Siemens 3D IC Design flow

The Siemens 3D IC Design Flow is a comprehensive set of tools and workflows targeted at developing advanced 2.5D and 3D IC heterogeneous system-in-package (SiP) designs. This proven, complete 3D IC design flow spans 3D architecture partitioning, planning, layout, design-for-test, thermal management, multi-die verification, interconnect IP, manufacturing signoff, and post-silicon lifecycle monitoring. Transform existing design and IP architectures into chiplets or build scalable 3D IC technology for faster time to market.

Learn more about Siemens EDA’s market-leading 3D IC technology solution: https://www.siemens.com/3dic


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/semiconductor-packaging/2022/10/24/eda-3d-ic-design-verification-in-multi-die-and-interconnect-systems/