
Assembly level layout vs. schematic in 3D IC design verification

By Heather George

In our fifth podcast on 3D IC design workflows, we discussed what a 3D IC physical design workflow looks like, from prototyping and planning to system technology co-optimization to substrate routing and design verification. Today, we will discuss a 3D IC design verification workflow, from assembly level layout vs. schematic using assembly design kits to electrical modeling and parasitic extraction of critical signals. 

Watch the 3D IC podcast episode: 3D IC integration challenges

If you prefer video to text, watch this 21-minute video on challenges associated with 3D IC integration and the components required to make it possible.

We know that everyone has their own preferred learning style, and we want to make sure you can get the most out of this content in the way that works best for you. So, whether you prefer to read the copy below or watch the full episode above, we hope you find this information helpful!

Manufacturing challenges for 3D IC heterogeneous packaging assemblies

Traditional IC designs rely heavily on sign-off strategies for manufacturing. Sign-off components typically come in the form of decks from the design kit – design rule decks, layout vs. schematic decks, reliability decks, etc. Design kits are process-specific and work well in an SoC flow where everything is in a single process. But the model starts to break down in heterogeneous designs that combine components from different processes.

Layering is the first place things get muddy. A long-standing implied convention treats everything drawn on a single layer in an SoC as coplanar: if two polygons overlap or sit above each other, they form one polygon. This breaks down when chips or dies stacked on top of each other happen to have shapes drawn on the same layer. Viewed top down, they appear to be on the same layer and the convention would treat them as coplanar, but they sit at different vertical depths.
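The coplanarity pitfall above can be sketched in a few lines of Python. The shapes, layer numbers, and z offsets below are invented for illustration; the point is only that a 3D-aware check must compare vertical position, not just layer number:

```python
# Minimal sketch: why "same GDS layer" no longer implies "coplanar" in a
# stacked assembly. Each shape carries the die it belongs to and that die's
# vertical offset; two rectangles merge only if they share a physical z-plane.
# All names and numbers here are illustrative, not from any real design kit.

from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    die: str        # which die instance the shape comes from
    gds_layer: int  # raw drawn layer number
    z_um: float     # vertical position of the layer in the assembly stack
    x0: float; y0: float; x1: float; y1: float

def overlap_xy(a: Rect, b: Rect) -> bool:
    """True if the rectangles overlap when viewed top down."""
    return a.x0 < b.x1 and b.x0 < a.x1 and a.y0 < b.y1 and b.y0 < a.y1

def coplanar_merge(a: Rect, b: Rect) -> bool:
    """Flat SoC rule: same layer + overlap => one polygon.
    In a 3D assembly, the z position of each die must match too."""
    return a.gds_layer == b.gds_layer and a.z_um == b.z_um and overlap_xy(a, b)

top = Rect("die_top", 34, 780.0, 0, 0, 10, 10)  # metal1 of the upper die
bot = Rect("die_bot", 34, 0.0,   5, 5, 15, 15)  # metal1 of the lower die

# Top-down they overlap on layer 34, but they are not the same conductor:
assert overlap_xy(top, bot) and not coplanar_merge(top, bot)
```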

The second challenge is the multiple processes themselves. How do we model the different processes interacting with each other, and check between them, without unnecessarily checking the wrong interactions?

Moving forward with design kits and heterogeneous designs

Moving forward with design kits and heterogeneous designs is tricky. A foundry or an OSAT can’t simply hand you a design kit that works out of the box for any combination of processes you want to pull together. However, designers know the processes and decide what to bring together into a combined environment and how. Capturing that essential information calls for planning tools that guide the designer through floor planning and collect a picture of what the components are and how they are stacked vertically.

Teams also need to separate the checking of specific elements from the individual layer definitions – essentially a form of abstraction. Two different processes may use very different GDS or OASIS layer numbers to represent similar structures. It becomes important to recognize structures intended for the same purpose, even if they’re not on the same layer or not coplanar, and pulling that insight from early design information is the easiest way to get it.
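As a rough sketch of that abstraction, a per-process mapping from raw GDS layer numbers to a canonical purpose lets checks compare intent across processes. The process names and layer numbers below are hypothetical:

```python
# Sketch of the "abstraction" step: different processes use different GDS
# layer numbers for structures that serve the same purpose. A per-process map
# to canonical names lets inter-die checks reason about intent rather than
# raw layer numbers. All names and numbers are made up for illustration.

LAYER_PURPOSE = {
    "foundry_A_n5":  {34: "metal1", 85: "top_metal", 120: "ubump_pad"},
    "foundry_B_n28": {17: "metal1", 62: "top_metal", 99:  "ubump_pad"},
}

def purpose(process: str, gds_layer: int) -> str:
    """Resolve a raw layer number to its canonical purpose for one process."""
    return LAYER_PURPOSE[process].get(gds_layer, "unknown")

# Same intent, very different raw layer numbers:
assert purpose("foundry_A_n5", 120) == purpose("foundry_B_n28", 99) == "ubump_pad"
```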

Layout vs. schematic (LVS) and design rule checking (DRC) for 3D IC heterogeneous designs

The difference between design rule checking (DRC) and layout vs. schematic (LVS) in the SoC world:

  • DRC confirms that what you’ve created as a layout architecture will be manufacturable
  • LVS confirms that, beyond being manufacturable, you have correctly created something that represents the electrical structure and behavior you intended

When we start talking about combining disaggregated chiplets, it is a little more complicated. It’s still the same concept based on layout vs. schematic: check that the behavior once manufactured matches the electrical intent.

First, complete the netlist and simulation. With the design pre-characterized and the initial placement determined, you can verify that the design will behave as intended. Then, compare the detailed analysis of all the dies, the layers, and the devices within them to ensure that they match what you intended.

Teams need to take that information in to have something to compare against. This process requires a source netlist, which comes in different forms:

  • a Verilog netlist for a digital assembly
  • for designs coming from the package world, a comma-separated values (CSV) netlist or set of descriptions
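As an illustration of the second form, a package-style CSV netlist can be as simple as rows naming which component pin sits on which net. The column layout, component names, and nets below are invented, not a specific vendor format:

```python
# Parse a generic package-style CSV netlist into a net -> pins map, the kind
# of source connectivity an assembly-level LVS would compare the extracted
# layout against. The schema and names are illustrative only.

import csv, io
from collections import defaultdict

CSV_NETLIST = """net,component,pin
DDR_DQ0,die_logic,B3
DDR_DQ0,die_hbm,A7
VDD,die_logic,C1
VDD,die_hbm,C2
VDD,interposer,PWR1
"""

def parse_csv_netlist(text: str) -> dict:
    """Build {net_name: [(component, pin), ...]} from CSV rows."""
    nets = defaultdict(list)
    for row in csv.DictReader(io.StringIO(text)):
        nets[row["net"]].append((row["component"], row["pin"]))
    return dict(nets)

nets = parse_csv_netlist(CSV_NETLIST)
assert nets["DDR_DQ0"] == [("die_logic", "B3"), ("die_hbm", "A7")]
assert len(nets["VDD"]) == 3
```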

Another challenging aspect is handling interposer-type components, whether interposers in silicon or in IC package compound. Unfortunately, passive devices don’t fit naturally into layout vs. schematic. Traditional layout vs. schematic relies on a netlist that lists devices and pins labeled by net name. By observing the same net name associated with two or more pins, we can assume those items are electrically connected.

With purely passive items, there’s no tag. All you have is wires, with nothing for them to connect to, so on their own they have no electrical behavior. As part of a larger assembly the wires do connect to things, but those connections are external. Therefore, we need a way to understand passive components. Knowing the wire placements and material information is critical to generating accurate post-assembly netlists and simulation results.
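One way to picture recovering connectivity for a purely passive component is to treat touching wire segments as one electrical node, for example with a union-find over the geometry. This is a simplified sketch with invented coordinates, not how any particular LVS tool implements it:

```python
# With no device pins carrying net names, connectivity on a passive interposer
# must be recovered from geometry: wire segments that touch belong to one
# electrical node. A union-find over touching segments sketches that idea.

class DSU:
    """Minimal union-find with path halving."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, i):
        while self.p[i] != i:
            self.p[i] = self.p[self.p[i]]
            i = self.p[i]
        return i
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def touches(a, b):
    """True if two axis-aligned wire rectangles overlap or abut."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 <= bx1 and bx0 <= ax1 and ay0 <= by1 and by0 <= ay1

# Three wire segments: 0 and 1 abut end-to-end, 2 is isolated.
wires = [(0, 0, 10, 1), (10, 0, 20, 1), (0, 5, 10, 6)]
dsu = DSU(len(wires))
for i in range(len(wires)):
    for j in range(i + 1, len(wires)):
        if touches(wires[i], wires[j]):
            dsu.union(i, j)

nodes = {dsu.find(i) for i in range(len(wires))}
assert len(nodes) == 2  # two electrical nodes recovered from geometry alone
```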

Impacts of through-silicon vias (TSVs) in heterogeneous integration structures and other extraction challenges

3D IC integration requires new components, primarily through-silicon vias (TSVs), which connect the front- and backside metal stacks in the dies to allow for vertical die stacking. TSVs also appear on interposers, alongside micro bumps for vertical connection of dies to interposers, copper pads for hybrid bonding, RDLs to connect various 3D IC chips, and through-dielectric or through-mold vias used for shortcut connections between system components embedded in the mold.

New components introduce new parasitics into the system, and those parasitics can impact delay, noise, signal integrity, power, and whether system design requirements are satisfied. In addition, these components are vertically stacked and closer to each other, so their parasitics affect one another due to proximity. All the new components and their intra-die and inter-die interactions and interfaces need proper modeling and must be accounted for in downstream simulation.

Additional extraction challenges come from increasing 3D IC system operating speeds, such as fast serial links and high-speed parallel connections between HBM and logic. Accurate high-frequency extraction requires parasitic inductance extraction in addition to the usual resistance and capacitance extraction. Inductive effects are long-range and require extraction of many couplings, resulting in a very large netlist, so very efficient netlist reduction is needed to produce a reasonably sized netlist that available simulators can handle.
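To see why reduction matters, a toy example: mutual couplings grow roughly quadratically with the number of conductors, and a crude reduction drops terms below a relative threshold. The coefficients below are synthetic, and real flows use far more careful, accuracy-controlled reduction:

```python
# Inductive coupling is long-range: with N conductors there are N*(N-1)/2
# mutual terms, so the raw parasitic netlist explodes. A crude but simple
# reduction keeps only couplings above a relative threshold. Synthetic data.

def prune_couplings(couplings, rel_threshold=0.01):
    """Keep only mutual terms >= rel_threshold * largest coupling."""
    if not couplings:
        return {}
    cutoff = rel_threshold * max(couplings.values())
    return {pair: k for pair, k in couplings.items() if k >= cutoff}

# Invented mutual-coupling coefficients between conductor pairs:
k = {("n1", "n2"): 0.30, ("n1", "n3"): 0.05, ("n1", "n4"): 0.002,
     ("n2", "n3"): 0.25, ("n2", "n4"): 0.001, ("n3", "n4"): 0.20}

kept = prune_couplings(k, rel_threshold=0.05)  # cutoff = 0.05 * 0.30 = 0.015
assert ("n1", "n4") not in kept and ("n2", "n4") not in kept
assert len(kept) == 4
```

Production netlist reduction preserves port behavior to a stated accuracy rather than simply truncating; this sketch only shows the size-versus-fidelity knob.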

In heterogeneous systems, the different dies and components are made in various technologies and technology nodes, each carrying its own process variation. Accounting for this variation requires analyzing combinations across multiple technology nodes, which increases both complexity and simulation time. Teams need methods to reduce the number of combinations while still maintaining analysis accuracy.
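The combinatorial blow-up is easy to illustrate: with three components and three corners each, a naive cross-product already yields 27 analyses. The corner names and the "aligned corners" reduction below are purely illustrative, not a recommended methodology:

```python
# Each die in a heterogeneous stack carries its own process corners, and
# naive analysis multiplies them across dies. Names below are invented.

from itertools import product

die_corners = {
    "logic_n5":   ["ss", "tt", "ff"],
    "hbm_die":    ["ss", "tt", "ff"],
    "interposer": ["cmin", "cnom", "cmax"],
}

full = list(product(*die_corners.values()))
assert len(full) == 3 ** 3  # 27 combinations for just three components

# Toy reduction: analyze only "aligned" slow/nominal/fast cross-sections.
aligned = [("ss", "ss", "cmax"), ("tt", "tt", "cnom"), ("ff", "ff", "cmin")]
assert all(c in full for c in aligned) and len(aligned) == 3
```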

Available tools to extract design information in heterogeneous designs

Choosing a tool to extract information from heterogeneous packaging designs requires assessing what to model and what accuracy is needed. Higher accuracy requires more complex models and more sophisticated tools. Among the various extraction tools available, the selection typically comes down to rule-based tools with good performance versus field solver-based tools with good accuracy.

For high accuracy, teams use field solvers (especially at high frequencies) but struggle with their complexity, performance, and integration into the flow. TSV parasitics are a good example. The simplest method is to build accurate TSV models offline using foundry measurements, then, during interconnect parasitic extraction, insert the TSV models at the TSV locations using high-performance, rule-based tools.

It’s more challenging to handle TSV couplings. One method for TSV-to-TSV coupling uses parameterized tables for coupling resistance and capacitance parasitics through the substrate. But coupling parasitic tables have significant limitations, because it is not easy to account for all layout situations with a limited number of parameters. Full-wave solvers provide more accuracy but are too slow for large numbers of TSVs. In this case, the optimal solution is a specialized field solver fast enough to be practical for extracting the entire TSV set on dies and interposers. So, in some instances, teams may need a combination of tools for future high-frequency extraction.
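A parameterized coupling table can be sketched as a lookup with interpolation. The pitch and capacitance values below are invented; real tables come from foundry data and carry more parameters (depth, diameter, frequency), and that limited parameterization is precisely their weakness:

```python
# Sketch of a parameterized TSV-to-TSV coupling table: substrate coupling
# capacitance tabulated against pitch, with linear interpolation between
# entries. All numbers are synthetic, for illustration only.

from bisect import bisect_left

PITCH_UM = [10.0, 20.0, 40.0, 80.0]  # TSV center-to-center pitch (um)
CC_FF    = [5.0,  2.6,  1.1,  0.4]   # coupling capacitance (fF)

def coupling_cap(pitch_um: float) -> float:
    """Linearly interpolate coupling capacitance from the table."""
    if pitch_um <= PITCH_UM[0]:
        return CC_FF[0]
    if pitch_um >= PITCH_UM[-1]:
        return CC_FF[-1]
    i = bisect_left(PITCH_UM, pitch_um)
    t = (pitch_um - PITCH_UM[i - 1]) / (PITCH_UM[i] - PITCH_UM[i - 1])
    return CC_FF[i - 1] + t * (CC_FF[i] - CC_FF[i - 1])

assert coupling_cap(10.0) == 5.0
assert abs(coupling_cap(30.0) - 1.85) < 1e-9  # halfway between 2.6 and 1.1
```

Layouts whose geometry falls outside what the table's parameters capture are exactly where such tables lose accuracy and a field solver becomes necessary.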

Parasitic extraction and analysis for heterogeneous IC packaging

From a design perspective, the tools used in the two different IC packaging technologies are very different:

  • silicon-based tools deal with place-and-route and typically handle only orthogonal shapes
  • organic-substrate tools lean toward traditional PCB-oriented and organic IC packaging structures and handle arbitrary shapes and any-angle routing

From the analysis perspective, where the source data comes from can present significantly different challenges, especially for EM extraction from the signal integrity perspective. The place-and-route tools will stream out GDS data, but a traditional EM extraction tool requires a lot of data preparation to bring those structures in and generate the RLGC parasitics used for further analysis. It requires manual setup: understanding the metallization layers, identifying what to bring out of the GDS data, and ordering those layers for the proper connectivity. These extra manual steps lengthen the development and analysis processes.

On the other side, the more PCB-oriented types of tools tend to have more intelligence associated with the data. Teams can use net names and structure types (signal or power) to enhance the performance of extractions and analysis. With this intelligent data, teams can start performing extractions and analysis earlier to understand parasitics and their impact on the electrical intent of the full system in the chiplet design. By pulling parasitics into the top-level netlist early in the process, teams can identify changes needed to the floor plan to reduce inductance, for example.

From the manufacturing side, this data can present challenges. Teams can spend more time just trying to understand and process the data than it takes to run the analysis and get an answer. As always, there is a trade-off between accuracy and performance: it’s about using the right set of analysis capabilities at the right time to complete trade-offs earlier in the process and still sign off the total design at the end.

Want to learn more about the impact of 3D IC on physical design workflows? Listen to the podcast now, available on your favorite podcast platform.

Transcript

[00:10] John McMillian: Welcome to Siemens EDA podcast series on 3D IC design, brought to you by the Siemens Thought Leadership team. In our fifth podcast on 3D IC design workflows, we talked about what a 3D IC physical design workflow looks like from prototyping and planning through to system technology co-optimization through to substrate routing and verification. Today, we will discuss what a 3D IC verification workflow looks like from assembly level LVS using assembly design kits, through to electrical modeling and parasitic extraction of critical signals. I’m pleased to introduce my special guest today, John Ferguson, Director of Product Management; Dusan Petranovic, Principal Technologist; and Steve McKinney, Account Technology Manager. Welcome gentlemen, thank you for taking the time to talk with me today about what a 3D IC verification workflow might look like. And before we dive into the discussion, would you mind giving our listeners a brief description of your current role and background?

[01:07] John Ferguson: Sure. This is John Ferguson. I am in charge of Calibre DRC at Mentor and Siemens EDA. And I also drive a lot of our 3D IC integration work from the Calibre side.

[01:21] Steve McKinney: I’m Steve McKinney. I’m an Account Technology Manager, and work in our signal integrity, power integrity, electromagnetic space for package and board technologies. I’ve been in the signal integrity design area for roughly the last 20 years.

[01:40] Dusan Petranovic: This is Dusan Petranovic. I’m a Principal Technologist working on parasitic extraction. And I’m now focused on our extraction tool, foundry qualifications on advanced technology nodes on 3D IC extraction methodology, and on the high-frequency parasitic extraction.

[02:01] John McMillian: Thank you, gentlemen, appreciate that. So, there has been a great deal of growth and development in the area of advanced heterogeneous packaging in the past several years. Aside from design and analysis, what challenges do such assemblies require to ensure manufacturability and reliability?

[02:18] John Ferguson: That’s an interesting question. Traditionally, in the IC space, when we’re doing design and going into manufacture, we rely very heavily on sign-off strategies. The sign-off components are things that we get usually in the form of decks – design rule decks, LVS decks, reliability decks, etc – that come in the form of a design kit. These kits are process-specific, work perfectly well for an SOC experience where everything is in a single process. But it starts to break down when you’re doing heterogeneous processes, combining components from lots of different processes together. In particular, things get a little bit crazy when we start thinking about how do we handle the different layering. There’s sort of a historic implied convention, if you will, that everything that is on a single layer in an SOC, a drawn layer in GDS or OASIS, that it’s coplanar. So, if you have two polygons that overlap or above each other, they form one polygon. You can imagine that can get pretty complicated when you have chips or die on top of each other that happened to have items drawn on the same layer. If you’re looking from top down, it appears as though they are the same layer and you would have assumed that they are coplanar, but of course, they’re not right, they’re at different vertical depths. So, that creates certainly a challenge. There’s also, frankly, the issue that you do have multiple processes. And how do we intertwine and understand that I’ve got different processes interacting with each other, but I need to be able to check between them without checking unnecessarily the wrong interactions?

[04:18] John McMillian: That’s very interesting. So, given the need for design kits, which imply everything is known with the design dependency involved, how do we reconcile and move forward?

[04:28] John Ferguson: It’s tricky. It’s interesting. A foundry or an OSAT can’t simply hand you a design kit that’s kind of work out of the box for any given combination of processes that you want to pull together. But the designer knows what their processes are, so when they are deciding what they’re going to bring together into a combined environment, and how they’re going to combine them in there together; we need to take some of that information from that user. That really calls for a need for planning tools to help the user decide how they’re going to do their floor planning and provide information to us in the form of what are these things and how are they stacked vertically, and not just what are they spread out as if in an SOC. We also need to separate out checking of the specific elements from the individual layer definitions. And really, this essentially becomes a form of abstraction. We know that for two different processes, you may have very different layer numbers in GDS or OASIS that are used to represent a kind of similar structure for the two processes. So, understanding things that are intended to do the same purpose, even though they’re not on the same layer or not coplanar becomes important. And again, pulling that in based off of the early design information is the easiest way to get that information.

[06:10] John McMillian: Thanks, John. So, with this approach, we can verify the assembly is properly aligned. But does that mean it actually works as expected?

[06:18] John Ferguson: Well, this is where the difference between DRC and LVS would come up in the SOC world. Design Rule Checking means what you’ve created as a layout architecture is going to be manufacturable. LVS says, not only is it going to be manufacturable, but you have correctly created something that represents the electrical structure and behavior that you intended. When we start talking about disaggregate chiplets combined together, it gets a little bit more complicated, but ultimately, it’s still the same concept; we need to do something based off of LVS, where we’re going to check back to make sure what your behavior you’re going to get once manufactured matches what you intended to do electrically. The way to do this kind of goes back to what we’ve always done. Somebody needs to have done some net listing beforehand, and have done simulation. You have pre-characterized and determined based off of initial placement information; “Yes, this is going to behave the way that I intended to.” Now, we’re going to go and compare the actual detailed analysis of all of the dies and all of the layers and all of the devices within them to make sure that they actually do match to what you intended.

[07:46] John Ferguson: So, this does require that you have a source netlist, and getting that not everybody has it, sometimes it’ll come in the form of a Verilog for a digital assembly; sometimes if you’re coming from a package world, it will come in more of a Comma-Separated Values netlist or set of descriptions, if you will. But we need to be able to take that information in to have something that we’re comparing against. There’s also an interesting side part of this, which is, what do we do about interposer-type components. This could be interposers in silicon or interposers in package compound. They could be chips or full die interposers. Doesn’t necessarily really matter but these are passive components. Passive devices don’t really work in LVS. The way traditional LVS works is you need to have– If you think about a spice netlist, for example, a spice netlist will list out the devices and the pins on the devices and will label them by net name. So, by implying or observing the same net name associated with two or more pins, you know they’re electrically connected. But if you have something that’s purely passive, you don’t have anything to tag that on. All you have is a bunch of wires, you have nothing for it to connect to, and so therefore, it has no electrical behavior. But in a sense of a larger assembly, of course it does, those wires are going to connect things but they’re external.

[09:27] John Ferguson: So, we need a way to understand the passive components. And in certain cases, you may have something that’s passive that also has embedded intentional passive devices in them: DCAPs or certain resistors inserted, we’ve seen photonic elements inserted, all kinds of interesting things that can happen in there but they’re not your traditional active device. You still need to be able to handle those while treating the passive in a sense that understands that we’re not driving the actual circuit behavior, we’re just an interconnection port. And ultimately, this then really becomes the same vehicle that drives the input into parasitics. If you think about parasitics, it’s really kind of the same component except for you didn’t intend to or intentionally create these caps and resistors in there. They’re embedded by nature into those components. And if you want to understand their impacts, you’re still going to have to extract them. So knowing the different wire placements, and materials information, and all of that is going to be critical to getting accurate post-assembly generation netlisting in simulation results.

[10:55] John McMillian: So, what are the needs and challenges that impact heterogeneous integration structures?

[11:01] Dusan Petranovic: In order to make 3D IC integration possible, as you know, new components are needed; those include through-silicon vias that connect the front and backside metal stack in the dies to allow for vertical die stacking, TSVs on the interposers, micro bumps for vertical connection of dies to interposers, copper pads for hybrid bonding, RDLs to connect various 3D IC chips, through-dielectric vias or through-mold vias that are also used for some shortcut connection between system components embedded in the mold. So, all of those new components introduce new parasitics into the system. And those parasitics can impact delay, noise, signal integrity, power, and have impact on satisfying system design requirements. In addition, the 3D IC components like dies, interposers, they get vertically stacked and closer to each other, affecting parasitics of each other due to close proximity. So, all of those new components and their intra-die, inter-die interactions and interfaces have to be modeled properly and taken into account in the downstream simulation. Additional challenges in extraction are increasing speeds of 3D IC system operation – good examples are fast serial links, high-speed parallel connection between HBM and logic. And there is definitely a need for accurate high-frequency extraction. So, that requires, in addition to usual resistance and capacitance extraction, parasitic inductance extraction. So, these inductive effects are long range. They require extraction of a large number of couplings, resulting in a very large netlist. Consequently, very efficient netlist reduction is needed to come up with a reasonable size that can be handled by the available simulators. And finally, I will say that in the heterogeneous systems, the different dies and components are made in different technologies, different technology nodes, each carrying their own process variation and process corners.
And they have to be taken into account; multiple technology nodes increase the number of combinations that have to be analyzed, increasing both complexity and simulation time. So, methodologies are needed for reducing the number of combinations while still maintaining analysis accuracy. So, a lot of challenges that have to be answered.

[13:56] John McMillian: Yes, it sounds like it. So, what tools are available or needed, and how to make a selection of the right extraction methodology and tools?

[14:05] Dusan Petranovic: Yeah, it is very important to know what to model and what effects are important to be modeled, what accuracy is needed. So, a higher accuracy definitely requires more complex models and more sophisticated tools. So, there are various extraction tools available, but it typically comes down to the selection between the rule-based tools with good performance and field solver-based tools with good accuracy. So, field solvers, and even full-wave solvers, are needed for very high accuracy, especially at high frequencies. But they have problems with complexity, with performance, with integration in the flow. So, one example to illustrate this is handling of, let’s say, TSV parasitics. The simplest methodology is to come up offline with accurate TSV models using foundry measurements and their internal full-wave solvers. And then during interconnect parasitic extraction procedure, simply insert TSV models at the TSV locations; that can be effectively or efficiently done with high-performance rule-based tools. It is more difficult to handle TSV couplings, and various companies approach this differently. One method, for example, for TSV-to-TSV coupling is to use parameterized tables for coupling resistance and capacitance parasitics through the substrate. But those coupling parasitic tables have significant limitations, since it is not easy to account for all layout situations with a limited number of parameters. So, the most accurate would be to use full-wave solvers, but they are too slow for a large number of TSVs in the real design. So, in this case, optimal solution would be kind of specialized field solver, and to make those solvers fast enough to be able to practically use them for the entire TSV set extraction on dies and interposers. So, some combination of those tools might be needed in the future high-frequency extraction.

[16:20] John McMillian: 3D IC being [16:21 inaudible] organic connectivity. What challenges does that present from a parasitic extraction and analysis perspective?

[16:29] Steve McKinney: From a design perspective, the tools that are used in these two different packaging technologies are very different – silicon-based tools deal with place and route, they typically can only handle orthogonal shapes. And in the organic space, it has tools that are, I’d say, more traditional towards like a PCB oriented and organic packaging type of structures, and they handle arbitrary shapes and angle routing. There are pros and cons to both of those environments. But really, from the analysis perspective, where that source data comes from can present significantly different challenges. Specifically, looking at it from a signal integrity perspective and trying to run EM extractions and do some of the things that Dusan was just mentioning. The place and route tools will stream out GDS data, but a traditional EM extraction tool, like a full-wave tool that he was mentioning there takes a lot of data preparation in order to be able to bring those structures in for a full-wave solve to generate those RLGC parasitics that are necessary for the further analysis. You have to go through a lot of manual setup, understanding which metallization layers and things to bring in out of that GDS data, how to order all those things properly so that you get the right connectivity. So, just takes a lot of extra steps, which lengthens the development process and analysis process.

[18:11] Steve McKinney: On the other side, with the more PCB-oriented types of tools, those tend to have more intelligence associated with the data, where here you might understand things like the net names and the different types of structures they are, whether it’s a signal or a power. Those types of information can be used to enhance the performance of the extractions and the analysis that you’re doing. So, if you can take more intelligent side of data, it allows you to actually push things a little bit further upstream into your design process, meaning that you can start to perform these types of extractions and analysis to understand those parasitics and how they’re going to impact that electrical intent of that full system in the chiplet design. And so you can take all those different parasitics and bring that into that top-level netlist early in the process and say, “Okay, based on these types of parasitics that we’re seeing, we need to change the floor plan, and move some things around in order to reduce inductance.” So, that data and the way it’s represented also gives a bit of a challenge on the manufacturing side. So, for example, if you’re dealing with a silicon-based design, and you need to balance out that design, you’re going to end up with hundreds of thousands of little clusters in that silicon substrate, potentially. And from an EM extraction perspective, if you’re running this in full wave or hybrid type of extraction to generate those parasitics, the processing of that information is just very extensive. And you can spend more time just trying to understand that data and process it versus the time it takes to actually run the analysis to get you an answer.
So, those types of things can impact the time it takes to get to an answer, the accuracy of your answer, and as Dusan has mentioned, it’s kind of just a trade-off between accuracy and performance and trying to use the right set of analysis capabilities at the right time, so that you can make these trade-offs earlier in the process and still have the capability to do a sign off of the total design at the end.

[20:46] John McMillian: Thanks, Steve. That’s a lot of great information. Well, that’s our time for today. Thank you all for the highly informative discussion in this podcast series on 3D IC. And we’re looking forward to the future podcast with you on this topic.


About the Siemens 3D IC Design flow

The Siemens 3D IC Design Flow is a comprehensive set of tools and workflows for developing advanced 2.5D and 3D IC heterogeneous system-in-package (SiP) designs. This proven, complete 3D IC design flow spans 3D architecture partitioning and planning, layout, design-for-test, thermal management, multi-die verification, interconnect IP, manufacturing signoff, and post-silicon lifecycle monitoring. Transform existing design and IP architectures into chiplets or build scalable 3D IC technology for faster time to market.

Learn more about Siemens EDA’s market-leading 3D IC technology solution: https://www.siemens.com/3dic


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/semiconductor-packaging/2022/11/08/eda-assembly-level-layout-vs-schematic-in-3d-ic-design-verification/