“All models are wrong, but some are useful” Part V

A variant on the original George E. P. Box quote is: “Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful”. For electronics cooling simulation that really depends on what your expectations of the simulation are. Many use CFD to provide insight into, or an understanding of, the thermal behaviour of the system. Others expect a 99.5% accurate simulation of reality. On average, with a certain amount of care, or at least awareness of the issues already covered in this series, a simulation accuracy of ~+/-10% is common. The fifth and final modelling issue that requires some attention is the concept of environment boundary conditions.

When doing any 3D numerical modelling you consider a portion of space (and time sometimes) and get your software to predict what goes on inside (or during). There will always be an edge of your model, the limit at which you don’t model the ‘outside’.  At this boundary you have to set an assumption of what’s happening on the ‘other side’. This condition can often have a very dominant effect on the simulation of the stuff you are considering.  Such things are called… wait for it…. the clue was in the previous text……

Boundary Conditions

Consider the following modelling level:

This modelling level explicitly contains representations of a PCB and the components on that PCB. To the right of the red model box is everything that is not being modelled. However, heat will pass out (and ‘cold’ will pass in) through the interface between modelled and not modelled. Defining where the heat passes to, or rather how difficult it is for the heat to pass out, is done by specifying a parameter, or two.

The most obvious one is temperature, the temperature just outside the 3D space being meshed and predicted. This is effectively an ambient temperature.  If the interface between model and outside (the boundary condition interface) is air to air then this is the temperature air will assume as it is blown or induced into the modelled space. In addition, for such air to air boundary conditions you may have to set whether the air is stagnant (by specifying gauge pressure = 0), or moving in or out (by specifying a velocity).

If there is solid at the interface then you still need to set a temperature for the air on the other side of the wall, but instead of also setting pressure/speed one sets what is called a heat transfer coefficient (HTC). An HTC is defined in terms of W/(m^2 degC). I always think of it as the efficiency with which heat is removed from a surface; its inverse is the degC temperature rise penalty levied for a Watt going through an area of 1 m squared. For the following conduction only package modelling level:
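An HTC boundary condition is just Newton’s law of cooling applied at the model edge. A minimal sketch of what a given HTC implies for surface temperature (illustrative values, not from any particular datasheet or FloTHERM model):

```python
# Newton's law of cooling at a boundary: q = h * A * (T_surface - T_ambient),
# rearranged to give the surface temperature implied by an HTC boundary condition.

def surface_temperature(q_watts, htc, area_m2, t_ambient_c):
    """Surface temperature (degC) for heat flow q_watts (W) leaving through
    area_m2 (m^2) with heat transfer coefficient htc (W/m^2 degC)."""
    return t_ambient_c + q_watts / (htc * area_m2)

area = 0.015 * 0.015  # a 15 mm x 15 mm package top, in m^2

# Bare surface in still air (HTC ~10 W/m^2 degC) vs. an effective HTC of
# ~500 W/m^2 degC standing in for an unmodelled heatsink:
print(surface_temperature(2.0, 10.0, area, 25.0))   # far too hot
print(surface_temperature(2.0, 500.0, area, 25.0))  # a survivable rise
```

Going from ~10 to ~500 W/(m^2 degC) drops the predicted rise by a factor of 50, which is exactly why a heatsink earns its keep.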


The boundary of the model will be all solid interfaces, requiring heat transfer coefficients to be specified on top (to, say, represent the effect of a heatsink that you’re not explicitly modelling; heatsinks are good at increasing HTC, that’s their raison d’être) and bottom (how much Cu, or how many vias, sit under the package will dominate that HTC). Prediction of HTCs due to air flow is really what CFD is best at; I’ll come back to this issue in a subsequent blog.

If you get this boundary condition setting wrong, e.g. inaccurate or badly specified, then this error will propagate back to the heat source temperature prediction. All heat has to go out through the sides of the model, so it’s best to ensure the BCs (boundary conditions) set there are right. To best assure accuracy, the first thing is to choose the best point at which to truncate the model, separating what is meshed and solved from what lies outside and is not.
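How much does a BC error matter? In the simplest case, where all the heat leaves through a single surface with one HTC, the predicted rise scales as 1/HTC, so the error maps straight through (a sketch with made-up numbers):

```python
# Sensitivity of the predicted temperature rise to an error in the boundary HTC,
# for the simplest case where all heat leaves through one surface.

def temp_rise(q_watts, htc, area_m2):
    """Temperature rise (degC) above ambient: q / (h * A)."""
    return q_watts / (htc * area_m2)

true_rise = temp_rise(2.0, 10.0, 0.01)   # rise with the 'real' HTC
wrong_rise = temp_rise(2.0, 8.0, 0.01)   # HTC underestimated by 20%
print(wrong_rise / true_rise)            # -> 1.25: a 25% error in the rise
```

A 20% underestimate of the HTC inflates the predicted rise by 25%, all of which lands on the heat source temperature prediction.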

If in doubt, increase the size of the model a bit until it extends to a point where you are confident of correctly specifying the BCs. Not sure of the HTC the heatsink will bequeath to the top of your component? Then model the heatsink and some of the air around it, or be willing to accept that your package model’s accuracy will be bound by the HTC you specify.

In reality such BCs are most often assumed and the results of the simulation are simply valid for that assumption. Spec sheets will often say “ensure ambient temperature does not exceed 55degC” (is that stagnant air or moving 😉 ). Clever FloTHERM modellers will invert the model definition to predict exactly what the maximum ambient temperature can be for a given power dissipation such that maximum limit temperatures are not exceeded. Nice.
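That inversion is easy to see in a single-resistance sketch: given a junction-to-ambient thermal resistance theta_ja (degC/W, a hypothetical characterised value rather than anything from a real datasheet), the maximum allowable ambient falls straight out of the heat balance:

```python
# Inverting the model: instead of predicting junction temperature from ambient,
# solve T_junction = T_ambient + P * theta_ja for the highest permissible ambient.

def max_ambient(t_junction_max_c, power_w, theta_ja):
    """Highest ambient (degC) keeping the junction at or below its limit."""
    return t_junction_max_c - power_w * theta_ja

# Illustrative: 125 degC junction limit, 2 W dissipation, theta_ja = 30 degC/W
print(max_ambient(125.0, 2.0, 30.0))  # -> 65.0
```

A full CFD model does the same thing with far more resistances in play, but the principle is identical: run the heat balance backwards from the limit temperature.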

Some clever person once said that the best model of the universe is the universe itself. To remove any uncertainty in setting BCs for such electronics cooling simulations your model space should be:


Yeh, OK, FloTHERM’s powerful, fast and robust, but modelling the entire world is a bit beyond even it. So, like it or lump it, you’ll always have to set these BCs (boundary conditions) at the interface between what you model and what you don’t. Either accept what such an assumption imposes or make your model a bit bigger.

2nd July, Ross-on-Wye


6 thoughts on ““All models are wrong, but some are useful” Part V”
  • Ian Clark

    I am reminded of a very famous (or is it infamous?) support case in which FloTHERM’s results on a very simple system were significantly over-predicting case temperatures versus thermocouple measurements provided by the customer. After several days of frustrating phone calls and e-mails, our intrepid colleague jumped in the car and drove to the customer’s facility to see the test bench with the electronics system in question, and discovered that right above the test bench, perfectly centered, was a nice, wonderfully efficient air-conditioning diffuser! The right answer of course is to change the test set-up to be a lot more sensible, but in this case we simply extended the model to the ceiling, modeled the diffuser, and the “correct” answers were of course the result.

    And the moral of the story?

    Always be aware of comparing apples versus oranges (often the case when numerical results are being compared with test measurements!) and bear in mind a famous cautionary truism from the aerodynamics world:

    “No one believes the CFD [simulation] results except the one who performed the calculation, and everyone believes the experimental results except the one who performed the experiment.”

  • Chris Hill

    I’ve done a lot of this sort of exercise when “calibrating” FloTHERM simulations against real test data. Generally our setups are one or more dissipating components, mounted on a PCB in some kind of test fixture – and we’re primarily interested in the device junction temperature(s). Cooling is by natural convection only, not forced. Just simulating the device on its own produces a temperature result which is far too high (as you would expect). Adding the PCB usually lowers the temperature significantly (provided, of course, that the important bits of the PCB are modelled in sufficient detail). Progressively adding more bits of the test fixture allows us to further “home in” on the empirical answer, usually because of modification of the airflow path around the test setup. At some point, though, adding further bits of the setup yields no further change in the simulated answer. At that point, we’ve determined which bits of the setup are important and which aren’t. Working in an air-conditioned lab I learned very early on that excluding airflow from the real tests (rather than attempting to reproduce the air con in simulation) is extremely important – even for airflows of only a fraction of a m/s.

  • Robin Bornoff

    Chris, you’ve described probably the best approach to numerical simulation involving calibration and model refinement to capture the dominant heat flow resistances. Nice.

    A ‘component only model’ with prescribed and fixed boundary HTCs and Ts on the periphery of the component would be valid IF those HTCs were indeed fixed and independent of the heat source. Again, moving those fixed BCs out to a point where they are valid (fixed and independent of the stuff going on inside) is the real key.
