The length scales represented in electronics cooling problems can span 11 orders of magnitude: from individual transistors that are measured in nanometers (of order 10⁻⁹ m) to entire datacenters (of order 10² m).
Now obviously, no tool can account for every single electronic component in a datacenter cooling simulation. Even if it were possible to do so, it is doubtful that such a simulation would provide any additional useful information. Out of necessity, engineers use a combination of simplifying assumptions and imposed boundary conditions to focus the simulation on the length scales that matter most (applying generous amounts of “engineering judgment” in the process). However, care must be taken not to oversimplify: if the assumptions are too sweeping, or the imposed boundary conditions too unrepresentative, then the results predicted by the simulation begin to diverge from those that would occur in reality. When this happens, no amount of judgment (engineering or otherwise) can rescue useful data from the simulation.
Worse still, wrong or inaccurate results can mislead the design process, potentially sending the product up a non-optimal design branch. So, ideally your simulation tool will allow you to solve problems across multiple length scales. Instead of just simulating flow across a single circuit board, you want to be able to model a whole blade server, or better still, how several blade servers interact with each other and their environment.
Natural convection and thermal radiation
In traditional forced convection “air-cooled” systems, thermal radiation plays a relatively minor role in the overall heat transfer, typically accounting for less than 5 percent of the thermal energy rejected by the system (with the rest split evenly between convection and conduction). However, in “no flow” situations, whether by design or in unintentionally “dead” regions of the compartment, radiation plays a much more important role, accounting for between 30 and 50 percent of the heat transfer. Although it is simply neglected in many simulations, for the reasons described below, including radiation heat transfer in a simulation will generally act to decrease the maximum temperature in the system, spread out the temperature distribution and reduce the exterior surface temperature (touch temperature).
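To put the “no flow” case in perspective, here is a minimal back-of-the-envelope sketch (plain Python, not part of any simulation tool) comparing the net radiative flux from a hot surface with the flux removed by still-air natural convection. The emissivity and the natural convection coefficient are assumed, textbook-range values chosen purely for illustration.

```python
# Rough comparison of radiative vs. natural-convective heat flux from a hot
# surface in still air. The emissivity (0.9) and convection coefficient
# (7 W/(m^2 K)) are illustrative, textbook-range assumptions, not data from
# any particular product or simulation.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiative_flux(t_surface_k, t_ambient_k, emissivity=0.9):
    """Net radiative flux to large, cool surroundings (grey-body assumption)."""
    return emissivity * SIGMA * (t_surface_k**4 - t_ambient_k**4)

def natural_convection_flux(t_surface_k, t_ambient_k, h=7.0):
    """Newton cooling with an assumed h typical of still air."""
    return h * (t_surface_k - t_ambient_k)

t_s, t_amb = 350.0, 300.0            # 77 C surface, 27 C ambient
q_rad = radiative_flux(t_s, t_amb)
q_conv = natural_convection_flux(t_s, t_amb)
print(f"radiation:  {q_rad:6.0f} W/m^2")
print(f"convection: {q_conv:6.0f} W/m^2")
print(f"radiation share: {q_rad / (q_rad + q_conv):.0%}")
```

With these assumed values the two mechanisms carry a comparable share of the heat, which is consistent with radiation contributing 30 to 50 percent once forced flow is absent.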
However, including radiation heat transfer can significantly increase the computational overhead of a simulation, as view factors (basically lines of sight) must be calculated for every computational face of every component in the system. Although these view factors need only be calculated once per geometry, this process can be computationally expensive even for a large, uncluttered enclosure. In a typical crowded electronics enclosure, consisting of hundreds, if not thousands, of components, calculating these view factors is beyond the capability of a single-processor machine (both in terms of the memory required and the physical time needed to complete the calculation). By including radiation, you can reveal the effects of an additional heat flow path – this is critical in low flow or no flow situations and can be important when trying to squeeze every degree of cooling from a forced convection system.
STAR-CCM+ includes parallel view factor calculation, which allows users to exploit the processing power of multiple computer cores when performing radiation view factor calculations.
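The calculation itself is conceptually simple but grows quickly with the number of face pairs, which is why it parallelizes so naturally. The sketch below (plain Python with NumPy, not STAR-CCM+ code) estimates patch-to-patch view factors with a simple Monte Carlo integral and farms the face pairs out across CPU cores; occlusion by other components, which a real solver must handle, is deliberately ignored to keep the example short.

```python
# Illustrative Monte Carlo estimate of view factors between flat patches,
# parallelized across CPU cores with Python's multiprocessing module. This is
# NOT how STAR-CCM+ computes view factors internally; it only sketches why the
# cost scales with the number of face pairs and why the work splits cleanly
# across cores. Blocking by intervening components is ignored.

import numpy as np
from itertools import combinations
from multiprocessing import Pool

def make_patch(origin, u, v):
    """A flat parallelogram patch defined by an origin and two edge vectors."""
    origin, u, v = map(np.asarray, (origin, u, v))
    normal = np.cross(u, v)
    area = np.linalg.norm(normal)
    return {"origin": origin, "u": u, "v": v,
            "normal": normal / area, "area": area}

def view_factor_mc(patch_i, patch_j, n_samples=20000, seed=0):
    """Monte Carlo estimate of F_ij = (1/A_i) * integral over both patches of
    cos(theta_i) * cos(theta_j) / (pi * r^2)."""
    rng = np.random.default_rng(seed)
    si = rng.random((n_samples, 2))
    sj = rng.random((n_samples, 2))
    p_i = patch_i["origin"] + si[:, :1] * patch_i["u"] + si[:, 1:] * patch_i["v"]
    p_j = patch_j["origin"] + sj[:, :1] * patch_j["u"] + sj[:, 1:] * patch_j["v"]
    r_vec = p_j - p_i
    r2 = np.einsum("ij,ij->i", r_vec, r_vec)
    r = np.sqrt(r2)
    cos_i = np.clip(r_vec @ patch_i["normal"] / r, 0.0, None)
    cos_j = np.clip(-r_vec @ patch_j["normal"] / r, 0.0, None)
    kernel = cos_i * cos_j / (np.pi * r2)
    return float(kernel.mean() * patch_j["area"])

def _pair_job(args):
    i, j, patches = args
    return i, j, view_factor_mc(patches[i], patches[j])

if __name__ == "__main__":
    # Three unit-square patches standing in for component faces: two facing
    # each other 1 m apart, plus one off to the side.
    patches = [
        make_patch([0, 0, 0], [1, 0, 0], [0, 1, 0]),   # normal +z
        make_patch([0, 0, 1], [0, 1, 0], [1, 0, 0]),   # normal -z
        make_patch([2, 0, 0], [0, 1, 0], [0, 0, 1]),   # normal +x
    ]
    jobs = [(i, j, patches) for i, j in combinations(range(len(patches)), 2)]
    with Pool() as pool:                 # one worker per available core
        for i, j, f_ij in pool.map(_pair_job, jobs):
            print(f"F[{i}->{j}] ~= {f_ij:.3f}")
```

Even in this toy case the number of face pairs grows quadratically with the number of faces, so a cluttered enclosure with thousands of components quickly justifies spreading the work over many cores.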
Liquid cooling: Chilling, dunking and spraying
While air-cooling continues to be the most widely employed method of thermal management, its ultimate effectiveness is always limited by the fact that air has relatively poor thermal capacity compared to other fluids. To achieve a serious cooling impact in high-power-density systems, designers are increasingly turning to different types of liquid cooling.
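The gap is easy to quantify from standard property data: per unit volume and per degree of temperature rise, water carries on the order of a few thousand times more heat than air. A quick sketch, using rounded textbook property values rather than measurements from any particular system:

```python
# Volumetric heat capacity (rho * cp) of air vs. water at roughly room
# conditions. Property values are rounded textbook figures.

fluids = {
    #        density [kg/m^3]   specific heat [J/(kg K)]
    "air":   (1.2,              1005.0),
    "water": (998.0,            4182.0),
}

for name, (rho, cp) in fluids.items():
    print(f"{name:6s} rho*cp = {rho * cp:12,.0f} J/(m^3 K)")

ratio = (998.0 * 4182.0) / (1.2 * 1005.0)
print(f"water stores ~{ratio:,.0f}x more heat per unit volume per kelvin")
```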
A common feature of liquid-based cooling systems is that they exploit the additional heat transfer available from phase change to increase the cooling effect of the liquid. Because of the higher thermal capacity of the coolant, they benefit from greater sensible heat transfer (which raises the temperature of the coolant) as well as latent heat transfer (which changes the phase of the coolant through boiling or evaporation). The simplest implementation is “indirect” liquid cooling, where the coolant never comes into direct contact with the electronic component being cooled, usually by attaching a liquid-cooled “cold plate” to the chip.
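The split between sensible and latent heat is worth making concrete. The short sketch below uses rounded textbook values for water at atmospheric pressure; practical electronics coolants are dielectric fluids with lower, but similarly structured, numbers.

```python
# How much heat one kilogram of water coolant can absorb as sensible heat
# (a modest temperature rise) versus latent heat (complete evaporation).
# Property values are rounded textbook figures for water at 1 atm.

cp_water = 4182.0       # specific heat, J/(kg K)
h_fg_water = 2.26e6     # latent heat of vaporization, J/kg

delta_t = 20.0          # allowed coolant temperature rise, K
m = 1.0                 # coolant mass, kg

q_sensible = m * cp_water * delta_t      # raises coolant temperature
q_latent = m * h_fg_water                # changes coolant phase

print(f"sensible (20 K rise): {q_sensible / 1e3:8.1f} kJ")
print(f"latent (evaporation): {q_latent / 1e3:8.1f} kJ")
print(f"latent / sensible:    {q_latent / q_sensible:8.1f}x")
```

For these assumed numbers the latent contribution is well over an order of magnitude larger than the sensible one, which is why phase-change schemes are so attractive.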
A more effective (although less practical) solution is to submerge the chip directly into a (non-electrically-conductive) coolant. If the temperature of the component increases beyond a critical level (the boiling point of the liquid), then nucleate boiling will commence, greatly increasing the heat flux from the chip to the fluid. At high temperatures, this approach is around 5 times more effective than indirect liquid cooling, and about 25 times more effective than direct air-cooling. However, it makes routine maintenance much more difficult, as the components must be removed from the liquid bath and cleaned prior to inspection. Care must also be taken that the boiling regime does not progress to “film boiling”, at which point the component becomes surrounded by a film of vapor and heat transfer is suddenly reduced, resulting in a sudden rise in component temperature followed by rapid failure.

Most effective of all are direct spray systems, in which a fine mist of non-corrosive, non-conductive coolant is sprayed directly onto the surface of the component, forming a liquid film that rapidly evaporates. The coolant vapor is extracted from the enclosure and condensed, rejecting heat to the surroundings.
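The film-boiling limit can be estimated from standard pool-boiling theory: the classic Zuber correlation gives a rough upper bound on the heat flux a submerged surface can sustain before vapor blanketing sets in. The sketch below evaluates it with rounded saturation properties for water at 1 atm; a real immersion system would use a dielectric coolant such as a fluorocarbon, whose critical heat flux is considerably lower.

```python
# Rough estimate of the critical heat flux (onset of film boiling) for a
# pool-boiling surface, using the classic Zuber correlation:
#   q_chf = 0.131 * h_fg * sqrt(rho_v) * (sigma * g * (rho_l - rho_v))**0.25
# Saturation properties are rounded textbook values for water at 1 atm;
# dielectric electronics coolants have much lower critical heat fluxes.

g = 9.81            # gravitational acceleration, m/s^2
h_fg = 2.257e6      # latent heat of vaporization, J/kg
rho_l = 958.0       # saturated liquid density, kg/m^3
rho_v = 0.60        # saturated vapor density, kg/m^3
sigma = 0.059       # surface tension, N/m

q_chf = 0.131 * h_fg * rho_v**0.5 * (sigma * g * (rho_l - rho_v)) ** 0.25

print(f"estimated critical heat flux: {q_chf / 1e6:.2f} MW/m^2")
# Staying well below this flux keeps the component in the nucleate-boiling
# regime; exceeding it risks the sudden vapor blanketing described above.
```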
The problem?
As we discussed above, many simulation tools are specifically designed to handle single-phase air-cooling and, at a push, simplified indirect liquid cooling (modeled using a source term or a boundary condition). If you want to explore any of the more advanced liquid cooling technologies, your simulation tool needs to be able to perform multiphase calculations that capture the interaction between air, liquid and various gases, as well as boiling and phase change. Without this functionality, your simulations and your designs will be limited to simple, less effective air-cooling.
Other Advanced Physics
Of course, it’s not all about cooling. Engineers in the electronics industry have a whole multitude of problems to deal with, to name but a few:
fan performance and acoustics (if you’ve ever been inside a datacenter, then you’ll know how important that is);
water intrusion;
dust ingress and accumulation;
hydrogen build-up (from battery decay).
The benefit of employing a fully featured simulation tool is that, no matter how rarely these problems occur, your engineering software will allow you to solve them when they do.
The all-in-one multiphysics toolbox
More than just a CFD code, STAR-CCM+ is a complete multiphysics toolbox, able to solve flow, thermal and stress problems involving multiple phases. From liquid jets to water ingress, STAR-CCM+ allows you to simulate any cooling strategy that you can define, and even what happens when those strategies go wrong.
Credit: Siemens Digital Industries Software