In all aspects of life, the use of energy is an increasingly important matter. A domestic electricity bill is enough to get the attention of most householders and initiate thoughts about using more efficient equipment. Filling a car’s fuel tank is an eye-watering experience. Even in the US, where fuel is still quite cheap, the costs have risen drastically in percentage terms over the last decade. Costs aside, there are increasing pressures on the world’s energy resources that will lead to a scarcity of convenient energy in just a few years if we do not take action.
What has this got to do with embedded software? The answer is that electronic systems control most of the consumption of energy and those systems are mostly driven by embedded software. So much of the responsibility for minimizing energy consumption is down to the embedded programmer …
This is a topic that I have touched upon before, but, wherever I look, I see an increasing interest in addressing the matter.
The economics and design issues need to be analyzed carefully, as the best course of action is sometimes non-obvious. For example, there have been reports of large server farms being located in very remote locations. That seems illogical, as the obvious place to put such a facility is close to the communications networks. However, it transpires that it is much cheaper to locate all this energy-consuming hardware in close proximity to cheap power – such as near a dam. Bringing in a telecommunications circuit is relatively cheap.
A significant consideration is the utilization of computing power and its effect upon power consumption. For example, I have a reasonably powerful laptop. It offers enough computing power to do just about anything I could want, but the battery lasts for about 2 hours, which is almost useless. I have a little netbook, which does 95% of the things I need to do, weighs almost nothing and gives me 7-8 hours usage on a charge. My iPad is even better, giving 10-11 hours use. The long battery lives are accomplished in various ways, but the key one is provision of just enough computing power and no more.
There are several other areas where power may be reduced. If the power dissipation of a device is low enough, more power still can be saved as cooling can be by means of free air movement, requiring no fans or active cooling systems. Have you tried actually using a laptop on your lap lately? Ouch!
A device that really seems to deliver on portable convenience is the Kindle, as users report that they can get a month’s normal use between charges. [The downside is that it is easy to go on vacation without taking a charger.] How is this achieved? Broadly, by getting the software right. When you are reading a page on a Kindle, the CPU is doing absolutely nothing. It wakes up very briefly when you turn a page, and it only uses significant power when you are navigating your library or downloading new material. The software can afford to be so “relaxed” because the e-ink display needs no support – once a page is displayed, no refreshing is required and no power is consumed. Of course, such a display is not suitable for every kind of device, and more power-hungry back-lit panels are often required. But it is the software’s responsibility to turn off or dim the display whenever possible, as that can result in significant power savings.
I am very interested to hear – by comment or email – about any novel ways that embedded software might help reduce power consumption.