
Three degrees of freedom

By Colin Walls

Developing embedded software used to be easy. Actually, that is not true. It has never been easy, but certain matters were simpler. Embedded developers have always needed more control over code generation because, as I am often heard to chant, every embedded system is different and the priorities and requirements change from one to another.

It used to be broadly a choice between speed and size of code, but it is no longer that simple …

Although embedded compilers typically offer finer-grained control, optimization of code is usually a matter of selecting a bias towards speed or size. It just works out that, most of the time, the fastest way to implement an algorithm is also the biggest, and the smallest is the slowest. There are exceptions, but they are not so common. Much the same principle applies to data, where it can be packed [smallest] or unpacked [fastest]. This is commonly specified by a compiler optimization switch and may be overridden for specific objects using the [extension] keywords packed and unpacked.
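
The spelling of these controls varies from toolchain to toolchain. As a minimal sketch, assuming a GCC-style compiler where packing is requested with __attribute__((packed)) rather than a bare packed keyword, the trade-off looks like this:

```c
#include <stdint.h>
#include <stdio.h>

/* Natural [unpacked] layout: the compiler may insert padding so that
   each member sits on its preferred alignment boundary for fast access. */
struct sample_unpacked {
    uint8_t  flags;      /* 1 byte, typically followed by 3 bytes of padding */
    uint32_t timestamp;  /* aligned on a 4-byte boundary */
    uint8_t  channel;    /* 1 byte, typically followed by 3 bytes of padding */
};

/* Packed layout: no padding, so it is smaller, but the compiler may have to
   generate several byte-wide accesses to read the misaligned timestamp. */
struct sample_packed {
    uint8_t  flags;
    uint32_t timestamp;
    uint8_t  channel;
} __attribute__((packed));

int main(void)
{
    printf("unpacked: %zu bytes\n", sizeof(struct sample_unpacked));  /* typically 12 */
    printf("packed:   %zu bytes\n", sizeof(struct sample_packed));    /* 6 */
    return 0;
}
```

The packed version halves the storage, but every access to the misaligned member costs extra instructions [and, on some CPUs, an unaligned access is simply not allowed], which is the size/speed tension in miniature.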

When designing hardware, developers have a very similar choice. They code using a hardware description language [like VHDL or Verilog] and use a synthesis tool [which is similar to a compiler] to implement the design. In the same way as with software, there are trade-offs between speed and size. However, for some time, another factor has been taken into consideration: power consumption.

As I have discussed before, power is no longer purely a hardware issue, as the software increasingly has an influence on the power efficiency of a device. Conventional code optimization also affects power consumption. Small code means the device needs less memory, which reduces power consumption. Alternatively, fast code gets the job done sooner, so the CPU can spend more time in a low-power state. Getting the balance just right is not easy. Although tools like Mentor Embedded’s Sourcery Analyzer can help, I do wonder if “optimize for power” will soon be a compiler feature …
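
One concrete example of that software influence, sketched below under the assumption of an ARM Cortex-M target [where the CMSIS headers provide the __WFI() “wait for interrupt” intrinsic]: an idle loop that sleeps until the next interrupt uses far less energy than one that busy-waits. The data_ready flag and the function names are purely illustrative.

```c
#include <stdbool.h>

/* In a real Cortex-M project, __WFI() comes from the CMSIS device header;
   it is stubbed here with the underlying instruction so the sketch stands alone. */
#ifndef __WFI
#define __WFI() __asm__ volatile ("wfi")
#endif

/* Hypothetical flag, set by an interrupt handler when new data arrives. */
static volatile bool data_ready;

/* Burns power: the CPU spins at full clock speed doing no useful work. */
void wait_for_data_polling(void)
{
    while (!data_ready) {
        /* busy-wait */
    }
}

/* Saves power: the CPU stops until the next interrupt wakes it, then
   re-checks the flag; the functional result is the same, the energy used is lower. */
void wait_for_data_sleeping(void)
{
    while (!data_ready) {
        __WFI();
    }
}
```

Both functions do the same job; only the energy bill differs, and no compiler switch will currently make that transformation for you.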

So, it may be concluded that instead of a two-way tension between speed and size, it now goes three ways: speed, size and power.

It occurred to me that there is an interesting analogy to this situation in another technology area that interests me: digital photography. Traditionally, having selected a specific speed [i.e. sensitivity] of film [indicated by its ISO number: 100, 200, 400 etc.], for each picture the photographer needed to balance shutter speed against aperture. This could be a problem in an unexpected situation where an object is moving quickly [i.e. a fast shutter speed is needed to capture it] but a good depth of field [implying a small aperture] is also required. Nowadays, the photographer can choose the ISO for each image – i.e. they have three degrees of freedom. Automatic cameras often let you set the shutter speed or the aperture and let the other one float. I, personally, have yet to see one where both can be set and the ISO adjusted automatically to get a good exposure.
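
For the record, the three photographic variables are tied together by the standard exposure relation [nothing specific to this post]; writing it out shows why a floating ISO really is a third degree of freedom:

```latex
% For a scene of fixed brightness, a correct exposure keeps
%   t * S / N^2   roughly constant,
% where t is the shutter time, N the f-number and S the ISO sensitivity.
\[
  \frac{t \cdot S}{N^{2}} \approx \text{constant}
\]
% Example: 1/125 s at f/8 and ISO 200 exposes the same as
% 1/250 s at f/8 and ISO 400, or 1/250 s at f/5.6 and ISO 200.
```

So halving the shutter time can be paid for either by opening the aperture one stop or by doubling the ISO, just as an algorithm’s demands can be paid for in code size, execution time or power.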

Comments

  • The issue of “optimizing for power” is indeed an upcoming topic, as I have just learned from the presentations at the GNU Tools Cauldron 2013: Embecosm presented two topics, »The Impact of Different Compiler Options on Energy Consumption« and »MAGEEC: MAchine Guided Energy Efficient Compilation«, the latter being an industrial research project that has just begun. Surely a very ambitious project (due to the very many layers that are responsible for a computation’s power consumption), but we wish them the best of luck with it!


This article first appeared on the Siemens Digital Industries Software blog at https://blogs.sw.siemens.com/embedded-software/2013/07/08/three-degrees-of-freedom/