Thought Leadership

malloc() – just say no

By Colin Walls

A topic that I find particularly interesting, which is raised by many embedded software developers whom I meet, is dynamic memory allocation – grabbing chunks of memory as and when you need them. This seemingly simple and routine operation opens up a huge number of problems. These are not confined to embedded development – many desktop applications exhibit memory leaks that impact performance and can make system reboots common.

However, I am concerned about the embedded development context …

Because I find this subject interesting, I often cover it in technical conferences and articles. There is a white paper available that outlines the problems and some solutions. However, today I want to take a slightly different perspective.

I would normally outline three key reasons to not use standard malloc():

  • Memory allocation may fail
  • The function is commonly not re-entrant [thread friendly]
  • It is not deterministic [predictable]

These are valid points, but may not always be as important as they seem:

  • The function does clearly indicate failure by returning a NULL pointer. It is really quite straightforward to check for this and take appropriate action.
  • It is quite likely that all memory handling is done within a single thread/task.
  • Not all embedded systems are real time, so determinism might not really be needed.
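On the first point, checking for allocation failure really is straightforward. A minimal sketch (the message-copying function and its fallback behaviour are illustrative assumptions, not from the article):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Copy a message into freshly allocated memory, taking a safe,
   application-appropriate action if the allocation fails. */
char *copy_message(const char *msg)
{
    char *buf = malloc(strlen(msg) + 1);   /* +1 for the terminator */
    if (buf == NULL)                       /* malloc() signals failure with NULL */
    {
        /* appropriate action for this hypothetical application:
           report the failure and let the caller degrade gracefully */
        fputs("allocation failed\n", stderr);
        return NULL;
    }
    strcpy(buf, msg);
    return buf;
}
```

The key point is simply that every call site either checks the returned pointer or delegates to a wrapper, like this one, that does.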

However, malloc() does present another challenge: it is often rather slow. A real time system is fundamentally predictable, but not necessarily fast. Many embedded systems do not need to be predictable to any precision, but do need to be speedy. So, finding a way to provide the functionality of malloc(), without the problems, is worth considering.

The main reason why malloc() is rather slow is that it is providing a lot of functionality – the allocation of chunks of memory of variable size is somewhat complex. However, it turns out that, for many applications, this functionality is really not needed, as the chunks of memory are all the same size [or a small number of known sizes]. It is a simple matter to write an allocation function for fixed size blocks – this can be done using an array with usage flags or a linked list [the latter is often better]. The resulting code will inevitably be faster. It may even be deterministic or could be made so, if that is a requirement. Allocation failure can still occur, but may be handled in an appropriate way for the specific application.
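A fixed-size block allocator of the kind described might be sketched like this, using a linked free list threaded through the blocks themselves. The block size, pool depth, and function names are illustrative assumptions; a real design would size them for the application (and add locking if more than one task used the pool):

```c
#include <stddef.h>

#define BLOCK_SIZE 32     /* assumed fixed block size */
#define NUM_BLOCKS 16     /* assumed pool depth */

/* While a block is free, its first bytes hold a pointer to the next free block */
typedef union block {
    union block  *next;
    unsigned char data[BLOCK_SIZE];
} block_t;

static block_t  pool[NUM_BLOCKS];
static block_t *free_list = NULL;

/* Thread the pool into a linked list of free blocks; call once at start-up */
void pool_init(void)
{
    for (size_t i = 0; i < NUM_BLOCKS - 1; i++)
        pool[i].next = &pool[i + 1];
    pool[NUM_BLOCKS - 1].next = NULL;
    free_list = &pool[0];
}

/* O(1) allocation: unlink the head of the free list */
void *pool_alloc(void)
{
    block_t *b = free_list;
    if (b != NULL)
        free_list = b->next;
    return b;                 /* NULL indicates exhaustion, as with malloc() */
}

/* O(1) deallocation: push the block back onto the free list */
void pool_free(void *p)
{
    block_t *b = p;
    b->next   = free_list;
    free_list = b;
}
```

Both operations are a couple of pointer moves, so the allocator is fast and, with no loops or searches on the allocation path, deterministic as well.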

This article first appeared on the Siemens Digital Industries Software blog at