I have a very strong resistance to the practice of doing something [anything!] just because “it is the way we have always done it”. I love to play Devil’s Advocate [or “Devil’s Avocado” as I heard someone quip the other day] and propose change just to shake things up. It may be that the tried and tested approach is, indeed, the best, but you cannot be sure until you have considered the other options.
When it comes to embedded programming, it is easy – most people use C or similar languages. But is that the only way? …
Why is C [along with languages similar to it] the most common way to code embedded software, when other options are used somewhat more widely for desktop software development? I think that there are two main reasons. First, embedded developers are quite conservative and resist new ideas until they are proven and the benefits are clearly apparent – “if it ain’t broke, don’t fix it”. That is not such a bad attitude to take if you want to produce reliable code in a predictable time [or, at least, have a good shot at these laudable objectives]. The second reason is that many embedded developers started out [like me] in assembly language programming. The programming paradigm of procedural languages is much the same as assembly language and, hence, the native machine code of the CPU. So, C feels “right” and must be most efficient. Well, maybe.
There are other possibilities, which really should be considered when starting out on an embedded project:
- object oriented programming
- threaded interpretive languages
- concurrent programming
- declarative programming
Most developers will have encountered the concepts of object oriented programming [OOP] and probably think of C++. However, C++ is really a procedural language [after all, it is in effect a superset of C] with some OOP features. It is a widely held belief that OOP languages result in bloated code/data, which wastes resources on smaller systems. Although this can occur, it is not an intrinsic characteristic of OOP in general or C++ in particular; it all depends on the implementation and how the language is used. OOP can be excellent for embedded development, as it enables arcane functionality to be encapsulated in objects, which can then be safely reused by engineers unfamiliar with the internals of the object. A good example is the interfacing to some complex peripheral devices.
The best known threaded interpretive language [TIL] is Forth [which I wrote about a while back]. It is curious because it is one of only a few languages that was actually designed with embedded applications in mind. TILs commonly offer a very interactive programming environment, which encourages bottom up implementation [which I would tend to favour – that is another story …]. The code tends to be very compact and, instead of using variables [all the time], makes heavy use of a data stack. In my experience, people either get the reverse Polish logic, and are comfortable with it straight away, or they abhor it.
As a software developer, if you write something like this in C:
x = 1; y = 2;
you have clear expectations: the assignment of x will occur first and the assignment of y will follow. However, a hardware developer might write something that looks almost identical to this in a hardware definition language [HDL] like Verilog [or VHDL]. He would expect the two assignments to occur simultaneously [because hardware is like that]. What if software could be written that way! Well, actually it can be, as there are some languages that feature implicit concurrency – notably Plaid and ANI. How is the concurrency achieved? That is a matter of implementation; it can either be simulated [i.e. it is nothing more than language semantics] or built-in concurrency/multi-threading in the hardware may be exploited.
Most programming is about writing solutions to problems, which corresponds well to how we think about life in general. I would suggest that an average day is really just a series of problems which have to be solved. It may be argued, however, that many problems solved by software could be approached differently. If the problem – or, rather, the solution you want to arrive at – could be described more clearly, the implementation might be fairly obvious. That is broadly the approach of declarative languages like Prolog, which started out life in the world of artificial intelligence. Maybe this could be a way forward for embedded development?
I found an article which looks at this topic more widely and makes an interesting read.