It would seem intuitive that writing in assembly language is the best option if you want optimal code in terms of size and/or speed. After all, assuming the programmer is smart and competent in the assembler for a particular CPU, understands its architecture well enough, and has been fully apprised of the functional requirements of the code, the only possible result is code that uses the processor’s capabilities in an ideal way. A compiler just cannot compete, as it has no information on the precise requirements of the code.
Well, that would just seem common sense, but it does not take into account one key factor: human nature.
Let me illustrate this issue with a simple example: a C language switch statement. There are three patterns I can envisage for the case values: contiguous values; almost contiguous values, with a few values missing; and completely non-contiguous values. For contiguous case values, a good compiler will probably generate code with a simple table of addresses, indexed by the case value. The same thing would result for almost contiguous values, except the table would have a few dummy entries. For completely non-contiguous values, the likely code is a look-up table of values and addresses. In short, a compiler will use a strategy appropriate to the pattern of case values.
A smart human programmer would take a different approach and almost always code a look-up table. Why? Because that code is maintainable. If the code were written for contiguous values and a change of requirements then made the case values non-contiguous, a rewrite would be needed, and a smart programmer wants to avoid that possibility. A compiler rewrites the code every time you run it, so it does not care.
So, for switch statements at least, a compiler will, on average, produce better code than a human programmer. The result is that C can yield a more efficient result than assembly language.