Evolution of the Toradex CE Libraries
LESSONS LEARNED FROM BUILDING AND MAINTAINING LEGACY SOFTWARE
If you are one of our customers using the Toradex Windows CE code libraries, you may have noticed that there are now two different versions available for download. I want to tell you the story behind this update and, ultimately, encourage you to use the new API. But let's start at the beginning:
Back in 2005, I was lucky enough to be part of a small group of enthusiastic friends who decided to start their own engineering company. After one year we agreed that we should have a product of our own, so we initiated the development of an Arm-based computer module, the Colibri PXA270. The hardware was quickly done, while the software was expected to keep us busy for months, even years, to come. Not only did the WinCE operating system need to be adapted; we also wanted to provide a whole set of tools to support our customers during the development and production phases of their end-user applications. Writing software for an operating system and writing applications are two different kettles of fish. Still, there was a lot of code that was used in both worlds, from simple memory allocation functions to toggling LEDs or communicating over I2C and other interfaces.
To make our lives easier, we wrote libraries that could be used both in the OS and in applications. We were under time pressure, so we took the easy route and wrote the code the way it fitted the PXA270 best, and that worked well. The next logical step was to publish the libraries for our customers. The Colibri PXA270 was very successful, so we designed new Colibri modules based on the latest CPUs of the same PXA family. The feature set was similar, so we extended the software libraries with a number of ifs and elses to take the different CPUs into account, turned each xyz_Init into an xyz_InitEx function to support additional features we initially hadn't thought of, and that worked well too.
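To illustrate the pattern described above, here is a minimal sketch in C. All names (Gpio_InitEx, CpuType, the flag values) are hypothetical stand-ins, not the actual Toradex API: the original entry point survives as a thin wrapper, while the extended function adds a flags parameter and branches at run time on the detected CPU.

```c
#include <stdbool.h>

/* Hypothetical CPU identifiers standing in for the PXA family variants. */
typedef enum { CPU_PXA270, CPU_PXA300, CPU_UNKNOWN } CpuType;

static CpuType g_cpu = CPU_PXA270;  /* in reality detected from the hardware */

/* Extended entry point: a flags parameter makes room for features the
   original API did not anticipate, and runtime branches select per-CPU code. */
bool Gpio_InitEx(unsigned flags)
{
    (void)flags;                    /* flags unused on the original CPU */
    if (g_cpu == CPU_PXA270) {
        /* ... program PXA270-specific registers ... */
        return true;
    } else if (g_cpu == CPU_PXA300) {
        /* ... newer CPU, different register layout ... */
        return true;
    }
    return false;                   /* unsupported CPU */
}

/* The original entry point survives as a thin wrapper, so existing
   application code keeps working unchanged. */
bool Gpio_Init(void)
{
    return Gpio_InitEx(0);
}
```

The wrapper preserves backward compatibility, but every new CPU adds another branch inside the shared function body, which is exactly where the trouble described below began.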
The sales figures of the new devices were also more than good, which encouraged us to extend the module family with new CPUs. This time the differences in architecture were larger, and the libraries required more tweaking to work on all modules. Still, the concept was to share as much common code as possible, and again that worked well for application code. For device drivers, however, it did not work so well: the code for the other CPUs caused linking problems that we had to solve. As always, we took a pragmatic approach and inserted a number of #ifdefs to exclude unused code at build time.
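A build-time exclusion of this kind might look roughly like the following sketch. The macro and function names are invented for illustration; the point is only that each BSP build defines its target CPU, so the compiler never emits references to the other CPUs' symbols and the driver links cleanly.

```c
/* Hypothetical build-time selection: each BSP build defines exactly one
   target macro, so code for the other CPUs is excluded before the linker
   ever sees it. Normally this would come from the build settings. */
#define TARGET_PXA270 1

#include <stdbool.h>

bool Uart_SetBaudrate(unsigned baud)
{
#if defined(TARGET_PXA270)
    /* PXA270 path: only PXA270 registers are touched here, so no
       unresolved symbols from other CPUs' support code remain. */
    (void)baud;
    return true;
#elif defined(TARGET_PXA320)
    /* PXA320 path, compiled only for PXA320 BSP builds. */
    (void)baud;
    return true;
#else
#  error "No target CPU selected"
#endif
}
```

This solves the linking problem, but at the cost of producing a differently configured binary per CPU, and of scattering conditional compilation throughout the shared code.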
Another special case arose with the SPI support: we wanted a DMA-based implementation on the one hand, but also a PIO-based implementation for other occasions. The solution? Simple: generate two libraries with identical function signatures, so the user can select the desired implementation at build time. That approach turned out to be too simple. As soon as both PIO and DMA mode were required in the same application, the model failed, and we had to merge the two implementations and add another set of functions to redirect each call to either the DMA or the PIO version.
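The merged library can be pictured roughly as follows. Again, all identifiers here are hypothetical placeholders rather than the real Toradex SPI API: both implementations now live in the same library under distinct internal names, and a thin dispatch layer routes each public call according to a mode flag.

```c
#include <stdbool.h>

/* Hypothetical mode flag replacing the former link-time library choice. */
typedef enum { SPI_MODE_PIO, SPI_MODE_DMA } SpiMode;

static SpiMode g_mode = SPI_MODE_PIO;

static bool SpiPio_Transfer(unsigned char *buf, unsigned len)
{
    (void)buf; (void)len;
    /* ... byte-by-byte programmed I/O ... */
    return true;
}

static bool SpiDma_Transfer(unsigned char *buf, unsigned len)
{
    (void)buf; (void)len;
    /* ... set up a DMA descriptor and wait for completion ... */
    return true;
}

/* Mode selection is now possible at run time instead of at link time. */
void Spi_SetMode(SpiMode mode)
{
    g_mode = mode;
}

/* The public entry point keeps its original signature and simply redirects. */
bool Spi_Transfer(unsigned char *buf, unsigned len)
{
    return (g_mode == SPI_MODE_DMA) ? SpiDma_Transfer(buf, len)
                                    : SpiPio_Transfer(buf, len);
}
```

With this layout an application can mix PIO and DMA transfers freely, at the price of yet another layer of redirection in the library.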
More problems appeared after we implemented bug fixes for the new CPUs: suddenly we got reports that functions were broken on the older CPUs. The dependencies within the code made it very difficult to test whether a modification for one CPU had unwanted side effects on the others. We could easily predict that we would see many more of these glitches as we extended the libraries to even more CPU architectures – a clear no-go for the quality of the software!
The only way out of this unpleasant situation was to step back and start from scratch. We had to define a new library structure, along with a new API, flexible enough to cover the requirements of the existing hardware as well as of future CPU architectures that we didn't even know at the time. It was a hard decision to make, because our customers rely on our commitment to backward compatibility. However, sticking with the old library architecture would have led to incompatibilities anyway in order to support important proprietary features of the new CPUs.
Now here we are, still completing the implementation of the new library architecture. Should we have taken that road from the beginning? Yes, in an ideal world. But in reality we could not foresee the evolution of our products, so we made the right decision back then.
Should we have made the switch earlier? Possibly yes. It would have saved us a lot of the time we spent identifying and solving specific issues that a proper software architecture would have exposed much earlier.
Watch for my next blog post to learn about the concept behind the new Toradex code libraries.