Summary

In this chapter, we took a deep dive into creating an efficient, convenient interface to a complex driver stack. Using stream buffers, we analyzed the trade-offs between decreasing latency and minimizing CPU usage. After a basic interface was in place, we extended it for use across multiple tasks. We also saw an example of how a mutex can ensure that a multi-stage transaction remains atomic, even while the peripheral is shared between tasks.

Throughout the examples, we weighed performance against ease of use and coding effort. Now that you have a good understanding of why these design decisions were made, you should be in a good position to make informed decisions about your own code base and implementations. When the time comes to implement your design, you'll also have a solid understanding of the steps needed to guarantee race-condition-free access to your shared peripheral.

So far, we've been discussing trade-offs when creating drivers, so that what we write is as close to perfect for our use case as possible. Wouldn't it be nice if, at the beginning of a new project, we didn't need to re-invent the wheel by copying, pasting, and modifying all of these drivers every time? Instead of continually introducing low-level, hard-to-find bugs, we could simply bring in everything we know works well and get straight to adding the features the new project requires. With a well-architected system, this type of workflow is entirely possible! In the next chapter, we'll cover several tips for creating a firmware architecture that is flexible and doesn't suffer from the copy-paste-modify trap many firmware engineers find themselves stuck in.
