Does Rust deliver what its developers promise for programming embedded devices? This article examines their claims and whether they hold up.
The Rust language presents itself as a perfect match for systems programming and other rather low-level tasks, without the well-known pains currently experienced with C and C++, especially from a security point of view. On their embedded-specific web page (a programming language with a web page explicitly targeting embedded development?!), the Rust creators answer the question “Why use Rust (for embedded devices)?” with the following points:
- Powerful static analysis
- Flexible memory
- Fearless concurrency
- Interoperability
- Portability
- Community driven
Let us have a look at the details of these promises, how they stand up to closer scrutiny, and whether Rust can match the needs and requirements we have for a programming language in the embedded environment.
Powerful static analysis
“Enforce pin and peripheral configuration at compile time. Guarantee that resources won’t be used by unintended parts of your application.”
Powerful static analysis is a point that sounds good to me. Coming from the Linux and GCC world, an actually good static analysis would have been a big plus for a programming language some years ago. Today, the embedded (Linux) world is a bit more sophisticated. Since LLVM and Clang caught up in this field, GCC is no longer the hard-to-avoid tool of choice for embedded (Linux) systems with not-so-popular architectures. And with them, pretty good tools, in particular for static analysis, entered the field. Even Linux, the once GCC-only operating system, is nowadays compilable with Clang and benefits a lot from the Clang static analyzer.
So let us get into the details and check whether Rust can keep up with the Clang static analyzer and what is behind the catchy sentence “Enforce pin and peripheral configuration at compile time”. The “more details” button leads to the chapter on static guarantees in the Embedded Rust Book. It describes Rust as a strongly typed language. That is nice but nothing completely new: C and C++, for example, are strongly typed languages as well, even if not as strongly as Rust, since it is (especially in C) rather easy to trick their type system.
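To illustrate what a stricter type system buys in practice, here is a tiny newtype sketch (the names are illustrative, not from any real API): unlike a C typedef, the wrapper types below cannot be mixed up by accident.

```rust
// Newtype wrappers give each unit its own type. In C, a typedef to
// uint32_t would happily accept any integer; here, mixing units is a
// compile error. Names are illustrative.
struct Milliseconds(u32);
#[allow(dead_code)]
struct Hertz(u32);

// Accepts only Milliseconds, not a bare u32 and not Hertz.
fn delay(timeout: Milliseconds) -> u32 {
    timeout.0
}

fn main() {
    println!("{}", delay(Milliseconds(100))); // prints 100
    // delay(100);        // does not compile: expected `Milliseconds`
    // delay(Hertz(100)); // does not compile either
}
```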
This is followed by the claim that Rust can statically check peripheral and similar configurations using its type system. That sounds like a stronger argument for Rust, but how is it done? While reading the shiny advertisement text, I hoped for a sophisticated mechanism that could somehow detect which pins (or similar peripherals) are used and whether they are properly configured. But such an advanced feature remains a dream.
On the following pages, the Rust embedded team presents a programming concept that takes advantage of the language’s strong type system, which is useful not only for defining GPIO and similar peripheral APIs but in other contexts as well. The basic idea is to use empty structs as state representations and to design the peripherals in the API as state machines. The empty structs cost no memory at runtime but act as strong types that do not allow casts. This enables designing an API, or contract in Rust speak, which enforces the correct sequence of operations for a peripheral using these states. For a better understanding, read here how this concept is intended to work.
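The idea can be sketched in a few lines. This is a minimal, illustrative example (type and method names are my own, not from any real HAL): a pin gains a `set_high` method only after it has been converted into an output pin, and the compiler rejects anything else.

```rust
use std::marker::PhantomData;

// Zero-sized marker types representing pin states (illustrative names).
struct Input;
struct Output;

// The pin carries its state only in the type system; no runtime cost.
struct Pin<State> {
    _state: PhantomData<State>,
}

impl Pin<Input> {
    fn new() -> Pin<Input> {
        Pin { _state: PhantomData }
    }
    // Consuming `self` makes the old state unusable after reconfiguration.
    fn into_output(self) -> Pin<Output> {
        Pin { _state: PhantomData }
    }
}

impl Pin<Output> {
    // `set_high` only exists on output pins, enforced at compile time.
    // A real HAL would write to a register here; we just report the state.
    fn set_high(&mut self) -> &'static str {
        "high"
    }
}

fn main() {
    let pin = Pin::<Input>::new();
    // pin.set_high(); // would not compile: no such method on Pin<Input>
    let mut pin = pin.into_output();
    println!("{}", pin.set_high()); // prints "high"
}
```

Since the state markers are empty structs, `Pin<Output>` occupies zero bytes at runtime; the whole state machine exists only at compile time.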
That is indeed a clever concept, but it is also a concept that is neither enforced for embedded Rust nor unique to this language. An equivalent solution would be realizable in, e.g., C++ as well. My colleague Florian did an example implementation for me in modern C++, as you can see here. It is nice and clean, and the example is more complete than the one on the Rust page.
In the end, the nice and catchy advertising promise on the website collapses a little upon closer inspection. I wonder why they did not advertise their nice compiler and its features more, instead of this concept.
Flexible memory
“Dynamic memory allocation is optional. Use a global allocator and dynamic data structures. Or leave out the heap altogether and statically allocate everything.”
When speaking about memory in embedded Rust, we need to differentiate between target platforms with the Rust standard library available and those without. Mostly, this boils down to the question of whether the code needs to run in a bare-metal environment or on top of an operating system. The marketing sentence on the embedded Rust website targets bare-metal environments without the standard library. From my point of view, it is nevertheless worth looking at both cases, as embedded development does not necessarily mean bare-metal programming, depending on the problem to solve.
Going without dynamic memory allocation at all bypasses, of course, a lot of issues, but depending on the use case it is neither always possible nor a selling feature on its own. Instead, I would wish for the same safety Rust provides for, e.g., collections on bare metal (that is, platforms without the standard library). Besides, it is a bit sad that bare-metal programming with Rust relies on a no-std environment. But the C/C++ world also has barely any libraries to use there, so Rust is neither better nor worse.
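What such collections without a heap can look like is shown by third-party crates like heapless. The following is a minimal sketch of the underlying idea in plain Rust, not the actual heapless API: a fixed-capacity container, allocated entirely on the stack, whose `push` fails instead of allocating.

```rust
// A tiny fixed-capacity vector with no heap involvement. This sketches
// the idea behind crates like `heapless`; the real crate is far more
// complete and should be preferred in practice.
struct FixedVec<T, const N: usize> {
    data: [Option<T>; N],
    len: usize,
}

impl<T, const N: usize> FixedVec<T, N> {
    fn new() -> Self {
        FixedVec { data: [(); N].map(|_| None), len: 0 }
    }
    // push reports failure instead of reallocating: no heap, no surprise
    // out-of-memory at runtime, capacity is a hard compile-time limit.
    fn push(&mut self, value: T) -> Result<(), T> {
        if self.len == N {
            return Err(value);
        }
        self.data[self.len] = Some(value);
        self.len += 1;
        Ok(())
    }
    fn len(&self) -> usize {
        self.len
    }
}

fn main() {
    let mut v: FixedVec<u32, 2> = FixedVec::new();
    assert!(v.push(1).is_ok());
    assert!(v.push(2).is_ok());
    // The third push fails cleanly instead of allocating.
    assert!(v.push(3).is_err());
    println!("len = {}", v.len()); // prints "len = 2"
}
```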
Fearless concurrency
“Rust makes it impossible to accidentally share state between threads. Use any concurrency approach you like and you’ll still get Rust’s strong guarantees.”
In general, Rust suffers from the same concurrency issues as any other language does. In embedded software contexts, this includes:
- common multithreading,
- multi-core processors and
- dealing with interrupt handlers and interruptible code.
Multithreading means a single processor swaps between the executed programs and/or parts of them, e.g. the same processor executes the main program and a worker thread “simultaneously” via a time-sharing model. Shared data or state must not be compromised in the process. Multi-core processors take this to the next level, as both parts can run at exactly the same time on distinct cores, sharing memory and possibly caches.
These issues are not limited to the embedded world, but at least the third variant is mostly encountered in operating systems and device drivers, even though it is conceptually very similar to the others. When, for example, a peripheral button is pressed, immediate action is usually needed. When the application just runs in a loop, there is a chance the incoming signal is gone by the time the execution flow reaches the handling code, or handling simply takes longer than it should. Instead, interrupts and interrupt handlers are used. When they share state or data with the main application, we run into the same well-known concurrency issues.
The ways to handle these issues are, in general, very much the same as known from other programming languages. The first is not allowing any kind of concurrency, and thus interrupts, at all. This might be a solution for very limited tasks on microcontrollers, but more complex applications need interrupts. For example, an application that implements an emergency stop for a machine needs to stop immediately; there is no time to wait for the main loop to come around. When both parts, the main program and the interrupt handler, then access an unprotected state variable, perhaps even at the same time on a multiprocessor system, the feared concurrency issues can occur.
To sum it up, Rust not only suffers from the same concurrency issues as other languages do, it also offers the same set of mechanisms to cope with them. The most basic ones are, among others, defining critical sections by disabling interrupts, using atomic data types, and using mutexes, which allow only one thread at a time to execute a certain code region.
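As a rough illustration of the atomic approach, here is a hosted (std) Rust sketch; on bare metal, the same pattern works with `core::sync::atomic`. The notable part is less the atomics themselves than the fact that Rust’s Send/Sync rules would reject sharing a plain `u32` like this in the first place:

```rust
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::thread;

// Spawn `threads` workers that each increment a shared atomic counter
// `per_thread` times. With a plain `u32` this would not even compile:
// Rust refuses to share unsynchronized mutable state across threads.
fn count(threads: usize, per_thread: u32) -> u32 {
    let counter = Arc::new(AtomicU32::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    // Atomic read-modify-write: no torn updates, no data race.
                    counter.fetch_add(1, Ordering::SeqCst);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    counter.load(Ordering::SeqCst)
}

fn main() {
    // Every increment is visible, regardless of interleaving.
    println!("{}", count(4, 1000)); // prints 4000
}
```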
And of course, Rust suffers from the same issues with them as well. Disabling interrupts directly, e.g. using cortex_m::interrupt::free, relies on architecture-specific code and is thus not directly portable.
These basic mechanisms do not prevent deadlocks. Additionally, critical sections based on disabling interrupts, and mutexes built on top of them, provide no protection at all on multiprocessor systems by design, as the other cores keep running.
More sophisticated and specific to (embedded) Rust is the way sharing peripherals is solved in device crates (crate is the Rust name for a library) generated with svd2rust. This tool creates abstractions that ensure the structs representing peripherals can only exist once at a time. It is a nice idea, but it leads to some issues when the peripheral needs to be shared, e.g. between the main application and an interrupt handler. For further information, read here.
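The singleton idea behind those generated crates can be sketched roughly like this (heavily simplified, with illustrative names; real svd2rust output looks different and uses a combined `Peripherals::take()`):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Tracks whether the peripheral has already been handed out.
static TAKEN: AtomicBool = AtomicBool::new(false);

pub struct Gpio {
    _private: (), // prevents construction outside this module
}

impl Gpio {
    // Returns the peripheral only on the first call; afterwards, None.
    // Two parts of the program can therefore never both own it.
    pub fn take() -> Option<Gpio> {
        if TAKEN.swap(true, Ordering::SeqCst) {
            None
        } else {
            Some(Gpio { _private: () })
        }
    }
}

fn main() {
    let first = Gpio::take();
    let second = Gpio::take();
    // prints "first: true, second: false"
    println!("first: {}, second: {}", first.is_some(), second.is_some());
}
```

Exactly this exclusivity is what makes sharing the peripheral with an interrupt handler awkward: once the main program owns the struct, the handler cannot simply take it again.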
Another attractive “third-party” approach is using the RTIC framework, which will not be discussed here in more detail.
To summarize, Rust does not solve all the known concurrency issues of the low-level world through its language design. It mainly offers the same solutions as most languages do and adds some clever additional concepts. But it suffers from the same general pains. Its clever concepts may reduce the pain of concurrency in some situations, but they are no universal solution that banishes concurrency issues for good. At least for low-level usage, Rust cannot make concurrency fearless (yet).
Portability
“Write a library or driver once, and use it with a variety of systems, ranging from very small microcontrollers to powerful SBCs.”
Portability is indeed a big issue in embedded software. It ranges from general support for different architectures and platforms to related tools and toolchains like cross-compilers. These were the things I had in mind and wanted to learn more about when clicking the button. The link leads to the Embedded Rust Book one more time, more specifically to the Portability chapter, which tells me about the embedded-hal. OK, hardware abstraction layers are a good idea in general, as they introduce a layer between the actual hardware and the software. But, as so often in this article, this idea is neither new nor Rust-exclusive.
In Rust, one central idea behind them seems to be reducing complexity and possible errors: the Hardware Abstraction Layer (HAL) provides well-defined interfaces for the central hardware components needed in embedded development, such as GPIO, Serial, I2C, SPI, timers, and analog-digital converters. As a result, the user does not need to know how to implement these things on a specific device, and it does not matter which device the software is written for, as the HAL specifies a well-defined interface while the implementation for specific hardware is done by others, once. That is a nice approach used in many other places, especially in embedded development, but it does not answer the questions I had in mind when reading “portability”.

To answer those questions, a little search leads to the rustc book and the rustup book (the Rust people seem to like spreading their documentation across a set of books …). The rustc book lists all currently available platforms and architectures, subdivided into three tiers, where Tier 1 means guaranteed to work, Tier 2 means guaranteed to build, and Tier 3 means not officially supported at all. When thinking about embedded devices, mostly single-board computers or microcontrollers on an ARM basis in various flavors, it is a bit annoying that only ARM64 with Linux is supported at the Tier 1 level (since 31.12.2020), while most other valid targets are located in Tier 2 or even Tier 3. That is not the best prerequisite for building safe and reliable embedded or IoT devices.

The rustup book answers the remaining questions regarding cross-compiling. rustup is an officially supported toolchain multiplexer that is intended to provide the needed rustc compiler and standard libraries for possible target platforms. It is a nice idea to bundle toolchains for cross-compiling in a central, well-maintained tool and avoid a mess like the one known from the various Linux toolchains for ARM. Unfortunately, this idea is not fully ready yet either. For example, additional tools necessary to cross-compile for another target, such as a suitable linker, must be installed manually. And that brings the potential for a mess as well.
Community driven
“As part of the Rust open source project, support for embedded systems is driven by a best-in-class open source community with support from commercial partners.”
When thinking about this point and its description, at first I felt a bit like “OK, cool, but what’s your point?”, especially when I clicked the “read more” button and found myself on the GitHub repository of the Rust Embedded Working Group. Thus, I am still not sure what to think about the emphasis on Rust being community-driven. But it brings me to a very relevant and important point: licensing. The Rust programming language and officially related projects are dual-licensed under the Apache 2.0 and MIT licenses. These are very permissive licenses that allow the use of Rust in commercial applications without the need to disclose the source code. This enables the use of Rust in most projects.
Interoperability
“Integrate Rust into your existing C codebase or leverage an existing SDK to write a Rust application.”
Combining existing applications with Rust and swapping suitable parts out to take advantage of Rust’s features sounds like a perfect plan to get started and to move to Rust slowly where it offers a benefit. According to the embedded Rust book, this is also possible vice versa.
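As a small illustration of the C-facing direction, here is a hedged sketch of a Rust function exposed with the C ABI; the function name and purpose are made up for the example:

```rust
// A Rust function an existing C codebase could call. In a real build you
// would also add #[no_mangle] (or #[unsafe(no_mangle)] in edition 2024)
// so the symbol keeps its name, and compile the crate as a static or
// dynamic library.
pub unsafe extern "C" fn rust_checksum(data: *const u8, len: usize) -> u32 {
    // Safety: the C caller must pass a valid pointer and matching length.
    let slice = std::slice::from_raw_parts(data, len);
    slice.iter().map(|&b| b as u32).sum()
}

fn main() {
    // Called from Rust here just to demonstrate behavior; a C caller would
    // declare: uint32_t rust_checksum(const uint8_t *data, size_t len);
    let bytes = [1u8, 2, 3];
    let sum = unsafe { rust_checksum(bytes.as_ptr(), bytes.len()) };
    println!("{}", sum); // prints 6
}
```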
It would be nicer if this interoperability were a core feature of Rust and thus available in no_std environments out of the box, but as the functionality is generally available, it is just a minor pain point. Another pain point is the dependence on the Cargo build system. It seems this use case is currently not that widespread and is thus rather poorly documented, in contrast to other issues of Rust in an embedded environment.
But in general, this interoperability is a very promising starting point for the use of Rust in embedded development. Even if Rust is no magic bullet that makes code error-free and great all the time, it is still a promising language and has the potential to make embedded applications less error-prone thanks to its language features. The option to combine already existing C or C++ code with Rust offers a way to migrate step by step, e.g. by putting a (perhaps rather high-level) new feature or functionality into a Rust library. This could let teams get in contact with the language in a real-world scenario without a huge risk and evaluate it for their exact use case.
And in the end?
When I started this blog post, I did it because the Rust programming language seems to be the rising star in systems and embedded programming. Rust is on its way into Linux, which is remarkable since there has never been a serious discussion about Linux drivers in any other language before. Additionally, Google started to put more and more Rust into the Android (AOSP) tree. So I started playing around with Rust, read a lot, and found that shiny landing page on embedded Rust.
Reading my results from this deep dive above, one could think Rust is not (yet) suitable at all, or even a poorly designed language. But that is not the case. Rust is still a very interesting and promising language! But at least for hardcore embedded usage, mostly bare-metal programming, it is just not as far developed as this shiny little embedded landing page suggests. This does not make the whole language bad, even if it is true that Rust is sometimes rather hard to learn and academic.
Looking at the current state, I could not imagine starting a bare-metal project using Rust. There are several reasons for this, and the general stability of the language is a major one. Incompatible changes between major releases result in a high effort to keep the code up to date; otherwise one must accept that it may no longer compile after a rather short time. This is unacceptable for most commercial projects, but I think a system library within the AOSP could be fine right now, as big companies like Google have the time and money to keep up with the development (ignoring, for the moment, Python 2, which is still all over the AOSP tree).
However, considering that the first stable version is only six years old, it is probably completely fine to give Rust a few more years to become, in the (admittedly difficult and heterogeneous) embedded environment, the nice language the shiny landing page already suggests it is today.