Embedded Systems

Rust for Embedded – Is It Ready Yet?


Does Rust deliver what its developers promise for programming embedded devices? This article takes a closer look at their claims, one by one.

The Rust language presents itself as a perfect match for systems programming and rather low-level tasks, without the well-known pains that can currently be experienced with C and C++, especially from a security point of view. On their embedded-specific web page (a programming language that has a web page explicitly targeting embedded development?!), the Rust creators answer the question “Why use Rust (for embedded devices)?“ with the following points:

  • Powerful static analysis
  • Flexible memory
  • Fearless concurrency
  • Interoperability
  • Portability
  • Community-driven

Let us have a look at the details of their promises, how they stand up to closer inspection, and whether Rust can match the needs and requirements we have for a programming language in the embedded environment.

Powerful static analysis

“Enforce pin and peripheral configuration at compile time. Guarantee that resources won’t be used by unintended parts of your application.“

Powerful static analysis is a point that sounds good to me. Coming from the Linux and GCC world, actually “good“ static analysis would have been a big plus for a programming language some years ago. Today, the embedded (Linux) world is a bit more sophisticated. Since LLVM and Clang caught up in this field, GCC is no longer the barely avoidable tool of choice for embedded (Linux) systems with not-so-popular architectures. And with them, pretty good tools, in particular for static analysis, entered the field. Even Linux, the once GCC-only operating system, is nowadays compilable with Clang and benefits a lot from the Clang static analyzer.

So let us get into the details and check whether Rust can keep up with the Clang static analyzer and what is behind the catchy sentence “Enforce pin and peripheral configuration at compile time“. The more details button leads to the chapter on static guarantees in the Embedded Rust Book. It describes Rust as a strongly typed language. That is nice but nothing completely new – looking at C and C++, for example. They are strongly typed languages as well, even if not as strongly as Rust, since it is (especially in C) rather easy to trick their type systems.

This is followed by the example that Rust could statically check peripheral and similar configurations using its type system. That fact sounds more promising as a strong argument for Rust – but how is it done? While I was reading the shiny advertisement text, I hoped for a sophisticated mechanism that is somehow able to detect which pins (or similar peripherals) are used and if they are properly configured. But such an advanced feature remains a dream.

In the following pages, the Rust embedded team presents a programming concept that takes advantage of the language’s strong type system, which is not only useful for defining GPIO and similar peripheral APIs but in other contexts as well. The basic idea behind it is using empty structs as state representations and designing the peripherals in the API as state machines. The empty structures do not cost any memory at runtime but work as strong types which do not allow casts. This enables designing an API – or contract, in Rust speak – which enforces the correct sequence of operations for a peripheral using these states. For a better understanding, read here on how this concept is intended.

That is indeed a clever concept, but it is also a concept that is neither enforced for embedded Rust nor unique to this language. An equivalent solution would be realizable in e.g. C++, too. My colleague Florian did an example implementation for me in modern C++, as you can see here. It is nice, clean and the example is more complete than the one on the Rust page.

In the end, the nice and catchy advertising promise on the website collapses a little bit upon itself after a closer look. I wonder why they did not advertise their nice compiler and its features more instead of this concept.

Flexible memory

“Dynamic memory allocation is optional. Use a global allocator and dynamic data structures. Or leave out the heap altogether and statically allocate everything.“

When speaking about memory in embedded Rust, we need to differentiate between target platforms with the Rust standard library available and those without. Mostly, this will result in the question of whether it is needed to run in a bare metal environment or on top of an operating system. The marketing sentence on the embedded Rust website targets bare metal environments without the standard library. From my point of view, it is nevertheless needed to look at both cases, even when using embedded, as there is not necessarily a need for bare metal programming, depending on the problem to solve.

Going entirely without dynamic memory allocation bypasses, of course, a lot of issues, but depending on the use case it is not always possible, and it is not a unique selling point either. Instead, I would wish for the same safety Rust provides, e.g. for collections, on bare metal (that means platforms without the standard library). Besides, it is a bit sad that bare-metal programming with Rust relies on a no-std environment. But the C/C++ world also has barely any libraries to use there, so it is neither better nor worse.

Fearless concurrency

“Rust makes it impossible to accidentally share state between threads. Use any concurrency approach you like and you’ll still get Rust’s strong guarantees“

In general, Rust suffers from the same issues with concurrency as any other language does. For embedded software contexts, this includes:

  • common multithreading,
  • multi-core processors and
  • dealing with interrupt handlers and interruptible code.

Multithreading means having a single processor which swaps between the executed programs and/or parts of them, e.g. the same processor executes the main program and a worker thread quasi-simultaneously via a time-sharing model. Shared data or state must not be compromised. Multi-core processors take this to the next level, as both parts could run at exactly the same time on distinct cores, sharing memory and possibly caches.

These issues are not limited to the embedded world, but at least the third variant is more commonly found in operating systems and device drivers, even though it is conceptually very similar to the others. When, for example, a peripheral button is pressed, immediate action is usually needed. When running the application in a loop, there is a chance the incoming signal is gone by the time the execution flow reaches the handling code, or handling it just takes more time than it should. Instead, interrupts and interrupt handlers are used. When they share state or data with the main application, we run into the same well-known concurrency issues.

The ways to handle these issues are in general very much the same as known from other programming languages. The first one is not allowing any kind of concurrency and thus interrupts at all. This might be a solution for very limited tasks on microcontrollers but more complex applications need to allow interrupts. For example, an application that implements an emergency stop for a machine needs to stop immediately. There is no time for looping through the code. When both parts, the main program, and the interrupt handler, now access an unprotected state variable, maybe even on a multiprocessor system at the same time, the feared concurrency issues could happen.

To sum it up, Rust not only suffers from the same concurrency issues as other languages, it also offers the same set of mechanisms to cope with them. The most basic mechanisms are, among others, defining critical sections by disabling interrupts, using atomic data types, and using mutexes, which allow only one thread at a time to execute a certain code region exclusively.

And of course, Rust suffers from the same issues with them as well. Disabling interrupts directly, e.g. using cortex_m::interrupt::free, relies on architecture-specific code and is not directly portable.

These basic mechanisms do not prevent deadlocks. Additionally, critical sections based on disabling interrupts – and mutexes built on top of them – do not provide any safety at all on multiprocessor systems by design.

More sophisticated and specific to (embedded) Rust is how sharing peripherals is solved for device crates (crate being the Rust term for a library) generated using svd2rust. This tool creates abstractions that ensure the structs representing peripherals can only exist once at a time. It is a nice idea, but it leads to some issues when the peripheral needs to be shared, e.g. between the main application and an interrupt handler. For further information, read here.

Another attractive “third-party“ approach is using the RTIC framework, which will not be discussed here in more detail.

To summarize, Rust does not solve all the known concurrency issues in the low-level world via its language design. It offers mainly the same solutions as most languages do and adds some clever additional concepts. But it suffers from the same general pains. Its clever concepts may minimize the pain of concurrency in some situations but are no universal solution that avoids concurrency issues for all time. At least for low-level usage, Rust cannot make concurrency fearless (yet).


Portability

“Write a library or driver once, and use it with a variety of systems, ranging from very small microcontrollers to powerful SBCs.“

Portability is indeed a big issue in embedded software. This reaches from general support for different architectures and platforms to related tools and toolchains like cross-compilers. These things were the ones I had in mind and wanted to learn more about when clicking the button.  The link leads to the Embedded Rust Book one more time, more explicitly to the Portability chapter which tells me about the embedded-hal. Ok, hardware abstraction layers are a good idea in general as they introduce a layer between the actual hardware and software. But, as often mentioned in this article, this idea as well is neither new nor Rust-exclusive. 

In Rust, one central idea behind them seems to be reducing complexity and possible errors: the Hardware Abstraction Layer (HAL) provides well-defined interfaces for the central hardware components needed in embedded development, such as GPIO, serial, I2C, SPI, timers, and analog-to-digital converters. As a result, the user does not need to know how these things are implemented on a specific device, and it does not matter which device the software is written for, as the HAL specifies a well-defined interface while the implementation for specific hardware is done once by others. That is a nice approach used in many other places – especially in embedded development – but it does not answer the questions I had in mind when reading “portability“.

To answer these questions, a little searching leads to the rustc book and the rustup book (the Rust people seem to like spreading their documentation across a set of books …). The rustc book lists all currently available platforms and architectures, subdivided into three tiers, where Tier 1 means guaranteed to work, Tier 2 means guaranteed to build, and Tier 3 means not officially supported at all. When thinking about embedded devices as mostly single-board computers or microcontrollers on an ARM basis at different levels, it is a bit annoying that only ARM64 with Linux is supported at the Tier 1 level (since 31.12.2020), while most other valid targets are located in Tier 2 or even Tier 3. That is not the best prerequisite for building safe and reliable embedded or IoT devices.

The rustup book answers the remaining questions regarding cross-compiling. rustup is an officially supported toolchain multiplexer that is intended to bring the needed rustc compiler and standard libraries for possible target platforms. Bundling toolchains for cross-compiling in a central, well-maintained tool is a nice idea that avoids a mess like the one known from the various Linux toolchains for ARM. Unfortunately, this idea is not fully ready yet either: additional tools necessary to cross-compile for another target, such as a suitable linker, must still be installed manually. And that brings the potential for a mess as well.


Community-driven

“As part of the Rust open source project, support for embedded systems is driven by a best-in-class open source community with support from commercial partners.“

When thinking about this point and its description, I felt at first a bit like “Ok, cool, but what’s your point?“, especially when I clicked the read more button and found myself on the GitHub repository of the Rust Embedded Working Group. Thus, I am still not sure what to think about the emphasis on Rust being community-driven. But it brings me to a very relevant and important point: licensing. The Rust programming language and officially related projects are dual-licensed under the Apache 2.0 and MIT licenses. These are very permissive licenses that allow the use of Rust in commercial applications without having to disclose the source code. This would enable the use of Rust in most projects.


Interoperability

“Integrate Rust into your existing C codebase or leverage an existing SDK to write a Rust application.“

Combining current applications with Rust and swapping suitable parts out to take advantage of Rust’s features sounds like a perfect plan to get started and slowly move to Rust when it offers a benefit. According to the Rust embedded book, this is also possible vice versa.

It would be nicer if this interoperability were a core feature of Rust and thus available in no_std environments out of the box, but as the functionality is generally available, it is just a minor pain point. Another pain point is the dependence on the Cargo build system. It seems this use case is currently not that widespread and is thus rather poorly documented compared to other aspects of Rust in an embedded environment.

But in general, this interoperability is a very promising starting point for the use of Rust in embedded. Even if Rust is no magic bullet that makes code error-free and great all the time, it is still a promising language and has the potential to make embedded applications less error-prone through its language features. The option to combine already existing C or C++ code with Rust offers a way to migrate step by step, e.g. by putting a (maybe rather high-level) new feature or functionality into a Rust library. This could enable teams to get in contact with the language in a real-world scenario without huge risk and evaluate it for their exact use case.

And in the end?

When I started this blog post, I did it because the Rust programming language seems to be the rising star in systems and embedded programming. Rust is on its way into Linux, which is remarkable since there has never been a serious discussion about Linux drivers in any other language yet. Additionally, Google started to put more and more Rust into the Android (AOSP) tree. So I started playing around with Rust, read a lot, and found that shiny landing page on embedded Rust.

When reading my results from this deep dive above, one could think Rust is not (yet) suitable at all or a poorly designed language. But that is not the case. Rust is still a very interesting and promising language! But at least for hardcore embedded usage, mostly bare-metal programming, it is just not as mature as this shiny little embedded landing page suggests. This does not make the whole language bad, even if it is true that Rust is sometimes rather hard to learn and academic.

Looking at the current state, I could not imagine starting a bare-metal project using Rust. There are several reasons for this, and the general stability of the language is one major issue. Incompatible changes between major releases result in a high effort to keep the code up to date; otherwise, one must accept that it may not compile anymore after a rather short time. This is unacceptable for most commercial projects, but I think a system library within the AOSP could be fine right now, as big companies like Google have the time and money to keep up with the development (setting aside Python 2, which is still all over the AOSP tree for the moment).

However, when considering that the first stable version is only 6 years old, it is probably completely fine to give Rust some more years to become that nice language in the (admittedly difficult and heterogeneous) embedded environment as the shiny landing page already suggests today.

12 Comments

  1. Your article is so clear with good explanations and I remember these titles/sections on the Rust website…

    I like Rust, but sometimes I ask myself if it is ready. I did the same thing as you when I was searching for how it is used in web dev, and there it is more advanced (Rust usage is easier in web, with fewer constraints relating directly to the hardware or architectures)…

    And coming from embedded, I want to test it in the embedded area as well by rewriting one of my old libs, which is in C++ and Arduino…

    Thank you.

  2. Thank you so much for writing this blog post! It seems like an excellent overview of the current state of embedded development with Rust in 2022. As someone who mostly writes code in Python getting started with embedded Rust compared to getting more experience with C has been much less daunting. Hopefully, the language can mature well in the embedded space and become a better option in the future.

  3. I am using Rust for communicating with a modem (from an STM32), mostly over UART channels. I am using a Raspberry Pi and an STM32F103RB for different prototypes. Here are my thoughts:
    1. The Raspberry Pi version of the Rust software is significantly more idiomatic Rust code. Since you have the power of an OS behind you, I was able to easily create small threads with limited responsibilities and a message-passing mechanism – no mutexes, syncs, etc., since problematic resources were accessed in only one thread. But the RPPAL crate allows you to create two UART handlers from the same UART channel, funnily enough. You cannot clone a UART handler for safety reasons, but you can create an effective clone by calling the create_handler function twice for the same hardware resource.
    2. The bare-metal version, on the other hand, looked like C code with Rust syntax. I had to use global state because I was handling interrupts; there is no way around static variables when you are dealing with that low a level. HAL crates are still developing; their own examples are not updated to build with their own crates. Documentation is non-existent; you have to know the C HAL libraries really well to make sense of the Rust crates for the same hardware. There are literally no comments on the functions or arguments. As you mentioned, the crates are not stabilized yet, so I can’t be sure whether I will have to rewrite or, worse, redesign my code in 3 months.

    In conclusion, Rust in the embedded domain needs maybe a decade to reach the maturity of C libraries (to cover all the functionality), and its safety promises do not work as well in a bare-metal environment as they do on general-purpose OSes, by the nature of the domain.

  4. When writing concurrent code for Linux, mutexes are as necessary in Rust as they are in C/C++. What Rust brings to the table is that it refuses to compile code that incorrectly uses mutexes, or that tries to access shared state without mutexes. This is great, because concurrency bugs are rarely caused by a missing mechanism in the language, but rather by a lack of proper use of such a mechanism. Having proper use enforced at the language level is invaluable. Naively, I expected bare metal Rust to have the same advantage over C/C++ in that respect. However, after reading your article, it seems that is not the case, although I did not get exactly why. Is this why you do not believe Rust enables „fearless concurrency“ in bare metal code, or have I misunderstood something? Thanks!

    1. Hey Fred,
      thanks for your comment. As for your question, Anna-Lena is quite busy at the moment but she will get back to you as soon as possible 🙂

    2. „[..]it refuses to compile code that incorrectly uses mutexes, or that tries to access shared state without mutexes[..] concurrency bugs are rarely caused by a missing mechanism in the language, but rather by a lack of proper use of such a mechanism.“

      Yes, this is the crucial aspect that Anna-Lena misses in the article.

      You can write correct code in C, C++, or Rust; the default is what makes the difference.

      The default state in Rust is writing correct code, thanks to the borrow checker imposing the ownership rules. Coincidentally, those same rules, if broken, are the causes of data races and thread unsafety. When necessary, one can lower the restrictions in specific, limited areas of code to allow the implementation of said primitives/mechanisms, be they mutexes, spinlocks, cache flushing, or other things.

      The default state in C and C++ is that one writes possibly good or bad code; one can write correct code, but explicit human effort needs to be spent to write and maintain correct code in its correct state.

      This has been proven in practice by Mozilla’s reason for creating Rust: they needed to write and maintain correct, trivially parallel code – the CSS renderer. They wrote it 3 times in C++, initially correct, but failed to keep it that way. In Rust, it was written correctly and stayed correct from the first try.

      „Having proper use enforced at the language level is invaluable. Naively, I expected bare metal Rust to have the same advantage over C/C++ in that respect.“

      It does. What Anna-Lena is complaining about is that, depending on your platform, writing code for one core in Rust does not guarantee that it will not run into multi-core-specific issues just by virtue of using Rust on that one core, or even on multiple ones. As with C or C++, you need something else to reap the benefits of fearless concurrency across multiple cores.

      Yet again, the missed point is that if a shared resource, say a shared memory, is wrapped into a type that ensures synchronization, then, as you correctly pointed out and as seen on Linux, you would be prevented at compile time from using that shared memory incorrectly.

      As a side note, the one funny thing about people complaining about the borrow checker is that this is a very common and real experience for people who haven’t yet understood what the borrow checker does for them, because they think “I know this is right, I’ve done this previously in XYZ language“. Yes, but you fail to understand that the borrow checker allows you to remove the mental load of the minutiae and focus on the logic you want to achieve. I know, I did it myself, too.

  5. Hi Fred,
    yes, Rust has the borrow checker and other language concepts to avoid accidentally sharing resources (e.g. between interrupts and the main loop). And yes, this is new in Rust. But it is just another abstraction around existing concepts (critical sections, locks, atomics) for handling concurrency. It works, but it adds extra conceptual and cognitive overhead for the programmer to handle.
    Even the Rust documentation itself says at the end of the concurrency chapter: „Whew! This is safe, but it is also a little unwieldy. Is there anything else we can do?“
    and then points to other mechanisms like message passing and to real-time operating systems (RTOS).
    Their claim that Rust makes concurrency for embedded „fearless“ is exaggerated, in my opinion and as written down by my coworker Anna-Lena in this blog post. Even with Rust, concurrency is hard and makes developers sweat 😉

  6. This article popped up in my news feed and I’m glad to see that Rust in AOSP is being mentioned. I work on Android, and since last year we’ve doubled down on Rust. We’ve put out a full Rust course, which also covers bare-metal development.

    It’s a super exciting time to be working with Rust. I see it growing in popularity in so many areas: the Linux kernel, the Windows kernel, Android, lots of open source projects…

  7. „rust has the borrow checker and other language concepts to avoid accidentally sharing resources[..] It works but adds extra conceptually and cognitive overhead for the programmer to handle.“

    The borrow checker does indeed add some concepts, but it is actually the other way around on the „cognitive overhead for the programmer to handle“ claim. It actually allows you to unload the data-access-correctness thinking from your head, which, when writing in another language, you have to think about yourself, or run the risk of writing possibly buggy or thread-unsafe code.

    „Even the Rust documentation itself says at the end of the concurrency chapter: „Whew! This is safe, but it is also a little unwieldy. Is there anything else we can do?““

    That book actually goes through creating something from scratch that is expected to be correct, guided by knowledge about the hardware and by what the language offers, to build a solid basis – so that even on constrained systems like bare-metal microcontrollers you can now write correct and thread-safe (or at least main-loop/interrupt-safe) code.

    That’s why the next section in the book talks about RTIC, which anyone can use already without having to go through the pain of writing it from scratch.

    It is in the nature of embedded programming to have to think about what is particular to your own platform/microcontroller, in case it has some very specific hardware modules. But Rust allows one to focus only on implementing the differentiating part of the „magic“ nature of your device, mostly by creating your own types and traits that are subject to the same rules, while the generic part of the code can be reused and fearlessly employed in single- or multi-threaded code, or even multi-core code, thanks to the expressiveness of the language, which can tell the compiler which resources are thread-safe, or even multi-core-safe.
