This blog post evaluates three different Federated Learning frameworks and the concepts they use to achieve a collaborative training. Basic knowledge about Federated Learning is required to understand this blog post. If you need a refresher, check out the previous part of this series “Federated Learning: A Guide to Collaborative Training with Decentralized Sensitive Data – Part 1” .
Federated Learning enables you to train Machine Learning models on sensitive data in a privacy-preserving way: multiple participants collaboratively train a model with their sensitive data, which never has to be centralized. This technique makes numerous previously unusable data sources available for collaborative Machine Learning. When implementing a Machine Learning project with Federated Learning, a framework can take on a variety of tasks and thereby support the development process by providing all necessary features. But what requirements does a Federated Learning framework need to comply with? Which features are most important to your use case, and which frameworks support them? These questions are answered in the following evaluation of Federated Learning frameworks.
Ideally, a Federated Learning framework should be ready for use out of the box, meaning you don’t need any additional engineering except the deployment to get your setup up and running. Of course, this is unlikely in any real Machine Learning project, so you should choose the framework most suitable for your project requirements. Whether your project is a small proof of concept, a research experiment or a full-grown customer project, its requirements can be defined by answering the five following questions:
- Where does the training take place: edge devices, smartphones, personal computers or cloud servers? Should the training be executed on remote devices or is a local execution sufficient for an experimental setup?
- Does your data require protection by Differential Privacy to guarantee data privacy? (Your data should always be protected by Differential Privacy, except in experimental setups)
- Do you need to use Secure Aggregation to prevent the curator from spying on the participants’ model updates?
- Is GPU execution needed to speed up training with a large number of samples or large models?
- Do you need a framework with a broad community and good documentation that allows quick adaptation to the new framework and helps with debugging, development and issues?
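To make the Secure Aggregation criterion more concrete: its core idea can be illustrated with pairwise additive masking, where random masks cancel out in the sum so that the curator only ever sees the aggregate, never an individual update. The following is a toy sketch of this idea in plain Python, not any framework’s API:

```python
import random

def mask_updates(updates):
    """Mask each participant's update so individual values are hidden.

    Each pair of participants shares a random mask: one adds it, the
    other subtracts it, so all masks cancel out in the sum.
    """
    n = len(updates)
    masked = list(updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = random.uniform(-100, 100)  # pairwise shared mask
            masked[i] += m
            masked[j] -= m
    return masked

updates = [0.5, -1.2, 2.0]       # each participant's private model update
masked = mask_updates(updates)
# The curator sums the masked values; the pairwise masks cancel out,
# so the aggregate is correct while single updates stay hidden.
assert abs(sum(masked) - sum(updates)) < 1e-9
```

In a real protocol the pairwise masks are derived from key agreement between the participants rather than generated in one place, but the cancellation principle is the same.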
For the purpose of this evaluation, three open source Federated Learning frameworks are introduced and compared. The number of available frameworks is very limited, and none of the three selected fulfills all five evaluation criteria introduced above, so there is currently no single perfect framework. While this was a surprising insight for me, you can at least find a complete overview of the most promising frameworks in the following.
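Before diving into the individual frameworks, it helps to recall what they all automate: aggregating locally trained models at the curator, typically some variant of federated averaging. A minimal, framework-free sketch of the weighted aggregation step (plain Python, purely illustrative) looks like this:

```python
def federated_average(client_weights, client_sizes):
    """Average client model weights, weighted by local dataset size.

    Weights are plain lists of floats here; real frameworks use tensors
    and handle all communication between curator and participants.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[k] * s for w, s in zip(client_weights, client_sizes)) / total
        for k in range(n_params)
    ]

# Two participants with different amounts of local data:
clients = [[1.0, 2.0], [3.0, 4.0]]
sizes = [10, 30]
print(federated_average(clients, sizes))  # -> [2.5, 3.5]
```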
TensorFlow’s approach to Federated Learning is called TensorFlow Federated (TFF). Currently, it supports local execution for experimental setups and distribution of the participants to dedicated platforms using Docker containers. An additional library named TensorFlow Privacy enables training on sensitive data using differentially private algorithms. TFF supports neither Secure Aggregation for improved security nor execution on GPUs. As it integrates well with the existing modular library concept of TensorFlow, it benefits from TensorFlow’s broad community, although there is no community specifically dedicated to TFF.
PySyft integrates Federated Learning into PyTorch, the Machine Learning framework most widely used in the science and research community. It offers the ability to distribute workers as Docker containers on any platform that supports Docker. It does not yet come with an implementation of Differential Privacy, but one is currently under development. However, it supports Secure Aggregation and execution on GPUs and is therefore well suited for performance testing in real-world scenarios. The community and support behind the framework itself are quite active.
PaddleFL is developed by the Chinese search engine operator Baidu and is based on its Machine Learning framework Paddle (“Parallel Distributed Deep Learning“). It fulfills all technical aspects of the above criteria: it is highly distributable, loads data on the remote workers themselves, and supports Differential Privacy, Secure Aggregation and execution on GPUs. The documentation is available in English, but the community around Paddle communicates mainly in Chinese. Unfortunately, the number of available code examples is quite small compared to the numerous features in the repository.
Concepts: Integrated Frameworks vs Static Frameworks
A major advantage of using TensorFlow Federated or PySyft is that they integrate very well into the existing syntax of their specific Machine Learning framework. In fact, they allow the use of the native syntax of their base framework on remote data by using remote procedure calls. All tensor operations, training and loss functions can be executed on remote data using the familiar syntax. Thus, the code only needs to be modified in a single central file, which is then executed and controls the computation on the data located at the participants. This enables fast development, as changes only need to be made in one central file. On the other hand, it limits the available operations to those provided by the specific framework: the inclusion of external libraries for data preprocessing or Differential Privacy is not possible with remote procedure calls.
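The pointer-based remote-procedure-call idea can be illustrated with a small toy example. Note that this is a conceptual sketch with made-up class names, not the actual TFF or PySyft API: the curator’s central script operates on a pointer, while the data and the computation stay with the participant.

```python
class Participant:
    """Holds private data and executes operations on request."""
    def __init__(self, data):
        self._data = data

    def execute(self, op):
        self._data = [op(x) for x in self._data]

    def aggregate(self):
        # Only an aggregate ever leaves the participant, not raw data.
        return sum(self._data) / len(self._data)

class Pointer:
    """What the curator's central file works with instead of raw data."""
    def __init__(self, participant):
        self._p = participant

    def mul(self, factor):
        self._p.execute(lambda x: x * factor)  # executed remotely
        return self

    def get(self):
        return self._p.aggregate()

remote = Participant([1.0, 2.0, 3.0])  # lives on the participant's machine
ptr = Pointer(remote)                  # lives in the curator's script
print(ptr.mul(2).get())                # -> 4.0 ; the raw data never moved
```

In PySyft and TFF this forwarding is handled by the framework itself, which is exactly why only operations known to the framework can be executed remotely.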
The concept of PaddleFL is based on static execution using a Multiple Program Multiple Data (MPMD) paradigm. All participants and the curator are supplied with individual program files specifically generated by the configuration scripts. These configurations define the data to be loaded, the ML model used for training and the techniques applied during training. After the program files are distributed, the participants synchronize, and local training takes place when a corresponding event message is received. This concept is stable and fast because each participant works independently and does not have to be constantly supplied with operations by the curator.
Nevertheless, the creation and distribution of program files makes development more complicated compared to the remote procedure calls used by TFF and PySyft: each time parts of the training change, new configurations have to be distributed to all participants. In addition, PaddleFL does not allow the inclusion of external libraries. This is less of a problem, however, because it already supports all technical aspects of the above criteria for Federated Learning.
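To illustrate the static MPMD concept, here is a conceptual sketch of a participant’s pre-generated program reacting to event messages. All names are hypothetical and heavily simplified; this is not PaddleFL’s actual API:

```python
def participant_program(config, inbox):
    """One participant's pre-generated program in an MPMD setup.

    The participant only reacts to event messages instead of being
    driven by remote procedure calls from the curator.
    """
    model = config["initial_model"]           # defined in the program file
    for event in inbox:
        if event["type"] == "START_ROUND":
            model = [w - 0.1 for w in model]  # placeholder for local training
        elif event["type"] == "SEND_UPDATE":
            yield model                       # hand the update to the curator
        elif event["type"] == "STOP":
            return

inbox = [{"type": "START_ROUND"}, {"type": "SEND_UPDATE"}, {"type": "STOP"}]
updates = list(participant_program({"initial_model": [1.0, 2.0]}, inbox))
print(updates)  # -> [[0.9, 1.9]]
```

Changing the training logic means regenerating and redistributing this program file to every participant, which is precisely the development overhead described above.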
Three frameworks were introduced based on the initially defined evaluation criteria for Federated Learning projects. To give a compact overview, the results are summarized in the following Table 1. To evaluate the community support of each framework, the table also documents the number of GitHub repository stars. Please keep in mind that all of these frameworks are currently under development and features can be added at any time.
| | TensorFlow Federated | PySyft | PaddleFL |
|---|---|---|---|
| Based on ML framework | TensorFlow | PyTorch | PaddlePaddle |
| Remote data loading | No | No | Yes |
| Supported systems | Local simulation and Docker container distribution | Local simulation and Docker container distribution | OS and K8s (Docker containers), mobile, embedded and Internet of Things devices |
| Currently supports remote workers | No | Yes | Yes |
| Differential Privacy | Yes | Not yet | Yes |
| Secure Aggregation (SMPC) | No | Yes | Yes |
| Resources for code examples | Just a few very basic examples in the documentation and repository | Tutorials and examples in documentation and repository, plus blog posts and community projects | Few examples in the repository |
| Documentation & community support | Only developer documentation and Stack Overflow community support | Good documentation and good community support | Documentation in English; most of the community, issues and the website are in Chinese |
| GitHub stars (framework / base framework) | 1.1 k TensorFlow Federated / 145 k TensorFlow | 6.1 k PySyft / 41 k PyTorch | 93 PaddleFL / 11.5 k Paddle |

Table 1: Comparison of the three Federated Learning frameworks
TensorFlow Federated & PySyft
What are the pros and cons of these three frameworks, and why choose one over another? TFF’s and PySyft’s base frameworks, TensorFlow and PyTorch, are widely known in the Machine Learning community, which provides them with good support. Chances are that you are already familiar with one of them. A real difference is that TFF already comes with an implementation of Differential Privacy, which allows training on sensitive data. Both allow the distribution of participants as containers to remote locations. In all other criteria PySyft is superior: it integrates Secure Aggregation with SMPC and has a community focused on its Federated Learning implementation.
On the other side, both PySyft and TFF lack a feature that makes them unsuitable for real-world applications: the data used for training cannot be loaded by the remote worker itself, but must be sliced and distributed by the central curator. This is, of course, contrary to the paradigm of Federated Learning, which consists in never centralizing data in order to preserve privacy. Consequently, these frameworks are only suitable for experimental setups until they are able to load remote data and, in the case of PySyft, also include Differential Privacy in the training process.
PaddleFL supports all technical criteria required for Federated Learning. It comes with several Federated Learning techniques, including Differential Privacy and Secure Aggregation, and can be distributed across multiple platforms. The major problem is the lack of English resources (apart from the documentation and a few examples in the repository); the community behind it also communicates mostly in Chinese. It is therefore difficult to verify the suitability of the framework without testing it extensively in a real-world scenario.
Custom-made Federated Learning Framework
The number of available frameworks for Federated Learning is quite small, and none enables Federated Learning in a fully sufficient way. Thus it might be an option to implement your own Federated Learning solution. This is beneficial in many ways: you can choose your preferred base Machine Learning framework, integrate arbitrary libraries, and adapt the solution to your existing architecture and hardware. On the other hand, it requires time and knowledge to implement, but it currently may be the only solution suitable for your project, especially if it should run in production.
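As a starting point for such a custom solution, a single training round can be sketched in a few lines of plain Python. Local training is simulated here by one gradient step on a toy objective (fitting a scalar to the mean of the local data); a real implementation would plug in an actual Machine Learning framework and network communication:

```python
def local_training(weights, data, lr=0.1):
    """One gradient step of mean-squared error on the local data."""
    target = sum(data) / len(data)
    grad = [2 * (w - target) for w in weights]
    return [w - lr * g for w, g in zip(weights, grad)]

def training_round(global_weights, participants):
    """Broadcast weights, train locally, and average the results."""
    updates = [local_training(global_weights, data) for data in participants]
    n = len(updates)
    return [sum(u[k] for u in updates) / n for k in range(len(global_weights))]

participants = [[1.0, 2.0], [3.0, 5.0]]  # private datasets, never shared
weights = [0.0]
for _ in range(5):
    weights = training_round(weights, participants)
print(weights)  # converges toward the average of the local means (2.75)
```

On top of such a loop you are free to add exactly the pieces your project needs, for example Differential Privacy noise before sending updates or Secure Aggregation between the participants.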
Generally, the three frameworks presented here can be separated into two categories: integrated frameworks and static frameworks.
Integrated frameworks use remote procedure calls to integrate into pre-existing Machine Learning frameworks with little effort by adopting their native syntax. They allow fast adjustment of code, but can only execute operations that are provided by the framework.
Static frameworks use the Multiple Program Multiple Data paradigm to distribute logic among the participants. This makes it more difficult to adapt the training during development, because changes must be communicated to all participants. Nevertheless, they are faster and more robust in their execution, as they do not require a continuous supply of instructions.
The integrated frameworks, PySyft and TFF, currently lack essential technical features, which makes them suitable for experimental setups only. PaddleFL provides all technical features but lacks English resources and community support.
Additionally, you could implement the Federated Learning process yourself to adapt it to your specific use case. This gives you the possibility to tailor the implementation to your project needs and use any Machine Learning framework as a base framework.
To use Federated Learning in the research project KOSMoS, we are developing our own Federated Learning framework. Part 3 of this blog post series will introduce and demonstrate this framework. Stay tuned to read more about the challenges of this implementation.
This blog post compares three Federated Learning frameworks and their underlying techniques. The research behind this started in my bachelor thesis “Evaluation of Federated Learning in Deep Learning” at the inovex Lab and meanwhile continues in the research project “KOSMoS – Collaborative Smart Contracting Platform for Digital Value Networks”, where we currently implement Federated Learning for predictive maintenance based on production machine data. The research project is funded by the Federal Ministry of Education and Research (BMBF) under reference number 02P17D026 and supervised by Projektträger Karlsruhe (PTKA). The responsibility for the content is with the authors.
- Christian Becker (2020): Federated Learning: A Guide to Collaborative Training with Decentralized Sensitive Data – Part 1, inovex Blog
- TensorFlow Federated – https://www.tensorflow.org/federated
- PySyft by OpenMined – https://www.openmined.org/
- PaddleFL by Baidu – https://paddlefl.readthedocs.io/en/stable/
- KOSMoS at inovex – https://www.inovex.de/en/our-services/data-science-deep-learning/collaborative-smart-contracting-platform-kosmos/
- KOSMoS official – https://www.kosmos-bmbf.de/
- Marisa Mohr, Christian Becker, Ralf Möller, Matthias Richter (2020): Towards Collaborative Predictive Maintenance Leveraging Private Cross-Company Data, in: Lecture Notes in Informatics, Vol. 307, Gesellschaft für Informatik e.V., Bonn, 2020. In press.
- Christian Becker, Marisa Mohr (2020): Federated Machine Learning: über Unternehmensgrenzen hinaus aus Produktionsdaten lernen, in: atp magazin, Edition 5, pp. 18–20, 2020.