
Federated Learning: Frameworks for Decentralized Private Training – Part 2


This blog post evaluates four different Federated Learning frameworks and the concepts they use to achieve collaborative training. Basic knowledge of Federated Learning is required to follow along. If you need a refresher, check out the previous part of this series, “Federated Learning: A Guide to Collaborative Training with Decentralized Sensitive Data – Part 1” [1].

Introduction

Federated Learning enables you to train Machine Learning models on sensitive data in a privacy-preserving way: multiple participants collaboratively train a model, each contributing their own sensitive data without ever disclosing it. This technique makes numerous previously unusable data sources available for collaborative Machine Learning. When implementing a Machine Learning project with Federated Learning, a framework can take on a variety of tasks for you and thereby support the development process by providing the necessary features. But which requirements does a Federated Learning framework need to comply with? Which features are most important for your use case, and which frameworks support them? These questions are answered in the following evaluation of Federated Learning frameworks.

Evaluation Criteria

Ideally, a Federated Learning framework should be ready to use out of the box, meaning you don’t need any additional engineering beyond deployment to get your setup up and running. Of course, this is unlikely in any real Machine Learning project, so you should choose the framework that best fits your project requirements. Whether your project is a small proof of concept, a research experiment or a full-grown customer project, its requirements can be captured by answering the following five questions:

  1. Where does the training take place: on edge devices, smartphones, personal computers or cloud servers? Must the training be executed on remote devices, or is local execution sufficient for an experimental setup?
  2. Does your data require protection by Differential Privacy to guarantee data privacy? (Outside of experimental setups, your data should always be protected by Differential Privacy.)
  3. Do you need Secure Aggregation to prevent the curator from spying on the participants’ model updates? (A toy sketch of the idea follows this list.)
  4. Is GPU execution needed to speed up training with a large number of samples or large models?
  5. Do you need a broad community and good documentation that allow quick adaptation to the new framework and help with debugging, development and issues?

Frameworks

For the purpose of this evaluation, multiple open-source Federated Learning frameworks are introduced and compared. Because the number of available frameworks is very limited, four of them were selected. None of these frameworks fulfills all five evaluation criteria introduced above, so there is currently no single perfect framework. While this was a surprising insight for me, you will at least find a complete overview of the most promising frameworks below.

TensorFlow Federated

TensorFlow’s approach to Federated Learning is called TensorFlow Federated (TFF) [2]. Currently, it supports local execution for experimental setups as well as distributing participants to dedicated platforms using Docker containers. An additional TensorFlow library named TensorFlow Privacy enables training on sensitive data using differentially private algorithms. TFF does not support Secure Aggregation for improved security and does not run on GPUs. As it integrates well with the existing modular library concept of TensorFlow, it benefits from TensorFlow’s broad community, although there is no community specifically dedicated to TFF.
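
As a flavor of how TFF is used, here is a minimal sketch of Federated Averaging on simulated clients, loosely following the public TFF tutorials. The exact API differs between TFF versions, so treat names like `build_federated_averaging_process` as a snapshot rather than a stable reference.

```python
import tensorflow as tf
import tensorflow_federated as tff

# Tiny simulated federated dataset: two clients sharing the same random data.
example_dataset = tf.data.Dataset.from_tensor_slices((
    tf.random.normal([32, 784]),
    tf.random.uniform([32], maxval=10, dtype=tf.int64),
)).batch(8)
federated_train_data = [example_dataset, example_dataset]

def model_fn():
    # A plain Keras model, wrapped so TFF can run it on each client.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=example_dataset.element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    )

# Build a Federated Averaging process and run a few simulated rounds.
process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.1),
)
state = process.initialize()
for _ in range(5):
    state, metrics = process.next(state, federated_train_data)
```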

PySyft

PySyft integrates Federated Learning into PyTorch, the Machine Learning framework most widely used in the science and research community [3]. It offers the ability to distribute workers as Docker containers on any platform that supports Docker. It does not yet come with an implementation of Differential Privacy, although one is currently under development. However, it supports Secure Aggregation and execution on GPUs and is therefore well suited for performance testing in real-world scenarios. The community and support behind the framework are quite active.
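
The following minimal sketch shows PySyft’s remote-execution style, assuming the PySyft 0.2.x API that was current when this comparison was written (the library has since been restructured):

```python
import torch
import syft as sy

# Hook PyTorch so tensors gain .send()/.get() for remote execution.
hook = sy.TorchHook(torch)

# Virtual workers simulate remote participants in-process.
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")

# The tensor lives on alice; locally we only hold a pointer to it.
x = torch.tensor([1.0, 2.0, 3.0]).send(alice)

# Familiar PyTorch syntax -- each operation becomes a remote procedure call.
y = (x + x) * 2

# Retrieve the result from the remote worker.
print(y.get())  # tensor([ 4.,  8., 12.])
```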

PaddleFL

PaddleFL is developed by the Chinese search engine operator Baidu and is based on its Machine Learning framework Paddle (“Parallel Distributed Deep Learning”) [4]. It covers all technical aspects of the above criteria: it is highly distributable and loads data remotely, and it supports Differential Privacy, Secure Aggregation and execution on GPUs. The documentation is available in English, but the community around Paddle communicates mainly in Chinese. Unfortunately, the number of available code examples is quite small compared to the numerous features in the repository.

Flower

Flower is developed by the German startup Adap [9]. It is designed to be framework-agnostic and thus supports arbitrary ML frameworks like PyTorch, TensorFlow/Keras or JAX. It is built to support real-world setups with large numbers of clients [10]. By providing a template API, it allows users to easily “federate” existing ML pipelines. Execution on GPUs is supported, as is containerization using tools like Docker. Differential Privacy and Secure Aggregation are not yet implemented but seem to be on the roadmap. The user community around Flower is growing quickly [11], and the development community behind it is quite active as well.
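
To illustrate the template API, here is a sketch of a Flower client wrapping an existing Keras pipeline, based on Flower’s public examples at the time of writing (method signatures may have changed in newer versions):

```python
import flwr as fl
import numpy as np
import tensorflow as tf

# Stand-in for an existing pipeline: a tiny Keras model and local data.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
])
model.compile("sgd", "sparse_categorical_crossentropy", metrics=["accuracy"])
x_train = np.random.rand(256, 784).astype("float32")
y_train = np.random.randint(0, 10, size=256)

class KerasClient(fl.client.NumPyClient):
    def get_parameters(self):
        return model.get_weights()

    def fit(self, parameters, config):
        # Receive global weights, train locally, return the updated weights.
        model.set_weights(parameters)
        model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
        return model.get_weights(), len(x_train), {}

    def evaluate(self, parameters, config):
        model.set_weights(parameters)
        loss, accuracy = model.evaluate(x_train, y_train, verbose=0)
        return loss, len(x_train), {"accuracy": accuracy}

# Connect this client to a running Flower server.
fl.client.start_numpy_client("localhost:8080", client=KerasClient())
```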

Concepts: Integrated Frameworks vs Static Frameworks

A major advantage of using TensorFlow Federated or PySyft is that they integrate very well into the existing syntax of their respective Machine Learning frameworks. In fact, they allow using the native syntax of their base framework on remote data by means of remote procedure calls. All tensor operations, training and loss functions can be executed on remote data using the familiar syntax. The code therefore only needs to be modified in a single central file, which is then executed and controls the computation on the data located at the participants. This enables fast development, as changes only need to be made in one central place. On the other hand, it limits the available operations to those provided by the specific framework: including external libraries for data preprocessing or Differential Privacy is not possible via remote procedure calls.

The concept of PaddleFL is based on static execution following a multiple-program-multiple-data (MPMD) paradigm. All participants and the curator are supplied with individual program files specifically generated by configuration scripts. These configurations define the data to be loaded, the ML model used for training and the techniques applied during training. After the program files have been distributed, the participants synchronize, and local training takes place when a corresponding event message is received. This concept is stable and fast because each participant works independently and does not have to be continuously supplied with operations by the curator.

Nevertheless, generating and distributing program files makes development more cumbersome than the remote procedure calls used by TFF and PySyft: every time parts of the training change, new configurations have to be distributed to all participants. In addition, PaddleFL does not allow the inclusion of external libraries. This is less of a problem, however, because PaddleFL already supports all technical aspects of the above criteria out of the box.

Flower is also based on static execution using a multiple-program-multiple-data paradigm. The Flower server supports any ML framework and supplies a default or user-defined federation strategy for orchestrating and synchronizing the clients. Clients are stand-alone applications that connect to the server via a default protocol. Flower clients can either be built using a high-level SDK (available for Python) or by implementing the protocol directly (e.g. for optimized client implementations). This architecture is suitable for production use cases because it allows arbitrary libraries on the client side and lets you combine Flower with existing ML frameworks and pipelines to build federated systems. For development and research purposes, Flower can simply simulate whole systems on the developer’s machine. In production, a change of the model architecture requires a deployment to each client.
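
On the server side, orchestration boils down to picking a strategy and starting the server. A minimal sketch, with parameter names as in Flower’s FedAvg strategy at the time of writing, could look like this:

```python
import flwr as fl

# A built-in strategy orchestrates the rounds; the parameter values here
# are illustrative assumptions, not recommendations.
strategy = fl.server.strategy.FedAvg(
    fraction_fit=0.5,         # sample 50 % of connected clients per round
    min_fit_clients=2,        # never train with fewer than two clients
    min_available_clients=2,  # wait until at least two clients are connected
)

# Start the server; clients connect via gRPC using the default protocol.
fl.server.start_server(
    server_address="[::]:8080",
    config={"num_rounds": 3},
    strategy=strategy,
)
```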

Comparison

Four frameworks were introduced and assessed against the initially defined evaluation criteria for Federated Learning projects. To give a compact overview, the results are summarized in Table 1 below. To gauge the community support of each framework, the table also lists the number of GitHub repository stars. Please keep in mind that all of these frameworks are under active development, and features can be added at any time.

| | TensorFlow Federated | PySyft | PaddleFL | Flower |
| --- | --- | --- | --- | --- |
| Concept | integrated | integrated | static | static |
| Based on ML framework | TensorFlow | PyTorch | PaddlePaddle | Any (TensorFlow, PyTorch, JAX, ...) |
| Remote data loading | No | No | Yes | Yes |
| Supported systems | Local simulation and Docker container distribution | Local simulation and Docker container distribution | OS and K8s (Docker containers), mobile, embedded and IoT devices | OS and K8s (Docker containers), mobile, embedded and IoT devices |
| Currently supports remote workers | No | Yes | Yes | Yes |
| GPU support | No | Yes | Yes | Yes |
| Differential Privacy | Yes | Not yet | Yes | Not yet |
| Secure Aggregation (SMPC) | No | Yes | Yes | Not yet |
| Resources for code examples | Just a few very basic examples in the documentation and repository | Tutorials and examples in documentation and repository, plus blog posts and community projects | Few examples in the repository | Tutorials and examples in the documentation, boilerplate project examples in the repository |
| Documentation & community support | Only developer documentation and Stack Overflow community support | Good documentation and good community support | Documentation in English; most of the community, the issues and the website are in Chinese | Good documentation and good community support, e.g. through Slack |
| GitHub stars (framework / base framework) | 1.1 k / 145 k (TensorFlow) | 6.1 k / 41 k (PyTorch) | 93 / 11.5 k (Paddle) | 233 / plenty of stars from your favourite ML framework |

Table 1: Comparison of the four Federated Learning frameworks.

TensorFlow Federated & PySyft

What are the pros and cons of these two frameworks, and why choose one over the other? TFF’s and PySyft’s base frameworks, TensorFlow and PyTorch, are widely known in the Machine Learning community, which provides them with good support. Chances are you are already familiar with one of them. The key difference is that TFF already comes with an implementation of Differential Privacy, which allows training on sensitive data. Both allow distributing participants as containers to remote locations. In all other criteria PySyft is superior, be it by integrating Secure Aggregation via SMPC or by having a community focused on its Federated Learning implementation.

On the other side, both PySyft and TFF lack a feature that makes them unsuitable for real-world applications: the data used for training cannot be loaded by the remote workers themselves but must be sliced and distributed by the central curator. This is, of course, contrary to the paradigm of Federated Learning, which consists in never centralizing data in order to preserve privacy. Consequently, these frameworks are only suitable for experimental setups until they are able to load remote data and, in the case of PySyft, include Differential Privacy in the training process.

PaddleFL

PaddleFL supports all technical criteria required for Federated Learning. It comes with several Federated Learning techniques, including Differential Privacy and Secure Aggregation, and can be distributed across multiple platforms. The major problem is the lack of English resources (apart from the documentation and a few examples in the repository); the community behind it also communicates mostly in Chinese. This makes it difficult to assess the framework’s suitability without testing it extensively in a real-world scenario.

Flower

Flower is only about one year old but looks promising: it supports almost all technical criteria required for Federated Learning or makes it easy to integrate the missing features via external libraries. As of today, Differential Privacy and Secure Aggregation are not yet built into the framework but seem to be on the roadmap (until then, Differential Privacy can be added through third-party libraries). Flower focuses on providing a flexible FL framework by supporting almost all ML frameworks, such as TensorFlow, PyTorch, PaddlePaddle or JAX, as well as arbitrary third-party libraries. This clear focus gives hope that Differential Privacy and Secure Aggregation will soon land in the framework itself. The community behind it is growing and approachable.
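
Until first-party support lands, a client can, for instance, clip and noise its update before returning it. The following toy sketch shows the Gaussian-mechanism idea behind DP-FedAvg-style training; it is illustrative only, and a real deployment needs a vetted DP library with proper privacy accounting:

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip a model update to a maximum L2 norm and add Gaussian noise.

    Toy illustration of the Gaussian mechanism; calibrating noise for an
    actual (epsilon, delta) guarantee requires a proper DP library.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Compute the global L2 norm across all weight arrays of the update.
    flat = np.concatenate([w.ravel() for w in update])
    norm = np.linalg.norm(flat)
    scale = min(1.0, clip_norm / (norm + 1e-12))
    noisy = []
    for w in update:
        clipped = w * scale
        noise = rng.normal(0.0, noise_multiplier * clip_norm, size=w.shape)
        noisy.append(clipped + noise)
    return noisy

# Usage: privatize a (hypothetical) two-layer update before sending it.
noisy_update = privatize_update([np.ones((3, 2)), np.zeros(5)],
                                rng=np.random.default_rng(0))
```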

Custom made Federated Learning Framework

The number of available frameworks for Federated Learning is quite small, and none enables Federated Learning in a fully sufficient way. It might therefore be an option to implement your own Federated Learning solution. Flower fits very well into such a self-programmed approach: it provides the fundamentals of federated communication and only requires you to implement the server and client algorithms. Choose your preferred base Machine Learning framework, integrate any library you need, and adapt it to your existing architecture and hardware. Implementing Differential Privacy and Secure Aggregation yourself takes time and expertise, but this may currently be the only solution suitable for your project, especially if it has to run in production.
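
As a starting point for such a self-programmed approach, server-side logic can be customized by subclassing one of Flower’s strategies. The method signature below reflects the Flower API at the time of writing and is meant as a sketch, not a finished Secure Aggregation implementation:

```python
import flwr as fl

class AuditingFedAvg(fl.server.strategy.FedAvg):
    """FedAvg with a hook for custom server-side logic per round."""

    def aggregate_fit(self, rnd, results, failures):
        # Inspect, filter or log client results here before averaging --
        # e.g. as a starting point for masking-based Secure Aggregation.
        print(f"Round {rnd}: aggregating {len(results)} client updates")
        return super().aggregate_fit(rnd, results, failures)
```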

TL;DR

Generally, the four frameworks presented here can be separated into two categories: integrated frameworks and static frameworks.

Integrated frameworks use remote procedure calls to integrate into pre-existing Machine Learning frameworks with little effort by adopting their native syntax. They allow fast adjustments to the code, but in return can only execute operations that are provided by the framework.

Static frameworks use the multiple-program-multiple-data paradigm to distribute logic among the participants. This makes it harder to adapt the training during development, because changes must be communicated to all participants. In return, they are faster and more robust in their execution, as they do not require a continuous supply of instructions.

The integrated frameworks, PySyft and TFF, currently lack essential technical features, which makes them suitable for research and experimental setups only. PaddleFL supplies all technical features but lacks English resources and community. Flower seems to be the most suitable framework for production setups, although it still lacks first-party implementations of Differential Privacy and Secure Aggregation.

To use Federated Learning in the research project KOSMoS [5][6], we decided on the Flower Federated Learning framework. Part 3 of this blog post series will introduce and demonstrate it. Stay tuned to read more about the challenges of this implementation.

Acknowledgements

This blog post compares four Federated Learning frameworks and their underlying concepts. The research behind it started in my bachelor thesis “Evaluation of Federated Learning in Deep Learning” at the inovex Lab and now continues in the research project “KOSMoS – Collaborative Smart Contracting Platform for Digital Value Networks”, where we currently implement Federated Learning for predictive maintenance based on production machine data [7],[8]. The research project is funded by the German Federal Ministry of Education and Research (BMBF) under reference number 02P17D026 and supervised by Projektträger Karlsruhe (PTKA). The responsibility for the content lies with the authors.

 

[1] Christian Becker (2020) Federated Learning: A Guide to Collaborative Training with Decentralized Sensitive Data – Part 1, inovex Blog

[2] TensorFlow Federated – https://www.tensorflow.org/federated

[3] PySyft by Open Mined – https://www.openmined.org/

[4] PaddleFL by Baidu – https://paddlefl.readthedocs.io/en/stable/

[5] KOSMoS at inovex – https://www.inovex.de/en/our-services/data-science-deep-learning/collaborative-smart-contracting-platform-kosmos/

[6] KOSMoS official – https://www.kosmos-bmbf.de/

[7] Marisa Mohr, Christian Becker, Ralf Möller, Matthias Richter (2020) Towards Collaborative Predictive Maintenance Leveraging Private Cross-Company Data, published in: Lecture Notes in Informatics, Vol. 307, Gesellschaft für Informatik e.V., Bonn, 2020. In press.

[8] Christian Becker, Marisa Mohr (2020) Federated Machine Learning: über Unternehmensgrenzen hinaus aus Produktionsdaten lernen, published in atp magazin, issue 5, pp. 18–20, 2020.

[9] Flower by Adap – https://flower.dev & https://adap.com

[10] Daniel J. Beutel, Taner Topal, Akhil Mathur, Xinchi Qiu, Titouan Parcollet and Nicholas D. Lane (2020) Flower: A Friendly Federated Learning Research Framework, published on arXiv

[11] Students get hands-on with Federated Learning – https://www.cst.cam.ac.uk/news/students-get-hands-federated-learning
