This work analyzes the integration of a permissioned blockchain network as a verification mechanism for decentralized, robust, and privacy-friendly federated learning (FL). In particular, a cost-effective variant of the state-of-the-art (SOTA) approach is proposed to address vulnerabilities of FL such as a compromised server, compromised clients, non-robust aggregation, and gradient leakage. The open-source framework Hyperledger Fabric is used for the implementation.
We analyze the resource overhead of the Hyperledger Fabric components and derive the blockchain costs of our approach. Assuming a monthly FL session with ten clients, the annual blockchain cost amounts to about 0.50% of that of the state-of-the-art approach. In addition, we analyze the performance and robustness of the approach on the task of predicting the remaining useful life of turbofan engines. The data are distributed among ten participants, and a long short-term memory (LSTM) model is trained. The root mean square error (RMSE) serves as the quality measure. Differential privacy is used to ensure privacy, and this paper analyzes the effect of specific privacy budgets for different models. The privacy budget's effect on the central model's performance diminishes as the number of participants increases. We also analyze the robustness of our approach against collusion between compromised participants of the blockchain network. The tests performed demonstrate robustness comparable to the SOTA implementation. Finally, we investigate the use of a percentage deadline criterion for submitting local models. We consider the worst-case scenario in which valuable data is distributed to the last percentage of clients to complete their training. We find that a percentage deadline criterion of 80% offers a good compromise between the robustness of the approach and the achieved performance.
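The interplay of the three ingredients above can be illustrated in a short sketch. This is not the paper's implementation: the function names, the fixed noise scale `sigma`, and the first-come ordering of client updates are assumptions made purely for illustration; a real FL pipeline would use a DP library with proper privacy accounting and the actual LSTM model weights.

```python
import math
import random

def add_gaussian_noise(update, sigma):
    """Hypothetical DP step: perturb a client's model update with Gaussian noise."""
    return [w + random.gauss(0.0, sigma) for w in update]

def aggregate_with_deadline(client_updates, deadline_fraction=0.8):
    """Hypothetical deadline criterion: average only the first fraction of
    client updates to arrive (here, list order stands in for arrival order)."""
    n_required = math.ceil(deadline_fraction * len(client_updates))
    accepted = client_updates[:n_required]
    dim = len(accepted[0])
    return [sum(u[d] for u in accepted) / len(accepted) for d in range(dim)]

def rmse(predictions, targets):
    """Root mean square error, the quality measure used in the evaluation."""
    return math.sqrt(
        sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(predictions)
    )
```

With five clients and an 80% deadline, `aggregate_with_deadline` averages the first four updates and ignores the last arrival, which is exactly the worst case studied when the most valuable data sits with the slowest clients.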