Contribution Details

Type Journal Article
Scope Discipline-based scholarship
Title Federated learning for malware detection in IoT devices
Organization Unit
Authors
  • Valerian Rey
  • Pedro Miguel Sánchez Sánchez
  • Alberto Huertas Celdran
  • Gérôme Bovet
Item Subtype Original Work
Refereed Yes
Status Published in final form
Language
  • English
Journal Title Computer Networks
Publisher Elsevier
Geographical Reach international
ISSN 1389-1286
Volume 204
Number 1
Page Range 108693
Date 2022
Abstract Text Billions of IoT devices lacking proper security mechanisms have been manufactured and deployed over the last years, and more will come with the development of Beyond 5G technologies. Their vulnerability to malware has motivated the need for efficient techniques to detect infected IoT devices inside networks. With data privacy and integrity becoming a major concern in recent years, and increasingly so with the arrival of 5G and Beyond networks, new technologies such as federated learning and blockchain have emerged. They allow training machine learning models on decentralized data while preserving its privacy by design. This work investigates the possibilities enabled by federated learning for IoT malware detection and studies the security issues inherent to this new learning paradigm. In this context, a framework that uses federated learning to detect malware affecting IoT devices is presented. N-BaIoT, a dataset modeling the network traffic of several real IoT devices while affected by malware, has been used to evaluate the proposed framework. Both supervised and unsupervised federated models (multi-layer perceptron and autoencoder) able to detect malware affecting seen and unseen IoT devices of N-BaIoT have been trained and evaluated. Furthermore, their performance has been compared to two traditional approaches. The first lets each participant train a model locally using only its own data, while the second makes the participants share their data with a central entity in charge of training a global model. This comparison has shown that the use of larger and more diverse data, as in the federated and centralized methods, has a considerable positive impact on model performance. Moreover, the federated models, while preserving the participants' privacy, show results similar to the centralized ones. As an additional contribution and to measure the robustness of the federated approach, an adversarial setup with several malicious participants poisoning the federated model has been considered. The baseline model aggregation averaging step used in most federated learning algorithms appears highly vulnerable to different attacks, even with a single adversary. The performance of other model aggregation functions acting as countermeasures is thus evaluated under the same attack scenarios. These functions provide a significant improvement against malicious participants, but more effort is still needed to make federated approaches robust.
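For illustration only (not code from the paper): a minimal sketch of the plain averaging aggregation step the abstract refers to, next to coordinate-wise median as one example of a more robust aggregation function. The array-based model updates and function names are assumptions made for this example.

```python
import numpy as np

def aggregate_mean(updates):
    """Baseline aggregation: element-wise average of the participants' model updates."""
    return np.mean(np.stack(updates), axis=0)

def aggregate_median(updates):
    """One robust alternative: coordinate-wise median, less sensitive to a poisoned update."""
    return np.median(np.stack(updates), axis=0)

# Hypothetical example: three honest updates and one adversarial (poisoned) update.
honest = [np.array([0.9, 1.1, 1.0]), np.array([1.0, 0.9, 1.1]), np.array([1.1, 1.0, 0.9])]
poisoned = np.array([100.0, -100.0, 100.0])

print(aggregate_mean(honest + [poisoned]))    # skewed far from the honest updates by a single adversary
print(aggregate_median(honest + [poisoned]))  # stays close to the honest updates
```

This only illustrates why a single malicious participant can dominate plain averaging while order-statistic aggregation is harder to shift; the specific countermeasures evaluated in the paper are described in the full text.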
Free access at DOI
Official URL https://doi.org/10.1016/j.comnet.2021.108693
Related URLs
Digital Object Identifier 10.1016/j.comnet.2021.108693
Other Identification Number merlin-id:21880