Contribution Details
Type | Journal Article |
Scope | Discipline-based scholarship |
Title | Studying the Robustness of Anti-Adversarial Federated Learning Models Detecting Cyberattacks in IoT Spectrum Sensors |
Organization Unit | |
Authors | |
Item Subtype | Original Work |
Refereed | Yes |
Status | Published in final form |
Language | |
Journal Title | IEEE Transactions on Dependable and Secure Computing |
Publisher | Institute of Electrical and Electronics Engineers |
Geographical Reach | International |
ISSN | 1545-5971 |
Volume | 21 |
Number | 2 |
Page Range | 573 - 584 |
Date | 2024 |
Abstract Text | Device fingerprinting combined with Machine and Deep Learning (ML/DL) reports promising performance when detecting spectrum sensing data falsification (SSDF) attacks. However, the amount of data needed to train models and the privacy concerns of the scenario limit the applicability of centralized ML/DL. Federated learning (FL) addresses these drawbacks but is vulnerable to adversarial participants and attacks. The literature has proposed countermeasures, but more effort is required to evaluate the performance of FL in detecting SSDF attacks and its robustness against adversaries. Thus, the first contribution of this work is an FL-oriented dataset modeling the behavior of resource-constrained spectrum sensors affected by SSDF attacks. The second contribution is a pool of experiments analyzing the robustness of FL models according to i) three families of sensors, ii) eight SSDF attacks, iii) four FL scenarios dealing with anomaly detection and binary classification, iv) up to 33% of participants implementing data and model poisoning attacks, and v) four aggregation functions acting as anti-adversarial mechanisms. In conclusion, FL achieves promising performance when detecting SSDF attacks. Without anti-adversarial mechanisms, FL models are particularly vulnerable with >16% of adversaries. Coordinate-wise median is the best mitigation for anomaly detection, but binary classifiers are still affected with >33% of adversaries. |
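The abstract names coordinate-wise median as the most effective anti-adversarial aggregation function. A minimal sketch of that idea, assuming plain per-coordinate aggregation of client updates (the function name and toy updates below are illustrative, not taken from the paper):

```python
from statistics import median

def coordinate_wise_median(updates):
    """Aggregate client model updates (lists of floats) by taking the
    median of each coordinate, limiting the pull of poisoned updates."""
    return [median(coord) for coord in zip(*updates)]

# Toy FL round: two honest clients and one model-poisoning adversary.
honest = [[0.10, -0.20, 0.30], [0.12, -0.18, 0.28]]
poisoned = [[9.0, 9.0, -9.0]]  # adversarial update far from the honest ones
agg = coordinate_wise_median(honest + poisoned)
print(agg)  # → [0.12, -0.18, 0.28]
```

Unlike a plain mean, the per-coordinate median is dominated by the honest majority, which is consistent with the abstract's finding that this defense holds up until adversaries approach a large fraction of participants.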
Digital Object Identifier | 10.1109/TDSC.2022.3204535 |
Other Identification Number | merlin-id:23177 |
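For reference managers, the bibliographic fields above map onto a BibTeX `@article` entry roughly like the following (the citation key is hypothetical, and the author field is omitted because authors are not listed in this record):

```bibtex
@article{merlin23177,
  title   = {Studying the Robustness of Anti-Adversarial Federated Learning
             Models Detecting Cyberattacks in IoT Spectrum Sensors},
  journal = {IEEE Transactions on Dependable and Secure Computing},
  volume  = {21},
  number  = {2},
  pages   = {573--584},
  year    = {2024},
  issn    = {1545-5971},
  doi     = {10.1109/TDSC.2022.3204535},
}
```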
Keywords | Sensors, Fingerprint recognition, Data models, Behavioral science, Sensor phenomena and characterization, Robustness, Crowdsensing |
Additional Information | © 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. |