TY - GEN
T1 - Intelligent Model Aggregation in Hierarchical Clustered Federated Multitask Learning
AU - Hamood, Moqbel
AU - Albaseer, Abdullatif
AU - Abdallah, Mohamed
AU - Al-Fuqaha, Ala
AU - Mohamed, Amr
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Clustered federated multi-task learning (CFL) is an effective and efficient approach for addressing statistical challenges such as non-independent and identically distributed (non-IID) data among workers. Workers in CFL are clustered into groups based on the similarity (i.e., cosine similarity) of their data distributions, and each cluster is equipped with an efficient specialized model. However, this approach can be costly and time-consuming when implemented in hierarchical wireless networks (HWNs), because several models must be uploaded every round so that the cloud server can capture the incongruent data distributions across different edge networks. This creates a need for novel solutions to address these challenges. To this end, this paper introduces a framework with two cloud-based model aggregation approaches, round-based and split-based, designed to minimize latency and resource consumption while attaining satisfactory personalized accuracy. In the round-based scheme, the cloud aggregates the models from the edge servers after a predetermined number of rounds. In the split-based scheme, the cloud collects the models only when the edge servers perform a split. Extensive experiments are conducted to evaluate and compare the proposed heuristics against approaches from the recent literature. The numerical results demonstrate that the proposed heuristics significantly conserve resources, reducing energy consumption by 60% and saving time, while accelerating the convergence rate for cluster workers across various edge networks.
AB - Clustered federated multi-task learning (CFL) is an effective and efficient approach for addressing statistical challenges such as non-independent and identically distributed (non-IID) data among workers. Workers in CFL are clustered into groups based on the similarity (i.e., cosine similarity) of their data distributions, and each cluster is equipped with an efficient specialized model. However, this approach can be costly and time-consuming when implemented in hierarchical wireless networks (HWNs), because several models must be uploaded every round so that the cloud server can capture the incongruent data distributions across different edge networks. This creates a need for novel solutions to address these challenges. To this end, this paper introduces a framework with two cloud-based model aggregation approaches, round-based and split-based, designed to minimize latency and resource consumption while attaining satisfactory personalized accuracy. In the round-based scheme, the cloud aggregates the models from the edge servers after a predetermined number of rounds. In the split-based scheme, the cloud collects the models only when the edge servers perform a split. Extensive experiments are conducted to evaluate and compare the proposed heuristics against approaches from the recent literature. The numerical results demonstrate that the proposed heuristics significantly conserve resources, reducing energy consumption by 60% and saving time, while accelerating the convergence rate for cluster workers across various edge networks.
KW - CFL
KW - Federated learning
KW - Hierarchical networks
KW - Model aggregation
KW - Resource allocation
KW - Client scheduling
UR - http://www.scopus.com/inward/record.url?scp=85187348258&partnerID=8YFLogxK
U2 - 10.1109/GLOBECOM54140.2023.10437646
DO - 10.1109/GLOBECOM54140.2023.10437646
M3 - Conference contribution
AN - SCOPUS:85187348258
T3 - Proceedings - IEEE Global Communications Conference, GLOBECOM
SP - 3009
EP - 3014
BT - GLOBECOM 2023 - 2023 IEEE Global Communications Conference
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE Global Communications Conference, GLOBECOM 2023
Y2 - 4 December 2023 through 8 December 2023
ER -