TY - JOUR
T1 - A Joint Communication and Learning Framework for Hierarchical Split Federated Learning
AU - Khan, Latif U.
AU - Guizani, Mohsen
AU - Al-Fuqaha, Ala
AU - Hong, Choong Seon
AU - Niyato, Dusit
AU - Han, Zhu
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2024/1/1
Y1 - 2024/1/1
N2 - In contrast to methods relying on centralized training, emerging Internet of Things (IoT) applications can employ federated learning (FL) to train a variety of models for improved performance and privacy preservation. FL requires the distributed training of local models at end-devices, which consumes substantial computing power (i.e., CPU cycles/sec). Most end-devices, such as IoT temperature sensors, have limited computing power. One solution to this problem is split FL. However, split FL has its own problems, including a single point of failure, fairness issues, and a poor convergence rate. To overcome these issues, we propose a novel framework called hierarchical split FL (HSFL). Our HSFL framework is built on grouping. Within each group, partial models are computed at the devices, with the remaining computation performed at the edge servers. After computing the local models, each group performs local aggregation at the edge. The resulting edge-aggregated model is sent back to the end-devices so they can update their local models. After a set number of rounds, this procedure produces a distinct edge-aggregated HSFL model for each group. These edge-aggregated HSFL models are then shared among the edge servers and aggregated to produce a global model. Additionally, we formulate an optimization problem that accounts for the relative local accuracy (RLA) of devices, transmission latency, transmission energy, and edge servers' computation latency in order to minimize the cost of HSFL. The formulated problem is a mixed-integer nonlinear programming (MINLP) problem that cannot be solved easily. To tackle this challenge, we decompose the formulated problem into two subproblems: an edge computing resource allocation subproblem and a joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Owing to its convexity, the edge computing resource allocation subproblem is solved using a convex optimizer, whereas the joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem is solved using a block successive upper-bound minimization (BSUM)-based approach. Finally, we present performance evaluation results for the proposed HSFL scheme.
AB - In contrast to methods relying on centralized training, emerging Internet of Things (IoT) applications can employ federated learning (FL) to train a variety of models for improved performance and privacy preservation. FL requires the distributed training of local models at end-devices, which consumes substantial computing power (i.e., CPU cycles/sec). Most end-devices, such as IoT temperature sensors, have limited computing power. One solution to this problem is split FL. However, split FL has its own problems, including a single point of failure, fairness issues, and a poor convergence rate. To overcome these issues, we propose a novel framework called hierarchical split FL (HSFL). Our HSFL framework is built on grouping. Within each group, partial models are computed at the devices, with the remaining computation performed at the edge servers. After computing the local models, each group performs local aggregation at the edge. The resulting edge-aggregated model is sent back to the end-devices so they can update their local models. After a set number of rounds, this procedure produces a distinct edge-aggregated HSFL model for each group. These edge-aggregated HSFL models are then shared among the edge servers and aggregated to produce a global model. Additionally, we formulate an optimization problem that accounts for the relative local accuracy (RLA) of devices, transmission latency, transmission energy, and edge servers' computation latency in order to minimize the cost of HSFL. The formulated problem is a mixed-integer nonlinear programming (MINLP) problem that cannot be solved easily. To tackle this challenge, we decompose the formulated problem into two subproblems: an edge computing resource allocation subproblem and a joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem. Owing to its convexity, the edge computing resource allocation subproblem is solved using a convex optimizer, whereas the joint RLA minimization, wireless resource allocation, task offloading, and transmit power allocation subproblem is solved using a block successive upper-bound minimization (BSUM)-based approach. Finally, we present performance evaluation results for the proposed HSFL scheme.
KW - Federated learning (FL)
KW - Internet of Things (IoT)
KW - hierarchical FL
KW - split learning
UR - http://www.scopus.com/inward/record.url?scp=85171592375&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2023.3315673
DO - 10.1109/JIOT.2023.3315673
M3 - Article
AN - SCOPUS:85171592375
SN - 2327-4662
VL - 11
SP - 268
EP - 282
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 1
ER -