TY - GEN
T1 - ML-ECN
T2 - 2022 IEEE International Conference on Communications, ICC 2022
AU - Alanazi, Sultan
AU - Hamdaoui, Bechir
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Recently, Explicit Congestion Notification (ECN) has been leveraged by most Datacenter Network (DCN) protocols for congestion control to achieve high throughput and low latency. However, most of these approaches assume that each switch port has a single queue, while the current industry trend is toward multiple queues per switch port. To this end, we propose ML-ECN, a Multi-Level probabilistic ECN marking scheme for DCNs equipped with multiple-service, multiple-queue switch ports. The core design of ML-ECN separates small, medium, and large flows by dedicating multiple queues to each flow class to ensure fair enqueueing. ML-ECN employs a single threshold for each queue in the small service-queue class, and multiple thresholds with probabilistic marking for each queue in the medium and large service-queue classes, to achieve low latency for mice (small) flows and high throughput for elephant (large) flows. In addition, ML-ECN performs fairness-aware ECN marking that ensures small flows are never marked during early queue build-up. Large-scale ns-2 simulations show that ML-ECN outperforms existing approaches across different performance metrics.
KW - Datacenter networks (DCNs)
KW - congestion control
KW - explicit congestion notification (ECN)
KW - fairness
UR - http://www.scopus.com/inward/record.url?scp=85137271375&partnerID=8YFLogxK
U2 - 10.1109/ICC45855.2022.9838911
DO - 10.1109/ICC45855.2022.9838911
M3 - Conference contribution
AN - SCOPUS:85137271375
T3 - IEEE International Conference on Communications
SP - 2726
EP - 2731
BT - ICC 2022 - IEEE International Conference on Communications
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 16 May 2022 through 20 May 2022
ER -