TY - JOUR
T1 - Uncertainty-bounded reinforcement learning for revenue optimization in air cargo
T2 - a prescriptive learning approach
AU - Rizzo, Stefano Giovanni
AU - Chen, Yixian
AU - Pang, Linsey
AU - Lucas, Ji
AU - Kaoudi, Zoi
AU - Quiané-Ruiz, Jorge-Arnulfo
AU - Chawla, Sanjay
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
PY - 2022/9
Y1 - 2022/9
N2 - We propose a prescriptive learning approach for revenue management in air cargo that combines machine learning prediction with decision making using deep reinforcement learning. This approach, named RL-Cargo, addresses a problem that is unique to the air cargo business, namely the wide discrepancy between the quantity (weight or volume) that a shipper books and the actual amount received by the airline at departure time. This discrepancy leads to sub-optimal and inefficient behavior by both the shipper and the airline, resulting in an overall loss of potential revenue for the airline. In the proposed approach, booking features and extracted disguised missing values are exploited to predict the received volume, while a deep Q-network (DQN) method using uncertainty bounds derived from the prediction intervals is proposed for decision making. We have validated the benefits of RL-Cargo on a real dataset of 1000 flights, comparing classical dynamic programming and deep reinforcement learning techniques in terms of offloading costs and revenue generation. Our results suggest that prescriptive learning, which combines prediction with decision making, provides a principled approach for managing the air cargo revenue ecosystem. Furthermore, the proposed approach can be abstracted to many other application domains where decision making needs to be carried out in the face of both data and behavioral uncertainty.
AB - We propose a prescriptive learning approach for revenue management in air cargo that combines machine learning prediction with decision making using deep reinforcement learning. This approach, named RL-Cargo, addresses a problem that is unique to the air cargo business, namely the wide discrepancy between the quantity (weight or volume) that a shipper books and the actual amount received by the airline at departure time. This discrepancy leads to sub-optimal and inefficient behavior by both the shipper and the airline, resulting in an overall loss of potential revenue for the airline. In the proposed approach, booking features and extracted disguised missing values are exploited to predict the received volume, while a deep Q-network (DQN) method using uncertainty bounds derived from the prediction intervals is proposed for decision making. We have validated the benefits of RL-Cargo on a real dataset of 1000 flights, comparing classical dynamic programming and deep reinforcement learning techniques in terms of offloading costs and revenue generation. Our results suggest that prescriptive learning, which combines prediction with decision making, provides a principled approach for managing the air cargo revenue ecosystem. Furthermore, the proposed approach can be abstracted to many other application domains where decision making needs to be carried out in the face of both data and behavioral uncertainty.
KW - Air-cargo
KW - Itemset
KW - Predictive analytics
KW - Prescriptive learning
KW - Reinforcement learning
KW - Revenue management
KW - Uncertainty
UR - http://www.scopus.com/inward/record.url?scp=85135349774&partnerID=8YFLogxK
U2 - 10.1007/s10115-022-01713-5
DO - 10.1007/s10115-022-01713-5
M3 - Article
AN - SCOPUS:85135349774
SN - 0219-1377
VL - 64
SP - 2515
EP - 2541
JO - Knowledge and Information Systems
JF - Knowledge and Information Systems
IS - 9
ER -