R2S100K: Road-Region Segmentation Dataset for Semi-supervised Autonomous Driving in the Wild

Muhammad Atif Butt, Hassan Ali, Adnan Qayyum, Waqas Sultani, Ala Al-Fuqaha, Junaid Qadir*

*Corresponding author for this work

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Semantic understanding of roadways is a key enabling factor for safe autonomous driving. However, existing autonomous driving datasets feature well-structured urban roads while ignoring unstructured roadways containing distress, potholes, water puddles, and various kinds of road patches (e.g., earthen and gravel). To this end, we introduce the Road Region Segmentation dataset (R2S100K), a large-scale dataset and benchmark for training and evaluating road segmentation on such challenging unstructured roadways. R2S100K comprises 100K images extracted from a large and diverse set of video sequences covering more than 1000 km of roadways. Of these 100K privacy-respecting images, 14,000 have fine pixel-level labeling of road regions, while the remaining 86,000 unlabeled images can be leveraged through semi-supervised learning methods. Alongside the dataset, we present an Efficient Data Sampling based self-training framework that improves learning by leveraging unlabeled data. Our experimental results demonstrate that the proposed method significantly improves the generalizability of learning methods and reduces the labeling cost for semantic segmentation tasks. Our benchmark will be publicly available to facilitate future research at https://r2s100k.github.io/.
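
    As a rough illustration only: the semi-supervised setup described above (14,000 labeled and 86,000 unlabeled images) is commonly exploited with pseudo-label self-training. The minimal PyTorch sketch below shows that generic recipe, not the paper's Efficient Data Sampling framework; the model, data loaders, confidence threshold, and helper names are hypothetical assumptions.

        # Illustrative pseudo-label self-training sketch (generic recipe, not the
        # paper's Efficient Data Sampling method). Model and loaders are assumed.
        import torch
        import torch.nn.functional as F

        def pseudo_label(model, unlabeled_loader, threshold=0.9, device="cuda"):
            """Predict road masks for unlabeled frames; keep only confident pixels."""
            model.eval()
            pairs = []
            with torch.no_grad():
                for images in unlabeled_loader:
                    images = images.to(device)
                    probs = torch.softmax(model(images), dim=1)   # (B, C, H, W)
                    conf, labels = probs.max(dim=1)               # per-pixel confidence
                    labels[conf < threshold] = 255                # 255 = ignore index
                    pairs.append((images.cpu(), labels.cpu()))
            return pairs

        def train_round(model, labeled_loader, pseudo_pairs, optimizer, device="cuda"):
            """One self-training round: fit on labeled data plus confident pseudo-labels."""
            model.train()
            for images, masks in list(labeled_loader) + pseudo_pairs:
                images, masks = images.to(device), masks.long().to(device)
                loss = F.cross_entropy(model(images), masks, ignore_index=255)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()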

    Original language: English
    Journal: International Journal of Computer Vision
    Early online date: Aug 2024
    DOIs
    Publication status: Published - 23 Aug 2024

    Keywords

    • Autonomous driving
    • Semantic segmentation
    • Semi-supervised learning
