Multi-modal machine learning for flood detection in news, social media and satellite sequences

Kashif Ahmad, Konstantin Pogorelov, Mohib Ullah, Michael Riegler, Nicola Conci, Johannes Langguth, Ala Al-Fuqaha

    Research output: Contribution to journal › Conference article › peer-review
    Abstract

    In this paper we present our methods for the MediaEval 2019 Multimedia Satellite Task, which aims to extract complementary information about adverse events from social media and satellite imagery. For the first challenge, we propose a framework that jointly exploits colour, object and scene-level information to predict whether a news article containing an image reports a flood event. The visual features are combined using early and late fusion techniques, achieving average F1-scores of 82.63, 82.40, 81.40 and 76.77. For multi-modal flood level estimation, we rely on visual and textual information, achieving average F1-scores of 58.48 and 46.03, respectively. Finally, for flood detection in time-based satellite image sequences, we use a combination of classical computer-vision and machine-learning approaches, achieving an average F1-score of 58.82.
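
    The sketch below is a rough illustration of the early and late fusion strategies mentioned in the abstract, not the authors' actual pipeline: early fusion concatenates the colour, object and scene feature vectors before a single classifier, while late fusion trains one classifier per feature type and averages their predicted probabilities. The placeholder feature arrays, their dimensions, and the SVM classifier are assumptions made purely for illustration.

        # Minimal sketch of early vs. late fusion of visual features.
        # The colour/object/scene arrays are synthetic placeholders; the
        # paper's actual features and classifiers may differ.
        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 200                               # number of images (synthetic data)
        colour = rng.normal(size=(n, 64))     # e.g. colour-histogram features
        objects = rng.normal(size=(n, 128))   # e.g. object-level CNN features
        scene = rng.normal(size=(n, 128))     # e.g. scene-level CNN features
        y = rng.integers(0, 2, size=n)        # 1 = flood-related, 0 = not

        # Early fusion: concatenate all feature vectors, train one classifier.
        X_early = np.hstack([colour, objects, scene])
        early_clf = SVC(probability=True).fit(X_early, y)

        # Late fusion: one classifier per feature type, then average the
        # predicted flood probabilities across classifiers.
        views = (colour, objects, scene)
        clfs = [SVC(probability=True).fit(X, y) for X in views]
        late_probs = np.mean(
            [c.predict_proba(X)[:, 1] for c, X in zip(clfs, views)], axis=0
        )
        late_pred = (late_probs >= 0.5).astype(int)

    In practice, early fusion lets a single model learn cross-feature interactions, while late fusion keeps the per-feature models independent and is often more robust when one feature type is noisy; the paper reports results for both strategies.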

    Original language: English
    Journal: CEUR Workshop Proceedings
    Volume: 2670
    Publication status: Published - 2019
    Event: 2019 Working Notes of the MediaEval Workshop, MediaEval 2019 - Sophia Antipolis, France
    Duration: 27 Oct 2019 - 30 Oct 2019