Cross-modal retrieval: A pairwise classification approach

Aditya Krishna Menon, Didi Surian, Sanjay Chawla

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

9 Citations (Scopus)

Abstract

Content is increasingly available in multiple modalities (such as images, text, and video), each of which provides a different representation of some entity. The cross-modal retrieval problem is: given the representation of an entity in one modality, find its best representation in all other modalities. We propose a novel approach to this problem based on pairwise classification. The approach applies seamlessly both when ground-truth annotations for the entities are absent and when they are present. In the former case, the approach considers both the positive and the unlabelled links that arise in standard cross-modal retrieval datasets. Empirical comparisons show improvements over state-of-the-art methods for cross-modal retrieval.
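The paper itself details the model; purely as a rough illustration of the pairwise-classification idea sketched above, the Python example below trains a binary classifier on joined feature vectors of cross-modal pairs, treating observed links as positives and a sample of unlabelled pairs as negatives (a common positive-unlabelled heuristic, not necessarily the paper's construction). All names and data here (img_feats, txt_feats, make_pair_features, pos_links) are hypothetical assumptions, not drawn from the paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy features for two modalities: n_img images, n_txt texts.
n_img, n_txt, d = 50, 60, 8
img_feats = rng.normal(size=(n_img, d))
txt_feats = rng.normal(size=(n_txt, d))

# Observed cross-modal links are the positive pairs; every other pair
# is unlabelled. A simple PU heuristic samples unlabelled pairs and
# treats them as negatives.
pos_links = [(i, i) for i in range(40)]  # hypothetical ground truth

def make_pair_features(x, y):
    # Join a cross-modal pair into one feature vector, e.g. by
    # concatenation plus an elementwise product (one of many choices).
    return np.concatenate([x, y, x * y])

X, y = [], []
for i, j in pos_links:
    X.append(make_pair_features(img_feats[i], txt_feats[j]))
    y.append(1)
for _ in range(len(pos_links) * 3):
    i, j = rng.integers(n_img), rng.integers(n_txt)
    if (i, j) not in pos_links:
        X.append(make_pair_features(img_feats[i], txt_feats[j]))
        y.append(0)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

# Retrieval: given an image query, rank all texts by classifier score.
query = 45
scores = clf.predict_proba(
    np.array([make_pair_features(img_feats[query], t) for t in txt_feats])
)[:, 1]
ranking = np.argsort(-scores)
print("Top-5 texts for image", query, ":", ranking[:5])

Ranking candidates in the other modality by the pairwise classifier's score is what turns a plain binary classifier into a retrieval method.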

Original language: English
Title of host publication: SIAM International Conference on Data Mining 2015, SDM 2015
Editors: Jieping Ye, Suresh Venkatasubramanian
Publisher: Society for Industrial and Applied Mathematics Publications
Pages: 199-207
Number of pages: 9
ISBN (Electronic): 9781510811522
Publication status: Published - 2015
Externally published: Yes
Event: SIAM International Conference on Data Mining 2015, SDM 2015 - Vancouver, Canada
Duration: 30 Apr 2015 - 2 May 2015

Publication series

Name: SIAM International Conference on Data Mining 2015, SDM 2015

Conference

Conference: SIAM International Conference on Data Mining 2015, SDM 2015
Country/Territory: Canada
City: Vancouver
Period: 30/04/15 - 2/05/15
