Predicting the leading political ideology of YouTube channels using acoustic, textual, and metadata information

Yoan Dinkov, Ahmed Ali, Ivan Koychev, Preslav Nakov

Research output: Contribution to journal › Conference article › peer-review

16 Citations (Scopus)

Abstract

We address the problem of predicting the leading political ideology, i.e., left-center-right bias, of YouTube channels of news media. Previous work on the problem has focused exclusively on text, analyzing the language used, the topics discussed, the sentiment, and the like. In contrast, here we study videos, which yields an interesting multimodal setup. Starting with gold annotations about the leading political ideology of major world news media from Media Bias/Fact Check, we searched YouTube to find their corresponding channels, and we downloaded a recent sample of videos from each channel. We crawled more than 1,000 hours of YouTube video along with the corresponding subtitles and metadata, thus producing a new multimodal dataset. We further developed a multimodal deep-learning architecture for the task. Our analysis shows that using the acoustic signal improved bias detection by more than 6% absolute over using text and metadata alone. We release the dataset to the research community, hoping to help advance the field of multimodal political bias detection.
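To make the multimodal setup concrete, the sketch below shows one plausible late-fusion classifier over per-channel acoustic, textual, and metadata feature vectors, with a three-way output (left / center / right). This is a minimal illustration, not the paper's reported architecture: all feature dimensions, layer sizes, and the concatenation-based fusion are assumptions.

    import torch
    import torch.nn as nn

    class MultimodalBiasClassifier(nn.Module):
        """Late-fusion classifier over acoustic, textual, and metadata
        representations of a channel. Dimensions and layer sizes are
        illustrative placeholders, not the paper's exact values."""

        def __init__(self, acoustic_dim=128, text_dim=768, meta_dim=32,
                     hidden_dim=256, num_classes=3):
            super().__init__()
            # One projection per modality before fusion.
            self.acoustic_proj = nn.Sequential(nn.Linear(acoustic_dim, hidden_dim), nn.ReLU())
            self.text_proj = nn.Sequential(nn.Linear(text_dim, hidden_dim), nn.ReLU())
            self.meta_proj = nn.Sequential(nn.Linear(meta_dim, hidden_dim), nn.ReLU())
            # Fuse by concatenation, then classify into the three
            # bias classes: left, center, right.
            self.classifier = nn.Sequential(
                nn.Linear(3 * hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(hidden_dim, num_classes),
            )

        def forward(self, acoustic, text, meta):
            fused = torch.cat([self.acoustic_proj(acoustic),
                               self.text_proj(text),
                               self.meta_proj(meta)], dim=-1)
            return self.classifier(fused)  # logits over {left, center, right}

    # Example: a batch of 4 channel-level feature vectors.
    model = MultimodalBiasClassifier()
    logits = model(torch.randn(4, 128), torch.randn(4, 768), torch.randn(4, 32))
    print(logits.shape)  # torch.Size([4, 3])

Dropping the acoustic input from such a model and classifying on text and metadata alone is the kind of ablation behind the reported 6% absolute improvement, though the paper's exact features and training setup are not specified in the abstract.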

Original language: English
Pages (from-to): 501-505
Number of pages: 5
Journal: Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH
Volume: 2019-September
DOIs
Publication status: Published - 2019
Event: 20th Annual Conference of the International Speech Communication Association: Crossroads of Speech and Language, INTERSPEECH 2019, Graz, Austria
Duration: 15 Sept 2019 – 19 Sept 2019

Keywords

  • Bias detection
  • Political ideology
  • Propaganda

