Models and Mechanisms for Spatial Data Fairness

Sina Shaham, Gabriel Ghinita, Cyrus Shahabi

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)

Abstract

Fairness in data-driven decision-making addresses scenarios where individuals from certain population segments may be treated unfairly when applying for loans or jobs, accessing public resources, or requesting other types of services. In location-based applications, decisions rely on individual whereabouts, which often correlate with sensitive attributes such as race, income, and education. While fairness has received significant attention recently, e.g., in machine learning, there is little focus on achieving fairness when dealing with location data. Due to their characteristics and the specific types of algorithms that process them, location data pose important fairness challenges. We introduce the concept of spatial data fairness to address the specific challenges of location data and spatial queries. We devise a novel building block to achieve fairness in the form of fair polynomials. Next, we propose two mechanisms based on fair polynomials that achieve individual spatial fairness, corresponding to two common types of location-based decision-making: distance-based and zone-based. Extensive experimental results on real data show that the proposed mechanisms achieve spatial fairness without sacrificing utility.
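
To make the distinction between the two decision types named in the abstract concrete, below is a minimal illustrative sketch in Python. It is not the paper's fair-polynomial mechanism; the coordinates, radius, and zone boundary are hypothetical values chosen only to show what a distance-based versus a zone-based eligibility decision looks like.

```python
from math import hypot

# Hypothetical illustration of the two location-based decision types.
# This is NOT the paper's fair-polynomial mechanism; all values are made up.

def distance_based_decision(user_xy, facility_xy, radius):
    """Eligible if the user lies within `radius` of the facility (Euclidean distance)."""
    ux, uy = user_xy
    fx, fy = facility_xy
    return hypot(ux - fx, uy - fy) <= radius

def zone_based_decision(user_xy, zone_bbox):
    """Eligible if the user falls inside an axis-aligned service zone (xmin, ymin, xmax, ymax)."""
    x, y = user_xy
    xmin, ymin, xmax, ymax = zone_bbox
    return xmin <= x <= xmax and ymin <= y <= ymax

if __name__ == "__main__":
    user = (3.0, 4.0)
    # Distance-based: eligibility depends on proximity to a facility.
    print(distance_based_decision(user, facility_xy=(0.0, 0.0), radius=5.0))  # True
    # Zone-based: eligibility depends on membership in a predefined zone.
    print(zone_based_decision(user, zone_bbox=(2.0, 2.0, 6.0, 6.0)))          # True
```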

Original language: English
Pages (from-to): 167-179
Number of pages: 13
Journal: Proceedings of the VLDB Endowment
Volume: 16
Issue number: 2
DOIs
Publication status: Published - Oct 2022
Event: 49th International Conference on Very Large Data Bases, VLDB 2023 - Vancouver, Canada
Duration: 28 Aug 2023 – 1 Sept 2023

Keywords

  • Bias
