An efficient adversarial learning strategy for constructing robust classification boundaries

Wei Liu*, Sanjay Chawla, James Bailey, Christopher Leckie, Kotagiri Ramamohanarao

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Citations (Scopus)

Abstract

Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However, in some adversarial settings, the test set can be deliberately constructed in order to increase the error rates of a classifier. A prominent example is email spam, where words are transformed to avoid the word-based features embedded in a spam filter. Recent research has modeled the interactions between a data miner and an adversary as a sequential Stackelberg game, and solved its Nash equilibrium to build classifiers that are more robust to subsequent manipulations of training data sets. However, in this paper we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving Nash equilibria in classification problems. Instead, we propose a method that transforms the singular vectors of a training data matrix to simulate manipulations by an adversary, and from that perspective a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that, compared with the iterative algorithm used in recent literature, our one-step game significantly reduces computing time while still producing good Nash equilibrium results.
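To make the singular-vector idea concrete, the following is a minimal, hypothetical Python sketch (using numpy and scikit-learn, not the authors' code) of how perturbing the singular values of a training data matrix can simulate an adversary's manipulations and be folded back into training. The attack strength alpha, the choice of damping the three leading singular values, and the simple data-augmentation step are illustrative assumptions only; they are not the paper's one-step Nash-equilibrium optimization.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy data standing in for a word-count style feature matrix.
    X, y = make_classification(n_samples=400, n_features=20,
                               n_informative=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, random_state=0)

    # SVD of the training matrix: X_train = U @ diag(s) @ Vt.
    U, s, Vt = np.linalg.svd(X_train, full_matrices=False)

    # Hypothetical adversarial simulation: damp the leading singular values
    # to mimic an adversary suppressing the most informative directions.
    alpha = 0.5                      # illustrative attack strength
    s_attacked = s.copy()
    s_attacked[:3] *= alpha
    X_attacked = U @ np.diag(s_attacked) @ Vt

    # Classifier trained on clean data vs. one that also sees the
    # SVD-perturbed (simulated attack) copy of the training data.
    clf_plain = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    clf_robust = LogisticRegression(max_iter=1000).fit(
        np.vstack([X_train, X_attacked]),
        np.concatenate([y_train, y_train]))

    print("plain  accuracy:", clf_plain.score(X_test, y_test))
    print("robust accuracy:", clf_robust.score(X_test, y_test))

The sketch replaces the repeated optimization of the iterative Stackelberg play with a single transformation of the training matrix, which is the spirit of the one-step approach described in the abstract; the actual paper solves a dedicated optimization problem rather than simple data augmentation.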

Original language: English
Title of host publication: AI 2012
Subtitle of host publication: Advances in Artificial Intelligence - 25th Australasian Joint Conference, Proceedings
Pages: 649-660
Number of pages: 12
DOIs
Publication status: Published - 2012
Externally published: Yes
Event: 25th Australasian Joint Conference on Artificial Intelligence, AI 2012 - Sydney, NSW, Australia
Duration: 4 Dec 2012 - 7 Dec 2012

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 7691 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 25th Australasian Joint Conference on Artificial Intelligence, AI 2012
Country/Territory: Australia
City: Sydney, NSW
Period: 4/12/12 - 7/12/12
