TY - GEN
T1 - An efficient adversarial learning strategy for constructing robust classification boundaries
AU - Liu, Wei
AU - Chawla, Sanjay
AU - Bailey, James
AU - Leckie, Christopher
AU - Ramamohanarao, Kotagiri
PY - 2012
Y1 - 2012
AB - Traditional classification methods assume that the training and the test data arise from the same underlying distribution. However, in some adversarial settings the test set can be deliberately constructed to increase the error rate of a classifier. A prominent example is email spam, where words are transformed to evade the word-based features embedded in a spam filter. Recent research has modeled the interaction between a data miner and an adversary as a sequential Stackelberg game and solved for its Nash equilibrium to build classifiers that are more robust to subsequent manipulations of training data sets. In this paper, however, we argue that the iterative algorithm used in the Stackelberg game, which solves an optimization problem at each step of play, is sufficient but not necessary for achieving a Nash equilibrium in classification problems. Instead, we propose a method that transforms the singular vectors of a training data matrix to simulate manipulations by an adversary, from which a Nash equilibrium can be obtained by solving a novel optimization problem only once. We show that, compared with the iterative algorithm used in the recent literature, our one-step game significantly reduces computing time while still producing good Nash equilibrium results.
UR - http://www.scopus.com/inward/record.url?scp=84871396255&partnerID=8YFLogxK
U2 - 10.1007/978-3-642-35101-3_55
DO - 10.1007/978-3-642-35101-3_55
M3 - Conference contribution
AN - SCOPUS:84871396255
SN - 9783642351006
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 649
EP - 660
BT - AI 2012
T2 - 25th Australasian Joint Conference on Artificial Intelligence, AI 2012
Y2 - 4 December 2012 through 7 December 2012
ER -