An Interpretable SVM Classifier by Discrete-Weight Optimization

Fethi Jarray, Sabri Boughorbel

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The main problem investigated in this paper is learning "interpretable" linear classifiers from data. Interpretable models are captured using "discrete" linear functions. The learning problem is formulated as minimizing the cumulative zero-one loss of a discrete hyperplane, penalized by the standard L2 regularizer. This learning task is cast as a mixed-integer linear programming (MILP) problem and solved using convex relaxation and rounding. Experiments on both synthetic and real-world datasets demonstrate the effectiveness of this approach. We benchmarked the proposed method against two classifiers: (i) DILSVM, a discrete version of SVM based on a hinge loss, and (ii) the traditional linear L1-norm SVM. Our algorithm outperforms DILSVM on several datasets in terms of accuracy and computational efficiency, and its performance is close to that of the continuous SVM. These results suggest that the proposed classifier provides a good trade-off between performance and interpretability.
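The "relax and round" idea summarized above can be sketched in a few lines of code. The paper's actual formulation is a MILP over discrete weights with a zero-one loss; the toy below is only a simplified illustration under assumed stand-ins: it (1) trains a continuous linear classifier by subgradient descent on a hinge loss with L2 penalty (a tractable convex surrogate), then (2) rounds each weight to the nearest value in a small discrete set. The discrete set {-1, 0, 1}, the synthetic data, and all function names are illustrative assumptions, not the authors' implementation.

```python
import random

def train_continuous(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on hinge loss + L2 (continuous relaxation).

    This is a stand-in for the convex relaxation step; the paper relaxes
    a MILP with zero-one loss, which is not reproduced here.
    """
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # hinge subgradient active: push toward margin
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:  # only the L2 shrinkage term contributes
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def round_to_discrete(w, b, levels=(-1, 0, 1)):
    """Normalize by the largest weight, then snap each weight to the
    nearest allowed discrete level (the rounding step)."""
    scale = max(abs(wj) for wj in w) or 1.0
    w_disc = [min(levels, key=lambda v: abs(v - wj / scale)) for wj in w]
    return w_disc, b / scale

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

random.seed(0)
# Linearly separable toy data: label = sign(x0 - x1)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(100)]
y = [1 if x[0] - x[1] >= 0 else -1 for x in X]

w, b = train_continuous(X, y)
w_disc, b_disc = round_to_discrete(w, b)
acc = sum(predict(w_disc, b_disc, xi) == yi for xi, yi in zip(X, y)) / len(X)
print("discrete weights:", w_disc, "train accuracy:", acc)
```

The rounded weight vector is what makes the classifier "interpretable" in the abstract's sense: each feature's contribution is one of a few discrete levels, so the decision rule can be read off directly.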
Original language: English
Title of host publication: Proceedings of the 34th International Conference on Machine Learning
Publication status: Published - 2017
Externally published: Yes
