After intensive study of basic problems such as classification and clustering, the machine learning community has reached a stage where many novel problems involve interrelated variables, and the learning process often contains complex inference sub-problems. Graphical models such as Markov random fields (MRFs) and maximum-margin methods such as the structured support vector machine (SVM-Struct) are popular choices for solving these problems. However, graphical models incur a heavy computational burden from the normalization (partition function) term, and margin-based methods lack Fisher consistency under many easy-to-use surrogate losses such as the hinge loss. Adversarial learning has been proposed to overcome these shortcomings, with additional benefits such as robustness to noise and easy adaptation to different problems. It has been shown to be advantageous both theoretically and practically on problems whose inference sub-problems admit exact solutions, such as multiclass classification and multilabel prediction.
On the other hand, there are many even harder problems in which the inference step itself is NP-hard, so approximation must be applied to obtain an efficient solution. With the additional error this introduces into the computation, the theoretical guarantees that hold in the exact setting may no longer apply. In this work, we explore the performance of adversarial learning on such hard problems.
We focus on two problems. The first is multiclass classification with pairwise features, which represents a family of hard problems in which the independently and identically distributed (i.i.d.) assumption on the data is violated. Its inference sub-problem is equivalent to the multiway cut problem and is therefore NP-hard. We applied adversarial learning to it and reformulated the inference sub-problem as an integer linear program. The second is the weighted 3-dimensional matching problem, which is also NP-hard and whose output has a complex structure. Here we take a different approach: we reduce the problem to the marginal space, moving the NP-hardness to the prediction stage. We compared our results with state-of-the-art methods and obtained better results with adversarial learning on both problems. Further work will explore whether and how adversarial learning can better exploit inexact inference results to reach a good optimum.
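To make the inference sub-problem of the first task concrete, the sketch below performs exact joint inference for multiclass classification with pairwise features by brute-force enumeration, on a tiny instance with hypothetical unary and pairwise costs (the names and values are illustrative, not from our experiments). This exponential search over joint labelings is what a multiway-cut-style integer linear program replaces with per-node indicator variables and consistency constraints.

```python
from itertools import product

def joint_inference(unary, pairwise, edges, num_labels):
    """Exact inference: minimize the sum of per-node (unary) costs plus
    per-edge (pairwise) costs over all joint labelings.

    This enumeration is exponential in the number of nodes; an ILP
    reformulation would instead introduce binary indicators x[i][k]
    ("node i takes label k") with sum_k x[i][k] == 1 per node.
    """
    n = len(unary)
    best_cost, best_assign = float("inf"), None
    for assign in product(range(num_labels), repeat=n):
        cost = sum(unary[i][assign[i]] for i in range(n))
        cost += sum(pairwise[(i, j)][assign[i]][assign[j]]
                    for (i, j) in edges)
        if cost < best_cost:
            best_cost, best_assign = cost, assign
    return best_assign, best_cost

# Tiny illustrative instance: 3 nodes, 2 labels, a chain of two edges.
unary = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
# Potts-style pairwise cost: neighboring nodes pay 1.0 for disagreeing.
agree = [[0.0, 1.0], [1.0, 0.0]]
pairwise = {(0, 1): agree, (1, 2): agree}
labels, cost = joint_inference(unary, pairwise, [(0, 1), (1, 2)], 2)
# The minimum cost here is 1.5, e.g. the all-zeros labeling.
```

Even this toy instance shows the tension the paper addresses: the unary costs pull nodes 0 and 1 toward different labels while the pairwise costs pull them together, and resolving that trade-off exactly is what becomes NP-hard at scale.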