1 Introduction
Deep structured prediction models have attracted considerable interest in recent years (Belanger et al. 2017; Zheng et al. 2015; Schwing & Urtasun 2015; Chen et al. 2015; Ma & Hovy 2016). The goal in the structured prediction setting is to predict multiple labels simultaneously while utilizing dependencies between labels to improve accuracy. Examples of such application domains include dependency parsing and part-of-speech tagging for natural language processing, as well as visual recognition tasks such as semantic image segmentation.
In the conventional structured prediction setting a nonlinear classifier is trained first, and its output is used to produce potentials for the structured prediction model. Arguably, this piecewise training is suboptimal, as the classifier is learned while ignoring the dependencies between the predicted variables. However, when the classifier and the structured model are trained jointly, their predictive power can be increased by utilizing complementary information.
Several recent approaches have proposed to combine the representational power of deep neural networks with the ability of structured prediction models to capture variable dependence, trained jointly in an end-to-end manner. This concept has proven effective in various applications (Zheng et al. 2015; Schwing & Urtasun 2015; Ma & Hovy 2016; Belanger et al. 2017), despite mostly utilizing only basic structured prediction models restricted to limited variable interactions, e.g., pairwise potentials. In this work, we seek to leverage a deep structured prediction model to incorporate higher-order structural relations among labels. One of the most effective forms of label structure is cardinality (Tarlow et al. 2012; Tarlow et al. 2010; Milch et al. 2008; Swersky et al. 2012; Gupta et al. 2007): the fact that the overall number of labels taking a specific value has a distribution specific to the domain. Such potentials naturally arise in natural language processing, where they can express a constraint on the number of occurrences of a part-of-speech, e.g., that each sentence contains at least one verb (Ganchev et al. 2010). In computer vision, a cardinality potential can encode a prior distribution over object sizes in an image.
Although cardinality potentials have been very effective in many structured prediction works, they have not yet been successfully integrated into deep structured prediction frameworks. This is precisely the goal of our work.
The challenge in modeling cardinality in deep learning is that cardinality is essentially a combinatorial notion, and it is thus not clear how to integrate it into a differentiable model. Our proposal to achieve this goal is based on two key observations.
First, we note that learning to predict how many labels are active for a given input is easier than predicting which
labels are active. Hence, we break our inference process into two complementary components: we first estimate the label set cardinality for a given input using a learned neural network, and then predict a label that satisfies this constraint.
The second observation is that constraining a label to satisfy a cardinality constraint can be approximated via projected gradient descent, where the cardinality constraint corresponds to a set of linear constraints. Thus, the overall label prediction architecture performs projected gradient descent in label space. Moreover, the above projection can be implemented via sorting, and results in an end-to-end differentiable architecture which can be directly optimized to maximize any performance measure (e.g., F1, recall at k, etc.). Importantly, with this approach cardinality can be naturally integrated with other higher-order scores, such as the global scores considered in Belanger & McCallum (2016), and used in our model as well.
Our work augments the simple form of unrolled optimization of Belanger & McCallum (2016) by extending the scope of its applicability beyond gradient descent. We formulate cardinality potentials as a constraint on the label variables, and demonstrate their ability to capture important global structures of label dependencies.
Our proposed method significantly improves prediction accuracy on several datasets, when compared to recent deep structured prediction methods. We also experiment with other approaches for modeling cardinality in deep structured prediction, and observe that our Predict and Constrain method outperforms these.
Taken together, our results demonstrate that deep structured prediction models can benefit from representing cardinality, and that a Predict and Constrain approach is an effective method for introducing cardinality in a differentiable endtoend manner.
2 Preliminaries
We consider the setting of predicting a set of labels y = (y_1, ..., y_L), with y_i ∈ {0, 1}, given an input x. We assume binary labels; however, our approach can be generalized to the multiclass case.
To model our problem in a structured prediction framework, we define a score function s(x, y) over the input-output pairs. In this setting, s is learned to evaluate different input-output configurations such that its maximizing assignment over the label space approximates the ground-truth label y*. Thus, for a given input x, we wish to obtain,

y* = argmax_{y ∈ {0,1}^L} s(x, y)    (1)
In practice, we will maximize over y by relaxing the integrality constraint and using projected gradient descent. We assume that s(x, y) can be decomposed as follows,

s(x, y) = Σ_i s_i(x, y_i) + g(y) + c_k(y)    (2)

where s_i(x, y_i) is a function that depends on a single output variable (i.e., a unary potential), g(y) is an arbitrary learned global potential defined over all variables and independent of the input, and c_k(y) is a global cardinality potential which enforces the maximizing labeling to be of cardinality k, as detailed in Section 3. For brevity, we often denote the cardinality potential simply as c(y).
Concretely, s_i(x, y_i) is defined by multiplying y_i by a linear function of a feature representation of the input, where the feature representation is given by a multi-layer perceptron. The global score g(y) is obtained by applying a neural network over the y variables, and evaluates such assignments independent of x, similarly to the architecture used by Belanger & McCallum (2016). The score function is parameterized by a set of weights w. During training we seek the value of these weights which minimizes a loss function over the predicted output ŷ with respect to the ground-truth label y*. Recall that prediction is made by applying projected gradient descent over the relaxed variables. In what follows we describe our method in more detail.

3 Learning with Cardinality Potentials
We propose a deep structured architecture, with the objective of maximizing the score function s(x, y). The score function takes into account the independent relevance of each variable, as given by the unary potentials, as well as variable correlations modeled by the global potential. In addition, we enhance the ability of our model to represent complex structural relations between variables by utilizing the expressiveness of cardinality potentials. Such potentials are capable of capturing important label dependencies which are harder to model using an arbitrary-form global potential.
Specifically, we formulate the cardinality score as a constraint on the sum of labels, defined as follows,

c_k(y) = 0 if Σ_i y_i ≤ k, and −∞ otherwise.

Such cardinality potentials are able to use the fact that distributions of label counts, pertaining to a specific value, depend on the task of interest. Moreover, we exploit the fact that k can be predicted given x, using a learned cardinality predictor C(x). This enhances the power of cardinality potentials, by imposing a constraint on the output labels tailored to their corresponding input data.
Next, we wish to maximize the overall score function with respect to , taking into account both the unary and global potentials, as well as the cardinality potential. There are several ways to address this problem, detailed in Section 4, however we found our Predict and Constrain approach to be the most effective.
3.1 Learning Through Optimization
Given input x, we wish to obtain ŷ, the maximizing labeling of the score function. However, as s(x, y) represents a complex nonlinear function of y, finding the maximizing label is intractable. Additionally, we are interested in constructing an end-to-end differentiable network, to take the inference process into account while learning the network parameters.
We devise a differentiable approximation of ŷ, by employing projected gradient descent as our inference scheme, as depicted in Figure 1. To this end, we relax the discrete variables to lie in the interval [0, 1]. We unroll a gradient update step for T iterations, which is essentially a sequence of differentiable updates to the y variables. Thus, gradients of the network parameters can backpropagate through the unrolled optimization in a similar manner to backpropagation through a recurrent neural network.
The detailed differentiation of unrolled gradient descent is given in Maclaurin et al. (2015), though in practice this computation is done using deep learning libraries in which the gradient components of the computation graph are differentiable, removing the need to program it explicitly. Similarly, the basic gradient update can be replaced by other differentiable optimization techniques. We leverage this idea to construct a differentiable architecture which computes a Euclidean projection following each gradient step.
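As a concrete illustration, the unrolled inference loop can be sketched as follows. Note that grad_fn, project_fn, and the hyperparameter values are placeholders for the learned score gradient and the projection operator, not the paper's actual components:

```python
import numpy as np

def unrolled_inference(grad_fn, project_fn, y0, lr=0.1, T=10):
    """Unrolled projected gradient ascent: T differentiable update
    steps, each followed by a projection. In a deep learning library,
    gradients of the model parameters would backpropagate through
    this loop as through a recurrent network."""
    ys = [y0]
    for _ in range(T):
        ys.append(project_fn(ys[-1] + lr * grad_fn(ys[-1])))
    return ys  # all iterates, so a loss can be applied at every step
```

Returning every iterate makes it possible to apply a loss at each step, as the paper does with its weighted per-step loss.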
Similar to other end-to-end approaches such as Belanger et al. (2017), we found it useful to apply a loss function over all iterations. Specifically, we opted for the following weighting of the single-step loss terms, obtaining the overall loss function,

L = Σ_{t=1}^{T} (1 / (T − t + 1)) · ℓ(y^(t), y*)

where ℓ(y^(t), y*) is a differentiable loss function defined for a single gradient step. By using this method the model learns to converge quickly during inference, and it also diminishes the problem of vanishing gradients, as the loss function directly incorporates every layer.
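This weighting can be sketched in a few lines. The 1/(T − t + 1) weighting follows the choice of Belanger et al. (2017); the concrete weights here are illustrative:

```python
def unrolled_loss(step_losses):
    """Weighted sum of per-step losses over T unrolled inference
    iterations. Later steps get larger weights: with 1-indexed step t,
    the weight is 1/(T - t + 1), so the final iterate gets weight 1."""
    T = len(step_losses)
    return sum(loss / (T - t) for t, loss in enumerate(step_losses))
```

Because every step contributes to the loss, gradients reach every "layer" of the unrolled optimization directly, which mitigates vanishing gradients.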
3.2 Enforcing Cardinality via Projection
During inference we iteratively apply projected gradient ascent over the y variables for T steps, where the initial variables y^(0) are given by a sigmoid function applied over the unary terms. In each step t of the inference pipeline we update the label variables as follows,

y^(t+1) = Π(y^(t) + η ∇_y s(x, y^(t)))

where η is the inference learning rate, and Π denotes a projection operator. The operator Π differentiably computes an approximation of a Euclidean projection onto a set P defined as,

P = {v : Σ_i v_i ≤ k, 0 ≤ v_i ≤ 1}

with k obtained using the cardinality predictor C(x). Thus, Π approximately solves the following problem,

min_v ‖v − y‖²    (3)
subject to  Σ_i v_i ≤ k,  0 ≤ v_i ≤ 1
First, we note that it is possible to compute the above minimization directly; however, this is harder to obtain in our end-to-end differentiable network, as detailed in Section 4.1. Instead, we construct the operator Π using two sub-processes, Π₁ and Π₂, each computing a projection onto a different set, P₁ and P₂, such that P = P₁ ∩ P₂. Let P₁ = {v : v_i ≤ 1}. Given y, its Euclidean projection onto P₁ is obtained simply by clipping values larger than 1, i.e., min(y, 1). Let P₂ = {v : Σ_i v_i ≤ k, v_i ≥ 0}, the positive simplex. When k = 1, P₂ is the probability simplex. The Euclidean projection onto P₂ can be done using an algorithm which relies on sorting (Duchi et al. 2008). In designing our end-to-end differentiable network, we must take into account the smoothness restriction of our network components. Hence, we devise a differentiable variation of the simplex projection algorithm. The soft procedure for computing Π₂ is given in Algorithm 1
. For differentiable sorting we use a component based on a differentiable variation of radix sort, built into the deep learning library TensorFlow (Abadi et al. 2015). It is an interesting direction for future research to explore other differentiable sorting operations that can be computed more efficiently. Next, we need to combine the outputs of the operators Π₁ and Π₂ to obtain the desired projection. If P₁ and P₂ were affine sets we could have applied the alternating projection method (Escalante & Raydan 2011), by alternately projecting onto the sets P₁ and P₂. In that case, the method is guaranteed to converge to the Euclidean projection of y onto the intersection P₁ ∩ P₂. Since these are not affine sets, due to the inequality constraints, this method is only guaranteed to converge to some point in the intersection.
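For reference, the exact (non-differentiable) sorting-based projection onto the positive simplex can be sketched in a few lines; the function name and the NumPy formulation are ours, not the paper's:

```python
import numpy as np

def project_capped_simplex(v, k):
    """Euclidean projection of v onto {u : u_i >= 0, sum(u) <= k},
    following the sorting-based scheme of Duchi et al. (2008): if
    clipping negatives already satisfies the sum constraint, that clip
    is the projection; otherwise project onto the face sum(u) = k."""
    clipped = np.maximum(v, 0.0)
    if clipped.sum() <= k:
        return clipped
    # Find the threshold tau such that sum(max(v - tau, 0)) == k.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - k) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[rho] - k) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)
```

The differentiable variant described in the text replaces the hard sort with a soft sorting component; this exact version is useful as a correctness reference.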
Instead, we use Dykstra's algorithm (Boyle & Dykstra 1986), a variant of the alternating projection method. Dykstra's algorithm converges to the Euclidean projection onto the intersection of convex sets, such as P₁ and P₂. Specifically, for each inference step, for a fixed number of iterations m = 1, ..., M, we compute the following sequence,

x^(m) = Π₁(y^(m−1) + p^(m−1)),   p^(m) = y^(m−1) + p^(m−1) − x^(m)
y^(m) = Π₂(x^(m) + q^(m−1)),   q^(m) = x^(m) + q^(m−1) − y^(m)

where p^(0) = q^(0) = 0 and y^(0) = y. Empirically, we find that a small number of iterations is sufficient, and we use a small fixed M in all our experiments.
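A minimal sketch of this alternating scheme, combining the clipping and sorting projections under Dykstra's correction terms (function names and the iteration count are illustrative):

```python
import numpy as np

def project_box(v):
    # Projection onto P1 = {u : 0 <= u_i <= 1}: coordinate-wise clipping.
    return np.clip(v, 0.0, 1.0)

def project_sum(v, k):
    # Projection onto P2 = {u : u_i >= 0, sum(u) <= k} via sorting
    # (Duchi et al., 2008).
    w = np.maximum(v, 0.0)
    if w.sum() <= k:
        return w
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - k) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[rho] - k) / (rho + 1.0)
    return np.maximum(v - tau, 0.0)

def dykstra(v, k, n_iter=3):
    """Approximate Euclidean projection onto P1 ∩ P2 with Dykstra's
    algorithm: alternate the two projections while carrying the
    correction terms p and q between iterations."""
    x = v.copy()
    p = np.zeros_like(v)  # correction term for the box projection
    q = np.zeros_like(v)  # correction term for the simplex projection
    for _ in range(n_iter):
        y = project_box(x + p)
        p = x + p - y
        x = project_sum(y + q, k)
        q = y + q - x
    return x
```

Unlike plain alternating projections, the corrections p and q make the iterates converge to the true projection onto the intersection, not merely to some feasible point.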
4 Review of Alternative Approaches
In this section we review alternative inference methods that could be applied in our pipeline in place of our Predict and Constrain approach. Our goal is to further examine possible directions of dealing with complex global structures of the output labels, as well as to demonstrate the motivation behind the design choices made in our deep structured architecture.
First, we consider an architecture which only consists of the unary and cardinality scores, discarding the global potential g(y). In this case, exact maximization is possible by simply sorting the unary terms and taking the top k values. The maximizer is the binary vector in which the labels corresponding to the top k values are on. This approach can be trained using the standard structured hinge loss. Although appealing in its simplicity, this method fails to perform as well without utilizing the expressiveness of the global score g(y), which captures variable interactions that cannot be modeled by cardinality or unary potentials alone. SPENs (Belanger & McCallum 2016) have demonstrated the expressive power of such global scores to capture important structural dependencies among labels, such as mutual exclusivity and implicature.
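This sorting-based exact maximizer can be sketched as follows, assuming a budget constraint sum(y) ≤ k so that labels with negative unary score are left off (the function name is ours):

```python
import numpy as np

def topk_map(unary, k):
    """Exact maximizer of a unary-plus-cardinality score: switch on
    the labels with the k largest unary scores, keeping only positive
    scores (a negative-score label never helps under sum(y) <= k)."""
    y = np.zeros_like(unary)
    top = np.argsort(unary)[::-1][:k]   # indices of the k largest scores
    y[top[unary[top] > 0]] = 1.0
    return y
```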
It is also possible to abandon constrained optimization altogether: instead of optimizing over the variables y, we could optimize over the logits z, such that y = σ(z), by applying the following gradient step,

z^(t+1) = z^(t) + η ∇_z s(x, σ(z^(t)))

where σ is the sigmoid function. This forgoes our ability to efficiently project onto the cardinality-constrained space, which is a core part of our method. We could instead design a global score in the hope that it would be expressive enough to capture cardinality relations. However, such a neural network would require a deeper structure with more parameters, and is thus prone to overfitting. A similar architecture was used by Belanger et al. (2017), and did not yield satisfactory results in our experiments, as demonstrated in Section 6.
Alternatively, we can replace the projection component of our pipeline with an architecture in which the cardinality potential is designed as a weighted sum of cardinality indicators. To this end, we can use terms such that for label sets whose sum is at least j, the j-th indicator is close to 1, and close to 0 otherwise. Thus, we have,

c(y) = Σ_j w_j · σ(Σ_i y_i − j)

where σ is a steep sigmoid approximating the indicator that at least j labels are active, and w_j are learned weights. By using this method, the projection scheme collapses to the simple operation of clipping the values to lie in the range [0, 1]. Then, c(y) can be maximized along with the global and unary potentials using gradient-based inference, applying a clipping projection following each gradient step. However, this method does not encourage the inner optimization to enforce the cardinality constraint as strongly as directly projecting the variables onto the constrained space. In practice we found this method to underperform compared to our approach, as shown in Section 6.
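One possible form of such a soft indicator term is sketched below; the temperature and weight parameters are illustrative, not values from the text:

```python
import numpy as np

def soft_cardinality_score(y, k, temp=10.0, weight=1.0):
    """Soft cardinality indicator: a sigmoid that is close to `weight`
    when sum(y) >= k and close to 0 otherwise. `temp` controls how
    sharply the sigmoid approximates the hard indicator."""
    return weight / (1.0 + np.exp(-temp * (y.sum() - k)))
```

Because this term is smooth everywhere, it can be maximized by plain gradient ascent with clipping, which is exactly why it enforces the constraint more weakly than an explicit projection.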
Finally, it is possible to frame the task of maximizing s(x, y) as a mixed integer linear program, obtaining an exact inference scheme for our network. In order to train the model we could use the standard structured hinge loss. This formulation could also be relaxed to a linear program, making it more efficient to solve. However, solving an LP for each prediction is impractical.
4.1 Fast Exact Projection
Our projection scheme relies on Dykstra's algorithm, which requires us to alternately apply Π₁ and Π₂ in an iterative fashion. In order to solve the optimization problem in Equation 3 to optimality we would need to encode several iterations of these alternations in our computation graph, forming a deeper network. Instead, we could compute the projection by solving Equation 3 directly, without separately applying Π₁ and Π₂. A solution to the problem in Equation 3 was given by Gupta et al. (2010), who also describe a fast linear-time algorithm to obtain the maximizer v*. We give a brief description of their method.
Assume w.l.o.g. that y is sorted such that y_1 ≥ y_2 ≥ ... ≥ y_L. Let a and b be the indices up until which all projected values are ones, and after which all projected values are zeros, respectively. Then, the values of the maximizer v* take the following form,

v*_i = 1 for i ≤ a,   v*_i = y_i − λ for a < i ≤ b,   v*_i = 0 for i > b    (4)

with λ defined as follows,

λ = (a + Σ_{i=a+1}^{b} y_i − k) / (b − a)    (5)
The algorithm suggested in Gupta et al. (2010) computes λ in an iterative fashion, based on the fact that the constraint sum is a piecewise linear function of λ, with points of discontinuity at the values y_i and y_i − 1. The algorithm requires maintaining an uncertainty interval for λ. In each iteration we obtain the median of the merged set of unique values of {y_i} and {y_i − 1} lying in the current uncertainty interval, and compare this candidate to the value given by Equation 5. The size of the uncertainty interval is reduced in every iteration, until the correct value of λ is recovered.
Although this method efficiently computes the correct projection, it requires applying non-trivial combinatorial operations which do not naturally translate to differentiable operations, such as set-union and median. We have experimented with a differentiable implementation of this algorithm to be used in our end-to-end network, as detailed in Section 6.
The projection algorithm requires many components added to the computation graph, making it deeper and thus harder to backpropagate through, while iterating for only a small number of iterations leaves the resulting λ far from the correct value. Overall, in this end-to-end setting, we found it to have inferior performance compared to the alternating projection method described in Section 3.2. The use of Dykstra's algorithm with a few alternating projection iterations was both efficient, and thus easier to differentiate through, and obtained a good approximation of the correct maximizer.
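For comparison, an exact projection can also be obtained with a simple bisection on the shift parameter; this is a variant of, not identical to, the median-based linear-time search of Gupta et al. (2010):

```python
import numpy as np

def exact_projection(v, k, tol=1e-9):
    """Exact Euclidean projection onto {u : 0 <= u_i <= 1, sum(u) <= k}.
    Uses bisection on the shift tau: sum(clip(v - tau, 0, 1)) is
    continuous and nonincreasing in tau, so the root can be bracketed
    and halved (slower than the median-based search, but simpler)."""
    u = np.clip(v, 0.0, 1.0)
    if u.sum() <= k:
        return u  # the sum constraint is inactive; clipping suffices
    lo, hi = 0.0, v.max()  # tau = 0 gives sum > k; tau = max(v) gives sum = 0
    while hi - lo > tol:
        tau = 0.5 * (lo + hi)
        if np.clip(v - tau, 0.0, 1.0).sum() > k:
            lo = tau
        else:
            hi = tau
    return np.clip(v - 0.5 * (lo + hi), 0.0, 1.0)
```

Like the median-based algorithm, the branching here is what makes a faithful differentiable version awkward, which motivates the soft approximation described above.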
5 Related Work
Several recent approaches have applied gradient-based inference to a variety of structured prediction tasks (Belanger & McCallum 2016; Gygli et al. 2017; Amos & Kolter 2017). Specifically, Structured Prediction Energy Networks (SPENs) (Belanger & McCallum 2016) optimize the sum of local unary potentials and a global potential, trained with a structured SVM loss. Another approach is the Deep Value Network (DVN) (Gygli et al. 2017), which uses an energy network architecture similar to SPEN, but instead trains it to fit the task cost function.
Our architecture was constructed similarly to SPEN and DVN for the unary and global potentials, though we extended the expressivity of this architecture both by introducing the cardinality potential c_k(y), and by an effective inference method for the overall score.
Input Convex Neural Networks (ICNNs) (Amos & Kolter 2017) design potentials which are convex with respect to the labels, so that the inference optimization is able to reach a global optimum. Their design achieves convexity by restricting the model parameters and activation functions, limiting the expressiveness of the learned potentials. In practice, this approach has inferior performance compared to its nonconvex counterpart, as shown by Belanger et al. (2017). Our approach differs from these methods in two main aspects. First, we embed the inference process within the learning model, and second, our model is capable of encapsulating complex global dependencies in the form of cardinality potentials, which are harder to obtain using general-form global potentials.
5.1 Cardinality Potentials
The use of higher-order global potentials, and specifically of cardinality relations, has been shown to be useful in a wide range of applications. For example, in computer vision they have been used to improve human activity recognition (Hajimirsadeghi et al. 2015) by considering the number of people involved in an activity, which is harder to infer using spatial relations alone. In part-of-speech tagging, cardinalities can enforce the constraint that each sentence must contain at least one verb (Ganchev et al. 2010).
The properties of cardinality potentials and corresponding inference methods have been studied in a collection of prior work (Tarlow et al. 2012; Tarlow et al. 2010; Milch et al. 2008; Swersky et al. 2012; Gupta et al. 2007). General-form global potentials often result in non-trivial dependencies between variables that make exact inference intractable, thus requiring the use of approximate inference methods. Conversely, MAP inference for cardinality potential models is well understood. Notably, Gupta et al. (2007) show an exact MAP inference algorithm, and Tarlow et al. (2010) give an algorithm for computing the cardinality potential messages for max-product belief propagation; both algorithms can be computed efficiently. Still, when considering general-form global potentials as in our framework, we must employ an approximate inference scheme which can be computed efficiently.
5.2 Unrolled Optimization
End-to-end training with unrolled optimization was first used in deep networks by Maclaurin et al. (2015) for tuning hyperparameters. More recently, other approaches have unrolled gradient-based methods within deep networks in various contexts (Metz et al. 2016; Andrychowicz et al. 2016; Greff et al. 2017). In the context of inference, Belanger et al. (2017) explored SPENs which use gradient descent to approximate energy minimization, while learning the energy function end-to-end. They have demonstrated that using unrolled optimization as an inference method can outperform baseline models that can be exactly optimized. In computer vision, several works have incorporated structured prediction methods like conditional random fields within neural networks (Zheng et al. 2015; Schwing & Urtasun 2015), where the mean-field algorithm is used for inference.
An important advantage of these training schemes is that they return not only the learned potentials, but also an actual inference optimization method, tuned on the training data, to be used at test time. However, these methods are either restricted to basic graphical models (e.g., with pairwise or low-order clique potentials) to ensure tractability, or have used global potentials of arbitrary form which are limited in their ability to capture interesting properties of the output space. Our approach harnesses the effectiveness of unrolled optimization while boosting its ability to infer important structures expressed by cardinality potentials.
6 Experiments
We evaluate our method on multi-label classification (MLC) datasets, for which the task is predicting a set of binary labels from text inputs given in a bag-of-words representation. The MLC task is relevant in a wide range of machine learning applications, and is characterized by higher-order label interactions, which can be addressed by our deep structured network. Therefore, it is a natural application of our method. We use 3 standard MLC benchmarks, as used by other recent approaches (Belanger & McCallum 2016; Gygli et al. 2017; Amos & Kolter 2017): Bibtex, Delicious, and Bookmarks.
6.1 Cardinality Prediction Analysis
We begin with an analysis which demonstrates the effectiveness of estimating the cardinality of a label set given the input data. We train a simple feedforward neural network which consists of a single hidden layer with ReLU activations, with the goal of predicting the ground-truth cardinality. The output layer is a softmax over K + 1 output neurons, where K is the maximal allowed cardinality. This is the same architecture we used for the cardinality predictor C(x) in the larger setting, while in this experiment the data is the set of inputs and their respective label cardinalities. We evaluate the results over the Delicious dataset using the mean squared error of our predictor's output with respect to the correct cardinality. We compare our predictor to a random baseline over the range of possible cardinalities in the data, as well as to the constant predictor that outputs the average cardinality of the training data. Our predictor achieves a lower mean squared error than both baselines.
A possible explanation for this phenomenon is that some attributes of the input data might indicate approximately how many active labels the label set contains. Features such as the number of distinct words, or the existence or absence of specific meaningful words could be relevant here, making the task of inferring how many labels are active easier than which labels are active. For example, an article with many distinct words suggests that it discusses a broad range of subjects, and thus relates to many different tags, while an article with few distinct words is more likely to be focused on a specific subject and therefore has only a small set of tags. Learning which combination of input words corresponds to which specific tagging set is harder than learning to predict cardinality based on feature representations of simpler forms, such as the ones discussed above.
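A minimal sketch of such a predictor's forward pass, matching the single-hidden-layer ReLU description above; the weight matrices and the expected-value readout are illustrative, not the paper's learned parameters:

```python
import numpy as np

def cardinality_predictor(x, W1, b1, W2, b2):
    """Forward pass of a cardinality predictor C(x): one hidden ReLU
    layer, then a softmax over K+1 possible cardinalities. Returns the
    expected cardinality under the softmax, a soft alternative to
    taking the argmax class."""
    h = np.maximum(x @ W1 + b1, 0.0)   # hidden ReLU layer
    logits = h @ W2 + b2               # one logit per cardinality 0..K
    e = np.exp(logits - logits.max())  # numerically stable softmax
    probs = e / e.sum()
    return float(probs @ np.arange(len(probs)))
```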
The other datasets we tested, Bibtex and Bookmarks, are extremely sparse, with small average label cardinalities. The task of predicting the correct cardinality within this smaller possible range is harder. Accordingly, the predictor approximately learns the average cardinality, and we observed in our experiments that projecting to a predicted cardinality yields similar performance to projecting onto a set defined by a constant cardinality k. However, using the predictor did improve our overall performance for the Bibtex dataset compared to a fixed k, whereas for Bookmarks our results were slightly lower than with a fixed k.
The Predict and Constrain approach of first estimating the relevant cardinality and then projecting onto the cardinalityconstrained space is especially useful for datasets of larger average cardinality and cardinality variance, such as Delicious, for which we obtained a significant performance increase using this method.
6.2 Experimental Setup
The evaluation metric for the MLC task is the macro-averaged F1 measure. We found it useful to use its continuous extension as our loss function, i.e.,

F1(y, y*) = (2 Σ_i y_i y*_i) / (Σ_i y_i + Σ_i y*_i)    (6)

where y* is binary and y ∈ [0, 1]^L, and the loss is its negation. We observed improved performance by training with this loss as opposed to alternative loss functions, e.g., cross-entropy.
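A per-example sketch of the continuous F1 extension used as a loss; the macro-averaging over labels and the small epsilon are our additions for brevity and numerical safety:

```python
import numpy as np

def soft_f1_loss(y_pred, y_true, eps=1e-8):
    """Continuous extension of the F1 measure as a loss: y_true is
    binary, y_pred lies in [0, 1]. Returns 1 - soft-F1, so perfect
    predictions give (near) zero loss and the loss is differentiable
    in y_pred."""
    tp = (y_pred * y_true).sum()                        # soft true positives
    f1 = 2.0 * tp / (y_pred.sum() + y_true.sum() + eps)
    return 1.0 - f1
```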
The architecture used for all datasets consists of neural networks for the unary potentials, the global potential, and the cardinality estimator, respectively. For all networks we use a single hidden layer with ReLU activations. For the unrolled optimization we used gradient ascent with momentum, unrolled for a fixed number of iterations, with a few alternating projection iterations following each gradient step. All of the hyperparameters were tuned on development data. We trained our network using AdaGrad (Duchi et al. 2011).
We compare our method against the following baselines:

SPEN - Structured Prediction Energy Network (Belanger & McCallum 2016), which uses gradient-based inference to optimize an energy network of local and global potentials, trained with an SSVM loss.

E2E-SPEN - an end-to-end version of SPEN (Belanger et al. 2017).

DVN - Deep Value Network (Gygli et al. 2017), which trains an energy function to estimate the task loss on different labels for a given input, with gradient-based inference.

MLP - a multi-layer perceptron with ReLU activations, trained with a cross-entropy loss.
The MLP and SPEN baseline results were taken from Belanger & McCallum (2016). The E2E-SPEN results were obtained by running their publicly available code on these datasets. In our experiments we follow the same train and test split as the baseline methods on all datasets. The results are shown in Table 1.
Dataset    | Bibtex | Bookmarks | Delicious
SPEN       | 42.2   | 34.4      | 37.5
E2E-SPEN   | 38.1   | 33.9      | 34.4
DVN        | 44.7   | 37.1      |
MLP        | 38.9   | 33.8      | 37.8
SC         | 42.0   | 34.6      | 34.6
FP         | 42.1   | 36.0      | 34.2
Ours       |        |           |
6.3 Alternative Implementations
In addition to prior work, we have also compared our approach to methods that replace the cardinality-enforcing component of our network, detailed in Section 4, to examine the role of our unrolled projection scheme in capturing cardinality relations. Specifically, we consider the soft cardinality baseline (SC), which replaces the projection with the weighted cardinality indicators described in Section 4, and the fast projection baseline (FP), which uses the exact projection algorithm of Section 4.1.
Both methods were trained with unrolled gradient ascent inference, using the negative of the F1 measure in Equation 6 as the loss function.
The fast projection method was implemented using a differentiable approximation of the projection algorithm's steps. The original algorithm performs a binary search over the values of λ to obtain the correct projection, using set-union and median operations. Instead, we maintain a lower and upper bound on the uncertainty interval, and in each iteration we compute their average, rather than the median, until the gap between the lower and upper bounds is below a threshold. In each iteration, the candidate λ is compared to the value given by Equation 5. To obtain the indices a and b, we compute their one-hot encodings in every iteration. Let u denote the sorted values of the vector we wish to project. We first compute u − λ and u − λ − 1. Then, we apply a softmax over the element-wise multiplication of a fixed indices vector with each of these, to obtain the one-hot encodings of a and b, respectively. We compute the dot product of the one-hot encodings with the cumulative-sum vector of u to obtain the partial sum in Equation 5. Finally, we use Equation 4 to obtain the projected values.

6.4 Discussion
It can be seen from Table 1 that our method outperforms all baselines we have compared against, obtaining state-of-the-art results in these tasks. Comparing our network to another end-to-end gradient-based inference method, E2E-SPEN achieves significantly lower performance. As stated by the authors in their release notes, their method is prone to overfitting and actually performs worse than the original SPEN on these benchmarks. Additionally, our method improves upon the original SPEN by a large margin. While SPENs obtain their results by pretraining the unary potentials independently, our method is trained jointly in an end-to-end manner, without the need to pretrain any of our architecture components.
The DVN method underperformed on the Bibtex and Bookmarks datasets, compared to our method. Its authors did not report results over the Delicious dataset, and running the code they released yields extremely low results for Delicious. This suggests that further fine-tuning of their method is required for different datasets. The Delicious dataset has by far the largest label space of the three benchmarks, and is therefore the most challenging. Thus, these results illustrate the robustness of our method, as it achieves superior performance for Delicious over all baselines, despite not being specifically tuned for it.
The performance of the SC and FP baselines is surpassed by that of our network. Nevertheless, they obtain results competitive with the strong baselines. This further demonstrates the effectiveness of unrolled optimization over applying an inference scheme separately from the training process. Moreover, these results indicate the power of combining general-form global potentials with cardinality-based potentials to represent complex structural relations.
Since we use an architecture similar to all examined baseline methods for the unary and global scores, our results demonstrate the performance improvements obtained by enforcing cardinality constraints through the Predict and Constrain technique, across all three datasets.
7 Extension to NonBinary Labels
Using the Predict and Constrain method in the multiclass case is an interesting extension of this work. For example, in image segmentation tasks in which each pixel could be assigned to one of multiple classes, cardinality potentials can enforce constraints on the size of labeled objects and encourage smoothness over large groups of pixels.
To reframe our method in the general non-binary form, we consider a matrix $M \in \mathbb{R}^{n \times k}$, where $k$ is the number of possible classes and $n$ the number of labels. Here, every row corresponds to the one-hot representation of the class to which that label is assigned. We relax the matrix values to lie in the continuous interval $[0,1]$. We want to enforce the constraint that the rows of $M$ must lie in the probabilistic simplex, while the columns should obey cardinality constraints. We denote the space of matrices for which these constraints hold as $\mathcal{M}$.
Our approach translates into projection onto the intersection of two sets $\mathcal{C}_1$ and $\mathcal{C}_2$, such that $\mathcal{M} = \mathcal{C}_1 \cap \mathcal{C}_2$, where
$\mathcal{C}_1 = \{M : M_{ij} \ge 0, \ \textstyle\sum_j M_{ij} = 1 \ \forall i\}, \qquad \mathcal{C}_2 = \{M : M_{ij} \ge 0, \ \textstyle\sum_i M_{ij} \le b_j \ \forall j\},$
with $b_j$ being the cardinality constraint of class $j$. Since $\mathcal{C}_1$ and $\mathcal{C}_2$ are convex, we can again apply Dykstra's algorithm, alternating between the projections onto the row constraints and the column constraints.
Since there is no dependence between the rows within $\mathcal{C}_1$, or between the columns within $\mathcal{C}_2$, the projections can be applied in parallel. Specifically, we can alternately apply a simultaneous projection of all rows onto the unit simplex, and of all columns onto the positive simplex (using Algorithm 1 with $b = 1$ for each row, and $b = b_j$ for each column $j$). We note that it is also possible to discard the inequality constraint in $\mathcal{C}_2$ to obtain a simpler projection algorithm.
For cardinality prediction, we could learn a predictor of $b_j$ for each class $j$, and then apply the projection operator of $\mathcal{C}_2$ with the predicted $b_j$. Thus, we utilize the benefits of first predicting the desired cardinalities of each class for a given input and then projecting the columns of $M$ onto the cardinality-constrained space, an approach which has proven successful in our experiments for the binary-label case.
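The alternating row/column projection scheme above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's released implementation: the function names, the sort-based simplex projection, and the use of explicit Dykstra correction terms are our own assumptions about one reasonable realization.

```python
import numpy as np

def project_simplex(v, z=1.0):
    """Euclidean projection of v onto the simplex {x >= 0, sum(x) = z},
    via the standard sort-and-threshold algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - z
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_column(v, b):
    """Projection onto the positive simplex with inequality: {x >= 0, sum(x) <= b}."""
    w = np.maximum(v, 0.0)
    return w if w.sum() <= b else project_simplex(v, b)

def project_rows_cols(M, b, iters=1000):
    """Dykstra's alternating projections onto the intersection of
    C1 (each row on the unit simplex) and C2 (column j sums to at most b[j])."""
    X = M.astype(float).copy()
    P = np.zeros_like(X)  # Dykstra correction term for the row constraint set
    Q = np.zeros_like(X)  # Dykstra correction term for the column constraint set
    for _ in range(iters):
        Y = np.stack([project_simplex(r) for r in X + P])   # all rows in parallel
        P = X + P - Y
        Z = Y + Q
        X = np.column_stack([project_column(Z[:, j], b[j])  # all columns in parallel
                             for j in range(Z.shape[1])])
        Q = Z - X
    return X
```

At termination the iterate satisfies the column constraints exactly (the column step is applied last), while the row sums converge toward 1 as the number of iterations grows.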
This extension generalizes our framework to a wide range of application domains. However, we did not experiment with this method, and it remains an interesting direction for future work.
8 Conclusion
This paper presents a method for structured prediction with highly non-linear score functions, augmented with cardinality constraints. We show how this can be done in an end-to-end manner, by using the algorithmic form of the projection onto the linear constraints that restrict cardinality. This results in a powerful new structured prediction model that can capture elaborate dependencies between the labels and can be trained to optimize model accuracy directly.
We evaluate our method on standard multi-label classification datasets. Our experiments demonstrate that our method achieves new state-of-the-art results on these datasets, outperforming all recent deep learning approaches to the problem. We introduced the novel concept of Predict and Constrain, which we hope will be further explored and applied to additional application domains.
The general underlying approach we propose is to consider high-order constraints where the value of the constraint is predicted by one network, while another network implements the projection onto this constraint. It will be interesting to explore other types of constraints and projections for which this is algorithmically possible and empirically effective.
References
 Abadi et al. (2015) Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray, Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
 Amos & Kolter (2017) Amos, Brandon and Kolter, J Zico. Input-convex deep networks. 2017.
 Andrychowicz et al. (2016) Andrychowicz, Marcin, Denil, Misha, Gomez, Sergio, Hoffman, Matthew W, Pfau, David, Schaul, Tom, and de Freitas, Nando. Learning to learn by gradient descent by gradient descent. 2016.
 Belanger & McCallum (2016) Belanger, David and McCallum, Andrew. Structured prediction energy networks. ICML, 2016.
 Belanger et al. (2017) Belanger, David, Yang, Bishan, and McCallum, Andrew. Endtoend learning for structured prediction energy networks. ICML, 2017.
 Boyle & Dykstra (1986) Boyle, James P and Dykstra, Richard L. A method for finding projections onto the intersection of convex sets in Hilbert spaces. In Advances in order restricted statistical inference. Springer, 1986.
 Chen et al. (2015) Chen, LiangChieh, Schwing, Alexander, Yuille, Alan, and Urtasun, Raquel. Learning deep structured models. In ICML, 2015.
 Duchi et al. (2008) Duchi, John, Shalev-Shwartz, Shai, Singer, Yoram, and Chandra, Tushar. Efficient projections onto the ℓ1-ball for learning in high dimensions. In ICML, 2008.
 Duchi et al. (2011) Duchi, John, Hazan, Elad, and Singer, Yoram. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159, 2011.
 Escalante & Raydan (2011) Escalante, René and Raydan, Marcos. Alternating Projection Methods. SIAM, 2011.
 Ganchev et al. (2010) Ganchev, Kuzman, Gillenwater, Jennifer, Taskar, Ben, et al. Posterior regularization for structured latent variable models. Journal of Machine Learning Research, 11(Jul):2001–2049, 2010.
 Greff et al. (2017) Greff, Klaus, Srivastava, Rupesh K, and Schmidhuber, Jürgen. Highway and residual networks learn unrolled iterative estimation. ICLR, 2017.
 Gupta et al. (2010) Gupta, Mithun Das, Kumar, Sanjeev, and Xiao, Jing. L1 projections with box constraints. CoRR, abs:1010.0141v1, 2010.
 Gupta et al. (2007) Gupta, Rahul, Diwan, Ajit A, and Sarawagi, Sunita. Efficient inference with cardinalitybased clique potentials. In Proceedings of the 24th international conference on Machine learning, pp. 329–336. ACM, 2007.
 Gygli et al. (2017) Gygli, Michael, Norouzi, Mohammad, and Angelova, Anelia. Deep value networks learn to evaluate and iteratively refine structured outputs. 2017.
 Hajimirsadeghi et al. (2015) Hajimirsadeghi, Hossein, Yan, Wang, Vahdat, Arash, and Mori, Greg. Visual recognition by counting instances: A multi-instance cardinality potential kernel. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2596–2605, 2015.
 Ma & Hovy (2016) Ma, Xuezhe and Hovy, Eduard. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In ACL, 2016.
 Maclaurin et al. (2015) Maclaurin, Dougal, Duvenaud, David, and Adams, Ryan. Gradientbased hyperparameter optimization through reversible learning. In ICML, 2015.
 Metz et al. (2016) Metz, Luke, Poole, Ben, Pfau, David, and Sohl-Dickstein, Jascha. Unrolled generative adversarial networks. ICLR, 2016.
 Milch et al. (2008) Milch, Brian, Zettlemoyer, Luke S, Kersting, Kristian, Haimes, Michael, and Kaelbling, Leslie Pack. Lifted probabilistic inference with counting formulas. AAAI, pp. 1062–1068, 2008.
 Schwing & Urtasun (2015) Schwing, Alexander G and Urtasun, Raquel. Fully connected deep structured networks. arXiv preprint arXiv:1503.02351, 2015.
 Swersky et al. (2012) Swersky, Kevin, Sutskever, Ilya, Tarlow, Daniel, Zemel, Richard S, Salakhutdinov, Ruslan R, and Adams, Ryan P. Cardinality restricted Boltzmann machines. In NIPS, pp. 3293–3301, 2012.
 Tarlow et al. (2010) Tarlow, Daniel, Givoni, Inmar E, and Zemel, Richard S. HOP-MAP: Efficient message passing with high order potentials. In Proceedings of the 13th Conference on Artificial Intelligence and Statistics, 2010.
 Tarlow et al. (2012) Tarlow, Daniel, Swersky, Kevin, Zemel, Richard S, Adams, Ryan P, and Frey, Brendan J. Fast exact inference for recursive cardinality models. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, 2012.
 Zheng et al. (2015) Zheng, Shuai, Jayasumana, Sadeep, RomeraParedes, Bernardino, Vineet, Vibhav, Su, Zhizhong, Du, Dalong, Huang, Chang, and Torr, Philip HS. Conditional random fields as recurrent neural networks. 2015.