Balamurugan, P and Shevade, Shirish and Babu, Ravindra T (2012) Efficient Algorithms for Linear Summed Error Structural SVMs. In: IEEE World Congress on Computational Intelligence (IEEE WCCI 2012), incorporating FUZZ-IEEE, IJCNN and IEEE CEC, 10-15 June 2012, Brisbane, Australia.
Structural Support Vector Machines (SSVMs) have become a popular tool in machine learning for predicting structured objects such as parse trees, Part-of-Speech (POS) label sequences and image segments. Various efficient algorithmic techniques have been proposed for training SSVMs on large datasets. The typical SSVM formulation contains a regularizer term and a composite loss term, where the loss term is usually the Linear Maximum Error (LME) associated with the training examples; other choices of loss term are yet to be explored for SSVMs. We formulate a new SSVM with a Linear Summed Error (LSE) loss term and propose efficient algorithms to train it, using a primal cutting-plane method and a sequential dual coordinate descent method. Numerical experiments on benchmark datasets demonstrate that the sequential dual coordinate descent method is faster than the cutting-plane method and reaches steady-state generalization performance sooner. It is thus a useful alternative for training SSVMs when the linear summed error is used.
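For orientation, the contrast between the two loss terms can be sketched as follows. This is a hedged reconstruction using standard margin-rescaled SSVM notation, not the paper's own equations: $\phi(x,y)$ is a joint feature map, $\Delta(y_i,y)$ a structured loss, and $\mathcal{Y}_i$ the set of competing outputs for example $i$. The usual LME loss penalizes only the worst violating structure per example:

$$\min_{w}\;\frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\;\max_{y\in\mathcal{Y}_i}\Big[\Delta(y_i,y) - w^\top\big(\phi(x_i,y_i) - \phi(x_i,y)\big)\Big]_+$$

A summed-error (LSE-style) variant instead accumulates the hinge terms over the competing structures:

$$\min_{w}\;\frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}\sum_{y\in\mathcal{Y}_i}\Big[\Delta(y_i,y) - w^\top\big(\phi(x_i,y_i) - \phi(x_i,y)\big)\Big]_+$$

The exact form of the LSE loss and the set it sums over are as defined in the paper; the sketch above only illustrates the max-versus-sum distinction the abstract refers to.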
Item Type: Conference Proceedings
Additional Information: Copyright for this article belongs to the IEEE
Department/Centre: Division of Electrical Sciences > Computer Science & Automation (Formerly, School of Automation)
Date Deposited: 06 Dec 2012 10:02
Last Modified: 06 Dec 2012 10:02