Journal
EUROPEAN JOURNAL OF OPERATIONAL RESEARCH
Volume 224, Issue 3, Pages 560-565
Publisher
ELSEVIER
DOI: 10.1016/j.ejor.2012.09.004
Keywords
Large scale optimization; Machine learning; Linear classification; Incremental algorithms
Abstract
In this work we consider the problem of training a linear classifier when the number of training examples is huge (in particular, the dataset may exceed memory capacity). We propose a linear least-squares formulation of the problem and an incremental recursive algorithm that requires storing only a square matrix whose dimension equals the number of features. The algorithm, which is very simple to implement, converges to the solution after processing each training example once; it therefore sidesteps memory limitations and is a viable method for large scale linear classification and for real-time applications, provided that the number of features is not too large (say, of the order of thousands). Extensive computational experiments show that the proposed algorithm is at least competitive with state-of-the-art algorithms for large scale linear classification. (c) 2012 Elsevier B.V. All rights reserved.
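The abstract does not state the update equations, but an incremental least-squares scheme that stores only a feature-by-feature square matrix is typically realized as recursive least squares (RLS), where the inverse of the regularized Gram matrix is maintained with rank-1 Sherman-Morrison updates. A minimal sketch under that assumption (the function name and toy data below are illustrative, not taken from the paper):

```python
import numpy as np

def rls_train(X, y, lam=1.0):
    """One pass of recursive least squares over (x_i, y_i), y_i in {-1, +1}.

    Keeps only the weight vector w and the d x d matrix
    P = (lam*I + sum_i x_i x_i^T)^{-1}, updated in place via the
    Sherman-Morrison formula, so memory is O(d^2) regardless of the
    number of training examples.
    """
    d = X.shape[1]
    w = np.zeros(d)
    P = np.eye(d) / lam              # inverse of the regularized Gram matrix
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (1.0 + x @ Px)      # gain vector
        w += k * (t - w @ x)         # correct the current prediction error
        P -= np.outer(k, Px)         # rank-1 Sherman-Morrison downdate
    return w

# Toy separable problem: the label is the sign of the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sign(X[:, 0])
w = rls_train(X, y)
accuracy = (np.sign(X @ w) == y).mean()
print(accuracy)
```

Each example is touched exactly once and then discarded, which is what makes such a scheme suitable when the data stream is larger than memory; the cost per example is O(d^2), consistent with the abstract's caveat that the feature count should stay in the thousands.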