Journal
COMPUTATIONAL STATISTICS & DATA ANALYSIS
Volume 55, Issue 6, Pages 2197-2208
Publisher
ELSEVIER SCIENCE BV
DOI: 10.1016/j.csda.2010.12.012
Keywords
Least squares; Circle fitting; Ellipse fitting; Algebraic distance minimization; Error analysis; Bias removal
Funding
- Ministry of Education, Culture, Sports, Science, and Technology, Japan [C 21500172]
- Grants-in-Aid for Scientific Research [21500172] Funding Source: KAKEN
Abstract
This work extends the circle fitting method of Rangarajan and Kanatani (2009) to accommodate ellipse fitting. Our method, which we call HyperLS, relies on algebraic distance minimization with a carefully chosen scale normalization. The normalization is derived through a rigorous error analysis of least squares (LS) estimators so that statistical bias is eliminated up to second-order noise terms. Numerical evidence suggests that the proposed HyperLS estimator is far superior to the standard LS and slightly better than the Taubin estimator. Although suboptimal in comparison to maximum likelihood (ML), HyperLS does not require iterations. Hence, it does not suffer from the convergence issues caused by poor initialization that are inherent in ML estimators. In this sense, the proposed HyperLS is a perfect candidate for initializing the ML iterations. (C) 2011 Elsevier B.V. All rights reserved.
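To make the framework concrete, the following is a minimal sketch of algebraic distance minimization for conic fitting in the family the abstract describes. It implements only the *standard LS* baseline: each point contributes a row xi, and the coefficient vector theta minimizes the sum of squared algebraic distances ||Xi theta||^2 subject to the plain normalization ||theta|| = 1. The function name `fit_ellipse_ls` and the particular row ordering of xi are illustrative assumptions, not the paper's notation; HyperLS and the Taubin method differ from this baseline precisely in the choice of normalization matrix, whose HyperLS form is not reproduced here.

```python
import numpy as np

def fit_ellipse_ls(x, y):
    """Standard algebraic least-squares conic fit (the LS baseline).

    Represents the conic A x^2 + 2B xy + C y^2 + 2D x + 2E y + F = 0
    and minimizes sum_i (xi_i . theta)^2 subject to ||theta|| = 1.
    (HyperLS replaces this constraint with a bias-cancelling scale
    normalization derived from a second-order error analysis.)
    """
    # Design matrix: one row xi = (x^2, 2xy, y^2, 2x, 2y, 1) per point.
    Xi = np.column_stack([x**2, 2*x*y, y**2, 2*x, 2*y, np.ones_like(x)])
    # Moment matrix M = (1/N) sum_i xi_i xi_i^T.  The LS solution is the
    # unit eigenvector of M for the smallest eigenvalue.
    M = Xi.T @ Xi / len(x)
    w, V = np.linalg.eigh(M)       # eigenvalues in ascending order
    return V[:, 0]                 # (A, B, C, D, E, F), unit norm
```

With noise-free points the smallest eigenvalue is zero and the fit is exact; the bias that HyperLS removes appears only once the data points are perturbed by noise.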