Article

Using prevalence indices to aid interpretation and comparison of agreement ratings between two or more observers

Journal

VETERINARY JOURNAL
Volume 188, Issue 2, Pages 166-170

Publisher

ELSEVIER SCI LTD
DOI: 10.1016/j.tvjl.2010.04.021

Keywords

Kappa; Observer agreement; Population homogeneity; Statistics; Subjective scoring

Abstract

Veterinary clinical and epidemiological investigations demand observer reliability. Kappa (κ) statistics are often used to adjust the observed percentage agreement according to that expected by chance. In highly homogeneous populations, κ ratings can be poor despite high percentage agreements, because the probability of chance agreement is also high. Veterinary researchers are often unsure how to interpret these ambiguous results. It is suggested that prevalence indices (PIs), reflecting the homogeneity of the sample, should be reported alongside percentage agreements and κ values. Here, a published PI calculation is extended, permitting extrapolation to situations involving three or more observers. A process is proposed for classifying results into those that do and do not attain clinically useful ratings, and those tested on excessively homogeneous populations and which are therefore inconclusive. Preselection of balanced populations, or adjustment of scoring thresholds, can help reduce population homogeneity. Reporting PIs in observer reliability studies in veterinary science and other disciplines enables reliability to be interpreted usefully and allows results to be compared between studies. (C) 2010 Elsevier Ltd. All rights reserved.
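The abstract's central point — that high percentage agreement can coexist with a poor κ when the sample is homogeneous — can be illustrated with a small sketch. The code below uses the standard two-observer Cohen's κ and a binary prevalence index defined as PI = |a − d| / n (the difference between concordant "yes" and concordant "no" counts over the sample size), a common published formulation; it does not reproduce the paper's own extension to three or more observers, and the data are invented for illustration.

```python
def agreement_stats(a, b):
    """Observed agreement, Cohen's kappa, and prevalence index
    for two observers scoring a binary (0/1) trait."""
    n = len(a)
    # Cells of the 2x2 contingency table.
    yy = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)  # both "yes"
    nn = sum(1 for x, y in zip(a, b) if x == 0 and y == 0)  # both "no"
    yn = sum(1 for x, y in zip(a, b) if x == 1 and y == 0)
    ny = sum(1 for x, y in zip(a, b) if x == 0 and y == 1)

    po = (yy + nn) / n  # observed percentage agreement
    # Chance agreement from each observer's marginal "yes"/"no" rates.
    pe = ((yy + yn) / n) * ((yy + ny) / n) + ((nn + ny) / n) * ((nn + yn) / n)
    kappa = (po - pe) / (1 - pe) if pe < 1 else 0.0
    pi = abs(yy - nn) / n  # prevalence index: imbalance of concordant cells
    return po, kappa, pi

# A highly homogeneous sample: nearly every subject scored "yes".
a = [1] * 18 + [0, 1]
b = [1] * 18 + [1, 0]
po, kappa, pi = agreement_stats(a, b)
# po = 0.90 (high agreement), yet kappa is negative and PI = 0.90,
# flagging the sample as too homogeneous for kappa to be conclusive.
```

With a more balanced sample the same observed agreement would yield a much higher κ and a low PI, which is why the paper recommends reporting PI alongside κ.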

