Thales Sehn Körting

Is there an "Almost Perfect" agreement in a classification?



I discuss the extensive use of the "Strength of Agreement" table based on different Kappa values, provided by Landis, J.R. and Koch, G.G., 1977. The measurement of observer agreement for categorical data. Biometrics, pp. 159-174. According to Google Scholar, this paper has more than 53,000 citations (as of October 2019). In my opinion, this table has sometimes been used with a different purpose than the original paper intended; according to the authors, the divisions "have been illustrated with an example involving only two observers", and "these divisions are clearly arbitrary".

The original paper is available at

Follow my podcast:
Subscribe to my YouTube channel:

The intro and final sounds were recorded at my home, using an old clock that belonged to my grandmother. Thanks for listening.
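As a companion to the discussion, here is a minimal sketch in Python of how Cohen's Kappa is computed for two observers and then mapped to the Landis & Koch (1977) "Strength of Agreement" labels. The confusion matrix below is hypothetical and only illustrates the mechanics; as the episode argues, the category boundaries themselves are arbitrary.

```python
def cohens_kappa(matrix):
    """Cohen's kappa from a square confusion matrix
    (rows: observer 1, columns: observer 2)."""
    total = sum(sum(row) for row in matrix)
    # Observed agreement: proportion of items on the diagonal
    p_o = sum(matrix[i][i] for i in range(len(matrix))) / total
    # Chance agreement: product of the marginal proportions, summed over classes
    p_e = sum(
        (sum(matrix[i]) / total) * (sum(row[i] for row in matrix) / total)
        for i in range(len(matrix))
    )
    return (p_o - p_e) / (1 - p_e)

def landis_koch_label(kappa):
    """Landis & Koch's (admittedly arbitrary) divisions."""
    if kappa < 0.00:
        return "Poor"
    if kappa <= 0.20:
        return "Slight"
    if kappa <= 0.40:
        return "Fair"
    if kappa <= 0.60:
        return "Moderate"
    if kappa <= 0.80:
        return "Substantial"
    return "Almost Perfect"

# Hypothetical two-class classification rated by two observers
matrix = [[45, 5],
          [10, 40]]
kappa = cohens_kappa(matrix)
print(round(kappa, 3), landis_koch_label(kappa))
```

For this hypothetical matrix, observed agreement is 0.85 and chance agreement is 0.50, giving a Kappa of 0.70, which the table would call "Substantial" rather than "Almost Perfect".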