# ICC: Consistency or Absolute Agreement?

Reliability based on absolute agreement is always lower than (or equal to) reliability based on consistency, because a stricter criterion is applied. To this end, we have re-analyzed the theory presented by refs. [4-6], supplemented by Monte Carlo simulation demonstrations. We limit the current discussion to single-measure ICCs; in other words, each value entering the analysis is a single measurement, not the average of two or more measurements.

The notation used by McGraw and Wong [6] distinguishes three ICC formulas: ICC(1) (the original ICC without bias, introduced by Fisher [3]), ICC(A,1) (the absolute-agreement ICC in the presence of bias), and ICC(C,1) (the consistency ICC in the presence of bias). These three formulas are identical to ICC(1,1), ICC(2,1), and ICC(3,1), respectively, as named and discussed by Shrout and Fleiss [5]. However, the notation of McGraw and Wong is clearer because it systematically separates absolute agreement (A) from consistency (C); that is why we follow their notation.

If one evaluates the absolute agreement of an outcome index measured repeatedly, the between-trial variability must be counted as error, because trial is treated as a random factor, as in the following equation:

ICC(A,1) = σ²_r / (σ²_r + σ²_c + σ²_e)

If one evaluates the consistency of an outcome index measured repeatedly, trial is treated as a fixed factor whose variance is not counted as error, and the following equation can be applied:

ICC(C,1) = σ²_r / (σ²_r + σ²_e)

Here σ²_r is the between-subject variance, σ²_c the between-trial (bias) variance, and σ²_e the residual error variance.

I used the Reliability procedure in SPSS (Analyze → Scale → Reliability Analysis) to compute intraclass correlations (ICCs) with a two-way model. For comparison, I ran this model once with the absolute-agreement definition and once with the consistency definition. I was surprised to see that the ICC was higher for absolute agreement than for consistency. Given that I considered absolute agreement to be the stricter definition, this result appears counterintuitive.
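The two definitions can be checked numerically. The sketch below is a minimal NumPy simulation (the data, sample sizes, and the size of the systematic trial bias are all hypothetical choices for illustration): it computes the standard two-way ANOVA mean squares and plugs them into the single-measure mean-squares formulas of McGraw and Wong, where only ICC(A,1) charges the between-trial variance against reliability.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 30, 3  # n subjects (rows), k repeated trials (columns) -- illustrative sizes
# Simulate two-way data: subject effect + systematic trial bias + residual noise
subj = rng.normal(0.0, 2.0, size=(n, 1))   # between-subject SD = 2
trial_bias = np.array([0.0, 1.0, 2.0])     # systematic shift per trial (the "bias")
noise = rng.normal(0.0, 1.0, size=(n, k))  # residual error SD = 1
Y = subj + trial_bias + noise

# Two-way ANOVA mean squares
grand = Y.mean()
MSR = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # rows (subjects)
MSC = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # columns (trials)
SSE = ((Y - Y.mean(axis=1, keepdims=True)
          - Y.mean(axis=0, keepdims=True) + grand) ** 2).sum()
MSE = SSE / ((n - 1) * (k - 1))

# McGraw & Wong single-measure ICCs:
# consistency ignores the trial effect, absolute agreement includes it
icc_c1 = (MSR - MSE) / (MSR + (k - 1) * MSE)
icc_a1 = (MSR - MSE) / (MSR + (k - 1) * MSE + (k / n) * (MSC - MSE))

print(f"ICC(C,1) = {icc_c1:.3f}")
print(f"ICC(A,1) = {icc_a1:.3f}")
```

With a clear trial bias, as simulated here, the (k/n)(MSC − MSE) term is positive, so ICC(A,1) comes out below ICC(C,1); note, however, that the term depends on the sample estimate of MSC, so in a given data set the ordering need not match the population expectation.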
Can someone point me to the definitions of these two criteria that would explain this result?