The most popular measure of agreement is Cohen's kappa, an index of agreement between categorical ratings that is often used as a measure of reliability or validity. Although kappa can be interpreted as percentage agreement corrected for chance, its definition does not rest on an explicit probabilistic model. This book develops a probability framework for modeling rater agreement in medical diagnosis and proposes a new agreement index based on that model, one that carries useful interpretations and conveys information about intrinsic agreement not attributable to chance. In practice the new index takes the same form as Cohen's kappa, thereby offering a novel way to understand kappa. Maximum likelihood estimation and statistical tests of hypotheses concerning the new index are developed for inference. Monte Carlo simulations confirm the theoretical findings, and numerical analyses of data from medical studies demonstrate the practical applications. The book will be helpful to researchers in biomedical sciences, education, psychology, sociology, and other fields concerned with measuring agreement.
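For reference, the standard form of Cohen's kappa alluded to above (the book's new probability-based index is not reproduced here) is

\[
\kappa = \frac{p_o - p_e}{1 - p_e},
\]

where \(p_o\) is the observed proportion of agreement between the two raters and \(p_e\) is the proportion of agreement expected by chance under the assumption that the raters classify independently. The "correction for chance" is visible in the formula: \(\kappa = 0\) when observed agreement equals chance agreement, and \(\kappa = 1\) under perfect agreement.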