Interrater reliability of client categorization system as a psychiatric status rating scale in measuring psychiatric patients’ health status

Authors

  • Intansari Nurjannah, Mental Health and Community Nursing Department, School of Nursing, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia http://orcid.org/0000-0002-5406-8727
  • Putri Nurmasari, School of Nursing, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia
  • Irwan Tri Nugraha, School of Nursing, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia
  • Diki Yuge Katan, School of Nursing, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia
  • Ki Hariyadi, Statistician, Department of Mathematics, Faculty of Science and Technology, Sunan Kalijaga State Islamic University, Yogyakarta, Indonesia

DOI:

https://doi.org/10.18203/2320-6012.ijrms20171868

Keywords:

Interrater reliability, Psychiatric status rating scale

Abstract

Background: A reliable instrument is needed by nurses to measure patients’ status. This study aimed to determine the interrater reliability of the client categorization system (CCS) instrument used in the psychiatric setting.

Methods: Data were collected in two periods within one year by twelve raters, who used the CCS to simultaneously gauge the condition of 30 patients, yielding 98 measurements. Data were analyzed using Cohen’s kappa and percent agreement.

Results: The analysis showed a Cohen’s kappa value for interrater reliability of 0.59, which can be categorized as moderate (acceptable if the value is >0.41), and a percent agreement of 77.78%, which indicates acceptable agreement.

Conclusions: The CCS has a moderate and acceptable level of interrater reliability and can be recommended for use in clinical practice.
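The two statistics reported above can be computed directly from paired categorical ratings. The following is a minimal Python sketch using made-up ratings (not the study's data); the CCS category labels I-IV are assumed purely for illustration:

```python
# Illustrative sketch: percent agreement and Cohen's kappa for two raters
# assigning patients to categories. The ratings below are invented data,
# not measurements from the study.
from collections import Counter

def percent_agreement(r1, r2):
    """Percentage of cases on which the two raters agree."""
    matches = sum(a == b for a, b in zip(r1, r2))
    return 100.0 * matches / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(r1)
    p_o = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    # expected agreement if both raters assigned categories independently
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in set(r1) | set(r2))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical CCS categorizations of 9 patients (labels assumed)
rater_a = ["I", "II", "II", "III", "IV", "II", "I", "III", "II"]
rater_b = ["I", "II", "III", "III", "IV", "II", "I", "II", "II"]

print(f"Percent agreement: {percent_agreement(rater_a, rater_b):.2f}%")  # 77.78%
print(f"Cohen's kappa: {cohens_kappa(rater_a, rater_b):.2f}")            # 0.68
```

Because kappa subtracts chance agreement, it is always at or below raw percent agreement; the interpretation bands used in the paper (e.g. 0.41-0.60 as moderate) follow the Landis and Koch convention cited in the references.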


Author Biographies

Intansari Nurjannah, Mental Health and Community Nursing Department, School of Nursing, Faculty of Medicine, Universitas Gadjah Mada, Yogyakarta, Indonesia

Department of Mental Health and Community Nursing

Ki Hariyadi, Statistician, Department of Mathematics, Faculty of Science and Technology, Sunan Kalijaga State Islamic University, Yogyakarta, Indonesia

Department of Mathematics, Faculty of Science and Technology

References

Schroder PJ, Washington PW, Deering DC, Ciyne L. Testing validity and reliability in a psychiatric patient classification system: psychiatric nurses test their patient classification system. Nursing Management. 1986;49(1):49-54.

Fries BE, Nerenz DR, Falcon SP, Ashcraft MLF, Lee CZ. A classification system for long-staying psychiatric patients. Medical Care. 1990;28(4):311-23.

Overall JE. The Brief Psychiatric Rating Scale in psychopharmacology research. Modern Problems of Pharmacopsychiatry. 1974;7:67-68.

Lachar D, Bailley SE, Rhoades HM, Varner RV. Use of BPRS-A percent change scores to identify significant clinical improvement: accuracy of treatment response classification in acute psychiatric inpatients. Psychiatry Research. 1999;89:259-68.

Eklof M, Qu W. Validating a Psychiatric patient Classification System. JONA. 1986;16(5):1986.

Croft A. A psychiatric patient classification system that works! Nursing Management. 1993;24(11):66.

Iglesia C, Villa AM. A system of patient classification in long-term psychiatric inpatients: Resource Utilization Groups T-18 (RUG T-18). 2005.

Morath J, Fleischmann R, Boggs G. A missing consideration: the psychiatric patient classification for scheduling-staffing systems. Perspectives in Psychiatric Care. 1989;25(3-4):40-7.

Ehrman ML. Using a factored patient classification system in psychiatry. Nursing Management. 1987;18(5):48-53.

Ringerman ES, Luz S. A psychiatric patient classification system. Nursing Management. 1990;21(10):66-71.

O'Leary C. A psychiatric patient classification system. Nursing Management. 1991;22(9):66.

Nurjannah I. Treatment guidelines for mental illness patient: management, nursing process and nurse-patient therapeutic relationship. Yogyakarta: Mocomedia. 2004.

Robins E. Categories versus dimensions in psychiatric classification. 1976;8:39-55.

van der Vleuten C. Validity of final examinations in undergraduate medical training. BMJ. 2000;321(7270):1217.

McHugh ML. Interrater reliability: the kappa statistic. Biochemia Medica. 2012;22(3):276-82.

Morris R, MacNeela P, Scott A, Treacy P, Hyde A, O'Brien J, et al. Ambiguities and conflicting results: the limitations of the kappa statistic in establishing the interrater reliability of the Irish nursing minimum data set for mental health: a discussion paper. International Journal of Nursing Studies. 2008;45(4):645-7.

Craddock J. Interrater reliability of psychomotor skill assessment in athletic training: ProQuest. 2009.

Cargo M, Stankov I, Thomas J, Saini M, Rogers P, Mayo-Wilson E, et al. Development, interrater reliability and feasibility of a checklist to assess implementation (Ch-IMP) in systematic reviews: the case of provider-based prevention and treatment programs targeting children and youth. BMC Medical Research Methodology. 2015;15:73.

Gorsuch RL. Factor Analysis. Hillsdale, NJ: Erlbaum Associates; 1983.

Hatcher L. A Step-by-Step Approach to Using the SAS System for Factor Analysis and Structural Equation Modeling. Cary, NC: SAS Institute Inc; 1994.

Stuart GW, Sundeen SJ. Principles and Practice of Psychiatric Nursing. St. Louis: Mosby Year Book; 1995.

Rushforth HE. Objective Structured Clinical Examination (OSCE): Review of Literature and Implications for Nursing Education. Nurse Education Today. 2007:481-90.

McCray. Assessing Interrater Agreement For Nominal Judgement Variables. Language Testing Forum. 2013;15-7.

House A, House B, Campbell M. Measures of interobserver agreement: calculation formulas and distribution effects. Journal of Behavioral Assessment. 1981;3(1):37-57.

Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-74.

Osborne JW. Best Practices in Quantitative Methods. Thousand Oaks, CA: Sage; 2008.

Parloca PK, Henry SB. The usefulness of the Georgetown Home Health Care Classification System for coding patient problems and nursing interventions in psychiatric home care. Computers in Nursing. 1998;16(1):45-52.

Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. Journal of Clinical Epidemiology. 1990;43(6):543-9.

Nurjannah I, Siwi SM. Guidelines for analysis on measuring interrater reliability of nursing outcome classification. International Journal of Research in Medical Sciences. 2017;5(4):1169-75.

Neuendorf KA. The Content Analysis Guidebook. Thousand Oaks, CA: Sage; 2002.

von Eye A, Mun EY. Analyzing Rater Agreement: Manifest Variable Methods. New Jersey: Lawrence Erlbaum Associates; 2005.

Published

2017-04-26

How to Cite

Nurjannah, I., Nurmasari, P., Nugraha, I. T., Katan, D. Y., & Hariyadi, K. (2017). Interrater reliability of client categorization system as a psychiatric status rating scale in measuring psychiatric patients’ health status. International Journal of Research in Medical Sciences, 5(5), 2193–2201. https://doi.org/10.18203/2320-6012.ijrms20171868

Issue

Section

Original Research Articles