Using item analysis on essay-type questions given in the summative examination of medical college students: facility value and discrimination index

Gul-Ar Navi Khan, Nazia Ishrat, Abdul Qayyum Khan


Background: The essay question is the most common tool used for the assessment of knowledge. Its evaluation depends on test and item analysis, which consists of the analysis of individual questions and of the test as a whole. The objective of our study was to calculate item analysis indices (facility value, FV, and discrimination index, DI) for questions given in the terminal examination of MBBS students and to assess the adequacy of question framing in the examination.

Methods: The study included 150 medical students who had undergone a terminal examination consisting of essay-type, structured essay-type and short-answer questions. We divided the students into a high-ability group and a low-ability group. A mark range was decided for the essay-type, structured essay-type and short-answer questions, and FV and DI were calculated for each item.
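The abstract does not state the exact formulas used. For marked (non-dichotomous) items such as essays, a commonly used convention is to take FV as the pooled mean mark of the two groups expressed as a percentage of the maximum mark, and DI as the difference between the high- and low-group mean marks scaled by the maximum mark. The sketch below follows that convention; the function name and the marks are hypothetical illustrations, not data from the study.

```python
def item_indices(high_marks, low_marks, max_marks):
    """FV and DI for one marked item.

    high_marks / low_marks: marks scored on this item by the
    high-ability and low-ability groups (equal-sized groups,
    e.g. top and bottom thirds ranked by total score).
    """
    h = sum(high_marks) / len(high_marks)   # mean mark, high group
    l = sum(low_marks) / len(low_marks)     # mean mark, low group
    fv = (h + l) / (2 * max_marks) * 100    # facility value, percent
    di = (h - l) / max_marks                # discrimination index
    return fv, di

# Hypothetical marks on a 10-mark structured essay question
fv, di = item_indices([9, 8, 7, 8], [5, 4, 6, 5], max_marks=10)
# FV = (8 + 5) / 20 * 100 = 65.0; DI = (8 - 5) / 10 = 0.3
```

Under this convention an FV near the middle of the percentage scale indicates average difficulty, and a larger positive DI indicates that the item separates the high- and low-ability groups more sharply.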

Results: The facility values of 62.5% of the questions fell within the recommended and acceptable range, and 50% fell within the acceptable range. The discrimination indices of 100% of the questions fell within the recommended and acceptable range, with 75% within the acceptable range.

Conclusion: Our results highlight the importance of item analysis. To improve examinations, items with average difficulty and high discrimination should be included in future examinations to improve test scores and to discriminate properly among students.



Keywords: Item analysis of essay-type questions, Facility value, Discrimination index



