EVALUATING MULTIPLE CHOICE QUESTIONS FROM ENGINEERING STATISTICS ASSESSMENT

  • Aishah Mohd Noor, Faculty of Applied Science and Humanity, Universiti Malaysia Perlis, Perlis, MALAYSIA

Abstract

Item analysis is an essential aspect of test construction and of the continuous improvement of test items. This study evaluates fifteen test items on four measures: difficulty index, discrimination index, distractor efficiency, and point-biserial correlation. The data were obtained from the test responses of 38 students; two sets of questions were randomly assigned to eighteen and twenty students, respectively. The analysis was performed using Microsoft Excel. The difficulty index was moderate for both sets, with mean and standard deviation found to be and respectively. Ten items (Set 1) and eight items (Set 2) showed a moderate difficulty index. The discrimination index was reasonably good, with a mean of (Set 1), and considered marginal, with a mean of (Set 2). The correlation between item scores and total scores was significant for six items (Set 1) at the 5% significance level; all six of these items had moderate difficulty and a very good discrimination index. In Set 2, four items were significantly correlated, of which three had moderate difficulty and a very good discrimination index. Out of 45 distractors, only four (Set 1) were non-functioning, whereas twelve non-functioning distractors were found in Set 2. The evaluation of the test items provides information on their quality. Items with a moderate difficulty index and very good discriminating power can be included in the question bank. Continuous modification of test items based on item analysis eventually improves the overall quality of assessments.
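The four indices named in the abstract follow standard classical-test-theory definitions: the difficulty index is the proportion of examinees answering an item correctly, the discrimination index is the difference in that proportion between the upper and lower 27% of scorers (Kelley, 1939), the point-biserial correlation relates each item score to the total score, and a distractor selected by fewer than roughly 5% of examinees is usually counted as non-functioning. The Python sketch below is illustrative only and assumes a binary-scored response matrix; the study's own calculations were carried out in Microsoft Excel, and the function names and demo data here are hypothetical, not taken from the paper.

    # Illustrative item-analysis sketch (not the author's Excel workbook).
    # Assumes rows = students, columns = items, entries = 0/1 item scores.
    import numpy as np

    def item_analysis(scores, upper_lower_fraction=0.27):
        """Return difficulty, discrimination, and point-biserial per item."""
        scores = np.asarray(scores, dtype=float)
        n_students, n_items = scores.shape
        total = scores.sum(axis=1)

        # Difficulty index p: proportion of correct responses to each item.
        difficulty = scores.mean(axis=0)

        # Discrimination index D: p(upper 27% of scorers) - p(lower 27%).
        order = np.argsort(total)
        k = max(1, int(round(upper_lower_fraction * n_students)))
        lower, upper = order[:k], order[-k:]
        discrimination = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)

        # Point-biserial correlation of each item score with the raw total score.
        point_biserial = np.array(
            [np.corrcoef(scores[:, j], total)[0, 1] for j in range(n_items)]
        )
        return difficulty, discrimination, point_biserial

    def non_functioning(distractor_counts, n_students, threshold=0.05):
        """Flag distractors chosen by fewer than `threshold` of examinees."""
        return [opt for opt, count in distractor_counts.items()
                if count / n_students < threshold]

    # Small fabricated example: 8 students, 3 items.
    demo = np.array([[1, 0, 1],
                     [1, 1, 1],
                     [0, 0, 1],
                     [1, 1, 0],
                     [0, 0, 0],
                     [1, 1, 1],
                     [0, 1, 1],
                     [1, 0, 0]])
    p, d, r_pb = item_analysis(demo)
    print("difficulty:", p.round(2))
    print("discrimination:", d.round(2))
    print("point-biserial:", r_pb.round(2))
    print("non-functioning:", non_functioning({"B": 0, "C": 3, "D": 1}, n_students=20))

Note that the point-biserial in this sketch is computed against the uncorrected total score; a corrected version would exclude the item under analysis from the total before correlating.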

Published
2021-12-07
How to Cite
MOHD NOOR, Aishah. EVALUATING MULTIPLE CHOICE QUESTIONS FROM ENGINEERING STATISTICS ASSESSMENT. International Journal of Education and Pedagogy, [S.l.], v. 3, n. 4, p. 33-46, dec. 2021. ISSN 2682-8464. Available at: <https://myjms.mohe.gov.my/index.php/ijeap/article/view/16419>. Date accessed: 15 aug. 2022.