UPSI Digital Repository (UDRep)

Type: article
Subject: H Social Sciences (General)
Main Author: Sakinah Salleh
Additional Authors: Zainal Abidin
Abdullah Ibrahim
Rohaya Talib
Bichi, Ado Abdu
Rahimah Embong
Title: Comparative analysis of classical test theory and item response theory using chemistry test data
Place of Production: Tanjong Malim
Publisher: Fakulti Sains Kemanusiaan
Year of Publication: 2019
Corporate Name: Universiti Pendidikan Sultan Idris

Abstract:
Assessment of learning involves determining whether the content and objectives of education have been mastered by administering quality tests. This study assesses the quality of a Chemistry Achievement Test and compares the item statistics generated using classical test theory (CTT) and item response theory (IRT) methods. A descriptive survey design was adopted with a sample of N = 530 students. The specialised XCALIBRE 4 and ITEMAN 4 software packages were used to conduct the item analysis. Results indicate that the two methods commonly identified 13 (32.5%) items as "problematic" and 27 (67.5%) as "good". Similarly, significantly high correlations exist between the item statistics derived from the CTT and IRT models (r = -0.985 and r = 0.801, p
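The CTT statistics the abstract compares against IRT parameters can be illustrated with a short simulation. The sketch below is not the study's analysis (which used ITEMAN 4 and XCALIBRE 4 on real test data); it generates hypothetical Rasch-model responses for N = 530 students on 40 items, computes the CTT item difficulty (p-value) and point-biserial discrimination, derives a crude logit-based difficulty estimate, and correlates the two difficulty scales. All names and sample sizes here are illustrative assumptions, with the counts chosen to mirror the study.

```python
import math
import random

random.seed(0)
n_students, n_items = 530, 40  # mirrors the study's sample and test length

# Simulate dichotomous (0/1) responses under a Rasch (1PL) model:
# P(correct) depends on student ability minus item difficulty.
abilities = [random.gauss(0, 1) for _ in range(n_students)]
difficulties = [random.gauss(0, 1) for _ in range(n_items)]

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

responses = [[1 if random.random() < p_correct(th, b) else 0
              for b in difficulties] for th in abilities]

# CTT item difficulty: proportion answering the item correctly (the p-value).
p_values = [sum(row[j] for row in responses) / n_students
            for j in range(n_items)]

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# CTT discrimination: point-biserial correlation of each item with the total score.
totals = [sum(row) for row in responses]
point_biserials = [pearson([row[j] for row in responses], totals)
                   for j in range(n_items)]

# Crude IRT-style difficulty estimate: the logit of the failure rate.
b_est = [math.log((1 - p) / p) for p in p_values]

# Harder items have lower p-values, so the CTT and IRT difficulty scales
# correlate strongly and negatively, as the abstract's r = -0.985 reflects.
r = pearson(p_values, b_est)
```

The strong negative sign arises because the CTT p-value runs from hard (low) to easy (high) while IRT difficulty runs the opposite way; the logit transform is close to linear for mid-range p-values, which is why the two frameworks tend to rank items almost identically.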

References

1. S. P. Klein and L. Hamilton, Large-Scale Testing: Current Practices and New Directions. Santa Monica, CA: RAND, 1999.

2. A. A. Bichi, R. B. Embong, M. Mamat, and D. A. Maiwada, "Comparison of Classical Test Theory and Item Response Theory: A Review of Empirical Studies," Aust. J. Basic Appl. Sci., vol. 9, pp. 549–556, Apr. 2015.

3. A. A. Bichi, R. Talib, H. Mohamed, J. Ahamad, and N. A. Khairuddin, “Exploratory Sequential Design to Develop and Validate Economics Placement Test for Nigerian Universities,” Int. J. Recent Technol. Eng., vol. 7, no. 6, pp. 769–772, 2019.

4. R. K. Hambleton and R. W. Jones, “An NCME instructional module on: Comparison of classical test theory and item response theory and their applications to test development,” Educ. Meas. issues Pract., vol. 12, no. 3, pp. 38–47, 1993.

5. T. G. Courville and B. Thompson, "An empirical comparison of item response theory and classical test theory item/person statistics," Ph.D. dissertation, Texas A&M University, College Station, TX, Aug. 2004.

6. B. A. Adegoke, “Comparison of item statistics of Physics achievement test using classical test and item response theory frameworks,” J. Educ. Pract., vol. 4, no. 22, pp. 87–96, 2013.

7. N. Guler, G. K. Uyanik, and G. T. Teker, “Comparison of classical test theory and item response theory in terms of item parameters,” Eur. J. Res. Educ., vol. 2, no. 1, pp. 1–6, 2014.

8. H. Nenty and O. O. Adedoyin, "Test for invariance: Inter and intra model validation of classical test and item response theories," Asia Pacific J. Res., 2013.

9. A. A. Bichi and R. Talib, "Item Response Theory: An Introduction to Latent Trait Models to Test and Item Development," Int. J. Eval. Res. Educ., 2018.

10. A. D. Mead and A. W. Meade, "Item selection using CTT and IRT with unrepresentative samples," presented at the 25th Annual Meeting of the Society for Industrial and Organizational Psychology, Atlanta, GA, 2010.

11. X. Fan, “Item response theory and classical test theory: An empirical comparison of their item/person statistics,” Educ. Psychol. Meas., vol. 58, no. 3, pp. 357–381, 1998.

12. L. Crocker and J. Algina, Introduction to classical and modern test theory. ERIC, 1986.

13. D.-T. Le, “Applying item response theory modeling in educational research.,” Diss. Abstr. Int. Sect. B Sci. Eng., vol. 75, no. 1, 2014.

14. X. An and Y. Yung, “Item Response Theory: What It Is and How You Can Use the IRT Procedure to Apply It,” SAS Inst. Inc., pp. 1–14, 2014.

15. J. C. Nunnally, Psychometric Theory, 2nd ed. New York: McGraw-Hill, 1978.

16. N. Georgiev, "Item analysis of C, D and E series from Raven's Standard Progressive Matrices with item response theory two-parameter logistic model," Eur. J. Psychol., vol. 4, no. 3, 2008.

17. D. Ojerinde, K. Popoola, F. Ojo, and P. Onyeneho, Introduction to Item Response Theory: Parameter Models, Estimation and Application. Abuja: Marvelous Mike Press Ltd, 2012.

18. M. D. Reckase, “Unifactor latent trait models applied to multifactor tests: Results and implications,” J. Educ. Stat., vol. 4, no. 3, pp. 207–230, 1979.

19. R. Guyer and N. A. Thompson, User's Manual for Xcalibre Item Response Theory Calibration Software, Version 4.2. Woodbury, MN: Assessment Systems Corporation, 2014.

20. D. M. Dimitrov, “An Approach to Scoring and Equating Tests with Binary Items: Piloting with Large-Scale Assessments,” Educ. Psychol. Meas., vol. 76, no. 6, pp. 954–975, 2016.

21. A. Field, Discovering Statistics Using SPSS (and Sex, Drugs and Rock 'n' Roll), vol. 497. Sage, 2000.

22. S. Varma, “Preliminary item statistics using point-biserial correlation and p-values,” Educ. Data Syst., vol. 16, no. 7, pp. 1–7, 2006.

23. C. Stage, Classical test theory or item response theory: The Swedish experience, vol. 42. Univ., 2003.

24. S. Pido, “Comparison of item analysis results obtained using item response theory and classical test theory approaches,” J. Educ. Assess. Africa, vol. 7, pp. 192–207, 2012.

25. M. Erguven, “Two approaches in psychometric process: Classical test theory & item response theory,” J. Educ., vol. 2, no. 2, pp. 23–30, 2013.

26. O. O. Adedoyin, “Investigating the Invariance of Person Parameter Estimates Based on Classical Test and Item Response Theories,” Int. J. Educ. Sci., vol. 2, no. 2, pp. 107–113, 2017.



This material may be protected under Copyright Act which governs the making of photocopies or reproductions of copyrighted materials.
You may use the digitized material for private study, scholarship, or research.

