Personality Measurement of Students Using Item Response Theory Models: Response Stability from Nigerian Institutions
DOI:
https://doi.org/10.6017/ijahe.v10i3.17967

Keywords:
response stability, personality traits, personality measurement, Item Response Theory

Abstract
Item Response Theory (IRT) is utilised to detect bias in assessment tools and to address issues such as faked or manipulated responses, enhancing the reliability and stability of conclusions in personality assessment. This article examines the item parameter estimates of a personality scale and the effectiveness of the one-, two-, and three-parameter logistic models in analysing response stability across repeated administrations. Three hundred undergraduate students from three tertiary institutions in Nigeria were sampled using a multi-stage sampling procedure. Data were collected with an adapted version of the Big Five Inventory (BFI), which had a reliability coefficient of 0.85. The results showed that the item parameter estimates (mean thresholds) were within the recommended benchmarks. A comparison of the three IRT models based on the log-likelihood (lnL), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC) values revealed that the two-parameter logistic model provided the best fit to the personality data from the repeated administrations. It is recommended that, rather than relying solely on a single statistical decision rule, IRT model fit and model comparison be applied to gain insight into the functioning of items and tests.
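As a brief illustration of the model-comparison step described in the abstract, the Python sketch below defines the three-parameter logistic item response function (of which the 2PL and 1PL are special cases) and computes AIC and BIC from a fitted model's log-likelihood and parameter count; the model with the lowest values is preferred. The log-likelihoods, item count, and parameter counts shown are hypothetical placeholders for illustration only, not the values reported in the article.

```python
import math

def irf_3pl(theta, a, b, c):
    """Three-parameter logistic item response function: P(endorse | theta).
    a = discrimination, b = difficulty/threshold, c = pseudo-guessing.
    The 2PL is the special case c = 0; the 1PL additionally fixes a to a common value."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def aic(log_lik, k):
    """Akaike Information Criterion: 2k - 2 lnL (lower is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian Information Criterion: k ln(n) - 2 lnL (lower is better)."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical fit summaries for a 44-item scale (the standard BFI length; the
# adapted scale may differ) and n = 300 respondents, as in the study's sample.
# The log-likelihood values are illustrative only.
n_items, n_persons = 44, 300
fits = {
    "1PL": {"log_lik": -7450.0, "k": n_items + 1},   # one threshold per item + common slope
    "2PL": {"log_lik": -7300.0, "k": 2 * n_items},   # slope and threshold per item
    "3PL": {"log_lik": -7295.0, "k": 3 * n_items},   # slope, threshold, lower asymptote per item
}

for name, fit in fits.items():
    print(name,
          "AIC =", round(aic(fit["log_lik"], fit["k"]), 1),
          "BIC =", round(bic(fit["log_lik"], fit["k"], n_persons), 1))
```

With placeholder values like these, the 2PL would typically be favoured by BIC even when the 3PL attains a slightly higher log-likelihood, because BIC penalises the extra lower-asymptote parameters more heavily.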
License
Copyright (c) 2024 Olawale Ayoola Ogunsanmi, Temitope Babatimehin, Yejide Adepeju Ibikunle
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.