TY - GEN
T1 - Validating item response processes in digital competence assessment through eye-tracking techniques
AU - Bartolomé, Juan
AU - Garaizar, Pablo
AU - Bastida, Leire
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/10/21
Y1 - 2020/10/21
N2 - This paper reports on an exploratory study aimed at validating item response processes in digital competence assessment through eye-tracking techniques. When measuring complex cognitive constructs, it is crucial to correctly design the evaluation items to trigger the intended knowledge and skills. Furthermore, assessing the validity of a test requires considering not only the content of the evaluation tasks involved in the test, but also whether examinees respond to the tasks by engaging construct-relevant response processes. The eye-tracking observations helped to fill an ‘explanatory gap’ by providing data on variation in item response processes that are not captured by other sources of process data, such as think-aloud protocols or computer-generated log files. We proposed a set of metrics that could help test designers to validate the different item formats used in the evaluation of digital competence. The gaze data provided detailed information on test item response strategies, enabling profiling of examinee engagement and of the response processes associated with successful performance. There were notable differences between the participants who correctly solved the tasks and those who failed, both in the time spent solving them and in their gaze data. Moreover, these data provided insights into response processes that contributed to the validation of the assessment criteria of each item.
AB - This paper reports on an exploratory study aimed at validating item response processes in digital competence assessment through eye-tracking techniques. When measuring complex cognitive constructs, it is crucial to correctly design the evaluation items to trigger the intended knowledge and skills. Furthermore, assessing the validity of a test requires considering not only the content of the evaluation tasks involved in the test, but also whether examinees respond to the tasks by engaging construct-relevant response processes. The eye-tracking observations helped to fill an ‘explanatory gap’ by providing data on variation in item response processes that are not captured by other sources of process data, such as think-aloud protocols or computer-generated log files. We proposed a set of metrics that could help test designers to validate the different item formats used in the evaluation of digital competence. The gaze data provided detailed information on test item response strategies, enabling profiling of examinee engagement and of the response processes associated with successful performance. There were notable differences between the participants who correctly solved the tasks and those who failed, both in the time spent solving them and in their gaze data. Moreover, these data provided insights into response processes that contributed to the validation of the assessment criteria of each item.
KW - Eye-tracking techniques
KW - Digital competence
KW - computer-based assessment
KW - eye-tracking
KW - response process validity
UR - http://www.scopus.com/inward/record.url?scp=85100532173&partnerID=8YFLogxK
U2 - 10.1145/3434780.3436641
DO - 10.1145/3434780.3436641
M3 - Conference contribution
SN - 978-1-4503-8850-4
T3 - ACM International Conference Proceeding Series
SP - 738
EP - 746
BT - 8th International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM 2020
A2 - Garcia-Penalvo, Francisco Jose
PB - ACM
T2 - 8th International Conference on Technological Ecosystems for Enhancing Multiculturality, TEEM 2020
Y2 - 21 October 2020 through 23 October 2020
ER -