Abstract
Until recently, most digital literacy frameworks were based on assessment frameworks used by commercial entities. The release of the DigComp framework has enabled the development of tailored implementations for evaluating digital competence. However, most of these implementations rely on self-assessment and therefore measure only lower-order cognitive skills. This paper reports on a study to develop and validate an assessment instrument that includes interactive simulations for assessing citizens’ digital competence. Such item formats are particularly important for evaluating complex cognitive constructs such as digital competence. Additionally, we selected two different approaches for designing the tests according to their scope: at the competence level or at the competence-area level. Their overall and dimensional validity and reliability were analysed, and we summarise the issues addressed in each phase together with key points to consider in new implementations. For both approaches, the items show satisfactory difficulty and discrimination indices. Validity was ensured through expert validation, and Rasch analysis revealed good EAP/PV reliabilities. The tests therefore have sound psychometric properties that make them reliable and valid instruments for measuring digital competence. This paper contributes to the growing set of tools designed to evaluate digital competence and highlights the need to measure higher-order cognitive skills.
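The abstract does not specify the scoring pipeline, but the difficulty and discrimination indices it mentions are standard classical test theory statistics. As a minimal illustrative sketch (not the authors' actual analysis), difficulty can be computed as the proportion of correct responses per item and discrimination as the corrected item-total correlation; the `responses` matrix below is hypothetical.

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = test takers, columns = items.
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
    [0, 0, 0, 1],
])

# Item difficulty (classical test theory): proportion of correct answers.
difficulty = responses.mean(axis=0)

# Item discrimination: corrected item-total correlation, i.e. the
# correlation between an item and the total score of the remaining items.
totals = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
    for j in range(responses.shape[1])
])

for j, (p, d) in enumerate(zip(difficulty, discrimination)):
    print(f"item {j + 1}: difficulty p = {p:.2f}, discrimination r = {d:.2f}")
```

The EAP/PV reliabilities reported in the study come from a Rasch (IRT) calibration, which estimates item and person parameters jointly rather than from raw proportions; the sketch above covers only the classical indices.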
Original language | English
---|---
Article number | 3392
Pages (from-to) | 3392
Number of pages | 1
Journal | Sustainability
Volume | 14
Issue number | 6
DOIs |
Publication status | Published - 14 Mar 2022
Keywords
- Digital competence
- Computer-based assessment
- Netiquette
- Information and data literacy
- Simulations
Project and Funding Information
- Funding: This research received no external funding.