Analyzing-Evaluating-Creating: Assessing Computational Thinking and Problem Solving in Visual Programming Domains
Computational thinking (CT) and problem-solving skills are increasingly integrated into K-8 school curricula worldwide. Consequently, there is a growing need to develop reliable assessments for measuring students' proficiency in these skills. Recent works have proposed tests for assessing these skills across various CT concepts and practices, notably tests based on multiple-choice items, which enable psychometric validation and use in large-scale studies. Despite their practical relevance, these tests are limited in how they measure students' computational creativity, a crucial ability when applying CT and problem solving in real-world settings. In our work, we have developed ACE, a novel test focusing on the three higher cognitive levels in Bloom's Taxonomy, i.e., Analyzing, Evaluating, and Creating. ACE comprises a diverse set of 7×3 multiple-choice items spanning these three levels, grounded in elementary block-based visual programming. We evaluate the psychometric properties of ACE through a study conducted with 371 students in grades 3–7 from 10 schools. Based on several psychometric analysis frameworks, our results confirm the reliability and validity of ACE. Moreover, our study shows that students' performance on ACE positively correlates with their performance in solving programming tasks on the Hour of Code: Maze Challenge by Code.org. The design and validation of ACE provide a basis for further development of tests incorporating aspects of computational creativity.