Towards Attention-Based Automatic Misconception Identification in Introductory Programming Courses
Identifying misconceptions in student programming solutions is an important step in evaluating students' comprehension of fundamental programming concepts. While misconceptions are latent constructs that are hard to evaluate directly from student programs, logical errors can signal their presence in students' understanding, and tracing repeated occurrences of related logical bugs across different problems can provide strong evidence of a misconception. This study presents preliminary results of using a state-of-the-art Abstract Syntax Tree (AST)-based embedding neural network with an attention layer to identify logical mistakes in students' code. In this poster, we present a proof of concept that surfaces errors in student programs by classifying programs as correct or incorrect. Our preliminary results show that the framework can identify misconceptions automatically, without designing and applying a detailed rubric. This approach can improve the quality of instruction in introductory programming courses by giving educators a tool that offers personalized feedback while enabling accurate modeling of students' misconceptions.
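The abstract does not specify the model architecture beyond an AST-based embedding network with an attention layer. The following is a minimal, hypothetical sketch of that pipeline: Python's `ast` module extracts node types, stand-in random embeddings replace trained ones, and scaled dot-product attention pools the node vectors into a single representation fed to a sigmoid classifier. All names (`node_types`, `attention_pool`, `predict_incorrect_prob`), the embedding dimension, and the untrained weights are illustrative assumptions, not the authors' implementation.

```python
import ast
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM = 16
vocab: dict[str, np.ndarray] = {}

def node_types(source: str) -> list[str]:
    # Flatten the AST of a student submission into a sequence of node-type tokens.
    return [type(n).__name__ for n in ast.walk(ast.parse(source))]

def embed(tok: str) -> np.ndarray:
    # Random embeddings stand in for embeddings a real system would learn jointly.
    if tok not in vocab:
        vocab[tok] = rng.normal(size=EMBED_DIM)
    return vocab[tok]

def attention_pool(vectors: np.ndarray, query: np.ndarray) -> np.ndarray:
    # Scaled dot-product attention over node embeddings; the attention weights
    # indicate which AST nodes most influence the prediction, which is what
    # would let such a model localize a suspected logical error.
    scores = vectors @ query / np.sqrt(vectors.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ vectors

# Untrained query and classifier weights, purely for illustration.
query = rng.normal(size=EMBED_DIM)
w, b = rng.normal(size=EMBED_DIM), 0.0

def predict_incorrect_prob(source: str) -> float:
    # Embed every AST node, pool with attention, and score with a sigmoid.
    vecs = np.stack([embed(t) for t in node_types(source)])
    pooled = attention_pool(vecs, query)
    return float(1.0 / (1.0 + np.exp(-(pooled @ w + b))))

prob = predict_incorrect_prob("def f(n):\n    return n + 1\n")
print(0.0 <= prob <= 1.0)
```

With trained embeddings and weights, the attention distribution over AST nodes is the interpretable part: high-weight nodes point at the code regions driving an "incorrect" prediction, which is what allows error localization without a hand-built rubric.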