Using GPT-4 to Provide Tiered, Formative Code Feedback
Large language models (LLMs) have shown promise in generating sensible code explanations and feedback for programming exercises. In this experience report, we discuss the process of using one of these models (OpenAI's GPT-4) to generate individualized feedback for students' Java code and pseudocode. We instructed GPT-4 to generate feedback for 113 submissions to four programming problems in an Algorithms and Data Structures class. We prompted the model with example feedback (few-shot learning) and instructions to (1) give feedback on conceptual understanding, syntax, and time complexity, and (2) suggest follow-up actions that build on students' code or provide guiding questions. Overall, GPT-4 provided accurate feedback and successfully built on students' ideas in most submissions. Human evaluators (computer science instructors and tutors) rated GPT-4's hints as useful in guiding students' next steps. Model performance varied across programming problems but not with submission quality. We reflect on where the model performed well and where it fell short, and discuss the potential of integrating LLM-generated, individualized feedback into computer science instruction.
Sat 23 Mar (Displayed time zone: Pacific Time, US & Canada)
Session: 10:45 - 12:00
10:45 (25m, Talk) - A Self-Regulated Learning Framework using Generative AI and its Application in CS Educational Intervention Design (Papers)
11:10 (25m, Talk) - Improvement in Program Repair Methods using Refactoring with GPT Models (Papers)
Ryosuke Ishizue (NTT DATA Group Corporation / Waseda University), Kazunori Sakamoto (WillBooster Inc. / Tokyo Online University / Waseda University), Hironori Washizaki (Waseda University), Yoshiaki Fukazawa (Waseda University)
11:35 (25m, Talk) - Using GPT-4 to Provide Tiered, Formative Code Feedback (Papers)