Workshop 102: Using Large Language Models for Teaching Computing
In the past year, large language models (LLMs) have taken the world by storm, and computing education is no exception. Research in computing education has found that LLMs can solve most assessments in introductory programming courses, including both traditional code-writing tasks and other formats such as Parsons problems. Since students are already using LLMs, instructors may ask themselves, “What can I do?” We propose that the way forward is to integrate LLMs into teaching practice, giving all students an equal opportunity to learn productive interaction with LLMs and to understand their limitations. In this workshop, we first present state-of-the-art research results on how to utilize LLMs in computing education practice, after which participants take part in hands-on activities using LLMs. We end the workshop by brainstorming with participants how LLMs can best be integrated into computing education.
Wed 20 Mar (Pacific Time, US & Canada)
19:00 - 22:00 (3h) Talk | Workshop 102: Using Large Language Models for Teaching Computing (Workshops)
Juho Leinonen (Aalto University), Stephen MacNeil (Temple University), Paul Denny (The University of Auckland), Arto Hellas (Aalto University)