Static analysis tools are frequently used to scan source code and detect deviations from a project's coding guidelines. Yet their adoption is hindered by their high false-positive rates, which make them ill-suited for students and novice developers. Meanwhile, Large Language Models (LLMs), such as ChatGPT, have gained widespread popularity across software engineering tasks, including testing, code review, and program comprehension. Such models represent an opportunity to reduce the ambiguity of static analysis output and support the adoption of these tools. However, the effectiveness of using static analysis (i.e., PMD) to detect coding issues while relying on LLMs (i.e., ChatGPT) to explain those issues and recommend fixes has not yet been explored. In this talk, we share our experience teaching the use of ChatGPT to cultivate a bugfix culture and leveraging LLMs to improve software quality in educational settings. We report our findings to help educators teach students better code review strategies, to increase students' awareness of LLMs, and to promote software quality in education.
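To make the workflow concrete, below is a minimal sketch (not taken from the talk itself) of the PMD-plus-ChatGPT loop described above. It assumes PMD 7's `pmd check` CLI is on the PATH, the `openai` Python package is installed with OPENAI_API_KEY set, and that the source directory, ruleset, model name, and JSON report field names are all illustrative choices rather than the authors' setup.

```python
"""Sketch: PMD detects coding issues; ChatGPT explains and suggests fixes.

Assumptions (not from the talk): PMD 7 CLI on PATH, `openai` package
installed, OPENAI_API_KEY in the environment. Paths, ruleset, model
name, and JSON field names are illustrative.
"""
import json
import subprocess
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Run PMD over a student's source tree. PMD exits nonzero when it
# finds violations, so a nonzero return code is not treated as an error.
result = subprocess.run(
    ["pmd", "check", "-d", "src/",
     "-R", "rulesets/java/quickstart.xml", "-f", "json"],
    capture_output=True, text=True,
)
report = json.loads(result.stdout)

# Field names below follow PMD's JSON report layout (assumed here:
# a "files" array, each with a "violations" array).
for file_entry in report.get("files", []):
    for violation in file_entry.get("violations", []):
        prompt = (
            f"PMD flagged rule '{violation['rule']}' in "
            f"{file_entry['filename']} at line {violation['beginline']}: "
            f"{violation['description']}\n"
            "Explain this issue to a novice programmer and suggest a fix."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)
```

In a classroom setting, the printed explanations could be attached to each PMD finding so students see a plain-language rationale and a candidate fix next to the raw warning.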
Yunfei Hou (California State University, San Bernardino); Miranda McIntyre (California State University, San Bernardino); Jesus Herrera (California State University, San Bernardino); Joyce Fu (University of California, Riverside); Hani Aldirawi (California State University, San Bernardino)