In programming courses, test-based automated feedback systems often face a limitation: instructors cannot effectively enhance feedback during an ongoing assignment. While a passing test case may adequately indicate that certain rubric criteria have been met by a student’s program, a failure can arise for many reasons. When a test case fails for a reason not previously coded for, the generic message can create confusion, which tends to divert a student’s focus from genuine learning towards guessing the test. A more effective approach would be for the system to notify instructors when such a test fails, rather than automatically returning a ``test case failed'' message. Instructors could then provide initial ``human-in-the-loop'' feedback and subsequently code this feedback into the test suite. This method enables feedback to be improved while the assignment is still underway. In this poster, we propose an approach to achieve this objective and introduce a supporting tool.
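As a rough sketch of this workflow (not the proposed tool itself), the fragment below shows how a grading script might look up instructor-coded feedback for a known failure signature and, when no match exists, notify the instructor instead of emitting a generic message. The names run_test, known_feedback, and notify_instructor are hypothetical and used only for illustration.

\begin{verbatim}
def run_test(test_fn, known_feedback, notify_instructor):
    """Run one test case and return feedback for the student.

    known_feedback maps a failure signature to instructor-written
    feedback; unrecognised failures are escalated to the instructor.
    """
    try:
        test_fn()
        return "Test passed."
    except Exception as exc:
        signature = f"{type(exc).__name__}: {exc}"
        coded = known_feedback.get(signature)
        if coded is not None:
            return coded              # feedback previously coded into the suite
        notify_instructor(signature)  # escalate instead of a generic message
        return ("This failure is being reviewed; "
                "specific feedback will follow.")

# Example: this failure signature is already known, so the student
# receives the instructor-coded feedback immediately.
def sample_test():
    assert sum([1, 2, 3]) == 7, "wrong sum"

feedback = {"AssertionError: wrong sum":
            "Check how your loop accumulates the total."}
print(run_test(sample_test, feedback, notify_instructor=print))
\end{verbatim}

Once the instructor writes feedback for a newly observed failure, adding it to the mapping means later submissions that fail the same way receive that feedback automatically.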