Title: The CoFee Framework: Leveraging Continuous Feedback in Education to Increase C Code Quality

Authors: Max Schrötter, Bettina Schnor

Affiliation: University of Potsdam, Department of Computational Science

Abstract

The programming language C is widely used for operating systems, embedded systems, and other performance-critical applications. Since these applications are often security-critical, they require secure programming. The C language, on the other hand, allows novice programmers to write insecure code easily. This makes it especially important to teach secure programming and to give students feedback on potential security issues.

Research [1] analyzing 16 undergraduate CS programs at R1 universities in the US found that the use of unsafe functions and their security implications are a common mistake in student submissions. This vulnerability is also prominent in instructors' code and textbooks, reinforcing these mistakes in student code. A recent study [2] interviewing students from those courses found that "the real problem lies in students’ lack of more fundamental knowledge and skills, such as paying attention to compiler and OS messages and carefully reading documentation."

Automated feedback systems for student submissions are a heavily researched area. As a recent survey [3] shows, however, the majority of these systems focus on functional correctness and do not provide feedback on code quality and security issues. Some systems provide compiler messages and results of static code analysis, but they are then usually tightly coupled to that specific tool.

The talk will consist of two parts:

1. Overview and demo of the CoFee framework [4]

CoFee is a modular framework focusing on C code security and code robustness by using state-of-the-art software analyzers (Clang Static Analyzer, Sanitizers, GCC, Clang-Tidy). Further, error messages are supplemented by meaningful hints suited for novice students. CoFee also follows the theory of situated learning by exposing students to typical software engineering workflows, using GitLab for version control, continuous integration, and code quality reports. To check code quality, CoFee supports well-established open-source tools, which were tested on a purpose-built test suite. Its modular architecture allows easy integration of future analyzers.

The evaluation within an operating systems course shows that CoFee enhances the code quality of the students' submissions. The average score of the handed-in homework increased by about 13% for students using CoFee. Student feedback also indicates that the framework is well suited for novice students.

2. Introduction to the Error Handling Analyzer (EHA)

One critical class of bugs that is often overlooked is the incorrect handling of errors. Such bugs are also not covered by any currently maintained state-of-the-art tool. We have presented an Error Handling Analyzer (EHA) [5] for the CoFee framework. The EHA detects missing or incorrect error handling using the Clang Static Analyzer's symbolic execution engine. We evaluated EHA on 100 student submissions and found that error handling bugs are a common mistake and that EHA can detect more than 80% of the error handling bugs in these submissions.
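To make the target of the EHA more concrete, the following minimal C snippet (a hypothetical example, not taken from the evaluated submissions) illustrates the kind of error handling bug meant here: the return values of fopen() and malloc() are never checked, so a failure leads to a NULL pointer dereference.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* fopen() may fail (e.g., missing file) and return NULL;
           "data.txt" is just a placeholder file name. */
        FILE *f = fopen("data.txt", "r");

        /* malloc() may also fail and return NULL. */
        char *buf = malloc(128);

        /* Missing error handling: if fopen() or malloc() failed,
           the following calls dereference a NULL pointer. */
        if (fgets(buf, 128, f) != NULL)
            printf("first line: %s", buf);

        free(buf);
        fclose(f);
        return 0;
    }

An analyzer targeting error handling would flag the unchecked call sites; the fix is to test both return values against NULL before use.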
References

[1] Majed Almansoori, Jessica Lam, Elias Fang, Kieran Mulligan, Adalbert Gerald Soosai Raj, and Rahul Chatterjee. 2020. How Secure are our Computer Systems Courses? In Proceedings of the 2020 ACM Conference on International Computing Education Research (ICER '20). Association for Computing Machinery, New York, NY, USA, 271–281. https://doi.org/10.1145/3372782.3406266

[2] Majed Almansoori, Jessica Lam, Elias Fang, Adalbert Gerald Soosai Raj, and Rahul Chatterjee. 2023. Towards Finding the Missing Pieces to Teach Secure Programming Skills to Students. In Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1 (SIGCSE 2023). Association for Computing Machinery, New York, NY, USA, 973–979. https://doi.org/10.1145/3545945.3569730

[3] S. Strickroth and M. Striewe. 2022. Building a Corpus of Task-Based Grading and Feedback Systems for Learning and Teaching Programming. International Journal of Engineering Pedagogy (iJEP) 12, 5 (Nov. 2022), 26–41. https://doi.org/10.3991/ijep.v12i5.31283

[4] M. Schrötter and B. Schnor. 2022. Leveraging Continuous Feedback in Education to Increase C Code Quality. In 2022 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 1950–1956. https://doi.org/10.1109/CSCI58124.2022.00351

[5] Max Schrötter, Maximilian Falk, and Bettina Schnor. 2023. Automated Detection of Bugs in Error Handling for Teaching Secure C Programming. In Proceedings of the Sixth Workshop "Automatische Bewertung von Programmieraufgaben" (ABP 2023), Munich, Germany, October 12–13, 2023. Gesellschaft für Informatik e.V. https://doi.org/10.18420/abp2023-1