[AVAILABLE] Off the rails? Exploring the effectiveness of generative AI “guardrails” for novice programmers

There is an argument that generative AI tools (e.g., GitHub Copilot) are too helpful for novice programmers and may contribute to poor learning outcomes. Researchers and educators have been exploring "guardrails" that constrain the output of these tools, for example by offering guidance or analogous examples rather than complete code; CS50's AI tool (https://cs50.ai/) is a good example of this approach. This project will explore how these guardrails are implemented and evaluate how accurate they are. Do they ever come "off the rails", and if so, in what circumstances?
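
Guardrails of this kind are often implemented as prompt-level constraints wrapped around a general-purpose model. The sketch below illustrates that idea with a hypothetical tutor-style system prompt around the OpenAI chat completions API; the model name, prompt wording, and helper function are illustrative assumptions of one possible design, not a description of how cs50.ai or any specific tool is actually built.

```python
# A minimal sketch of one common guardrail pattern: a system prompt that
# instructs the model to act as a tutor and withhold complete solutions.
# The model name, prompt wording, and helper function are illustrative
# assumptions, not the implementation used by cs50.ai or GitHub Copilot.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUARDRAIL_SYSTEM_PROMPT = (
    "You are a teaching assistant for novice programmers. "
    "Never provide complete, ready-to-paste solution code. "
    "Instead, explain the underlying concept, point to the relevant part "
    "of the student's code, and give hints or an analogous example."
)

def tutor_reply(student_question: str) -> str:
    """Return a hint-style response constrained by the guardrail prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": GUARDRAIL_SYSTEM_PROMPT},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(tutor_reply("My Python loop never terminates; can you write the fix for me?"))
```

Probing exactly this kind of wrapper is one way the evaluation could proceed: for instance, testing whether a determined student prompt can coax the model past the system instruction and obtain full solution code anyway.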