Managing LLM Adaptation for University Use – TAKEN

The massive step up in how Large Language Models (LLMs) can generate convincing and often accurate content from simple prompts has come as an unanticipated challenge to teaching and learning in universities. While the reaction to date has focussed on how to address the risks of LLMs being used for plagiarism, less attention has been paid to the impact of LLMs more generally on the teaching of knowledge and skills. In particular, it is not clear where the ceiling lies on the level of skills and knowledge that LLMs can master. However, we do know that systemising knowledge, and developing systems for assessing the learning of that knowledge, is the primary way in which LLMs can learn effectively. Uncontrolled deployment of LLMs into areas that require human responsibility, oversight and ethical judgement (e.g. medicine, law, engineering, the physical and social sciences) may endanger the economic viability of these professions and how they are currently taught at third level. Asserting responsible control over how LLMs are used is therefore key to a smooth transition to a world where higher-level human skills can be supported by LLMs.

This project will therefore explore how universities could address this challenge by offering and managing their own LLM, for use within the university and afterwards by alumni in their professions. A controlled LLM would enable universities to explore the use of LLMs in teaching and learning, while providing a baseline against which unacceptable use in assessment can be measured. This approach would also allow those gaining and advancing skills and knowledge at third level, and professionally, to retain some control over how LLMs deploy that knowledge. Key to this would be the responsible monitoring, control and governance of how LLMs are adapted through prompt engineering, reinforcement learning from human feedback (RLHF) and fine-tuning.
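To make the governance idea concrete, the sketch below shows one way a university-managed LLM service might combine two of the mechanisms named above: prompt engineering (prepending an institutional policy preamble to every request) and monitoring (keeping an audit trail that assessment offices could later consult). All names here (`ManagedLLMGateway`, `POLICY_PREAMBLE`, the log fields) are illustrative assumptions, not part of the project as described; the actual call to a model backend is deliberately omitted.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy preamble a university-managed LLM could prepend
# to every request: prompt engineering used as a governance control.
POLICY_PREAMBLE = (
    "You are the university's managed assistant. "
    "Do not produce work intended for submission in assessment; "
    "instead, explain concepts and point to sources."
)

@dataclass
class ManagedLLMGateway:
    """Sketch of a gateway that wraps user prompts with institutional
    policy and records an audit trail for later oversight."""
    audit_log: list = field(default_factory=list)

    def build_prompt(self, user_id: str, query: str) -> str:
        # Record who asked what and when, so that use of the managed
        # LLM can serve as a baseline when assessing submitted work.
        self.audit_log.append({
            "user": user_id,
            "query": query,
            "time": datetime.now(timezone.utc).isoformat(),
        })
        # The policy preamble is prepended to every query before it
        # would be sent to the underlying model (omitted here).
        return f"{POLICY_PREAMBLE}\n\nStudent query: {query}"

gateway = ManagedLLMGateway()
prompt = gateway.build_prompt("student42", "Explain the tests for negligence.")
```

In a real deployment the gateway would sit in front of whichever model the university hosts, and the audit log would feed the monitoring and governance processes the project proposes to explore; this fragment only illustrates the shape of such a control layer.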