[Taken] Design of an AI-based Dissertation Evaluation System

Currently, academics receive PDF copies of dissertations and assess them by reading them, grading individual rubrics on evaluation forms, commenting on their judgements, and agreeing a final mark with a second examiner. This project will investigate the development of a system that analyses submitted PDF copies with the help of Artificial Intelligence, e.g. Large Language Models (LLMs), derives marks for the individual rubrics, and produces justifications for those marks. Past and present dissertation evaluations are confidential and must be handled with care. One aspect of the investigation would therefore be a comparison between a bespoke LLM, developed and trained on past evaluation results while keeping all results confidential, and a readily available, publicly accessible LLM that cannot draw on past evaluation results.
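As a rough illustration of the pipeline the project would build, the following minimal Python sketch extracts text from a submitted PDF and asks a model for a mark and justification per rubric. The rubric names, the pypdf-based extraction, and the generic call_llm(prompt) helper are all assumptions for illustration, not part of the project brief; the helper stands in for whichever backend is under test.

```python
# Minimal sketch of a rubric-scoring pipeline (illustrative only).
# Assumptions not in the project brief: pypdf for text extraction,
# placeholder rubric names, and a call_llm(prompt) -> str helper
# wrapping whichever model is under test (bespoke or public).

import json
from pypdf import PdfReader

# Placeholder rubrics; the real evaluation form would define these.
RUBRICS = ["Literature review", "Methodology", "Evaluation", "Presentation"]

def extract_text(pdf_path: str) -> str:
    """Concatenate the text of every page in the dissertation PDF."""
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def score_rubric(dissertation_text: str, rubric: str, call_llm) -> dict:
    """Ask the model for a mark and a justification for one rubric."""
    prompt = (
        f"You are an examiner. Assess the following dissertation on the "
        f"rubric '{rubric}'. Reply as JSON with keys 'mark' (0-100) and "
        f"'justification'.\n\n{dissertation_text}"
    )
    # Assumes the model returns well-formed JSON; a real system would
    # validate and retry on malformed output.
    return json.loads(call_llm(prompt))

def evaluate(pdf_path: str, call_llm) -> dict:
    """Produce a mark and justification for every rubric."""
    text = extract_text(pdf_path)
    return {rubric: score_rubric(text, rubric, call_llm) for rubric in RUBRICS}
```

Running evaluate twice, once with a backend wrapping the bespoke, locally hosted model and once with a backend wrapping a public LLM service, would yield directly comparable outputs for the two systems the investigation is meant to compare.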