Managing LLM Adaptation for University Use – TAKEN

The massive step up in how Large Language Models can generate convincing and often accurate content from simple prompts has come as an unanticipated challenge to teaching and learning in universities. While the reaction to date has focused on addressing the risks of LLMs being used for plagiarism, less focus has been …

Smart Responsible AI agreements for LLMs – TAKEN

The emergence of generative foundational large language models has put the focus on the responsible use of these models by others. New forms of open-source licenses for AI models have emerged that aim to encourage responsible use of AI [1], including conformance to the obligations for high-risk applications that will be regulated under …

Composable Semantic AI Risk Assessments – NOT AVAILABLE

The emerging EU AI Act calls for risk assessments to be conducted for AI systems in specific high-risk applications, and also for generative foundational AI systems that could be used in such applications. The AI Act and its supporting technical standards do not, however, provide guidance on how risk assessments from foundational model vendors …

Technology Ethics Exploration Tool for AI Researchers – TAKEN

AI researchers are increasingly called upon to imagine and mitigate the harms arising from future use of their research, e.g. when publishing papers at major conferences [1][2] and in journals, or when seeking funding from bodies like the EU [3]. The more fundamental the AI research they work on, the more challenging it is to imagine and …