The EU AI Act calls for risk assessments to be conducted for AI systems in specific high-risk applications and, separately, for systemic risks introduced by general-purpose AI (GPAI) systems, i.e. systems corresponding to the largest language models and their use as generative foundation models. The AI Act does not, however, provide guidance on how risk assessments of GPAIs, e.g. [1], could be used to support the risk assessments needed for downstream AI applications that build on those models, e.g. in fields such as medical devices [2], despite GPAI becoming a dominant form of foundation model for such applications.
This project will address this gap by building on existing work on semantic models for AI risk [3][4] to explore how LLM risk assessments, especially those following the latest Code of Practice [5] and the state of the art [6], could be imported, used and expanded to address risks in a specific regulated domain. A rough sketch of the kind of semantic linking envisaged is given below.
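As a rough illustration only, the sketch below uses rdflib to represent a systemic risk reported in an upstream GPAI risk assessment and to relate it to a risk entry for a downstream medical-device application. The namespace and all class and property names (ex:SystemicRisk, ex:derivedFromRisk, etc.) are hypothetical placeholders for this example, not terms taken from the ontologies in [3][4] or from the Code of Practice [5].

```python
# Minimal sketch of linking a GPAI systemic-risk record to a downstream
# application risk. All terms in the `ex:` namespace are hypothetical
# placeholders, not actual ontology or Code of Practice vocabulary.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/ai-risk#")

g = Graph()
g.bind("ex", EX)

# A systemic risk taken from an upstream GPAI model's risk assessment.
gpai_risk = EX.GPAI_Hallucination
g.add((gpai_risk, RDF.type, EX.SystemicRisk))
g.add((gpai_risk, RDFS.label,
       Literal("Model generates plausible but false statements")))
g.add((gpai_risk, EX.assessedIn, EX.GPAIProviderRiskAssessment))

# A downstream risk in a regulated medical-device context that reuses
# and specialises the upstream risk entry.
device_risk = EX.IncorrectTriageAdvice
g.add((device_risk, RDF.type, EX.ApplicationRisk))
g.add((device_risk, RDFS.label,
       Literal("Chat-based triage tool gives incorrect clinical advice")))
g.add((device_risk, EX.derivedFromRisk, gpai_risk))
g.add((device_risk, EX.appliesToDomain, EX.MedicalDevices))

print(g.serialize(format="turtle"))
```

The point of such a structure would be that a downstream assessor can trace an application-level risk back to the upstream GPAI assessment it derives from, and extend it with domain-specific information, rather than re-assessing the model from scratch.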
[2] https://www.jmir.org/2023/1/e43682
[3] https://dl.acm.org/doi/10.1145/3593013.3594050
[5] https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai