AI Impact Assessment Tool

tl;dr: you will develop a tool that takes inputs (e.g. purpose, technology) describing the use of AI within a use-case and produces an output showing the risks of that use and its potential impacts, along with suggested measures to limit the risk (e.g. use specific tests, improve accuracy of outputs, make people aware). You will extend existing prototype implementations with algorithms and rules based on literature and best practice.

Artificial Intelligence (AI) is progressing at an alarmingly rapid rate. As a technology with the potential to be applied and used everywhere, it comes with great risks, ranging from minor annoyances (e.g. a spelling error goes undetected) to major disruptions to society (e.g. humans are harmed, democratic elections are affected). One of the great challenges of our time, and what the AI Act primarily aims to address, is understanding the risks of using AI within specific use-cases. To explore the topic, we first need to understand: (1) What is meant by AI? (2) How can AI be used within a specific use-case? (3) How can using AI affect people, society, and organisations? To answer these, we researched and developed a ‘framework’ that uses 5 concepts: Domain (e.g. Education), Purpose (e.g. Identity Verification), Application or Capability (e.g. Facial Recognition), User or Operator (e.g. Lecturer), and Subject (e.g. Students). Using combinations of these, we express and identify risk categories, e.g. high-risk per AI Act Annex III, or how a specific dataset or model might lead to high-risk uses. We also identify relevant risks for specific concepts within taxonomies created from the 5 concepts (e.g. what are the risks of facial recognition? What are the specific risks when facial recognition is used for identity verification?).
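To make the framework concrete, here is a minimal sketch in Python of how a use-case described with the 5 concepts might be classified. The class, function, and single rule below are illustrative assumptions made for this description, not the prototype's actual implementation; the rule loosely corresponds to the biometrics area of AI Act Annex III, and the suggested measures are the examples given above.

```python
from dataclasses import dataclass

# Hypothetical representation of the five framework concepts for a single use-case.
# Names and structure are illustrative; the actual prototype may model these differently.
@dataclass
class UseCase:
    domain: str      # e.g. "Education"
    purpose: str     # e.g. "Identity Verification"
    capability: str  # e.g. "Facial Recognition"
    operator: str    # e.g. "Lecturer"
    subject: str     # e.g. "Students"

def classify(use_case: UseCase) -> dict:
    """Apply a simple illustrative rule over a combination of concepts.

    The single rule below sketches how one AI Act Annex III area (biometric
    identification) might be matched; the real rule set is far larger and
    derived from literature and legal analysis.
    """
    result = {"risk_level": "unclassified", "impacts": [], "measures": []}
    if use_case.capability == "Facial Recognition" and use_case.purpose == "Identity Verification":
        result["risk_level"] = "high-risk (candidate, AI Act Annex III: biometrics)"
        result["impacts"] = [f"affects {use_case.subject} (privacy, potential misidentification)"]
        result["measures"] = ["use specific tests", "improve accuracy of outputs", "make people aware"]
    return result

if __name__ == "__main__":
    uc = UseCase("Education", "Identity Verification", "Facial Recognition", "Lecturer", "Students")
    print(classify(uc))
```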

In this project, you will continue this work by developing and extending the taxonomies and the tool used for risk classification. You will further develop the interface, create more rules, and add documentation. You will also make it easy for people to express their use-cases, for example by providing options for popular uses such as using an LLM like ChatGPT to correct exam answers, and then explaining the output (high-risk, impacts students) in terms of that use-case. If you do not have a sufficient programming background, you will instead conduct legal research, i.e. investigate how the different use-cases from the AI Act and the GDPR (DPIA high-risk categories) can be expressed in terms of these concepts, and assist in the documentation process, which does not require a programming background.
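To give a sense of what "creating more rules" and the accompanying legal research involve, the sketch below expresses a couple of rules as data over the same 5 concepts. The concept values, legal references, and outcomes are paraphrased examples added for illustration, not the actual rule set of the prototypes, which may encode rules very differently.

```python
# Illustrative rule table: each entry maps a combination of framework concepts
# to a legal source and a risk outcome. Values and references are paraphrased
# examples, not the prototype's actual rules.
RULES = [
    {
        "match": {"domain": "Education", "purpose": "Evaluating Exam Answers", "capability": "LLM"},
        "source": "AI Act Annex III (education and vocational training)",
        "outcome": {"risk_level": "high-risk", "impacts": ["students"]},
    },
    {
        "match": {"purpose": "Identity Verification", "capability": "Facial Recognition"},
        "source": "GDPR Art. 35 DPIA (processing of biometric / special category data)",
        "outcome": {"risk_level": "high-risk", "impacts": ["data subjects"]},
    },
]

def apply_rules(use_case: dict, rules=RULES) -> list:
    """Return the outcome of every rule whose 'match' fields all appear in the use-case."""
    matches = []
    for rule in rules:
        if all(use_case.get(field) == value for field, value in rule["match"].items()):
            matches.append({"source": rule["source"], **rule["outcome"]})
    return matches

if __name__ == "__main__":
    # The exam-correction example from the project description.
    exam_grading = {
        "domain": "Education",
        "purpose": "Evaluating Exam Answers",
        "capability": "LLM",
        "operator": "Lecturer",
        "subject": "Students",
    }
    print(apply_rules(exam_grading))
```

Expressing rules as data in this way keeps the legal analysis (which combinations fall under which provision) separate from the matching logic, so contributions from legal research can be added without programming.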

For references, please see:

– prototype for risk rules: https://harshp.com/ai_act_risk/

– prototype for user input: https://harshp.com/ai-incident/

– taxonomy: Data Privacy Vocabulary (DPV) with its RISK and AI extensions