Facial Recognition Technologies (FRT) utilise AI to detect and categorise people, often in real-world situations where ‘errors’ can have severe impacts, including imprisonment. For example, a London Metropolitan Police facial recognition van misidentified Shaun Thompson, a 38-year-old Black man, who was then wrongfully accused and detained at London Bridge station. While many such cases have been reported, there is currently no single, systematic resource that compiles and analyses these incidents – including what kind of FRT was being used, where, and how it resulted in harm. Further, cases are often reported with no follow-up on what happened after the incident.
In this project, you will set up a public-facing tool to collect information on the use and harms of FRT. Reported incidents will be maintained in a systematic database that supports analysis across technologies, actors, and roles, surfacing collective and emerging issues – e.g. when a particular AI model or technique is implicated in multiple incidents, or when the affected people share a specific race or characteristic, which would make the harms discriminatory. In addition, you will set up a web crawler that looks for follow-up news articles on reported incidents and identifies whether these articles report the incident being resolved or specific mitigations being applied. Minimal sketches of both components are given below.
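To make the kind of database structure described above concrete, here is a minimal sketch of an incident record and an aggregate query, assuming a relational store such as SQLite. All field names (e.g. frt_vendor, harm_type), the schema layout, and the example values are illustrative assumptions, not a fixed design for the project.

```python
# Sketch of a possible incident schema and an aggregate query.
# Field names and values are illustrative placeholders.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS incidents (
    id              INTEGER PRIMARY KEY,
    date_reported   TEXT NOT NULL,       -- ISO-8601 date of first report
    location        TEXT,                -- e.g. 'London Bridge station, UK'
    frt_vendor      TEXT,                -- company or product behind the system
    frt_model       TEXT,                -- specific AI model/technique, if known
    deploying_actor TEXT,                -- e.g. 'London Metropolitan Police'
    deployment_mode TEXT,                -- e.g. 'live/real-time', 'retrospective'
    harm_type       TEXT,                -- e.g. 'wrongful detention'
    affected_group  TEXT,                -- demographic characteristics, if reported
    source_url      TEXT,                -- link to the original news report
    status          TEXT DEFAULT 'open'  -- 'open', 'resolved', 'mitigated'
);
"""

def add_incident(conn: sqlite3.Connection, **fields) -> int:
    """Insert one incident row and return its id (sketch only, no validation)."""
    cols = ", ".join(fields)
    marks = ", ".join("?" for _ in fields)
    cur = conn.execute(
        f"INSERT INTO incidents ({cols}) VALUES ({marks})", tuple(fields.values())
    )
    conn.commit()
    return cur.lastrowid

if __name__ == "__main__":
    conn = sqlite3.connect("frt_incidents.db")
    conn.executescript(SCHEMA)
    add_incident(
        conn,
        date_reported="2024-01-01",  # placeholder date, not the actual report date
        location="London Bridge station, UK",
        deploying_actor="London Metropolitan Police",
        deployment_mode="live/real-time van deployment",
        harm_type="wrongful accusation and detention",
        affected_group="Black man, 38",
        source_url="https://example.org/report",  # placeholder URL
    )
    # Example aggregate query: surface vendors implicated in multiple incidents,
    # the kind of 'collective and emerging issues' analysis described above.
    for vendor, n in conn.execute(
        "SELECT frt_vendor, COUNT(*) FROM incidents "
        "WHERE frt_vendor IS NOT NULL GROUP BY frt_vendor HAVING COUNT(*) > 1"
    ):
        print(vendor, n)
```

A relational schema like this would let the same query pattern group incidents by model, deploying actor, or affected group, which is what makes discriminatory patterns visible across individually reported cases.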
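The follow-up crawler could take many forms; the sketch below assumes each incident carries a source URL to check, and uses a naive keyword heuristic as a stand-in for whatever classification approach the project ultimately adopts (e.g. an ML classifier). The function names, cue lists, and URL are all hypothetical.

```python
# Sketch of the follow-up crawler: fetch a page and crudely classify
# whether it reports resolution or mitigation. Names are illustrative.
import requests

RESOLUTION_CUES = ("charges dropped", "apologised", "apologized",
                   "settlement", "suspended the use", "policy change")
MITIGATION_CUES = ("audit", "oversight", "safeguards", "review of")

def fetch_text(url: str, timeout: float = 10.0) -> str:
    """Download a page and return its lowercased text (no HTML parsing here)."""
    resp = requests.get(url, timeout=timeout,
                        headers={"User-Agent": "frt-incident-crawler/0.1"})
    resp.raise_for_status()
    return resp.text.lower()

def classify_follow_up(text: str) -> str:
    """Keyword heuristic: does the article suggest resolution or mitigation?"""
    if any(cue in text for cue in RESOLUTION_CUES):
        return "resolved"
    if any(cue in text for cue in MITIGATION_CUES):
        return "mitigated"
    return "unresolved"

if __name__ == "__main__":
    url = "https://example.org/follow-up-article"  # placeholder URL
    try:
        print(classify_follow_up(fetch_text(url)))
    except requests.RequestException as exc:
        print(f"fetch failed: {exc}")
```

The classification result would feed back into the incidents database (e.g. updating the status field sketched earlier), closing the loop between initial reports and their outcomes.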
You will be mentored and supported in this by Dr. Abeba Birhane, founder of the AI Accountability Lab (AIAL) at TCD, where this work will form part of the lab’s high-impact research projects.