This project focuses on the rich field of algorithmic fairness, where the goal is to ensure that predictions are not biased against subgroups of the population whilst maximising predictive performance. This problem is especially challenging for opaque deep neural networks deployed in high-stakes domains. Open-source toolkits such as OxonFair and Microsoft's Fairlearn can help practitioners deploy fairer models. However, many of these solutions consider only a single protected attribute (such as age), rather than intersectional fairness, where multiple protected attributes are considered jointly. They are also often poor at communicating the uncertainty and unreliability of their predictions. Several rich open-source datasets in high-stakes domains, such as healthcare, are available for evaluating these methods.
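To illustrate why auditing one protected attribute at a time can hide disparities, here is a minimal sketch (in plain Python, with toy data; the attribute names, records, and helper function are illustrative and not taken from any particular toolkit or dataset) that computes positive-prediction rates both marginally and for every intersectional subgroup:

```python
def selection_rates(records, attrs):
    """Positive-prediction rate for every combination of the given attributes."""
    groups = {}
    for r in records:
        key = tuple(r[a] for a in attrs)
        total, positive = groups.get(key, (0, 0))
        groups[key] = (total + 1, positive + r["pred"])
    return {k: pos / tot for k, (tot, pos) in groups.items()}

# Toy data: two protected attributes and a binary model prediction per record.
records = [
    {"sex": "F", "age_group": "young", "pred": 1},
    {"sex": "F", "age_group": "young", "pred": 0},
    {"sex": "F", "age_group": "old",   "pred": 1},
    {"sex": "M", "age_group": "young", "pred": 1},
    {"sex": "M", "age_group": "old",   "pred": 0},
    {"sex": "M", "age_group": "old",   "pred": 0},
]

# Marginal audit: one attribute at a time.
print(selection_rates(records, ["sex"]))

# Intersectional audit: every joint subgroup, revealing gaps the
# marginal view can average away.
rates = selection_rates(records, ["sex", "age_group"])
print(rates)

# A simple demographic-parity gap across intersectional subgroups.
print(max(rates.values()) - min(rates.values()))
```

In this toy example the marginal rates for "sex" differ only moderately, while the intersectional view exposes subgroups whose selection rates range from 0 to 1, which is exactly the kind of disparity a single-attribute audit can miss.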
This project would suit someone with excellent software-engineering skills and an avid interest in machine learning. Ideally, the student will have an active GitHub profile with a history of commits across different projects. During the project there is an opportunity to contribute to open-source fairness toolkits (OxonFair) and to collaborate with research partners on applied challenges.
Related reading (not essential, but it will give you a flavour of the topic and its problems):