Intersectional Fairness in Machine Learning
This project focuses on algorithmic fairness, where the goal is to ensure that a model's predictions are not biased against subgroups of the population whilst maximising predictive performance. A key challenge arises when multiple protected attributes (e.g. sex and age group) are considered jointly: a model can appear fair with respect to each attribute on its own yet still disadvantage specific intersectional subgroups.
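As a minimal sketch of the intersectional setting, the snippet below computes the selection rate (mean positive prediction) for every subgroup formed by crossing two protected attributes, and the largest gap between subgroups. The function name, the toy data, and the choice of demographic parity as the metric are illustrative assumptions, not part of this project's codebase.

```python
import numpy as np

def intersectional_selection_rates(y_pred, attrs):
    """Selection rate (mean positive prediction) for each intersectional
    subgroup defined by crossing the given protected attributes."""
    attrs = np.column_stack(attrs)
    rates = {}
    for group in {tuple(row) for row in attrs}:
        mask = np.all(attrs == group, axis=1)
        rates[group] = float(y_pred[mask].mean())
    return rates

# Toy data: binary predictions plus two binary protected attributes
# (hypothetical sex and age-group labels; values are illustrative only).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sex    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
age    = np.array([0, 0, 1, 1, 0, 0, 1, 1])

rates = intersectional_selection_rates(y_pred, [sex, age])
# Worst-case demographic-parity gap across the four intersections.
gap = max(rates.values()) - min(rates.values())
```

Note that each attribute here has a balanced 0.5 selection rate on its own, yet the intersections range from 0.0 to 1.0, which is exactly the failure mode intersectional fairness targets.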