Machine learning models are increasingly used to predict time-to-event outcomes — for example, how long a patient might survive after treatment or when a machine is likely to fail. Unlike a standard point prediction, these models produce a survival curve: the estimated probability that the event has not yet occurred at each point in time. Existing explanation tools such as Shapley values can tell us which features matter, but they weren't designed for outcomes that evolve across different time horizons.
In this project, you will explore how to extend Shapley values to survival analysis. The idea is to adapt them so we can see not only which features are important, but also when they matter most (e.g., short-term vs. long-term risk). This will involve experimenting with survival models such as Cox regression, DeepSurv, and DeepHit, and designing intuitive explanation methods. This project is ideal for students interested in machine learning and explainable AI.
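To make the idea concrete, here is a minimal sketch of a time-dependent Shapley computation. It uses a toy Cox-style model with hand-picked coefficients (`BETA`, an illustrative assumption, not a fitted model) and an exponential baseline hazard, and marginalises absent features by substituting the background mean — a common, crude approximation. With only three features, the exact Shapley formula can be evaluated by enumerating all coalitions; evaluating it at several horizons `t` shows how a feature's contribution to the predicted survival probability can differ between short-term and long-term predictions.

```python
import numpy as np
from itertools import combinations
from math import factorial

# Toy Cox-style model (illustrative assumption, not fitted to data):
# S(t | x) = exp(-t * exp(beta . x)), i.e. an exponential baseline hazard.
BETA = np.array([0.8, -0.5, 0.3])  # hypothetical feature effects

def survival(x, t):
    """Predicted probability of surviving past time t for feature vector x."""
    return np.exp(-t * np.exp(BETA @ x))

def shapley_at_time(x, background, t):
    """Exact Shapley values of S(t | x).

    Features outside a coalition are replaced by the background mean,
    a simple way to 'remove' them from the prediction."""
    d = len(x)
    mean_bg = background.mean(axis=0)
    phi = np.zeros(d)
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for size in range(d):
            for S in combinations(others, size):
                # Shapley kernel weight for a coalition of this size.
                w = factorial(size) * factorial(d - size - 1) / factorial(d)
                x_with = mean_bg.copy()
                x_with[list(S) + [j]] = x[list(S) + [j]]
                x_without = mean_bg.copy()
                x_without[list(S)] = x[list(S)]
                phi[j] += w * (survival(x_with, t) - survival(x_without, t))
    return phi

rng = np.random.default_rng(0)
background = rng.normal(size=(100, 3))  # synthetic background data
x = np.array([1.0, 1.0, 1.0])           # instance to explain

for t in (0.5, 2.0, 5.0):
    phi = shapley_at_time(x, background, t)
    # Efficiency: contributions sum to S(t|x) minus the baseline prediction.
    baseline = survival(background.mean(axis=0), t)
    assert np.isclose(phi.sum(), survival(x, t) - baseline)
    print(f"t={t}: phi={np.round(phi, 3)}")
```

Running this prints one attribution vector per time horizon; plotting each feature's value as a function of `t` yields exactly the kind of time-resolved explanation the project aims for. In practice, the project would replace the toy `survival` function with predictions from Cox regression, DeepSurv, or DeepHit, and the exhaustive coalition enumeration with an efficient approximation.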