Debugging Classifications with Counterfactual Explanations
This project investigates how post-hoc counterfactual explanations can be used to debug opaque models such as deep neural networks by revealing which feature changes most influence predictions. In applications like anomaly detection, counterfactuals help clarify why certain cases are flagged as abnormal and expose when models rely on spurious correlations or biased patterns.
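To make the idea concrete, below is a minimal sketch of one common way to generate such counterfactuals, in the spirit of gradient-based methods like Wachter et al.: perturb a copy of the input toward a different predicted class while penalizing its distance from the original, so the resulting feature deltas indicate which changes most influence the prediction. The PyTorch model, the `find_counterfactual` helper, and all parameter values here are illustrative assumptions, not the project's actual implementation.

```python
import torch
import torch.nn as nn

def find_counterfactual(model, x, target_class, lam=0.1, lr=0.05, steps=500):
    """Search for a counterfactual x' that the model assigns to `target_class`
    while staying close to the original input x (L1 distance penalty).
    Hypothetical helper for illustration only."""
    x_cf = x.clone().detach().requires_grad_(True)
    optimizer = torch.optim.Adam([x_cf], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x_cf.unsqueeze(0))
        # The classification loss pushes the counterfactual across the decision
        # boundary; the L1 term keeps the feature changes small and sparse.
        loss = nn.functional.cross_entropy(logits, target) + lam * (x_cf - x).abs().sum()
        loss.backward()
        optimizer.step()
    return x_cf.detach()

# Toy usage: a small classifier and one flagged instance (assumed setup).
model = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(5)                                  # instance flagged as anomalous
x_cf = find_counterfactual(model, x, target_class=0)
print("feature deltas:", x_cf - x)                  # largest deltas show which features drove the flag
```

Inspecting which features had to change, and by how much, is what surfaces spurious or biased dependencies: if flipping the prediction requires changing a feature that should be irrelevant, the model is likely relying on it.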