Model Explainability with SHAP Values
Explainable AI, Vector Institute of Technology, 2024
In this project we addressed the poor explainability of model behaviour, especially under distribution shifts. Specifically, in healthcare a predictive model can show drastic degradation in performance when applied to patients from different demographic groups, with no explicit difference in the recorded information for those demographics. To address this, we used SHAP (Shapley) values to measure feature attributions for models trained on different patient demographics. By comparing SHAP value trends between these distributions, we can infer why the model learns different patterns across the two groups.
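To illustrate the attribution idea, here is a minimal sketch of an exact Shapley value computation over feature coalitions. The model, input, and baseline below are hypothetical stand-ins (in practice the `shap` library's explainers would be used on the trained models); this is only meant to show what a per-feature Shapley attribution is.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at instance x.
    Features outside a coalition are replaced by the baseline value.
    Exponential in the number of features, so only for small toy cases."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Shapley weight for coalitions of this size
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                present = set(S)
                with_i = [x[j] if j in present or j == i else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in present else baseline[j]
                             for j in range(n)]
                # Marginal contribution of feature i to this coalition
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear model: attributions should recover w_j * (x_j - b_j)
w = [2.0, -1.0, 0.5]
model = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x = [1.0, 2.0, 3.0]
b = [0.0, 0.0, 0.0]
print(shapley_values(model, x, b))  # → [2.0, -2.0, 1.5]
```

Comparing such per-feature attributions between models fit on different demographic subsets is what surfaces the divergent trends described above.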
This project was a component of the Machine Learning Internship at the Data Science Institute.