Model Interpretation and Feature Selection

Feature selection is an important problem in statistical machine learning and a common approach to dimensionality reduction that encourages model interpretability. Classical feature selection asks for a single subset of features that is most informative for the entire data set.
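As a minimal illustration of this classical, global setting (not the methods in the publications below), the sketch below keeps the k features with the highest mutual information with the label over the whole data set; the use of scikit-learn and the synthetic data are assumptions for the example.

```python
# A minimal sketch of classical (global) feature selection, assuming a labeled
# data set (X, y): keep the k features with the highest estimated mutual
# information with the label.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))            # 200 samples, 10 candidate features
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # label depends only on features 0 and 3

selector = SelectKBest(mutual_info_classif, k=2)
X_selected = selector.fit_transform(X, y)     # data set restricted to the top-2 features
print(selector.get_support(indices=True))     # indices of the selected features
```

Note that the same subset of features is returned for every instance; the selection is a property of the data set, not of any particular model or example.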

Recently, model interpretation has gained increasing attention in the research community. Modern machine learning models can be difficult to probe and understand after they have been trained. This is a major problem for the field, with consequences for trustworthiness, diagnostics, debugging, robustness, and a range of other engineering and human-interaction issues surrounding the deployment of a model. Given a predictive model, an interpretation method yields, for each instance to which the model is applied, a vector of importance scores associated with the underlying features. Like classical feature selection, such a method evaluates the importance of features, but with two main differences. First, these methods operate in an instancewise fashion, producing a separate score vector for every instance. Second, feature importance is evaluated with respect to a given model rather than the data set as a whole.
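To make the instancewise, model-dependent notion concrete, here is a simple occlusion-style sketch (again illustrative, not the methods in the publications below): each feature of a single instance is scored by how much the model's prediction changes when that feature is replaced with a baseline value. The names `model_predict`, `clf`, and `x_test` are hypothetical.

```python
import numpy as np

def instancewise_importance(model_predict, x, baseline=0.0):
    """Score each feature of one instance x by the change in the model's
    prediction when that feature is occluded (set to a baseline value)."""
    x = np.asarray(x, dtype=float)
    base_pred = model_predict(x[None, :])[0]       # prediction on the intact instance
    scores = np.empty_like(x)
    for i in range(x.size):
        perturbed = x.copy()
        perturbed[i] = baseline                    # hypothetical baseline value
        scores[i] = abs(base_pred - model_predict(perturbed[None, :])[0])
    return scores                                  # one importance score per feature

# Usage (assuming a trained scikit-learn-style classifier `clf` and test data `x_test`):
# scores = instancewise_importance(lambda X: clf.predict_proba(X)[:, 1], x_test[0])
```

Unlike the global selection above, the scores depend both on the particular instance and on the trained model being explained.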

Our research covers both classical feature selection and model interpretation. We have developed both parametric and nonparametric methods for feature selection. Our ultimate goal is to provide better interpretations of complex models such as deep neural networks, kernel methods, and tree ensembles.

Publications

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data. In ICLR (Poster), 2019.

Preprint Code Project Poster

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation. In ICML (20-min Oral), 2018.

Preprint Code Project Oral Poster

Kernel Feature Selection via Conditional Covariance Minimization. In NIPS, 2017.

PDF Code Project Poster Blog