Zhi Chen  陈挚

About

I am a researcher interested in the intersection of machine learning, optimization, and human–model interaction. My research focuses on building interpretable machine learning models that people can easily debug, interact with, and learn from.

Currently, I work as a Quantitative Researcher at Citadel Securities, doing systematic equities alpha research. I obtained my Ph.D. in Computer Science from Duke University under the supervision of Prof. Cynthia Rudin. Prior to joining Duke, I received my B.S. in Computer Science from the Kuang Yaming Honors School, Nanjing University, in 2018.

Projects

My research focuses primarily on interpretable machine learning, especially building inherently interpretable models. This survey paper, written with my advisor and other colleagues in our lab, summarizes important technical challenge areas in building inherently interpretable machine learning models. Some of my recent research projects are listed below.

Concept-based Interpretable Neural Networks

We developed concept whitening (CW), a module that decorrelates the latent space of a neural network and aligns its axes with predefined concepts. CW provides a much clearer picture of how the network gradually learns concepts over layers, without hurting predictive performance. We are now developing methods that discover useful domain-specific concepts in an unsupervised way and explicitly represent these discovered concepts in the latent space.

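To make the idea concrete, here is a minimal PyTorch sketch of the whitening-plus-rotation step at the heart of CW. This is an illustrative simplification, not the released implementation: the orthogonal matrix Q is assumed to have been optimized separately so that its columns align with concept directions, as described in the paper.

```python
import torch

def concept_whitening_step(Z, Q):
    """Illustrative sketch of CW: whiten a batch of latent activations,
    then rotate so each axis aligns with a predefined concept.

    Z: (n, d) batch of latent activations.
    Q: (d, d) orthogonal matrix (assumed already learned so that its
       columns point toward concept directions).
    """
    Zc = Z - Z.mean(dim=0, keepdim=True)       # center the batch
    cov = Zc.T @ Zc / (Z.shape[0] - 1)         # sample covariance
    eigvals, eigvecs = torch.linalg.eigh(cov)  # Sigma = V diag(e) V^T
    # ZCA whitening matrix Sigma^{-1/2}, with clamping for stability
    W = eigvecs @ torch.diag(eigvals.clamp(min=1e-5).rsqrt()) @ eigvecs.T
    Z_white = Zc @ W                           # decorrelated, unit-variance axes
    return Z_white @ Q                         # axes now align with concepts
```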

Interpretable Machine Learning for Metamaterials Designs

We develop interpretable machine learning methods that discover key local and global features related to important dynamic material properties such as mechanical band gaps. These physically interpretable features can also transfer information about material properties across scales.

Rashomon Sets

The Rashomon set is the set of all well-performing models for a given learning problem. The term comes from the Rashomon effect, coined by Leo Breiman, which describes the observation that many different models often explain the same data equally well. We aim to construct Rashomon sets for a variety of model classes and to study their application domains.
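Concretely, given a model class, a loss function, and a tolerance theta, the Rashomon set contains every model whose loss is within a small margin of the best achievable. The toy sketch below simply filters a finite pool of candidate models; it is only an illustration under that assumption, since our actual algorithms (e.g., for sparse decision trees) enumerate the set exactly rather than filtering a pre-trained pool.

```python
import numpy as np

def rashomon_set(models, loss_fn, X, y, theta=0.05):
    """Toy illustration: keep every candidate model whose empirical loss
    is within a (1 + theta) multiplicative tolerance of the best loss."""
    losses = np.array([loss_fn(m.predict(X), y) for m in models])
    best = losses.min()
    return [m for m, loss in zip(models, losses) if loss <= (1 + theta) * best]
```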

Discover Common Flaws in Data Using Interpretable Models

Every dataset is flawed, often in surprising ways that data scientists might not anticipate. We show how interpretable machine learning methods such as EBMs can help users detect problems lurking in their data. Specifically, we provide a number of case studies in which EBMs discover common dataset flaws, including missing values, confounders, data drift, bias and fairness issues, and outliers. We also demonstrate that, in some cases, interpretable methods such as EBMs provide simple tools for correcting problems in the model when correcting the underlying data is difficult.
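A minimal sketch of this workflow, using the open-source interpret package, is below; the file name and column names are hypothetical. The idea is simply to fit an EBM and visually inspect the learned shape functions.

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
import pandas as pd

# Hypothetical tabular dataset with a binary "outcome" column.
df = pd.read_csv("patients.csv")
X, y = df.drop(columns="outcome"), df["outcome"]

ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

# Inspect each feature's shape function: sudden jumps at sentinel values
# (e.g., -1 or 999) often reveal hidden missing-value codes, and
# implausible trends can flag confounding, drift, or label leakage.
show(ebm.explain_global())
```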

Publications

(* denotes equal contribution)
Sparse and Faithful Explanations Without Sparse Models. AISTATS (2024).
- Winner of the INFORMS 2023 Data Mining Best Paper Award (general track).

Yiyang Sun*, Zhi Chen*, Vittorio Orlandi, Tong Wang, Cynthia Rudin

Exploring and Interacting with the Set of Good Sparse Generalized Additive Models. NeurIPS (2023).

Chudi Zhong*, Zhi Chen*, Margo Seltzer, Cynthia Rudin

Missing Values and Imputation in Healthcare Data: Can Interpretable Machine Learning Help? CHIL (2023).

Zhi Chen, Sarah Tan, Urszula Chajewska, Cynthia Rudin, Rich Caruana

Exploring the Whole Rashomon Set of Sparse Decision Trees. NeurIPS (2022).
- Selected for oral presentation at NeurIPS 2022.
- Finalist for the INFORMS 2022 Data Mining Best Paper Competition (student track).

Rui Xin*, Chudi Zhong*, Zhi Chen*, Takuya Takagi, Margo Seltzer, Cynthia Rudin

How to See Hidden Patterns in Metamaterials with Interpretable Machine Learning. Extreme Mechanics Letters (2022).
- Winner of the 2022 Student Paper Competition of the American Statistical Association's Physical and Engineering Sciences (SPES) and Quality and Productivity (Q&P) sections.

Zhi Chen, Alex Ogren, Chiara Daraio, Cate Brinson, Cynthia Rudin

TimberTrek: Exploring and Curating Trustworthy Decision Trees with Interactive Visualization. IEEE VIS (2022).

Zijie Wang, Chudi Zhong, Rui Xin, Takuya Takagi, Zhi Chen, Cynthia Rudin, Margo Seltzer

Using Explainable Boosting Machines (EBMs) to Detect Common Flaws in Data. ECML-PKDD International Workshop and Tutorial on eXplainable Knowledge Discovery in Data Mining (2021).

Zhi Chen, Sarah Tan, Harsha Nori, Kori Inkpen, Yin Lou, Rich Caruana

Interpretable Machine Learning: Fundamental Principles and 10 Grand Challenges. Statistics Surveys (2021).

Cynthia Rudin, Chaofan Chen, Zhi Chen, Haiyang Huang, Lesia Semenova, Chudi Zhong

Concept Whitening for Interpretable Image Recognition. Nature Machine Intelligence (2020).

Zhi Chen, Yijie Bei, Cynthia Rudin

Adversarial Feature Matching for Text Generation. ICML (2017).

Yizhe Zhang, Zhe Gan, Kai Fan, Zhi Chen, Ricardo Henao, Dinghan Shen, Lawrence Carin