Our Research

We conduct human-AI interaction research across the disciplines of computer science, HCI, and design. We focus on explainability, interpretability, fairness, data visualization, human-centered AI, and ML for scientific discovery.

You can also check out our publications on Google Research.

2020

Communicating Model Uncertainty Over Space
Adam Pearce - Interactive site
HCI
Expert Discussions Improve Comprehension of Difficult Cases in Medical Image Assessment
Mike Schaekermann, Carrie Cai, Abigail Huang, Rory Sayres - CHI
HCI
Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Nets
Ryan Louie, Andy Coenen, Anna Huang, Michael Terry, Carrie Cai - CHI
HCI

2019

Depth Predictions in Art
Ellen Jiang, Emily Reif, Been Kim - Interactive site
Interpretability
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim - NeurIPS
Interpretability
"Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making
Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, Michael Terry - CSCW
HCI
Understanding UMAP
Andy Coenen, Adam Pearce - Interactive site
ML Dev
Human Evaluation of Models Built for Interpretability
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman, Finale Doshi-Velez - HCOMP
Interpretability
Language, Context, and Geometry in Neural Networks
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg - Interactive site
Interpretability
Language, trees, and geometry in neural networks
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg - Interactive site
Interpretability
Debiasing Embeddings for Reduced Gender Bias in Text Classification
Flavien Prost, Nithum Thain, Tolga Bolukbasi - ACL
Interpretability
The Bach Doodle: Approachable music composition with machine learning at scale
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft
HCI
Bach Doodle
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft - Interactive site
HCI
The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, Jimbo Wilson - VAST
Interpretability
Similar Image Search for Histopathology: SMILY
Narayan Hegde, Jason D. Hipp, Yun Liu, Michael Emmert-Buck, Emily Reif, Daniel Smilkov, Michael Terry, Carrie J. Cai, Mahul B. Amin, Craig H. Mermel, Phil Q. Nelson, Lily H. Peng, Greg S. Corrado, Martin C. Stumpe - npj Digital Medicine
HCI
XRAI: Better Attributions Through Regions
Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry - ICCV
Interpretability
Visualizing and Measuring the Geometry of BERT
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg - NeurIPS
Interpretability
Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure
Been Kim, Emily Reif, Martin Wattenberg, Samy Bengio
Interpretability
The Effects of Example-based Explanations in a Machine Learning Interface
Carrie J Cai, Jonas Jongejan, Jess Holbrook - IUI
HCI
Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viégas, Greg S. Corrado, Martin C. Stumpe, Michael Terry - CHI
HCI
Towards Automatic Concept-based Explanations
Amirata Ghorbani, James Wexler, James Zou, Been Kim - NeurIPS
Interpretability
TensorFlow.js: Machine learning for the web and beyond
Daniel Smilkov, Nikhil Thorat, Yannick Assogba, Ann Yuan, Nick Kreeger, Ping Yu, Kangyi Zhang, Shanqing Cai, Eric Nielsen, David Soergel, Stan Bileschi, Michael Terry, Charles Nicholson, Sandeep N. Gupta, Sarah Sirajuddin, D. Sculley, Rajat Monga, Greg Corrado, Fernanda B. Viégas, Martin Wattenberg - SysML
ML Dev

2018

Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo - AISTATS
Interpretability
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim - NeurIPS
Interpretability
GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation
Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda Viégas, Martin Wattenberg - VAST
ML Dev
GAN Lab
Interactive site
ML Dev
To Trust Or Not To Trust A Classifier
Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta - NeurIPS
Interpretability
Human-in-the-Loop Interpretability Prior
Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez - NeurIPS
HCI
Scalable and accurate deep learning with electronic health records
Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M. Dai, Nissan Hajaj, Michaela Hardt, Peter J. Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, Patrik Sundberg, Hector Yee, Kun Zhang, Yi Zhang, Gerardo Flores, Gavin E. Duggan, Jamie Irvine, Quoc Le, Kurt Litsch, Alexander Mossin, Justin Tansuwan, De Wang, James Wexler, Jimbo Wilson, Dana Ludwig, Samuel L. Volchenboum, Katherine Chou, Michael Pearson, Srinivasan Madabushi, Nigam H. Shah, Atul J. Butte, Michael D. Howell, Claire Cui, Greg S. Corrado, Jeffrey Dean - npj Digital Medicine
HCI

2017

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, Rory Sayres
Interpretability
See How the World Draws
Reena Jana, Josh Lovejoy
HCI
Direct-Manipulation Visualization of Deep Networks
Daniel Smilkov, Shan Carter, D. Sculley, Fernanda B. Viégas, Martin Wattenberg
ML Dev
TensorFlow Playground
Interactive site
ML Dev
SmoothGrad: removing noise by adding noise
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg
Interpretability
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
Kanit Wongsuphasawat, Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mané, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, Martin Wattenberg - IEEE TVCG
ML Dev

2016

Embedding Projector: Interactive Visualization and Interpretation of Embeddings
Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda B. Viégas, Martin Wattenberg - NeurIPS
ML Dev
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean - TACL
Interpretability

2014

Visualizing Statistical Mix Effects and Simpson's Paradox
Zan Armstrong, Martin Wattenberg - IEEE InfoVis
HCI

2013

Ad Click Prediction: a View from the Trenches
H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, Jeremy Kubica - ACM SIGKDD
HCI