Our Research

We conduct human-AI interaction research across the disciplines of computer science, HCI, and design. Our focus areas include explainability, interpretability, fairness, data visualization, human-centered AI, and ML for scientific discovery.

You can also check out our publications on Google Research.

Papers

2020

Expert Discussions Improve Comprehension of Difficult Cases in Medical Image Assessment
Mike Schaekermann, Carrie Cai, Abigail Huang, Rory Sayres - CHI
HCI
Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Nets
Ryan Louie, Andy Coenen, Anna Huang, Michael Terry, Carrie Cai - CHI
HCI

2019

AI in Nigeria
Courtney Heldreth, Fernanda Viégas, Titi Akinsanmi, Diana Akrong
HCI, NBU
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim - NeurIPS
Interpretability
"Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making
Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, Michael Terry - CSCW
HCI
Human Evaluation of Models Built for Interpretability
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman and Finale Doshi-Velez - HCOMP
Interpretability
Debiasing Embeddings for Reduced Gender Bias in Text Classification
Flavien Prost, Nithum Thain, Tolga Bolukbasi - ACL
ML Fairness
The Bach Doodle: Approachable music composition with machine learning at scale
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft
HCI
The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, Jimbo Wilson - VAST
Interpretability
Similar Image Search for Histopathology: SMILY
Narayan Hegde, Jason D. Hipp, Yun Liu, Michael Emmert-Buck, Emily Reif, Daniel Smilkov, Michael Terry, Carrie J. Cai, Mahul B. Amin, Craig H. Mermel, Phil Q. Nelson, Lily H. Peng, Greg S. Corrado, Martin C. Stumpe - Nature
HCI
XRAI: Better Attributions Through Regions
Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry - ICCV
Interpretability
Visualizing and Measuring the Geometry of BERT
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg - NeurIPS
Interpretability
Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure
Been Kim, Emily Reif, Martin Wattenberg, Samy Bengio
Interpretability
The Effects of Example-based Explanations in a Machine Learning Interface
Carrie J Cai, Jonas Jongejan, Jess Holbrook - IUI
HCI
Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viégas, Greg S. Corrado, Martin C. Stumpe, Michael Terry - CHI
HCI
Towards Automatic Concept-based Explanations
Amirata Ghorbani, James Wexler, James Zou, Been Kim - NeurIPS
Interpretability
Tensorflow.js: Machine learning for the web and beyond
Daniel Smilkov, Nikhil Thorat, Yannick Assogba, Ann Yuan, Nick Kreeger, Ping Yu, Kangyi Zhang, Shanqing Cai, Eric Nielsen, David Soergel, Stan Bileschi, Michael Terry, Charles Nicholson, Sandeep N. Gupta, Sarah Sirajuddin, D. Sculley, Rajat Monga, Greg Corrado, Fernanda B. Viégas, Martin Wattenberg - SysML
ML Dev

2018

Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo - AISTATS
Interpretability
ClinicalVis: Supporting Clinical Task-Focused Design Evaluation
Marzyeh Ghassemi, Mahima Pushkarna, James Wexler, Jesse Johnson, Paul Varghese
HCI
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim - NeurIPS
Interpretability
GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation
Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda Viégas, Martin Wattenberg - VAST
ML Dev
Deep learning of aftershock patterns following large earthquakes
Phoebe M. R. DeVries, Fernanda Viégas, Martin Wattenberg & Brendan J. Meade - Nature
Other
To Trust Or Not To Trust A Classifier
Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta - NeurIPS
Interpretability
Human-in-the-Loop Interpretability Prior
Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez - NeurIPS
HCI
Scalable and accurate deep learning with electronic health records
Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M. Dai, Nissan Hajaj, Michaela Hardt, Peter J. Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, Patrik Sundberg, Hector Yee, Kun Zhang, Yi Zhang, Gerardo Flores, Gavin E. Duggan, Jamie Irvine, Quoc Le, Kurt Litsch, Alexander Mossin, Justin Tansuwan, De Wang, James Wexler, Jimbo Wilson, Dana Ludwig, Samuel L. Volchenboum, Katherine Chou, Michael Pearson, Srinivasan Madabushi, Nigam H. Shah, Atul J. Butte, Michael D. Howell, Claire Cui, Greg S. Corrado & Jeffrey Dean - Nature
HCI

2017

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, Rory Sayres
Interpretability
Direct-Manipulation Visualization of Deep Networks
Daniel Smilkov, Shan Carter, D. Sculley, Fernanda B. Viégas, Martin Wattenberg
ML Dev
SmoothGrad: removing noise by adding noise
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg
Interpretability
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
Kanit Wongsuphasawat, Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mane, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, and Martin Wattenberg - IEEE TVCG
ML Dev
What Is Better Than Coulomb Failure Stress? A Ranking of Scalar Static Stress Triggering Mechanisms from 10⁵ Mainshock-Aftershock Pairs
Brendan J. Meade, Phoebe M. R. DeVries, Jeremy Faller, Fernanda Viégas, Martin Wattenberg - Geophysical Research Letters
Other

2016

Interactive Visualization of Spatially Amplified GNSS Time-Series Position Fields
Brendan J. Meade, William T. Freeman, James Wilson, Fernanda Viégas, and Martin Wattenberg
Other
Embedding Projector: Interactive Visualization and Interpretation of Embeddings
Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda B. Viégas, Martin Wattenberg - NeurIPS
ML Dev
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
ML Dev
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean - ACL
Interpretability

2014

Visualizing Statistical Mix Effects and Simpson's Paradox
Zan Armstrong, Martin Wattenberg - IEEE InfoVis
HCI

2013

Google+ Ripples: a native visualization of information flow
Fernanda Viégas, Martin Wattenberg, Jack Hebert, Geoffrey Borggaard, Alison Cichowlas, Jonathan Feinberg, Jon Orwant, Christopher Wren - WWW
Visualization
Ad Click Prediction: a View from the Trenches
H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, Jeremy Kubica - ACM SIGKDD
HCI

2011

Luscious
Fernanda Viégas & Martin Wattenberg
Visualization

2010

Beautiful History: Visualizing Wikipedia
Fernanda Viégas & Martin Wattenberg
Visualization

Interactive Blog Posts and Websites

People + AI Guidebook
Google PAIR
HCI, Interpretability, ML Fairness
Depth Predictions in Art
Ellen Jiang, Emily Reif, Been Kim
Interpretability
Understanding UMAP
Andy Coenen, Adam Pearce
ML Dev
Language, Context, and Geometry in Neural Networks
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Interpretability
Language, trees, and geometry in neural networks
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Interpretability
The Bach Doodle: Celebrating Johann Sebastian Bach
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft
HCI
See How the World Draws
Reena Jana, Josh Lovejoy
HCI
How to Use t-SNE Effectively
Martin Wattenberg, Fernanda Viégas, Ian Johnson - Distill
ML Dev
Attacking discrimination with smarter machine learning
Martin Wattenberg, Fernanda Viégas, Moritz Hardt
ML Fairness
Design and Redesign in Data Visualization
Fernanda Viégas, Martin Wattenberg
HCI