Our Research

We conduct human-AI interaction research across the disciplines of computer science, HCI, and design, focusing on explainability, interpretability, fairness, data visualization, human-centered AI, and ML for scientific discovery.

You can also check out our publications on Google Research.



Papers

2022

Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI
Mahima Pushkarna, Andrew Zaldivar, Oddur Kjartansson
HCI, ML, Data-Centric AI
Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation
Tesh Goyal, Ian Kivlichan, Rachel Rosen, Lucy Vasserman - CSCW 2022
HCI, ML, Data-Centric AI
A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research
Ned Cooper, Tiffanie Horne, Gillian Hayes, Courtney Heldreth, Michal Lahav, Jess Holbrook, Lauren Wilcox - CHI 2022
HCI
Whose AI Dream? In search of the aspiration in data annotation
Ding Wang, Shantanu Prabhat, Nithya Sambasivan - CHI 2022
HCI, data, future of work
“Because AI is 100% right and safe”: User Vulnerabilities and Sources of AI Authority in India
Shivani Kapania, Oliver Siy, Gabe Clapper, Azhagu SP, Nithya Sambasivan - CHI 2022
HCI, fairness
Deskilling of Domain Expertise in AI Development
Nithya Sambasivan, Rajesh Veeraraghavan - CHI 2022
HCI, future of work, labour
When is ML Data Good? Datafication in Public Health in India
Divy Thakkar*, Azra Ismail*, Alex Hanna, Pratyush Kumar, Nithya Sambasivan, Neha Kumar - CHI 2022
Wordcraft: Story Writing With Large Language Models
Ann Yuan, Andy Coenen, Emily Reif, Daphne Ippolito - IUI 2022
HCI
SynthBio: A Case Study in Faster Curation of Text Datasets
Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, Sebastian Gehrmann - NeurIPS Datasets and Benchmarks
ML/Datasets
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard - ICLR 2022
Interpretability

2021

A Recipe for Arbitrary Text Style Transfer with Large Language Models
Emily Reif, Daphne Ippolito, Ann Yuan, Andy Coenen, Chris Callison-Burch, Jason Wei - ICLR 2022
LLM
Guided Integrated Gradients: An Adaptive Path Method for Removing Noise
Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, Tolga Bolukbasi - CVPR 2021
Interpretability
Designerly Tele-Experiences: a New Approach to Remote Yet Still Situated Co-Design
Ferran Altarriba Bertran, Alexandra Pometko, Muskan Gupta, Lauren Wilcox, Reeta Banerjee, Katherine Isbister - TOCHI
HCI
Specialized Healthsheet for Healthcare Datasets
Negar Rostamzadeh, Subhrajit Roy, Diana Mincu, Andrew Smart, Lauren Wilcox, Mahima Pushkarna, Razvan Amironesei, Jessica Schrouff, Madeleine Elish, Nyalleng Moorosi, Berk Ustun, Noah Broesti, Katherine Heller - ML4Health 2021
ML Dev
Isolation in Coordination: Challenges of Caregivers in the USA
Mark Schurgin, Mark Schlager, Laura Vardoulakis, Laura Pina, Lauren Wilcox - CHI 2021
HCI
Re-imagining Algorithmic Fairness in India and Beyond
Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran - FAccT 2021
HCI
Civil Rephrases of Toxic Texts with Self-Supervised Transformers
Léo Laugier, John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon - EACL 2021
Augmenting the User-Item Graph with Textual Similarity Models
Federico López, Martin Scholz, Jessica Yung, Marie Pellat, Michael Strube, Lucas Dixon - arXiv
Recommenders

2020

Evaluating Attribution for Graph Neural Networks
Sanchez-Lengeling, Wei, Lee, Reif, Wang, Qian, McCloskey, Colwell, Wiltschko - NeurIPS 2020
Interpretability
The Language Interpretability Tool: Interactive Exploration, Explanation, and Counterfactual Analysis of NLP models
Tenney, Wexler, Bastings, Bolukbasi, Coenen, Gehrmann, Jiang, Pushkarna, Radebaugh, Reif, Yuan - EMNLP 2020
Interpretability
Non-portability of Algorithmic Fairness in India
Sambasivan, Arnesen, Hutchinson, Prabhakaran - NeurIPS 2020 workshop
HCI
Six Attributes of Unhealthy Conversation
I Price, J Gifford-Moore, J Flemming, S Musker, M Roichman, G Sylvain, L Dixon - WOAH 2020 workshop
Other
Artificial intelligence assistance improves Gleason grading of prostate needle core biopsies
David F. Steiner, Kunal Nagpal, Rory Sayres, Davis J. Foote, Benjamin D. Wedin, Adam Pearce, Carrie J. Cai, Samantha R. Winter, Matthew Symonds, Liron Yatziv, Andrei Kapishnikov, Trissia Brown, Isabelle Flament-Auvigne, Fraser Tan, Martin C. Stumpe, Pan-Pan Jiang, Yun Liu, Po-Hsuan Cameron Chen, Greg S. Corrado, Michael Terry, Craig H. Mermel - JAMA Network Open 2020
HCI
Evaluation of the Use of Combined Artificial Intelligence and Pathologist Assessment to Review and Grade Prostate Biopsies
Steiner, Nagpal, Sayres, Foote, Wedin, Pearce, Cai, Winter, Symonds, Yatziv, Kapishnikov, Brown, Flament, Tan, Stumpe, Jiang, Liu, Chen, Corrado, Terry, Mermel - JAMA Network Open 2020
HCI
Debugging Tests for Model Explanations
Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim - NeurIPS / WHI 2020
Interpretability
Beyond the Portal: Re-imagining the Post-pandemic Future of Work
Thakkar, Kumar, Sambasivan - ACM interactions 27(6) 2020
HCI
AI Song Contest: Human-AI Co-Creation in Songwriting
Cheng-Zhi Anna Huang, Hendrik Vincent Koops, Ed Newton-Rex, Monica Dinculescu, Carrie J. Cai - ISMIR 2020
HCI
On the Making and Breaking of Social Music Improvisation during the COVID-19 Pandemic
Carrie J. Cai, Michael Terry - New Future of Work Symposium 2020
HCI
Concept Bottleneck Models
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang - ICML 2020
Interpretability
Neural Networks Trained on Natural Scenes Exhibit Gestalt Closure
Kim, Reif, Wattenberg, Bengio, Mozer - Computational Brain & Behavior
Interpretability
Toxicity Detection: Does Context Really Matter?
Pavlopoulos, Sorensen, Dixon, Thain, Androutsopoulos - ACL 2020
Other, HCI
Classifying Constructive Comments
Kolhatkar, Thain, Sorensen, Dixon, Taboada - Special Issue Journal, WWW, ALW, 2020
Other
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar - NeurIPS 2020
Interpretability
Expert Discussions Improve Comprehension of Difficult Cases in Medical Image Assessment
Mike Schaekermann, Carrie Cai, Abigail Huang, Rory Sayres - CHI
HCI
Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Nets
Ryan Louie, Andy Coenen, Anna Huang, Michael Terry, Carrie Cai - CHI
HCI

2019

AI in Nigeria
Courtney Heldreth, Fernanda Viégas, Titi Akinsanmi, Diana Akrong
HCI, NBU
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim - NeurIPS
Interpretability
"Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making
Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, Michael Terry - CSCW
HCI
Human Evaluation of Models Built for Interpretability
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman and Finale Doshi-Velez - HCOMP
Interpretability
Debiasing Embeddings for Reduced Gender Bias in Text Classification
Flavien Prost, Nithum Thain, Tolga Bolukbasi - ACL
Interpretability
The Bach Doodle: Approachable music composition with machine learning at scale
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft
HCI
The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, Jimbo Wilson - VAST
Interpretability
Similar Image Search for Histopathology: SMILY
Narayan Hegde, Jason D. Hipp, Yun Liu, Michael Emmert-Buck, Emily Reif, Daniel Smilkov, Michael Terry, Carrie J. Cai, Mahul B. Amin, Craig H. Mermel, Phil Q. Nelson, Lily H. Peng, Greg S. Corrado, Martin C. Stumpe - Nature
HCI
XRAI: Better Attributions Through Regions
Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry - ICCV 2019
Interpretability
Visualizing and Measuring the Geometry of BERT
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg - NeurIPS
Interpretability
Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure
Been Kim, Emily Reif, Martin Wattenberg, Samy Bengio
Interpretability
The Effects of Example-based Explanations in a Machine Learning Interface
Carrie J. Cai, Jonas Jongejan, Jess Holbrook - IUI
HCI
Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viégas, Greg S. Corrado, Martin C. Stumpe, Michael Terry - CHI
HCI
Towards Automatic Concept-based Explanations
Amirata Ghorbani, James Wexler, James Zou, Been Kim - NeurIPS
Interpretability
TensorFlow.js: Machine Learning for the Web and Beyond
Daniel Smilkov, Nikhil Thorat, Yannick Assogba, Ann Yuan, Nick Kreeger, Ping Yu, Kangyi Zhang, Shanqing Cai, Eric Nielsen, David Soergel, Stan Bileschi, Michael Terry, Charles Nicholson, Sandeep N. Gupta, Sarah Sirajuddin, D. Sculley, Rajat Monga, Greg Corrado, Fernanda B. Viégas, Martin Wattenberg - SysML
ML Dev

2018

Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo - AISTATS
Interpretability
ClinicalVis: Supporting Clinical Task-Focused Design Evaluation
Marzyeh Ghassemi, Mahima Pushkarna, James Wexler, Jesse Johnson, Paul Varghese
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim - NeurIPS
Interpretability
GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation
Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda Viégas, Martin Wattenberg - VAST
ML Dev
Deep learning of aftershock patterns following large earthquakes
Phoebe M. R. DeVries, Fernanda Viégas, Martin Wattenberg & Brendan J. Meade - Nature
Other
To Trust Or Not To Trust A Classifier
Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta - NeurIPS
Interpretability
Human-in-the-Loop Interpretability Prior
Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez - NeurIPS
HCI
Scalable and accurate deep learning with electronic health records
Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M. Dai, Nissan Hajaj, Michaela Hardt, Peter J. Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, Patrik Sundberg, Hector Yee, Kun Zhang, Yi Zhang, Gerardo Flores, Gavin E. Duggan, Jamie Irvine, Quoc Le, Kurt Litsch, Alexander Mossin, Justin Tansuwan, De Wang, James Wexler, Jimbo Wilson, Dana Ludwig, Samuel L. Volchenboum, Katherine Chou, Michael Pearson, Srinivasan Madabushi, Nigam H. Shah, Atul J. Butte, Michael D. Howell, Claire Cui, Greg S. Corrado & Jeffrey Dean - Nature
HCI

2017

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, Rory Sayres
Interpretability
Direct-Manipulation Visualization of Deep Networks
Daniel Smilkov, Shan Carter, D. Sculley, Fernanda B. Viégas, Martin Wattenberg
ML Dev
SmoothGrad: removing noise by adding noise
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg
Interpretability
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
Kanit Wongsuphasawat, Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mane, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, and Martin Wattenberg - IEEE TVCG
ML Dev
What Is Better Than Coulomb Failure Stress? A Ranking of Scalar Static Stress Triggering Mechanisms from 10⁵ Mainshock-Aftershock Pairs
Brendan J. Meade, Phoebe M. R. DeVries, Jeremy Faller, Fernanda Viégas, Martin Wattenberg - Geophysical Research Letters
Other

2016

Interactive Visualization of Spatially Amplified GNSS Time-Series Position Fields
Brendan J. Meade, William T. Freeman, James Wilson, Fernanda Viégas, and Martin Wattenberg
Other
Embedding Projector: Interactive Visualization and Interpretation of Embeddings
Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda B. Viégas, Martin Wattenberg - NeurIPS
ML Dev
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean - ACL
Interpretability
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng
ML Dev

2014

Visualizing Statistical Mix Effects and Simpson's Paradox
Zan Armstrong, Martin Wattenberg - IEEE InfoVis
HCI

2013

Ad Click Prediction: a View from the Trenches
H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, Jeremy Kubica - ACM SIGKDD
HCI
Google+Ripples: a native visualization of information flow
Fernanda Viégas, Martin Wattenberg, Jack Hebert, Geoffrey Borggaard, Alison Cichowlas, Jonathan Feinberg, Jon Orwant, Christopher Wren - WWW
Visualization

2011

Luscious
Fernanda Viégas & Martin Wattenberg
Visualization

2010

Beautiful History: Visualizing Wikipedia
Fernanda Viégas & Martin Wattenberg
Visualization

Interactive Blog Posts and Websites

Collecting Sensitive Information
Adam Pearce, Ellen Jiang
Why some models leak data
Adam Pearce, Ellen Jiang
Participatory Machine Learning
Fernanda Viégas, Jess Holbrook, Martin Wattenberg
Measuring Fairness
Adam Pearce
Hidden Bias
Adam Pearce
Depth Predictions in Art
Ellen Jiang, Emily Reif, Been Kim
Interpretability
Understanding UMAP
Andy Coenen, Adam Pearce
ML Dev
Language, Context, and Geometry in Neural Networks, Part II
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Interpretability
Language, Context, and Geometry in Neural Networks, Part I
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Interpretability
The Bach Doodle: Celebrating Johann Sebastian Bach
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft
HCI
See how the world draws
Reena Jana, Josh Lovejoy
HCI
How to Use t-SNE Effectively
Martin Wattenberg, Fernanda Viégas, Ian Johnson - Distill
ML Dev
Attacking discrimination with smarter machine learning
Martin Wattenberg, Fernanda Viégas, Moritz Hardt
ML Fairness
Design and Redesign in Data Visualization
Fernanda Viégas, Martin Wattenberg
HCI