Our Research

We conduct human-AI interaction research across the disciplines of computer science, HCI, and design. We focus on explainability, interpretability, fairness, data visualization, human-centered AI, and ML for scientific discovery.


Papers

2023

A Word is Worth a Thousand Pictures: Prompts as AI Design Material
Chinmay Kulkarni, Stefania Druga, Minsuk Chang, Alex Fiannaca, Carrie Cai, Michael Terry - arXiv
HCI
The Prompt Artists
Minsuk Chang, Stefania Druga, Alex Fiannaca, Pedro Vergani, Chinmay Kulkarni, Carrie Cai, Michael Terry - arXiv
HCI, Generative Art
Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs
Kelvin Guu, Albert Webson, Ellie Pavlick, Lucas Dixon, Ian Tenney, Tolga Bolukbasi - arXiv
LLM, Interpretability
KNNs of Semantic Encodings for Rating Prediction
Léo Laugier, Thomas Bonald, Lucas Dixon, Raghuram Vadapalli - arXiv
Recommender, NLP
Large Scale Qualitative Evaluation of Generative Image Model Outputs
Yannick Assogba, Adam Pearce, Madison Elliott - arXiv
Interpretability, Visualization

2022

Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration
Srivatsan Krishnan, Natasha Jaques, Shayegan Omidshafiei, Dan Zhang, Izzeddin Gur, Vijay Janapa Reddi, Aleksandra Faust - NeurIPS ML4Sys Workshop 2022
ML, RL
Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis
Shayegan Omidshafiei, Andrei Kapishnikov, Yannick Assogba, Lucas Dixon, Been Kim - NeurIPS 2022
ML, RL, Interpretability
Concept-based Understanding of Emergent Multi-Agent Behavior
Niko Grupen, Natasha Jaques, Been Kim, Shayegan Omidshafiei - NeurIPS DeepRL Workshop 2022
ML, RL
Toxicity detection sensitive to conversational context
Alexandros Xenos, John Pavlopoulos, Ion Androutsopoulos, Lucas Dixon, Jeffrey Sorensen, Léo Laugier - First Monday
NLP, Toxicity
Acquisition of chess knowledge in AlphaZero
Thomas McGrath, Andrei Kapishnikov, Nenad Tomašev, Adam Pearce, Martin Wattenberg, Demis Hassabis, Been Kim, Ulrich Paquet, Vladimir Kramnik - PNAS
Interpretability
Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI
Mahima Pushkarna, Andrew Zaldivar, Oddur Kjartansson - FAccT 2022
HCI, ML Fairness, Data-Centric AI
Is Your Toxicity My Toxicity? Exploring the Impact of Rater Identity on Toxicity Annotation
Tesh Goyal, Ian Kivlichan, Rachel Rosen, Lucy Vasserman - CSCW 2022
HCI, ML Fairness, Toxicity, Data-Centric AI
Towards Tracing Knowledge in Language Models Back to the Training Data
Ekin Akyurek, Tolga Bolukbasi, Frederick Liu, Binbin Xiong, Ian Tenney, Jacob Andreas, Kelvin Guu - Findings of EMNLP 2022
LLM, Interpretability
A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research
Ned Cooper, Tiffanie Horne, Gillian Hayes, Courtney Heldreth, Michal Lahav, Jess Scon Holbrook, Lauren Wilcox - CHI 2022
HCI
Whose AI Dream? In search of the aspiration in data annotation
Ding Wang, Shantanu Prabhat, Nithya Sambasivan - CHI 2022
HCI, Data-Centric AI, Future of Work
“Because AI is 100% right and safe”: User Vulnerabilities and Sources of AI Authority in India
Shivani Kapania, Oliver Siy, Gabe Clapper, Azhagu SP, Nithya Sambasivan - CHI 2022
HCI, ML Fairness
Deskilling of Domain Expertise in AI Development
Nithya Sambasivan, Rajesh Veeraraghavan - CHI 2022
HCI, Future of Work
When is ML Data Good? Datafication in Public Health in India
Divy Thakkar*, Azra Ismail*, Alex Hanna, Pratyush Kumar, Nithya Sambasivan, Neha Kumar - CHI 2022
HCI, Data-Centric AI
PromptChainer: Chaining Large Language Model Prompts through Visual Programming
Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, Carrie J Cai - CHI 2022
HCI, LLM
Discovering the Syntax and Strategies of Natural Language Programming with Generative Language Models
Ellen Jiang, Edwin Toh, Alejandra Molina, Kristen Olson, Claire Kayacik, Aaron Donsbach, Carrie J Cai, Michael Terry - CHI 2022
HCI, LLM
Prompt-based Prototyping with Large Language Models
Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, Carrie J Cai - CHI 2022
HCI, LLM
Wordcraft: Story Writing With Large Language Models
Ann Yuan, Andy Coenen, Emily Reif, Daphne Ippolito - IUI 2022
HCI, LLM
IMACS: Image Model Attribution Comparison Summaries
Eldon Schoop, Ben Wedin, Andrei Kapishnikov, Tolga Bolukbasi, Michael Terry - arXiv 2022
HCI, Interpretability, Visualization
SynthBio: A Case Study in Faster Curation of Text Datasets
Ann Yuan, Daphne Ippolito, Vitaly Nikolaev, Chris Callison-Burch, Andy Coenen, Sebastian Gehrmann - NeurIPS Datasets and Benchmarks 2021
Data-Centric AI
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Asma Ghandeharioun, Been Kim, Chun-Liang Li, Brendan Jou, Brian Eoff, Rosalind W. Picard - ICLR 2022
Interpretability

2021

A Recipe For Arbitrary Text Style Transfer with Large Language Models
Emily Reif, Daphne Ippolito, Ann Yuan, Andy Coenen, Chris Callison-Burch, Jason Wei - ICLR 2022
LLM
Guided Integrated Gradients: An Adaptive Path Method for Removing Noise
Andrei Kapishnikov, Subhashini Venugopalan, Besim Avci, Ben Wedin, Michael Terry, Tolga Bolukbasi - CVPR 2021
Interpretability
Designerly Tele-Experiences: a New Approach to Remote Yet Still Situated Co-Design
Ferran Altarriba Bertran, Alexandra Pometko, Muskan Gupta, Lauren Wilcox, Reeta Banerjee, Katherine Isbister - TOCHI
HCI
Specialized Healthsheet for Healthcare Datasets
Negar Rostamzadeh, Subhrajit Roy, Diana Mincu, Andrew Smart, Lauren Wilcox, Mahima Pushkarna, Razvan Amironesei, Jessica Schrouff, Madeleine Elish, Nyalleng Moorosi, Berk Ustun, Noah Broesti, Katherine Heller - ML4Health 2021
ML Dev
GenLine and GenForm: Two Tools for Interacting with Generative Language Models in a Code Editor
Ellen Jiang, Edwin Toh, Alejandra Molina, Aaron Donsbach, Carrie Cai, Michael Terry - UIST 2021
HCI
A Gentle Introduction to Graph Neural Networks
Benjamin Sanchez-Lengeling, Emily Reif, Adam Pearce, Alexander B. Wiltschko - Distill
Explorable
Isolation in Coordination: Challenges of Caregivers in the USA
Mark Schurgin, Mark Schlager, Laura Vardoulakis, Laura Pina, Lauren Wilcox - CHI 2021
HCI
Breakdowns and Breakthroughs: Observing Musicians' Responses to the COVID-19 Pandemic
Carrie Cai, Michelle Carney, Nida Zada, Michael Terry - CHI 2021
HCI
AI as Social Glue: Uncovering the Roles of Deep Generative AI during Social Music Composition
Mia Suh, Emily Youngbloom, Michael Terry, Carrie Cai - CHI 2021
HCI
Program Synthesis with Large Language Models
Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, Charles Sutton - arXiv
Program Synthesis, LLM
An Interpretability Illusion for BERT
Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg - arXiv
Interpretability
Re-imagining Algorithmic Fairness in India and Beyond
Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Tulsee Doshi, Vinodkumar Prabhakaran - FAccT 2021
HCI
Civil rephrases of toxic texts with self-supervised transformers
Léo Laugier, John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon - EACL 2021
Toxicity, NLP
Augmenting the User-Item Graph with Textual Similarity Models
Federico López, Martin Scholz, Jessica Yung, Marie Pellat, Michael Strube, Lucas Dixon - arXiv
Recommender

2020

Evaluating Attribution for Graph Neural Networks
Benjamin Sanchez-Lengeling, Jennifer Wei, Brian Lee, Emily Reif, Peter Wang, Wesley Qian, Kevin McCloskey, Lucy Colwell, Alexander Wiltschko - NeurIPS 2020
Interpretability
The Language Interpretability Tool: Interactive Exploration, Explanation, and Counterfactual Analysis of NLP models
Ian Tenney, James Wexler, Jasmijn Bastings, Tolga Bolukbasi, Andy Coenen, Sebastian Gehrmann, Ellen Jiang, Mahima Pushkarna, Carey Radebaugh, Emily Reif, Ann Yuan - EMNLP 2020
Interpretability, Visualization
Non-portability of Algorithmic Fairness in India
Nithya Sambasivan, Erin Arnesen, Ben Hutchinson, Vinodkumar Prabhakaran - NeurIPS 2020 workshop
HCI
Six Attributes of Unhealthy Conversations
I Price, J Gifford-Moore, J Flemming, S Musker, M Roichman, G Sylvain, L Dixon - WOAH 2020 workshop
NLP
Evaluation of the Use of Combined Artificial Intelligence and Pathologist Assessment to Review and Grade Prostate Biopsies
David F. Steiner, Kunal Nagpal, Rory Sayres, Davis J. Foote, Benjamin D. Wedin, Adam Pearce, Carrie J. Cai, Samantha R. Winter, Matthew Symonds, Liron Yatziv, Andrei Kapishnikov, Trissia Brown, Isabelle Flament-Auvigne, Fraser Tan, Martin C. Stumpe, Pan-Pan Jiang, Yun Liu, Po-Hsuan Cameron Chen, Greg S. Corrado, Michael Terry, Craig H. Mermel - JAMA Network Open 2020
HCI
Debugging Tests for Model Explanations
Julius Adebayo, Michael Muelly, Ilaria Liccardi, Been Kim - NeurIPS / WHI 2020
Interpretability
Beyond the Portal: Re-imagining the Post-pandemic Future of Work
Divy Thakkar, Neha Kumar, Nithya Sambasivan - ACM Interactions 27(6) 2020
HCI
AI Song Contest: Human-AI Co-Creation in Songwriting
Cheng-Zhi Anna Huang, Hendrik Vincent Koops, Ed Newton-Rex, Monica Dinculescu, Carrie J. Cai - ISMIR 2020
HCI
On the Making and Breaking of Social Music Improvisation during the COVID-19 Pandemic
Carrie J. Cai, Michael Terry - New Future of Work Symposium 2020
HCI
Concept Bottleneck Models
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang - ICML 2020
Interpretability
Neural Networks Trained on Natural Scenes Exhibit Gestalt Closure
Been Kim, Emily Reif, Martin Wattenberg, Samy Bengio, Michael Mozer - Computational Brain & Behavior
Interpretability
Toxicity Detection: Does Context Really Matter?
John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon, Nithum Thain, Ion Androutsopoulos - ACL 2020
Toxicity
Classifying Constructive Comments
Varada Kolhatkar, Nithum Thain, Jeffrey Sorensen, Lucas Dixon, Maite Taboada - Special Issue Journal, WWW, ALW, 2020
NLP
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar - NeurIPS 2020
Interpretability
Expert Discussions Improve Comprehension of Difficult Cases in Medical Image Assessment
Mike Schaekermann, Carrie Cai, Abigail Huang, Rory Sayres - CHI 2020
HCI
Novice-AI Music Co-Creation via AI-Steering Tools for Deep Generative Nets
Ryan Louie, Andy Coenen, Anna Huang, Michael Terry, Carrie Cai - CHI 2020
HCI

2019

AI in Nigeria
Courtney Heldreth, Fernanda Viégas, Titi Akinsanmi, Diana Akrong
HCI
A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim - NeurIPS 2019
Interpretability
"Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making
Carrie J. Cai, Samantha Winter, David Steiner, Lauren Wilcox, Michael Terry - CSCW
HCI
Human Evaluation of Models Built for Interpretability
Isaac Lage, Emily Chen, Jeffrey He, Menaka Narayanan, Been Kim, Samuel Gershman, Finale Doshi-Velez - HCOMP 2019
Interpretability
Debiasing Embeddings for Reduced Gender Bias in Text Classification
Flavien Prost, Nithum Thain, Tolga Bolukbasi - ACL
Interpretability
The Bach Doodle: Approachable music composition with machine learning at scale
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft - arXiv
HCI
The What-If Tool: Interactive Probing of Machine Learning Models
James Wexler, Mahima Pushkarna, Tolga Bolukbasi, Martin Wattenberg, Fernanda Viégas, Jimbo Wilson - VIS 2019
Interpretability
Similar Image Search for Histopathology: SMILY
Narayan Hegde, Jason D. Hipp, Yun Liu, Michael Emmert-Buck, Emily Reif, Daniel Smilkov, Michael Terry, Carrie J. Cai, Mahul B. Amin, Craig H. Mermel, Phil Q. Nelson, Lily H. Peng, Greg S. Corrado, Martin C. Stumpe - npj Digital Medicine
HCI
XRAI: Better Attributions Through Regions
Andrei Kapishnikov, Tolga Bolukbasi, Fernanda Viégas, Michael Terry - ICCV 2019
Interpretability
Visualizing and Measuring the Geometry of BERT
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg - NeurIPS
Interpretability, Visualization
Do Neural Networks Show Gestalt Phenomena? An Exploration of the Law of Closure
Been Kim, Emily Reif, Martin Wattenberg, Samy Bengio - arXiv
Interpretability
The Effects of Example-based Explanations in a Machine Learning Interface
Carrie J Cai, Jonas Jongejan, Jess Holbrook - IUI 2019
HCI
Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Carrie J. Cai, Emily Reif, Narayan Hegde, Jason Hipp, Been Kim, Daniel Smilkov, Martin Wattenberg, Fernanda Viégas, Greg S. Corrado, Martin C. Stumpe, Michael Terry - CHI 2019
HCI
Towards Automatic Concept-based Explanations
Amirata Ghorbani, James Wexler, James Zou, Been Kim - NeurIPS
Interpretability
TensorFlow.js: Machine learning for the web and beyond
Daniel Smilkov, Nikhil Thorat, Yannick Assogba, Ann Yuan, Nick Kreeger, Ping Yu, Kangyi Zhang, Shanqing Cai, Eric Nielsen, David Soergel, Stan Bileschi, Michael Terry, Charles Nicholson, Sandeep N. Gupta, Sarah Sirajuddin, D. Sculley, Rajat Monga, Greg Corrado, Fernanda B. Viégas, Martin Wattenberg - SysML
ML Dev

2018

Interpreting Black Box Predictions using Fisher Kernels
Rajiv Khanna, Been Kim, Joydeep Ghosh, Oluwasanmi Koyejo - AISTATS
Interpretability
ClinicalVis: Supporting Clinical Task-Focused Design Evaluation
Marzyeh Ghassemi, Mahima Pushkarna, James Wexler, Jesse Johnson, Paul Varghese - arXiv
Visualization
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Michael Muelly, Ian Goodfellow, Moritz Hardt, Been Kim - NeurIPS
Interpretability
GAN Lab: Understanding Complex Deep Generative Models using Interactive Visual Experimentation
Minsuk Kahng, Nikhil Thorat, Duen Horng Chau, Fernanda Viégas, Martin Wattenberg - VIS 2018
ML Dev, Visualization, Explorable
Deep learning of aftershock patterns following large earthquakes
Phoebe M. R. DeVries, Fernanda Viégas, Martin Wattenberg, Brendan J. Meade - Nature
Other
To Trust Or Not To Trust A Classifier
Heinrich Jiang, Been Kim, Melody Y. Guan, Maya Gupta - NeurIPS
Interpretability
Human-in-the-Loop Interpretability Prior
Isaac Lage, Andrew Slavin Ross, Been Kim, Samuel J. Gershman, Finale Doshi-Velez - NeurIPS
HCI
Scalable and accurate deep learning with electronic health records
Alvin Rajkomar, Eyal Oren, Kai Chen, Andrew M. Dai, Nissan Hajaj, Michaela Hardt, Peter J. Liu, Xiaobing Liu, Jake Marcus, Mimi Sun, Patrik Sundberg, Hector Yee, Kun Zhang, Yi Zhang, Gerardo Flores, Gavin E. Duggan, Jamie Irvine, Quoc Le, Kurt Litsch, Alexander Mossin, Justin Tansuwan, De Wang, James Wexler, Jimbo Wilson, Dana Ludwig, Samuel L. Volchenboum, Katherine Chou, Michael Pearson, Srinivasan Madabushi, Nigam H. Shah, Atul J. Butte, Michael D. Howell, Claire Cui, Greg S. Corrado, Jeffrey Dean - npj Digital Medicine
HCI

2017

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, Rory Sayres - ICML 2018
Interpretability
Direct-Manipulation Visualization of Deep Networks
Daniel Smilkov, Shan Carter, D. Sculley, Fernanda B. Viégas, Martin Wattenberg - arXiv
ML Dev, Visualization, Explorable
SmoothGrad: removing noise by adding noise
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg - arXiv
Interpretability
Visualizing Dataflow Graphs of Deep Learning Models in TensorFlow
Kanit Wongsuphasawat, Daniel Smilkov, James Wexler, Jimbo Wilson, Dandelion Mané, Doug Fritz, Dilip Krishnan, Fernanda B. Viégas, Martin Wattenberg - VIS 2017
ML Dev, Visualization
What Is Better Than Coulomb Failure Stress? A Ranking of Scalar Static Stress Triggering Mechanisms from 10⁵ Mainshock-Aftershock Pairs
Brendan J. Meade, Phoebe M. R. DeVries, Jeremy Faller, Fernanda Viégas, Martin Wattenberg - Geophysical Research Letters
Other

2016

Interactive Visualization of Spatially Amplified GNSS Time-Series Position Fields
Brendan J. Meade, William T. Freeman, James Wilson, Fernanda Viégas, Martin Wattenberg
Other, Visualization
Embedding Projector: Interactive Visualization and Interpretation of Embeddings
Daniel Smilkov, Nikhil Thorat, Charles Nicholson, Emily Reif, Fernanda B. Viégas, Martin Wattenberg - NeurIPS
ML Dev, Visualization, Interpretability
Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation
Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, Jeffrey Dean - TACL
Interpretability
TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng - arXiv
ML Dev

2014

Visualizing Statistical Mix Effects and Simpson's Paradox
Zan Armstrong, Martin Wattenberg - InfoVis 2014
HCI, Visualization

2013

Ad Click Prediction: a View from the Trenches
H. Brendan McMahan, Gary Holt, D. Sculley, Michael Young, Dietmar Ebner, Julian Grady, Lan Nie, Todd Phillips, Eugene Davydov, Daniel Golovin, Sharat Chikkerur, Dan Liu, Martin Wattenberg, Arnar Mar Hrafnkelsson, Tom Boulos, Jeremy Kubica - KDD 2013
HCI
Google+ Ripples: a native visualization of information flow
Fernanda Viégas, Martin Wattenberg, Jack Hebert, Geoffrey Borggaard, Alison Cichowlas, Jonathan Feinberg, Jon Orwant, Christopher Wren - WWW
Visualization

2011

Luscious
Fernanda Viégas & Martin Wattenberg
Visualization

2010

Beautiful History: Visualizing Wikipedia
Fernanda Viégas & Martin Wattenberg
Visualization

Interactive Blog Posts and Websites

Participatory Machine Learning
Fernanda Viégas, Jess Holbrook, Martin Wattenberg
HCI
Depth Predictions in Art
Ellen Jiang, Emily Reif, Been Kim
Interpretability
Understanding UMAP
Andy Coenen, Adam Pearce
ML Dev, Visualization, Explorable
Language, Context, and Geometry in Neural Networks, Part II
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Interpretability
Language, Context, and Geometry in Neural Networks, Part I
Andy Coenen, Emily Reif, Ann Yuan, Been Kim, Adam Pearce, Fernanda Viégas, Martin Wattenberg
Interpretability
The Bach Doodle: Celebrating Johann Sebastian Bach
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, Jacob Howcroft
HCI
See how the world draws
Reena Jana, Josh Lovejoy
HCI
How to Use t-SNE Effectively
Martin Wattenberg, Fernanda Viégas, Ian Johnson - Distill
ML Dev, Visualization, Explorable
Attacking discrimination with smarter machine learning
Martin Wattenberg, Fernanda Viégas, Moritz Hardt
ML Fairness, Visualization, Explorable
Design and Redesign in Data Visualization
Fernanda Viégas, Martin Wattenberg
HCI, Visualization