We build open-source tools and platforms that make ML models more understandable, trustworthy, and fair.
You can also check out more of our code on GitHub.
A toolkit of participatory activities, frameworks, and guidance designed to promote dataset transparency.
Interactively investigate your dataset to improve data quality and mitigate fairness and bias issues.
A visual, interactive interpretability and debugging tool for many types of ML models.
Visually probe the behavior of trained models, with minimal coding.
Visualization libraries to explore, understand, and analyze large machine learning datasets.
Interpretability beyond feature attribution: quantitative testing with concept activation vectors.
Interactively visualize high-dimensional data, in a variety of embeddings.
Collaboratively compose counterpoint with an AI agent trained on the chorale canon of J. S. Bach.
Visualize single-patient Electronic Health Record (EHR) data in an intuitive, easy-to-read format.
Explore how a word's context changes its representation in transformer models.
Explore the output of a depth prediction model on artworks.
A visual tool for tinkering with a machine learning model right in your browser.