Toolkit

Experiment with and brainstorm different explainable interfaces.

Scenario

Imagine you are creating an AI helper that assists people in determining whether a plant is safe or poisonous. The AI helper is imperfect, so how might we help people decide whether to trust its prediction or rely on their own judgment?

In this particular instance, a person finds a large, blue, spotted, thorny plant. The AI thinks it is poisonous, but should the person trust that prediction?
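
To make the scenario concrete, the sketch below (Python with scikit-learn) stands in for the kind of imperfect classifier such an AI helper might use. The feature encoding, training data, and model choice are all invented for illustration; the toolkit does not assume any particular implementation.

    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical encoding of a plant's appearance (invented for illustration):
    # [is_blue, is_spotted, is_thorny, is_large]
    X_train = [
        [1, 1, 1, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 0],
    ]
    y_train = [1, 1, 0, 0, 1, 0]  # 1 = poisonous, 0 = safe (toy labels)

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    # The plant from the scenario: blue, spotted, thorny, and large.
    mystery_plant = [[1, 1, 1, 1]]
    print(model.predict(mystery_plant))  # e.g. [1], i.e. "poisonous"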

Where do I get started?

Explore how different question and explanation types can apply to this scenario by browsing the cards below. Note that some explanation types may be better suited to this scenario than others, depending on the user, their goals, and the underlying model.


You can also download these cards to explore how they may apply to your own work or to other scenarios.

Feature Importance
Explanation type: Global
Questions addressed: How, How it works

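As a rough idea of where a global feature-importance explanation might come from, assuming a tree-based scikit-learn model like the toy one above (the feature names and data are the same invented ones):

    from sklearn.ensemble import RandomForestClassifier

    feature_names = ["is_blue", "is_spotted", "is_thorny", "is_large"]
    X_train = [[1, 1, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
               [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    y_train = [1, 1, 0, 0, 1, 0]

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # A global view: which features the model relies on most overall,
    # across all plants rather than for any single prediction.
    for name, importance in zip(feature_names, model.feature_importances_):
        print(f"{name}: {importance:.2f}")
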
Decision Tree Approximation
Explanation type: Global
Questions addressed: Why, Why not, Why if, How it works

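One common way a decision tree approximation is produced is by fitting a small, readable surrogate tree to the black-box model's own predictions. A minimal sketch under the same toy assumptions:

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    feature_names = ["is_blue", "is_spotted", "is_thorny", "is_large"]
    X_train = [[1, 1, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
               [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    y_train = [1, 1, 0, 0, 1, 0]

    black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # Fit a shallow, human-readable tree to the black box's own predictions,
    # then print it as a global approximation of how the model behaves.
    surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    print(export_text(surrogate, feature_names=feature_names))
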
Rule Extraction
Explanation type: Global
Questions addressed: Why, Why not, Why if, How it works

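Rule extraction can be sketched very naively here because the toy feature space is tiny: query the model on every possible plant and read each poisonous prediction back as an if-then rule. Real rule-extraction methods are more sophisticated, so treat this only as an illustration of the idea:

    from itertools import product
    from sklearn.ensemble import RandomForestClassifier

    feature_names = ["is_blue", "is_spotted", "is_thorny", "is_large"]
    X_train = [[1, 1, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
               [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    y_train = [1, 1, 0, 0, 1, 0]
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # Enumerate every possible plant and report the ones the model calls
    # poisonous as simple if-then rules over the present features.
    for plant in product([0, 1], repeat=4):
        if model.predict([list(plant)])[0] == 1:
            conditions = [name for name, value in zip(feature_names, plant) if value]
            rule = " AND ".join(conditions) if conditions else "no notable features"
            print(f"IF {rule} THEN poisonous")
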
Data Sources
Explanation type: Global
Questions addressed: What data

System Capabilities
Explanation type: Global
Questions addressed: What output(s)

Feature Importance and Saliency
Explanation type: Local
Questions addressed: Why, Why not, How to be that

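For a local feature-importance or saliency view, a linear model makes the idea easy to see: each feature's coefficient times its value is that feature's contribution to this particular prediction. The logistic regression below stands in for whatever the real system uses; saliency maps or SHAP-style attributions play the same role for more complex models.

    from sklearn.linear_model import LogisticRegression

    feature_names = ["is_blue", "is_spotted", "is_thorny", "is_large"]
    X_train = [[1, 1, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
               [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    y_train = [1, 1, 0, 0, 1, 0]

    model = LogisticRegression().fit(X_train, y_train)

    # For a linear model, coefficient * feature value gives a per-feature
    # contribution toward this particular prediction (positive pushes
    # toward "poisonous", negative toward "safe").
    mystery_plant = [1, 1, 1, 1]
    for name, coef, value in zip(feature_names, model.coef_[0], mystery_plant):
        print(f"{name}: {coef * value:+.2f}")
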
Rules or Trees
Explanation type: Local
Questions addressed: Why, Why not, How to still be this

Contrastive or Counterfactual Features
Explanation type: Local
Questions addressed: Why, Why not, How to be that

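A contrastive or counterfactual feature explanation asks which features would have to change for the prediction to change. A naive single-feature search over the toy model above might look like this (it may find no single flip, in which case combinations of changes would need to be considered):

    from sklearn.ensemble import RandomForestClassifier

    feature_names = ["is_blue", "is_spotted", "is_thorny", "is_large"]
    X_train = [[1, 1, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
               [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    y_train = [1, 1, 0, 0, 1, 0]
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    mystery_plant = [1, 1, 1, 1]
    baseline = model.predict([mystery_plant])[0]  # e.g. 1, i.e. "poisonous"

    # Which single feature change would flip the prediction? Those are the
    # contrastive features: "it would be called safe if it were not X".
    for i, name in enumerate(feature_names):
        variant = list(mystery_plant)
        variant[i] = 1 - variant[i]
        if model.predict([variant])[0] != baseline:
            print(f"Flipping {name} changes the prediction.")
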
Prototypical or Representative Examples
Explanation type: Local
Questions addressed: Why, Why not, How to still be this

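Prototypical or representative examples can be as simple as showing the known plants most similar to the one in hand, together with their labels. A sketch using nearest neighbors over the same invented data:

    from sklearn.neighbors import NearestNeighbors

    # Rows encode [is_blue, is_spotted, is_thorny, is_large].
    X_train = [[1, 1, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
               [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    y_train = [1, 1, 0, 0, 1, 0]

    # Show the person the known plants most similar to the one in hand,
    # along with their labels, as representative examples.
    nn = NearestNeighbors(n_neighbors=2).fit(X_train)
    _, indices = nn.kneighbors([[1, 1, 1, 1]])
    for idx in indices[0]:
        label = "poisonous" if y_train[idx] == 1 else "safe"
        print(f"Similar known plant: {X_train[idx]} -> {label}")
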
Counterfactual Example
Explanation type: Local
Questions addressed: Why, Why not, How to be that

Feature Influence or Relevance
Explanation type: Local
Questions addressed: Why, Why not, How to be that, How to still be this, Why if

Model Confidence
Explanation type: Local
Questions addressed: Why, Why not, How confident

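Model confidence can be surfaced directly from the classifier's predicted probability, and the interface can use it to suggest when the person should fall back on their own judgment. The 90% threshold below is an arbitrary choice for illustration:

    from sklearn.ensemble import RandomForestClassifier

    X_train = [[1, 1, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1],
               [0, 0, 0, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
    y_train = [1, 1, 0, 0, 1, 0]
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)

    # Surface the model's own confidence so the person can decide when to
    # rely on their own judgment rather than the AI's prediction.
    proba_poisonous = model.predict_proba([[1, 1, 1, 1]])[0][1]
    if proba_poisonous >= 0.9:
        print(f"Likely poisonous ({proba_poisonous:.0%} confident).")
    else:
        print(f"Uncertain ({proba_poisonous:.0%} confident); treat the plant "
              "with caution and use your own judgment.")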

References

  • Google. (n.d.). People + AI Guidebook.
  • Liao, Q. V., Gruen, D., & Miller, S. (2020). Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20).
  • Lim, B. Y., & Dey, A. K. (2009). Assessing Demand for Intelligibility in Context-Aware Applications. In Proceedings of the 11th International Conference on Ubiquitous Computing (UbiComp '09).
  • Lim, B. Y., Dey, A. K., & Avrahami, D. (2009). Why and Why Not Explanations Improve the Intelligibility of Context-Aware Intelligent Systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '09).