Set the right expectations
Because AI systems are probabilistic, your system will inevitably produce an incorrect or unexpected output at some point.
This makes it critical to help users calibrate their expectations about what the system can do and what its output means. Do this by being transparent about both the system's capabilities and its limitations.
For example, indicating that a prediction could be wrong may cause the user to trust that particular prediction less. In the long term, however, users may rely on your product more, because they're less likely to over-trust the system and be disappointed.
![Plant pal helps you identify 400+ plant types native to the United States and determine if they're safe for adults, cats and dogs.](/guidebook/media/6a98bcca-046f-4b6a-b872-904c89700a9a/set-the-right-expectations-1-0.png)
Aim for
Clarify the AI’s limitations, especially in high-stakes situations.
![A botanist you can keep in your pocket. Use it to identify any plant and determine if it's safe for people and pets.](/guidebook/media/75a7033d-bf91-4e1c-8a24-3bf6c69f538d/set-the-right-expectations-1-1.png)
Avoid
Avoid suggesting that the technology works perfectly in high-stakes situations if it isn’t yet reliable.