Set the right expectations
Because AI systems are probabilistic, your system will sometimes produce an incorrect or unexpected output.
This makes it critical to help users calibrate their expectations about what the system can do and what it will output. Do this by being transparent about the system's capabilities and limitations.
For example, indicating that a prediction could be wrong may cause the user to trust that particular prediction less. In the long term, however, users may rely on your product more, because they're less likely to over-trust the system and be disappointed.
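One lightweight way to do this in an interface is to hedge low-confidence outputs instead of presenting every prediction as fact. The sketch below is a minimal, hypothetical illustration: the `Prediction` type, the `renderPrediction` function, and the 0.7 threshold are all assumptions for this example, not part of any specific framework, and a real threshold should be tuned to your own model's calibration.

```typescript
// Hypothetical shape for a model output in a UI layer.
interface Prediction {
  label: string;
  confidence: number; // model score in [0, 1]
}

// Render a prediction, hedging the wording when confidence is low
// so users don't over-trust an uncertain output.
function renderPrediction(p: Prediction): string {
  const LOW_CONFIDENCE_THRESHOLD = 0.7; // illustrative value only
  if (p.confidence < LOW_CONFIDENCE_THRESHOLD) {
    return `This might be "${p.label}", but we're not sure. Please double-check.`;
  }
  return `Suggested: "${p.label}"`;
}

// Example usage: a low-confidence prediction gets the hedged phrasing.
console.log(renderPrediction({ label: "invoice", confidence: 0.55 }));
```

The design choice here is that the hedge lives in the product copy, not just in a hidden score: the user sees language that matches the system's actual reliability.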
Aim for
Clarify the AI’s limitations, especially in high-stakes situations.
Avoid
Don’t suggest that the technology works perfectly in high-stakes situations if it isn’t yet reliable.