Truth is Stranger than Fiction
Managing Risk - People, Process, Data, AI, and “Ethics”
Abstract
When it comes to new technology, the opportunities to create new experiences, do things faster, better, cheaper, or just “differently” are always present and follow typical innovation-adoption scenarios. With data and AI, every industry, product, and service imaginable is moving through the sequence of “adopt”, “react”, “adapt”, “transform”, and sometimes “regulate”.
How you treat customers, and their data, is now increasingly visible.
What data AI models ‘see’, and how they use that data, form a permanent record, with ever-increasing traceability as to which features influenced a decision and by how much.
Questions are being raised around the appropriateness, permissions, privacy, transparency, consent, and customer outcomes across all sorts of data, decisions, and data-driven decisions. Managing these questions when they involve an ethics dimension is a new risk, and new ethical dimensions are popping up all the time.
In this report, we introduce the concept of ‘ethics literacy’ as a complement to data literacy. Examining decisions made by machines is easily extended to examining decisions made by people and, increasingly, to blended people/machine processes.
Privacy, security, transparency, permissioning, and the awkward “ethics” discussions on fairness, equity, and bias are arising as operational risks as well as reputational risks. New laws are on the horizon. Regulations are evolving, including what is regulated, where those regulations apply, and which applications are not allowed. Proposed penalties look to be meaningful.