
You’ve probably read about AI ethics. I want to go beyond the basics and sell you on the ROI of AI ethics. We’re talking about everyday consumer and B2B concerns here, things like accuracy, not wider-scale issues like battlefield autonomy. I’ve worked on a few AI projects over the past year: one more retrieval-based (prompt/response), another more generative (content creation with human oversight), plus some testing work. With today’s tools and vendors, you can often ship something that seems solid pretty quickly, whether it’s a generative product or something built on more traditional tools like predictive analytics. Doing it truly well, though, is sometimes orders of magnitude harder, and it can be tough to justify the resources to do it right.
Here’s a cynical thought about AI ethics: often, companies don’t really care. Actually, that’s too cynical. People may care. However, if you look at where pressure and resourcing usually go, they tend toward speed to market and growth, with maybe a thin layer of regulatory compliance. A few companies differentiate on quality, but most seem to be in the feature race. “Non-functional” requirements? We’ll get to them. At some point.
This doesn’t mean everything needs to be perfect. If we waited for 100% certainty, nothing would launch. We make tradeoffs: a light/dark mode toggle is not an emergency cardiac-alert system. People get that, and it applies especially to AI, which is harder to understand and harder to control. How can we deal with these realities?