Good overview in TechCrunch yesterday of the many, many ways in which algorithms and other forms of artificial intelligence are already affecting our ability to exist in public. An A.I.-driven future isn’t coming—it’s already here.
McKinsey has recently been digging into ways companies can govern the use of A.I. to benefit society, even coining the phrase technological social responsibility:
Technological social responsibility (TSR) amounts to a conscious alignment between short- and medium-term business goals and longer-term societal ones.
TSR is interesting as a concept, but my problem with McKinsey’s application is that the firm can’t help applying the reductionist logic of management consulting to what are essentially moral questions. Haven’t forward-thinking business leaders been preaching the benefits of long-term thinking for years?
(The authors urge business leaders to balance a proactive focus on business innovation and managing the transition to a digital future with a more reactive, conservative posture focused on cost reduction and labor substitution. OK … except hasn’t McKinsey itself been one of the biggest advocates of using technology to find cost efficiencies?)
What we need are serious ethical frameworks to help those designing A.I. understand the moral implications of what they’re building. We need companies to have some moral imagination—to wrestle with more than just profitability and growth—and to stop pawning off ethical decisions about what they’re building and who they’re selling it to on someone else. That means hiring multidisciplinary product teams; it means “move smart and learn,” not “move fast and break things.”
Public pressure on companies to make these kinds of socio-digital business decisions is building, but not fast enough.