7 steps towards a more ethical artificial intelligence
AI violates privacy. The output AI generates cannot be explained. AI is biased.
All of this is true and happening today, and there is a risk that these issues will accelerate as AI adoption increases. Before lawsuits start pouring in and government regulators start cracking down, organizations using AI need to become more proactive and formulate enforceable ethical AI policies.
But an effective AI ethics policy requires more than a few feel-good statements. It requires actions, embedded in a culture conscious of AI ethics. “An AI ethics statement is a good start. It’s also the tip of the iceberg,” says Reid Blackman, who explores AI ethics in his upcoming book, Ethical Machines: Your Concise Guide to Totally Unbiased, Transparent and Respectful AI (Harvard Business Review Press). “Even those who spend a lot of time talking about, and even working in, the area of AI ethics have great difficulty understanding what it is. That’s because they’re trying to build structure around something that they still find spongy, fuzzy, and subjective.”
Blackman, CEO of Virtue, seeks to turn something as “squishy” as AI ethics into something concrete. He offers guidelines for instilling actionable ethics into AI systems and processes.
Bring clarity to AI standards. “Between ‘never do this’ and ‘always do that,’ there’s a lot of ‘maybe do this,’” warns Blackman. “Think about difficult cases before they arise.” The conclusions you reach form a body of “ethical case law” that can lay the groundwork for ethical standards in AI. “Creating your organization’s AI ethical jurisprudence is an extremely powerful tool for articulating the organization’s AI ethical standards and communicating them to relevant stakeholders.”
Raise awareness among all members of the organization. Educate data scientists, engineers, and product owners about the ethical issues of AI, urges Blackman. “All members of the organization who may develop, acquire, or deploy AI should be aware, including HR, marketing, strategy, and so on. This will require not only training, but also the development of a culture in which this training will be adopted.”
Embed AI ethics into team culture. Teams such as product development “need not only knowledge about the problems and how, in principle, to solve them, but also concrete tools and processes for diligent ethical risk identification and mitigation,” he says.
Make sure AI ethics experts sit on an AI ethics committee. “While standard processes, practices, and tools are helpful, they’re only a first line of defense” in a complex business like AI, Blackman says. “To seriously address ethical, reputational, regulatory, and legal risks, we need experts in the room. It would be unwise and unfair to give data scientists, engineers, and product developers primary responsibility for identifying and mitigating ethical risks in products.”
Introduce accountability. There must be a sense of accountability for the tools and processes introduced in AI. “Personnel must be held accountable for using these tools and following the processes, and face penalties ranging from withheld bonuses, to disqualification from promotion, to dismissal,” Blackman stresses. “Relatedly, we need financial incentives for taking these issues seriously.”
Measure everything. Key performance indicators that tie AI ethics to business goals are essential. “We need to track both the extent to which the organization adopts new standards and the extent to which adherence to those standards identifies and mitigates the risks those standards aim to address,” Blackman says. Determining quality KPIs for your actual ethical performance should be the product of your AI ethics statement and your AI ethics case law.
Obtain management sponsorship. AI ethics needs leadership, in the form of a person “responsible for overseeing the creation, deployment, and maintenance of an AI risk management program,” says Blackman. “The excitement around AI ethics among young people is a wonderful thing, but there is no viable and robust ethical risk program without leadership and ownership from above.”