Q&A | “The Threat of AI Is Not in the Future”: Message from an AI Ethics Leader

Source: CIO Magazine

Paul Dong, responsible for developing a framework for ethical AI at NatWest Group, highlights critical ethical risks that CISOs and boards should heed, stressing the importance of preserving human agency. As AI systems grow more sophisticated, they can erode human decision-making, a loss of agency he identifies as a serious risk that must be actively managed. Alongside this, the robustness of AI responses must be ensured, since inconsistent outputs pose significant challenges.

In his conversation with Champions Speakers Agency, Dong argues that large organizations need ethics committees. These committees should comprise diverse executives with a deep understanding of customer needs and ethical decision-making. He asserts that they should not rely solely on IT systems but should facilitate discussion among stakeholders so that ethical concerns are addressed adequately. Every large organization should also appoint a responsible AI executive to oversee risks across the entire AI application lifecycle.

Dong also critiques the slow pace at which regulators are responding to AI risks, advocating for clear regulations akin to the traffic rules that make driving safe. While executives prioritize profitability, government involvement is crucial in determining what constitutes acceptable risk. Discussions around AI regulation must center on the public interest as expressed through elected institutions, not be driven by technology companies alone.

👉 Read the original: CIO Magazine