Artificial intelligence: rewards, risks and regulation

From unknown power to trusted tool, building the future of AI is all about asking the right questions. 

At the ISO Annual Meeting, ISO and its members join forces with global change makers to showcase how International Standards help tackle the world’s greatest challenges. Published in the lead-up to the 2023 edition, our series of guest articles provides insight into topics discussed during the week.

Dr Kobi Leins
Honorary Senior Fellow, Department of War Studies, King’s College London

In everyday life, the most common conversation about artificial intelligence (AI) goes along the lines of, “I used ChatGPT and it did x”. Corporate leaders, governments and international organizations, however, are having a very different conversation. Theirs is about how the benefits of AI can be made to outweigh the risks. 

Some argue we need to regulate AI urgently, others compare it to the advent of nuclear technology, and some even warn that it will end the world. At the same time, many consultants and start-ups would have us believe that AI is a cure-all for our commercial and personal ills, including love, life and lethality. It is too early to draw firm conclusions, but it is important that the right people are having the right conversations. Only then can this groundbreaking technology support and empower humankind. 

Asking the right questions 

The truth is, there are many conversations about AI that we should be having but are not. These include the broader societal implications of accelerating inequality and of reducing people to data points, to the point where they can be treated as redundant or of no value. Every development in science throughout history has carried both benefits and risks. In fact, historical failures can teach us lessons that help us avoid repeating the same mistakes. AI, although distinct in some ways, poses many of the same potential pitfalls as previous paradigm shifts. Overpromising, underplayed risk and commercial interests swaying the conversation are not new. So what is new? And why should we care? 

Much of what we are talking about is old. Language models have been around since Weizenbaum, creator of Eliza, one of the first chatbots, described the magical thinking that grows up around them in the 1960s. More recently, the data science community itself raised concerns about some proposed uses of GPT-2, including automating sentencing, potentially up to the death penalty, without human intervention. Although the technology is now supercharged on far larger datasets, many of the old issues remain. What is new is the speed and scale of these models, and where their data is coming from. 

Governance 

The good news is, a whole governance toolbox exists already. This includes international and national legislation around intellectual property, corporate behaviour, human rights, discrimination, contracts and privacy – just to name a few. Many experts around the world, such as Prof. Edward Santow, have long advocated for the upskilling of lawyers so they can understand and apply both existing legislation and new technologies within their profession. 

In parallel with legislation, however, further regulation should also be considered. Some regulatory frameworks are already in place, such as the recently formulated EU AI Act, the AI Risk Management Framework from the US National Institute of Standards and Technology (NIST), and China’s new policy on AI. But some need updating or revision, and there are gaps. And where there are gaps, we should regulate. 

Mitigating risk, maximizing reward 

The fact is that we cannot think about AI risks along conventional lines. Andrew Maynard, professor at Arizona State University and a longstanding expert on risk, stands firm on this: traditional thinking just doesn’t “get us to where we need to be”. 

International Standards, such as those developed by ISO/IEC JTC 1/SC 42 on AI management, will help to bridge these gaps in regulation. They empower decision makers and policymakers to create consistent data and processes in a way that is auditable. This will add long-term value to businesses in many ways, including for environmental reporting, operability and credibility with stakeholders. This approach will help ensure that rewards outweigh risks, in line with regulations and other governance tools. 

Data ethics also has a role to play. Applied properly, it can help to foster a desire, from leadership decisions to everyday tasks, to “do things not just because you can, but because you should”. 

But most importantly, International Standards can ensure that the right conversations are being had by the right people – using a shared language. It may take time to build the regulatory tools and the culture we need. But International Standards can help ensure we strike the right balance of risk and reward. 

About Kobi Leins

Dr Kobi Leins (GAICD) is a global expert in AI, international law and governance. A researcher in digital ethics, prolific speaker and author of numerous publications, she has played a pivotal role in advancing AI understanding. Her work bridges innovation and real-world applications, making complex concepts accessible to a diverse audience.

Kobi Leins will be speaking at the upcoming ISO Annual Meeting. Get in on the conversation! Join our online session “Ready or not, here comes AI” to uncover the political, social and ethical implications of artificial intelligence.

About the ISO Annual Meeting

The ISO Annual Meeting is the world’s premier event for the international standards community. It convenes 168 national standards bodies from around the world, as well as an impressive range of government, industry and civil society representatives. This high-level forum is a unique opportunity to engage in timely discussion on emerging trends and challenges related to International Standards and their role in achieving the global sustainable development agenda.
