Embracing the power of technology

By Kath Lockett

Just how worried should we be about killer robots? Amidst all the talk about how artificial intelligence (AI) is threatening society, some experts believe AI shouldn’t be feared. Here’s why we can embrace the power of technology.

Artificial intelligence (AI) is everywhere. AI recommends movies and restaurant choices, prevents cars from crashing, books flights, tracks taxis, identifies financial fraud and creates playlists to work out to. In the 1950s, AI was defined as machines operating in ways regarded as “intelligent”, that is, performing tasks on a par with humans. Since then, computer use and data generation have increased enormously, with current estimates putting output at 2.5 quintillion bytes every day.

[Photo: Hands of a woman using a smartphone.]

Much of this data is information collected from the daily use of mobile phones, social media and the Internet. It is commonly known as “big data”, and this is where AI steps in to help. AI uses machine learning to analyse this data in real time, at a speed and volume no human ever could. Not surprisingly, the private sector has embraced AI and increasingly uses it to gain more accurate information on purchasing behaviour, financial transactions and logistics, and to predict future trends.

The United Nations recognizes the power of AI and is working with the private sector on “data philanthropy” so information such as surveys, statistics and consumer profiles can be used for public good. For example, researchers are using satellites and remote sensors with AI technology to predict extreme weather events that affect agriculture and food production in developing countries.

With this in mind, ISO – in conjunction with its sister organization, the International Electrotechnical Commission (IEC) – has identified the need to develop standards for AI that can benefit all societies. The ISO/IEC JTC 1/SC 42 subcommittee for artificial intelligence was established two years ago and has already published three standards relating to big data, with 13 other projects in development. Chaired by business and technology strategist Wael William Diab, it will develop and implement a standardization programme on AI to provide guidance for other ISO committees developing AI applications.

Setting boundaries

SC 42’s scope covers the breadth of AI development, including basic terminology and definitions, risk management, bias and trustworthiness in AI systems, robustness of neural networks, machine learning systems, and an overview of ethical and societal concerns. Twenty-seven member countries are participating in this programme, with another 13 countries observing. Ray Walshe, Assistant Professor of ICT Standardization at Dublin City University, Mr Wo Chang, Digital Data Advisor for the Information Technology Laboratory (ITL) of the National Institute of Standards and Technology (NIST) in the United States, and Dr Tarek Besold, Scientific Advisor of Neurocat in Berlin and Chief Behavioural Officer (CBO) at Telefonica Innovation Alpha Health in Barcelona, are three key members of this committee. Do they identify with Peter Parker when he became Spider-Man? With great power comes great responsibility.

[Photo: Industrial robotic arm picks cardboard boxes off a conveyor belt in a warehouse.]

Dr Besold isn’t daunted. “AI is a new and fast-changing field, full of innovators and disruptors. We need to define the state of the art and establish common-sense definitions of AI mechanisms and technologies. Yes, developing norms and standards is a big task, and interoperability is vital because AI is so far-reaching. AI is part of many futures as a tool rather than the leader.”

SC 42 is “building from the ground up,” says Chang. “We provide interoperable frameworks and performance tools in the form of standards on AI and big data, which can then be shared with government and private enterprise. These frameworks set the AI ‘boundary conditions’, which can be defined using probabilities to determine the risk factors. They are not just boundaries, but a safety net that applies risk management to their implementation.”
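
To make Chang’s “boundary conditions” concrete, imagine a gate that combines an estimated probability of failure with the severity of its consequences, and refuses to act when the combined risk exceeds an agreed tolerance. The short Python sketch below is purely illustrative: the multiplicative risk score and the tolerance value are assumptions for demonstration, not anything prescribed by SC 42.

```python
# A minimal, illustrative "boundary condition" gate. The risk model
# (probability x severity) and the tolerance are assumed for demonstration;
# they are not prescribed by any SC 42 standard.

def risk_score(failure_probability: float, severity: float) -> float:
    """Combine an estimated failure probability (0-1) with severity (0-1)."""
    return failure_probability * severity

def within_boundary(failure_probability: float, severity: float,
                    tolerance: float = 0.05) -> bool:
    """Return True if the estimated risk stays inside the agreed boundary."""
    return risk_score(failure_probability, severity) <= tolerance

# A low-stakes action passes; the same failure probability with severe
# consequences trips the safety net.
print(within_boundary(0.10, 0.2))  # True  (risk 0.02 <= 0.05)
print(within_boundary(0.10, 0.9))  # False (risk 0.09 >  0.05)
```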

It remains up to governments around the world to decide what they regulate. Ray Walshe says that “the public needs to recognize that there is a difference between standardization, legislation and regulation. Ninety percent of the world’s data has been generated in only the past two years. This is an incredible mountain of both structured and unstructured data to be stored, aggregated, searched and correlated for the myriad businesses, governments and researchers who provide tools and services. Governments and private industry will often use International Standards as a reference for regulation, to ensure that industry, societal safety and ethical concerns are met”.

Tricking AI

The safety of data, and how it is used, remains a concern in society, especially when the dreaded “computer error” is mentioned. Mathematics emerges as the crucial ingredient. Dr Besold says AI programs play a “numbers game”, with researchers generating attacks and defences on AI systems, trying to “trick” them and developing solutions to the problems they discover.
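
At its core, this numbers game often means nudging an input in the direction that most increases the model’s error and checking whether the prediction flips. Here is a minimal, hypothetical sketch of that idea on an invented linear classifier; the weights, input and step size are toy values, a simplified stand-in for the gradient-based attacks researchers mount against real systems.

```python
# Minimal, hypothetical sketch of an adversarial "trick": perturb an input
# in the direction that most increases a linear classifier's error.
# The model weights, input and step size are invented for illustration only.
import numpy as np

w = np.array([1.5, -2.0, 0.5])   # toy linear model: score = w . x
x = np.array([0.4, 0.1, 0.8])    # an input the model classifies as positive

def predict(x):
    return 1 if w @ x > 0 else 0

# The gradient of the score with respect to x is simply w, so stepping
# along -sign(w) lowers the score as fast as possible per unit of change.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)

print(predict(x))      # 1: the original input is classified positive
print(predict(x_adv))  # 0: a small, targeted nudge flips the decision
```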

AI is highly specific, meaning it is tailored to one particular task, Besold says. “AI takes away the time-consuming and boring programming from people, but it still needs rules and measures that are set by humans. Apply safety boundaries to the self-driving car and it’s obvious that this technology needs safeguards and standard definitions. Is it an acceptable risk to run over an elderly person or a small child? Neither is acceptable, of course, and we want to help governments and industries accept and use the measures we recommend.”

[Photo collage: A low-battery warning in an electric car and an aerial view of a traffic jam.]

“Probability in risk assessment is the key phrase,” Wo Chang agrees, and he uses cats as a rather powerful example: “Take image recognition. An effective system will flag an error, and shut down, when it encounters something it has not experienced before. The system has been given millions of pictures of cats and dogs, so its ability to differentiate between them is finely tuned. But it has been trained under well-defined conditions, and it’s impossible to model for everything. What happens if it comes across a cat wearing a bow tie? If one part of a picture is changed, the outcome can be very different. A ‘bug’ (or a bow-tie-wearing cat) that falls outside the trained environment should trigger a safety constraint to avoid failures. In more serious applications, thorough testing can determine the probabilities and shut the system down before it makes catastrophic decisions.”
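
One plausible way to implement the shutdown Chang describes is a confidence threshold: if the classifier’s best class probability falls below an agreed level, the system abstains rather than acting on an input it was never trained for. The sketch below assumes softmax-style class probabilities and an arbitrary 0.9 threshold; both are illustrative choices, not requirements drawn from any standard.

```python
# Illustrative safety constraint: abstain when the classifier's best class
# probability falls below a threshold (e.g. a bow-tie-wearing cat producing
# an unfamiliar, low-confidence prediction). The threshold is arbitrary.
from typing import Optional

LABELS = ["cat", "dog"]

def safe_classify(probs: list[float], threshold: float = 0.90) -> Optional[str]:
    """Return the predicted label, or None (shut down/abstain) if unsure."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    if probs[best] < threshold:
        return None  # unfamiliar input: trigger the safety net
    return LABELS[best]

print(safe_classify([0.97, 0.03]))  # 'cat' -> confident, proceed
print(safe_classify([0.55, 0.45]))  # None  -> unfamiliar input, abstain
```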

Trust your data

With AI used in potentially sensitive areas such as healthcare, surveillance and banking, there remains the risk that human bias affects the data. Dr Besold acknowledges this. “There is bias in AI, but we can agree on a standard definition to address it. Regulators may decide that a 5/10 bias is acceptable for soap dispensers, but certainly not for self-driving cars.”
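
Speaking of a “5/10 bias” presupposes an agreed way to measure bias in the first place. One of many possible metrics, used here purely for illustration, is the demographic parity difference: the gap in favourable-outcome rates between two groups. The groups and loan decisions below are invented.

```python
# Illustrative bias measurement: demographic parity difference, i.e. the gap
# in positive-outcome rates between two groups. This is one of many possible
# metrics; the data below is invented for demonstration.

def positive_rate(outcomes: list[int]) -> float:
    return sum(outcomes) / len(outcomes)

def parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """0.0 means equal outcome rates; 1.0 means maximal disparity."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Loan approvals (1 = approved) for two hypothetical demographic groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved
print(parity_difference(group_a, group_b))  # 0.375
```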

In the medical field, he says, government and society need to decide whether we are comfortable living in a validated world. Are we OK with using data that comes mostly from the first world, for the first world, in the first world? Do regulators accept that the resulting systems can only be applied to those populations, or insist that they must work for everyone in the world, even if they will be statistically less accurate?

“Look at organ transplants. AI could potentially have access to all available medical records across the world and apply an enormous range of measures to determine which person gets to the top of the list, ensuring less rejection of transplanted organs and much better medical outcomes. However, if you are on a transplant list and realize that other people are receiving organs ahead of you, are you willing to accept the data used to make that decision?”

Trustworthiness is vital. The committee and researchers in the field need to look at how other sectors, such as medicine and automotive engineering, apply such measures and earn the trust of government and wider society.

Emerging machine-learning applications are starting to address the more pressing needs of the developing world, according to Wo Chang. “In Africa, access to energy is a big problem in rural areas. With a large uptake of smartphones there, apps are being developed that can diagnose basic medical problems in remote clinics and provide preliminary information such as weather forecasts, soil quality readings and agricultural tips.”

Fears and phobias

Despite these advances, much of the general public still sees AI as a scary development, imagining robots becoming Schwarzenegger-like “terminators” that replace human beings. “This won’t happen in my lifetime,” Ray Walshe says. “Don’t get me wrong, AI is a game changer and is capable of doing very precise jobs very fast. This is impressive and generates huge cost savings, but it’s known as ‘narrow intelligence’. The human brain is capable of doing that ‘narrow’ task, but also thousands of other ‘broader’ and more complex tasks.” Robotics is one of the most exciting areas for AI development, but machines capable of “Terminator”-style artificial general intelligence remain a myth for the foreseeable future.

[Photo: Engineer works with a HoloLens headset to place a virtual robotic arm into the production line.]

“AI is still more of a promise than an achieved feat,” agrees Dr Besold. “The research side is progressing faster than the application side. Robotic arms in factories can only do what they are programmed for and there’s no ‘intelligence’ in this. If a change is needed, such as working on the other side of the car, it requires a change in programming that involves a human being.”

Dr Besold says that AI developers need to engage more with society to provide transparency, while Chang believes that the standards developed by the committee to address system robustness, data quality and boundaries will increase trust and make it easier to interact with a variety of data repositories.

All three committee members see jobs changing rather than disappearing. AI will take on more manual work and routine tasks, such as drafting standard contracts and documents, giving people more time to concentrate on skills involving empathy, “bedside manner” in medical treatment, ethical matters and lateral thinking. Opportunities will arise for re-education and for more challenging and interesting work.

“How ironic if increased use of AI in workplaces resulted in reviving union movements,” Dr Besold says. “If you’re at a school or hospital, then using AI for logistics or declarative knowledge such as facts, dates and figures may result in less staff time per week. Do governments and employers fire some staff or do they negotiate a shorter work week for a more balanced life? This is where consensus is needed: what’s the biggest benefit to society?”

New horizons

Future trends and benefits for AI will include more hands-free applications, according to Wo Chang. “Wearing smart glasses will enable users to look at something like a broken washing machine and get information on what is wrong, where the problem is located and how to fix it. For tourism, you’ll be able to look at a building and find out its history, function and the services it still provides while you are standing in front of it.”

[Photo: Woman with a wearable computer in the form of smart glasses.]

Smart glasses aside, Chang has loftier hopes. “When government and businesses keep their citizens and customers at the forefront and learn how to leverage the best of AI and their people, it will be a bright future indeed.”

Ray Walshe has a personal interest in seeing how AI can be used to help reach the objectives outlined in the United Nations Sustainable Development Goals, a universal call to action to ensure peace and prosperity for mankind. “How can AI be used to help alleviate poverty, hunger and malnutrition worldwide, improve water and sanitation, promote equal opportunities in education, work and gender, and accelerate development in developing nations? These are major challenges that require disruptive and game-changing technologies and expert collaboration on a global scale.”

We need to do more than put cat ears on friends’ social media selfies, Dr Besold says. “My hope for the future is that actual applications of AI will result in more effort being put into logistics that help in the fields of medicine, agriculture, climate change and scientific discovery – important applications that will benefit society.”

It seems the ISO/IEC JTC 1/SC 42 subcommittee for artificial intelligence will be busy.
