Introducing a fascinating exploration of ethics and inclusivity in AI, and how to deal with the ever-accelerating pace of technological change, Chaesub Lee, Director, ITU’s TSB, outlined how the AI for Good Global Summit is bringing together sister UN organizations and AI experts in international dialogue to address the concerns and complexities of AI and develop sustainable, positive use cases for good.
“Our world is now highly interconnected, and working together is the only way to reach a meaningful and sustainable quality of life,” he said.
Due to its ageing population, Japan has unique experience in the practical application of AI, according to Toshiya Jitsuzumi, Research Professor, Chuo University. In order to “respect dignity, diversity and sustainability,” it is important to develop guidelines and common understanding among all stakeholders – but with the proviso that principles of transparency and fairness may vary between nations and cultures.
Because technological development moves much faster than policy makers or regulators, it is essential to involve private stakeholders as far as possible in this collaborative approach. Implementing existing rules and regulations in a market economy will be challenging in the coming age, and there is a real danger of today's data giants controlling the entire ecosystem, with damaging effects on competition, innovation and transparency.
It is important to establish trust: to educate people on what AI can and cannot do, help them understand how to interact with AI decisions and behaviour, and explore how AIs will interact with other AIs in a future that is not so far away.
For IBM’s Liam Benham, VP of Government and Regulatory Relations, AI is already pervasive today, from chatbots, home assistants and search engines in everyday life to specialized applications in areas such as insurance, natural disaster mitigation and healthcare services. “Technology must go hand in hand with trustworthy AI,” he said, based on the clear principle that “AI is here to augment, not replace, human intelligence.” Humans will remain in control, because for all their mind-bending mathematical powers, machines cannot replicate our judgement, intuition, imagination or morality.
Establishing that trust may not be easy. There are issues of data ownership and privacy, and “AI systems must be transparent, explainable and address bias upfront. We must see inside the black box to demystify” and build trust. Ethical guidelines developed by the EU focus on respect for human autonomy, prevention of harm, fairness and explicability, he stated, with seven principles applicable to new products and applications: technology must be human centric, robust and safe, ensure privacy and data protection, be transparent, ensure fairness and diversity, address environmental and social well-being, and be accountable and open to audit. Piloting these ambitious principles at IBM will allow for any adaptations or tweaks to close the gap between theory and practice.
Business and government must work together to ensure that AI is genuinely for all. And the usual regulatory balancing act between protecting the end user and stifling innovation also applies here.
AI-powered innovation is changing the way we live and the way SMEs work, according to Jeannice Fairrer Samani, Managing Director, Fairrer Samani Group, LLC. By 2020 more than 60% of organizations will be using AI in some form, driven by big data, and boosting dexterity and digital frameworks in use cases as diverse as agriculture, manufacturing, transport and education. Educating families and young people on AI is important to ensure that the technology is both good and really for all.
Echoing Benham, Fruzsina Tari, Innovation Manager, AImotive, underlined that “AI is a cross-sectoral tool for developing applications and services, not the goal itself”, so a cross-sectoral approach to regulation and ethics is essential. “We have to keep up with industrial sector standards and meet these standards using AI, but we also need to have new standards and approaches to interpret in this traditional system.” The huge complexity of AI-driven tech based on the perception and interpretation of the environment used in autonomous vehicles calls for caution, collaboration and regulation to ensure both safety and trust.
Gábor Varga, National Technology Office, Microsoft, reminded the audience that technology can be used for good or evil. “It is not a question of what computers can do, but what they should do,” he explained, and we have the power to harness AI to tackle and solve some of the biggest problems of humankind in a sustainable manner. Rationalizing all the ethical guidelines and principles produced by business, government, academia and international organizations is a challenge, but practices need to be put in place to prevent unethical use of AI. He also cautioned against heavy-handed regulation, while noting that certain hotspots within the broad spectrum of AI, such as facial recognition, may need more control. “The ethical principles of what can and can’t be used is just the beginning: we have to move on to think about how to implement all these principles,” he urged.
But who should be responsible? Government, according to IBM’s Benham: “With something as powerful as AI, with unintended consequences, government is the final arbiter of what is right. But government needs to be cautious and not rush to overregulate.” Business then needs to demonstrate that it can “walk the talk” in terms of ethical guidelines.
For Fairrer Samani, “innovation is in the wild, so AI policies should be public policies to maximise benefits and minimize risk,” with SMEs, academia and corporations engaged in policy creation in a transparent and inclusive process.
Transparency is also key to convincing the public that this technology is being developed responsibly, said Tari, by providing information, for example, on how an autonomous car has learnt and been tested, and how it works. If we build a reliable and trustworthy ecosystem, it will be more attractive to the public – and public trust translates into commercial benefits.
Inclusiveness means focusing also on multiple languages and translations, so that AI is not limited to certain groups or nations as it develops – an important step in avoiding a deeper AI digital divide opening up.
Looking to an ever-nearer future, Benham warned that “The real game changer will be quantum computing and high performance, very powerful systems. We need to make sure that the foundations of ethics and fairness are deeply embedded.”
New technologies and applications will bring very new situations – so we must expect challenges and failures, and “learn from bad practices as well as good practices” in cooperation with other stakeholders and countries, recommended Jitsuzumi.
Collaboration, integrating processes throughout the chain of developers, designers and consumers, will accelerate the pace of innovation in general – but for AI to be good for all, it must be inclusive, trusted, affordable and transparent.