Carla Cico, Chairman of the Board, Vendis
AI has been increasingly adopted across many sectors that have benefited from its use: healthcare, energy, and financial services, to name just a few. However, for me the most important questions are not the operational arguments for and against AI technologies, but rather concern the nascent debate on AI ethics, potential regulatory models, and the various types of governance that could be applied to monitor its use.
AI: cure or curse?
Nowadays, not a day goes by without AI being mentioned on TV or in major newspapers. However, many of these discussions are colored by excessively alarmist or utopian perspectives, which respectively magnify the potential threats posed by AI or wax lyrical on the benefits society will reap from its use.
Bodies and institutions ranging from international organizations to government agencies, along with countless sociologists and philosophers, are all involved in the search for an ethical application of AI. Are their approaches the right ones? Is strict control really needed to avoid disastrous consequences for humanity?
While I may not be able to offer definitive answers to these questions, I would like to draw attention to some of the incongruities and overreactions that accompany many discussions of AI and ethics, and to propose some alternative actions that might be adopted in their stead.
The EU guidelines on ethics set out a core principle stating that the EU must develop AI following a “human-centric approach”1:
“The human-centric approach to AI strives to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights, including those set out in the Treaties of the European Union and the Charter of Fundamental Rights of the European Union, all of which are united by reference to a common foundation rooted in respect for human dignity, in which the human being enjoys a unique and inalienable moral status. This also entails consideration of the natural environment and of other living beings that are part of the human ecosystem, as well as a sustainable approach enabling the flourishing of future generations to come.”
As for the United Nations, their guidelines build upon the deliberations of the Chief Executives Board for Coordination on “Principles for the ethical use of artificial intelligence in the United Nations system”2:
- Right to privacy, data protection and data governance
Individuals’ privacy and rights as data subjects must be respected, protected and promoted throughout the life cycle of artificial intelligence systems. When the use of artificial intelligence systems is considered, adequate data protection frameworks and data governance mechanisms should be established or enhanced, in line with the personal data protection and privacy principles, also to ensure the integrity of the data used.
- Sustainability
Artificial intelligence should be aimed at promoting environmental, economic and social sustainability. To this end, the human, social, cultural, political, economic and environmental impacts of such technologies should be continuously assessed, and appropriate mitigation and preventive measures should be taken to address adverse impacts, including on future generations.
The extracts above are just a small sample of the regulatory and policy frameworks still being developed, frameworks that will eventually become the rules designed to limit the potential negative impacts that the use of AI could have on society.
Will regulation be effective?
Concerns about security, data protection and privacy have become well-trodden conceptual ground. Since the mass take-up of internet access, such concerns have been faced by a multitude of entities and indeed continue to be a major obstacle to regulation in all sectors, not just AI. Private companies have spent billions of dollars to mitigate these risks, while regulatory agencies continue to adapt and develop new regulatory strategies. However, they have not been entirely successful; for example, we all know that any time we log in to a site or simply browse online, our privacy can be breached, regardless of the level of security of the specific site or of the device used to log in.
The same goes for all the other praiseworthy principles listed by both the EU and the UN: safety and security, fairness and non-discrimination, sustainability, the right to privacy, data protection and data governance, child protection, and so on are the same principles that sit at the core of policymaking in the “analogue” (offline) world. Why should we expect these issues to be solved in the digital arena when they remain unsolved in the analogue world?
A new approach is needed
When considering a new way ahead, we must understand that AI is not an independent machine or creature. AI remains dependent on humans for the data on which it is trained, the algorithms it uses, and the fields in which it is deployed. This fact has largely been ignored in the polarized debate to date, in which AI and human society are treated as if they were separate entities. Whether society aims to benefit from AI or to protect itself from it, AI is always seen as separate and independent from the society it may affect.
Instead of issuing new rules and regulations to resolve, in the field of AI, issues that have been part of human society since time immemorial, the entities involved in regulating AI and ethics should take a more practical approach, focusing on what can be achieved rapidly and with concrete, positive outcomes. This principally entails making citizens knowledgeable about what AI is and how it works. The aim should be to promote a nuanced and balanced debate on AI, free from the fear-mongering propagated by mainstream information channels, while informing citizens about the potential negative effects of excessive reliance on cyberspace and online interaction. Regulators should focus on controlling the areas most at risk while making individuals knowledgeable, responsible and accountable for their actions in using and deploying AI.
User education is key
This is the crux of the issue: the average internet user does not have a developed understanding of digital technology in general or of AI specifically. To redress this, age-specific campaigns and training courses must be sponsored by Governments and/or Governmental Agencies to explain the basics of AI and to increase users’ awareness of their decisions and actions. Users should understand that they are as accountable and responsible for breaches of privacy (whether the data are their own or those of others) as the companies that are supposed to guarantee it. The publication of potentially sensitive personal data, such as photographs of underage children, is just one example of how adult users can be unaware of the risks that online exposure brings.
The vulnerable need special protection
A specific emphasis should be placed on children. Why is it that young people can get a driving license to venture out into the real world only from about the age of 16, yet toddlers are allowed to navigate freely an online world – which in many cases can be more dangerous – from the moment they understand how to use a touchscreen? In the former case, we all agree that certain limitations on individual freedom are necessary to protect individuals from dangers they may be unaware of or not yet able to properly understand, yet such sensible concerns are seemingly ignored in the digital realm. Limiting digital exposure at an early age would bring a host of benefits, ranging from better social development to safer interactions and accelerated learning. In the meantime, young people would learn how to navigate the digital realm and become responsible internet users.
The internet allows information to be exchanged easily and anonymously. To redress the negative outcomes that stem from this, such as criminality, terrorism, and illegal pornography, international cooperation between public and private entities should be instituted, with the support of local Governments, agencies and police forces, as well as large technology companies.
Finding the right balance
I believe that we need to find the right balance and adopt a more down-to-earth approach to AI. This will achieve more concrete results than the declarations of wishful thinking that dominate current efforts at AI regulation. These utopian wish-lists, while reflecting a general aspiration, are usually very difficult to implement, or simply shift the burden of implementation onto the private sector while governmental agencies avoid taking concrete action.
AI is the latest creation (for the time being) in the long line of human innovations. Just as with previous technological advancements, its potential effects will prove to be either negative or positive depending on the ethics, regulations, and situations of its implementation rather than the technology itself. It is up to us humans to continue to develop AI and to achieve even more amazing outcomes, while also being guided by a desire to preserve and maintain the best of human society and ethics.
1. EU guidelines on ethics in artificial intelligence: Context and implementation – Briefing, 2019
2. Principles for the ethical use of artificial intelligence in the United Nations system, United Nations System Chief Executives Board for Coordination, 27 October 2022
Carla Cico has many years’ experience as a CEO in both public and privately held companies. With a strong international background, backed by M&A and extensive management experience, she has been a pioneer in both digital transformation and ESG implementation. She has served as a member of many Boards, and as Chair of Committees, in listed and unlisted companies, companies backed by private equity, and startups. She is a sought-after speaker at many conferences and summits, with a focus on Telecommunications, Management and Strategy, Digital Transformation and developing countries. She was the first South American female CEO in the Telecom sector.