Dóra Petrányi, CEE Managing Director and Co-Head of the TMC Group, CMS, and Francesca Rossi, IBM Fellow and AI Ethics Global Leader
The following is an edited transcript of a GTWN/CMS webinar on AI which took place on 8 November 2023.¹
Dóra: AI is a very hot topic nowadays. Francesca, could you start with an update on what you see as the current highlights concerning AI and some of the AI-related work you have been involved in recently?
Francesca: IBM delivers technology to many corporate and government clients. Over the past year the priority for IBM has been to understand how to use generative AI for our clients and for many kinds of applications. We have been in close discussion with our clients to understand what can be useful to them in using generative AI for their internal operations and services.
The increasing attention on generative AI resulted in the building of an IBM large language model (LLM) as well as watsonx, a platform that can be used with any LLM to build an AI solution. We have also been doing a lot of work in evolving our AI ethics framework to account for the expanding number of new risks that generative AI and LLMs create, including misinformation, deep fakes and hallucinations. These new risks come in addition to the well-known AI risks such as fairness, explainability, transparency and privacy, and they have emerged because of the ability of generative AI to create its own content.
I’ve also been working with some multi-stakeholder organisations like the OECD. Yesterday, for example, there was a meeting of the OECD.AI Network of Experts which tries to envision what the future could be with AI by building a taxonomy of the risks and the benefits of AI along different dimensions. For example, we consider how important each risk or benefit is and how easy it would be to mitigate those risks or to achieve those benefits.
Recently I also participated in a high-level meeting on AI safety. There is increasing attention on the additional safety risks as AI models develop more and more capabilities. The UK AI Safety Summit² was held on 1-2 November, shortly after the US Administration published President Biden's Executive Order on AI safety³. Until recently these risks were not being considered at the highest policy level, but now that they are receiving such a high level of scrutiny, they may bring about an evolution of AI frameworks from companies, regulators and civil society.
Dóra: I know that you also made a very special visit to the Vatican in early 2023, related to the renewal of the Rome Call for AI Ethics. Can you tell us more about this visit?
Francesca: The Rome Call for AI Ethics⁴ is a document that was initially published by the Pontifical Academy for Life in February 2020, just before the COVID-19 pandemic started in Rome. It sets out several principles and values that should be considered when building and using AI, including human-centredness and digital inclusion, so that nobody is left behind and AI is developed responsibly.
But at the beginning, in February 2020, it was signed by only two companies, Microsoft and IBM, as well as some other institutions like the European Parliament and the Italian Parliament. Almost three years later, in January 2023, we went back to Rome and this time we met with the Pope, who gave a 10-minute speech on AI ethics and the value of using and building AI in a way that is centred around human beings and humanity. This was a historic occasion because the Rome Call was signed by representatives of the three Abrahamic religions, Judaism, Christianity and Islam. I think it was the first time that these three religions had signed a document concerning the right approach to a technology.
IBM and Microsoft renewed their commitment and we described what we have been doing around those values and those principles in the intervening three years and what we plan to do in the future.
Dóra: AI has been with us for some time and you’ve been researching it for many years. But with generative AI tools especially, I think it’s fair to say that what used to be more niche and for experts only is now centre stage. I find it fascinating that the Vatican issued the Rome Call three years ago, before there was such global interest in AI. This shows how forward-thinking the Vatican actually is, and that the emphasis on human-centred use of technology is extremely important when we talk about AI.
New AI risks
Dóra: I think it’s fair to say that many of the early discussions on the risks of AI and AI ethics addressed the old concerns regarding bias, discrimination and transparency. There is now also a new wave of concerns about truth, data accuracy, and security and safety, to mention just a few.
Francesca: The fact that these models have the capability to generate content expands some of the well-known risks that we were considering earlier with the more traditional AI approaches. For example, bias can occur not just in the decisions made by classifiers or predictive models; it can also appear in the actual content that is generated – in the pictures and in the text. Explainability was already challenging with traditional machine learning approaches. Now it’s even more of a challenge, since generative AI is based on unsupervised learning rather than supervised learning, where humans label the data and the examples in the training data. These models generate their own content, the provenance of which is not easy to trace. If an LLM generates a piece of text, it is not easy to trace where this came from.
There are also new issues such as the phenomenon of so-called ‘hallucinations’, where the model can generate content that is not true but looks very plausible. It can generate content that is not true even if everything in the training data is true and correct, because of the way it combines different elements of the training data.
It is important to remember that these systems are not built to distinguish true from false content. They are built to generate one word after another, in the case of an LLM, or one pixel after another, in the case of text-to-image models. Every word is just the most probable word given the context and the words that have been generated so far. This is all they are trained to do well, and they can master natural language very well in this way. But it doesn’t mean that they have a perfect model of the world which would generate only true and correct content.
Dóra: From a legal perspective, it’s very interesting for me to see how different authorities are taking different approaches when it comes to protecting users and raising awareness about, for example, hallucinations and the limitations of the current models.
Francesca: True. And we must not forget there is also the complex issue of copyright. There could be copyrighted material in the training data that the LLM uses to create its own content, but the generated output does not trace back to the source and does not indicate that it draws on copyrighted information in the training data. Further, if the input contains confidential information, proprietary code for example, and this is put in the prompt, then it could be used to retrain the system and could therefore be reproduced for other users. In that case it is no longer protected, as it is shared where it should not be shared. That happens because people may not be aware of what could be done with the data that they write in a prompt.
Different cultures, different approaches to AI
Dóra: I also think it’s important to see that this is a global issue. You mentioned the UK AI Safety Summit, which is a perfect example, where 28 nations signed the Bletchley Declaration. There are so many cultural and societal differences in our world, and when it comes to AI, how it is used and the data it is fed, we need to be very mindful of these differences. It is very encouraging to see that there seems to be an emerging global consensus about overriding principles like safety and human-centredness, to highlight just a couple.
Francesca: For the first time, China was at the table at the UK AI Safety Summit in a discussion about AI ethics and safety and signed this declaration together with the other countries. While this is a very encouraging first step, agreement on some principles does not mean that the same laws or the same approach will be used everywhere, and it is reasonable that this is so, because these countries have different cultures in which different values are prioritised over others. This implies that there are different priorities among the values to be embedded into AI, such as the importance given to individual privacy versus societal harmony.
We all have completely different legal systems, not just China but also the US and the European Union. We’ve already seen these differences emerge in the approaches to regulating AI. In Europe we have the draft European AI Act, which is very close to being approved and will apply to all the member states. In the US, the President’s Executive Order is not a law; it directs the various federal agencies on how to work on AI implementation, with a view to possible laws in the future. Within the US there has not been a uniform approach, with laws relating to the use of AI being passed in individual states or even in cities, including the city of New York.
In Europe we have the example of the GDPR for protecting privacy. The US does not have anything similar, so we cannot expect that an approach to AI will be completely uniform at every level. But there should at least be some discussion about the main principles which should apply to this technology when we build it and when we use it. These principles should come from society as a whole. It should not be a matter for individual companies to decide what principles and what values should be embedded in AI.
Dóra: These cultural differences also define the role of regulation. As a lawyer, I am always fascinated by how different parts of the world take totally different approaches to the same issue. I’ve always worked in technology and innovative areas of law, and this is probably why I like regulation that is the least intrusive while still providing enough regulatory certainty for all the parties involved: not only developers and those taking innovative approaches to issues, but also the potential users of these new technologies.
This is a very similar challenge to what we saw with the GDPR in Europe, which provides a single approach to privacy across the European Union. But the EU is still only one relatively small part of the world, especially in terms of population. What we have learnt from the development of the GDPR is that AI will affect the competitiveness of European business and traditional European values, especially the right to privacy. Given that AI is very much data-centred and data-focused (let me stress that this is the case at the moment), we can see AI regulation taking a very similar approach to that of the draft European Data Act. We see a risk-based approach, which I very much welcome. But even with these similar approaches, when you look at the definitions, there are still so many questions yet to be answered.
And when you think about the world beyond Europe, in the US, Africa or China, we can see how governments will potentially be approaching the very same questions differently. It might well be the case that rushing to regulate AI in these various jurisdictions could cause harm to those who want to do business there, while also not necessarily protecting the values that you would want to see protected.
The scope and timing of AI regulation
Dóra: I have read many of your articles and speeches on AI and totally agree with you that less is more when we are considering regulation of new technologies. I always caution against over-regulation. It’s very important to properly educate users and to set or confirm some basic rules, especially about risks, risk awareness, reliability and safety. These are included in the draft AI Act and I very much back that approach. Can I ask for your thoughts on regulation and its role where AI is concerned?
Francesca: I think some regulation is needed, but of course it should be done in a way that addresses the real risks and does not harm innovation or the competitiveness of individual countries, or of a whole region where the regulation applies, as in the case of the European AI Act. I also prefer a risk-based approach, in other words imposing more obligations where there is more risk. If there is low or no risk, there is no need to impose many rules. Only when you know how you’re going to use AI can you understand whether it is high risk or not. I therefore prefer an approach where risk is associated with the uses of AI and not with the technology itself. It makes sense to focus on the use cases.
The draft AI Act lists a number of high-risk AI applications, including those related to transport, education, employment and welfare, among others. Before putting a high-risk AI system on the market or into service in the EU, companies must conduct a prior “conformity assessment” and meet a long list of requirements to ensure that the system is safe.
However, there is a growing push to move upstream from the use cases to the technology, especially with regard to LLMs and the more open-ended models that can be used in many different ways. Some proposed amendments to the draft AI Act are based on the view that if the technology can be used in high-risk ways, then it is high risk and should be put under all the obligations relating to high-risk use cases. I think this would be a big mistake because it would really stop innovation. If these proposals are adopted, then companies in Europe, especially small companies, would not use LLMs, even in low-risk applications, because they would have to abide by all of the obligations that apply to high-risk scenarios. I would rather focus on the AI system, in other words the specific purpose for which the technology is to be used, and not on the AI per se.
Another reason not to focus on the technology per se is that this approach is not very future-proof, especially where the technology is evolving very rapidly. In two years there will be another form of AI, or an evolution of the technology, that the regulation does not cover. So you should focus on the actual uses of the technology, as the initial draft did. It mimicked product-based regulation, which assesses the level of risk by focusing on the product itself rather than on the pieces used to build the product.
Dóra: I totally agree with you that we should take that approach. To me the biggest challenge is that we don’t know what’s going to happen in two years’ time. But what we know for sure is that AI will be at a totally different stage than it is now. We have just started to see the potential of these LLMs and foundation models, so we need to allow for many other use cases.
When we consider how fast things are changing, we also need to consider that even if the AI Act were agreed in Europe this year, it would only enter into force in two years’ time, as AI developers need that amount of lead time to prepare. This is why you should be very, very careful when you pick the topics to regulate, or you could end up regulating something in a very niche form which no longer exists or is easy to circumvent. Technology in my view is always at least 2-3 years ahead of the law, and that is why we need to use our common sense. We need to go back to ancient, basic legal principles that I believe are very important to cherish and to protect when it comes to technology.
Francesca: Some people think we need AI regulation because there are no rules, but this is not true. For example, IBM has a lot of clients in the financial sector, which is very heavily regulated. The healthcare sector is also very heavily regulated whether you use AI or not. So another possible approach would be to adapt these sector-based regulations to the application of AI in those areas.
Dóra: One of the key challenges for sectoral regulation might be that all the sectors would take a common denominator, such as an overarching regulation, and then start deriving their own sector-specific rules from it. As you say, highly regulated industries already have a highly regulated environment around them, and many of the existing rules do provide protection against abusive AI use cases, which to a certain extent intersect with the main principles applying to that industry. That could actually help address the global challenge of how to introduce any regulation nationally or regionally when these questions are global and applications are available globally.
The role of market forces
Dóra: If regulation is only part of the answer to the challenges of AI, what else do you think we can do to enable the best of AI while mitigating the risks around it?
Francesca: Some regulation is needed, if it is done well, but it is just one of many complementary mechanisms to ensure that the technology is used in the right way. Companies have a part to play in terms of their internal processes, such as auditing and internal governance. There is also a role for international standards bodies, which have shown themselves to be very useful in creating some harmonisation and even in requiring companies to do things in a certain way.
Market forces will also have an impact on our clients, who are usually in very regulated environments. They ask us to provide assurance that our technology has the right properties, that it will be deployed in an inclusive way, and that the user is educated enough to use that technology in the appropriate way. Any team that is developing a use case always comes to the AI Ethics Board for a review before a deal can be signed with the client.
Companies also have a role to play in educating themselves, their clients and their partners in AI best practice. The technology is moving very fast, and companies can evolve much faster through their internal processes than they could under regulation.
Culture change is needed
Dóra: One of the challenges is that bigger entities with lots and lots of resources are able to set up an ethics board and establish the rules and the principles. They can lead by example by actually showing how this is done. Microsoft and IBM are often mentioned as the two leaders in this sense. On the other hand, what happens if we are talking about a smaller entity in fast startup mode? This is when it can become extremely challenging.
Francesca: I agree because you need to put together a lot of separate elements, and technology is the easiest one. The most difficult part is really to understand how to build and use it in a responsible way. That requires more than just testing the technology. It also requires a lot of internal culture change. So building a playbook and building the tools was just the first step.
When we started doing bias testing, some of our developers could not really understand what we meant by bias. Some said that their model contained no protected variables and no features relating to people, so there could not be any bias in the model. They were not aware that there could be proxies, like zip codes, that are highly correlated with a protected variable. You can have bias and create discrimination within your model even if there is no feature corresponding to human beings. We needed to help these developers change their culture and their point of view, educate them, and also help them understand that they could not just talk amongst themselves as technologists, but also needed to consult with other people who could help them understand the societal impact of what they were doing.
Ethics create value
Francesca: Companies need to understand that investment in AI ethics, including the tools and the time it takes to do the tests, will improve the business outcome and also the value to the client. At first it may seem like an ethics review slows down the ability to deliver the technology. This is a challenge that we have seen even internally at IBM, where teams were initially reluctant to have their models reviewed by the Ethics Board. But they saw that we added value, and that being able to demonstrate you are anticipating the European AI legislation, by developing and delivering technology in the right way, is also a business differentiator.
Further, the work that large companies like IBM and others have done in AI ethics has resulted in experience that can also be useful to small companies, who don’t have to go through what we went through over several years. At first we didn’t have this type of board; we had a discussion group, then an advisory board, and then the decision-making board. Smaller companies can accelerate their understanding and apply this approach immediately.
A multi-disciplinary approach
Francesca: It is so important to have a form of education which is much more multidisciplinary than we have now. For example, in Italy the university system is very siloed. If you study computer science, you study only computer science and learn nothing about sociology, philosophy, psychology and so on. You don’t have to become a philosopher or a sociologist, but you should be able to understand the possible impact of what you’re studying and the technology that you are building. I still don’t see that multidisciplinarity in Italy. If you want a career in academia and you start being multidisciplinary, your results may suffer because you may not be able to demonstrate sufficient study in certain areas; universities don’t evaluate your study as a whole. Culture and multidisciplinarity are important, and the first step is to understand this and to adapt courses accordingly.
Dóra: We also need much greater digital literacy in the population generally; Europe is still lagging behind other regions. Education about digital issues is very important, not just for youngsters but also for the generations who did not grow up as digital natives. This is probably the biggest challenge, because they might not understand the risks as well as the younger generations do. They might not know how their data is being used, or might not care as much about the data they put into their devices.
Fast versus slow thinking
Dóra: We are at a very interesting crossroads, or milestone, in the evolution of AI, because we can see what the technology can do and we are now waiting to see its full potential, which is still far, far away. There are some people who see this future in terms of doomsday scenarios, like the end of humanity or the end of all jobs for humans. I’m a natural-born optimist, so I’d like to think they’re all wrong and that this is actually just a better way to do things and to drive benefit for mankind, if we use it well. This leads me to an issue that you have written about: the difference between fast and slow thinking, and what this might mean for the future of AI.
Francesca: The technology right now is very useful for many different applications. But it has a lot of limitations and raises a lot of issues because of these limitations. This is because LLMs don’t distinguish between true and false.
In some scenarios it is very important to make sure that what you get from an AI is true, because otherwise the whole thing, even the legal system, will be disrupted. Developers are trying to prevent the generation of hallucinated or abusive content with techniques that are in the pipeline, but these approaches are still very primitive. After you build an LLM and it has been trained on the training data, you can use techniques like reinforcement learning to try to prevent it from generating content that you don’t want. This is a type of patch which corrects the problem, but it doesn’t resolve the issue. There are ways to bypass this patch by carefully crafting a prompt that elicits the unwanted content.
It would definitely be much better to have guidelines embedded into the system when you train it on the data, so that it only learns to behave in the correct way. We are not there yet. This leads me to my research project on fast and slow AI, where we try to get inspiration from how humans think and behave and to extrapolate some principles.
Humans have two broad modalities of reasoning. One is fast thinking, which means almost unconscious decision-making that we use every day for things that we don’t need to think too deeply about. The second one is slow thinking for things that require all our attention and all of our conscious cognition to solve the problem. We know how to combine and switch between one and the other type of thinking.
An example is learning to drive a car or ride a bicycle. When we start to learn we use slow thinking, to learn each complicated step deliberately and safely, but after a while we switch to fast thinking and we do it almost unconsciously. We know how to combine the two types of thinking to acquire information and knowledge from the world and transform it into knowledge that we then use to make conscious decisions.
Current AI just learns from data. It acquires information from implicit knowledge, the knowledge that is embedded in the data, and then it tries to produce behaviour that reflects what has been learned from the data. But this is all implicit knowledge. It is not yet at the stage of explicit knowledge or slow thinking.
I and many others think that we need to remove some of the limitations of AI that generate some of these risks. There are many different approaches being tried. Our project aims to be inspired by cognitive theories of how humans make decisions and how humans self-regulate. For example, using my fast thinking I may want to eat a whole jar of cookies, but my slow thinking tells me that my doctor has told me not to. This is how we self-regulate, according to some values or for some other reason. Many approaches are trying to embed these capabilities into an AI model, rather than trying to patch and correct the behaviour afterwards.
Focus on humanity
Dóra: This is where I believe it will be even more important to acknowledge the cultural differences and the societal differences across the globe. I really like UNESCO’s approach to AI. On the UNESCO homepage⁵, you’ll find a lot of publications, announcements and notices on AI, especially on education and ethics.
As I understand it, slow thinking methods relate to your routines, your everyday behaviours. If you want to teach an AI to adopt this slow thinking mode, you will probably have to teach it in different ways, in different cultural contexts and in different countries. AI is not just a technology issue but also a cultural, societal and educational one. Can you share any actions or initiatives which demonstrate this type of approach?
Francesca: IBM is a founding member of the Partnership on AI⁶, which has about 100 members now, of which only 20 are companies. The other stakeholders are universities, research labs and civil society organisations. You need not only different mechanisms but also different people involved in designing the right mechanisms to drive the technology in the right direction. The Partnership on AI, in this multi-stakeholder way, recently released guidance on foundation model deployment which addresses many of the issues that we have been talking about: what to do while developing and deploying AI, and even post-deployment. This guidance covers technology, social consultation, regulation and many other things. It shows that these organisations understand that these are social and technical issues that need both social and technical solutions.
Dóra: Let us close with a technical question. Do you think future AI safety can be addressed using kill switches?
Francesca: To terminate an AI in this way, you would need to embed something in the software, or even in the hardware, that cannot be removed by anybody. But you need more than a kill switch; what is needed is a very well-designed risk management framework. This is also done in many other technologies, in cybersecurity for example. Nobody can assure you that things are 100% safe, but you have to mitigate the risk with a risk management framework. So I am not sure that a kill switch is feasible, in hardware or in software, and it also runs counter to the idea of open source, which I support. I would rather see other mechanisms to address the risks, especially those that focus on our humanity.
If we focus on our humanity as we create our technologies, our potential is unlimited because the ultimate goal is not to improve AI but to improve us as human beings through the advancement of AI.
1. This transcript was prepared with the aid of an AI transcription tool and edited by Vicki MacLeod.
2. https://www.gov.uk/government/topical-events/ai-safety-summit-2023
3. https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/
4. https://www.vatican.va/roman_curia/pontifical_academies/acdlife/documents/rc_pont-acd_life_doc_20202228_rome-call-for-ai-ethics_en.pdf
5. https://www.unesco.org/en/artificial-intelligence
Dóra Petrányi is a partner and CEE Managing Director at CMS and Co-Head of the Technology, Media and Communications Group (TMC). Dóra is the Co-Chairman of the Regulatory & Ethics Committee of the Hungarian AI Coalition and has assisted the Hungarian Ministry of Justice in providing comments and formulating a legal position in relation to the EU’s draft AI Liability Directive and the EU’s draft AI Act. She has also advised a wide range of clients on AI-related issues.
Francesca Rossi is an IBM Fellow and the IBM AI Ethics Global Leader. She is based at the T.J. Watson IBM Research Lab, New York, USA, where she leads research projects and co-chairs the IBM AI Ethics Board. Her research interests focus on artificial intelligence, with special focus on constraint reasoning, preferences, multi-agent systems, computational social choice, neuro-symbolic AI, cognitive architectures and value alignment. On these topics, she has published over 220 scientific articles in journals and conference proceedings, and as book chapters. She is a fellow of both the worldwide association of AI (AAAI) and the European one (EurAI). She has been president of IJCAI (International Joint Conference on AI). She is a member of the scientific advisory board of the Future of Life Institute, the board of the Partnership on AI, and the steering committee of the Global Partnership on AI, and she chaired the 2023 AAAI/ACM Conference on AI, Ethics, and Society. She also co-chairs the OECD Expert Group on AI Futures and has been a member of the European Commission High Level Expert Group on AI. She is the current president of AAAI.