From AI ethics to AI regulation

February 27, 2024

Dóra Petranyi, CEE Managing Director and Co-Head of the TMC Group, CMS


I have been working on the legal and regulatory context of Artificial Intelligence for at least 5 years now, well before it was considered a priority issue. During that time, I have often considered the arguments relating to whether AI should be regulated, and if so, how and to what purpose.

Until recently, there has not been much attention on the need for regulatory intervention in the use of AI. But this question has taken on a new dimension in the wake of the attention surrounding ChatGPT over the past year, and in view of the proposed EU regulation of AI1. And considering what is happening with AI in the rest of the world – in China, the US and elsewhere in Asia – we need to ask ourselves whether the EU should be pursuing AI regulation at all. And if you agree that we should have AI regulation in the EU, then what is the purpose and scope of EU regulation of AI? Is it all about ensuring human rights and preserving European values? Or should Europe take a more economic, competitive approach? Or should it be both?

In thinking about these questions, we can look at what industry leaders are saying about AI and its impact on the economy and society.

According to the CEO of IBM, Arvind Krishna, this is the Netscape moment of AI.2 He was referring to the web browser that made the internet widely known in the 1990s. ChatGPT is, in his view, now putting artificial intelligence in the public sphere in the same way.

“AI and ChatGPT today is kind of a 30-year overnight wonder,” Krishna said. “There are lots of things in technology like that, they look like overnight wonders but there has been 30 years of hard, grinding work.”

According to Microsoft CEO, Satya Nadella, AI is moving fast but in the right direction. Every time a new disruptive technology emerges, he says, there is a ‘real displacement’ in the job market, but he believes that AI will also create new jobs.

“I mean, there can be a billion developers. In fact, the world needs a billion developers… So the idea that this is actually a democratizing tool to make access to new technology and access to new knowledge easier, so that the ramp-up on the learning curve is easier… Humans are in the loop versus being out of the loop. It’s a design choice, which, at least, we have made.”3

We are already seeing many examples in the media, not just in Europe but around the world, of how AI is being used in what many would consider unethical or at least dubious ways.

Firstly, the impact on media and the creative industries. In the US, we have seen the launch of RadioGPT4, which promotes itself as the world’s first localized radio content powered entirely by artificial intelligence. This AI-driven radio station has raised many concerns about how AI is being used in the media, and about the future of human presenters on radio and TV. The 148-day strike by Hollywood writers and artists, which ended with a new deal with the studios in October 2023, demonstrates the real impact of the use of AI on the creative industries.

Secondly, the use, or misuse, of biometric data in breach of individual privacy. In Australia, retail giants and police have been using the AI software Auror to catch repeat shoplifters. But to do this, they also capture the biometric data of every shopper who comes into the shop, without their prior knowledge or consent. This has raised concerns that privacy regulation has not kept pace with technology, especially as retailers are already under investigation for using facial recognition technology in a similar way5. Because of these and many other examples, everyone is now interested in how AI is being used and in the possibility of AI regulation.

Should AI regulation focus on economics or human values?

If we need AI regulation, should it be focussed on preserving our human values, or should we mainly be concerned about creating jobs and growth in Europe, so that we do not lose out to countries where AI is not being regulated? In fact, we already have an answer, now that the EU is a step closer to setting the first rules on Artificial Intelligence.

On 14 June 2023, the EU Parliament agreed its draft proposal for AI regulation, with 499 votes in favour, 28 against and 93 abstentions. This approved the negotiating position to be taken by the Parliament in negotiations with the Council on the AI Act, and the tripartite negotiations between the Commission, the Parliament and the Council have already commenced. The rationale for the approach taken in the draft Act was set out at the time of the release of the agreed draft:

“The legislation is intended to ensure that the development and use of artificial intelligence in Europe fully respects European fundamental rights and values, i.e. that it remains under human control, is safe, transparent and non-discriminatory, and that it benefits society and the environment.”6

At 364 pages and 771 paragraphs, the Parliament’s draft is different from that of the European Commission. So this is not the end of the debate about AI regulation in the EU; it is just the beginning of the debate amongst the three European authorities – the Parliament, the Commission and the Council. And what is very important is that it is all about fundamental European rights and values. The human is always put first.

We have already had many discussions at the European and Hungarian level about the possible regulation of AI, and these always focus on one basic question: what happens if Europe regulates too early, or too late? We know that in China there is not going to be AI regulation, and companies there will have access to a huge amount of data. Does this mean that developers there have a huge commercial advantage over the EU? On the other side, we know that the general approach to regulation in the US is to wait and see what happens in the market; only if there are areas of concern, or there is seen to be unfair competition, do the regulators step in.

The policy framework for AI regulation

One of the key policy arguments put forward by opponents of AI regulation is that moving too early, before the market is fully developed, will stifle innovation and investment, and sideline European businesses in the global marketplace.

To see if this could be true, we can consider the example of the General Data Protection Regulation (GDPR). Before it was brought into force, many people claimed that it was dangerous to regulate privacy in this way, that it was far too early to do so, and that the GDPR would severely disadvantage European competitiveness. The reality has been quite different. In fact, many jurisdictions, including several US states, take the GDPR as the starting point for their own regulation. The EU hopes that the proposed regulation of AI will become a benchmark for other countries in a similar way to the GDPR, and I personally hope that this will prove to be the case.

The focus on human rights, rather than just economic considerations, is the foremost advantage of the EU approach, which always tries to strike a balance between these two competing objectives. This is very important because it is rather different from the policy approach initially taken by the European Commission, which based its early discussion of AI on ensuring European competitiveness in the global market, not on European values and rights.

On the other hand, the discussion in Europe has not taken place in a policy vacuum. In fact, the EU has been considering the ethical framework for AI for some time. On 8 April 2019, the High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy Artificial Intelligence. The Guidelines7 are based on seven key principles:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability.

In my view, these are the fundamental principles which will be reflected in the EU’s AI regulation when it is finalized and eventually comes into force. There have already been quite a few changes made to the draft legislation, compared to the proposal that the European Commission originally put forward. This is understandable, given that the Commission’s draft was completed before the so-called ‘Netscape moment’, while the Parliament had about four months to work on its draft during and after that moment.

The key changes reflected in the EU Parliament’s proposal are:

  • The definition: The rules on the AI Act’s scope include the additional descriptor ‘human centric’ alongside ‘trustworthy’ in the description of desirable AI.
  • There is a new requirement that generative AI systems must be designed in line with EU law and fundamental rights, such as freedom of expression.
  • A “fundamental rights impact assessment” (FRIA) obligation is now placed on the deployer of all high-risk AI systems, except for operating critical infrastructure.

This shows that fundamental rights are now front and center in the EU approach to AI.

The scope of the regulation

It is very important to get the definition of AI right in the legislation, as this will determine the scope of the regulation. If something is outside the scope of the definition of AI, then it will not be covered, and so I found it disappointing that one element was missing from the definition of AI in the Commission’s original proposal – the word ‘human’. In that original draft, there was no mention of ‘human instructions’ or ‘human purpose’, and so I was very happy to see that the word ‘human’ is finally back in the draft Act passed by the Parliament. It is now a purpose-driven definition of the scope of AI, referring to ‘human centric’ and ‘trustworthy’ as two key elements of the legislation. In other words, if a system or service impacts human rights, then it is within the scope of the AI regulation. That is a very big step, which I believe is in the right direction.

There is also a new definition of generative AI, which puts it within the scope of the regulation. Often when something is hyped, as generative AI has been, a name is first created for the concept, and then a regulation is framed around that word. I totally disagree with this approach, because if the definition is too narrow, you can undermine the whole purpose of the regulation. For example, someone could say: I am not really using ‘generative AI’, which falls within the scope of the regulation, I am really using ‘generativity AI’, so it is no longer included in the scope of the regulation.

But at least we now have a definition of ‘generative AI’, which is given as one of the examples of general purpose AI. That means you will have to navigate several layers within the regulation to work out what you need to do about generative AI, which is something that needs further clarification.

The draft also introduces a new instrument, the Fundamental Rights Impact Assessment (FRIA), which is very similar to the impact assessment we already have under the GDPR. The onus will be placed on anyone developing a service using AI to determine whether it falls within the scope of the legislation and, if so, to ensure that an FRIA is done and either submitted for approval, where necessary, or kept on record for any future audit.

If you are using a general purpose AI, you will need to have done an FRIA to make sure that you are not in breach of the Act, because general purpose AI is now included in the list of high-risk AI definitions.

There are a number of AI practices that are prohibited in the current draft:

  • “Real-time” and “ex-post” remote biometric identification in public places.
  • Biometric categorisation based on sensitive personal data (gender, race, ethnicity, nationality, religion, political affiliation).
  • Predictive policing based on profiling, place of residence or criminal history.
  • The use of machine emotion recognition in law enforcement, border control, the workplace and educational institutions.
  • Creating facial recognition databases using non-targeted facial images from the internet or from closed-circuit television networks (in violation of human rights and privacy).

This list is longer than in the earlier draft and it is all about preserving human rights. In the earlier draft there was a very broad exception for biometric identification. Now the only exception to the prohibition on the use of biometric identification in public places is if you have a warrant from a judge.

There are exceptions for anti-terrorism or national defence applications, which are outside the scope of the regulation.

Scope of existing law vs new AI law

There are existing laws which already cover the use of these types of AI-driven applications. In fact, as with the Australian example cited earlier, we have already had a case where AI was used to recognize emotions.

In Hungary, the Data Protection Office8 levied a heavy fine against banks for using software which used AI to determine the emotions of people calling their call centres. If the AI decided that a caller was too emotional or too worked up, the caller was immediately connected to a manager. This practice was punished by the DPO, not on the basis of the EU prohibition, because it did not exist at that time, but on the basis of existing regulation: the banks had not told callers that AI was being used in this way, without human intervention. The problem was that callers had no choice about whether they wanted their calls handled in this way.

This case shows that you don’t always need new rules when old rules produce the same outcome. But in any case, under the proposed new EU regulation there is a general prohibition on the use of machine emotion recognition in law enforcement, border control, the workplace and educational institutions. It is a good start.

New obligations regarding general purpose AI systems

Under the proposed new regulation, developers of general purpose AI systems would only be allowed to place their products on the EU market after: (i) having assessed and mitigated the potential risks (to health, safety, fundamental rights, the natural environment, democracy and the rule of law); and (ii) having registered their models in the relevant EU database.

So this means that you can only launch a general purpose AI once you have registered it, and you need to be able to show you have made an assessment of its impact on health, safety, fundamental rights and the natural environment – in other words, against European values. This all goes back to the seven-point list we discussed above. Human agency and oversight is not yet in the draft, but I am sure it will be.

There are further transparency requirements in relation to general purpose AI. In all cases, it must be indicated that the content was produced by AI. It is all about driving innovation, supporting SMEs and protecting rights. This is very European: supporting innovation while balancing it against the introduction of regulation.

Help must be provided to distinguish “deepfake” images from real images, and appropriate safeguards must be put in place to prevent the generation of illegal content. For example, a detailed summary should be published of the copyrighted data used to train the systems.

What are considered high-risk AI applications?

AI applications will be considered ‘high-risk’ if there is a significant risk to people’s health, safety, fundamental rights or the environment. New items on the list are:

  • AI systems capable of influencing the outcome of an election and the voting behaviour of natural persons.
  • Recommender systems used by social media platforms with more than 45 million users.

These two examples relate to current concerns about the dangers of AI influencing the outcome of elections, and about the impact of very large social media platforms. The targets of both of these inclusions are obvious.

Driving innovation, supporting SMEs while protecting human rights

The draft seeks to drive innovation and support SMEs, while protecting rights, by limiting the obligations relating to research activities and AI components made available under open-source licenses. There are also special considerations for regulatory test environments, where public authorities test AI systems by simulating real-life situations.

There are also more rights for data subjects – they may raise a complaint about the use of AI systems. Detailed information must be provided on decisions with a significant impact on fundamental rights. The powers of the EU’s AI agency will be expanded to monitor the implementation of the rules.

Global view of alternative approaches to AI regulation

While the EU has already reached an agreement in principle on its AI regulation, it is not yet in force. Given the accelerating pace of change with AI, is the wait too long? What happens in the interim, and are there alternative proposals which could be implemented immediately?

One alternative proposal that has been put forward is for an AI Pact to curtail the immediate risks of AI. Under this proposal, countries and companies would voluntarily adopt the draft regulation so it can apply earlier.9

A second proposal is for a Code of Conduct based on a set of principles which align with the ethics outlined in the draft legislation.10

In July 2023 the US National Telecommunications and Information Administration (NTIA) received more than 1400 responses to its earlier call for comments on its proposal on AI accountability, which is part of President Biden’s commitment to seizing the opportunities AI presents while managing its risks. The White House has also released a Blueprint for an AI Bill of Rights, with the aim of “making AI systems work for the American people”.11

The UK Information Commissioner’s Office has released guidance on AI and data protection, including “Generative AI: eight questions that developers and users need to ask”.12 This approach relies on developers knowing their data protection responsibilities, complying with them, and being prepared to show how they comply.

“Organisations developing or using generative AI should be considering their data protection obligations from the outset, taking a data protection by design and by default approach.13 This isn’t optional – if you’re processing personal data, it’s the law.”

In Hungary, where I am based, there is draft legislation under consideration, which is open to public comment. Its focus is on regulating the data input to AI-based systems, for example what US-based companies can do with your data.

Both Australia and Canada are considering their policy and regulatory response to AI in the wake of the proposed EU legislation. Meanwhile, other jurisdictions are taking more of an industry-collaborative approach. The Singapore government is making efforts to build trust on the basis of ethical AI, asking companies to collaborate on the world’s first AI testing toolkit, called AI Verify. This is run by a not-for-profit, wholly owned subsidiary of the Singapore regulator and supported by tech companies including Google, Microsoft, IBM, Red Hat and Salesforce.

Next steps

In the EU, with the adoption of a compromise text by both the Council and the Parliament, the legislative procedure moved to the next stage, the so-called ‘trialogue’ procedure, i.e. the phase of tripartite consultation between the three EU authorities – the Commission, the Council and the Parliament.

In the trialogue procedure, the Commission acted as a mediator, facilitating the convergence of the Council and Parliament positions, which led to agreement on the AI Act. Negotiators reached a deal on 8 December 2023, and the EU Parliament will vote on the proposed Act in early 2024. It is expected to come into force in early 2025.

  1. This is based on a presentation given on Wednesday 28 June 2023 at the Conference on Human-Centred Regulation of AI in Budapest, Hungary. Shared album – Péter Hanák, Rozália Feri – Google Photos (https://photos.google.com/share/AF1QipMkCIcI3kxZm515fZKq-pOVu52E-E0YIjdNeGFoepfc5kqiEvAqMM7v5rXYrTc_LQ?key=V0UzclhuLWZRbE10Uk5HcXFBeWZ4YnlKMC1tbnRR)
  2. https://wraltechwire.com/2023/05/04/ibms-ceo-sees-a-netscape-moment-in-ai-powerful-future-of-quantum-computing/
  3. https://www.cnbc.com/2023/05/17/microsoft-ceo-talks-ai-concerns-and-its-impact-on-jobs-education-.html
  4. https://futurimedia.com/radiogpt/
  5. https://www.abc.net.au/news/2023-06-10/retail-stores-using-ai-auror-to-catch-shoplifters/102452744
  6. https://www.europarl.europa.eu/news/en/press-room/20230505IPR84904/ai-act-a-step-closer-to-the-first-rules-on-artificial-intelligence
  7. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
  8. https://www.lexology.com/library/detail.aspx?g=a9c66d5f-4faf-4500-a1bd-458bf9ebcec7
  9. https://www.politico.eu/article/big-tech-rumble-europe-global-artificial-intelligence-debate-ai-pact/
  10. https://techcrunch.com/2023/05/31/ai-code-of-conduct-us-eu-ttc/
  11. https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf
  12. https://ico.org.uk/about-the-ico/media-centre/blog-generative-ai-eight-questions-that-developers-and-users-need-to-ask/
  13. https://ico.org.uk/for-organisations-2/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-by-design-and-default/

Dóra Petranyi is a partner and CEE Managing Director at CMS and Co-Head of the Technology, Media and Communications Group (TMC). She also heads the TMC, Data Protection and Intellectual property (IP) practices, and is a partner in the Competition practice in the Budapest office.

She is an expert in all three sectors of TMC (Technology, Media and Communications), with a special focus on communications, media and all types of regulatory matters, having been general counsel for the largest telecommunications provider in the region.

Her major clients are TMT and pharma companies, foreign-owned commercial banks, and major joint ventures. Her areas of specialisation include AI, digital infrastructure, cybersecurity, data protection, GDPR, competition law, IP law, general commercial contracts, corporate restructuring and M&A.

She has established and leads a managed services delivery centre in Budapest. As part of this project, a team including UK, German, US and PRC lawyers supported a company by undertaking and managing its procurement on a global scale. In addition, she has in-depth experience managing and coordinating multi-jurisdictional projects in over 20 countries.

Dóra is the Co-Chairman of the Regulatory & Ethics Committee of the Hungarian AI Coalition. She is also a member of the Digital Civil Code Review Working Group, being the only outside counsel in the team. She is the first and only lawyer to be a member of the co-regulatory committee between the local telecommunications regulatory authority and the Association of Hungarian Content Providers. Dóra is a member of the International Board of the Global Telecom Women’s Network (GTWN). She is also a member of the Board of Directors of UNICEF Hungary.

Dóra is the co-author of several sector-specific publications and is a regular speaker at key international conferences, including the World Economic Forum (Davos), the Mobile World Congress and ECTA.