Artificial Intelligence and its Impact on Society

March 1, 2022

Anne Bouverot, Former Director General of GSMA, Chairperson of Technicolor, Co-Founder of Fondation Abeona “Championing Responsible AI”


Nowadays we hear “AI”, artificial intelligence, mentioned in all sorts of contexts: sometimes as science fiction, sometimes as a potential solution to health or education problems, sometimes as “big tech evil”... But let’s start with the basics: what is AI?

There is not a single, simple definition of artificial intelligence. Maybe the easiest way to think about this is to view AI as a combination of data (generally lots of data), computers (generally with significant computing power) and algorithms. You can think of algorithms as instructions, like a kitchen recipe, instructions for assembling a product, or programs composed of lines of code.
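To make the recipe analogy concrete, here is a tiny algorithm written out as lines of code. It sorts a shopping list alphabetically, one precise step at a time; the task and the code are purely illustrative.

```python
# An "algorithm" is just a precise list of instructions, like a recipe.
# This one sorts a shopping list alphabetically, step by step.
def sort_items(items):
    ordered = []
    for item in items:
        # find where the item belongs in the ordered list and insert it there
        position = 0
        while position < len(ordered) and ordered[position] < item:
            position += 1
        ordered.insert(position, item)
    return ordered

print(sort_items(["flour", "eggs", "butter"]))  # -> ['butter', 'eggs', 'flour']
```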

What experts usually mean when they speak of AI is machine learning: programs that learn from training data, for example annotated images (of cats and dogs, apples and cupcakes, or whatever…), and on the basis of that data make predictions or recommendations (“this image looks like that of a cat”). Telling dogs from cats may not seem very useful in itself, but the same approach underpins applications with great potential, such as personalized medicine.
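For readers curious about what “learning from training data” looks like in practice, here is a minimal sketch in Python using the scikit-learn library. It trains on small labeled images of handwritten digits rather than cats and dogs (the dataset and model are illustrative stand-ins), but the principle is the same: learn from labeled examples, then predict labels for new ones.

```python
# Minimal sketch of supervised machine learning: the model "learns" from
# labeled examples, then predicts labels for images it has never seen.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()  # ~1,800 small labeled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=2000)  # a simple classifier
model.fit(X_train, y_train)                # "training": learn from labeled data

# "Prediction": the model labels a new image, like "this looks like a cat".
print("Predicted:", model.predict(X_test[:1])[0], "Actual:", y_test[0])
print("Accuracy on held-out images:", model.score(X_test, y_test))
```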

There is also a broader definition of AI, covering pretty much everything computer programs do that seems to reproduce human intelligence, like performing complicated calculations, playing chess, reading medical images or sorting through CVs. The more we get used to using AI tools and the better we understand how they work, the less scary they become, of course.

Now is AI purely science fiction, or is it already part of our daily lives?

AI is actually used quite a lot already, in the physical world and in the digital one! For example, post offices and mail handlers around the world use handwritten text recognition to automatically read addresses on letters and packages and help direct them to their destination. Border control also often relies on facial recognition, which in turn uses AI. On a recent trip from Paris to London, I inserted my passport into the automatic control gates, the photo stored in the passport chip was compared to the image captured in real time by the camera, and I was let through.
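As a simplified illustration of that comparison step (real border systems are considerably more sophisticated), a facial recognition check can be thought of as comparing two “embedding” vectors, one computed from the chip photo and one from the live camera image. The random vectors and the decision threshold below are stand-ins, not values from any real system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (closer to 1.0 = more alike)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In a real system, a neural network maps each face image to an embedding
# vector; here random vectors stand in for those embeddings.
rng = np.random.default_rng(42)
passport_embedding = rng.normal(size=128)  # from the passport chip photo
camera_embedding = passport_embedding + rng.normal(scale=0.1, size=128)  # live capture

THRESHOLD = 0.9  # illustrative decision threshold, not a real system's value
if cosine_similarity(passport_embedding, camera_embedding) > THRESHOLD:
    print("Match: gate opens")
else:
    print("No match: manual check")
```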

There are many examples in the digital world of course. Email systems use AI to automatically detect which emails should be classified as spam, and they also offer “smart reply” and “smart compose” functions, which propose short sentences to answer messages or suggest wording as we type a new message. Search engines use AI to index sites and rank them in order of relevance to the terms used in a search, or even according to a person’s search history (as with Google Suggest). Machine translation services like DeepL use deep learning to translate text from one language to another and produce good quality texts. Navigation applications such as Waze determine the shortest route, estimate the arrival time according to real-time traffic, and propose alternative routes in case of traffic jams. Merchant sites, such as Amazon, suggest items of potential interest based on the user’s history and/or the browsing history of other visitors.
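As one concrete example from this list, here is a toy spam filter: it learns word patterns from a handful of labeled messages, then classifies new ones. The tiny training set is invented for illustration; real email systems rely on far larger data and more elaborate models.

```python
# A toy spam filter in the spirit of the email example above: the model
# learns word patterns from labeled messages, then classifies new ones.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "win a free prize now", "cheap loans click here",
    "meeting moved to 3pm", "see you at lunch tomorrow",
]
labels = ["spam", "spam", "ham", "ham"]  # "ham" = legitimate mail

spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["free prize click now"]))    # -> ['spam']
print(spam_filter.predict(["lunch meeting tomorrow"]))  # -> ['ham']
```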

AI is also starting to be used in health, and this is very promising. Modern vaccine design is a hugely information-intensive endeavor, and machine-learning systems are playing an important role in the development of Covid vaccines. AI is helping researchers understand virus variants and their structure, predict which viral components will provoke an immune response, and track the virus’s genetic mutations over time. It can also help scientists choose the elements of potential vaccines and make sense of experimental data.

AI is also helpful in analyzing medical imaging, for example of tumors and melanomas, and can help predict the risk of developing specific types of cancer, such as breast cancer. However, the reality is that few algorithms are currently ready to be deployed at the clinical level, and regulation will be vital to ensure that risks and patients’ right to privacy are taken into account.

So we already use AI in a number of instances, and there are a number of potential new applications. According to ResearchAndMarkets, less than 20% of companies and organizations use AI systems today, but this should grow to 70% by 2027. This is only the beginning.

But should we be afraid?

There are of course risks associated with AI, as with every new technological development. Let’s start with the risk of job losses due to automation. Looked at more closely, it is not so much about replacing entire jobs as about automating tasks within a particular job, usually rather repetitive ones (classifying data, sorting and filing items, analyzing medical imagery). It also makes possible things that were not possible before, like the analysis of millions of data points in a few hours. So yes, a number of jobs will be significantly impacted, but the result will rather be a transformation of work as we know it today. We need professionals to be able to continuously learn new skills and adapt to new needs and changes. I also predict that we will see an increased focus on emotional intelligence and interpersonal relationships, which remain very human attributes.

Another risk is that AI could dehumanize customer relationships. Gartner estimates that 15% of customer service interactions in the world are handled by AI today, a significant increase from just a few years ago and constantly growing. Benefits include round-the-clock access to customer service information, but answers and recommendations made by an AI system and communicated by a chatbot are not the same as those provided by a human being! We need to train customer service professionals so that they can explain recommendations produced by AI systems, and put the emphasis on human decision-making and human interaction for decisions (versus information).

There is also a real risk of increased inequalities: AI not only reproduces biases present in real-life data sets but can also amplify human biases and prejudices. There is a well-known example from a few years ago, when Amazon decided to use AI to help recruit computer developers. Since most previous hires had been men, the algorithm learned to deprioritize women’s resumes (based on apparently innocuous mentions such as “member of women’s sailing team” or “winner of women’s chess championship”). When Amazon realized this, it of course stopped the project, but the episode shows that we must test algorithms and systems before use, with real data, and seek out and correct any inherent bias through independently verifiable standards and audits.
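Here is a minimal sketch of the kind of pre-deployment check this example argues for: compare a model’s selection rates across groups on audit data. The data and the 80% rule-of-thumb threshold are illustrative assumptions, not details from the Amazon case.

```python
# Minimal bias audit: compare a model's selection rates across groups
# before deployment. Data and threshold are hypothetical illustrations.
from collections import defaultdict

# (group, model_decision) pairs from a hypothetical audit run
decisions = [
    ("men", "hire"), ("men", "hire"), ("men", "reject"), ("men", "hire"),
    ("women", "reject"), ("women", "hire"), ("women", "reject"), ("women", "reject"),
]

totals, hires = defaultdict(int), defaultdict(int)
for group, decision in decisions:
    totals[group] += 1
    hires[group] += decision == "hire"

rates = {g: hires[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# "Four-fifths" heuristic: flag if one group's rate is < 80% of another's.
lo, hi = min(rates.values()), max(rates.values())
if lo / hi < 0.8:
    print("Potential disparate impact: investigate before deployment")
```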

As remote work and remote access to education and health care develop, we must also be mindful of inequalities in access to digital technology and of digital illiteracy (not knowing how to use digital technology, not understanding how a program will classify a request, etc.). It is critical to provide easy and affordable access to communications and to computers or smartphones, and to provide training for people who are not at ease with digital tools, whether because of their age, a current job that does not require them, a disability, or simply a feeling that these tools are not for them. This is clearly something that the GSMA and its members are well positioned to do!

Then, of course, whenever data is used there are risks to privacy and the protection of personal data. This is an area that is already highly regulated in Europe (by the GDPR) and in a number of other places. Perhaps because people feel more protected by these regulations (or simply because of a lack of awareness of the risks), a Salesforce study¹ has shown that 62% of consumers are ready to share data with AI systems in order to get a better customer experience.

Last but not least, the power-intensive calculations behind AI algorithms cause digital pollution and carbon emissions. We need to focus on optimizing algorithms, processing data as close as possible to its source, and avoiding unnecessary calculations. I think there should be standards associated with this.

What kind of future will AI create?

After two years of a crisis that has revealed all that digital technology can bring to society, people are more and more confident in the ability of technology to have a positive impact. In a recent study in France, 64% of respondents said they believe technology is an opportunity for the environment, 58% for social issues (inclusion and equal opportunities) and 56% for corporate and institutional governance. After the experience of the pandemic, technology is perhaps viewed by a majority as less scary and simply a part of everyday life.

Artificial intelligence systems are tools that allow us to do things we could not do otherwise, either not at all or only with much more time and effort. Thanks to AI, companies and economies are becoming more efficient and competitive. We should, however, be very aware of the risks and ensure we deploy trusted and responsible AI systems. There will very likely be specific regulations in the future, like the AI Act that the EU is currently working on, but we also need standards, audits, tests, a lot of transparency and, above all, the human touch.

1 https://www.salesforce.com/blog/consumer-privacy/


Anne Bouverot is Chairperson of the Board of Technicolor, a world leader in visual effects and animation services. She is also a Senior Advisor to TowerBrook Capital Partners. She spent most of her career in the technology sector, in France and globally, first with Orange, then as Director General of the GSMA (Global Mobile Operators Association) and later as CEO of Morpho (digital security and identity solutions). She is a graduate of Ecole Normale Supérieure in mathematics and holds a PhD in artificial intelligence. She co-founded Fondation Abeona, “Championing Responsible AI”, which works on the societal impacts of artificial intelligence.