The Age of Data: Vision and Reality

February 25, 2022

María González Gordon, Partner and Head of Industrial Property, Intellectual Property and Digital Business, CMS


Jules Verne’s Paris in the Twentieth Century, written in 1863, was not published until 1994 because his publisher considered its tone excessively pessimistic. It is the story of a man who lives in a city of glass skyscrapers, high-speed trains and gas-powered cars, connected by an international communications network, a kind of global telegraph that lets different regions share information, something akin to today’s internet.

In 1950 Isaac Asimov published I, Robot, a collection of nine stories about intelligent robots created to assist humanity while respecting the famous Three Laws of Robotics. In Futuredays: A Nineteenth-Century Vision of the Year 2000, Asimov presented a series of illustrations shown at the 1900 Universal Exhibition in Paris by Jean-Marc Côté and other French artists. This group of visionaries made predictions that still seem surreal, such as a machine to change the weather or aerial firemen, but others are now commonplace, such as video calls and smart home appliances.

Both Verne and Asimov were science fiction writers, famous for ideas considered far-fetched and dystopian in their time, some of which have since been realised. On the other hand, many of the technological advances of the past thirty years are far removed from what was imagined. The 1980s saw the birth of the Internet, the rise of the personal computer and Motorola’s DynaTAC 8000X, the first commercially available handheld mobile phone. The Digital Age had begun. The 1990s then welcomed new inventions such as MP3, the DVD and Google.

In the last two decades science has advanced at an ever-increasing pace, giving rise to new technologies such as Wi-Fi, Bluetooth, 4G and 5G, Artificial Intelligence (AI), 3D printing and augmented and virtual reality (AR/VR). Built on those technologies, social networks (Facebook, Twitter, Instagram, TikTok), YouTube, Google Maps, Amazon and video-calling platforms such as Skype (now giving way to Teams) have been launched.

My brother-in-law released the short film “The APP” five years ago. The technology behind the plot earned it more than eighty awards and screenings at up to 200 film festivals.1 “The APP” depicts the dilemma of a grey man who is offered a tech app that turns him into a “success” in life in exchange for surrendering his human freedom to decide. Described as “the short that foretold the future”,2 the film lays out some of the risks of technology taken to the extreme and emphasizes the need for a balance between individual rights, such as safety and privacy, and the advancement of technology.

We have entered the Age of Data

While we are still in the Digital Age, we are also in the Data Age: these new digital technologies use data as their driving force and principal business asset, and might fairly be called data-centric. This raises several important questions about the future of the industry.

Google Maps, for example, may tell you how long it will take to get to the office (having learned from your daily activity where your home and office are, and whether you are running late because you stayed up watching videos!). Highly customized marketing on social networks and e-commerce platforms may suggest that you need to buy groceries because it has learned how often you shop, your favorite brands (perhaps tempting you with competitor offers), or how much you usually spend (or can afford to spend because you have gone over budget that month). Think of enjoying your morning coffee while reading news items tailored to your interests, the list magically adjusting to the time you have allowed for your break. (You probably did not even consider what might be missing from that list.) How many of us have suspected that our phones are listening to us when they make suggestions apparently out of the blue?

Industry 4.0 and the Data Economy

While Industry 3.0 drove the enhancement of ICT, Industry 4.0 focuses on the full integration of information to develop applications that enable a fully digitized society. A revolution involving new and disruptive technologies has been unleashed: the fourth industrial revolution, or Industry 4.0. It encompasses advanced production and operational techniques powered by smart technologies integrated into business and our daily routines. The data those technologies generate enable individuals and businesses to reduce search and transaction costs and to make informed choices. In the Data Economy, many of our social and business interactions are data-centric. Technologies such as blockchain, robotics, AR/VR, AI, nanotechnology and the Internet of Things (IoT), among others, are driving this new technological stage. Industry 4.0 will fundamentally impact all economic ecosystems and, in particular, organizations.

On the plus side, it will optimize production processes and improve the relationship between businesses and consumers thanks to smart systems and the generation, processing and analysis of data. It will also change the workforce, requiring new skills and roles. Forecasts indicate that by 2030 between 400 and 800 million people might be affected by the automation of their jobs. The replacement of routine, mechanical tasks by systems with great analytical capacity reinforces the need for workers to focus on developing so-called soft skills.

According to www.willrobotstakemyjob.com, I will be keeping my job as a lawyer (good news on a personal level…), but for paralegals and legal assistants, 85% of current tasks will be automated. The impact of Industry 4.0 is not limited to the legal profession, however. Since 2000, apps based on these new technologies have become part of our everyday lives. We wake up and listen to music on Spotify or YouTube, review the latest posts on Twitter, update our CV on LinkedIn, and so on. Decisions assisted by prediction systems are ever more frequent: we watch what Netflix suggests, check the fluctuating value of our Bitcoin wallet or simply order food on UberEATS. Commuting in an autonomous car, living in an intelligent house, or undergoing personalized medical treatment based on the analysis of our own data now all seem to be just around the corner.

Challenges of legislation: under-regulation and over-regulation

While we may all agree that these new technologies make our daily lives easier, they carry associated risks: threats to data privacy and a lack of transparency in algorithms that might adversely affect competition or consumers. There are also many ethical issues with no clear answers.

Discussion about greater protection for privacy grows louder every day. Three years after Mark Zuckerberg appeared before the US Congress over the role of Cambridge Analytica, a new leak of Facebook documents revealed internal policies and practices relating to consumer harms. In Australia, pressure is mounting on social media companies through proposed anti-troll laws that, if passed, would enable courts to force social media giants to release the details of trolls in defamation cases. At the same time, legislation is being considered that would treat platforms as publishers, with liability for illegal content.

We need to adapt our legislative approach to resolve the conflicts that will undoubtedly arise in future. Unfortunately, the approach around the world has so far lacked consistency. While the European regulatory framework focusses on the rights of the individual, other countries such as the United States or Japan advocate more flexible regulation that promotes the development of the economy. This lack of harmonization has a clear impact on regions’ competitiveness and on organizations’ strategic decisions.

In the US, fragmented regulation has produced many federal legal provisions that contemplate numerous exceptions and have been shown to contain significant loopholes.3 This means that companies might develop their business and products using large amounts of data in a jurisdiction with fewer regulatory hurdles, but may struggle to translate this to other jurisdictions.

Japan set a goal in 2016 to become the leader in the transition from “Industry 4.0” to “Society 5.0”. In 2018, it adopted the Declaration to Become the World’s Most Advanced Nation in Information Technology and published the Basic Plan for Advancing the Utilization of Public and Private Sector Data, which outlines the government’s policy for promoting technologies such as AI and IoT. To support these goals, several legislative amendments were passed, both to the Personal Information Protection Act (in 2015, to facilitate the use of Big Data) and to the copyright protection regime (in 2009, creating exceptions to facilitate text and data mining).

New technologies, new values?

The EU began developing a new framework to regulate new technologies in 2018 with the General Data Protection Regulation (GDPR), followed more recently by the approval of the Data Package driven by the European strategic framework, including the draft Data Governance Act, the Regulation on the free flow of non-personal data and the amendment of the Public Sector Information Directive, which became the Open Data Directive.4

In 2020 the EU launched its initiative “A Europe fit for the Digital Age”5, with the aim of striking a balance between competitiveness and the protection of individual human rights. One of the key instruments of this project, “Shaping Europe’s Digital Future”6, aims to address three challenges: (i) technology that works for people; (ii) a fair and competitive digital economy; and (iii) an open, democratic and sustainable society.

It is no coincidence that Margrethe Vestager, the EU Commissioner for Competition, was appointed Executive Vice-President for A Europe Fit for the Digital Age. In Vestager’s own words: “Ensuring a global playing field in terms of competition is of the utmost importance, in particular when our competitors are not subject to the same rules as regards State subsidies. This is why I will work on developing the appropriate tools to guarantee fair competition both in the Single Market and at the global level.”7 This implies that setting boundaries around the use of data from third States can benefit the European Union and its citizens.

Ursula von der Leyen, President of the European Commission, declared in her mission letter to Margrethe Vestager: “Over the next five years, Europe must focus on maintaining our digital leadership where we have it, catching up where we lag behind and moving first on new-generation technologies. This must cut across all our work, from industry to innovation. At the same time, we must ensure that the European way is characterized by our human and ethical approach. New technologies can never mean new values.”

The EU has recently begun to adapt its existing regulatory framework, following a thorough discussion of the general and ethical principles on which it should be based, with the aim of creating a legislative package that enables the use of technology while ensuring a level playing field.

Advances in digital technologies are clearly progressing much faster than legislation, resulting in unpredictability and legal loopholes. Legal uncertainty always generates risk, which can manifest itself in two ways: under-regulation and over-regulation. Under-regulation usually means that top-tier tech companies are subject to light-touch restrictions, with the aim of stimulating innovation. While this may seem beneficial to the consumer, the datasets employed in AI systems may adversely impact individuals’ fundamental rights, such as privacy, intimacy or safety, and additional regulation may then be needed to overcome those negative effects. Over-regulation, on the other hand, can constrain R&D activity and competitiveness. There should be flexibility to allow for the development or inclusion of future technologies and applications, avoiding legislation that quickly becomes obsolete.

Regulation enables a balance to be struck between connectivity and users’ rights (including but not limited to privacy and safety). While it may be difficult to devise a legislative framework appropriate to all relevant stakeholders, setting some key principles that establish clear responsibility and governance should move us in the right direction. In addition, the following principles should apply: (i) standardized regulatory frameworks that allow new guidelines to be developed as new technologies emerge; (ii) proactive collaboration between the private sector and civil society (NGOs, think tanks) to identify and address emerging risks as quickly and effectively as possible; and (iii) a thorough analysis of all the different sorts of risks that may arise from innovation. So, can new technologies mean new values? It is all about finding the right balance.

Ethical dilemmas

As philosopher David Wong once said: “New technology is not good or bad; it has to do with how people choose to use it.” Technology ethics encompasses respect for employees and customers, the moral use of data and resources, the responsible adoption of disruptive tech and the creation of a culture of responsibility. In other words, the focus should be on whether particular uses of technology are ethically acceptable, and what restrictions should be placed on those uses to protect individuals’ rights and a competitive market.

The tech industry and some institutions are trying to establish specific best practices and guidelines for tech ethics. This is not entirely new territory for business: many companies are already struggling with ethical dilemmas around the use of AI-driven technologies and with the need to set and promote their ESG credentials.

Autonomous decision-making systems can lead to biased outcomes in processes such as tenant selection and mortgage qualification, as well as discrimination in hiring and financial lending. Bias is often found in the outcomes predicted by black-box machine-learning systems because of the inherent biases in the data used to train them.8 Disagreement remains on how to address this problem globally, with some proposing the creation of public-private oversight committees to increase the transparency of algorithms, in addition to government regulation.
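To make the bias point concrete, the sketch below shows how an auditor might quantify one simple fairness measure, the demographic parity gap (the difference in approval rates between groups) in a model’s decisions. It is illustrative only: the data, group labels and function are hypothetical assumptions, not drawn from any system cited here.

```python
# Minimal demographic-parity audit (hypothetical illustration).
# 'group' stands for a protected attribute; 'approved' is a model's
# binary decision. Neither comes from any real system discussed above.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Given (group, approved) pairs, return per-group approval rates
    and the largest gap between any two groups."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    return rates, max(rates.values()) - min(rates.values())

# Toy data mimicking a model trained on historically biased records:
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 55 + [("B", False)] * 45)
rates, gap = demographic_parity_gap(decisions)
print(rates)               # {'A': 0.8, 'B': 0.55}
print(f"gap = {gap:.2f}")  # 0.25 -- a gap this size would warrant review
```

A single aggregate metric like this cannot by itself prove or disprove discrimination, but it is the kind of transparency check the proposed oversight committees could require.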

Autonomous vehicles also pose a cybersecurity risk. Self-driving cars might fail in their predictions and decisions because of a technical fault or a lost connection. Even if the AI does not actually fail, it could make decisions that might be considered morally wrong. Autonomous vehicles could also be vulnerable to cyberattacks, or even to manipulation of the navigation system, for example by adding paint markings to the road. Such alterations can cause the algorithm to misclassify objects, and consequently lead the vehicle to behave in a way that could be dangerous.
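For a rough intuition of why such small alterations can mislead a perception system, here is a minimal, purely hypothetical sketch: a toy linear classifier (not any real vehicle stack), where a per-feature nudge far smaller than the signal itself is enough to flip the decision.

```python
# Toy adversarial-perturbation demo (hypothetical; illustrative only).
# For a linear scorer, a small coordinated nudge per feature flips the class.
import numpy as np

rng = np.random.default_rng(42)
w = rng.normal(size=100)   # weights of a toy "lane marking" detector
x = rng.normal(size=100)   # features extracted from one camera frame

score = w @ x              # the sign of the score decides the class
# Smallest uniform per-feature step guaranteed to flip the sign:
eps = (abs(score) + 1e-3) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)  # FGSM-style direction

print(f"per-feature step: {eps:.4f}")                          # a tiny change
print(f"score before: {score:+.3f}, after: {w @ x_adv:+.3f}")  # sign flips
```

Real perception models are deep and nonlinear, but published adversarial attacks on them exploit essentially the same sensitivity to small, targeted input changes.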

Different countries will probably develop their own guidelines to reduce the risks in human-robot interactions and the possibility of AI-based technologies being used to manipulate and abuse humans in sensitive circumstances, such as health care, care of the elderly or persons with disabilities, education, or use by children of toy robots, chatbots and companion robots. Similar ethical concerns exist for AR/VR. When designing a simulation, careful consideration must be given to what to include, prioritizing users’ safety and balancing the freedom to create against ethically valid content restrictions.

Internet governance also remains a highly contested topic, brought to the fore by the power of social networks and other gatekeepers. Governance efforts have focused on some very specific areas, under the ITU (International Telecommunication Union), WIPO (World Intellectual Property Organization) and ICANN (Internet Corporation for Assigned Names and Numbers). Nonetheless, there is a gap when it comes to matters such as freedom of speech or users’ privacy. The Internet Governance Forum (IGF) seeks to promote a “healthy balance between States, society and companies”, but uncertainty remains about how to deal with some of these issues, and the lack of legislation makes an effective standard of internet governance difficult to achieve.

UNESCO has declared internet governance to be a key issue, acknowledging the potential of the Internet for fostering sustainable human development, building inclusive knowledge societies and enhancing the free flow of information and ideas throughout the world. It advocates an open, transparent and inclusive approach to internet governance, based on and guided by the principle of openness, encompassing freedom of expression, respect for privacy, universal access and technical interoperability.9

The Spanish Data Protection Agency (SDPA) has been a pioneer in attempting to address these ethical dilemmas and has issued a Digital Pact for the Protection of People.10 The Pact is an instrument to encourage signatories to put in place sustainability policies and respectful business models, based on: (i) the greatest possible transparency for users, meaning that users know what data are being collected, when they are recorded and what they are used for; (ii) the promotion of gender equality and the protection of children, women victims of gender-based violence and other people in situations of vulnerability; (iii) the prevention of biased algorithmic decisions on grounds of race, origin, beliefs, religion or gender, among others; and (iv) the endorsement of values such as dignity, freedom, democracy, equality, individual autonomy and justice, and their embedding in the governance of automated reasoning. This list of best practices is both a tool for protecting individuals’ fundamental rights and a distinctive competitive asset for the public and private sectors alike. Telefónica, for example, released its own Digital Pact in 2021.

Trust is key to overcoming these ethical dilemmas. In fact, trust is the cornerstone of the European Union’s digital strategy. Trustworthy technology should allow science to progress without seriously affecting the most relevant rights of the individual. It requires that users understand how the technology works, along with its risks and benefits, allowing them to make responsible use of it.

Guidance for the future

When Internet and Web pioneers devised the foundations of the digital technologies we use today, they were trying to create a global network of knowledge accessible to all. Most experts at the time could not imagine where their vision would lead: to a complex parallel virtual world, driven by economic interests, in which people carry out much of their daily lives. Rights and freedoms that individuals and societies have fought for and won are again in play: equality, physical and moral integrity, honor, reputation, intimacy and privacy, property, image, dignity, and the freedoms of expression, opinion, movement, assembly and association. As in the physical world, the all-encompassing nature of life in the digital world has led nations and legislators to try to establish boundaries to prevent the most perverse and damaging uses of these technologies.

How we adapt to this change will be fundamental. No one can anticipate how technologies will develop further. Experts predict that by 2030 space trips will be commonplace, and advances such as 3D-printed organs or mind-controlled prosthetics will be considered normal. Advances in technology will continue to rewrite the rules of business, whose leaders are eager to use these technologies to drive growth for fear of missing out.

We need to move from a “data-centric” to a “people-centric” approach. Big data analytics have a positive impact on people’s lives, but they also have real consequences for the safety and privacy of individuals. Technological advances must still respect fundamental rights. Companies should be encouraged to continue R&D; regulation should not hold back technological progress where risks have been mitigated.

We must find a balance between over-regulation and under-regulation. A comprehensive, transparent and efficient framework should enable future innovations while protecting users’ rights. We also need harmonized regulation. The size and importance of the EU internal market (and population) mean that the EU’s more restrictive regulatory framework, compared with that of the US, might become a de facto global standard. This could lead to a ratcheting up of regulation, as non-EU corporations advocate similar restrictions in their countries of origin so as to harmonize their products and services and keep them globally competitive.

We need an ethical level playing field. Corporations should be obliged to guarantee the ethical standards of their products from the very moment of their design (ethics by design). Oversight of this ethical commitment should be based on public-private collaboration to establish moral principles and steer technological progress toward the common good. We need to face the challenges that science and technology bring, addressing our (cybersecurity) fears and managing ethical questions.

The solution is awareness and trust. Real issues and real risks need to be known and understood by users, especially minors and vulnerable people. The burden of raising awareness cannot be placed only on platforms or the creators of technology; users must do their part. By understanding the potential positive and negative consequences of using a given technology, we can best ensure trust.

The pioneers of digital technology understood the impact it could have in the context of their goal of enhancing global communication and knowledge. If we achieve a similar level of awareness and trust among the nearly five billion people now using the internet and emerging technologies, we will ensure that they make our lives better, as their visionary creators intended.

1 YAQ DISTRIBUCIÓN (2021). “The APP”. YAQ Distribución, available at <https://www.yaqdistribucion.com/cortos/the_app>. Date of reference: November 29, 2021.
2 MERINO, J. (2016). “The APP”. Cortos De Metraje, available at <https://cortosdemetraje.com/the-app/>. Date of reference: November 29, 2021.

3 CLARK, K. (2021). “The current state of US state data privacy laws”. The Drum News, available at <https://www.thedrum.com/news/2021/04/26/the-current-state-us-state-data-privacy-laws>. Date of reference: November 18, 2021; DAVIS, M. (2021). “US must catch up with rest of the world on data privacy”. Roll Call, available at <https://www.rollcall.com/2021/10/14/us-must-catch-up-with-rest-of-the-world-on-data-privacy/>. Date of reference: November 18, 2021; SALDAÑA, M. (2007). “La protección de la privacidad en la sociedad tecnológica: El derecho constitucional a la privacidad de la información personal en los Estados Unidos”. Araucaria: Revista Iberoamericana de Filosofía, Política y Humanidades, No. 18, 2007, ISSN 1575-6823, pp. 85-115.

4 Directive (EU) 2019/1024 of the European Parliament and of the Council of 20 June 2019 on open data and the re-use of public sector information.
5 EUROPEAN COMMISSION (2020). “A Europe fit for the Digital Age”. European Commission Factsheets: A Europe fit for the Digital Age, available at <https://ec.europa.eu/info/publications/factsheets-europe-fit-digital-age_es>. Date of reference: November 29, 2021.

6 EUROPEAN COMMISSION (2020). “Shaping Europe’s digital future”. European Commission Press Corner: Shaping Europe’s digital future, available at <https://ec.europa.eu/commission/presscorner/detail/en/fs_20_278>. Date of reference: November 29, 2021.

7 EUROPEAN COMMISSION (2020). “Answers To The European Parliament Questionnaire To The Commissioner-Designate Margrethe Vestager”. European Commission, available at <https://ec.europa.eu/commission/commissioners/sites/default/files/commissioner_ep_hearings/answers-ep-questionnaire-vestager.pdf>. Date of reference: November 30, 2021.

8 AKSELROD, O. (2021). “How Artificial Intelligence Can Deepen Racial and Economic Inequities”. ACLU News, available at <https://www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities/>.

9 https://en.unesco.org/themes/internet-governance

10 AGENCIA ESPAÑOLA DE PROTECCIÓN DE DATOS (2021). “Digital Pact for the Protection of People”. AEPD: Pacto Digital, available at <https://www.aepd.es/es/pactodigital>. Date of reference: November 22, 2021.


María González heads up the Industrial / Intellectual Property & Digital Business department at CMS. She specialises in advising domestic and international companies on intellectual property, industrial property, copyright and technology, particularly in dispute resolution. She is an expert in the drafting, negotiation and termination of a wide range of IP/IT agreements (licences, trademarks, designs, software, outsourcing, distribution agreements, transfers, assignments, etc.). She has particular expertise in technology, digital transformation and data analytics in sectors such as insurtech, fintech, energy, health and wellbeing and real estate, among others. María was appointed by INTA as a member of its European Global Advisory Board and as its representative at the EUIPO Observatory in the IP in the Digital World Working Group. She is a member of the steering committee of EUIPO’s expert group on the Anti-Counterfeiting Technology Guide project, and a member of the board of the Spanish group of AIPPI. She has been recognized in the field of IP by leading legal directories including Chambers & Partners, Legal 500, IP Stars, IAM Patents, MLI and Who’s Who Legal.

https://cms.law/en/esp/people/maria-gonzalez-gordon