AI: The power to generate ideas should be balanced by knowledge sharing

March 1, 2023

Rachel Free, Partner and Patent Attorney, CMS


If you thought that “Cathedral with pink sky” was the text prompt used to generate the image below, you could well be right. This type of photorealistic image can easily be generated today by AI models such as Rombach et al.’s Latent Diffusion.[1][2] Michele Merrell’s article on ChatGPT, also in this edition of The Mobile Century magazine, explains the power of generative AI applied to all sectors of the world economy. By using generative AI to come up with new ideas, it is possible to dramatically accelerate the pace of technological innovation.

In this article I argue that such power to generate new ideas should be balanced by knowledge sharing: sharing knowledge about the technologies used to generate those new ideas, and sharing knowledge about the new ideas themselves.[3]

Many of us have tried generative AI tools, but not so many of us understand how generative AI algorithms work. This makes it difficult for citizens and society to “look after themselves” and to ensure that AI technologies remain “human-centric”.

If a new innovation is generated by AI technology, that innovation may easily be kept as a trade secret and exploited as “black box” technology, deployed as a cloud service for example. Where innovation is protected only by trade secrets, it is difficult for society to understand how that innovation is being used or may be used.

AI technology has a significant impact on our lives, and it is clear that generative AI is no exception. Already AI technologies are being used in the courtroom[4] to advise defendants what to say. AI technologies are increasingly used in healthcare to triage[5] patients and for many other purposes. AI technology today has some degree of independence and uncontrollability, since machine learning models are not written with explicit rules.

AI technology is likely to advance

In recent years many authors have written about the singularity: a theoretical point in the future at which AI technology reaches parity with human intelligence and soon afterwards is able to create AI technology with even greater powers. It is argued that after the singularity there would be an exponential increase in the ability of AI algorithms until humans are left behind, potentially with little or no freedom. Whilst I am not sure I agree with the notion of an exponential increase in the ability of AI algorithms at the singularity, I do think it likely that AI technology will advance, possibly in terms of independence and “uncontrollability”. For that reason, I think it would be prudent to promote knowledge sharing of AI algorithms now.

How to promote knowledge sharing

Mechanisms to promote knowledge sharing of AI algorithms include:

  • Market pressure due to scarcity of AI experts
  • Regulation and standards
  • Intellectual property
  • Access to data

Market pressure due to scarcity of AI experts leads to knowledge sharing

Because of the scarcity of AI experts, industry is at present prepared to allow AI experts to work in a similar way to academics and to publish their work at AI conferences, in order to attract and retain AI talent. The expanding opportunities for scientists to move between academia and business as companies step up recruitment are explained in Nature[6]: “In the past, there were fewer research labs in industry and it was harder to publish. Now, many companies have an open-publication policy, and that means you’re participating in peer review and embedded in a research community. It’s easy to go back to academia. It’s certainly not held against a faculty candidate if they were in industry for a few years, as long as they continued to publish. In fact, industry experience is highly valued.” Thus, at present, market pressure due to the scarcity of AI experts is leading to knowledge sharing.

Regulation and standards as mechanisms to promote, or in the case of regulation mandate, knowledge sharing of AI algorithms

In Europe, the proposed AI Act sets out a plan for the regulation of high-risk AI. The implication of the proposed regulation is that it may become mandatory to disclose information about AI algorithms in high-risk sectors such as healthcare and self-driving vehicles. Lower-risk AI algorithms would fall outside the mandatory disclosure system; presumably that would include AI applications such as information retrieval, recommender systems, digital assistants, and crop planning and management, among others.

In the case of AI standards, if an AI algorithm is to be standards compliant, it is likely to be subject to knowledge sharing or disclosure requirements, such as providing a link to a code registry containing the source code of the model.

Intellectual property system as a mechanism to promote knowledge sharing

Patents are an incentive to knowledge sharing, since a patentee is given a monopoly right in return for publishing full details of the technology. Patents are written using an internationally agreed standard document structure and so are a type of “universal language” understood by readers in many nations. Patents are classified according to carefully designed and maintained classification schemes, and form a corpus of data which is easily searched using those classification codes as well as keywords. In contrast to many academic publications and peer-reviewed academic journals, patent publications are freely available.

Generally, AI patents include both high-level and detailed information about AI algorithms, but not source code. Typically, patent documents set out the problems to be addressed by the invention and explain how those problems are solved. This type of explanation is arguably just what is needed for transparency, in terms of both the ethical and the economic reasons for knowledge sharing.

The UK Supreme Court will address the issue of knowledge sharing when it hears the appeal in the DABUS[7] test case in March 2023. DABUS is an AI algorithm listed as an inventor on two UK patent applications. The applications were previously refused on the grounds that DABUS is not a human and so cannot, under current patent law, be recognised as an inventor. Similar test cases were mounted in other jurisdictions, including in Australia in late 2022, with similar results. The outcome of the appeal in the UK will therefore potentially have significant implications worldwide for knowledge sharing of technologies created by generative AI algorithms.

[1] https://arxiv.org/abs/2112.10752

[2] In case you are wondering, the photograph of St Paul’s Cathedral was taken manually by myself just before snowfall in central London. The sky was naturally pink at that time.

[3] Some of the material in this article first appeared in the chapter I wrote for the book “Artificial Intelligence Law and Regulation”, 2022, Edward Elgar Publishing, ISBN: 978 1 80037 171 2

[4] https://futurism.com/court-case-ai-defendant-earpiece

[5] https://proxet.com/blog/artificial-intelligence-based-triage-using-ai-to-triage-patients-in-a-healthcare-facility/

[6] https://www.nature.com/articles/d41586-019-01248-w

[7] Device for the Autonomous Bootstrapping of Unified Sentience


Rachel Free is a partner in the patent team at international law firm CMS, where she helps clients to protect their technology through patents. The patent filing and prosecution team at CMS Cameron McKenna Nabarro Olswang LLP is ranked band 1 in the Legal 500. She has an MSc in Artificial Intelligence and a DPhil in Vision Science. Rachel is a European and UK patent attorney and has worked on computer-related patent drafting and prosecution throughout her career. She is a member of the All Party Parliamentary Group on AI (APPG AI) in the UK, and an advisory board member of the University of Bath centre for doctoral training on AI. She is Vice-Chair of the Chartered Institute of Patent Attorneys (CIPA) computer technology committee and is mentioned in the Legal 500 CMS entry for law firms with patent attorneys as having ‘deep technical knowledge’.