On 7 September 2020, the UK’s Intellectual Property Office (IPO) published a call for views on the impact of Artificial Intelligence (AI) on the UK’s Intellectual Property (IP) laws. Responses are due by 30 November.
The consultation takes place against the backdrop of worldwide interest from governments and regulators in AI and the law.
The UK consultation seeks views primarily by reference to AI's impact on existing IP rights:
- patents
- copyright (and database rights)
- design rights
- trade marks
- confidential information (aka ‘trade secrets’)
The following sections are not full summaries of the consultation – instead they are a highly personal selection of topics which I found interesting:
Can AI be an ‘inventor’?
In 2019, both the UK Intellectual Property Office and the European Patent Office rejected patent applications which asserted that an ‘inventor’ was an artificial intelligence named ‘DABUS’. The decisions are being appealed. The reason for the rejection was that only a natural person (under the current law) can be an ‘inventor’.
The consultation seeks views on whether AI can ‘devise’ an invention. Should AI be considered a tool for human inventors to use, or will there be circumstances in which either a human inventor cannot be identified or the ‘nominal’ inventor has no involvement in the actual invention?
Disclosing how to work AI inventions
A fundamental part of the policy ‘balance’ in the patent system is that in order to be granted a time-limited exclusive right, the patent applicant has to provide enough detail so that a ‘skilled person’ can perform the invention to the full extent claimed in the patent.
However, how does this work if the skilled person also needs access to a particular AI or if the process cannot be understood by a human – either due to complexity or decisions being made in a ‘black box’? The consultation seeks views on whether and how the law on disclosure should be updated.
Can AI infringe?
Almost (but not entirely) mirroring the question of whether only natural persons can invent: can only natural persons infringe? The consultation recognises that legal responsibility for the acts of an AI goes much wider than the patent system (e.g. referencing autonomous vehicles), and so it both seeks views and recognises that policy decisions will need to be informed by the wider context.
Use of copyright materials by AI
Some AI systems ‘learn’ by being exposed to a very large number of inputs. Whether these are written works, images, sounds or video, it is likely that many of the learning inputs are copyright-protected works or collections of data protected by database rights. In many cases, the ‘training’ of AI will involve infringing copyright, related rights or database rights. The consultation seeks views on whether the current system is fit for purpose, whether the scope of permitted use needs to widen or narrow, and what changes might be needed to protect copyright owners.
Can AI create works?
Whilst Anglo-American systems see copyright as an economic tool to reward and stimulate creativity, other traditions (especially in Continental Europe) see recognition and protection for authors and performers as equally important. However, both traditions place human creativity at the heart of copyright, and in most jurisdictions AI cannot create copyright-protected works. The UK already takes an unusual approach, and it is not clear where it should now go. The IPO explains:
“Unlike most other countries, the UK protects computer-generated works which do not have a human creator (s178 CDPA). The law designates the author of such a work as “the person by whom the arrangements necessary for the creation of the work are undertaken” (s9(3) CDPA). Protection lasts for 50 years from the date the work is made (s12(7) CDPA).
When proposed in 1987, this was said by Lord Young of Graffham to be “the first copyright legislation anywhere in the world which attempts to deal specifically with the advent of artificial intelligence”. It was expressly designed to do more than protect works created using a computer as a “clever pencil”. Instead, it was meant to protect material such as weather maps, output from expert systems, and works generated by AI.
Although it was expected that other countries would follow suit, few countries other than the UK currently provide similar protection for computer-generated works.
Since these provisions became law in 1988, the concept of originality has evolved. This has led to some uncertainty about how the computer-generated works provision applies.
Literary, dramatic, musical and artistic works are only protected by copyright if they are original. In 1988, “original” meant a work must be the product of the “skill, labour or judgement” of its author. But the current approach is that a work must be “the author’s own intellectual creation”. This means it must result from the author’s free and creative choices and exhibit their “personal touch”. It is unclear how these concepts can apply to AI works and some argue that a separate definition of originality may be needed.
By designating a human as the author of a work generated by an AI, the UK approach also separates authorship and creativity. The creator of the original work is the AI, but the “author” under the law is a person who has not made any creative input to it. This sits uneasily with the modern approach to originality in wider copyright law, where creativity and authorship go hand-in-hand.
As computer-generated works have “no human author”, it appears that the concept of “joint authorship” does not apply to works co-created by humans and AI systems. As such, there is some ambiguity about the status of AI-assisted works…
So-called “entrepreneurial works” – sound recordings, films, broadcasts and typographical arrangements – do not have an originality requirement. These belong to their producers, makers and publishers, regardless of their creative input. This protection would appear to apply to AI-generated material, without need for specific provision. However, it is less extensive than the protection granted to original works. For example, the owner of musical copyright can prevent any reproduction of their music but the owner of copyright in a sound recording can only prevent copying of that particular recording.
In 1987, when the government legislated to protect AI-generated works, the Earl of Stockton said he hoped this would allow future investment in AI “to be made with confidence”. But it is unclear whether it has had this effect. The UK remains one of only a handful of countries worldwide that provides this protection. Investment in AI has taken place in other countries, such as the United States, which do not provide this type of protection. Some people argue that this protection is not needed, and others that it should be provided differently.
From the perspective of an AI system, the role of copyright as an incentive would appear to have little meaning. An AI does not seek protection of its personal expression nor financial reward for its work. Regardless of copyright, an AI system will generate content.
It also seems hard to justify protecting AI-generated works on natural rights grounds. AI systems are still far from being considered individuals with their own personalities.
There may also be a more fundamental reason to distinguish between human and AI-generated works. Some argue that copyright should promote and protect human creativity, not machine creativity. According to this view, works created by humans should be given protection but those generated by machines – and potentially competing with human-created works – should not. Addressing these arguments could mean removing or limiting protection for computer-generated works.
On the other hand, protection for AI-generated works may be justified if it incentivises investment in AI. This was the original basis for providing this protection. If there is evidence that this is the case, it could make sense to continue to protect these works. Depending on the economic and legal impacts, this may mean maintaining the current approach, or providing a different type of protection.”
– Extract from the UK IPO consultation on AI and copyright
Can AI own or infringe design rights?
With the terms of Brexit not yet settled, the legal regime for the protection of design rights in the UK after 31 December 2020 remains unclear. Against this uncertainty, the IPO seeks views on whether AI can own and/or infringe design rights.
Who is the ‘average consumer’ in the age of recommendation bots?
The current pandemic lockdown has accelerated the trend from physical shopping to online shopping. Most consumers will start by searching for a product (either via a search engine or within an e-commerce platform) and will then be ‘assisted’ by purchase suggestions.
This is a very different process from a consumer scanning a shelf for a product, and it raises existential questions about the use and value of trade marks – consider the situation in which a consumer’s search for a product by its trade mark results in more highly-rated suggestions for alternative products. Who now is the ‘average consumer’, and how does one assess the ‘likelihood of confusion’?
Can AI use trade marks ‘in the course of business’?
The current law references a ‘person’ using trade marks in the course of business. How does this apply when AI ‘sale optimisation’ bots interface with a ‘purchase suggestion’ bot – in each case without humans making a decision to use (or not) a trade mark?
The IPO seeks views on these issues.
Is this really how AI is protected?
Despite the UK IP system protecting both software and AI-created works by copyright (but not patents), the IPO notes that in practice much AI research is protected by keeping it secret. From a policy perspective this may create issues: on the one hand, smaller businesses may struggle to protect their work properly; on the other, the lack of publication may inhibit research progress, since scientific progress has been described as the work of ‘dwarfs standing on the shoulders of giants’.
Ethical issues: e.g. oversight of ‘mutant algorithms’
A further challenge arising from keeping AI secret is that its use may have unintended real-world consequences. Whilst it is not clear that AI was responsible, the UK government recently described the algorithm used to assign school grades in 2020 (when exams were cancelled due to lockdown) as a ‘mutant algorithm’ that had not been subject to adequate scrutiny or oversight. Other examples include automated credit scoring and benefits assessment systems, which can raise similar ethical oversight issues.
The IPO seeks views on these issues, although it seems to me that any solutions would need to go much wider than the intellectual property legal framework.