If a software program which purports to be an early form of A.I. malfunctions, who is liable?
Technology (second edition)
As Indonesian law has yet to regulate artificial intelligence, reference must be made to Law No. 8 of 1999 regarding Consumer Protection (April 20, 1999) (the “Consumer Protection Law”). Under the Consumer Protection Law, an entrepreneur shall be liable for any damages sustained by the consumer due to the goods and/or services produced or traded by the entrepreneur. As such, in the event of an A.I. malfunction, the entrepreneur shall be liable for all damages suffered by the consumer of such A.I.
The use of A.I. has not been specifically regulated by Mexican law and therefore general liability principles would apply. Thus, pursuant to the Consumer Protection Act, in general terms, the vendor would be liable to the final consumer for any malfunction of the relevant software program.
Where the supplier is not the manufacturer of the software program, the manufacturer would be liable to the supplier for any damage caused by the relevant malfunction.
Under certain circumstances, the supply of a software program can qualify as a sale of goods. The buyer can claim breach of contract if the software program is not in conformity with the agreement. The software program is not in conformity with the agreement if it does not have the qualities that the buyer, given the nature of the object and the statements of the seller about it, could have expected on the basis of the agreement. In the case of a non-conforming delivery, the seller can be held liable vis-à-vis the buyer for malfunctions in the A.I. functionality.
Liability can also be based on general rules of unlawful conduct or certain strict liabilities. Art. 6:162 of the Dutch Civil Code qualifies as an unlawful act – amongst others – an act or omission in violation of a duty of care. If a person/entity fails to observe a duty of care and as a result brings certain risks into existence, said person/entity can be held liable for damages if said risks actually materialize. In relation to A.I. software programs, a duty of care rests on a number of parties, such as the creator, the reseller or the party actually using the A.I. software. The Dutch Civil Code also provides for certain types of strict liability which can apply in case of A.I. software malfunctions. In this respect, reference is made to the strict liability of the possessor of a movable thing if the A.I. software forms part of such a movable thing (i.e. as part of a robot, self-driving car, etc.) (art. 6:173 of the Dutch Civil Code). Strict liability can also arise from the EU Product Liability Directive 85/374/EEC as implemented in articles 6:185 et seq. of the Dutch Civil Code.
Under the Consumer Code, product liability is based on a strict liability regime, and any entity that participates in the chain of development, distribution and/or offer of the product is jointly and severally liable for any product defect or malfunction. Therefore, all the entities of the production chain may be subject to liability. The main causes of action that may trigger liability relate to defects in the products and services, failure to provide clear information to consumers on the risks and limitations of the products and services, and misleading advertising. Therefore, liability may be triggered if the customer does not receive all information on how the A.I. works and any possible malfunctions/risks associated with the product or, in any case, if the product is deemed, by its nature, defective.
As for the Civil Code regime, which is usually applicable to contracts between corporations, liability is imposed on the entities that caused or contributed to the damage; in this scenario, joint liability may only be imposed based on express statutory or contractual provisions. In any event, the Civil Code contemplates the "theory of risk", imposing strict liability on any service provider that offers services that are deemed to expose people to an unreasonable and unexpected risk.
Ordinarily, there will be strict liability for the producer of defective products for consumers, cf. the EU Product Liability Directive 85/374/EEC as implemented into Luxembourg law by the law of 21 April 1989 relating to civil liability for defective products. Where such defects have resulted from computer-assisted design or creation or other software-assisted processes, it will generally be the person who programmed the software who will be held liable. Again, this can be problematic, as the person who feeds the AI data or "trains" it is not necessarily the same person who originally programmed it.
When the software starts to make decisions "on its own" based on machine learning, this liability concept becomes less clear. However, for the time being at least, it would seem most likely that the licensor/programmer of the AI product would be liable pursuant to the strict liability regime under Luxembourg law as referred to above. In the business (i.e. non-consumer) context, contractual provisions will usually specify where liability will sit in any event.
Currently the Romanian national legal framework does not contain any explicit provisions with regard to any form of A.I. Therefore, the general rules on civil contractual liability and tort law, as well as administrative and criminal liability would apply on a case-by-case basis, depending on the specific circumstances of the case.
For the time being, there is no specific regulation of Artificial Intelligence in Spain. Notwithstanding the above, on 10 April 2018, 25 European Union Member States, including Spain, signed a Declaration of Cooperation on Artificial Intelligence. Consequently, the European Commission will now work with Member States on a coordinated plan.
Additionally, in April 2018 the European Commission released a Communication stating that the European Union will present measures to ensure an appropriate ethical and legal framework regulating artificial intelligence, including, among others, guidance on the interpretation of the Product Liability Directive in the light of technological developments, to ensure legal clarity for consumers and manufacturers in the case of defective products. For the time being, the European Commission has appointed experts to a new High Level Group on Artificial Intelligence, which will make recommendations on how to approach these innovative technologies.
Europe wants to be at the forefront of these developments, and its intention is therefore to enact a common legal framework under which artificial intelligence can succeed and work for everyone across the continent.
In light of the above, as Spain does not have any specific Artificial Intelligence liability framework, current software provisions will apply to malfunctions of early forms of Artificial Intelligence, with the software developer generally being found liable for the malfunction of the program.
A.I. is not regulated per se, and courts in India are yet to adjudicate on a matter involving loss or harm caused by an A.I.-based system. At present, a machine based on A.I. will be treated like a regular machine, and liability arising from the use of such a machine will be settled through the strict product liability principle, whereby the creator/manufacturer shall be held liable. Product liability in India is based on the Consumer Protection Act, 1986, the Sale of Goods Act, 1930 and the law of torts.
Currently there is no specific legislation regulating such issues. Therefore, general rules on liability under Turkish law will be applicable. The causal link between the action (or omission), the malfunction, and the loss/damage/injury will be the determining factor when attributing criminal or legal liability.
The question of A.I. malfunction liability would, for the time being, probably have to be resolved with the help of provisions concerning product liability and classical principles of liability. Whether this would produce a fair result remains to be seen.
There are no specific rules as regards A.I. functionality under Swiss law. From a civil law perspective, technology-neutral general Swiss liability law is applied, with the main sources of law being the Code of Obligations as regards fault-based and contractual liability as well as the Federal Road Traffic Act of 19 December 1958, as amended (RTA), and the Federal Law on Product Liability of 18 June 1993, as amended (PLA), which address strict liability. The decisive factor for any liability is to whom the unlawful conduct is attributable. Only individuals or legal entities may be liable, while liability of a machine or an A.I. functionality itself is excluded, meaning that liability for the operation of autonomous systems (including A.I. functionalities) must always be based on the act or omission of a person, irrespective of an integrated software's capability to amend the underlying software code. A similar logic applies from a criminal law perspective: only individuals (and not machines) can be primarily criminally liable pursuant to the Swiss Criminal Code of 21 December 1937, as amended (SCC), meaning that an individual must have caused (by act or omission) an unlawful offense (e.g. a bodily injury or damage to an object) wilfully or negligently. A subsidiary liability of legal entities may apply if it is not possible to attribute an act or omission to an individual due to the inadequate organisation of such legal entity. Sanctions of up to CHF 5 million may be imposed for a felony or misdemeanour attributed to such legal entity.
Under the current law, AI developers and operators may be liable for malfunctions of early forms of AI. We consider that an early form of AI is more akin to a machine subject to substantial human control than to one with little human control. When AI malfunctions, the AI operator may be liable in the first place, and the AI user may seek liquidated damages and other remedies based on the user contract; the AI developer may in turn be liable if its breach of the AI development contract resulted in the malfunction. According to the Consumer Protection Law, when the AI user's legitimate interests are infringed due to an AI malfunction, the AI user may claim compensation from the AI operator, and the AI operator may then claim compensation from the AI developer if the developer is to blame for the malfunction. Where the AI user suffers personal injury or property damage due to a defect in the AI, in accordance with the Tort Liability Law, the user may seek damages from either the AI operator or the developer in the first place, and the AI operator or the developer may be liable entirely or proportionally according to their respective fault in the AI malfunction.
There is no specific legislation regulating artificial intelligence ("AI") in Malaysia. Software programmes with an early form of AI would be treated similarly to other consumer products. In the event of malfunction, liability would be addressed by the Sale of Goods Act 1957 ("SOGA"), the Consumer Protection Act 1999 ("CPA") and the law of torts, which collectively serve as a platform for product safety and consumer protection.
Section 68(1) of the CPA states that “where any damage is caused wholly or partly by a defect in a product, the following persons shall be liable for the damage:
(a) the producer of the product;
(b) the person who, by putting his name on the product or using a trade mark or other distinguishing mark in relation to the product, has held himself out to be the producer of the product; and
(c) the person who has, in the course of his business, imported the product into Malaysia in order to supply it to another person.”
The SOGA and the CPA impose several implied terms which cannot be excluded by contract when dealing with consumers. These include implied guarantees and conditions regarding title and lack of encumbrances, correspondence with description, satisfactory or acceptable quality, fitness for purpose, price, and repairs and spare parts. The AI software manufacturer or supplier will be liable for any malfunction that results in a breach of these mandatory implied terms, depending on the extent of non-compliance with the representations and guarantees made by the manufacturer to the supplier and the supplier to the consumer respectively regarding the AI software programme.
Manufacturers may rely on the development risk defence to escape liability by demonstrating that, beyond observing the industry standard, the scientific and technical knowledge at the relevant time did not allow the defect to be discovered. However, the strict liability rule introduced in the CPA will have a significant bearing in negating this defence. Manufacturers and/or suppliers may also be found liable for AI software malfunctions under the tort of negligence.
However, with the rapidly growing development of AI such as the introduction of Google Duplex, AI may no longer be a mere product, but one capable of human mimicry. In such event, the legal position on AI would drastically change.
Currently, the general principle is that a person shall be considered liable not only for damage he/she causes through his/her own act, but also for damage caused by items under his/her custody (Civil Code, Art. 1242). The scope of application of this principle may decrease as items driven by Artificial Intelligence increasingly operate outside their owner's custody.
To further develop this liability principle and following EU legislation, a strict liability regime was enacted in 1998, which applies to the producer of a product in respect of damage caused by a defect in his product. This liability applies irrespective of whether or not the producer is bound to the victim by contract (French Civil Code, Art. 1245 et seq.). Strict liability makes things easier for the victim, who may sue the manufacturer, a supplier of individual parts or, ultimately, the reseller of the product. The victim must only prove the lack of safety of the item.
This liability regime might address situations where AI is embedded in or associated with pieces of equipment or hardware, but less so in respect of a software program per se, insofar as this is not (yet) considered a 'product.' In addition, risks that arise after fabrication and sale, due to the AI's autonomous learning curve, will ultimately cease to be predictable and can hardly be anticipated by the initial designer. Disputes may then arise as to who is liable: the owner of the item, the designer, the software programmer, or the user instructing the item.
In this context, the liability regime in respect of artificial intelligence seems likely to evolve.
Under German law, liability for malfunctions of a software program which purports to be an early form of A.I. remains unresolved. Three different approaches are discussed among legal scholars. One opinion attributes liability to the operator pursuant to sections 280 and 823 BGB; in a legal sense, the attribution of a breach of duty or of fault is the central problem in this context. Another opinion would solve this problem with a new strict liability regime, independent of negligence and intent, similar to product liability. However, there is still no legal basis for this concept in German law. A third idea, which also lacks a legal basis, is to create a separate legal entity for A.I. - the so-called "e-person" - as a counterpart to natural and legal persons.
If the A.I. software is sold to a consumer and subsequently malfunctions, the issue of liability may be governed by consumer protection laws such as the Sale of Goods Act (Chapter 393) and the Unfair Contract Terms Act (Chapter 396). The Sale of Goods Act imposes several implied terms which cannot be excluded by contract when dealing with consumers. These include implied conditions or warranties regarding title and lack of encumbrances, correspondence with description, satisfactory quality and fitness for purpose. Therefore, the A.I. software provider will be liable for any malfunction that results in a breach of these mandatory implied terms.
Otherwise, the issue of liability will generally depend on the contractual agreement between the A.I. software provider and the software user.
In the consumer landscape, under the Australian Consumer Law, a supplier guarantees its product is fit for purpose. Where an AI product malfunctions in circumstances which enliven this regime, the supplier would bear liability for the defective product. However, this interpretation relies on a linear scenario where the supplier has held out its AI product can do A but it instead does B.
In the business scenario, generally contractual provisions related to defects or malfunction will be negotiated between the parties. Such provisions will allocate the risk and any consequential liability to the appropriate party.
The liability for malfunctioning of an AI will typically be determined by the terms of the agreement under which the AI was provided. License agreements frequently limit the liability of the licensor/provider, and may even require the licensee/user to indemnify the licensor for liabilities arising from the licensee's use (regardless of malfunction).
In the absence of a contractual relationship, the liability analysis would be in tort. The injured party would have to demonstrate negligence - that it was owed a duty of care, that the duty was breached, and that the malfunction was the cause of the injury. Depending upon the facts, a tort claim could be maintained against the developer/licensor or against the user who deployed the AI.
In Japan, there is no clear rule on liability for malfunctions of a software program that purports to be an early form of A.I. Theoretically, such liability may be found based on (i) strict liability under the Product Liability Act, (ii) tort under the Civil Code, or (iii) breach of contract or a defective product under the Civil Code. If such a software program is incorporated into certain equipment or another product, and such product is found to be defective, the manufacturer of such product may be liable under the Product Liability Act. If such malfunctions were foreseeable by a party (e.g., a manufacturer or user of the software program) and the negligence (or intent) of such party is established, such party may be liable for damages flowing from a causal relationship under a tort claim; however, whether such malfunctions were foreseeable would heavily depend on the nature of the A.I., the malfunctions and other circumstances.