If a software program which purports to be a form of A.I. malfunctions, who is liable?
Technology (3rd edition)
No relevant regulations have been adopted.
We do not have specific regulations for software programs or artificial intelligence creations; however, our Civil Code establishes who is responsible for damages caused by inanimate objects. On that subject, Article 1384 of the Civil Code indicates that the owner of a thing, even an inanimate one, is responsible for it while it is under his custody, and the owner is therefore liable for the damage that the thing causes. Under this premise and under Dominican law, if a software program which purports to be a form of artificial intelligence malfunctions, the owner/guardian of said software will be liable for the damage it might cause to others.
There are no specific provisions under Egyptian law that deal with A.I. However, liability in connection with malfunctions of A.I. shall be governed by the general principles of contractual or tortious liability, as the case may be. Under both types of liability, liability shall fall on the party that committed the fault which led to the damages, i.e. the manufacturer of the A.I. or the operator, as the case may be.
There is no special law regulating A.I. The State Chancellery and the Ministry of Economic Affairs and Communications have launched a cross-sectoral project to analyze and prepare for the deployment of artificial intelligence in Estonia. The aim of the expert group is also to draft amendments to laws to make it possible to use fully autonomous information systems in different economic areas and to ensure the clarity of the legal area and the necessary supervision.
Currently, the issue of liability is regulated in Estonia by the Estonian Law of Obligations Act (LOA). The question of who is liable in case of A.I. malfunctions does not have a clear answer in the law. However, the LOA regulates situations where someone's animal/pet (such as a dog) causes damage or where damage is caused by a major source of danger (for example, a car). In such cases, the person who has power over the pet (the owner) or over the major source of danger (the driver or, in some cases, the owner of the car) is liable for the damage caused. A.I. can in some cases be considered a major source of danger or be treated as an animal (in Estonian legislation, animals are subject to the provisions applicable to things). Based on these provisions, the owner of the A.I. should be liable for damage due to the malfunctions of the A.I. However, there might also be liability of the software developer, the manufacturer or the seller of the A.I. within the developer-manufacturer, manufacturer-buyer and seller-buyer relationships.
Currently, the general principle is that a person shall be considered as liable not only for damages he/she causes through his/her own act, but also for those caused by items under his/her custody (Civil Code, Art. 1242). To further develop this liability principle and following EU legislation, a strict liability regime was enacted in 1998, which applies to the producer of a product in regard to damages caused by a defect in his product. This liability applies irrespective of whether or not the producer is bound to the victim by contract (French Civil Code, Art. 1245 et seq.). Strict liability makes things easier for the victim, who may sue the manufacturer, a supplier of individual parts or, ultimately, the reseller of the product. The victim must only prove the lack of safety of the item.
The scope of application of this principle may decrease as items that are prompted by Artificial Intelligence escape their owner's custody. Furthermore, this regime might address situations where AI is embedded in or associated with pieces of equipment or hardware, but less so a software program on its own, insofar as software is not (yet) considered a 'product.' The main other option that seems to emerge would be to recognize the legal personality of robots. However, this approach might only shift the problem, since legal personality implies a patrimony, and this would then require finding ways to endow that patrimony with assets and not just liabilities.
In this context, the European Parliament adopted a resolution on 12 February 2019 in favour of developing an EU industry policy and of implementing governance measures concerning Artificial Intelligence. The Parliament proposes, among other steps, to define a legal framework, based on the notion of ethics and applicable as from the conception of the application. The Parliament considers that "any comprehensive AI law or regulation should be carefully considered, as sectoral regulation may provide for policies that are sufficiently general, but also refined to a level that is significant for the industrial sector".
On its side, the European Commission adopted in April 2019 a coordinated plan drawn up with the Member States to promote the development and use of AI in Europe. This initiative includes the works of an independent expert group on Artificial Intelligence which published ethics guidelines in April 2019. These guidelines place emphasis on the need for a human-centric approach to AI, in particular accountability and the need for mechanisms to ensure adequate redress when unfair negative impacts occur. The EU Commission declared that as a next step it aims to build an international consensus on AI ethics guidelines.
As an initial matter, taking two common scenarios involving commercial use of software, the party that sells/distributes the software program to end users or provides the software program as a service would normally be liable for any malfunction of the software, though contract provisions may result in the software developer (to the extent it is not the seller/distributor or service provider) or another party bearing liability.
There is currently no straightforward answer for this question as various legal doctrines may apply, depending on the context and various factors. Further, the current main liability regimes do not fully or adequately resolve the challenges posed by AI-based programs.
Under the current main liability regimes, products liability law may be the most adequate doctrine for applying liability in this scenario. Under products liability law, the manufacturer of the software will be held liable for harms caused by software with an unintentional defect (namely, the program does not function as intended due to a manufacturing failure or a deviation from the manufacturer's specification), where a design defect occurred, or where provision of instructions or warnings could have reduced or avoided foreseeable risks of harm. However, this doctrine is generally limited to physical injuries and damage to property and cannot necessarily account for other types of damages (such as violations of privacy). Moreover, the application of products liability in the context of AI-based products is challenging with respect to products that, by nature and at least to some extent, perform in an unforeseeable manner (in which event, if the product operated as intended, products liability may not apply).
The negligence regime may also potentially apply to this scenario. Under the negligence doctrine, the manufacturer of the program may be held liable if it acted at fault, namely if it breached a duty of care (i.e. the duty to act as a reasonable person). However, negligence as a liability regime may be inadequate due to the expected difficulty of setting the level of care in the context of AI-based programs. Specifically, the duty of care and the standards for reasonable precautions depend on a baseline that is constantly changing in these technological fields and are disrupted by new types of unexpected harms and a general lack of foreseeability, which undermines both the concept of breach of duty and the general concept of causation.
Apart from the “White Paper on Artificial Intelligence at the service of citizens” issued by the Task Force on Artificial Intelligence of the Agency for Digital Italy (“AgID”) in March 2018, a specific Italian regulation on A.I. is still to be drafted. Therefore, the civil law principles on contractual and non-contractual liability are applicable.
Specifically, in case of software malfunctions, the following subjects could theoretically be liable:
i) the seller of the software, being the subject liable from the contractual standpoint for providing a product with a certain level of safety and cybersecurity and, generally, without defects; ii) the manufacturer of the software, being the subject liable from an extra-contractual standpoint (if it does not act as seller of the software); in this context, the liability for damages caused by a defective product, as set out under Sections 114-117 of the Consumer Code, could be construed so as to also encompass goods such as software. In that case, the same manufacturer might claim compensation for the damages suffered from the developer of the software who was entrusted with such task.
In Japan, there is no clear rule on the liability for malfunctions of a software program that purports to be a form of A.I. Theoretically, such liability may be found based on (i) strict liability under the Product Liability Act, (ii) tort under the Civil Code, or (iii) breach of contract or defective product under the Civil Code. If such software program is incorporated into certain equipment or other product and such product is found to be defective, the manufacturer of such product may be liable under the Product Liability Act. If such malfunctions were foreseeable by a party (e.g., a manufacturer or user of the software program) and the negligence (or intent) of such party is established, such party may be liable for damages flowing from a causal relationship under a tort claim, but it would heavily depend on the nature of the A.I. and the malfunctions or other circumstances whether such malfunctions were foreseeable.
There is no specific legislation regulating artificial intelligence (“AI”) in Malaysia. Software programmes with a form of AI would be treated similarly to other consumer products. In the event of a malfunction, liability would be addressed by the Sale of Goods Act 1957 (“SOGA”), CPA and the law of torts, which collectively serve as a platform for product safety and consumer protection.
The Contracts Act 1950 (“Contracts Act”), which serves to address the rights and liabilities of parties pursuant to a contract, would be relevant in determining liability for AI malfunctions. In a contract relating to the use of an AI, provisions may be included to determine which of the parties will sustain liability arising out of AI malfunctions.
Section 68(1) of the CPA states that “where any damage is caused wholly or partly by a defect in a product, the following persons shall be liable for the damage:
(a) the producer of the product;
(b) the person who, by putting his name on the product or using a trade mark or other distinguishing mark in relation to the product, had held himself out as the producer of the product; and
(c) the person who has, in the course of his business, imported the product into Malaysia in order to supply it to another person.”
The SOGA and the CPA impose several implied terms which cannot be excluded by contract when dealing with consumers. These include implied guarantees and conditions regarding title and lack of encumbrances, correspondence with description, satisfactory or acceptable quality, fitness for purpose, price, and repairs and spare parts. The AI software manufacturer or supplier will be liable for any malfunction that results in a breach of these mandatory implied terms, depending on the extent of non-compliance with the representations and guarantees made by the manufacturer to the supplier and the supplier to the consumer respectively regarding the AI software programme.
Manufacturers may rely on the “development risk” defence to avoid liability by demonstrating that, apart from observing the industry standard, the state of scientific and technical knowledge at the relevant time was such that the defect could not have been discovered. However, the strict liability rule introduced in the CPA will have a significant bearing in negating that defence. Manufacturers and/or suppliers may also be found liable for AI software malfunctions under the tort of negligence.
If an AI is tasked with creating content and malfunctions by incorporating third-party works protected by copyright in such content without authorisation, the liability for such infringement pursuant to the SOGA, CPA, Contracts Act, and law of torts would accrue to the creator of the AI, subject to any contractual provisions addressing liability between the creator and the user of the AI for such infringement occasioned by the AI.
However, with the rapidly growing development of AI, such as the introduction of Google Duplex, AI may no longer be a mere product, but one capable of human mimicry and potentially gaining legal personality, consciousness, personhood, authorship and autonomy. In such event, the legal position on AI would drastically change.
Although the Maltese government is exploring the regulation of AI and has launched a public consultation on the matter, considerable legal uncertainty remains with respect to AI liability. Under the current general contract and tort rules found in our Civil Code (Chapter 16, Laws of Malta), this analysis would require a review of the warranties, representations and limitations established in the software agreement (whether a licensing, development or sale agreement), as well as a consideration of the link of causation between the malfunction and any material damages resulting from it which may have been suffered by the user or licensee of that software or any third party.
To the extent that that malfunctioning causes personal injury, there is unlikely to be any liability due to the existence of New Zealand's no-fault accident compensation scheme (known as "ACC").
Where AI is part of products or services which are sold to consumers, liability for malfunction will be subject to New Zealand consumer law, such as the Consumer Guarantees Act and the Fair Trading Act. The obligations these Acts impose are discussed above.
Given the extent to which AI often relies on the processing of data (including personal information) liability under the Privacy Act is also possible in circumstances where AI malfunctions. In that scenario, the relevant agency holding or processing the personal information would be liable where a privacy breach occurred, and may also be subject to mandatory breach notification obligations (if that part of the Privacy Bill is enacted).
Outside of obligations at law, liability for defects would ordinarily be negotiated between contractual counterparties and managed through the terms of the applicable contract.
The liability for malfunctions of a software program which purports to be an early form of A.I. remains unresolved under German law. Three different approaches are discussed among legal scholars. One opinion attributes liability to the operator under sections 280 and 823 BGB; in a legal sense, the attribution of a breach of duty or of fault is the main problem in this context. Another opinion would solve the problem with a new regulation establishing strict liability, independent of negligence and intent and similar to product liability, but there is still no legal basis for this concept in German law. A third idea, which also lacks a legal basis, is to create a separate legal entity for A.I. – the so-called “e-person” – as a counterpart to natural and legal persons.
There is no specific provision under Indonesian law regarding the liability in case of AI malfunctions. However, the other existing laws might apply. For example, Indonesian tort law under Article 1365 of Indonesian Civil Code (ICC) might apply. Article 1365 of ICC in essence stipulates that any individual who illegitimately causes loss to others must provide compensation for such loss.
If someone suffers loss because of an AI malfunction, the person who suffers the loss may file a civil lawsuit against the party who created or operates the AI. It is important to note that the party who suffers the loss must be able to establish a correlation between the loss and the fault/negligence of the party who operates or owns the malfunctioning AI.
There is no specific legislation in Pakistan to regulate A.I., and courts in Pakistan are yet to adjudicate on a matter involving loss/harm caused by an A.I.-based system. In the current scenario, any system or product based on A.I. will have to be treated on par with a similar system or product not based on A.I., and the remedies available to consumers in respect of such an A.I.-based system or product will be the same as those otherwise available to such a person.
An affected person may seek remedy under the applicable consumer protection laws, or in the event of damage incurred by such party, the same may be recoverable to the extent that the actual damage incurred can be proved.
Currently the Romanian national legal framework does not contain any explicit provisions with regard to any form of A.I. Therefore, the general rules on civil contractual liability and tort law, as well as administrative and criminal liability would apply on a case-by-case basis, depending on the specific circumstances of the case.
For instance, as per the Romanian Civil Code, any person is under the obligation to repair the damage triggered by any object in its custody. This obligation exists irrespective of any fault on behalf of the custodian.
It is considered that a person has custody over an object when that person owns the object or where (based on the law or on an agreement, or even just as a matter of fact) that person has control over the object and uses it in its own interest.
From another perspective, administrative offences may apply in case the A.I. software causes infringements of legislation (for example, an infringement of competition law, if the A.I. facilitates collusion between the players on a certain market).
Lastly, in certain circumstances, it is not excluded that the malfunction of an A.I. system may amount to a criminal offence, in which case the question of the fault will be analysed on a case by case basis.
Nonetheless, as it happens on a global level, the question of accountability for A.I. systems and their results may raise rather difficult questions.
There has been much discussion on this topic, but there is currently no specific law or regulation that is applicable. Depending on the specific facts of the case, the developer or distributor of the software program may be subject to general tort liability under the Civil Act and/or product liability under the Product Liability Act. In any event, the program developer/distributor’s liability will be determined on a case-by-case basis after considering the developer/distributor’s level of intentionality/negligence and the unlawfulness of the act.
For the time being, there is no specific Artificial Intelligence regulation in Spain. Notwithstanding the above, on 10 April 2018, 25 European Union Member States, including Spain, signed a Declaration of Cooperation on Artificial Intelligence. Consequently, the European Commission will now work with Member States on a coordinated plan.
In June 2018 the European Commission appointed experts to form a new High Level Group on Artificial Intelligence, which will be in charge of making recommendations on how to approach these innovative technologies. The European Commission has taken many initiatives to regulate liabilities arising from Artificial Intelligence; among others, it has issued the Commission Staff Working Document on Liability for emerging digital technologies, dated April 25, 2018, and the Ethics Guidelines for Trustworthy Artificial Intelligence, dated April 8, 2019.
Europe wants to be at the forefront of these developments, and its intention is therefore to enact a common legal framework so that artificial intelligence can succeed and work for everyone.
In light of the above, as Spain does not have any specific Artificial Intelligence liability framework, current software provisions will generally apply to malfunctions of early forms of Artificial Intelligence, with the software developer generally being found liable for the malfunction of the program.
The question of A.I. malfunction liability would, for the time being, probably have to be resolved with the help of provisions concerning product liability and classical principles of liability. Whether this would produce a fair result remains to be seen.
Based on the current legal framework of Taiwan, it would be the one that offers, sells, or licenses the software program (the “Software Provider”) to the market that shall be liable to the consumer who suffered losses or damages from the A.I. malfunctions, unless the Software Provider is able to establish that the software program met the “state of the art” at the time the program was introduced to the market. This person or entity may in turn claim against any service provider that participated in the development or completion of the software program to share such liability.
Liability in terms of Artificial Intelligence (AI) is not expressly regulated under Turkish law and is subject to the general provisions of Turkish tort law. In principle, fault is the basis of liability under the Turkish liability regime. The applicability of these general laws to AI disputes has not yet been clearly established in the Turkish jurisdiction.
Ordinarily, there will be strict liability on the producer of defective products supplied to consumers (Consumer Protection Act 1987), and this would include products which are themselves software or which include software components. Where such defects have resulted from computer-assisted design or other software-assisted processes, it will ordinarily be the person who programmed the CAD tool who will then face liability. However, this is all predicated on a principle of causal connection, i.e. "Because of A + B, C necessarily came next". When the software starts to make decisions for itself based upon its "learning" from what it observes/receives from external sources, and so ceases to be predictable, this liability concept becomes more strained.
There are discussions as to whether AI could be given a separate legal personality, and so be held accountable in a similar way to a company. In addition, the licensors/programmers responsible for the AI could be open to claims through vicarious liability. Alternatively, a framework similar to that applied to animals which cause harm or damage, where there is strict liability on their owner(s), could be applied. However, neither of these discussions has been fully developed and therefore, for the time being at least, it would seem most likely that the licensor/programmer of the A.I. product would be liable pursuant to the strict liability regime in the Consumer Protection Act referred to above. In the business (i.e. non-consumer) context, contractual provisions will usually specify where liability will sit in any event.
The liability for malfunctioning of an AI will typically be determined by the terms of the agreement under which the AI was provided. License agreements frequently limit the liability of the licensor/provider, and may even require the licensee/user to indemnify the licensor for liabilities arising from the licensee's use (regardless of malfunction).
In the absence of a contractual relationship, the liability analysis would be in tort. The injured party would have to demonstrate negligence - that it was owed a duty of care, that the duty was breached, and that the malfunction was the cause of the injury. Depending upon the facts, a tort claim could be maintained against the developer/licensor or against the user who deployed the AI.
In the consumer landscape, under the Australian Consumer Law, a supplier guarantees its product is fit for purpose. Where an AI product malfunctions in circumstances which enliven this regime, the supplier would bear liability for the defective product. However, this interpretation relies on a linear scenario where the supplier has held out its AI product can do A but it instead does B.
In addition to the above, a supplier may also be potentially liable in tort relating to malfunctions of an AI software program. Potential liability would involve an assessment of whether a duty of care has arisen and the impact of any potential disclaimer in relation to the AI software program.
In the business scenario, generally contractual provisions related to defects or malfunction will be negotiated between the parties. Such provisions will allocate the risk and any consequential liability to the appropriate party.