To what extent is your legal system ready to deal with the legal issues associated with artificial intelligence?
Technology (second edition)
As discussed above, Indonesia does not yet have any specific regulation for artificial intelligence; there is only the general Consumer Protection Law. We believe that more specific regulations concerning A.I. (e.g., whether A.I. can, on its own, be considered a legal subject equivalent to individuals and entities, and what rights and obligations must be conferred upon an A.I.) must be issued by the Indonesian government to fully deal with the legal issues associated with A.I.
Dutch law does not provide for specific legislation dealing with artificial intelligence. However, given that the Dutch Civil Code contains numerous open norms and that it is technology agnostic, the Dutch legal framework is fairly well equipped to deal with the legal implications of emerging technologies such as artificial intelligence.
To a certain extent, existing laws may apply to AI providers and may allow consumers to seek redress for any damage caused by an AI device. The strict liability regime under the Consumer Code and the Civil Code could be applied in such cases. In any event, regulating AI more broadly could be important, given the ramifications of AI technology, particularly once AI products and services begin to act independently.
At the moment, as Luxembourg, along with most European countries, lacks legislation specifically dealing with artificial intelligence, there is a lack of legal certainty. For example, more clarity would be needed under Luxembourg law, as in other EU countries, on issues related to liability for damages caused by an AI solution and ownership of intellectual property rights in works created by AI systems.
The Romanian legal system is neither less nor more prepared than other legal systems to deal with the risks and legal issues associated with artificial intelligence. Currently, there is no national or European legal framework in the sense of AI tailor-made legislation. Whilst for the moment existing legislation may seem largely sufficient, once AI spreads and becomes more sophisticated there will be a pressing need for a dedicated legal framework.
Spain has not yet provided a specific legal framework applicable to artificial intelligence but, as explained above, on 10 April 2018 Spain signed a Declaration of Cooperation on Artificial Intelligence, so that artificial intelligence will be approached from a European Union law perspective. Until then, the current laws are to be applied to artificial intelligence by analogy, which will of course entail considerable legal challenges.
Regarding autonomous vehicles, Spain is doing its best to regulate the circulation of driverless cars through a "XXI Traffic Act". Although there is still no law that specifically regulates autonomous vehicles in Spain, and such vehicles are currently governed by the broader regulatory framework applicable to vehicles generally, since 2015 an instruction (Instruction 15/V-113) issued by the Spanish General Directorate of Traffic has governed the granting of special authorisations for the testing of such vehicles on Spanish public roads.
India does not regulate Artificial Intelligence (A.I.) yet. As stated in response to Question 13, A.I.-based hardware is presently treated on a par with any other machine, and liabilities are assigned on the basis of the strict product liability principle, whereby the creator/manufacturer is held liable. The product liability principle is based on the provisions of the Consumer Protection Act, 1986, the Sale of Goods Act, 1930 and the law of torts.
However, the policy framers in India do recognise A.I. as the likely cause for the next stage of technological evolution of the digital economy.
As stated above, the current legal provisions under Turkish law are not fully ready to deal with legal issues arising from the use of AI and other autonomous technologies. Although general provisions of law can be applied to arrive at makeshift legal solutions, a legislative overhaul will be necessary to properly deal with legal problems arising from such disruptive technologies.
The Swedish legal system is not yet fully ready to deal specifically with legal issues associated with A.I. Since Sweden is far from alone in the EU in this regard, it is not unlikely that this will be resolved at the EU level.
There are currently no specific rules regarding A.I. functionality under Swiss law, and we are not aware of any intention of the legislator to enact such rules. While government authorities periodically review relevant practical developments, it is emphasized that regulation should be technology-neutral. The Federal Council has repeatedly stated, and it is widely held in the legal community, that A.I.-specific legislation is neither required nor desirable, so as to preserve the regulatory latitude and room for development required by technology-driven business models and companies.
To some extent, legal issues arising from AI may be resolved under the current legal regimes. The Consumer Rights Protection Law provides rules protecting AI users where an AI product or service has a defect and the user suffers loss as a result. The Tort Liability Law and the Contract Law are also applied in cases where AI infringes anyone’s legitimate interests. Further, data collection and machine learning are at the core of AI technology. The Cybersecurity Law and privacy law give instructions on how AI should collect, process, use and transfer information related to national security and privacy. However, China is not fully prepared to deal with ethical issues arising from AI. There is no integrated law defining the rights and obligations of AI developers, AI operators and AI users. Lastly, the current IP law does not define the ownership of IP created by AI.
The regulatory framework in Mexico might face challenges in dealing with A.I., as certain provisions thereof, especially those relating to liability and criminal behaviour often consider the “intent” of an individual for the imposition of legal sanctions. To the extent these new technologies become more common in everyday activities, Mexican legislators will be forced to amend the relevant legal framework in order to make it compatible with such reality.
Another aspect to consider is the judicial system. There are few (if any) precedents on the use of digital tools and/or evidence (e.g. electronic signatures and digital authentication). Alternative dispute resolution methods, such as arbitration, may thus become more appealing, although their use is currently not the general rule.
AI and the use of AI are currently not regulated in Malaysia, by legislation or otherwise. The development of AI has been so rapid that the law has failed to keep pace, not just in Malaysia but across the globe. Currently, only product safety and consumer protection laws (which have been discussed in detail in Question 13 above) would apply to AI.
Despite the possible introduction of the National Artificial Intelligence (AI) Framework and other similar initiatives introduced to accelerate the adoption of digital technology in Malaysia, the present laws in Malaysia are insufficient to deal with complex ethical and liability issues relating to AI such as personhood, agency, negligence, and autonomy, amongst others.
French authorities have for many years understood the importance of setting up mechanisms to foster pilot projects as well as large-scale experiments in the area of Artificial Intelligence (for instance, ‘France is AI,’ ScanR, etc.), but also the necessity to develop a regulatory framework that will protect consumers and citizens at large.
In this respect, the GDPR prohibits any legal person from allowing a decision legally binding on an individual to be made solely on the basis of data processing that profiles that individual or assesses certain aspects of his or her personality. Furthermore, Act no. 2016-1321 of 7 October 2016 for a Digital Republic establishes the right of a natural person to be informed when an individual decision concerning him or her is taken on the basis of an algorithm, and to request communication of the rules and main features of that data processing.
In light of the risks raised by Artificial Intelligence, essentially the risk that “it would take off on its own, and re-design itself at an ever increasing rate” (Stephen Hawking), legislators may be required to reinforce these principles (along with the liability regime, as explained in Question 13) as AI develops and becomes less and less understandable to those subject to its stipulations.
There is still considerable legal uncertainty regarding artificial intelligence (see above).
There is currently no legislation in Singapore that specifically deals with artificial intelligence. As mentioned above, consumer protection laws will apply to liability issues in respect of the artificial intelligence software purchased by the consumer. However, such laws may not be sufficient to deal with more complex liability issues relating to artificial intelligence when consumer protection laws are inapplicable. Furthermore, the use of artificial intelligence is currently not regulated in Singapore. This is likely to become a greater area of concern as artificial intelligence technology becomes more sophisticated and takes an increasingly active role in our economy.
In June 2018, the IMDA announced that it will be setting up an advisory council on the ethical use of artificial intelligence and data ("Advisory Council"). The Advisory Council will assist the Singapore Government in developing ethics standards and reference governance frameworks, and will publish advisory guidelines, practical guidance and codes of practice for voluntary adoption by businesses. It is hoped that this initiative will help to clarify some of the legal issues relating to the use of artificial intelligence mentioned above.
There is no exclusive artificial intelligence (AI) regulatory system or ethical standard in Australia. This means that, as the law stands, AI needs to fit within a pre-existing legal framework comprising privacy, competition and consumer law.
Australia's privacy laws are readily able to address the implementation of AI given an organisation's obligations under the APPs will continue to apply irrespective of the technology used to collect, use and disclose personal information. However, competition laws are only moderately prepared. Data itself is becoming an increasing source of market power, and this issue is yet to be addressed in a significant way. Furthermore, there is a risk that data-sharing through AI may constitute cartel conduct in certain circumstances without further legislative clarification.
In its present form, the Australian Consumer Law does not clearly allocate liability between the supplier and the consumer where an AI product malfunctions of its own accord. Finally, Australia's criminal law is generally slow to develop in contrast with technological advancement. The introduction and increasing use of AI in the community may call into question whether new offences are necessary to better address negative exploitation of AI by individuals and organisations.
The U.S. legal system is not well suited to address legal issues associated with artificial intelligence. First, there is no sui generis protection of databases, and therefore the right to use databases is either subject to bilateral contract terms or not well settled by statute. Second, foundational principles of contract law are based upon the intentions of the contracting parties, a standard that is inapt for machine-based contracting. Third, injury-related jurisprudence under tort law is based on a fault regime that is difficult to apply to artificial actors. The U.S. legal system is not unique in facing these challenges, as artificial intelligence will require all legal systems to re-evaluate the application of basic legal principles.
As mentioned above (see Question 14), the current legal system can resolve the legal issues associated with artificial intelligence (AI) to some extent, but there is no legislation that specifically deals with AI. Thus, many uncertainties remain concerning the legal issues associated with AI (such as civil and criminal responsibility for AI malfunctions and the protection of AI software and AI deliverables).
The Polish legal system does not specifically address artificial intelligence and lacks special regulations on the matter. It therefore may not be ready to deal with the legal issues associated with AI.
However, the strong need to introduce new regulations is being recognised, and the matter is currently being discussed.