To what extent is your legal system ready to deal with the legal issues associated with artificial intelligence?
Technology (3rd edition)
The legal system is largely based on the traditional concepts of the continental (civil) law system. At present, artificial intelligence can arguably be regulated within that framework. However, if no changes are made in future, legal professionals will continue to debate how existing legal concepts apply to facts involving artificial intelligence, and such disputes will not be settled until the legislation is amended.
Given that there is no regulation in place to guide the use of artificial intelligence, our legal system is not yet fit to deal with the wide-ranging issues associated with this matter. At present, any problems of this kind would have to be resolved under our Civil Code or common law, which is very old and outdated.
The Egyptian legal system follows the Latin (civil-law) tradition. Accordingly, it is based on establishing general rules that can be applied flexibly to new developments and can therefore apply to A.I. However, given the complexity of A.I. and of the technology on which it is based, as well as its broad range of applications, relying on general principles of law may suffice only for a short period of time; specialized regulations will soon be required.
The wording of Estonian legislation appears general enough that existing legislation can be applied to artificial intelligence. However, changes are still needed from time to time. In recent years, Estonia added clauses to the Traffic Act to keep up with the development of technology – clauses were needed to regulate parcel robots in traffic. The changes came quite quickly, which shows Estonia’s readiness to handle technological development.
So far, legal experts have started a discussion on how to resolve liability issues regarding AI. In addition, the Estonian Ministry of Economic Affairs and Communications has carried out an in-depth study on the readiness of the Estonian legal framework for the adoption of AI. The study concludes that no fundamental amendments to Estonian laws are necessary, although some changes relating to liability are. Not all Estonian legal experts and other professionals dealing with AI agree with this conclusion, however.
French authorities have for many years understood the importance of setting up mechanisms to foster pilot projects as well as large-scale experiments in the area of Artificial Intelligence (for instance, ‘France is AI,’ ScanR, etc.), but also the necessity of developing a regulatory framework that will protect consumers and citizens at large. Many initiatives show that the legal system will be adapted as much as necessary to the problems posed by AI - perhaps even faster than AI itself develops in the country.
There are currently no PRC laws or regulations specifically regarding the creation, development or use of artificial intelligence (AI). However, that does not mean AI is wholly unregulated (or wholly prohibited). Given that the operation of AI generally calls for large data sets, those developing and implementing AI services will be subject to the requirements of the PRC Cybersecurity Law and associated regulations. While China’s cybersecurity regime is in a relatively early phase of development, China thereby also has advantages such as flexibility in further developing the regulatory framework applicable to AI in tandem with the development of the technology itself. At the same time, China will have to update other, older laws, such as the PRC Copyright Law, which currently does not appear to protect work and content created via AI.
There is currently no legislation in Israel that specifically deals with artificial intelligence. General consumer protection laws will apply to legal issues arising from the use of artificial intelligence software. However, they may be inapplicable to certain liability issues, since they do not deal with virtual acts or omissions and thus cannot rely on the concept of mens rea to attribute liability or fault for causing damage. Accordingly, liability issues will inevitably be more complex with regard to artificial intelligence. In this respect, artificial intelligence will require legal systems to fundamentally rethink basic legal norms.
Traditionally, the development of the legal system in new areas follows (rather than anticipates) new business needs. Hence, while a civil law system is by its nature able to address some initial issues raised by new business developments, the full flourishing of AI applications will no doubt require ad hoc legislation.
As mentioned above (see question 16), the current legal system can solve the legal issues associated with artificial intelligence (AI) to some extent, but there is no legislation that specifically deals with AI. Thus there remain many uncertainties related to the legal issues associated with AI (such as civil and criminal responsibilities concerning malfunctions of AI and protections of AI software and AI deliverables).
To address such uncertainties, in June 2018, METI published the “Contract Guidelines on Utilization of AI and Data”, which consists of two sections: Data and AI (the METI Guidelines). The AI section explains a fundamental approach to be taken in relation to contracts that concern the development and utilization of AI-based software from the perspective of promoting the development and utilization of software using AI technology. The METI Guidelines also provide sample provisions for development contracts for AI-based software.
AI and the use of AI are not regulated in Malaysia. The development of AI has been so rapid that the law has failed to keep pace, not just in Malaysia but globally. While no laws and/or updates to existing legislation have been prepared to specifically address the myriad of legal issues associated with AI, it cannot be said that the Malaysian legal system is wholly unprepared to deal with such issues, as Malaysian product safety and consumer protection laws (discussed in detail in Question 16 above) would apply to AI as matters currently stand in Malaysia.
However, the Malaysian legal system is continuously evolving in terms of AI adoption and will need to improve its readiness to adopt and/or implement AI systems, particularly through investment in AI adoption to accelerate the country’s AI journey. The Malaysian government is taking steps to drive the country’s AI ecosystem by pursuing the development of the National AI Framework through MDEC, which would lend guidance on how AI and the legal issues associated with it may be addressed in Malaysia.
In March 2019, the Maltese Parliamentary Secretary for Financial Services, Digital Economy and Innovation within the Office of the Prime Minister launched a high-level policy document on artificial intelligence, “Malta: Towards an AI Strategy”, which was issued for public consultation. By implementing a legal and ethical framework for the implementation of artificial intelligence, the Maltese government aims to attract leading artificial intelligence companies to Malta and to increase local start-up activity.
Traditional legal doctrines have coped fairly well with technological developments historically and it is likely that the New Zealand judiciary will remain flexible and responsive when interpreting the law in relation to new technologies. Regulators and legislators tend to opt for technology-neutral responses to issues, to avoid new laws and regulations becoming quickly outdated or being likely to cause unanticipated issues.
Some new issues arising from AI could already be dealt with through existing practices and legal principles. For example, if artificial intelligence were to cause personal injury (such as in accidents involving autonomous vehicles), this could be dealt with under New Zealand's existing no-fault ACC regime.
However, as noted above, there are very few instances where the legal system has sought to specifically legislate for or regulate AI, meaning that we are unlikely to be well placed to deal with issues arising from artificial intelligence which are unique and not readily resolved by reference to traditional laws and legal principles. For example, legal responsibility has historically rested with persons or organisations, determined by reference to principles such as causation and remoteness. This approach has limits in the context of AI because, as artificial intelligence continues to develop, it is foreseeable that there could be circumstances where a loss is caused with minimal and/or remote human or organisational involvement. Continuing to attribute liability in the way we currently do could therefore eventually lead to unfair or unexpected outcomes, which, although currently untested, the law does not yet appear well placed to deal with.
There is still quite some legal uncertainty regarding artificial intelligence. (See above)
The use of artificial intelligence (AI) undoubtedly raises legal questions in many fields of law, such as IP and consumer protection. Since no specific law on AI yet exists, we must rely on existing laws and regulations to answer these legal challenges.
Most laws and regulations were designed without AI in mind. Therefore, the legal issues arising from the use of AI are not properly addressed. One example is the issue of IP ownership of works created by AI. In some other jurisdictions, this question may have been answered through judicial decisions. However, the current copyright regime in Indonesia may not provide a clear-cut answer to this issue.
Another field of law that needs improvement is consumer protection, particularly in relation to personal data protection. Some types of AI require large volumes of data to function, and some of this data may be personal data. The absence of a strong personal data protection regulation in Indonesia leaves a significant gap in the Indonesian consumer protection regime. Without such a law, consumers who use, or are the object of, AI technology will not receive proper protection.
While policy framers do recognise A.I. as one of the likely drivers of the next stage of technological evolution of the digital economy, it is not regulated by any legislation. A system or product utilizing A.I. technology is presently treated on par with any other system or product of a similar nature.
The Romanian legal system is neither less nor more prepared than other legal systems to deal with the risks and legal issues associated with artificial intelligence. There is as yet no fully-fledged legislation at the national or European level specifically targeting A.I.
Recently, however, the Independent High-Level Expert Group set up by the European Commission issued the Ethics Guidelines for Trustworthy AI which provides a framework for achieving trustworthy AI, namely:
- it should be lawful (observing all applicable laws and regulations);
- it should be ethical (by complying with ethical principles and values); and
- it should be robust (technically and socially, since even with good intentions A.I. systems can cause unintentional harm both from a technical and social perspective).
Moreover, as mentioned under question 15, existing general legislation may still provide answers concerning many topics involving A.I.
The Intelligent Robots Development and Distribution Promotion Act and Brain Research Promotion Act were enacted to promote R&D and investment in certain areas, but there are no current laws or regulations that specifically concern the overall use and development of AI technology across various sectors. Research on legal issues associated with AI (e.g., whether AI has a legal personality and thus may be subject to penalties, product liability, user protection, intellectual property rights associated with the outputs produced by AI, licenses/permits for use, strict liability, registration system, documentary proof of AI usage records) is on-going, and a bill for a new law called the Framework Act on Intelligence Information Society has been proposed to the National Assembly.
Meanwhile, because the value of AI depends on the availability and quality of data, a bill for amending the current data (i.e., personal information) protection laws to better accommodate the changes brought about by the arrival of the AI era has been proposed to the National Assembly.
Spain has not yet provided a specific legal framework applicable to artificial intelligence but, as explained above, on 10 April 2018, Spain signed a Declaration of Cooperation on Artificial Intelligence, meaning that artificial intelligence will be approached from a European Union law perspective. Also, on 4 March 2018, the Government presented the Spanish RDI Strategy in Artificial Intelligence, which will be the basis for the future National Artificial Intelligence Strategy and will allow for the coordination and alignment of national investments and policies. The strategy emphasizes as a priority the implementation of ethical principles applicable to artificial intelligence, including the drafting of an Artificial Intelligence Code of Ethics.
Regarding autonomous vehicles, Spain is doing its best to regulate the circulation of driverless cars through a "XXI Traffic Act". Although there is still no law that specifically regulates autonomous vehicles in Spain, and such vehicles are currently governed by the broader regulatory framework applicable to vehicles, since 2015 an instruction (Instruction 15/V-113) issued by the Spanish General Directorate of Traffic has governed the granting of special authorisations for the testing of such vehicles on Spanish public roads.
The Swedish legal system is not yet completely ready to deal specifically with legal issues associated with A.I. Since Sweden is far from alone in the EU in this regard, it is not unlikely that this will be resolved within the EU.
The Taiwan government has been preparing itself for the artificial intelligence era. The Ministry of Science and Technology has launched many programs to equip the government and industries with the ability to adapt to the new era. Meanwhile, the primary regulator of each industry has been studying and researching whether any regulatory amendments or changes are needed in response to the coming of artificial intelligence. Although some commentators consider the progress slow, the Taiwan government has been keeping pace with this trend.
The Turkish legal system is at a very early stage in dealing with the legal issues associated with artificial intelligence. The legal framework and practice concerning digital technologies are relatively new and immature. No remarkable legislative or judicial work has been conducted in preparation for the potential legal issues associated with artificial intelligence.
AI will place significant strains upon the English legal regime; many criminal acts and civil offences depend on questions of state of mind, foreseeability and/or intent… all of which are difficult enough to apply to fellow human beings, but which become near impossible to apply to a computer programme which has no emotions or "intentions" per se, but which will rapidly cease to act in a manner which is foreseeable on the part of the original programmers. It is likely therefore that further legislation would be required so as to remove elements of doubt that would otherwise persist.
The House of Lords Report entitled "AI in the UK: ready, willing and able?" (found here: https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf), published in April 2018, concluded that the UK is in a strong position to be among the world leaders in the development of artificial intelligence. The Report recommended various proposals and initiatives such as government-backed targeted procurement and a national policy framework to help the growth of AI and AI technologies. However, the Report does note that there are distinct areas of uncertainty regarding the adequacy of existing legislation should AI systems malfunction, underperform or otherwise make erroneous decisions which cause harm (as described above) and the Report called upon the Law Commission to provide clarity on these legal issues. In its response, the UK Government agreed with the House of Lords Report recommendations regarding the need for greater regulation of AI and welcomed the Law Commission's input into the legal issues surrounding AI.
Moreover in April 2018, the UK Government launched an industry, government and academic-led 'AI Sector Deal' to boost the UK’s global position as a leader in developing AI technologies. The latest Sector Deal publication from May 2019 (found here: https://www.gov.uk/government/publications/artificial-intelligence-sector-deal/ai-sector-deal), outlined recommendations to (among others): help attract AI talent from around the world; help raise and deliver funding for major upgrades to the UK digital and data infrastructure; and to help create a flexible approach to AI regulation that promotes innovation and growth of AI whilst protecting citizens and the environment. The launch and level of support directed towards this Sector Deal suggests that the UK is certainly trying to be proactive in not only promoting AI, but also addressing the legal challenges that it will pose in the UK.
The U.S. legal system is not well-suited to address legal issues associated with artificial intelligence. First, there is no sui generis protection of databases, and therefore the right to use databases is either subject to bilateral contract terms or not well-settled by statute. Second, foundational principles of contract law are based upon the intentions of the contracting parties, which is a standard that is inapt for machine-based contracting. Third, injury related jurisprudence under tort law is based on a fault regime that is difficult to apply to artificial actors. The U.S. legal system is not unique in facing these challenges, as artificial intelligence will require all legal systems to re-evaluate the application of basic legal principles.
There is no exclusive artificial intelligence (AI) regulatory system or ethical standard in Australia. This means that, as the law stands, AI needs to fit within a pre-existing legal framework comprising privacy, competition and consumer law.
Australia's privacy laws are readily able to address the implementation of AI, given that an organisation's obligations under the APPs will continue to apply irrespective of the technology used to collect, use and disclose personal information.
Competition laws are only moderately prepared to deal with the legal issues associated with AI. Data itself is becoming an increasing source of market power, and this issue is yet to be addressed in a significant way.
The Australian Competition and Consumer Act 2010 (Cth) prohibits a number of anti-competitive practices, including cartel conduct, anti-competitive agreements and the misuse of market power. There is a risk that certain algorithms may engage in anti-competitive behaviour, particularly when they are involved in making decisions in relation to the pricing of products or services. The ACCC has highlighted such concerns as a particular area of focus, and the Competition and Consumer Act has previously been amended in an attempt to negate such practices.
In its present form, the Australian Consumer Law does not clearly allocate liability between the supplier and the consumer where an AI product malfunctions of its own accord. Finally, Australia's criminal law is generally slow to develop relative to technological advancement. The introduction and increasing use of AI in the community may call into question whether new offences are necessary to better address the negative exploitation of AI by individuals and organisations.