To what extent is the legal system ready to deal with the legal issues associated with artificial intelligence?
The legal industry is uniquely ready to deal with the issues Artificial Intelligence raises, because it was constructed with flexibility in mind. As new ways of doing business, of communicating, and of memorializing human interactions have come (and sometimes gone), the legal industry has adapted and changed the way attorneys practice - as well as how attorneys support their clients. Moreover, beyond the pursuit of justice, the practice of law is part intentional contrivance and part pageantry - rules determined by people, for people. In that same vein, attorneys, judges, and legislatures will determine the extent to which Artificial Intelligence supports and furthers legal work, rather than simply being pulled along by the general march of progress.
Maltese law already contains various concepts which can be used to compensate for damage caused by AI machines. Apart from product liability, on which a consumer who has purchased an AI machine may rely should that machine cause the consumer damage, it is highly likely that tort law under the Maltese Civil Code, Chapter 16 of the Laws of Malta (the ‘Maltese Civil Code’), may also be applied in various other instances to allocate responsibility for damage caused by artificial intelligence.
Further to this, Maltese law provides for the indirect liability of employers, parents, and proprietors of animals or buildings. These regimes could provide an adequate basis for any specific legal provisions regulating AI that may need to be inserted into Maltese law; legislative intervention would likely be required, however, since the Maltese Civil Code generally applies such indirect liability exclusively to the specific scenarios set out in the law.
Moreover, should legal personality be extended to AI machines, as is done for companies, the AI machine itself could potentially be sued for any damage it causes to an individual.
Should the concept of legal personality never be applied to AI machines, the most likely scenario is that Malta would have to update its legal system and adapt the existing framework to the issues AI brings with it, in particular through amendments to Maltese tort law.
While Norway has a strong IT environment, its legal system is not currently equipped to deal with the potential legal issues that would arise out of AI.
For instance, the current laws and regulations relating to the processing of personal data do not necessarily pose any hurdles to the analysis of pseudonymised and aggregated data by AI. However, the details surrounding such analysis remain unclear, including the breadth and traceability of the data collected, the determination of who has used the data, the right of access, and the requirements of pseudonymisation.
Furthermore, the debate on the allocation of ownership rights in such data is still ongoing in Norway, including whether such rights should vest in the party collecting the data or the party conducting the analysis.
Lastly, contract and tort law are ill-equipped to deal with potential issues of product liability for autonomous vehicles or robots, or with malfunctions or malice in the provision of health services by AI and robot diagnostics.
While it is possible to deal with legal issues arising from AI under general principles of law in Turkey, legislative revision is required for the legal system to be in a better position.
To some extent, legal issues arising from AI may be resolved under the current legal regimes. The Consumer Rights Protection Law provides rules protecting AI users where an AI product or service has a defect and the user suffers loss as a result. The Tort Liability Law and the Contract Law are also applied in cases where AI infringes anyone’s legitimate interests. Further, data collection and machine learning are at the core of AI technology; the Cybersecurity Law and the privacy law give instructions on how AI should collect, process, use, and transfer information in relation to national security and privacy. However, China is not fully prepared to deal with the ethical issues arising from AI. There is no integrated law defining the rights and obligations of AI developers, AI operators, and AI users. Lastly, the current IP law does not define the ownership of IP created by AI.
Throughout the history of humankind, legal and justice systems have developed by adapting to the facts as they occur. In this sense, we believe that although many of the “general” rules pertaining to liability, protection, and regulation of artificial intelligence are already established, the new issues arising from artificial intelligence will shape the current and future legal framework on such matters.
In this sense, the more artificial intelligence is developed and applied, the better for the legal framework, as those developments and applications will encourage amendments that properly regulate artificial intelligence. However, a fully implemented digital ecosystem allowing the adoption of new technologies will be required.
AI will place significant strains upon the English legal regime; many criminal acts and civil offences depend on questions of state of mind, foreseeability and/or intent… all of which are difficult enough to apply to fellow human beings, but which become near impossible to apply to a computer programme which has no emotions or "intentions" per se, but which will rapidly cease to act in a manner which is foreseeable on the part of the original programmers. It is likely therefore that further legislation would be required so as to remove elements of doubt that would otherwise persist.
The Romanian legal system is neither more nor less prepared than other legal systems to deal with the risks and legal issues associated with artificial intelligence. Currently, there is no national or European legal framework in the sense of AI tailor-made legislation. Whilst for the moment the existing legislation may seem largely sufficient, once AI spreads and becomes more sophisticated there will be a pressing need for a dedicated legal framework.
At the moment, in the absence of legislation specifically dealing with artificial intelligence, in Italy - as in most other jurisdictions - there are some areas lacking legal certainty. For example, we believe that more clarity is needed on issues such as liability for damage caused by an AI solution and the ownership of intellectual property rights in works created by AI tools.
Dutch law does not provide for specific legislation dealing with artificial intelligence. However, given that the Dutch Civil Code contains numerous open norms and that it is technology agnostic, the Dutch legal framework is fairly well equipped to deal with the legal implications of emerging technologies such as artificial intelligence.
To a certain extent, existing laws may apply to AI providers and may be used to seek redress for any damage caused by an AI device. The strict liability regime under the Consumer Code and the Civil Code could be applied in such cases. In any event, regulating AI more broadly could be important, given the ramifications of AI technology, especially once AI products and services start to act independently of their software programmers.
As discussed above, Indonesia does not yet have any specific regulation for artificial intelligence; there is only the general Consumer Protection Law. We believe that more specific, advanced regulations concerning A.I. (e.g., whether A.I. can, on its own, be considered a legal subject equivalent to individuals and entities, and what rights and obligations must be conferred upon an A.I.) must be issued by the Indonesian government to fully deal with the legal issues associated with A.I.
There are presently no specific laws dealing with AI. Existing legislation, such as the IT Act, tort law, intellectual property laws and criminal laws, provides a broad framework to address liabilities arising from the use of technology. AI systems are designed in such a manner that their acts are often not foreseeable by their engineers, which may attract the concept of negligence under tort law.

The Copyright Act, 1957, as specified in the response to Question 6 above, includes computer databases under the definition of “literary work”. As any literary work is protected under the Copyright Act, 1957 in India, and AI functions mainly on the basis of databases, it may be concluded that AI systems are copyright protected. Further, the rules applicable to “computer programmes” under the Patents Act, 1970 may be applicable to AI systems: if the processes associated with the creation or functionality of AI systems are novel, inventive and capable of industrial application, such processes are patentable. However, since AI systems involve limited human action, control or intervention, there is the added challenge of establishing ownership of intellectual property rights generated through the use of AI systems.

Further, if a service provider using AI as part of its software discloses, in the course of performing a contract, an individual’s personal information, including sensitive personal data or information, without that individual’s consent, the service provider may be liable for penalties under the IT Act as specified in the response to Question 9 above. Criminal law, specifically the Indian Penal Code, 1860, may be applicable to numerous crimes committed by the creators of AI systems under the disguise of machines or algorithms.
Having said the above, the real challenge lies in attributing liability under any of the aforementioned laws to AI systems operating with limited human intervention, since a machine or algorithm cannot itself be held liable for intellectual property infringement, negligence, crimes, or the disclosure of confidential or personal information.
We expect that the legal and ethical questions that arise out of use of artificial intelligence will require new and specific regulation. With that being said, as the Israeli legal system is based on court precedents, the legal issues associated with the use of artificial intelligence may be settled faster than expected as increased use will force the courts to set precedents on these issues. It would seem that until legal precedents are set, legal issues associated with artificial intelligence will be assessed on a case-by-case basis.
There is currently no legislation in Singapore that specifically deals with artificial intelligence. As mentioned above, consumer protection laws will apply to liability issues in respect of the artificial intelligence software purchased by the consumer. However, such laws may not be sufficient to deal with more complex liability issues relating to artificial intelligence when consumer protection laws are inapplicable. Furthermore, the use of artificial intelligence is currently not regulated in Singapore. This is likely to become a greater area of concern as artificial intelligence technology becomes more sophisticated and takes an increasingly active role in our economy.
French authorities have for many years understood the importance of setting up mechanisms to foster pilot projects and large-scale experiments in the area of Artificial Intelligence (for instance, ‘France is AI,’ ScanR, etc.), as well as the necessity of developing a regulatory framework that will protect consumers and citizens at large.
In this latter respect, the 1978 Act on personal data protection forbids any legal person from allowing a decision with legally binding effect on an individual to be made solely on the basis of data processing that profiles that individual or assesses certain aspects of his personality. Furthermore, Act n°2016-1321 of 7 October 2016 for a digital Republic enacts the right of a natural person to be informed when an individual decision concerning her is taken on the basis of an algorithm, and to request communication of the rules and major features of that data processing.
In light of the risks raised by Artificial Intelligence - essentially, the risk that “It would take off on its own, and re-design itself at an ever increasing rate” (Stephen Hawking) - legislators may have to reinforce such principles (along with the liability regime, as explained in Question 13) as AI develops and becomes less and less understandable to those subject to its prescriptions.
There is still quite some legal uncertainty regarding artificial intelligence. (See above)
There are currently no specific rules regarding A.I. functionality under Swiss law, and we are not aware of any intention on the part of the legislator to enact such rules. While government authorities periodically review practical developments in this area, it is emphasized that regulation should be technology-neutral. The Federal Council has repeatedly stated, and it is widely held in the legal community, that A.I.-specific legislation is neither required nor desirable, in order to preserve the regulatory latitude and room for development required for technology-driven business models and companies.
As of today, the Ecuadorian legal system has no specific law regulating artificial intelligence or the issues associated with it. Moreover, from our legal perspective, Ecuador is not prepared to deal with legal issues related to artificial intelligence, since doing so would require creating and amending a number of laws to regulate its application, uses, and protection. However, artificial intelligence is a subject that gains more force and importance around the world every day, which is why Ecuador will eventually find itself needing to enact legislation regulating the matter, especially taking into account that artificial intelligence will have a great impact on society.