Author – Dr. Paulius Astromskis
This paper inquires into the complex issue of the regulation of intelligent machines. The aim is to develop, considering all ontological levels of law, a framework that would ensure a balance between freedom and control of the most sophisticated technological innovations. To achieve this objective, the first part of the paper defines the general concept, elements, and ontology of law and regulation. This is followed by a critical analysis of the Trifecta model of IT-based regulation in order to determine whether it can serve as a framework for intelligent machine regulation. After ascertaining that the Trifecta model needs to be augmented with a moral (values) domain, the Quadfecta framework for intelligent machine regulation is proposed in the last part of the paper. This framework promulgates the idea of a dynamic interrelation between behavior, the technology in use, the rules, and "the Law", where changes in one domain result in changes in the other three. A follow-up research agenda for issues of intelligent machine regulation is then discussed in the concluding remarks of the paper.
artificial intelligence; regulation; machine morality; machine justice; machine behavior; technology law
Klaus Schwab (2016) eloquently warns that the technological revolution will fundamentally alter the way we live, work, and relate to one another. Indeed, technological transformations, in their scale, scope, and complexity, are unlike anything humankind has ever experienced before. Clearly, technological change is a permanent structural force, leading to unprecedented legal challenges. Already today, intelligent machines, such as the most sophisticated artificial intelligence (AI) systems, are being developed and introduced to make autonomous decisions or otherwise interact with third parties independently. However, the associated rule sets, including those that serve to protect human rights and fundamental values, have not been explicitly defined. Due to the novelty of the application of technology in certain domains of practice, the emerging intelligent machines lack cognitive (expectations) or normative rule sets regarding how they can or should be used, or what their use means. That is, the existing legal systems and rules, originally intended and designed for human-to-human (in personam) and human-to-machine (in rem) processes, cannot work well in machine-to-human and machine-to-machine environments (Fomin 2018).
The issues of intelligent machine regulation might be analyzed through the lens of "The Law of the Horse", which assumes "that the best way to learn the law applicable to specialized endeavors is to study general rules" (Easterbrook 1996), thus denying the need for new, specific regulations and favoring the application of old ones. Of course, the historical development of general rules and their application to various contexts cannot be ignored. However, legal systems tailored to regulate "horse" issues and real-world behaviors already cannot cope with the speed of technological disruption and with (at least) two specific cyberspace legal challenges: jurisdictional fragmentation and the attribution of behavior (Appazov 2014). Therefore, concurring with Lawrence Lessig (1999), the potential of the law of the horse to regulate the most sophisticated intelligent machines seems to be overestimated.
The proposition that existing regulations will have to be changed to reflect new technological innovations was promoted by the Industrie 4.0 workgroup (Kagermann et al. 2013) and the RoboLaw research group (Palmerini et al. 2014). It is widely supported by various policy-forming forums and initiatives, including the United Nations Centre on Artificial Intelligence and Robotics, the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, the European Commission High-Level Expert Group on Artificial Intelligence, and many others. Accordingly, there is a need for a review of all ontological levels of law in the context of intelligent machines which, according to the European Parliament (2016), may at some point in time be responsible for making good any damage they cause, and for making autonomous decisions or otherwise interacting with third parties independently.
Yet the question of regulation involves the complex opposition between freedom and control. More control means less freedom, and hence over-protectionism might unnecessarily hinder welfare-enhancing innovations. Therefore, the main challenge for regulators is to balance freedom and control in a way that maximizes both personal and overall societal well-being, thus promoting the benefits and inhibiting the harmful effects (den Hertog 2010). Although many scholars have offered generalized or specialized solutions to various technology regulation problems, the debate on the regulation framework is far from exhausted. For the purposes of this paper, due to reasons of convenience and personal interest in the concept, the Trifecta model of IT-based regulation developed by deVaujany et al. (2018) has been chosen for critical analysis. The Trifecta model articulates IT-based regulation as "an experiential, dynamic, evolving constellation of rules, IT artifacts, and practices encompassing recursive and generative relationships", and may presumably be applied to intelligent machine regulation, although not without critique and expansion.
Therefore, the aim of this paper is to develop a general framework for intelligent machine regulation that would ensure a balance between freedom and control of the most sophisticated technological innovations, while considering all ontological levels of law. To achieve this aim, the following objectives are undertaken within the four parts of this paper: (1) define the general concept, elements, and ontology of law and regulation; (2) critically evaluate the IT-based Trifecta model of regulation; (3) develop a holistic framework for intelligent machine regulation; (4) establish a follow-up research agenda for intelligent machine regulation issues.
The objectives listed above reflect the overall structure of the paper. The first three parts provide analysis and discussion of the first three objectives, while the research agenda and other concluding remarks are presented in the last section. This research contributes to the understanding of various legal aspects of technological evolution and its regulation. It may serve as a basis for legal and regulatory solutions that would encourage the development of the most sophisticated technologies while simultaneously ensuring the protection of human rights and other fundamental values of society.
This paper has certain limitations and constraints that need separate consideration. First, due to space constraints, this research was based on a limited number of selected resources, largely focused on substantiating the proposed general framework for intelligent machine regulation. While every effort was made to collate the most relevant sources, the pace of scientific and policy development is rapid, and many additional sources may be available that were not considered. Secondly, the broad aim of this paper, given these constraints, also leads to a high level of abstraction in explaining the proposed framework, with the disadvantage that the paper may lack empirical evidence to support its key findings and observations. Finally, there is no intention to provide a technical opinion or recommendations on the specific legal regulation of intelligent machines. The observations made are intended to help advance from the theory to the practice of intelligent machine regulation, outlining a general scope of questions and considerations, but without necessarily reaching or empirically substantiating specific legal recommendations for any particular jurisdiction. However, by setting out the general framework, an overall agenda is formed for follow-up research and development regarding the proposed fields of machine morality, machine justice, machine psychology, and legal personhood, and the relationships between them.