Imagine, if you will, a murder trial where the defendant is charged with shooting the victim at point-blank range in broad daylight with multiple witnesses present. The scenario plays out like hundreds of murder trials before it: a judge presides over the case, the state’s lawyers carefully lay out the prosecution, and the defense vigorously defends their client.
The one astonishing part of this scenario is that at the end of closing arguments, the 12 jurors do not leave the courtroom, nor do they convene privately to comb through the evidence and transcripts. Instead, the case is decided in microseconds, with the foreperson reading a verdict that is guaranteed to be unanimous.
The jurors in the scenario are 12 separate and distinct artificially intelligent applications intricately programmed to listen to all of the evidence, apply the facts to the law, and even take into account the human condition rife with emotions, illogic, and biases.
While this scenario sounds far-fetched, like science fiction from the likes of J.J. Abrams or Arthur C. Clarke, it is in fact a very realistic scenario and arguably a foregone conclusion for our judicial system. As our technological capabilities compound almost daily, we may eventually reach what Ray Kurzweil calls the Singularity, where nonbiological intelligence will match the range and subtlety of human intelligence.
In fact, artificial intelligence, or "AI," is rapidly becoming pervasive in our society. The stock market uses AI in high-frequency trading and other electronic exchanges. The credit card industry uses AI to process transactions and track fraud. Our government uses AI for a variety of functions. Elon Musk recently tweeted his fear that artificially intelligent programs could be more dangerous than nuclear weapons, highlighting the possibility that AI programs could become "superintelligent" within the next 10 years, giving them the ability to absorb all of the world's knowledge and apply it in context to solve very complex and nuanced problems. Ray Kurzweil, for his part, is convinced that humans and machines will be fully integrated within our lifetime.
The questions to be wrestled with here are twofold. First, can an artificially intelligent program be considered a legal person capable of serving on a jury, acting as a trustee, or even becoming a plaintiff or defendant in a court of law? And second, if we can answer the first question in the affirmative, would this legal "person/AI" meet the legal requirements to enjoy such rights, duties, and obligations in our judicial system?
In his essay Legal Personhood for Artificial Intelligences, Lawrence B. Solum dives deeply into a fascinating discussion on the capability of legal personhood being attached to an artificially intelligent program for the purpose of being a trustee to a trust. Solum lays out the case that an AI application could not only demonstrate the nuances, competencies, and sophistication of a human in order to carry out the duties of a trustee, but an AI program could also ultimately gain constitutional personhood.
Today we grant legal personhood to inanimate entities like corporations, as demonstrated by the Hobby Lobby case recently heard by the Supreme Court. Thus, an argument can be made that if corporations and other non-living entities can have personhood, and if the constitutional requirements for legal personhood can be satisfied, then artificial intelligences need only meet the legal requirement of self-awareness and the doors of justice and our legal system should swing wide open.
Such a reality would offer some distinct advantages to our justice system. As a practical matter, having a jury composed of artificially intelligent programs would eliminate all the problems associated with the voir dire process. Defendants and plaintiffs would not have to worry about prejudice or bias; indeed, personality traits and life experiences could be tailored within each AI juror to meet the interests of all parties involved. AIs serving as trustees and in other legal custodial roles would come with an assurance of compliance with the prudent investor rule and other ethical requirements. According to Solum, if an AI were to breach a fiduciary duty, there are a number of ways that AI program could be held responsible.
The idea of AIs participating in our legal system as plaintiffs or defendants is perhaps the most titillating scenario. Of course, one has to consider the slippery slope that lawyers will all have to face: when we're staring at opposing counsel and that black pointer and LCD computer screen is staring back at us, who's most likely to win that case?