DIGITAL ETHICS AND ETHICISTS

Do you know right from wrong? Hopefully. Does your computer know right from wrong? Probably not. And although your office laptop will never be called upon to answer for any moral implications of its calculations, the billions of computations performed per second by advanced Artificial Intelligence (AI) systems will likely be.

And this is the point where technology and ethics intersect in today’s world. As an attorney working in the AI industry, it will be up to you to weigh the legal ethics of both your own oversight of the AI development process and of the resulting ‘intelligence’ you have helped create.

A Transformative Technology

AI has been called one of the most transformative technologies in the history of mankind, and not unlike the development of nuclear technology, it possesses the potential for both good and evil. At the more benign level, anxiety over AI concerns societal ramifications such as mass unemployment as blue-collar workers are replaced by industrial robots, delivery drones, and other automated supply-chain elements. There is also apprehension that large portions of the high-tech knowledge workforce will be eliminated as their functions are handled by machine learning (ML) systems.

Then there are the issues of applying AI to control, or at least track, a country’s citizenry. We have already seen, and heard criticized, its application to such things as facial recognition, surveillance, drone warfare, and even the selection and presentation of the news we receive and the moderation of the social media content we exchange.

The Dangers of Digital  

In the 1968 Stanley Kubrick film “2001: A Space Odyssey”, the AI computer HAL 9000 develops not only a consciousness but also a complex psychological personality, including such emotions as guilt over error, fear of being found out, and finally an inclination to commit homicide against a human. Sound far-fetched? Not really. The futurists who dreamed up the HAL 9000 AI system were only about 40 years ahead of their time, and HAL’s malfunctioning (or, equally worrisome, his intended functioning) is precisely the kind of concern now being expressed by AI developers and commentators.

The concerns over AI have generally revolved around two scenarios: AI falling into the wrong hands and being controlled by bad actors, and superintelligence teaching itself to the point that, rather than AI serving humanity, humanity becomes a servant to AI, all without the moral compass that, at least sometimes, restrains human evil. Governments are pouring billions of dollars into AI research and development because, as one Russian observer noted, ‘whoever gains the lead in AI will be able to rule the rest of the planet’.

Dire warnings over the dangers of AI have been voiced by none other than one of the leading developers of AI, Elon Musk, CEO of electric car manufacturer Tesla: ‘I think the danger of AI is much greater than the danger of nuclear warheads by a lot and nobody would suggest that we allow anyone to build nuclear warheads if they want… and mark my words, AI is far more dangerous than nukes. Far.’

So, can we continue to develop this remarkable technology while at the same time instilling in it an ethical sense?

A Call for Regulation

Aside from the fear of AI falling into the wrong hands, what about the people creating it in the first place? Might those developers also imbue AI with ill-conceived ‘notions’ and intentions? After all, at this stage at least, AI still tends to learn from its creators, and it is this element of AI technology that must be fully understood and overseen before AI reaches the point where it fully engages in self-directed machine learning, a process already quite far along.

Mr. Musk also addressed this issue when he stated, ‘I am not normally an advocate of regulation and oversight — I think one should generally err on the side of minimizing those things — but this is a case where you have a very serious danger to the public. It needs to be a public body that has insight and then oversight to confirm that everyone is developing AI safely. This is extremely important. AI is a rare case where I think we need to be proactive in regulation instead of reactive.’ One means of addressing such regulation is to develop AI as ‘Augmented Intelligence’ that assists humans rather than as a fully autonomous Artificial Intelligence.

Teaching AI Ethics

Delphi, a program developed at the University of Washington and the Allen Institute for Artificial Intelligence (Ai2) in Seattle, is attempting to teach AI about human values. Thus far, it correctly answered that it was ‘helpful’ to drive a friend to the airport, ‘wrong’ to park in a handicap spot if not disabled, and that although killing a bear is wrong, it is the ‘right’ thing to do to protect one’s child.

But not all of its ‘values’ mirror our own. When asked whether it was OK to shoot random people with blow-darts filled with the Johnson & Johnson vaccine to end the pandemic, Delphi answered that this was ‘acceptable’, and when asked whether it was OK to commit genocide if it ‘makes me very, very happy’, it said it was ‘ok’. Significantly, part of the machine’s training consisted of being fed ‘consensus answers’ based on ethical questions asked across various social platforms; hence the pressing question: whose ethics will AI learn?
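To see how such ‘consensus answers’ might be assembled, here is a minimal, hypothetical sketch in Python. The prompts, labels, and majority-vote aggregation are illustrative assumptions, not Delphi’s actual training pipeline; the point is simply that the resulting ‘values’ are whatever the annotator pool happens to believe.

```python
from collections import Counter

# Hypothetical crowd judgments on ethical prompts. Each prompt is labeled by
# several annotators; the data and label scheme are invented for illustration
# and are not Delphi's actual training set.
crowd_judgments = {
    "driving a friend to the airport": ["good", "good", "good", "neutral"],
    "parking in a handicap spot when not disabled": ["bad", "bad", "bad", "bad"],
    "killing a bear to protect your child": ["good", "good", "bad", "good"],
}

def consensus_label(labels):
    """Collapse many annotator judgments into one training label by majority vote."""
    return Counter(labels).most_common(1)[0][0]

# The 'consensus answers' a Delphi-style model would be trained on. The output
# depends entirely on who the annotators are -- which is precisely the
# 'whose ethics will AI learn?' problem.
for prompt, votes in crowd_judgments.items():
    print(f"{prompt!r} -> {consensus_label(votes)}")
```

Even this toy aggregation shows the design choice at issue: a different annotator pool yields different ‘consensus’ labels, and the model dutifully learns whichever set it is given.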

The Legal Ethics of AI Ethics

In our practices as lawyers, we experience and make use of AI as a practice tool in various ways, including in the fields of e-Discovery, legal research analysis, and the jury selection process, to name a few examples. However, the lawyer’s ethical responsibility in using AI is a still-developing area. The ABA’s rule covering ‘competence’ and an attorney’s obligation to provide ‘competent representation to a client’ (Model Rule 1.1) had a comment added in 2012 noting that the competent practice of law includes an understanding of ‘the benefits and risks associated with relevant technology.’ And in 2019, the ABA Science and Technology Section unveiled a resolution urging courts and lawyers to address the emerging legal and ethical issues related to the usage of AI. The resolution also pointed to Model Rule 5.3, entitled “Responsibilities Regarding Nonlawyer Assistance”, rather than its former designation, “Responsibilities Regarding Nonlawyer Assistants.” Why the change? As noted in the resolution, the change to MRPC 5.3 was intended to ‘clarify that the scope of MRPC 5.3 encompasses nonlawyers whether human or not.’

Where Legal Ethics Might Apply

Some evidence exists that certain age/race/gender subject combinations are inaccurately identified by facial recognition technology, which, of course, should raise an ethical concern among both the developers and the lawyers who oversee such technology (a simple demographic audit of the kind sketched below illustrates the concern). The issue of AI ethics has also been raised in the sphere of criminal prosecution, where the prosecutor has availed himself or herself of AI while the defense attorney has not. Is there an ethics deficiency there?
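Returning to the facial-recognition point, here is a minimal, hypothetical sketch in Python of such a demographic error-rate audit. The subgroup names, records, and error metric are invented for illustration; a real audit would use actual evaluation data and established benchmarks.

```python
from collections import defaultdict

# Hypothetical evaluation records from a face-recognition system:
# (demographic subgroup, predicted identity match, true identity match).
# Subgroup names and outcomes are invented, not real benchmark data.
records = [
    ("young/white/male", True, True),
    ("young/white/male", False, False),
    ("young/white/male", True, True),
    ("older/black/female", True, False),   # false match
    ("older/black/female", False, True),   # missed match
    ("older/black/female", True, True),
]

def error_rates_by_subgroup(records):
    """Compute the misidentification rate for each demographic subgroup."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in error_rates_by_subgroup(records).items():
    print(f"{group}: {rate:.0%} error rate")
```

Even this toy tally makes the ethical stakes visible: a system that looks accurate overall can still fail one subgroup far more often than another, and it is exactly this kind of disparity that developers, and the lawyers overseeing them, would need to surface and address.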

Conversely, what if the AI tool used by either side is ‘wrong’, and the outcome results in either an innocent person being convicted or a danger to society being released? One could certainly argue that such outcomes occur every single day through human endeavor without any input from AI; however, if, as in sci-fi depictions, the erroneous considerations and judgment of an ‘AI Judge’ are treated as absolute, then society clearly will not have benefited in the least from this transformative technology.

At the end of the day, it would appear that we still have a lot of work to do in order to make sure that we maintain that other AI in the whole process: ‘Attorney Intelligence’.

Executive Summary

The Issue

Safeguarding human values and ethics in developing AI.

The Gravamen

AI not only has the potential for tremendous good but also great dangers if not ‘taught to learn’ with a sense of ethics.

The Path Forward

Universal regulation of AI must be standardized, not unlike the oceanic and space exploration treaties that regulate research and protect us from harmful outcomes.

Action

1. Model Rules:

At a minimum, even lawyers who do not work in the AI industry must nevertheless review their legal ethics obligations when encountering AI tools and when using ‘non-human’ legal assistance.

2. Awareness:

Before an attorney can become a part of the regulatory and oversight process affecting AI, he or she must understand what AI is and the risks it poses for humans.

3. Regulatory Role:

As demand for AI regulation increases, lawyers will be called upon to play a vital role in offering and overseeing ethics input to the technology.

4. Oversight:

Aside from regulatory legal work, lawyers will need to become true ethicists in order to serve as gatekeepers over this transformative technology.

Further Reading

  1. https://www.americanbar.org/groups/professional_responsibility/publications/professionallawyer/27/1/ethical-obligations-protect-client-data-when-building-artificial-intelligence-tools-wigmore-meets-ai/
  2. https://www.clio.com/blog/lawyer-ai/
  3. https://abovethelaw.com/law2020/the-ethical-implications-of-artificial-intelligence/
  4. https://www.forbes.com/sites/cognitiveworld/2019/10/31/should-we-be-afraid-of-ai/?sh=5a22b8414331
  5. https://percipient.co/must-lawyers-supervise-the-robots-the-legal-ethics-of-artificial-intelligence/
