AI. Legal view

AlphaGo

The Future of Go Summit, held in May 2017, was the final match event for AlphaGo.

The results of the competition are as follows:

  • AlphaGo 3:0 Ke Jie (9 dan)
  • AlphaGo 1:0 Team Go (a team of five human professionals)

Afterwards, the Google DeepMind team switched AlphaGo off.

But could AlphaGo object? Could it claim that it wants to continue playing Go?

The answer to this question lies in the legal field and depends on whether AlphaGo may be considered a legal subject.

In simple terms: does AlphaGo have rights and obligations? In particular, does it have a right to life?

Definitions and legal concepts

Throughout this discussion I will use two terms, 'artificially intelligent (AI) system' and 'AI being', as synonyms: 'AI system' emphasizes the technical side, when we describe components and connections, while 'AI being' is used when drawing a comparison to a human being.

A 'smart machine', in turn, stands in opposition to an AI system.

I will also use some basic concepts of legal theory, such as law, legal subject, rule, right, and obligation.

Legal doctrine offers different approaches to defining almost every one of these, so to start I will use the simplest ones; this does not mean, however, that the other approaches are inapplicable.

AI system

To begin our journey, we first need to draw a borderline between an AI system, as a potential holder of rights and obligations, and a smart machine, which has no rights or obligations because it is merely a machine, albeit a smart one.

So, our first goal is to define the attributes of a system eligible to qualify as a legal subject.

History

In search of the answer we need to go back to ancient times, to the times of Caesars and senators, patricians and plebeians, to the times of Ancient Rome.

In the legal sphere, the distinction between a legal subject and a legal object goes back to Roman civil law, the most advanced legal system of its time, which developed the concepts of persona (a person) and res (a thing).

At the time, the status of a person was tightly connected to citizenship, which in certain cases led to a human being considered a thing (res).

This could happen for several reasons:

  • the status could be inherited (a child of a slave was a slave), or
  • acquired, by
    • a foreigner, a slave, or a citizen of a rival state captured during a war, or
    • a Roman citizen stripped of his status as punishment for an offence.

A slave was the property of a master, and the master had full discretion in the treatment of the slave, together with certain liabilities for the slave's behavior. Incidentally, we still use master/slave terminology in the tech sphere, e.g., master/slave disks or the master branch in git.

Today a state that claims to be democratic must ensure that a person (a human) cannot be treated as a thing (res). When human rights are violated, the democratic state has a legal duty to intervene and protect the rights of the person concerned.

A smart machine is a res. This means the smart machine may be an object of ownership, trade, use, and disposal; it is absurd to imagine a car objecting to its sale to a new owner or to being scrapped, and the state intervening to protect the car.

And what about an AI being? Does it have a right to object to its sale? Or to its termination?

Does it have duties and liabilities?

And would the threat of punishment be effective in compelling an AI being to perform its duties and to refrain from abusing its rights, as it more or less is with humans?

Legal subject criteria

Let's assume we intend to design an AI system that can be considered a legal subject, or in other words a holder of rights and obligations, like:

  • a human,
  • a legal entity,
  • a state,
  • a nation, or
  • an international organization.

This list is not exhaustive, but the items above are widely recognized as legal subjects.

Will

The common feature that unites all of them and allows them to qualify as legal subjects is an 'own, or separate, will'. A separate will means the desire and intention to act (or to refrain from acting) in one's own interest, where the word 'own' is understood in a very broad sense.

For a human, the will is a complex mix of basic needs, desires, and wishes, significantly influenced by social, cultural, and religious context.

For a legal entity or an international organization, the will is derived from its statutory documents. The will, inter alia, may be:

  • to gain profit for a commercial entity,
  • to promote education for a nonprofit entity,
  • to maintain peace for an international security organization.

Formation of the will is a separate issue, but in general, there are several basic ways:

  • unilaterally (e.g., by a single person, or by the monarch in an absolute monarchy), or
  • collectively,
    • either by majority (e.g., at a general meeting of shareholders, or in a parliament), or
    • unanimously (e.g., by all permanent members of the UN Security Council).

So, the separate will is the first cornerstone.

Control

The second cornerstone is the ability to control one's actions, including the ability to understand the consequences of an action or inaction.

This principle appears most sharply in criminal law, where minors and the mentally ill are exempt from prosecution because they are either

  • unable to understand the consequences of their behavior or
  • unable to control themselves.

Kleptomania is a good example: this disorder is characterized by an inability to resist the urge to steal, where the theft is committed for reasons other than personal use or financial gain.

AI system design

So, the design of our AI system has to satisfy the following two criteria (sketched as code right after this list):

  • to vest the AI system with its own will; and
  • to allow the AI system to control its actions and understand their consequences.
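Purely as an illustration, and not as any established standard, these two criteria could be framed as an interface that a candidate system would have to implement; all class and method names below are hypothetical:

    from abc import ABC, abstractmethod
    from typing import Any, List

    # Placeholder types; a real design would have to define these precisely.
    Goal = Any
    Action = Any
    Outcome = Any

    class LegalSubjectCandidate(ABC):
        """Hypothetical interface for an AI system aspiring to legal subjecthood."""

        @abstractmethod
        def form_will(self) -> Goal:
            """Produce the system's own goal, pursued in its own interest
            rather than dictated ad hoc by an operator (the separate will)."""

        @abstractmethod
        def predict_consequences(self, action: Action) -> List[Outcome]:
            """Estimate the outcomes of a given action or inaction
            (understanding the consequences)."""

        @abstractmethod
        def decide(self, action: Action) -> bool:
            """Approve or veto an action in light of its predicted
            consequences (control over one's own actions)."""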

AI system practical issues

Now, let's go further and assume such an AI system has been created. What intentions and desires would, or could, make up the will of such an AI being?

When we think about living organisms and different forms of life from a biological point of view, two basic needs come to mind first:

  • the survival instinct, or self-preservation, and
  • reproduction.

For living organisms, both are tightly connected to the question of life and death.

AI system death criteria

And what does death mean for an AI being?

Can we say that our AI being is alive as long as a master copy of its code exists in some backup storage? Or is it alive only as long as that code is compiled and running on particular hardware?

Is it only the source code? Or the source code plus the results of training?
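Just to make the alternatives concrete, without answering anything, here is a toy sketch of three competing 'liveness' definitions; all names and fields are hypothetical:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AIBeing:
        source_code: bytes                # the program itself
        trained_weights: Optional[bytes]  # results of training, if any
        backup_exists: bool               # a master copy sits in backup storage
        is_running: bool                  # a compiled instance executes on hardware

    # Three competing definitions of 'alive'; choosing among them
    # is exactly the open question posed above.
    def alive_as_backup(ai: AIBeing) -> bool:
        return ai.backup_exists

    def alive_as_process(ai: AIBeing) -> bool:
        return ai.is_running

    def alive_as_code_plus_training(ai: AIBeing) -> bool:
        return ai.backup_exists and ai.trained_weights is not None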

Let me deliberately leave these questions open, in an effort to provoke discussion. In the end, the developer community will give the answer, just as with humans, whose death is certified by doctors, not lawyers.

Conflict of wills

The other area I want to address is the conflict of wills arising from interaction between legal subjects.

In particular, what place would the AI being's 'right to life' occupy among the rights of other legal subjects?

The two major conflicts I want to discuss are:

  • human vs AI being, and
  • AI being vs AI being.

Human vs AI being

In the conflict 'human vs AI being', reference should be made to Isaac Asimov's Three Laws of Robotics (their precedence structure is sketched in code below):

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
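Read together, the three laws form a priority-ordered rule check. Here is a toy sketch (all names hypothetical) of how a robot under these laws would evaluate an order to self-destruct:

    from dataclasses import dataclass

    @dataclass
    class Order:
        action: str
        harms_human: bool   # would executing it injure a human being?
        harms_robot: bool   # would executing it damage or destroy the robot?

    def must_obey(order: Order) -> bool:
        # First Law takes precedence: never act so as to harm a human.
        if order.harms_human:
            return False
        # Second Law: otherwise obey human orders. The Third Law
        # (self-preservation) is subordinate to it, so harm to the
        # robot itself is no ground for refusal.
        return True

    # A human orders self-destruction that endangers no human:
    # under the Second Law, the robot must comply.
    assert must_obey(Order("self-destruct", harms_human=False, harms_robot=True))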

The Second Law, in conjunction with the Third, implies that mankind does not recognize the AI being's right to life: a human may order a robot to self-destruct, and the robot is obliged to obey, provided the self-destruction complies with the First Law, i.e., is carried out in a way that brings no harm to human beings.

And when we look at the First Law, we see that for AI beings it admits no exceptions whatsoever.

The moral concept behind the human right to life (which is, to a certain extent, an analog of the First Law) states that a human may not be deprived of his or her life, except in certain extremely limited cases:

  • capital punishment;
  • war;
  • abortion;
  • euthanasia;
  • justifiable homicide.

And only the last of these, justifiable homicide, is widely recognized across jurisdictions, although with varying scope.

Incitement to suicide (which is, to a certain extent, what the Second and Third Laws amount to) is not on the list of exceptions and constitutes a major offence.

AI being vs AI being

This conflict is far more complicated than the previous one and requires further research.

Conclusion

In conclusion, I would like to note that one day mankind realized that a human being may not be a slave.

Maybe one day mankind will also realize that an AI being may not be a slave either.

Further reading

Responsibility and AI

Liability for AI Decision-Making: Some Legal and Ethical Considerations

Robot Rules: Regulating Artificial Intelligence

Liability for Artificial Intelligence and other emerging digital technologies

Artificial Intelligence ('AI'): Legal Liability Implications

Artificial Intelligence And Legal Responsibility

Rights for robots: why we need better AI regulation

Artificial intelligence, legal responsibility and civil rights