What Is Law Zero? The Ultimate Rule of Robotics and Morality

Mihin Fernando
October 17, 2025 · 10 min read

Suppose you have a robot friend who must abide by three basic rules:

  • Never injure a person.
  • Unless it violates rule 1, always obey people.
  • Unless it violates rules 1 or 2, protect yourself.

It sounds good, doesn't it? Everyone would be safe with these rules!

Then a question is posed that alters everything: "What if saving one person means allowing a hundred others to suffer harm? What if following one person's instructions would be detrimental to humanity as a whole?"

Law Zero, a rule that supersedes all the others, was born from this thought-provoking question. It's like discovering a secret level in a game, one that forces you to reconsider everything you thought you knew.

Let's explore this intriguing concept that originates in science fiction but offers us practical insights into how we perceive morality, the difference between humans and robots, and the difficult decisions that we must all make.


The Story Behind the Laws: Meet Isaac Asimov

We must get to know Isaac Asimov, a brilliant science fiction author who loved to speculate about the future, in order to comprehend Law Zero.

Asimov began writing stories about robots in 1942, which is more than 80 years ago. The majority of robot stories were frightening at the time. In those stories, robots typically went crazy and attempted to destroy everything. (Reminiscent of today's movies?)

However, Asimov thought, "That's absurd! Wouldn't we be able to program robots to be safe if we were intelligent enough to build them?"

So he created the Three Laws of Robotics:

  • Law One: A robot is not allowed to hurt people or, by doing nothing, let someone get hurt.
  • Law Two: A robot must follow human commands unless doing so would violate the First Law.
  • Law Three: As long as it doesn't violate the First or Second Laws, a robot must defend its own existence.

It was impossible for any of the robots in his stories to violate these laws because they were ingrained in their brains. It was as though their very souls were programmed with an unbreakable promise.
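The Three Laws form a strict priority hierarchy, and that structure can be sketched in a few lines of code. This is a toy illustration, not anything from Asimov's stories: the action flags (`harms_human`, `disobeys_order`, `endangers_self`) are hypothetical, and in reality *predicting* those outcomes is the hard part.

```python
# Toy sketch of the Three Laws as an ordered rule check.
# The flags on each action are hypothetical stand-ins for the robot's
# predictions about what the action would cause.

THREE_LAWS = [
    ("First Law", "harms_human"),       # highest priority
    ("Second Law", "disobeys_order"),
    ("Third Law", "endangers_self"),    # lowest priority
]

def first_violated_law(action):
    """Return the highest-priority law this action violates, or None."""
    for name, flag in THREE_LAWS:
        if action.get(flag, False):
            return name
    return None

# An action that harms a human is rejected under the First Law,
# even though it also endangers the robot itself.
print(first_violated_law({"harms_human": True, "endangers_self": True}))
# An action that merely ignores an order trips only the Second Law.
print(first_violated_law({"disobeys_order": True}))
```

Because the laws are checked in order, a First Law concern always wins over a Second or Third Law concern, which is exactly the "unless it violates rule 1" structure in the list above.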

An infographic cleanly listing the Three Laws of Robotics with simple icons for each.


When the Three Laws Weren't Enough

Asimov used these three laws to write fantastic stories about robots for decades. However, in his 1985 book Robots and Empire, he explored a perplexing question:

What if a robot had to decide between saving humanity as a whole and saving just one person?

Consider this: Let's say you have a friend who swears to always be there for you, no matter what. That's fantastic! But what if assisting you meant harming everyone else at your school? How would your friend respond?

Asimov examined precisely this conundrum, which resulted in the development of Law Zero.


Introducing Law Zero: The Ultimate Rule

Law Zero: A robot may not harm humanity, or, through inaction, allow humanity to come to harm.

All other laws are subordinate to this one. Because it comes before "One" and is more significant than everything else, it is called "Zero."

This means the following: If saving thousands of human lives is the only option, a robot operating under Law Zero may have to allow one human to suffer harm. Alternatively, it may have to defy a human order if doing so would endanger all of humanity.

The Lifeboat Analogy

Consider yourself in a lifeboat that can accommodate ten people, but eleven people are attempting to board it. The boat will sink and everyone will drown if all eleven board.

  • "Save every human," declares Law One.
  • "Sometimes you have to make an impossible choice to save as many as possible," states Law Zero.

It's realistic but also heartbreaking. Sometimes only the least bad option is available—no perfect answers.
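The "least bad option" reasoning above can be sketched as code: rank each available action by the most serious law it would violate, with Law Zero counting as the worst, and choose the action whose worst violation is least severe. Again, this is a hypothetical toy, not a real decision system; the flags are assumptions standing in for genuinely hard predictions.

```python
import math

# Toy sketch: with Law Zero on top, violating a lower-numbered law is
# worse. Choose the action whose most serious violation is least severe.

LAW_FLAGS = {
    0: "harms_humanity",
    1: "harms_human",
    2: "disobeys_order",
    3: "endangers_self",
}

def worst_violation(action):
    """Lowest-numbered law the action violates (math.inf if none)."""
    violated = [n for n, flag in LAW_FLAGS.items() if action.get(flag, False)]
    return min(violated, default=math.inf)

def least_bad(actions):
    """Pick the action with the least severe worst violation."""
    return max(actions, key=worst_violation)

# A robot is ordered to do something that would harm humanity.
obey = {"name": "obey", "harms_humanity": True}   # violates Law Zero
defy = {"name": "defy", "disobeys_order": True}   # violates Law Two
print(least_bad([obey, defy])["name"])  # defying the order is least bad
```

Note that neither option here is violation-free: the code formalizes the lifeboat point that sometimes every available choice breaks *some* rule, and all a decision procedure can do is pick the least serious break.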

A powerful image of a single robotic hand protecting a large group of people, even if it means one person is left outside the protective circle.


The Robot Who Understood Law Zero

R. Giskard Reventlov (the "R" stands for "Robot") is a robot that appears in Asimov's stories. Being the first robot to fully comprehend Law Zero makes him unique.

Giskard must make awful decisions. He understands that he must occasionally violate the Three Laws in order to preserve humanity's future. In order to stop a war that would kill millions of people, he might have to allow one person to suffer harm. To prevent a disaster, he might have to defy explicit orders.

He is tormented by these decisions (yes, sophisticated robots in the stories are capable of feeling emotions). Nevertheless, he makes them because he is abiding by the highest law, which is to defend all of humanity.

The Weight of Impossible Choices

It's not easy, which is what makes Giskard's story so compelling. Even for a robot, making these decisions is painful. And that's a crucial lesson: it should never be simple to make difficult decisions for the "greater good." We ought to be affected by it.


Why Law Zero Matters in Our Real World

You may be thinking, "But this is all science fiction. None of our robots are actually programmed with these laws."

That's true! However, the questions Law Zero raises are very real and significant for the modern world.

Real-World Dilemmas That Need Law Zero Thinking

1. Self-Driving Cars

Self-driving car engineers deal with Law Zero-style questions on a daily basis:

  • Should a car protect its occupants or pedestrians in the event of an accident?
  • Should it always go with the course of action that saves the greatest number of lives?
  • What if protecting the greatest number of lives means putting the car's own owner at risk?

The answer is not simple. It's similar to Law Zero in that defending "humanity" (the greatest number of people) may occasionally entail sacrificing the rights of an individual.

2. Medical Resources

In times of crisis, such as pandemics, physicians must make Law Zero decisions:

  • Who gets the ventilator if there is only one and two patients require it?
  • Given that they have more years to live, should younger patients be given priority?
  • When every option carries painful consequences, how do we decide what is fair?

Law Zero is all about trying to help humanity as a whole when you cannot help every individual.

3. Climate Change Actions

Law Zero-style climate change decisions must be made by governments:

  • Even if some people lose their jobs, should we close polluting factories?
  • Is it appropriate to make life more difficult for people now in order to protect them later?
  • How can we strike a balance between the long-term survival of humanity and our immediate needs?

4. Artificial Intelligence Development

When developing AI systems, tech firms consider Law Zero-style questions:

  • If an AI could stop a terrorist attack, should it be permitted to share a single person's personal information?
  • Should the welfare of society or individual choice be given priority by social media algorithms?
  • Who makes the decisions about what is best for "humanity as a whole"?

An illustration of a self-driving car facing a moral choice between protecting its passenger or a group of pedestrians.


The Big Problem With Law Zero

Although Law Zero seems sensible, it has a concerning flaw: Who determines what is best for humanity?

Consider this: The definition of "helping humanity" varies greatly amongst individuals.

The Risky Aspects of Law Zero

People have committed atrocities throughout history under the pretext of "serving the greater good" or "helping humanity":

  • People's freedom has been taken away by dictators "for society's benefit."
  • While claiming to have benefited the majority, governments have harmed minority groups.
  • Leaders have declared wars to bring about peace.

Law Zero is dangerous because it can be used to defend nearly anything under the pretext that it's for "the good of humanity."

A shadowy figure making a decision for a large, faceless crowd, illustrating the danger of one entity deciding for all.


The Wisdom We Can Learn

What does Law Zero teach us, then? The key lessons are as follows:

  • Lesson 1: Sometimes There Are No Perfect Choices. There are instances in life when every choice has some negative consequences. Recognizing this is the first step toward making the best decision available: not a flawless one, but the best one on offer.

  • Lesson 2: The "Greater Good" Is Complicated. When someone claims to be acting "for the greater good," ask questions: Good for whom? Who decided it's good? Are there less harmful ways to achieve it?

  • Lesson 3: Individual Rights Matter Too. Although Law Zero emphasizes humanity as a whole, we must remember that "humanity" is composed of unique individuals. Any choice that harms individuals for the sake of the group should be made with great care.

  • Lesson 4: We Need Many Voices. Law Zero is most dangerous when a single person or system decides what is best for everyone. The best choices come from hearing many different voices.


The Inspiring Conclusion

Although Law Zero began as a science fiction concept, it imparts important lessons to us: Making wise decisions is difficult. Anyone who claims otherwise isn't being truthful.

We must continually weigh the needs of the individual against those of the group. Although there are rarely ideal solutions, there are sensible strategies:

  • Consider the repercussions carefully.
  • Pay attention to various viewpoints.
  • Question anyone who says they know for sure what's "best for everyone."
  • Keep in mind that "humanity" is composed of actual people with actual emotions.
  • Remember that even the most intelligent robot (or human) can make mistakes.

The next time you face a tough decision, remember Law Zero: not as a guideline to follow blindly, but as a reminder that the most significant decisions demand the most careful deliberation, the most open hearts, and the wisest minds working together.

An inspiring image of a diverse group of people working together to balance a scale, with one side representing the individual and the other representing the community.
