Laws of Robotics
The Laws of Robotics were first described by Isaac Asimov in his collection of science-fiction stories I, Robot (1950). Since then, they have influenced ideas of what a robot should be and how it should act. Within Asimov's stories the laws are binding: they constrain how his robots act and make decisions.
Initially, these laws applied only to "literary" robots, but they have since influenced the programming of real robots and are used in modified forms in competitions, e.g. for cleaning robots. The spirit of the laws, above all that a robot must not endanger people, is also reflected in the safety requirements placed on modern industrial robots, even if most robot programmers rarely think of Asimov.
Asimov’s laws state:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The laws are hierarchical: each lower law yields to the ones above it. Although they appear clearly formulated, they are not "foolproof", chiefly because they must be interpreted, and interpretation is imperfect and incomplete.
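The hierarchy described above amounts to a priority-ordered decision rule, which can be illustrated with a toy sketch in Python. Everything here is invented for illustration: the `Action` fields and the `action_permitted` function are hypothetical simplifications, not an actual robot-control API, and real-world conflicts are far less clear-cut than boolean flags.

```python
from dataclasses import dataclass


@dataclass
class Action:
    # Hypothetical yes/no judgments about a proposed action.
    harms_human: bool                     # would directly injure a person
    allows_harm_through_inaction: bool    # failing to act would let a person be harmed
    ordered_by_human: bool                # a human has ordered this action
    endangers_robot: bool                 # the action risks the robot's own existence


def action_permitted(action: Action) -> bool:
    """Evaluate the three laws in strict priority order."""
    # First Law: overrides everything below it.
    if action.harms_human or action.allows_harm_through_inaction:
        return False
    # Second Law: obey human orders (any First Law conflict was already rejected).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the first two are silent.
    return not action.endangers_robot
```

Because the checks run top-down and return early, an order from a human overrides the robot's self-preservation, but never the prohibition on harming people, which mirrors the hierarchy of the three laws.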