If a modified robot were to drop a heavy weight upon a human being, he would not be breaking the First Law if he did so with the knowledge that his strength and reaction speed would be sufficient to snatch the weight away before it struck the man. However, once the weight left his fingers, he would no longer be the active medium. Only the blind force of gravity would be that. The robot could then change his mind and, merely by inaction, allow the weight to strike. The modified First Law allows that" (79).
Reference Quote
Similar Quotes
If, by virtue of the Second Law, we can demand of any robot unlimited obedience in all respects not involving harm to a human being, then any human being, any human being, has a fearsome power over any robot, any robot. In particular, since Second Law supersedes Third Law, any human being can use the law of obedience to overcome the law of self-protection. He can order any robot to damage itself or even to destroy itself for any reason, or for no reason.
Is this just? Would we treat an animal so? Even an inanimate object which had given us good service has a claim on our consideration. And a robot is not insensitive; it is not an animal. It can think well enough so that it can talk to us, reason with us, joke with us. Can we treat them as friends, can we work together with them, and not give them some of the fruits of that friendship, some of the benefits of co-working?
If a man has the right to give a robot any order that does not involve harm to a human being, he should have the decency never to give a robot any order that involves harm to a robot, unless human safety absolutely requires it. With great power goes great responsibility, and if the robots have Three Laws to protect men, is it too much to ask that men have a law or two to protect robots?
The trouble with you, Peter, is that when you think of a witness to a planetological statement, you think of planetologists. You divide up human beings into categories, and despise and dismiss most. A robot cannot do that. The First Law says, 'A robot may not injure a human being or, through inaction, allow a human being to come to harm.' Any human being. That is the essence of the robotic view of life. A robot makes no distinction. To a robot, all men are truly equal, and to a robopsychologist who must perforce deal with men at the robotic level, all men are truly equal, too.
One. A robot may not injure a human being or, through inaction, allow a human being to come to harm. Two. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. Three. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
A robot may not harm humanity or, through inaction, allow humanity to come to harm. I now consider it the Zeroth Law of Robotics. The First Law should read: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless such action would violate the Zeroth Law of Robotics.
The Three Laws of Robotics:
1: A robot may not injure a human being or, through inaction, allow a human being to come to harm;
2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;
3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law;
The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
No? Then listen to this. It is my belief that throughout the history of the positronic robot, the First Law of Robotics has been deliberately misquoted." Leebig moved spasmodically. "Misquoted? Fool! Madman! Why?" "To hide the fact," said Baley with complete composure, "that robots can commit murder.
The orthodox view has the following reading: 1) A robot may not harm a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; 3) A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.
The Three Laws Of Robotics:
- First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Handbook of Robotics, 56th Edition, 2058 A.D.