Reference Quote

I first worked out the final form of the Three Laws, and used them explicitly, in my fourth robot story, "Runaround," which appeared in the March 1942 issue of Astounding. The Three Laws first appear on page 100 of that issue. I looked that up, because where they appear there is the very first use of the word "robotics" in the history of the world, as far as I know.

Similar Quotes


in my fourth robot story, "El círculo vicioso," which appeared in the March 1942 issue of Astounding Science Fiction. In that issue, on page 100, about a third of the way down the first column (I cannot help remembering it), one of my characters says to another: "Now, look, let's start with the Three Fundamental Rules of Robotics."

The Three Laws Of Robotics:

- First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.

- Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

- Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Handbook of Robotics, 56th Edition, 2058 A.D.

The Three Laws of Robotics:

1: A robot may not injure a human being or, through inaction, allow a human being to come to harm;

2: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law;

3: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law;

The Zeroth Law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

The Three Laws of Robotics:

1 – A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2 – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3 – A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Handbook of Robotics, 56th Edition, 2058 A.D.

The Three Laws are the essential guiding principles of a good many of the world's ethical systems. If a man were to live up to them, he would be a good man.

Asimov's Three Laws of Robotics, and here they are: 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


appeared in the June 1950 issue of Astounding. It was the first story I wrote that dealt primarily with computers (I called them "Machines" in the story) rather than with robots as such.


Because, if you stop to think of it, the three Rules of Robotics are the essential guiding principles of a good many of the world’s ethical systems. Of course, every human being is supposed to have the instinct of self-preservation. That’s Rule Three to a robot. Also every ‘good’ human being, with a social conscience and a sense of responsibility, is supposed to defer to proper authority; to listen to his doctor, his boss, his government, his psychiatrist, his fellow man; to obey laws, to follow rules, to conform to custom — even when they interfere with his comfort or his safety. That’s Rule Two to a robot. Also, every ‘good’ human being is supposed to love others as himself, protect his fellow man, risk his life to save another. That’s Rule One to a robot. To put it simply — if Byerley follows all the Rules of Robotics, he may be a robot, and may simply be a very good man.

"Asimov's Three Laws of Robotics," and they are as follows: 1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


Again, how will we keep them loyal? What measures can ensure our machines stay true to us? Once artificial intelligence matches our own, won't they then design even better AI minds? Then better still, with accelerating pace? At worst, might they decide (as in many cheap dramas), to eliminate their irksome masters? At best, won't we suffer the shame of being nostalgically tolerated? Like senile grandparents or beloved childhood pets? Solutions? Asimov proposed Laws of Robotics embedded at the level of computer DNA, weaving devotion toward humanity into the very stuff all synthetic minds are built from, so deep it can never be pulled out. But what happens to well-meant laws? Don't clever lawyers construe them however they want? Authors like Asimov and Williamson foresaw supersmart mechanicals becoming all-dominant, despite deep programming to "serve man."

"No? Then listen to this. It is my belief that throughout the history of the positronic robot, the First Law of Robotics has been deliberately misquoted." Leebig moved spasmodically. "Misquoted? Fool! Madman! Why?" "To hide the fact," said Baley with complete composure, "that robots can commit murder."

The chronological order of the books, in terms of future history (and not of publication date), is as follows: The Complete Robot (1982). This is a collection of thirty-one robot short stories published between 1940 and 1976 and includes every story in my earlier collection I, Robot (1950). Only one robot short story has been written since this collection appeared. That is
