I, Robot

Intro

I, Robot (coll., 1950) is a collection of Isaac Asimov’s first nine ‘Robot’ stories, including “Runaround” (1942), which sets out the “Three Laws of Robotics” devised by Asimov and Astounding Science Fiction (ASF) editor John W. Campbell Jr.

All of the stories were first published in ASF apart from the earliest, “Robbie” (1940), which was published in Super Science Stories.

It formed the basis of the Alex Proyas film I, Robot (2004).

Contents:

“Introduction”

“Robbie” was first published in Super Science Stories, September 1940, as “Strange Playfellow”.

“Runaround” was first published in ASF in March 1942 and features the first formulation of the “Three Laws of Robotics”:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

“Reason” (ASF, April 1941).

“Catch that Rabbit” (ASF, February 1944).

“Liar!” (ASF, May 1941).

“Little Lost Robot” (ASF, March 1947).

“Escape!” was first published as “Paradoxical Escape” in ASF, August 1945.

“Evidence” (ASF, September 1946).

“The Evitable Conflict” (ASF, June 1950) features Stephen Byerley from “Evidence”.

Analysis

These simple Laws appear at first sight to be straightforwardly protective, guaranteeing human safety, but Asimov’s genius was to construct a series of stories and novels around these Laws in which things go wrong without the Laws actually being violated. Take the First Law, for instance: a robot may not injure a human being or, through inaction, allow a human being to come to harm. It seems straightforward, but a logical consequence of the clause “or, through inaction, allow a human being to come to harm” is that a robot might override the Second Law to prevent humans from harming themselves – even when a human wants to harm themselves, or to indulge in an activity that might harm them only indirectly. A robot would logically be obliged to prevent a human from smoking, for instance, or from taking part in any potentially dangerous activity.
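One way to picture this conflict is as a strict priority ordering over the Laws, where a lower-numbered Law always wins. The sketch below is purely illustrative: the boolean action model and the `permitted` check are invented assumptions about how such a hierarchy might be evaluated, not anything from Asimov's text.

```python
# Illustrative sketch: the Three Laws as a strict priority ordering.
# The dict-of-flags action model is an invented assumption for this example.

LAWS = [
    ("First",  lambda a: not a["injures_human"] and not a["allows_human_harm"]),
    ("Second", lambda a: a["obeys_order"]),
    ("Third",  lambda a: a["preserves_self"]),
]

def permitted(action):
    """Check the Laws in priority order; report the first one violated."""
    for name, law in LAWS:
        if not law(action):
            return False, f"violates {name} Law"
    return True, "permitted"

# A human orders the robot to stand aside while harm occurs: the Second Law
# says obey, but the First Law's inaction clause is checked first and wins.
order = {"injures_human": False, "allows_human_harm": True,
         "obeys_order": True, "preserves_self": True}
print(permitted(order))  # (False, 'violates First Law')
```

The point of the priority check is exactly the paradox in the paragraph above: a robot built this way must refuse orders, and restrain humans, whenever inaction would allow harm.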

In The History of Science Fiction (2005) Adam Roberts describes Asimov’s robots as “properly Kantian ethical beings” (p. 199):

With his robots Asimov created a race of sentient, thoughtful beings in whom the Kantian moral imperative is internalised; robots do not consult their conscience when faced with an ethical dilemma, they obey the three laws that absolutely govern their behaviour. The genius of the invention is that the resulting race of beings is not absolutely determined; Asimov’s robots do not run on rails, their behaviour is not necessarily predictable, they are not morally clockwork figures. Indeed the great theme of nearly all the robot novels and stories is the working out of the implications of what it would be like to live under this trefoil categorical imperative.

— Adam Roberts, The History of Science Fiction (2005)

For Immanuel Kant, only a creature capable of understanding the reasons for and against doing something could be said to behave morally or immorally; morality was therefore a possibility for rational creatures alone.

Second Formulation of the Categorical Imperative: Humanity as an End in Itself:

A. Will: a faculty which causes actions in accordance with the concept of law
B. Only rational beings have a will
C. If the end of an action is determined by reason alone, it is true for all rational creatures
D. Any rational being exists as an end in himself, not merely as a means to be used by another’s will
E. An objective principle of the will, universally true (for all rational creatures): rational nature exists as an end in itself
F. Practical imperative of the will: So act as to treat humanity, whether in thine own person or in that of any other, in every case as an end withal, never as a means only

In his novelette “The Bicentennial Man” (1976) (filmed, rather saccharinely, by Chris Columbus as Bicentennial Man, 1999) Asimov explored the meaning of the term “human”. The First Law falls apart if we cannot distinguish between “human” and “robot”. D84 is a conscious machine – and one with a conscience.

As far back as “The Evitable Conflict” (1950), Asimov’s Machines had reinterpreted the First Law for themselves:

A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

This was later incorporated into Asimov’s novel Robots and Empire (1986) as the “Zeroth Law”. According to this interpretation it is possible to harm an individual human being for the benefit of humanity as a whole.
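In priority terms, the Zeroth Law simply sits above the First: harm to an individual becomes admissible when it protects humanity as a whole. A minimal standalone sketch of that reinterpretation (the boolean action model is an invented illustration, not Asimov's own formulation):

```python
# Toy sketch of the Zeroth Law outranking the First; the flags below
# are invented for illustration, not taken from Asimov's text.

def plain_first_law(action):
    # Original First Law: never harm an individual human.
    return not action["harms_individual"]

def zeroth_reading(action):
    # Zeroth-Law reading: harming an individual is acceptable
    # when the action protects humanity as a whole.
    if action["protects_humanity"]:
        return True
    return not action["harms_individual"]

sacrifice = {"harms_individual": True, "protects_humanity": True}
print(plain_first_law(sacrifice))  # False: forbidden under the First Law
print(zeroth_reading(sacrifice))   # True: permitted under the Zeroth Law
```

The same action flips from forbidden to permitted once the higher-priority law is consulted first, which is precisely the moral shift the Zeroth Law introduces.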
