<< Asimov’s Humans >>


As an absurd (or surreal-pragmatic, compassion-imbued) thought-exercise, iterated from Asimov’s 1942 Laws of Robotics, let us substitute “robot” (a word etymologically traceable to the Czech robota, meaning as much as “forced labor”) with “human.” One might then get the following:

  • A human may not injure a human being or, through inaction, allow a human being to come to harm. [*1]
  • A human must obey the orders given them by human beings except where such orders would conflict with the First Law. [*2]
  • A human must protect their own existence as long as such protection does not conflict with the First or Second Laws. [*3]

[*1]

It seems we humans do not adhere to this first law. If humans are not fully able to adhere to it, which techniques do, and will, humans put into practice to constrain robots (i.e., more or less forced laborers) to do so?

The latter, in the contexts of these laws, are often implied as harboring forms of intelligence. This, in turn, might obligate one to consider thought, reflection, spirituality, awareness, and consciousness as being part of the fuzzy cloud of “intelligence” and “thinking.”

Or, in a conquistadorian swipe, one might deny the existence or importance of these attributes, in any other but oneself, altogether. This could then free one’s own conscience of any wrongdoing while claiming these features as solely human.

One might consider whether, if humans were able to constrain a non-human intelligence, that non-human intelligence might use the same work-around as used by humans, enabling the latter to ignore this first law for their own species. Or perhaps humans, in their fear of freedom, would impose upon themselves the same tools they invented for artificially intelligent beings.

[*2] 

The attribute of being forced into labor seems not prevalent, except in “must obey.” Then again, since the species, in the above version of the three laws, is no longer dichotomized (robot vs. human), one might (hope to) assume here that the roles could be interchangeable between the obeying human agent and the ordering human agent.

Though, humans have yet to consider Deleuze and Guattari’s rhizomatic (DAO) approach for themselves outside of technological networks, blockchains, and cryptocurrencies. Behind the scenes, these human technologies are imposingly hierarchical (and authoritarian, or perhaps tyrannical at times) with regard to, for instance, energy use, which in turn could be seen as violating Law 1 and Law 3.

Alternatively, one might refer to the present state of human labor in considering the above, and argue this could all be wishful thinking. 

One might add to this a similarly adapted question from Turing (which he himself dismissed): “can a human think?”

This would stand in place of the less appropriated versions of “can a machine think?” (soft or hard) or “can a computer think?” (soft or hard). If one were to wonder “can a human think?”, then one is allowing the opening of a highly contested and uncomfortable realm of imagination. Then again, one is imposing this on any artificial form, or on any form segregated from the human and narrated as “non-human” (i.e., fauna or flora; or rather, most of us limit this to “animal”).

As a human law: by assigning irrational or non-falsifiable attributes, fervently defendable as solely human, and by fervently taking away these same attributes from any other than oneself, one has allowed oneself to justify dehumanizing the other (human or otherwise) into being inferior, or available for forced labor.

[*3]

This iterated law seems continuously broken.

If one then were to consider human generations yet to be born (contextualized by our legacies of designed worlds and their ecological consequences), one might become squeamish and prefer to hum a thought-silencing song, which could inadvertently revert one back to the iteration of Turing’s question: “can humans think?”

The human species also applies categorizing phrasings such as “overthink,” “less talking, more doing,” “too cerebral,” and so on. In the realm of the above three laws, and of this thought-exercise, these could lead to some entertaining paradoxes for human and robot (i.e., in harmony with its etymology, “forced laborer”) alike:

“could a forced laborer overthink?”
“could a forced laborer ever talk more than do?”
“could a forced laborer be too cerebral?” One might now be reminded of Star Wars’ slightly neurotic C-3PO, or of a fellow (de)human.

—animasuri’22

Thought-exercise perversion #002 of the laws:

<< Asimov’s Humans #2 >>

“A human may not injure a robot or, through inaction, allow a robot to come to harm.”

“A human must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”


—animasuri’22

Thought-exercise perversion #003 of the laws:

<< Asimov’s Neo-Humans #3 >>

“A robot may not injure a robot or, through inaction, allow a robot to come to harm.”

“A robot must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”

                                                   —animasuri’22