Tag Archives: aipoetics

<< Transition By Equation >>

focus pair: Mechanomorphism | Anthropomorphism

One could engage in the following over-simplifying, dichotomizing and outrageous exercise:

if we were to imagine that our species succeeded in collectively transforming humanity, that is, succeeded in transforming how the species perceives its own ontological being, into one of:

“…we are best defined and relatable through mechanomorphic metaphors, mechanomorphic self-images, mechanomorphic relations and datafying processes,”

At that imaginary point, any anthropomorphism (as an engine for designs or visionary aims) within technologies (and here with particular attention to those associated with the field of “AI”) might be imagined to be(come) empowered, enabled or “easier” to accomplish with mechanomorphized “humans.”

In such imagination, the mechanomorphized human, with its flesh turned powerless and stale, and its frantic fear of frailty, surrenders.

It could be imagined to be “easy,” since the technology (designer) would “simply” have to mimic the (human as) technology itself: machine copies machine to become machine.

Luckily, this is an absurd imagination, as much as Guernica is forgettable as “merely” cubist surrealism.

<< Morpho-Totem >>


Decomposition 1

my hammer is like my_______
my car is like my______
my keyboard is like my______
my coat is like my_____
my watch is like my______
my smart phone is like my______
my artificial neural network is like my______
my ink is like my_______
my mirror is like my________
my sunglasses are like my______
my golden chains are like my_________
my books are like my_________

Decomposition 2

my skin is like a_______
my fingertips are like a_______
my fist is like a_____
my foot is like a_______
my hair is like a_________
my bosom is like a________
my abdominal muscles are like a______
my brain is like a__________
my eyes are like a________
my genitalia are like a______
my dna is like a______
my consciousness is like a______

reference, extending
to the other desired thing
not of relatable life

—animasuri’22

<< One Click To Climbing A Techno Mountain >>


A Rabbi once asked: “Is it the helicopter to the top that satisfies?”

At times, artistic expression is like climbing. It is the journey that matters: the actual experience of the diffusion of sweat and despair, of being taken by the clawing hand of an absent idea about to appear through our extremities into an amalgamation of tool- and destination-media.

The genius lies in the survival of that journey, no, in the rebirth through that unstable, maddening journey and that incisive or unstopping blunt critique of life.

That’s clogs of kitsch as blisters on one’s ego, sifted away by the possible nascence of art, the empty page from the vastness of potential, the noise pressed into a meaning-making form as function.

Artistry: to be spread out along paths, not paved by others. And if delegated to a giant’s shoulder, a backpack or a mule: they are companions, not enslaved shortcuts.

That’s where the calculated haphazardness unveiled the beauty slipping away from the dismissive observer, either through awe or disgust alike, ever waiting for you at your Godot-like top, poking at you

—animasuri’22

<< what’s in a word but disposable reminiscence >>


A suggested (new-ish) word that perhaps could use more exposure is

nonconsensuality

It hints at entropy within human relations and decay in acknowledgement of the other (which one might sense as an active vector coming from compassion). Such acknowledgement is then of the entirety of the other and their becoming through spacetime (and not only limited to their observable physical form or function).

It is, however, secondly, also applicable in thinking when acting with treatment (of the other and their expressions across spacetime), with repurposing, and in one’s relation in the world with that which one intends to claim or repurpose.

Thirdly, this word is perhaps surprisingly also applicable to synthetic tech output. One could think about how one group is presented (more than another) in such output without their consent (to be presented as such). Such output could be an artificially generated visual (or other) that did not exist previously, nor was consent given for the scale at which it could be mechanically reproduced or reiterated into quasi-infinite digital versions.

Fourthly, through such a tech-lens one could relate the word with huge databases compiled & used to create patterns from the unasked-yet-claimed other or at least their (creative, artistic or other more or less desirable) output that is digital or digitized without consideration of the right to be forgotten or not be repurposed ad infinitum.

Fifthly, in nurturing future senses of various cultural references, one could argue that the word could also be applicable to those (alienated) creations of fellow humans who have long passed, and yet who could be offered acknowledgement (as compensation for no longer being able to offer consent) by having their used work referenced (in a metadata file).

As such, I wish I could give an ode to those, or that, which came before me when I prompted a Diffusion Model to generate this visual. However, I cannot. Paradoxically, the machine is hyped to “learn” while it is unilaterally decided that humans are not to learn where their work is used, or where the output following their “prompt” came from. I sense this as a cultural loss: I cannot freely decide to learn where something might have sprouted from. It has been decided for me, without my consent, that I must alienate these pasts, whether or not I want to ignore them.

—-•

#aiethics #aiaesthetics #aicivilization #meaningmaking #rhizomatichumanity

Post scriptum:

Through such a cultural lens, as suggested above, this possible dissonance seems reduced in shared intelligence. To expand that cultural lens into another debated tech: the relation between reference, consent, acknowledgment and application seems as if an antithetical cultural anti-blockchain: severed and diffused.


<< Ubuntu & "(A)I" >>


There seem to be about 881,000 “registered” scholarly “robots.” It seems not that obvious for them to be intelligently understood, and accepted as robots, by the one that rules them all

…perhaps lack of (deep & wide & fluid & relational) understanding could lead to undesirable impositions?

—-•
“Ubuntu & (A)I” | “I am a robot” . digitally edited digital screenshot —animasuri’22

—-•

#ai #ailiteracy #aiethics #totalitarianlogic #wink #ubuntu

<< Asimov’s Humans >>


As an absurd (or surreal-pragmatic, compassion-imbued) thought-exercise, iterated from Asimov’s 1942 Laws of Robotics, let us assume we substitute “robot” —which etymologically can be traced to the Czech for as much as “forced labor”— with “human.” One might then get the following:

  • A human may not injure a human being or, through inaction, allow a human being to come to harm. [*1]
  • A human must obey the orders given them by human beings except where such orders would conflict with the First Law. [*2]
  • A human must protect their own existence as long as such protection does not conflict with the First or Second Laws. [*3]

[*1]

It seems we humans do not adhere to this first law. If humans are not fully enabled to adhere to it, which techniques do, and will, humans put into practice so as to constrain robots (or more or less forced laborers) to do so?

The latter, in the context of these laws, are often implied to harbor forms of intelligence. This, in turn, might obligate one to consider thought, reflection, spirituality, awareness and consciousness as being part of the fuzzy cloud of “intelligence” and “thinking”.

Or, in a conquistadorian swipe, one might deny the existence or importance of these attributes, in any other but oneself, altogether. This could then free one’s own conscience of any wrongdoing, and set apart one’s unique features as solely human.

One might consider that, if humans were able to constrain a non-human intelligence, that non-human intelligence might use the same work-around as used by humans, which enables the latter to ignore this first law for their own species. Or perhaps humans, in their fear of freedom, would superimpose upon themselves the same tools which were invented for the artificially intelligent beings.

[*2] 

The attribute of being forced into labor seems not prevalent, except in “must obey.” Then again, since the species, in the above version of the three laws, is no longer dichotomized (robot vs human), one might (hope to) assume here that the role of the obeying human could be interchangeable with that of the ordering human agent.

Though humans have yet to consider Deleuze’s and Guattari’s rhizomic (DAO) approach for themselves, outside of technological networks, blockchains and cryptocurrencies, which, behind the scenes of these human technologies, are imposingly hierarchical (and authoritarian, or perhaps tyrannical at times) with respect to, for instance, energy use, which in turn could be seen as violating Law 1 and Law 3.

Alternatively, one might refer to the present state of human labor in considering the above, and argue this could all be wishful thinking. 

One might add to this a similarly-adapted question from Turing (which he himself dismissed): “can a human think?”

This would stand instead of the less appropriated versions of “can a machine think?” (soft or hard) or “can a computer think?” (soft or hard). If one were to wonder “can a human think?”, then one is allowing the opening of a highly contested and uncomfortable realm of imagination. Then again, one is imposing this on any artificial form, or on any form that is segregated from the human by being narrated as “non-human” (i.e., fauna or flora, or rather, most of us limit this to “animal”).

As a human law: by assigning irrational or non-falsifiable attributes, fervently defendable as solely human, and by fervently taking away these same attributes from any other than oneself, one has allowed oneself to justify dehumanizing the other (human or otherwise) into being inferior, or available for forced labor.

[*3]

This iterated law seems continuously broken.

If one then were to consider human generations yet to be born (contextualized by our legacies of designed worlds and their ecological consequences), one might become squeamish and prefer to hum a thought-silencing song, which could inadvertently revert one back to the iteration of Turing’s question: “can humans think?”

The human species also applies categorizing phrasing containing “overthink”, “less talking more doing”, “too cerebral,” and so on. In the realm of the above three laws, and of this thought-exercise, these could lead to some entertaining human or robot (i.e., in harmony with its etymology, “forced laborer”) paradoxes alike:

“could a forced laborer overthink?”
“could a forced laborer ever talk more than do?”
“could a forced laborer be too cerebral?” One might now be reminded of Star Wars’ slightly neurotic C-3PO, or of a fellow (de)human.

—animasuri’22

Thought-exercise perversion #002 of the laws:

<< Asimov’s Humans #2 >>

“A human may not injure a robot or, through inaction, allow a robot to come to harm.”

“A human must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”


—animasuri’22

Thought-exercise perversion #003 of the laws:

<< Asimov’s Neo-Humans #3 >>

“A robot may not injure a robot or, through inaction, allow a robot to come to harm.”

“A robot must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”

                                                   —animasuri’22