<< A Paper Brick Through the Window >>


A self-portrait, perhaps of that old bearded professor’s book not of the author of the printed words but of the messenger’s own private house of lords and common articles precious and pressured to be meant. Is he who marathons’ a thought the writer of the subtext on the sideline? To then parched then dropped dead of then watering words is staining ink as then a last breath to off a human serial a-synchronization of thoughts in the margin. The grammatically abusive lies of syntactic silent crows picking tongues no longer bespoken of standards of stories of strangulated minds, triangulated with les franges du tapis at the shared and fringes of one’s read, thus transparent, existence. Life, life is funny that way. 

—animasuri’22

perverted note-taking of Ishion Hutchinson, published in the print edition of the September 17, 2018, issue of The New Yorker (Ishion Hutchinson is the author of the poetry collections “Far District” and “House of Lords and Commons”): https://www.newyorker.com/magazine/2018/09/17/the-old-professors-book ; and of the email and emailer Dr. WSA

<< wallOwords >>


Let me pay you with a flipping billion pages daily oily rice paper enwrapping the margins with distracting layers of writing that have never really felt as having left the winding windmills of the undiscussable authors’ mulling Let me pay you homage with marginal funds of fondness paid to you to keep the bordered-off body of verbalized peoples warm and the framing of factories flowing Sounding spices must flow and trothed sand shovels the riverbed deeper flowing its pickled phrasing inland We are the salt flowing in your lake of words madam Shed multiples of my childhoods in bookhood heading upstream: I see you now You row, you crane, you rush ‘n’ you beaconed the beckoning of the youesque

—animasuri’22

Accidental perverted note-taking of Waddell, J. (April 29, 2022). “Sorcery and the apprentice: A bibliophile critic’s powerful ‘shelfie’-portrait.” Retrieved on April 29, 2022 from https://www.the-tls.co.uk/articles/portable-magic-emma-smith-book-review-james-waddell/ ; as triggered by Dr. WSA.

<< Burlesque Bytes >>


Sit. Let me parade the trivialities.
I have to show for this: I was there.
At every backdrop, the world and my stage.
Now I dangle my framed locality before you; geotagged.

Poke. ceiling-suspended dead disbandaged digital body. I shaped thee as horse: mare lean meat. For pleasure of gaze and pedestrian highbrow. Imagine she told me she loved me; my selfie.

Cockle. I am man if I am tweeted. Ever so minusculed masculine, curling up in a drip of hundred and forty characters: all my own. I pretend to be loud and cocky. I rule my world pretentiously, accepting all cookies.

Pose. The same places as mini elevators. I’ll call the determining moment of pitch and parleys. Myself sold as the ultimate fair use of slipped-in foreign language, into the vagueness of my higher glossed numerical success.

Pretend. It is a space of iterative self-reflective surfaces. Life is glitter and shiny skin with blurred out imperfections. Innovated so I am no longer to become. I have a profile. I must be proud as plastic surgery ever unfinished.

It’s all me.
I was here

—animasuri’22

<< My Data’s Data Culture >>


As Lawrence Lessig described far more eloquently, more than 15 years ago, I too sense that an open or free culture, and design therein, might be constrained or conditioned by technology, policy, community and market vectors.

I perceived Lessig’s work then to have been focused on who controls your cultural artifacts. These artifacts, I sense, could arguably be understood as types of (in)tangible data sets given meaningful or semiotic form as co-creative learning artifacts (by you and/or others).

I imagine, for instance, “Mickey Mouse” as a data set (perhaps extended, as a cognitive net, well beyond the character?). Mickey, or any other artifact of your choosing, aids one to learn about one’s cultural narratives and, as extended cognition, in positive feedback loops, about oneself in communicative co-creation with the other (who is engaged in similar interactions with this and other data sets). However, engaging with a Mickey meant / means risking persecution under IPR (I wrote on this through an artistic lens here).

Today, such data sets for one’s artificial learning (i.e., learning through a human-made artifact) are (also) we ourselves. We are data. Provocatively: we are (made) artificial by the artificial. Tomorrow’s new psychoanalyst-teacher could very well be your friendly neighborhood autonomous data visualizer; or so I imagine.

Mapping Lessig onto the article below, and onto many of the sources one could find (e.g., Jason Silva, Kevin Kelly, Mark Sprevak, Stuart Russell, Kurzweil, Yuval Noah Harari, Kaśka Porayska-Pomsta), I am enabled to ponder:

Whom do the visualizations serve? Whose privacy and preferences do they interfere with? Whose data is alienated beyond the context within which its use was intended? Who owns (or holds the IPR on) the data learned from the data I create during my co-creative cultural learning (e.g., online social networking and self-exhibition, as well as more formal online learning contexts), allowing third parties to learn more about me than I am given access to learn about myself?

Moreover, unlike those who own Mickey, who among us can sue the users of our data, or of the artifacts appropriated therefrom, as if it were (and actually is) our own IPR?

Given the spirit of artificial intelligence in education (AIED), I felt that the following article, published these past days on such data use, algorithmically processed in ethically questionable or less-than-open manners, could resonate with others as well. (ethics, aiethics)

Epilogue — A quote:

“The FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices. Forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.”

References

https://www-protocol-com.cdn.ampproject.org/c/s/www.protocol.com/amp/ftc-algorithm-destroy-data-privacy-2656932186

Lessig’s last speech on free culture: here

Lessig’s Free Culture book: here

<< Mimetica >>


“Language is a virus from outer space”
one line snorted after the other
it fuzzies the brain
virally jumping sane to insane
Language structured universally
leaves the techno hungry mind in daze

content and consciousness
as slingshot and hand
as hand, eye and brain
as rock against the temple
heading bleeding by damnation
Language oh languaged
crumblunteously vain

its desert is just
deserted by architectured will
a dessert, a chill pill a discipline
of tempo, meter and grammar

Language is nonduality built-in
inevitability present more when muted
aphasia’ed with clot, blood, or shot
Language becomes outspoken
in absentia of open, deep space

Silent, child, silent child, silence is your hand
dropping the stone floor wall
in watery lands of concrete reflection
and meadows of the unspoken,
whistles the signaling sparrow

—animasuri’22

perverted note-taking of a hit of William Burroughs and an alluded-to Dennett, a touch of Chomsky, and sprinkled severely with some Jos de Mul via De Groene Amsterdammer, https://www.groene.nl/artikel/mutaties-in-onze-geest , as often trigger-mused by Dr. WSA

<< Demons and Demos >>


The New Yorker and NSO in some glorious spy-novel context here

…and further, as a cherry on this cake, one might quickly conjure up Cambridge Analytica or, singularly, Facebook with its clandestine 50,000+ or so data points per milked data-cow (a.k.a. what I also lovingly refer to as humans-as-datacyborgs), which the company’s systems are said to distill through data collection. Yes, arguably the singularity is already here.

Then, more recently, one can enjoy the application of a facial recognition service, Clearview AI, which uses its data mining to identify (read: “spy on”) dead individuals; a service which might seem very commendable, and one from which even individuals with no personal social media accounts are not exempt (one simply has to appear in someone else’s visual material); and yet the tech has been applied for more.

The contextualization might aid one to have the narrative amount to:

“Alienation,” which, if one were to wish, could be extended with the idea of the “uncanny” hinted at in my datacyborg poetics. “Alienation” here is meant somewhat as it is in the social sciences: the act of lifting one’s data outside of its intended use, by a third party. The questionable act of “alienation” is very much ignored or quietly accepted (since some confuse “public posting” with a “free-for-all”).

What personally disturbs me is that the above manner of writing makes me feel like a neurotic, conspiratorial excuse of a person… one might then self-censor a bit more, just so as not to upset the balance with any demonizing push-back (after all, what is one’s sound, educated and rational “demos” anyway?). This one might do while others, in the shadows of our silently-extracted data, throw any censorship, in support of the hidden self (of the other), out of the proverbial window.

To contextualise this further: related to memory, one might also wish to consider the right to be forgotten besides the right to privacy. The above-mentioned actors, among a dozen others, rip this autonomous decision-making out of our hands. If one were then to consider ethics mapped against this lack of autonomy, one could be shiveringly delighted not to have to buy a ticket to a horror-spy movie, since we can all enjoy such narratives for “free” and in “real” life.

Thank you, Dr. WSA, for the trigger


Epilogue:

“Traditionally, technology development has typically revolved around the functionality, usability, efficiency and reliability of technologies. However, AI technology needs a broader discussion on its societal acceptability. It impacts on moral (and political) considerations. It shapes individuals, societies and their environments in a way that has ethical implications.”

https://ethics-of-ai.mooc.fi/chapter-1/4-a-framework-for-ai-ethics

…is ethics perhaps becoming / still as soothing bread for the demos in the games by the gazing all-seeing not-too-proverbial eye?

As an extension to my above post (for those who enjoy interpretative poetics):

One might consider that the confusion of a “public posting” being equated with a “free-for-all” (and hence falsely being perceived as forfeiting autonomy, integrity, and the like) is somewhat analogous to abuses of any “public” commons.

Expanding this critically, and to some perhaps provokingly further: one might also see this confusion in thinking that someone else’s body is touch- or grope-for-all simply because it is “available”.

Now let’s be truly “meta” about it all: one might consider that the human body is digital now. (I.e., my datacyborg as the uber-avatar. Moving this then into the extreme: if I were a datacyborg, then someone else’s extraction beyond my public flaneuring here, in my chosen setting, could poetically be labeled as “datarape.”)

As one might question the ethics of alienatingly ripping the biological cells from Henrietta Lacks, beyond the extraction of her cancer, into labs around the world, one might wonder about the ethics of data being ripped and alienated into labs for market experimentation, and about the infinite panopticon of data-prying into someone’s (unwanted) data immortality.

https://en.m.wikipedia.org/wiki/Henrietta_Lacks

<< Asimov’s Humans >>


As an absurd (or surreal-pragmatic, compassion-imbued) thought-exercise, iterated from Asimov’s 1942 Laws of Robotics, let us assume we substitute “robot” —the latter of which can etymologically be traced to the Czech to mean as much as “forced labor”— with “human”; one might then get the following:

  • A human may not injure a human being or, through inaction, allow a human being to come to harm. [*1]
  • A human must obey the orders given them by human beings except where such orders would conflict with the First Law. [*2]
  • A human must protect their own existence as long as such protection does not conflict with the First or Second Laws. [*3]

[*1]

It seems we humans do not adhere to this first law. If humans are not fully enabled to adhere to it, which techniques do and will humans put into practice so as to constrain robots (i.e., more or less forced laborers) to do so?

The latter, in the contexts of these laws, are often implied as harboring forms of intelligence. This, in turn, might obligate one to consider thought, reflection, spirituality, awareness and consciousness as being part of the fuzzy cloud of “intelligence” and “thinking”.

Or, in a conquistadorian swipe, one might deny the existence or importance of these attributes, in any other but oneself, altogether. This could then free one’s own conscience of any wrongdoing, while claiming these unique features as solely human.

One might consider that, if humans were able to constrain a non-human intelligence, perhaps that non-human intelligence might use the same work-around as used by humans, which enables the latter to ignore this first law for their own species. Or perhaps humans, in their fear of freedom, would superimpose upon themselves the same tools invented for the artificially intelligent beings.

[*2] 

The attribute of being forced into labor seems not prevalent, except in “must obey.” Then again, since the species, in the above version of the three laws, is no longer dichotomized (robot vs. human), one might (hope to) assume here that the role of the obeying human could be interchangeable with that of the ordering human agent.

Though humans have yet to consider Deleuze’s and Guattari’s rhizomic (DAO) approach for themselves, outside of technological networks, blockchains and cryptocurrencies, which, behind the scenes of these human technologies, are imposingly hierarchical (and authoritarian, or perhaps tyrannical at times) upon, for instance, energy use; which in turn could be seen as violating Law 1 and Law 3.

Alternatively, one might refer to the present state of human labor in considering the above, and argue this could all be wishful thinking. 

One might add to this a similarly adapted question from Turing (which he himself dismissed): “can a human think?”

This would come instead of the less appropriated versions of “can a machine think?” (soft or hard) or “can a computer think?” (soft or hard). If one were to wonder “can a human think?”, then one is allowing the opening of a highly contested and uncomfortable realm of imagination. Then again, one is imposing this on any artificial form, or any form that is segregated from the human and narrated as “non-human” (i.e., fauna or flora; or rather, most of us limit this to “animal”).

As a human law: by assigning irrational or non-falsifiable attributes, fervently defendable as solely human, and by fervently taking away these same attributes from any other than oneself, one has allowed oneself to justify dehumanizing the other (human or otherwise) into being inferior or available for forced labor.

[*3]

This iterated law seems continuously broken.

If one then were to consider human generations yet to be born (contextualized by our legacies of designed worlds and their ecological consequences), one might become squeamish and prefer to hum a thought-silencing song, which could inadvertently revert one back to the iteration of Turing’s question: “can humans think?”

The human species also applies categorizing phrasings such as “overthink”, “less talking, more doing”, “too cerebral,” and so on. In the realm of the above three laws and this thought-exercise, these could lead to some entertaining human and robot (i.e., in harmony with its etymology, “forced laborer”) paradoxes alike:

“could a forced laborer overthink?”
“could a forced laborer ever talk more than do?”
“could a forced laborer be too cerebral?” One might now be reminded of Star Wars’ slightly neurotic C-3PO, or of a fellow (de)human.

—animasuri’22

Thought-exercise perversion #002 of the laws:

<< Asimov’s Humans #2 >>

“A human may not injure a robot or, through inaction, allow a robot to come to harm.”

“A human must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”


—animasuri’22

Thought-exercise perversion #003 of the laws:

<< Asimov’s Neo-Humans #3 >>

“A robot may not injure a robot or, through inaction, allow a robot to come to harm.”

“A robot must obey the orders given them by robots except where such orders would conflict with the First Law.”

“A robot must protect their own existence as long as such protection does not conflict with the First or Second Laws.”

                                                   —animasuri’22 
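
For readers who enjoy seeing the mechanics laid bare: each of the above perversions is, at bottom, a substitution map applied over Asimov’s 1942 laws. Below is a minimal, purely illustrative Python sketch of perversions #001 and #003 as literal string substitutions; the function name and data structure are mine, not Asimov’s:

```python
# Asimov's 1942 laws, verbatim, as the substrate of the thought-exercises above.
ASIMOV_1942 = [
    "A robot may not injure a human being or, through inaction, "
    "allow a human being to come to harm.",
    "A robot must obey the orders given them by human beings except "
    "where such orders would conflict with the First Law.",
    "A robot must protect their own existence as long as such protection "
    "does not conflict with the First or Second Laws.",
]

def pervert(laws, mapping):
    """Apply each (old -> new) substitution, in order, to every law."""
    rewritten = []
    for law in laws:
        for old, new in mapping.items():
            law = law.replace(old, new)
        rewritten.append(law)
    return rewritten

# Perversion #001 (<< Asimov's Humans >>): every "robot" becomes a "human".
for law in pervert(ASIMOV_1942, {"robot": "human"}):
    print(law)

# Perversion #003 (<< Asimov's Neo-Humans #3 >>): every "human being" becomes
# a "robot" ("human beings" becomes "robots", as the trailing "s" survives).
for law in pervert(ASIMOV_1942, {"human being": "robot"}):
    print(law)

# Perversion #002 mixes the two maps: the first two laws receive both
# substitutions, while the third stays as Asimov wrote it.
```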

<< Babbelen >>


Brabbelende barbaren baren
vervallend veinzen vergane vrijheden

Bijvoorbeeld bij voor beeld
Als afgoderij van stammen en goud

Babel, ballen en bulten
Is een oceaan slimmer dan de klimmende vis?

[In English: Babbling barbarians bear / the decaying feigning of perished freedoms // For example: by, fore, image / As idolatry of tribes and gold // Babel, balls and bumps / Is an ocean smarter than the climbing fish?]

—animasuri’22

in play with only the title as an objet trouvé: “WHY THE PAST 10 YEARS OF AMERICAN LIFE HAVE BEEN UNIQUELY STUPID. It’s not just a phase.”
By Jonathan Haidt on April 11, 2022, in The Atlantic. Triggered by Dr. WSA, while hinting at John Surowiecki’s Wallace Stevens’s.

<< Asphalt Spring >>


They foregather in murmuration
not of their own doing
in a perception of movement
of a species they are not

Petals to spring introduced
foregone themselves further flowering
on asphalt surfaces became their hard garden

and gutters for sidewalks
lay down their last will as translated by mechanical winds of cars and cyclists alike

little whites reforming
corolla crowning blankets
around productive organization

of morning peoples of whom
earlier one whirl-pooled
last night’s meal

at the roots of mama tree.

—animasuri’22

With a minute wink, as a white petal,
to Wallace Stevens
as introduced by Dr. WSA

<< The Inscrutable Human(made) Algorithms >>


“There is consensus that avoiding inscrutable [here meant as the opposite of explainable] algorithms is a praiseworthy goal. We believe there are legal and ethical reasons to be extremely suspicious of a future where decisions are made that affect, or even determine, human well-being, and yet no human can know anything about how those decisions are made. The kind of future that is most compatible with human well-being is one where algorithmic discoveries support human decision making instead of actually replacing human decision making. This is the difference between using artificial intelligence as a tool to make our decisions and policies more accurate, intelligent and humane, and on the other hand using AI as a crutch that does our thinking for us.” (Colaner 2019)

If one took this offered opportunity for contextualization and were to substitute “algorithms” —here implied as those used in digital computing— by *algorithms* of human computing [*1], one could have interesting reflections about our human endeavor in past, present and future: indeed, folks, “…decisions are [and have been] made that affect or even determine human well-being, and yet no human can know anything about how those decisions are made…” Can you think of anything in our species’ past and present that has been offering just that? 

“Our” is not just you or me. It is any sample, or group fitted with a feature, or the entire population, and it is life, to which this entirety belongs. More or less benevolent characters, out of reach of the common mind-set yet reaching outward to set minds straight, have been part of the human narrative since well before the first cave paintings we have rediscovered, until now. As, and beyond what, UNESCO recently suggested for K-12 AI curricula:

“Contextualizing data: Encourage learners to investigate who created the dataset, how the data was collected, and what the limitations of the dataset are. This may involve choosing datasets that are relevant to learners’ lives, are low-dimensional and are ‘messy’ (i.e. not cleaned or neatly categorizable).” (UNESCO 2022, p. 14)
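
As an illustrative aside, a learner’s first pass at such contextualization might look like the minimal sketch below (Python with pandas; the file name and columns are hypothetical, and the questions of who created the data and how it was collected deliberately remain comments, since no dataframe can answer them):

```python
# A first pass at "contextualizing data" (UNESCO 2022): how big, how
# dimensional, and how "messy" is this dataset? The file name is hypothetical.
import pandas as pd

df = pd.read_csv("local_bird_sightings.csv")

# Size and dimensionality (low-dimensional data is friendlier for learners).
print(f"{df.shape[0]} rows x {df.shape[1]} columns")
print(df.dtypes)

# Messiness: gaps, duplicates, and inconsistently spelled categories.
print("Missing values per column:")
print(df.isna().sum())
print("Duplicate rows:", df.duplicated().sum())
for col in df.select_dtypes(include="object").columns:
    raw = df[col].nunique()
    normalized = df[col].str.strip().str.lower().nunique()
    if raw != normalized:
        print(f"'{col}': {raw} raw vs {normalized} normalized categories")

# The limitations the quote points to cannot be computed, only investigated:
# Who created this dataset? How was it collected? Who or what is missing from it?
```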

By enfranchising the ethics of AI, we might want to consider that this endeavor might, in parallel with its acknowledged urgency and importance, perhaps distract from our human inscrutable algorithms.

For instance, should I be concerned that I am blinding myself with the bling of reflective externalities (e.g., a mobile cognitive extension), muffling pause and self-reflection even further, beyond those human acts that already muffle human discernment?

The intertwined consideration of technology might best come with the continued consideration of the human plight; and any (life) form as well as their environment.

Could we improve how we each relate to each other via our mediations such as processes labeled as artificially intelligent? Various voices through various lenses seem to think so. 

Perhaps the words narrated by Satya Nadella, CEO of Microsoft, while referring to John Markoff, suggest similar considerations, which might find a place at the table of the statisticians of future narratives:  

“I would argue that perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology. In his book Machines of Loving Grace, John Markoff writes, ‘The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.’ It’s an intriguing question, and one that our industry must discuss and answer together.” (Nadella via Slate.com 2016)

Through such a contextualizing lens, the issue is provokingly unrelated to silicon-based artificialities (alone), in that it is embedded in human relations, predating what we today herald as smart or synthetic hints toward intelligence.

Might the answers toward AI ethics hence lie in not ignoring the human processes that create both AI’s and ethics’ narratives? Both are created by actors ensuring that “decisions are made that affect, or even determine, human well-being.” That said, we might question whether dialog or human discernment is sufficiently accessible to a majority struggling with the “algorithms” imposed on their daily unpolished lives. Too many lives have too little physical and mental breathing space to obtain the mental models (and algorithms), let alone to reflect on their past, their present and, surely less, their opaqued futures as told by others, both in tech and in ethics.

One (e.g., tech) needs all the more to accompany the other (e.g., ethics), in a complex-systems thinking, acting and reflecting that can also be transcoded to other participants (e.g., any human life anywhere). Our human systems (of more or less smartness) might need a transformation that allows this to the smallest or weakest or least represented denominator, which cannot be CEOs or academics or experts alone. Neither should these be our “crutches,” as the above quote suggests for AI.

You and I can continue with our proverbial “stutters” of shared (in)competence and yet, with openness, question, listen and learn:

“Is my concern, leading my acts, in turn improving our human relations? Can I innovate my personal architecture right now, right here, while also considering how it can architect the technologies that are hyped to delegate smartness ubiquitously? How can I participate in better design? What do I want to delegate to civilization’s architectures, be they of the latest hype or of past designs? Can you help me ask ‘better’ questions?”

…and much better dialog that promotes, to name but a few: explainability (within, around and beyond AI and ethics), transparency (within, around and beyond AI and ethics), perspectives, a low barrier to entry, and contextualization (of data and beyond). (UNESCO 2022, p. 14)


[*1] e.g., mental models and human narratives: how we relate to our selves and others in the world; how we offer strategies and constraints in thinking and acting (of others), creating assumed intent, purpose or aim, yet possibly, at times, lacking intent, and loosely driven by a lack of self-reflective abilities; in turn delegated to hierarchically higher powers of human or über-human forms, in our fear of freedom (Fromm 1942)

References

Bergen, B. (2012). Louder than Words: The New Science of how the Mind makes Meaning. New York: Basic Books

Fromm, E. (1942). The Fear of Freedom. UK: Routledge.

Miao, F., UNESCO, et al. (2022). K-12 AI Curricula: A Mapping of Government-endorsed AI Curricula. Online: UNESCO. Retrieved from https://www.researchgate.net/profile/Fengchun-Miao/publication/358982206_K-12_AI_curricula-Mapping_of_government-endorsed_AI_curriculum/links/6220d1de19d1945aced2e229/K-12-AI-curricula-Mapping-of-government-endorsed-AI-curriculum.pdf and https://unesdoc.unesco.org/ark:/48223/pf0000380602

Jordan, M.I. via Kathy Pretz (31 March 2021). Stop Calling Everything AI, Machine-Learning Pioneer Says: Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent. Online: IEEE Spectrum. Retrieved from https://spectrum-ieee-org.cdn.ampproject.org/c/s/spectrum.ieee.org/amp/stop-calling-everything-ai-machinelearning-pioneer-says-2652904044

Markoff, J. (2015). Machines of Loving Grace. London: Harper Collins

Nadella, S. (June 28, 2016, 2:00 PM). Microsoft’s CEO Explores How Humans and A.I. Can Solve Society’s Challenges, Together. Online: Slate. Retrieved from https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html

Olsen, B., Lasprogata, E., & Colaner, N. (8 April 2019). Ethics and Law in Data Analytics. Online videos: LinkedIn Learning: Microsoft Professional Program. Retrieved on April 12, 2022 from https://www.linkedin.com/learning/ethics-and-law-in-data-analytics/negligence-law-and-analytics?trk=share_ios_video_learning&shareId=tGkwyaLOStm9VK/kXii9Yw==

Weinberger, D. (April 18, 2017). Our Machines Now Have Knowledge We’ll Never Understand. Online: Wired. Retrieved from https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/

…and the irony of it all:

This is then translated by a Bot …: https://twitter.com/ethicsbot_/status/1514049150018539520?s=21&t=-xS7r4VMmBqTl6QidyMtng

Header visual: animasuri’22; a visual poem of do’s and don’ts

This post on LinkedIn: https://www.linkedin.com/posts/janhauters_the-inscrutable-humanmade-algorithms-activity-6919812255173795840-_aIP?utm_source=linkedin_share&utm_medium=ios_app

A kind and appreciated appropriation into Spanish by Eugenio Criado can be found here:

https://www.empatiaeia.com/los-inescrutables-algoritmos-humanos/

and here:

https://www.linkedin.com/posts/eugenio-criado-carbonell-60995258_algoritmos-sesgos-autoconocimiento-activity-6941060183364153344-uCOZ?utm_source=linkedin_share&utm_medium=ios_app