Tag Archives: ailiteracy

<< The Tàijí Quán of Tékhnē >>


Looking at this title can tickle your fancy, disturb your aesthetic, mesmerize you into mystery, or simply trigger you to want to throw it into the bin, if only your screen were made of waste paper. Perhaps, one day.

<< The Balancing Act of Crafting >>

Engineering is drafting and crafting; and then some. Writing is engineering; at times without a poetic flair.

One, more than the other, is thought to draw more directly on the attributes that the sciences have captured through methodological modeling, observing, and interpreting.

All (over)simplify. The complexities can be introduced when nuancing enters the rhetorical stage, ever more so when juggling with quantitative or qualitative data is enabled.

Nuancing guarantees neither plurality in thought nor diversity in creativity or innovation.

Very easily the demonettes of fallacy, such as false dichotomy, join the dramaturgy as if by deus ex machina, answering the call for justifications in engineering and the sciences. Language: to rule them all.

Then hyperbole joins in on the podium, as if paper flakes dropped down, creating a landscape of distractions for audiences in awe. Convoluting and swirling, as recursions, mirrored in the soundtrack to the play unfolding before our eyes. The playwright, as any good manipulator of drama, hypes, downplays, mongers and mutes. This leaves audiences scratching at shadows while the choreography continues onward and upward. Climax and denouement must follow. Pause and applause will contrast. Curtains will open, close.

<< Mea Culpa>>The realization is that it makes us human. This while our arrogance, hubris or self-righteousness makes us delusionally convinced of our status as Übermensch, only to quickly debase it with a claimed technological upgrade thereof. Any doubt of the good and right of the latter is then swiftly classified as Luddite ranting.<</Mea Culpa>>

While it is hard to express concern or interest without falling into rhetorical traps, fear mongering, as much as hype, is not conducive to the social fabric nor to individual wellbeing.

“Unless we put as much attention on the development of [our own, human] consciousness as on the development of material technology—we will simply extend the reach of our collective insanity… without interior development, healthy exterior development cannot be sustained” — Ken Wilber

—-•
Reference:

Wilber, K. (2000). A Theory of Everything: An Integral Vision for Business, Politics, Science, and Spirituality. Shambhala Publications.

Fromm, E. S. (1956). The Sane Society. “Fromm examines man’s escape into overconformity and the danger of robotism in contemporary industrial society: modern humanity has, he maintains, been alienated from the world of their own creation.” (description @ Amazon)

—-•

#dataliteracy #informationliteracy #sciencematters #engineering #aiethics #wellbeing #dataethics #discourseanalysis #aipoetics

<< Learning is Relational Entertainment; Entertainment is Shared Knowledge; Knowledge is... >>

context: Tangermann, Victor. (Feb 16, 2023). “Microsoft: It’s Your Fault Our AI Is Going Insane. They’re not entirely wrong.” In: Futurism (online). Last retrieved on 23 February 2023 from https://futurism.com/microsoft-your-fault-ai-going-insane

LLM types of technology, and their spin-offs or augmentations, are made accessible in a different context than technologies whose operation requires regulation, training, (re)certification and controlled access.

If the end-user holds the (main) weight of duty-of-care, then such training, certification, regulation and limited access should be put into place. Do we have that, and more importantly: do we really want that?

If we do want that, then how would that be formulated, implemented and enforced? (Think: present-day technologies such as online proctoring, keystroke-recording spyware, Pegasus spyware, Foucault’s Panopticon or the more contextually-pungent “1984”.)

If the end-user is not holding that weight and the manufacturer is, and/or if training, (re)certification, access and user-relatable laws, which could define the “dos-and-don’ts,” are not readily available then… Where is the duty-of-care?

Put this question of (shared) duty-of-care in light of critical analysis and of this company supposedly already knowing in November 2022 of these issues, then again… Where is the duty-of-care? (Ref: https://garymarcus.substack.com/p/what-did-they-know-and-when-did-they?r=drb4o )

Thirdly, put these points in the context of disinformation vs. information when, e.g., comparing statistics as used by an LLM-based product vs. the deliverables to the public by initiatives such as http://gapminder.org, http://ourworldindata.org or http://thedeep.io, to highlight but three instances of a different systematized and methodological approach to the end-user (one can agree or disagree with these; that is another topic).

So, here are two systems which both apply statistics. One system aims at reducing our ignorance; the other at… increasing ignorance (for “entertainment” purposes… sure)? The latter has serious financial backing; the former has…?

Do we as a social collective and market-builders then have our priorities straight? Knowledge is no longer power. Knowledge is submission to “dis-“ packaged as democratized, auto-generating entertainment.

#entertainUS

Epilogue-1:

Questionably “generating” (see above: “auto-generating entertainment”), while arguably standing on the shoulders of others; rather: mimicry, recycling, or verbatim copying without corroboration, reference, ode or attribution. “Stochastic parroting,” as offered by Prof. Dr. Emily M. Bender, Dr. Timnit Gebru et al., is relevant here as well. Thank you, Dr. Walid Saba, for reminding us. (This, and they, perhaps suggest a fourth dimension of lacking duty-of-care.)

Epilogue-2:

to make a case: I ran an inquiry through ChatGPT requesting a list of books on abuses of statistics, and about 50% of the titles did not seem to exist, or are so obscure that no human search could easily reveal them. In addition, a few obvious titles were not offered. I tried to clean the list up and add to it below.
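For those inclined to check such chatbot output themselves, the spot-check described above can be partly automated. What follows is a minimal, hypothetical sketch, not the method actually used here: it queries the public Open Library search API for each suggested title and reports whether the catalog knows of at least one match (absence from one catalog is, of course, only weak evidence that a title was hallucinated).

```python
# A minimal sketch for spot-checking whether a book title suggested by a
# chatbot appears in a public catalog. The Open Library search endpoint
# is real; the helper functions and workflow are illustrative assumptions.
import json
import urllib.parse
import urllib.request

OPENLIBRARY_SEARCH = "https://openlibrary.org/search.json"


def build_query_url(title: str) -> str:
    """Build a search URL for a title query against Open Library."""
    params = urllib.parse.urlencode({"title": title, "limit": 5})
    return f"{OPENLIBRARY_SEARCH}?{params}"


def title_seems_to_exist(title: str) -> bool:
    """Return True if the catalog reports at least one matching record."""
    with urllib.request.urlopen(build_query_url(title), timeout=10) as resp:
        data = json.load(resp)
    return data.get("numFound", 0) > 0
```

Usage would be as simple as calling `title_seems_to_exist("How to Lie with Statistics")` for each entry in the chatbot's list; titles that return no matches are candidates for the "did not seem to exist" pile, pending a human search.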

bibliography:

Baker, L. (2017). Truth, Lies & Statistics: How to Lie with Statistics.

Barker, H. (2020). Lying Numbers: How Maths & Statistics Are Twisted & Abused.

Best, J. (2001). Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. Berkeley, CA: University of California Press.

Best, J. (2004). More Damned Lies and Statistics.

Dilnot, A. (2007). The Tiger That Isn’t.

Ellenberg, J. (2014). How Not to Be Wrong.

Gelman, A., & Nolan, D. (2002). Teaching Statistics: A Bag of Tricks. New York, NY: Oxford University Press.

Huff, D. (1954). How to Lie with Statistics. New York, NY: W. W. Norton & Company.

Levitin, D. (2016). A Field Guide to Lies: Critical Thinking in the Information Age. Dutton.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown.

Rosling, H., Rosling, O., & Rosling Rönnlund, A. (2018). Factfulness: Ten Reasons We’re Wrong About the World – and Why Things Are Better Than You Think. Flatiron Books.

Seethaler, S. (2009). Lies, Damned Lies, and Science: How to Sort Through the Noise Around Global Warming, the Latest Health Claims, and Other Scientific Controversies. Upper Saddle River, NJ: FT Press.

Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. New York, NY: Penguin Press.

Stephens-Davidowitz, S. (2017). Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.

Tufte, E. R. (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

Wheeler, M. (1976). Lies, Damn Lies, and Statistics: The Manipulation of Public Opinion in America.

Ziliak, S. T., & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, MI: University of Michigan Press.


References

this post was triggered by:

https://www.linkedin.com/posts/katja-rausch-67a057134_microsoft-its-your-fault-our-ai-is-going-activity-7034151788802932736-xxM6?utm_source=share&utm_medium=member_desktop

thank you Katja Rausch

and by:

https://www.linkedin.com/posts/marisa-tschopp-0233a026_microsoft-its-your-fault-our-ai-is-going-activity-7034176521183354880-BDB4?utm_source=share&utm_medium=member_desktop

thank you Marisa Tschopp

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922

“Verbatim copying” in the above post’s epilogue was triggered by Dr. Walid Saba ‘s recent post on LinkedIn:

https://www.linkedin.com/posts/walidsaba_did-you-say-generative-ai-generative-ugcPost-7035419233631039488-tct_?utm_source=share&utm_medium=member_ios

This blog post on LinkedIn

<< Morpho-Totem >>


Decomposition 1

my hammer is like my_______
my car is like my______
my keyboard is like my______
my coat is like my_____
my watch is like my______
my smart phone is like my______
my artificial neural network is like my______
my ink is like my_______
my mirror is like my________
my sunglasses are like my______
my golden chains are like my_________
my books are like my_________

Decomposition 2

my skin is like a_______
my fingertips are like a_______
my fist is like a_____
my foot is like a_______
my hair is like a_________
my bosom is like a________
my abdominal muscles are like a______
my brain is like a__________
my eyes are like a________
my genitalia are like a______
my dna is like a______
my consciousness is like a______

reference, extending
to the other desired thing
not of relatable life

—animasuri’22

<< what’s in a word but disposable reminiscence >>


A suggested (new-ish) word that perhaps could use more exposure is

nonconsensuality

It hints at entropy within human relations and decay in acknowledgement of the other (which one might sense as an active vector coming from compassion). Such acknowledgement is then of the entirety of the other and their becoming through spacetime (and not only limited to their observable physical form or function).

It is, however, secondly, also applicable in thinking when acting with treatment (of the other and their expressions across spacetime), with repurposing, and in the relation in the world with that which one intends to claim or repurpose.

Thirdly, this word is perhaps surprisingly also applicable to synthetic tech output. One could think about how one group is presented (more than an other) in such output without their consent (to be presented as such). Such output could be an artificially generated visual (or other) that did not exist previously, nor was allowed the scale at which it could be mechanically reproduced or reiterated into quasi infinite digital versions.

Fourthly, through such a tech-lens one could relate the word to huge databases compiled and used to create patterns from the unasked-yet-claimed other, or at least from their (creative, artistic or otherwise more or less desirable) output that is digital or digitized, without consideration of the right to be forgotten or not to be repurposed ad infinitum.

Fifthly, one could argue, in nurturing future senses of various cultural references, that this could be considered to also apply to those (alienated) creations of fellow humans who have long passed, and yet who could be offered acknowledgement (as compensation for no longer being able to offer consent) by having their used work referenced (in a metadata file).

As such I wish I could give ode to those or that which came before me when I prompted a Diffusion Model to generate this visual. However, I cannot. Paradoxically, the machine is hyped to “learn” while humans are unilaterally decided for: not to learn where their work is used, or where the output following their “prompt” came from. I sense this as a cultural loss: I cannot freely decide to learn where something might have sprouted from. It has been decided for me that I must alienate these pasts, without my consent, whether or not I want to ignore them.

—-•

#aiethics #aiaesthetics #aicivilization #meaningmaking #rhizomatichumanity

Post scriptum:

Through such cultural lens, as suggested above, this possible dissonance seems reduced in shared intelligence. To expand that cultural lens into another debated tech: the relation between reference, consent, acknowledgment and application seems as if an antithetical cultural anti-blockchain: severed and diffused.


<< Boutique Ethic >>

Thinking of what I label as “boutique ethic,” such as AI Ethics, must indeed come with thinking about ethics (cf. here). I think this is not only an assignment for the experts. It is also one for me: the layperson-learner.

Or is it?

Indeed, if seen through more than a techno-centric lens alone, some voices do claim that one should not be bothered with ethics if one does not understand the technology which is confining ethics into a boutique ethic; e.g., “AI.” (See the 2022 UNESCO report on AI curricula in K-12.) I am learning to disagree.

I am not a bystander, passively looking on, and onto my belly button alone. I open acceptance to Noddings’ thought on care (1995, 187): “a carer returns to the cared-for” when, in the most difficult situations, principles fail us (Rossman & Rallis 2010). How are we caring for those affected by the throwing around of the label “AI” (as a hype or as a scarecrow)?

Simultaneously, how are we caring for those affected by the siphoning off of their data, for application, unknown to the affected, of data derived from them and processed in opaque and ambiguous processes? (One could, as one of many anecdotes, summon up the polemics surrounding DuckDuckGo and Microsoft, or Target and baby-product coupons, and so on.)

And yet, let us expand back to the ethics surrounding the boutiqueness of it: the moment I label myself (or another, such as the humans behind DuckDuckGo) as “stupid,” “monster,” “trash,” “inferior,” “weird,” “abnormal,” “you go to hell” or other more colorful itemizations, is the moment my (self-)care evaporates and my ethical compass moves away from the “…unconditional worth of all human beings and the equal respect to which they are entitled” (Rossman & Rallis 2010). Can a mantra then come to the aid: “carer, return to the cared-for”? I want to say: “yes.”

Though, what is the impact of the mantra if the other does not apply it (i.e., DuckDuckGo and Microsoft)? And yet, I do not want to get into a yo-yo “Spiel” of:
Speaker 1: “you first,”
Speaker 2: “no, you first,”
Speaker 1: “no, really, you first.”
Here a mantra of “lead by example, and do not throw the first or n-th stone” might be applicable? Is this then implying self-censorship and laissez-faire? No.

I can point at DuckDuckGo and Microsoft as an anecdote, and I think I can learn via ethics, into boutique ethics, what this could mean through various (ethical and other) lenses (to me, to others, to them, to it) while respecting the act of the carer. Through that lens I might wonder what drove these businesses to this condition and use that as a next stepping stone in a learning process. This thinking would take me out of the boutique and into the larger market, and even the larger human community.

The latter is what I base on what some refer to as the “ethic of individual rights and responsibilities” (ibid.). It is my responsibility to learn and ask and wonder. I then assume that an individual who has been debased by a label I threw at them (including myself), such as those offered in the preceding paragraph, is still owed the “respect to which they are entitled” (ibid.). This is a principle assuming that “universal standards exist” (ibid.). And yet, on a daily basis, especially on communal days, and throughout history: I hurdle. After all, we can then play with words: what is respect, and what type of respect are they indeed entitled to?

I want to aim for a starting point of “unconditional” respect, however naive that might seem and however meta-Jesus-esque or Gandhi-esque, Dr. King-esque, or Mandela-esque that would require me to become. Might this perhaps be a left-libertarian stance? Can I “respectfully” throw the first stone? Or does the eruption lie in the metaphorical “throwing of a stone” rather than the physical?

Perhaps there are non-violent responses that are proportional to the infraction. This might come in handy. I can decide to no longer use DuckDuckGo. However, can I decouple from Microsoft without decoupling from my colleagues, family, community? Herein the learning-as-activism might then be found in seeking and promoting alternatives toward a technological ecosystem of diversity with transparency, robustness, explainability and fair interoperability.

“Am I a means to their end,” I might then ask, “or am I an end in myself?” This brings me back to the roles of the carer. Are, in this one anecdotal reference, DuckDuckGo and Microsoft truly caring about their users, or rather about other stakeholders? Through a capitalist lens one might be inclined to answer and be done with it. However, I prefer to keep an openness for the future, to keep on learning and considering additional diversifying scenarios and acts that could lead to equity for more than the happy few.

Through a lens of thinking about the consequences of my actions (which is said to be an opposing ethical stance compared to the above), I sense the outcome of my hurdling is not desirable. However, the introduction of alternatives, or of methods toward understanding of potentials (without imposing), might be. I do not desire to dismiss others (e.g., cast them out, see them punished, blatantly ignore them with the veil of silenced monologue). At times, I too believe that the act of using a label is not inherently right or wrong. So I hurdle, ignorant of the consequence to the other, their contexts, their constraints, their conditions, and ignorant of the cultural vibe or relationships I am then creating. Yes, decomposing a relationship is creating a fragmented composition, as much as non-dialog is dialog by absence. What would be my purpose? It’s a rhetorical question, I can guess.

I am able to consider some of the consequences to others (including myself), though not all. Hence, I want to become (more) caring. The ethical dichotomy between thinking about universals or consequences is decisive in the forming of the boutique ethic. Then again, perhaps these seemingly opposing ethics are falsely positioned in an artificial dichotomy. I tend to intuit so. The holding of opposing thought and dissonance is a harmony that simply asks a bit more effort; one that, to me, is embalmed ever so slightly by the processes of rhizomatic multidimensional learning.

This is why I want to consider boutique ethics while still struggling with being ignorant, yet learning, about types of, and wicked conundrums in, ethics at larger, conflicting and more convoluted scales. So too when considering a technology I am affected by yet ignorant of.

References

Rossman, G. B., & Rallis, S. F. (2010). Everyday ethics: Reflections on practice. International Journal of Qualitative Studies in Education, 23(4), 379–391.

Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley, CA: University of California Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Rossman, G.B., S.F. Rallis. (1998). Learning in the field: An introduction to qualitative research. Thousand Oaks, CA: Sage.

Rossman, G.B., S.F. Rallis. (2003). Learning in the field: An introduction to qualitative research. 2nd ed. Thousand Oaks, CA: Sage.

UNESCO. (2022). K-12 AI curricula-Mapping of government-endorsed AI curriculum.

<< Critique: not as a Worry nor Dismissal, but as Co-creative Collective Path-maker>>


In exploring this statement, I wish to take the opportunity to focus on, extrapolate and perhaps contextualize the word “worry” a bit here.

I sense “worry” touches on an important human process of urgency.

What if… we were to consider who might/could be “worried”, and, when “worry” is confused or used as a distracting label. Could this give any interesting insight into our human mental models and processes (not of those who do the worrying but rather of those using the label)?

The term might unwittingly end up as a tool for confusion or distraction (or hype). I seem to notice that “worry,” “opposition,” “reflection,” “anxiety,” “critical thought-exercises,” and “marketing rhetorics toward product promotion” are too easily confused. [Some examples of convoluted confusions might be (indirectly) hinted at in this article: here or here.]

To me, at least, these above-listed “x”-terms are not experienced as equatable, just yet.

As a species, within which a set of humans claims to be attracted to innovation, we might want to innovate (on) not only externals, or symptoms, but also causes, or inherent attributes of the human interpretational processes and the ability to apply nuances therewith; e.g., is something “worrying,” or is it not (only) “worrying” and perhaps something else / additional that takes higher urgency and/or importance?

I imagine that in learning these distinctions, we might actually “innovate”.

Engaging in a thought-exercise is an exercise toward considering altered, alternative or nuanced potential human pathways toward future action and outcomes, as if exploring locational potentials: “there-1” rather than “there-2” or “there-n;” and that rather than an invitation for another to utter: “don’t worry.”

If so, critical thought might not need to be a subscription to “worry,” nor the “dismissal” of one scenario, one technology, one process, one ideology, etc., over the other. [*1]

Then again, from a user’s point of view, I dare venture that the use of the word “worry” (as in “I worry that…”) might not necessarily be a measurable representation of any “actual” state of one’s psychology. That is, an observable behavior or an interpreted (existence of an) emotion has been said to be no guaranteed representation of the mental models or processes of they who are observed (as worrying). [a hint is offered here or here]

Hence, “worry” could be, or at times seemingly is, used as a rhetorical tool from the toolboxes of ethos, pathos or logos, and not as an externalization of one’s actual emotional state in that ephemeral moment.

footnote
—-•
[*1]

Herein, in these distinctions, just perhaps, might lie a practical exercise of “democracy.”

If critical thought, rhetoric, anxiety and opposition are piled and ambiguously mixed together, then one might be inclined to self-censor due to the mere sense of overwhelming confusion of not being sure whether one is perceived as dealing with one over, or instead of, the other.

<< Demons and Demos >>


The New Yorker and NSO in some glorious spy-novel context here

…and further, as a cherry on this cake, one might quickly conjure up Cambridge Analytica, or, singularly, Facebook with its clandestine 50,000 or so datapoints per milked data-cow (aka what I also lovingly refer to as humans as datacyborgs), which the company’s systems are said to distill through data collection. Yes, arguably the singularity is already here.

Then, more recently, one can enjoy the application by a facial-recognition service, Clearview AI, that uses its data mining to identify (or read: “spy on”) dead individuals; a service which might seem very commendable (even for individuals with no personal social media accounts, one simply has to appear in someone else’s visual material); and yet the tech has been applied for more.

The contextualization might aid one to have the narrative amount to:

“Alienation,” and that, if one were to wish, could be extended with the idea of the “uncanny” hinted at in my datacyborg poetics. “Alienation” here is meant somewhat as in the social sciences: the act of lifting the intended use of one’s data outside of that intended use, by a third party. The questionable act of “alienation” is very much ignored or quietly accepted (since some confuse “public posting” with a “free-for-all”).

What personally disturbs me is that the above manner of writing makes me feel like a neurotic conspiratorial excuse of a person… one might then self-censor a bit more, just to not upset the balance with any demonizing push-back (after all, what is one’s sound, educated and rational “demos” anyway?). This one might do while others, in the shadows of our silently-extracted data, throw any censorship, in support of the hidden self (of the other), out of the proverbial window.

To contextualize this further: related to memory, one might also wish to consider the right to be forgotten besides the right to privacy. These above-mentioned actors, among a dozen others, rip this autonomous decision-making out of our hands. If one were then to consider ethics mapped with the lack of autonomy, one could be shiveringly delighted not to have to buy a ticket to a horror-spy movie, since we can all enjoy such narratives for “free” and in “real” life.

Thank you Dr. WSA for the trigger


Epilogue:

“Traditionally, technology development has typically revolved around the functionality, usability, efficiency and reliability of technologies. However, AI technology needs a broader discussion on its societal acceptability. It impacts on moral (and political) considerations. It shapes individuals, societies and their environments in a way that has ethical implications.”

https://ethics-of-ai.mooc.fi/chapter-1/4-a-framework-for-ai-ethics

…is ethics perhaps becoming / still as soothing bread for the demos in the games by the gazing all-seeing not-too-proverbial eye?

In extension to my above post (for those who enjoy interpretative poetics):

One might consider that the confusion of a “public posting” being equated with “free for all” (and hence falsely being perceived as forfeiting autonomy, integrity, and the likes), is somewhat analogous with abuses of any “public” commons.

Expanding this critically, and to some perhaps provokingly further, one might also see this confusion with thinking that someone else’s body is touch- or grope-for-all simply because it is “available”.

Now let’s be truly “meta” about it all: one might consider that the human body is digital now. (I.e., my datacyborg as the uber-avatar. Moving this then into the extreme: if I were a datacyborg, then someone else’s extraction beyond my public flaneuring here, in my chosen setting, could poetically be labeled as “datarape.”)

As one might question the ethics of alienatingly ripping the biological cells from Henrietta Lacks, beyond the extraction of her cancer into labs around the world, one might wonder about the ethics of data being ripped and alienated into labs for market experimentation, and the infinite panopticon of data-prying someone’s (unwanted) data immortality.

https://en.m.wikipedia.org/wiki/Henrietta_Lacks