<< Human-made Outlier Synthesis >>


Some hype and others lament “democratization” (Turing Institute & Seger et al. 2023) and “democracy.” Even though the two share a linguistic root, the processes toward the former can deconstruct the latter. This possibility does not lie in the meaning of the individual words. Nor does it lie in the statistical probability of their appearance in each other’s proximity. It might partly lie in the lived relationships, triangulated with human interest, incentive, aspiration and application. Or, more pungently, in the absences thereof. Here is an interpretive narrative as illustration:

The recent report by the CAIDP on Artificial Intelligence and democratic values offers a stage. This publication is an impressive work that heralds policies, frameworks and protections. It echoes the statement that “Policymakers want an enabling policy environment while mitigating the risks of AI language models” (OECD 2023).

There is a promise of innovation while enabling the commoner to remain free from…?

Political enablements and constraints imply technological enablements and constraints, while the latter, more dominantly, imply market influences. Both affect social enablements and constraints.

They affect how the demos (“we”) moves (i.e., agency) and is moved (i.e., “our” wanted or unwitting delegation) within the human polis (i.e., the metaphorical dwelled city). The mapping of polis and tékhnē is here not a mapping of a linear, one-to-one nature. That is almost so tautological that it is as comforting as listening to, and being reminded of, the decompositions of Beethoven’s hair and of sound into his music, again and again. (Thank you, Dr. Walter Sepp Aigner.)

Simultaneously, these enablements and mitigations create narratives that nurture outliers and entrenchments that are not, and should not (at all cost?), be ignored or flippantly dismissed (and we are not only thinking of jellyfish, rhinos or swans of bland monochrome coloring). (Day One Futures) Counter to some scientific and engineering needs, we should not always ignore or filter away the outliers, either. (Taylor et al. 2016)

It should be noted that not only technologies but also policies have “omni-uses”: multiple uses that users gain access to and have the incentive to tinker with, or pervert away from, the intended or designed usages (Daniel Schmachtenberger; thank you, Liv Boeree). Hence, can protections also be impositions? Yes.

Neither is amplification to be equated with taking note of these outliers. Nor is the note-taking immediately a representation of the note-taker’s outlying, emotional state of mind. Note-taking can be rhetorically embellished or muted while having been authored in a state of rational, calm mindfulness, all while reminding us of a call to innovation: to address systemic issues that are ignored or relabeled as if they were merely technical outliers or socio-inevitable issues alone; “that’s how it’s always been done.”

And yet, I am utterly excited to be alive during these times. All this while I can also think, breathe, reflect and consider the tails of the curve-balls we throw into our underbellies. It does not have to make one despair, nor imagine having reached Nirvana. It could be perceived that such dichotomization and polarization in thinking and working with policy and technology is an act of outlier-creation. In effect, this type of creation might mute urgencies related to social minorities, and to socially subdued or pressured voices. Almost paradoxically, these voices are then narrated (by opposing voices) as too extreme and uncomfortable (i.e., as “outliers”), away from a fairness of reparation, and toward a governed solidification of their further biased formatting through models and automation.

These same outliers, and the omni-uses of technology, of information, and of policy, are affecting social relations. They land as curve-balls into the outlying edges of our playing-field, hitting those of us who are less empowered, those of us who are maintained in iterations of stereotyping and yet more of the same: the antithesis of (social & relational) exciting forms of innovation (Forbes; thank you, Prof. Darius Burschka, for having pointed at this). Is this antithetical innovation a Wizard of Oz experiment in the wild, toward large-scale social-engineering manipulation?

This seems like the application of an “experiment” which might be seen as an outlier, yet at large scale and with serious affective impact. Just like some historic “outliers” (read: biases and bigotries) which are downplayed as ignorable. As if beating a dead horse, the ignored outliers are then further drowned out by some outliers off the scale of sanity. The insane is given technological form, as well as vast policy-making attention, with reduced understanding of both the tech and the socio-culturally maintained outliers, who could and should find their place in the middle.

Reflection on the idea of outliers, on the lenses through which to look at them, and on the processes that could be constructing, maintaining or muting them and their omni-uses, is as such not a faux-pause. Some other acts, though, might seem reflective and pausing, yet smell as if they are not. (Financial Times and Business Insider)

For instance, with The Internet Archive (IA) losing its “lending lawsuit…” (Copyright Lately), should we have cracked down on access to credible and validly verifiable sources? This question is here re-placed in the context of opening up the World Wide Web to tsunamis of synthetically-generated content that cannot be corroborated and that is based on sources taken without consent, nor with regard for IPR.

Is the latter at such vast scale, and with such financial backing, that it is too large to notice? (Politico) So large it remains hidden? (EUractiv) Like a flea in a red carpet, not enabled to reflect on its carpet being a carpet?

So, IA: no; “AI”: yes? Are outliers and false dichotomies now at play?

Synthetic “content” entrenches biases and reinforces boring yet harmful stereotypes. (See recent examples by Abeba Birhane via Twitter.) It does the opposite of tackling historic and systemic issues (issues that have synthetically been kept as outliers across the centuries). These could already be addressed socially, policy-wise and technologically. We even have non-techno cognitive tools. (Thank you, Alireza Dehbozorgi, for pointing at this.) And yet they are stalled by incentives and will, mixed with resources & access.

There lies innovation. In contrast, lie-innovation is the stark option we decided to explore feverishly via tech & policy. Here social omni-use is present. Here outliers are reinforced on the scale of the sane.

There is a recent publication entitled “Real World AI Ethics.” Could we now, with digital multiverses, generative AI outputs, & deepfakes, also urge for the nascence of “Fake World AI Ethics,” which could explore outliers that mix fakery with the false labeling of actual issues as “fake,” by ignoring what is lived, right now & right there, under extreme conditions? While information is “neither matter nor energy” (Norbert Wiener; thank you, Prof. Felix Hovsepian, PhD, FIMA, for reminding us), its creative ugly ducklings named ‘Mis-’ & ‘Dis-’ are kicking up lots of dust & partying on vast amounts of energy. (EUractiv)

Innovation is not a one-way ticket to bliss, unless we allow asking: “innovative” to whom, and with what omni-used meaning-making? Repetition and regurgitation are derivative acts. They confuse boredom for innovation and could, if not handled with care, oppose addressing actual needs. This note-taking here could be interpreted as boring as well, as much as re-reading Beethoven could be. Though, neither has to be.

Democratizing “solidarity” (thank you, Michael Robbins) via wanting Beethoven, or via social, relational care into diversities of lived, local and global needs, can symbolize omni-innovation in tech and policy.

Some highlighted references

OECD (2023), “AI language models: Technological, socio-economic and policy considerations”, OECD Digital Economy Papers, No. 352, OECD Publishing, Paris, https://doi.org/10.1787/13d38f92-en.

Seger, Elizabeth, Aviv Ovadya, Ben Garfinkel, Divya Siddarth, and Allan Dafoe. “Democratising AI: Multiple Meanings, Goals, and Methods,” March 27, 2023. https://doi.org/10.48550/arXiv.2303.12642.

Taylor, J. et al. (2016). Alignment for Advanced Machine Learning Systems. Last retrieved on April 15, 2023, from here.