
<< Human-made Outlier Synthesis >>


Some hype and others lament “democratization” (Turing Institute; Seger et al. 2023) and “democracy.” Even though the two share linguistic roots, the processes leading to the former can deconstruct the latter. This possibility does not lie in the meaning of the individual words. Nor does it lie in the statistical probability of their appearing in each other’s proximity. It might partly lie in the lived relationships, triangulated with human interest, incentive, aspiration and application. Or, more pungently, in the absences thereof. Here is an interpretive narrative as illustration:

The recent report by the CAIDP on Artificial Intelligence and democratic values offers a stage. This publication is an impressive work that heralds policies, frameworks and protections. It echoes the statement that “policymakers want an enabling policy environment while mitigating the risks of AI language models” (OECD 2023).

There is promise of innovation while enabling the commoner to remain free from… ?

The political enablements and constraints imply technological enablements and constraints, while the latter more dominantly imply market influences. Both affect social enablements and constraints.

They affect how the demos (“we”) moves (i.e., agency) and is moved (i.e., “our” wanted or unwitting delegation) within the human polis (i.e., the metaphorical dwelled city). Their mapping, that of polis onto tékhnē, is here not a mapping of a linear, one-to-one nature. That is almost so tautological that it is as comforting as listening to, and being reminded of, the decompositions of Beethoven’s hair and of his sound into music, again and again. (Thank you, Dr. Walter Sepp Aigner.)

Simultaneously, these enablements and mitigations create narratives that nurture outliers and entrenchments that are not, and should not at any cost, be ignored or flippantly dismissed (and we are not only thinking of jellyfish, rhinos or swans of bland monochrome coloring). (Day One Futures) Counter to some scientific and engineering needs, we should not always ignore nor filter away the outliers, either. (Taylor et al. 2016)

It should be noted that not only technologies but also policies have “omni-uses”: multiple uses that users gain access to and have the incentive to tinker with, perverting them away from the intended or designed usages (Daniel Schmachtenberger; thank you, Liv Boeree). Hence, can protections also be impositions? Yes.

Neither is amplification to be equated with taking note of these outliers. Nor is the note-taking immediately a representation of the note-taker’s outlying, emotional state of mind. Note-taking can rhetorically be embellished or muted while having been authored in a state of rational, calm mindfulness. It can remind us of a call to innovation to address systemic issues that are ignored, or relabeled as if they were merely technical outliers or socio-inevitable issues: “that’s how it’s always been done.”

And yet, I am utterly excited to be alive during these times. This, all the while I can also think, breathe, reflect and consider the tails of the curve-balls we throw into our underbellies. It does not have to make one despair, nor imagine having reached Nirvana. Such dichotomization and polarization in thinking and working with policy and technology could be perceived as an act of outlier-creation. In effect, this type of creation might mute urgencies related to social minorities, and to socially subdued or pressured voices. Almost paradoxically, these voices are then narrated (by opposing voices) as too extreme and uncomfortable (i.e., as “outliers”), away from a fairness of reparation, and toward a governed solidification of their further biased formatting through models and automation.

These same outliers, and the omni-uses of technology, of information, and of policy, are affecting social relations. They are as curve-balls into the outlying edges of our playing-field, hitting those of us who are less empowered, those of us who are maintained in iterations of stereotyping and yet more of the same: the antithesis of (social and relational) exciting forms of innovation (Forbes; thank you, Prof. Darius Burschka, for having pointed at this). Is this antithetical innovation a Wizard of Oz experiment in the wild, toward large-scale social-engineering manipulation?

This seems an application of an “experiment” which might be seen as an outlier, yet at large scale and with serious affective impact. Just as some historic “outliers” (read: biases and bigotries) are downplayed as ignorable outliers. As if kicking a dead horse, the ignored outliers are then further drowned out by some outliers off the scale of sanity. The insane is given technological form, as well as vast policy-making attention, with reduced understanding of both tech and the socio-culturally maintained outliers, who could and should find their place in the middle.

Reflection on the idea of, the lenses to look onto, and the processes that could be constructing, maintaining or muting outliers, and their omni-uses, is as such not a faux-pause. Some other acts, though, might seem reflective and pausing yet smell as if they are not. (Financial Times and Business Insider)

For instance, with The Internet Archive (IA) losing its “lending lawsuit…” (Copyright Lately), should we have cracked down on access to credible and validly verifiable sources? This question is here re-placed in the context of opening up the World Wide Web to tsunamis of synthetically-generated content that cannot be corroborated, and that is based on sources taken without consent, nor regard for IPR.

Is the latter at such vast scale, and with such financial backing, that it’s too large to notice? (Politico) So large it remains hidden? (EUractiv) As a flea in a red carpet not enabled to reflect on its carpet being a carpet? 

So, IA: no, “AI,” yes? Are now outliers and false dichotomies at play?

Synthetic “content” entrenches biases and reinforces boring yet harmful stereotypes (see recent examples by Abeba Birhane via Twitter). It does the opposite of tackling historic and systemic issues (issues that have synthetically been kept as outliers across the centuries). These could already be addressed socially, policy-wise and technologically. We even have non-techno cognitive tools (thank you, Alireza Dehbozorgi, for pointing at this). And yet they are stalled by incentives and will, mixed with resources and access.

There lies innovation. In contrast, lie-innovation is the stark option we decided to explore feverishly via tech and policy. Here social omni-use is present. Here outliers are reinforced on the scale of the sane.

There is a recent publication entitled “Real World AI Ethics.” Could we now, with digital multiverses, generative AI outputs, and deepfakes, also urge for the nascence of “Fake World AI Ethics,” which could explore outliers that mix fakery with the false labeling of actual issues as “fake,” by ignoring what is lived, right now and right there, under extreme conditions? While information is “neither matter nor energy” (Norbert Wiener; thank you, Prof. Felix Hovsepian, PhD, FIMA, for reminding us), its creative ugly ducklings named ‘Mis-’ and ‘Dis-’ are roughing up lots of dust and partying on vast amounts of energy. (EUractiv)

Innovation is not a one-way ticket to bliss, unless we allow asking: “innovative” to whom, and with what omni-used meaning-making? Repetition and regurgitation are derivative acts. They confuse boredom for innovation and could, if not handled with care, oppose addressing actual needs. This note-taking here could be interpreted as boring as well, as much as re-reading Beethoven could be. Though, they do not have to be.

Democratizing “solidarity” (thank you, Michael Robbins) via wanting Beethoven, or via social relational care into diversities of lived, local and global needs, can symbolize omni-innovation in tech and policy.

Some highlighted references

OECD (2023), “AI language models: Technological, socio-economic and policy considerations”, OECD Digital Economy Papers, No. 352, OECD Publishing, Paris, https://doi.org/10.1787/13d38f92-en.

Seger, Elizabeth, Aviv Ovadya, Ben Garfinkel, Divya Siddarth, and Allan Dafoe. “Democratising AI: Multiple Meanings, Goals, and Methods,” March 27, 2023. https://doi.org/10.48550/arXiv.2303.12642.

Taylor, J., et al. (2016). Alignment for Advanced Machine Learning Systems. Last retrieved on April 15, 2023.


<< The Gates open, close, open, close >>



If one were to bring a product to market that plays Russian roulette with qualitative and quantitative validity and reliability (cf. Cohen, Manion & Morrison 2018:249-…, Tables 14.1 & 14.2), would it then be surprising that their “paper,” justifying that same product, plays it loosely as well? (Ref: https://lnkd.in/ek8nRxcF and https://lnkd.in/e5sGtMSH )

…and, before one rebuts: does a technical paper (or “report”) with the flair of an academic paper not need to follow attributes of validity and reliability? Open(AI)ness/transparency are but two attributes within an entire set innate to proper scientific methodologies AND engineering methodologies; the two sets of methodologies not being synonymous.

Open is lost in AI.

Or, as argued elsewhere as a rebuttal to similar concerns with LLMs, are these “only for entertainment”? I say: expensive entertainment; i.e., financially, socially, environmentally and governance-wise, not to mention the expense on the legality of data sourcing. (thread: https://lnkd.in/db_JdQCw referencing: https://garymarcus.substack.com/p/what-did-they-know-and-when-did-they?r=drb4o and https://www.animasuri.com/iOi/?p=4442 )

“Ah, yes, of course,” one might reply, “these are the sad collateral damages as part of the acceleration risks, innate to innovation: fast, furious, and creatively destructive. A Luddite would not understand as is clear from your types and your fear-mongering during them days of the printing press.” 

No. Not exactly.

Reflection, care, (re)consideration, and nuancing are not innate to burning technology at the stake. Very much the opposite. Love for engineering and love for the sciences do not have to conflict with each other, nor with love for life, ethics, aesthetics, relations, poetics, communities, well-being, advancement and market exploration.

Rest assured, one can work in tech, play with tech and build tech while still reflecting on confounding variables, processes, collateral effects, risk, redundancies, opportunity, creativity, impact, human relation, market excitement, and so on.

Some humans can even consider conflicting arguments at once without having to dismiss one over the other (cf. https://t.co/D3qSgqmlSf ). This is not only the privilege of quantum particles. Though carelessness, while seeing if, when and where the rouletted bullet lands, while scaling this game to the global level and into the wild, is surely not a telltale signal of such ability either.

Are some of our commercial authoritative collectives too big to be failed at this? No testing ability in the highest percentile answers this question for us, folks. That “assessment” lies elsewhere.

And yet, fascinating are these technological tools.

—-•
Additional Triggers:

https://lnkd.in/ecyhPHSM

Thank you Professor Gary Marcus

https://lnkd.in/d63-w6Mj

Thank you Sharon Goldman

—-•

Cohen, L., Manion, L., Morrison, K. (2018). Research Methods in Education. 8th Edition. Routledge.

—-•

<< 7 Musketeers of Data Protection >>


In the EU and UK there are data protection principles set within regulation or law. Some relate back to the UN’s human rights framework:

(right to) Lawfulness, fairness & transparency;
(right to) Purpose limitation;
(right to) Data minimization;
(right to) Accuracy;
(right to) Storage limitation;
(right to) Integrity & confidentiality;
(right to) Accountability

How might Large Language Models (LLMs) measure up?

These innovations were built on scraping the internet for data. The collected data was then processed in a manner to allow the creation of LLMs & their spin-off chatbots. Then products have been layered on top of that which are being capitalized upon. While oversimplified, this paragraph functions as the language model for this text.

This process, hinted at in the previous paragraph, has not been and is not occurring in a transparent fashion. Since the birth of the World Wide Web, and with it the rise of “social” networks, users uploading their data onto the internet probably did not have this purpose (i.e., large-scale data scraping initiatives) in mind. The data on users is being maximized, not minimized.

The resulting output is rehashed in such a way that accuracy is seriously questionable. One’s data is potentially stored for an unlimited time at unlimited locations. Confidentiality seems at least unclear, if not destabilized. Who is accountable? This too is unclear.

I am not yet clear as to how LLMs measure up to the above 7 data protection principles. Are you clear?
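Purely as a reader’s aide, and emphatically not a legal assessment, the seven principles and the open concerns sketched above can be paired in a minimal checklist; the wording of each concern is my own paraphrase, and the names here are illustrative assumptions rather than any official taxonomy:

```python
# Hypothetical checklist pairing each data protection principle with the
# concern raised above regarding LLM pipelines. Labels are illustrative only.
PRINCIPLE_CONCERNS = {
    "lawfulness, fairness & transparency": "scraping and processing are not occurring transparently",
    "purpose limitation": "uploaded data was not intended for large-scale scraping",
    "data minimization": "data on users is being maximized, not minimized",
    "accuracy": "rehashed output makes accuracy seriously questionable",
    "storage limitation": "data potentially stored for unlimited time at unlimited locations",
    "integrity & confidentiality": "confidentiality unclear, if not destabilized",
    "accountability": "who is accountable remains unclear",
}


def unresolved(concerns: dict) -> list:
    """Return the principles that still carry an open (non-empty) concern."""
    return [principle for principle, concern in concerns.items() if concern]


print(len(unresolved(PRINCIPLE_CONCERNS)))  # prints 7: all seven remain open
```

In this sketch, a principle would only drop off the list once its concern could honestly be cleared to an empty string; for now, all seven musketeers remain unanswered.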

If these principles were actually implemented, would they stifle innovation and the market? Though, if these seven were not executed, what would be the benefit, and what would be the cost (think value, social relations and democracy here, and not only money) to the user-netizen with “democratized” (lack of) access to these “AI” innovations?

Or, shall we declare these 7 musketeers on a path to a death by a thousand transformable cuts? This would then occur in harmony with calls for the need for trustworthy “AI.” Is the latter then rather questionable, to ask it gently?

References

data protection legislation (UK):
Data Protection Act 2018 + the General Data Protection Regulation 2016

https://www.legislation.gov.uk/ukpga/2018/12/contents/enacted

https://www.legislation.gov.uk/eur/2016/679/contents

General Data Protection Regulation (GDPR)
https://gdpr-info.eu/


Data Protection and Human Rights:
https://publications.parliament.uk/pa/jt200708/jtselect/jtrights/72/72.pdf

https://edps.europa.eu/data-protection/data-protection_en