Context Ethics & Ethics of Complex Systems


Professor Bernd Carsten Stahl and co-authors recently published a paper in which a panel of experts was asked about AI ethics. In the blog post on the paper’s topic, the conclusion is telling: “…ethical issues are not immediate consequence of the technology but are caused by and located in the broader set of socio-technical systems that constitute and make use of AI. If this is true, then it will be important to think how we can move beyond the current state of the debate…” (Ref)

As the blog post accompanying the paper also suggests, the present-day context that I too would like to give to this basic empirical study is the stream of technological outputs that has accelerated since the end of 2022. Intertwined with Generative AI technologies are the technological services based on Large Language Models (LLMs). ChatGPT, Bard, and others could come to mind.

Some of these LLM technological outputs are said to have been constrained in a metaphorical straitjacket: remember unhinged “DAN” and “Sydney”? Following creative prompting, the systems seemed to be of questionable stability. Technological patches were suggested, while others proposed placing the responsibility on end-users, with pathos and calls not to abuse our digital “companions.” The tools continue to be fine-tuned and churned out while the foundations remain those of Machine Learning. It’s wild. It’s exciting. Yes. Or, so we might like to be enthralled. The ethical concerns sprout up like mushrooms on a wet autumn morning. And then there is Age of AI’s FreedomGPT: downloadable and claimed to be optimized to run on laptops and in private. The unhinged-ness does not lie only with Garbage In, Garbage Out (GIGO). It does not lie only with unfriendly users. It lies with the nature of the processes and architecture (i.e., Deep Learning and LLMs). Moreover, it lies with the operators of the engines; the engine’ers. Their ideological input might be Freedom, but the output is possibly an actual historic, systemic, power-grabbing Moloch in the here and now, like Fascism: Freedom In, Fascism Out (Fromm 1942). Can we move beyond the debate? What are the data inputs of the debate, really?

Following the unhinged characters, tamer bot-constructs are not confined to research, nor only to the pursuit of knowledge in an academic sense. The tech is not contained in a lab. It is out in the wild. The marketed tech comes with or without human actors, as cogs in the machinery of Wizard of Oz-like social experiments (Ref). Considering context (and techno-subtext): this group of humans is less excited and less enthralled by the innovative wild. They might know some of the “technical characteristics” of their work, yet they are not enabled to address the ethical constraints, which reach well beyond the tech they focus on.

The data collected in the paper suggest that AI (ethics) “experts” rank “interventions most highly that were of a general nature. This includes measures geared towards the creation of knowledge and raising of awareness.” (Stahl et al. 2023 via blog). It is a common and reasonable assumption that “once one has a good understanding of the technical characteristics, ethical concerns and mitigation options, one can prioritise practical steps to address the ethics of AI” (Ibid). Associating this with the ranking by “experts” (Ibid), perhaps once one has a good understanding of pluralist ethical characteristics (and, as such, a type of “general” knowledge base), of concepts, relations, concerns, and mitigation options, one could prioritize or demote one step over another to address an implementation in an AI application. In effect, the view which many echo, to first learn the tech and then perhaps the ethics (UNESCO 2022:50; Grosz et al. 2019), and which Professor Stahl presents, cannot offer a rationale for dismissing the other, larger, contextualizing ethical literacy that is suggested here. Then again, Ethics not being subservient to technology is at present indeed not (yet) a reality, neither in technology-to-market nor in education. Freedom is equated with technology; for some strong voices, Freedom is not yet related to Ethics. I argue against this latter position (while I argue against “morality”). As in some poetry, the form supports the liberation of the poetic function. One can think of sonnets, or any other art form where aesthetics and ethics could meet.

Reading the blog post further, it seems the research tends to agree with a larger, more contextualized ethical thinking, since only “in some cases it is no doubt true that specific characteristics lead to identifiable ethical problems, which may be addressed through targeted interventions. However, overall the AI ethics discourse does not seem to be captured well by this logic.” (Ibid) The text then continues with what I quoted here above: “ethical issues are not immediate consequence of the technology but are caused by and located in the broader set of socio-technical systems.”

These AI-labeled technologies are contextualized and contextualizing. Metaphors of ink spreading out on paper might come to mind. Breathe, people. Critically assess, people. Or can we?

The above superficial suggestions could offer us some possible socio-technical contextualization of Prof. Stahl et al.’s publication.

Further context is that of investment figures. As shared by CB Insights, funding in 2021–2022 for text-based Generative AI is claimed at 852M USD across 48 deals; that is without elaborating on the investment in other Generative AI, which is of a similar amount (Ref). Following the money (in non-linear fashions) offers a social debate in addition to the debate through techno-lenses. Via this context, related to present debates on speeding up and slowing down, we might want to try and see from where, and to where, the money and R&D flow; both in tech and in academic research on the science and engineering, as well as on the ethics of these. Then the hype, and the seeming dichotomy or seeming contradiction in the present debate, might perhaps clear up a bit (i.e., investors, stockholders, and associated stakeholders are probably not often in the business of slowing down their own investments; perhaps those of competitors? Hence the non-linearities, multidirectionality, seeming dichotomy, and seeming contradictions in PR acts versus acts of investment, donation, public funding, or grant-offering). It should be noted that the above excludes transparency on investment into hardware (companies) and their solutions. More importantly, these companies and their backers are then aiming to support rapid yet costly processes, and (rapid) Return on Investment. Muffled, then, are cost and cost to context (or environmental cost, of which some is defined as carbon footprint). These too are downplayed or excluded, yet they are the context of this debate.

Thirdly, the ethical concepts of benign and malign (from one lens of ethical thinking) also contextualize the AI ethics concepts into a sense larger than the socio-technical alone. It is not only the tool itself that begs for a human-in-the-loop. It is the human-in-some-loop, according to Interpol, that needs consideration as well. In taking us into the wider socio-technical realm, while reflecting via contextualization on the above academic publication, this too might be of interest (Ref).

Fourthly, some wonder how we could put our trust in systems that are opaque and ambiguous in their information output (Ref via Ref). As part of human nature, some seem to suggest putting the blame on humans for a tech’s output of “confabulation,” while others oppose this. Trust needs to be adopted in both the tech and, perhaps even more so, in the tools for thinking about and debating the tech. At present the issue might be that the ethical concepts are piecemeal, and that looking at them through a variety of ethically diverse yet systematized lenses is feared. Various lenses can create a nuanced yet complex social landscape in which to think about and debate socio-technical and technological attributes and concerns.

A set of questions might come to mind: Can they who are not (intending nor able to be) transparent create transparency? Even with the tech aligned with human need (and human want), if there is no transparency, can there be proper debate? Probably, maybe, though… If one party knows while the other drifts increasingly into the information dark ages, will there be equity in this debate? This is not merely a technical problem.

Via comparative contextualization: is transparency an accessibility issue, enabling one to see the necessary information under the metaphorical or physical hood? By analogy, is lifting the bonnet sufficient to see whether the battery of the car is damaged after a crash?

Reverting to chatbots: how are we, the common people, insured against their failures following a minor or less-minor information crash? Failures such as misinformation (or here), and that for reasons that cannot be corroborated by them. The lack of corroboration could be due to a lack of organized, open access, and a lack of transparency into ethics or information literacy. Transparency is surely not intended as a privilege for a techno-savvy or academic few? Or is it? These questions, and the contextualization too, hint at considerations beyond the socio-technical alone. These, and better, considerations might offer directions to break open and innovate the debate itself.

Besides solutions through a tech-lens, do too few attributes seem to be rationally and systematically considered in the (ethics) debate?

And then:

A contextualization that allows reflecting back on how we humans learn and think about the ethical features seems, at times, to be shunned by experts and leading voices.

Some seem to lament that there is little to no time, due to the speed of the developments. Yes, an understandable concern. Moreover, as for the masses: while they are jumping on hype or doom, the non-initiated (and I dare not say non-“expert”) are excluded from the debate among the leading voices. That too might be understandable, given the state of flux.

Could it be that:

  1. if we limit our view through a technology-lens alone (still a lens which is nevertheless important!), and
  2. if we hold off on reflection due to the speed of innovation (which is understandably impressive), and
  3. if we do not include trans-disciplinary and more “naive,” differently-initiated voices, that
  4. we continue finding a monolithic reasoning, and fail to reflect on and consider contextualizing, enabling options over time?

While we put our tech into context, we might be de-contextualizing our human participants and the wish (of some) for reflection. Do we then put our human “DAN” and human “Sydney” in proverbial straitjackets or metaphorical muzzles?

Lack of inclusion, and reluctance to enable others into the AI (ethics) debate beyond hype & doom or a restriction to tech-only, might create disenfranchisement and alienation, or maintain credulousness. As for the latter: we perhaps do not want to label someone as altogether stupid and then leave them to their own devices to figure it out.

Agency without diversely yet systematically framed (ethics) concepts is not agency. It seems more like negligence. Access to the various yet systematized ethical lenses could possibly increase one’s options to voice needs and concerns, and to have agency. Or is the lack thereof the socio-technical mindfulness we are hoping for?

The lack that is assumed here might distract from sustainable adaptation, with contextualized understanding and rational critique, while considering individual, social, technological, and market “innovation.” Innovation, here, also includes a process of enabled self-innovation and relational innovation; a thing we traditionally label “learning” or “education.”

Inclusion via tools for reflection could aid in applying what the authors of the above paper offer us. Even if we are spinning at high speed, we can still reflect, debate, and act following reflection. The speed of the spin is the nature of a planetary species. Or is methodological social inclusion, and such systematized transparency, not our intent?

Might a direction be one where we consider offering more systematic, methodological, pluralist ethics education and learning resources, accessible to the more “naive”? Mind you, “naive” is here used in an ironic manner. It refers to those who are not considered initiated, or of the techno(-ethics) in-group. They could include the (next-generation) users/consumers, in general. Could these be considered, inclusive of, and beyond, the usual context-constraining socio-technical gatekeepers? Yes, they can. I, for one, want to explore this elsewhere and over time.

In the meantime, let us reflect, while we spin. We might not have sufficiently recursed this:

“Unless we put as much attention on the development of [our own, human] consciousness as on the development of material technology—we will simply extend the reach of our collective insanity…without interior development, healthy exterior development cannot be sustained” — Ken Wilber (2000)

To round this writing up:

If “…ethical issues are not immediate consequence of the technology but are caused by and located in the broader set of socio-technical systems…”, then a path to reflect on might be one that includes various social systems, such as those resulting from our views and their application to designing educational systems.

More importantly, we might want to reflect on how and why we continue to lack the thrill toward the development of our own “consciousness,” rather than being predominantly thrilled by, and extremely financially invested in, the fantastic, long-term aspiration toward perceptions of artificial versions thereof. Questioning does not need to exclude the exploration of either, or of other paths. Then it might be more enabling “to think how we can move beyond the current state of the debate…” This, interestingly, seems to be a narrative state patterned beyond the AI ethics or AI technology debates alone.

And yet, don’t just trust me. I have played here with resolute polar opposites. Realities are (even) more nuanced. So don’t trust me. Not only because I am neither a corporation nor a leading voice, have no financial backing, and am not a transparent, open AI technology. Don’t trust me simply because your ability to reflect, and your continued grit to socially innovate, could be yours to nurture and not to delegate.


References:

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., & Srikumar, M. (2020). Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI. Berkman Klein Center Research Publication No. 2020-1. Available at SSRN: https://ssrn.com/abstract=3518482 or http://dx.doi.org/10.2139/ssrn.3518482

Stahl, B. C., Antoniou, J., Bhalla, N., Brooks, L., Jansen, P., Lindqvist, B., Kirichenko, A., Marchal, S., Rodrigues, R., Santiago, N., Warso, Z., & Wright, D. (2023). A systematic review of artificial intelligence impact assessments. Artificial Intelligence Review. https://doi.org/10.1007/s10462-023-10420-8

Stahl, B. C., Brooks, L., Hatzakis, T., Santiago, N., & Wright, D. (2023). Exploring ethics and human rights in artificial intelligence – A Delphi study. Technological Forecasting and Social Change, 191, 122502. https://doi.org/10.1016/j.techfore.2023.122502

Curated references as breadcrumbs & debate triggers enriching the socio-technological lens:

https://talks.ox.ac.uk/talks/id/ede5a398-9b98-4269-a13f-3f2261ee6d2c/

https://aeon.co/essays/why-longtermism-is-the-worlds-most-dangerous-secular-credo

https://www.vox.com/future-perfect/23298870/effective-altruism-longtermism-will-macaskill-future

https://www.pestemag.com/books-before-you-die-or-after/longtermism-or-how-to-get-out-of-caring-while-feeling-moral-and-smart-s52gf-6tzmf

https://www.currentaffairs.org/2021/07/the-dangerous-ideas-of-longtermism-and-existential-risk

https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai

https://www.lrb.co.uk/the-paper/v37/n18/amia-srinivasan/stop-the-robot-apocalypse

systemic racism, eugenics, longtermism, effective altruism, Bostrom

https://forum.effectivealtruism.org/posts/4GAaznADMXL7uFJsY/longtermism-aliens-ai

https://www.vice.com/en/article/z34dm3/prominent-ai-philosopher-and-father-of-longtermism-sent-very-racist-email-to-a-90s-philosophy-listserv

https://global.oup.com/academic/product/superintelligence-9780199678112?cc=us&lang=en&

Isaac Asimov’s Foundation trilogy (where he wrote about the concept of long-term thinking)

Hao, K. (2021). The Fight to Reclaim AI. MIT Technology Review, 124(4), 48–52: https://lnkd.in/dG_Ypk6G : “These changes that we’re fighting for—it’s not just for marginalized groups,… It’s actually for everyone.” (Prof. Rediet Abebe)

Eddo-Lodge, Reni. (2017). Why I’m No Longer Talking to White People About Race. Bloomsbury Publishing. https://lnkd.in/dchssE29

https://venturebeat.com/ai/open-letter-calling-for-ai-pause-shines-light-on-fierce-debate-around-risks-vs-hype/

80000 hours and AI

chatgpt, safety in artificial intelligence and elon-musk

https://www.vice.com/en/article/qjvppm/the-open-letter-to-stop-dangerous-ai-race-is-a-huge-mess

https://www.wired.com/story/effective-altruism-artificial-intelligence-sam-bankman-fried/

pause letter, AI, hype, and longtermism

https://twitter.com/emilymbender/status/1640923122831626247?s=46&t=iCKk3zEEX9mB7PrI1A3EDQ