<< The Inscrutable Human(made) Algorithms >>


“There is consensus that avoiding inscrutable [here meant as the opposite of explainable] algorithms is a praiseworthy goal. We believe there are legal and ethical reasons to be extremely suspicious of a future where decisions are made that affect, or even determine, human well-being, and yet no human can know anything about how those decisions are made. The kind of future that is most compatible with human well-being is one where algorithmic discoveries support human decision making instead of actually replacing human decision making. This is the difference between using artificial intelligence as a tool to make our decisions and policies more accurate, intelligent, and humane, and, on the other hand, using AI as a crutch that does our thinking for us.” (Colaner 2019)

If one took this offered opportunity for contextualization and were to substitute *algorithms* of human computing [*1] for the “algorithms” here implied as those used in digital computing, one could reflect interestingly on our human endeavor in past, present, and future: indeed, folks, “…decisions are [and have been] made that affect or even determine human well-being, and yet no human can know anything about how those decisions are made…” Can you think of anything in our species’ past and present that has been offering just that?

“Our” is not just you or me. It is any sample, or group fitted with a feature, or the entire population, and it is life, to which this entirety belongs. More or less benevolent characters, out of reach of the common mind-set yet reaching outward to set minds straight, have been part of the human narrative since well before the first cave paintings we have rediscovered, and up until now. As, and beyond what, UNESCO recently suggested for K-12 AI curricula:

“Contextualizing data: Encourage learners to investigate who created the dataset, how the data was collected, and what the limitations of the dataset are. This may involve choosing datasets that are relevant to learners’ lives, are low-dimensional and are ‘messy’ (i.e. not cleaned or neatly categorizable).” (UNESCO 2022, p. 14)
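As a brief aside in code on the UNESCO recommendation just quoted — a minimal, hypothetical sketch (every dataset, field name, and value below is invented for illustration and comes from neither the UNESCO document nor this essay) of how a learner might pair a “messy” dataset with explicit provenance notes and discoverable limitations:

```python
# Hypothetical illustration of "contextualizing data": a small, messy
# dataset kept alongside notes on who made it, how, and with what limits.

messy_rows = [
    {"age": "34", "mood": "happy"},
    {"age": "", "mood": "HAPPY "},   # missing age, inconsistently formatted label
    {"age": "n/a", "mood": None},    # not cleanly categorizable
]

provenance = {
    "who_created": "unknown volunteer survey (hypothetical)",
    "how_collected": "self-reported web form",
    "known_limitations": [],
}

def audit(rows, context):
    """Record simple limitations a learner can discover by inspection."""
    # Count rows whose 'age' is not a plain number.
    unusable_age = sum(1 for r in rows if not str(r.get("age") or "").isdigit())
    if unusable_age:
        context["known_limitations"].append(
            f"{unusable_age} of {len(rows)} rows lack a usable 'age' value"
        )
    # Compare raw mood labels against their normalized forms.
    labeled = [r for r in rows if r["mood"]]
    normalized = {str(r["mood"]).strip().lower() for r in labeled}
    if len(normalized) < len(labeled):
        context["known_limitations"].append(
            "mood labels are inconsistently formatted"
        )
    return context

audited = audit(messy_rows, provenance)
for note in audited["known_limitations"]:
    print(note)
```

The point of the sketch is only that the dataset never travels without its context: the provenance record and its growing list of limitations are part of the data itself.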

In enfranchising the ethics of AI, we might want to consider that this endeavor, in parallel with its acknowledged urgency and importance, could perhaps distract us from our human inscrutable algorithms.

For instance, should I be concerned that I am blinding myself with the bling of reflective externalities (e.g. a mobile cognitive extension), muffling pause and self-reflection even further, besides those human acts that already muffle human discernment?

The intertwined consideration of technology might best come with the continued consideration of the human plight, and of any (life) form as well as its environment.

Could we improve how we each relate to each other via our mediations such as processes labeled as artificially intelligent? Various voices through various lenses seem to think so. 

Perhaps the words narrated by Satya Nadella, CEO of Microsoft, while referring to John Markoff, suggest similar considerations, which might find a place at the table of the statisticians of future narratives:  

“I would argue that perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology. In his book Machines of Loving Grace, John Markoff writes, ‘The best way to answer the hard questions about control in a world full of smart machines is by understanding the values of those who are actually building these systems.’ It’s an intriguing question, and one that our industry must discuss and answer together.” (Nadella via Slate.com 2016)

Through such a contextualizing lens, the issue is provokingly unrelated to silicon-based artificialities (alone), in that it is embedded in human relations, predating what we today herald as smart or synthetic hints toward intelligence.

Might the answers toward AI ethics hence lie in not ignoring the human processes that create both AI’s and ethics’ narratives? Both are created by actors ensuring that “decisions are made that affect, or even determine, human well-being.” That said, we might question whether dialog or human discernment is sufficiently accessible to a majority struggling with the “algorithms” imposed on their daily, unpolished lives. Too many lives have too little physical and mental breathing space to obtain the mental models (and algorithms), let alone to reflect on their past, their present, and even less their opaque futures as told by others in both tech and ethics.

One (e.g. tech) needs all the more to accompany the other (e.g. ethics), in a complex-systems thinking, acting, and reflecting that can also be transcoded to other participants (e.g. any human life anywhere). Our human systems (of more or less smartness) might need a transformation that allows this to the smallest, weakest, or least represented denominator, which cannot be CEOs, academics, or experts alone. Neither should these be our “crutches,” as the above quote suggests for AI.

You and I can continue with our proverbial “stutters” of shared (in)competence and yet, with openness, question, listen, and learn:

“Is my concern, leading my acts, in turn improving our human relations? Can I innovate my personal architecture right now, right here, while also considering how these can architect the technologies that are hyped to delegate smartness ubiquitously? How can I participate in better design? What do I want to delegate to civilization’s architectures, be they of the latest hype or of past designs? Can you help me ask ‘better’ questions?”

…and much better dialog that promotes, to name but a few: explainability (within, around, and beyond AI and ethics), transparency (within, around, and beyond AI and ethics), perspectives, a low barrier to entry, and contextualization (of data and beyond). (UNESCO 2022, p. 14)


[*1] e.g. mental models and human narratives: how we relate to ourselves and others in the world; how we offer strategies and constraints in (others’) thinking and acting, creating assumed intent, purpose, or aim, yet which possibly, at times, lack intent and might be loosely driven by a lack of self-reflective abilities, in turn delegated to hierarchically higher powers of human or über-human form in our fear of freedom (Fromm 1942)

References

Bergen, B. (2012). Louder than Words: The New Science of how the Mind makes Meaning. New York: Basic Books

Fromm, E. (1942). The Fear of Freedom. UK: Routledge.

Miao, F., UNESCO et al. (2022). K-12 AI Curricula: A Mapping of Government-endorsed AI Curricula. Online: UNESCO. Retrieved from https://www.researchgate.net/profile/Fengchun-Miao/publication/358982206_K-12_AI_curricula-Mapping_of_government-endorsed_AI_curriculum/links/6220d1de19d1945aced2e229/K-12-AI-curricula-Mapping-of-government-endorsed-AI-curriculum.pdf and https://unesdoc.unesco.org/ark:/48223/pf0000380602

Jordan, M.I. via Kathy Pretz (31 March 2021). Stop Calling Everything AI, Machine-Learning Pioneer Says: Michael I. Jordan explains why today’s artificial-intelligence systems aren’t actually intelligent. Online: IEEE Spectrum. Retrieved from https://spectrum-ieee-org.cdn.ampproject.org/c/s/spectrum.ieee.org/amp/stop-calling-everything-ai-machinelearning-pioneer-says-2652904044

Markoff, J. (2015). Machines of Loving Grace. London: Harper Collins

Nadella, S. (June 28, 2016, 2:00 PM). Microsoft’s CEO Explores How Humans and A.I. Can Solve Society’s Challenges - Together. Online: Slate. Retrieved from https://slate.com/technology/2016/06/microsoft-ceo-satya-nadella-humans-and-a-i-can-work-together-to-solve-societys-challenges.html

Olsen, B., Lasprogata, E., Colaner, N. (8 April 2019). Ethics and Law in Data Analytics. Online videos: LinkedIn Learning: Microsoft Professional Program. Retrieved on 12 April 2022 from https://www.linkedin.com/learning/ethics-and-law-in-data-analytics/negligence-law-and-analytics?trk=share_ios_video_learning&shareId=tGkwyaLOStm9VK/kXii9Yw==

Weinberger, D. (April 18, 2017). Our Machines Now Have Knowledge We’ll Never Understand. Online: Wired. Retrieved from https://www.wired.com/story/our-machines-now-have-knowledge-well-never-understand/

…and the irony of it all:

This is then translated by a Bot …: https://twitter.com/ethicsbot_/status/1514049150018539520?s=21&t=-xS7r4VMmBqTl6QidyMtng

Header visual: animasuri’22; a visual poem of do’s and don’ts

This post on LinkedIn: https://www.linkedin.com/posts/janhauters_the-inscrutable-humanmade-algorithms-activity-6919812255173795840-_aIP?utm_source=linkedin_share&utm_medium=ios_app

A kind and appreciated appropriation into Spanish by Eugenio Criado can be found here:

https://www.empatiaeia.com/los-inescrutables-algoritmos-humanos/

and here:

https://www.linkedin.com/posts/eugenio-criado-carbonell-60995258_algoritmos-sesgos-autoconocimiento-activity-6941060183364153344-uCOZ?utm_source=linkedin_share&utm_medium=ios_app