
<< The Intelligent Teeth of Artificial Times >>

While we are collectively & synchronously brushing our teeth – in harmony with taking “the World From Another Point of View,” as the Nobel Laureate Richard Feynman encourages us (1973) – let us revisit Norbert Wiener.

Besides his likely Feynmanian brushing, Norbert also made a few predictions through his publications about “the development of controlled machines and about the corresponding techniques of automatization” (Wiener 1960).

Playing with Feynman: when we brush our teeth at that sharp Earthly edge of day-to-night, & vice versa, have we been developed into a collective controlled machine by expert “dentists” & “parents” alike? Wiener claimed to foresee this having “important consequences affecting the society of the future” (Wiener 1960).

Here is one of his 1960 predictions. Several of its attributes might harmonically oscillate with today’s AI applications & mythologies, grinding away at the socially-cleansed enamel we hold so dear:

“If we use, to achieve our purposes, [0] a mechanical agency [1] with whose operation we cannot efficiently interfere [2] once we have started it, [3] because the action is so fast [4] and irrevocable that we have not the data [5] to intervene before the action is complete, [6] then we had better be quite sure that the purpose put into the machine [0] is the purpose which we really desire [7] and not merely a colorful imitation [8] of it.”

Today, more than 60 years later: what are the new points of view relevant to achieving our purposes? That is, where the “point” in the “point of view” itself (à la Feynman) is innovative (irrespective of any of our technologies being more, or less, innovative)? Above I highlighted 8 attributes in Wiener’s quote that trigger both teeth grinding & the consideration of view-pointing in the imagined character writing this post. Below, I will touch on only three of them (if anyone wishes, I can also touch on the others).

This post does not pretend to settle whether such innovating points of view exist. Besides hinting at some present concerns & polemic or hyped tensions (Wiener 1960, Samuel 1960) going back to the early days of ethics surrounding AI as a field, this post lets you widen your own points of view.

& yet, we can note with text, context & subtext that “therefore, the task is not so much to see what no one has seen before, as to think, about what everyone sees, what no one has yet thought. That’s why it takes so much more to be a philosopher than a physicist” (Schopenhauer 1851).

As an influencing side note: it is in so many ways ironic that this quote is not, as many claim on our (mis)informing internet, a quote by the physicist Schrödinger. Either way, in allowing us a place in & a sense of the world, this quote can be extended: it is also our task to relationally look backward, forward, presently, outward & inward, far & nearby, & always to confidently celebrate doubt.

ITEM [0]

“purposes”: are these now aligned with needs, urgencies, importance? Who is “we”? Whose purposes, needs, urgencies, and weighings of importance are addressed and answered? If not all (and perhaps none) of these needs (etc.) are considered or met, then who is affected by the consequential ethical debt?

Atari, M., Xue, M. J., Park, P. S., Blasi, D., & Henrich, J. (2023). Which Humans? PsyArXiv preprint. https://doi.org/10.31234/osf.io/5b26t

Lazar, S., & Nelson, A. (2023). AI Safety on Whose Terms? Science, 381(6654), 138. https://doi.org/10.1126/science.adi8982

Petrozzino, C. (2021). Who Pays for Ethical Debt in AI? AI and Ethics, 1(3), 205–208. https://doi.org/10.1007/s43681-020-00030-3

ITEM [1]

Agency has been understood as a “temporally embedded process of social engagement, informed by the past (in its “iterational” or habitual aspect) but also oriented toward the future (as a “projective” capacity to imagine alternative possibilities) and toward the present (as a “practical-evaluative” capacity to contextualize past habits and future projects within the contingencies of the moment)” (Emirbayer & Mische 1998).

Emirbayer, M., & Mische, A. (1998). What Is Agency? American Journal of Sociology, 103(4), 962–1023. https://doi.org/10.1086/231294

Does this fit any, or all, (labeled) agents?

ITEM [5]

The quality of data collection (e.g., pre-process: consent, IPR, the right to be forgotten; post-output: libel via the mislabeled “hallucinations”) is distinct from the quality of the data itself.

Garvie, C. (2019, May 16). Garbage In, Garbage Out: Face Recognition on Flawed Data. Center on Privacy & Technology, Georgetown Law. https://www.law.georgetown.edu/privacy-technology-center/publications/garbage-in-garbage-out-face-recognition-on-flawed-data/

References:

Feynman, R. (1973). Take the World From Another Point of View. Films for the Humanities & Sciences / Yorkshire Television (color production). https://archive.org/details/richard-feynman-take-the-world-from-another-point-of-view (00:20–01:40)

Samuel, A. L. (1960). Some Moral and Technical Consequences of Automation – A Refutation. Science, 132(3429), 741–742. https://doi.org/10.1126/science.132.3429.741

Schopenhauer, A. (1851). Parerga und Paralipomena: Kleine Philosophische Schriften, Vol. 2, § 76, p. 93. Berlin: A. W. Hayn. Original: “Daher ist die Aufgabe nicht sowohl zu sehen was noch keiner gesehen hat, als bei Dem was Jeder sieht, zu denken was noch Keiner gedacht hat. Darum auch gehört so sehr viel mehr dazu, ein Philosoph als ein Physiker zu seyn.” https://archive.org/details/bub_gb_WuUOAAAAIAAJ/page/92/mode/2up

Wiener, N. (1960). Some Moral and Technical Consequences of Automation. Science, 131(3410), 1355–1358. https://doi.org/10.1126/science.131.3410.1355