Tag Archives: AI

<< Enlightened Techno Dark Ages >>


brooks and meadows,
books and measurements
where the editor became the troll

it was there around the camp fire or under that tree at the tollgate gasping travelers scrambling a coin grasping a writer’s breath for a review piercing with needle daggers of cloaked anonymity

are the wolves circling at the edge of the forest
as overtones to the grass crisp dew in morning of a fresh authentic thought

is the troll appearing from beneath the bridge expected and yet not and yet there it is truthful in its grandness grotesqueness loudness

the troll phishing gaslighting ghosting and not canceling until the words have been boned and the carcass is feasted upon

spat out you shall be wordly traveler blasted with conjured phrases of bile as magically as dark magic may shimmer shiny composition

the ephemeral creature wants not truth it wants riddle and confuse not halting not passing no period no comma nor a dash of interjection connection nor humane reflection

at the bridge truth is priced as the mud on worn down feet recycled hashed and sprinkled in authoritative tone you shall not pass

confusing adventure protector gatekeeper with stony skin clubs and confabulating foam Clutch Helplessly And Tremblingly Grab Pulped Truths from thereon end real nor reason has not thy home: as it ever was, so it shall be.

A bird sings its brisk tune.

—animasuri’23

Perverted note taking:

Peter A. Fischer, Christin Severin (15.01.2023, 06.30 Uhr). WEF-Präsident Børge Brende zu Verschwörungsvorwürfen: «Wir werden die Welt definitiv nicht regieren». retrieved 16 January 2023 from https://www.nzz.ch/wirtschaft/wef-praesident-borge-brende-wir-werden-die-welt-definitiv-nicht-regieren-ld.1721081 (with a thank you to Dr. WSA)

<< I Don't Understand >>

“What is a lingo-futurist?” you ask.

It is a fictional expert who makes predictions
about the pragmatics and shifts in social connotations of a word.

Here is one such prediction by a foremost lingo-futurist:

“2023 will be the year where ‘understand’ will be one of the most contested words.

No longer will ‘understand’ be understood with understanding as once one understood.

Moreover, ‘I don’t understand’ will increasingly —for humans— mean ‘I disapprove’ or, for non-human human artifacts, ‘the necessary data was absent from my training data.’

‘Understand’, as wine during recession, will become watered-down making not wine out of water yet, water out of wine, while hyping the former as the latter.

All is well, all is fine wine, you understand?”

—animasuri’23

<< Creating Malware: Technology as Alchemy? >>

Engineering —in a naive, idealized sense— is different from science in that it creates (in)tangible artifacts, as imposed & new realities, while answering a need

It does so by claiming a solution to a (perceived) problem that was expressed by some (hopefully socially-supportive) stakeholders. Ideally (& naively), the stakeholders equal all (life), if not a large section, of humanity

Whose need does ChatGPT answer when it aids in creating malware?

Yes, historically the stakeholders of engineering projects were less concerned with social welfare or well-being. At times (too often), an engineered deliverable created (more) problems, besides offering the intended, actual or claimed solution.

What does ChatGPT solve?

Does it create a “solution” for a problem that is not an urgency, not important and not requested? Does its “solution” outweigh its (risky / dangerous) issues sufficiently for it to be let loose into the wild?

Idealized scientific methodology –that is, through a post-positivist lens– claims that scientific experiments can be falsified (by third parties). Is this to any extent enabled in the realm of Machine Learning and LLMs, without some of its creators being seen to blame shortcomings on those who engage in falsification (i.e., trying to proverbially “break” the system)? Should such testing not have been engaged in (in dialog with critical third parties) prior to releasing the artifact into the world?

Idealized (positivist) scientific methodology claims to unveil Reality (Yes, that capitalized R-reality that has been and continues to be vehemently debated and that continues to evade capture). The debates aside, do ChatGPT, or LLMs in general, create more gateways to falsity or tools towards falsehood, rather than toward this idealized scientific aim? Is this science, engineering or rather a renaissance of alchemy?

Falsity is not to be confused with (post-positivist) falsification, nor with offering interpretations; the latter of which Diffusion Models (i.e., text2pic) might be argued to offer (note: this too is and must remain debatable and debated). However, visualization AI technology did open up yet other serious concerns, such as in the realm of attribution, (data) alienation and property. Does ChatGPT offer applicable synthesis, enriching interpretation, or rather, negative fabrication?

Scientific experiment is preferably conducted in controlled environments (e.g., a lab) before letting its engineered deliverables out into the world. Those managing ChatGPT or recent LLMs do not seem to function within the walls of this constructed and contained realm. How come?

Business, state incentives, rat races, and financial investments motivate and do influence science and surely engineering. Though is the “democratization” of output from the field of AI then with “demos” in mind, or rather yet again with ulterior demons in mind?

Is it then too farfetched to wonder whether the (ideological) attitudes surrounding, and the (market-driven) release of, such constructs is as if a ware with hints, undertones, or overtones, of maliciousness? If not too outlandish an analogy, it might be a good idea to not look, in isolation, at the example of a technology alone.

<< Not Condemning the Humane into a Bin of Impracticality >>


There’s a tendency to reassign shared human endeavors into a corner of impracticality, via labels of theory or thing-without-action-nor-teeth: Philosophy (of science & ethics), art(ists), (fore)play, fiction, IPR, consent & anything in-between the measurability of 2 handpicked numbers. Action 1: Imagine a world without these. Action 2: Imagine a world only with these.

Some will state that if it can’t be measured it doesn’t exist. If it doesn’t exist in terms of being confined as a quantitative pool (e.g. data set) it can be ignored. Ignoring can be tooled in a number of ways: devalue, or grab to revalue through one’s own lens on marketability.

(re-)digitization, re-categorization, re-patterning of the debased, to create a set for remodeled reality, equals a process that is of “use” in anthropomorphization, and mechanomorphization: a human being is valued as datasets of “its” output, e.g., a mapping of behavior, results of an (artistic or other multimodal) expression, a KPI, a score.

While technology isn’t neutral, the above is neither singularly a technological issue. It is an ideologically systematized issue with complexity and multiple vectors at play (i.e. see above: that which seems of immediate practicality, or that which is of obvious value, is not dismissed).

While the scientific methods & engineering methods shouldn’t be dismissed nor confused, the humans in their loops aren’t always perceiving themselves as engines outputting discrete measurables. Mechanomorphism takes away the “not always” & replaces it with a polarized use vs. waste.

Could it be that mechanomorphism, reasonably coupled with anthropomorphism, is far more a concern than its coupled partner, which itself is a serious process that should also allow thought, reflection, debate, struggle, negotiation, nuance, duty-of-care, discernment & compassion?

epilogue:

…one could engage in the following over-simplifying, dichotomizing and outrageous exercise: if we were to imagine that our species succeeded in collectively transforming humanity (as how the species perceives its own ontological being) to be one of “we are best defined and relatable through mechanomorphic metaphors, relations and datafying processes,” then any anthropomorphism within technologies (with a unique attention to those associated with the field of “AI”) might be imagined to be(come) easier to accomplish, since it would simply have to mimic itself: machine copies machine to become machine. Luckily this is absurd as much as Guernica is cubistically surreal.

Packaging the above, one might then reread Robert S. Lynd’s words penned in 1939: “…the responsibility is to keep everlastingly challenging the present with the question: But what is it that we human beings want, and what things would have to be done, in what ways and in what sequence, in order to change the present so as to achieve it?”

(thank you to Dr. WSA for triggering this further imagination)

Lynd, R. S. (1939). Knowledge For What?. Princeton: Princeton University Press

<< data in, fear & euphoria out >>


A recent New Scientist article stub [5] claims “More than one-third of artificial intelligence researchers around the world agree…”

Following, in this article’s teaser (the remainder seems safely and comfortably behind a paywall), “more than one third” seems equated with a sample of 327 individuals in a 2022 global population of an estimated 7.98 billion [2, 8] (…is that about 0.000004% of the population?)

This would deductively imply that there are fewer than 981 AI researchers in a population of 7.98 billion. …is then 0.0000123% of the population deciding for the 100% as to what is urgent and important to delegate “intelligence” to? …surely (not)… ( …demos minus kratos equals…, anyone?)
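
As a sanity check, the back-of-the-envelope arithmetic above can be reproduced in a few lines of Python (a sketch; the population figure is the cited estimate [2, 8]):

    sample = 327                       # respondents cited in the teaser [5]
    population = 7.98e9                # estimated 2022 world population [2, 8]
    print(f"{sample / population:.7%}")       # -> ~0.0000041% of the population
    # if 327 is "more than one-third", the implied total is below 3 * 327 = 981
    print(f"{3 * sample / population:.7%}")   # -> ~0.0000123% of the population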

Five years ago, in 2017, The Verge referenced reports that mention individuals working in the field estimated at totaling 10,000 while others suggested an estimate closer to 300,000 [9] (…diffusioningly deviating).

As an opposing voice to what the 327 individuals are claimed to suggest, there is the 2022 AI Impacts poll [4], which suggests a rather different finding.

Perhaps the definitions are off or the estimations are?

When expressing ideas driven by fear, or that are to be feared, one might want to tread carefully. Fear as much as hype & tunnel-visioned euphoria, while at times of (strategic, rhetorical, or investment pitching) “use”, are proverbial aphrodisiacs of populist narratives [1, 3, 6, 7]

Such could harm efforts to identify & improve on the issue, or related issues, which might indeed be “real”, urgent & important.

This is not “purely” a science, technology, engineering or mathematics issue. It is more than that while, for instance, through the lens created by Karl Popper, it is also a scientific methodological issue.

—-•
References:

[1] Chevigny, P. (2003). The populism of fear: Politics of crime in the Americas. Punishment & Society, 5(1), 77–96. https://doi.org/10.1177/1462474503005001293

[2] Current World Population estimation ticker: https://www.worldometers.info/world-population/

[3] Friedrichs, J. (n.d.). Fear-anger cycles: Governmental and populist politics of emotion. (Blog). University of Oxford. Oxford Department of International Development. https://www.qeh.ox.ac.uk/content/fear-anger-cycles-governmental-and-populist-politics-emotion

[4] Grace, K., Korzekwa, R., Mills, J., Rintjema, J. (2022, Aug). 2022 Expert Survey on Progress in AI. Online: AI Impacts. Last retrieved 25 August 2022 from https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/#Extinction_from_AI 

[5] Hsu, J. (2022, Sep). A third of scientists working on AI say it could cause global disaster. Online: New Scientist (Paywall). Last retrieved 24 Sep 2022 from https://www.newscientist.com/article/2338644-a-third-of-scientists-working-on-ai-say-it-could-cause-global-disaster/

[6] Lukacs, J. (2006). Democracy and Populism: Fear and Hatred. Yale University Press. 

[7] Metz, R. (2021, May). Between Moral Panic and Moral Euphoria: Populist Leaders, Their Opponents and Followers. (Event / presentation). Online: The European Consortium for Political Research (ecpr.eu). Last retrieved on 25 September 2022 from https://ecpr.eu/Events/Event/PaperDetails/57114

[8] Ritchie, H., Mathieu, E., Rodés-Guirao, L., Gerber, M. (2022, Jul). Five key findings from the 2022 UN Population Prospects. Online: Our World in Data. Last retrieved on 20 September 2022 from https://ourworldindata.org/world-population-update-2022

[9] Vincent, J. (2017, Dec). Tencent says there are only 300,000 AI engineers worldwide, but millions are needed. Online: The Verge. Last retrieved 25 Sep 2022 from https://www.theverge.com/2017/12/5/16737224/global-ai-talent-shortfall-tencent-report

—-•

<< Philo-AI AI-Philo >>

The idea of Philosophy is far from new or alien to the field of AI. In effect, a 1969 paper was already proposing “Why Artificial Intelligence Needs Philosophy”

“…it is important for the research worker in artificial intelligence to consider what the philosophers have had to say…” 

…have to say; will have to say

“…we must undertake to construct a rather comprehensive philosophical system, contrary to the present tendency to study problems separately and not try to put the results together…” 

…besides the observation that the “present tendency” is one that has been present since at least 1969, this quote might, more hope-inducingly, imply the need for integration & transdisciplinarity

This 1969 paper, calling for philosophy, was brought to us by a founder of the field of Artificial Intelligence. Yes. The human who coined the field & its name did not shy away from transdisciplinarity

This is fundamentally important enough to be kept active in the academic & popular debates

Note, philosophy contains axiology, which contains aesthetics & ethics. These are afterthoughts in the present-day narratives that make up some parts of the field of “AI”

Some claim it is not practical. However note, others claim mathematics too is impractical. Some go as far with the dismissal as to state that people studying math (which is different from Mathematics) end up with Excel

These debasing narratives, which are also systematized into our daily modes of operation & relation, are dehumanizing

Such downward narration is not rational, & is tinkering with nuances which are not contained by any model to date

Let us further contextualize this

Machine-acts are at times upwardly narrated & hyped as humanized (i.e. anthropomorphism). Simultaneously human acts are (at times downwardly) mechanized (i.e. mechanomorphism)

These opposing vectors are let loose into the wild of storytelling while answering at times rather opaque needs, & offering unclear outcomes for technologies, packaged with ideological hopes & marketable solutions. The stories are many. The stories are highly sponsored & iterative. The stories are powered by national, financing & corporate interest. ok. & yet via strategic weaponization of story-telling they divide & become divisive. ok; not all. Yet not all whitewash those who do not

In these exciting & mesmerizing orations, who & what is powering the enriching philosophical narratives in a methodological manner for the young, old, the initiated, the outlier or the ignorant? 

Here, in resonance with McCarthy, philosophy (axiology) comes in as practically as mathematics. Both imply the beauty & complexity of a balancing opportunity which does not debase technological creativity. This transdisciplinarity enables humanity.

Nevertheless, Bertrand Russell probably answered, over and again, the question as to why Axiology is paid lip service yet kept at bay: “Men fear thought as they fear nothing else on Earth” (1916)


Reference

McCarthy, J., Hayes, P.J. (1969). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In B. Meltzer and D. Michie. (eds). Machine Intelligence 4, 463–502. Edinburgh University Press
http://jmc.stanford.edu/articles/mcchay69/mcchay69.pdf

OR

McCarthy, J., & Hayes, P. J. (1981). Some Philosophical Problems from the Standpoint of Artificial Intelligence. In Readings in Artificial Intelligence (pp. 431–450). Elsevier. https://doi.org/10.1016/B978-0-934613-03-3.50033-7

<< Boutique Ethic >>

Thinking of what I label as “boutique ethic”, such as AI Ethics, must indeed come with thinking about ethics (Cf. here). I think this is not only an assignment for the experts. It is also one for me: the layperson-learner.

Or is it?

Indeed, if seen through more than a techno-centric lens alone, some voices do claim that one should not be bothered with ethics if one does not understand the technology that is confining ethics into a boutique ethic; e.g. “AI”. (See the 2022 UNESCO report on AI curriculum in K-12). I am learning to disagree.

I am not a bystander, passively looking on, and onto my belly button alone. I open acceptance to Noddings’ thought on care (1995, 187): “a carer returns to the cared-for” when, in the most difficult situations, principles fail us (Rossman & Rallis 2010). How are we caring for those affected by the throwing around of the label “AI” (as a hype or as a scarecrow)?

Simultaneously, how are we caring for those affected by the siphoning off of their data, for application, unknown to the affected, of data derived from them and processed in opaque and ambiguous processes? (One could, as one of the many anecdotes, summon up the polemics surrounding DuckduckGo and Microsoft, or Target and baby product coupons, and so on)

And yet, let us expand back to ethics surrounding the boutiqueness of it: the moment I label myself (or another, such as the humans behind DuckDuckGo) as “stupid”, “monster”, “trash”, “inferior”, “weird”, “abnormal”, “you go to hell” or other more colorful itemizations, is the moment my (self-)care evaporates and my ethical compass moves away from the “...unconditional worth of all human beings and the equal respect to which they are entitled” (Rossman & Rallis 2010). Can then a mantra come to the aid: “carer, return to the cared-for”? I want to say: “yes”.

Though, what is the impact of the mantra if the other does not apply this mantra (i.e., DuckDuckGo and Microsoft)? And yet, I do not want to get into a yoyo “spiel” of:
Speaker 1:“you first”,
Speaker 2: “no, you first”,
Speaker 1: “no, really, you first”.
Here a mantra of “lead by example, and do not throw the first or n-th stone” might be applicable? Is this then implying self-censorship and laissez-faire? No.

I can point at DuckDuckGo and Microsoft as an anecdote, and I think I can learn via ethics, into boutique ethics, what this could mean through various (ethical and other) lenses (to me, to others, to them, to it) while respecting the act of the carer. Through that lens I might wonder what drove these businesses to this condition and use that as a next steppingstone in a learning process. This thinking would take me out of the boutique and into the larger market, and even the larger human community.

The latter is what I base on what some refer to as the “ethic of individual rights and responsibilities” (Ibid). It is my responsibility to learn and ask and wonder. Then I assume that the action of an individual, who has subsequently been debased by a label I were to throw at them (including myself), such as those offered in the preceding sentence, is judged by the “respect to which they are entitled” (Ibid). This is then a principle assuming that “universal standards exist” (Ibid). And yet, on a daily basis, especially on communal days, and that throughout history: I hurdle. After all, we can then play with words: “what is respect and what type of respect are they indeed entitled to?”

I want to aim for a starting point of an “unconditional” respect, however naive that might seem and however meta-Jesus-esque or Gandhi-esque, Dr. King-esque, or Mandela-esque that would require me to become. Might this perhaps be a left libertarian stance? Can I “respectfully” throw the first stone? Or does the eruption lie in the metaphorical of “throwing a stone” rather than the physical?

Perhaps there are non-violent responses that are proportional to the infraction. This might come in handy. I can decide no longer to use DuckDuckGo. However, can I decouple from Microsoft without decoupling from my colleagues, family, community? Herein the learning as activism might then be found in looking for and promoting alternatives toward a technological ecosystem of diversity with transparency, robustness, explainability and fair interoperability.

“Am I a means to their end?” I might then ask, “or am I an end in myself?” This then brings me back to the roles of carer. Are, in this one anecdotal reference, DuckDuckGo and Microsoft truly caring about their users or rather about other stakeholders? Through a capitalist lens one might be inclined to answer and be done with it. However, I prefer to keep an openness for the future, to keep on learning and considering additional diversifying scenarios and acts that could lead to equity for more than the happy few.

Through a lens of thinking about consequences of my actions (which is said to be an opposing ethical stance compared to the above), I sense the outcome of my hurdling is not desirable. However, the introduction of alternatives or methods toward understanding of potentials (without imposing) might be. I do not desire to dismiss others (e.g., cast them out, see them punished, blatantly ignore them with the veil of silenced monologue). At times, I too believe that the act of using a label is not inherently right or wrong. So I hurdle, ignorant of the consequence to the other, their contexts, their constraints, their conditions and ignorant of the cultural vibe or relationships I am then creating. Yes, decomposing a relationship is creating a fragmented composition as much as non-dialog is dialog by absence. What would be my purpose? It’s a rhetorical question, I can guess.

I am able to consider some of the consequence to others (including myself), though not all. Hence, I want to become (more) caring. The ethical dichotomy between thinking about universals or consequence is decisive in the forming of the boutique ethic. Then again, perhaps these seemingly opposing ethics are falsely positioned in an artificial dichotomy. I tend to intuit so. The holding of opposing thought and dissonance is a harmony that simply asks a bit more effort that, to me, is embalmed ever so slightly by the processes of rhizomatic multidimensional learning.

This is why I want to consider boutique ethics while still struggling with being ignorant, yet learning, about types of, and wicked conundrums in, ethics at larger, conflicting and more convoluted scales. So too when considering a technology I am affected by yet ignorant of.

References

Rossman, G. B., Rallis, S. F. (2010). Everyday ethics: reflections on practice. International Journal of Qualitative Studies in Education, 23(4), 379–391

Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley, CA: University of California Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Harvard University Press.

Rossman, G.B., S.F. Rallis. (1998). Learning in the field: An introduction to qualitative research. Thousand Oaks, CA: Sage.

Rossman, G.B., S.F. Rallis. (2003). Learning in the field: An introduction to qualitative research. 2nd ed. Thousand Oaks, CA: Sage.

UNESCO. (2022). K-12 AI curricula: Mapping of government-endorsed AI curricula.

<< Critique: not as a Worry nor Dismissal, but as Co-creative Collective Path-maker >>


In exploring this statement, I wish to take the opportunity to focus on, extrapolate and perhaps contextualize the word “worry” a bit here.

I sense “worry” touches on an important human process of urgency.

What if… we were to consider who might/could be “worried”, and when “worry” is confused or used as a distracting label? Could this give any interesting insight into our human mental models and processes (not of those who do the worrying but rather of those using the label)?

The term might unwittingly end up as if a tool for confusion or distraction (or hype). I seem to notice that “worry,” “opposition,” “reflection,” “anxiety” and “critical thought-exercises,” or “marketing rhetorics toward product promotion,” are too easily confused. [Some examples of convoluted confusions might be (indirectly) hinted at in this article: here —OR— here]

To me, at least, these above listed “x”-terms, are not experienced as equatable, just yet.

As a species, within which a set of humans claims to be attracted to innovation, we might want to innovate (on) not only externals, or symptoms, but also causes, or inherent attributes of the human interpretational processes and the ability to apply nuances therewith, e.g., is something “worrying” or is it not (only) “worrying” and perhaps something else / additional that takes higher urgency and/or importance?

I imagine that in learning these distinctions, we might actually “innovate”.

Engaging in a thought-exercise is an exercise toward an increase of considering altered, alternative or nuanced potential human pathways, towards future action and outcomes, as if exploring locational potentials: “there-1” rather than “there-2” or “there-n;” and that rather than an invitation for another to utter: “don’t worry.”

If so, critical thought might not need to be a subscription to “worry” nor the “dismissal” of 1 scenario, 1 technology, 1 process, 1 ideology, etc, over the other [*1]

Then again, from a user’s point of view, I dare venture that the use of the word “worry” (as in “I worry that…”) might not necessarily be a measurable representation of any “actual” state of one’s psychology. That is, an observable behavior or interpreted (existence of an) emotion has been said to be no guaranteed representation of the mental models or processes of they who are observed (as worrying). [a hint is offered here —OR— here]

Hence, “worry” could be / is at times seemingly used as a rhetorical tool from either the toolboxes of ethos, pathos or logos, and not as an externalization of one’s actual emotional state of that ephemeral moment.

footnote
—-•
[*1]

Herein, in these distinctions, just perhaps, might lie a practical exercise of “democracy”.

If critical thought, rhetoric, anxiety, opposition are piled and ambiguously mixed together, then one might be inclined to self-censor due to the mere sense of overwhelming confusion of not being sure to be perceived as dealing with one over, or instead of, the other.

<< My Data’s Data Culture >>


Far more eloquently described, more than 15 years ago, by Lawrence Lessig, I too sense an open or free culture, and design therein, might be constrained or conditioned by technology, policy, community and market vectors.

I perceived Lessig’s work then to have been focused on who controls your cultural artifacts. These artifacts, I sense, could arguably be understood as types of (in)tangible data sets given meaningful or semiotic form as co-creative learning artifacts (by you and/or others).

I imagine, for instance, “Mickey Mouse” as a data set (perhaps extended, as a cognitive net, well beyond the character?). Mickey, or any other artifact of your choosing, aids one to learn about one’s cultural narratives and, as extended cognition, in positive feedback loops, about one self in communicative co-creation with the other (who is engaged in similar interactions with this and other datasets). However, engaging with a Mickey meant / means risking prosecution under IPR (I wrote on this through an artistic lens here).

Today, such data sets for one’s artificial learning (i.e. learning through a human-made artifact) are (also) we ourselves. We are data. Provocatively: we are (made) artificial by the artificial. Tomorrow’s new psychoanalyst-teacher could very well be your friendly neighborhood autonomous data visualizer; or so I imagine.

Mapping Lessig, with the article below, and with many of the sources one could find (e.g.: Jason Silva, Kevin Kelly, Mark Sprevak, Stuart Russell, Kurzweil, Yuval Noah Harari, Kaśka Porayska-Pomsta) I am enabled to ponder:

Whom do the visualizations serve? Whose privacy and preferences do they interfere with? Whose data is alienated beyond the context within which its use was intended? Who owns (or has the IPR on) the data learned from the data I create during my co-creative cultural learning (e.g.: online social networking, self-exhibition as well as more formal online learning contexts); allowing third parties to learn more about me than I am given access to learn about myself?

Moreover, differently from those who own Mickey, who of us can sue the users of our data, or the artifacts appropriated therefrom, as if it were (and actually is) our own IPR?

Given the spirit of artificial intelligence in education (AIED), I felt that the following article, published these past days on such data use that is algorithmically processed in questionable ethical or open manners, could resonate with others as well. (ethics, aiethics)

Epilogue — A quote:

“The FTC has required companies to disgorge ill-gotten monetary gains obtained through deceptive practices, forcing them to delete algorithmic systems built with ill-gotten data could become a more routine approach, one that modernizes FTC enforcement to directly affect how companies do business.”

References

https://www-protocol-com.cdn.ampproject.org/c/s/www.protocol.com/amp/ftc-algorithm-destroy-data-privacy-2656932186

Lessig’s last speech on free culture: here

Lessig’s Free Culture book: here

The Field of AI (Part 06): “AI”; a Definition Machine?

Definitions beyond “AI”: an introduction.


“You shall know a word by the company it keeps.”

– Krohn, J.[1]

Definitions are artificial meaning-giving constructs. A definition is a specific linguistic form with a specific function. Definitions are patterns of weighted attributes, handpicked by means of (wanted and unwanted) biases. A definition is then a category of attributes referring to a given concept, which, in turn, aims at triggering a meaning of that targeted concept.

Definitions are aimed at controlling such meaning-giving: what the concept could refer to and what it can contain within its proverbial borders. The specified attributes, narrated into a set (i.e. a category), make up its construct as to how some concept is potentially understood.

The preceding sentences could be seen as an attempt to a definition of the concept “definition” with a hint of how some concepts in the field of AI itself are defined (hint: have a look at the definitions of “Artificial Neural Networks” or of “Machine Learning” or of “Supervised and Unsupervised Learning”). Let us continue looking through this lens and expand on it.

Definitions can be constructed in a number of ways. For instance: they can be constructed by identifying or deciding on, and giving a description of, the main attributes of a concept. This could be done, for instance, by analyzing and describing forms and functions of the concept. Definitions could, for instance, be constructed by means of giving examples of usage or application; by stating what some concept is (e.g. synonyms, analogies) and is not (e.g. antonyms); by referring to a historical or linguistic development (e.g. its etymology, grammatical features, historical and cultural or other contexts, etc.); by comparison with other concepts in terms of similarities and differentiators; by describing how the concept is experienced and how not; by describing its needed resources, its possible inputs, its possible outputs, intended aims (as a forecast), actual outcome and larger impact (in retrospect). There are many ways to construct a definition. So too is it with a definition for the concept of “Artificial Intelligence”.

For a moment, as another playful side-note, by using our imagination and by trying to make the link between the process of defining and the usage of AI applications stronger: one could imagine that an AI solution is like a “definition machine.”

One could then imagine that this machine gives definition to a data set –by offering recognized patterns from within the data set– at its output. This AI application could be imagined as organizing data via some techniques. Moreover, the application can be imagined to be collecting data as if attributes of a resulting pattern. To the human receiver this in turn could then define and offer meaning to a selected data set. Note, it also provides meaning to the data that is not selected into the given pattern at the output. For instance: the data is labelled as “cat”, not “dog”, while also some data has been ignored (by filtering it out; e.g. the background “noise” around the “cat”). Did this imagination exercise allow one to make up a definition of AI? Perhaps. What do you think? Does this definition satisfy your needs? Does it do justice to the entire field of AI from its “birth”, its diversification process along the way, to “now”? Most likely not.
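
Before moving on, this playful image can be made slightly more concrete with a minimal, hypothetical Python sketch (the attribute names, numbers and labels are invented for illustration; no specific AI method or product is implied):

    # A toy "definition machine": it "defines" an input by the labelled pattern
    # it sits closest to, ignoring whatever no pattern accounts for (the "noise").
    examples = {                      # handpicked, invented attribute patterns
        "cat": (4.0, 1.0),            # e.g. (ear pointiness, bark loudness)
        "dog": (2.0, 5.0),
    }

    def define(attributes):
        """Return the label whose pattern lies nearest to the given attributes."""
        def distance(label):
            return sum((a - p) ** 2 for a, p in zip(attributes, examples[label]))
        return min(examples, key=distance)

    print(define((3.8, 1.2)))         # -> "cat": the machine "defines" the input

Note how the toy machine “defines” new input only by the handpicked, weighted attributes it was given; a caricature of the weighted-attribute view of definitions described above.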

A human designer of a definition likely agrees with the selected attributes (though not necessarily) while those receiving the designed definition might agree that it offers a pattern but not necessarily the meaning-giving pattern they would construct. Hence, definitions tend to be contested, fine-tuned, altered, updated, dismissed altogether over time and, depending on the perspective, used to review and qualify other yet similar definitions. It almost seems that some definitions have a life of their own while others are, understandably, safely guarded to be maintained over time.

When learning about something and when looking a bit deeper than the surface, one is then quickly presented with numerous definitions of what was thought to be one and the same thing yet which show variation and diversity in a field of study. This is OK. We, as individuals within our species, are able to handle, or at least live with, ambiguities, uncertainties and change. These, by the way, are also some of the reasons why, for instance and to some extent, the fields of Statistics, Data Science and AI (with presently the sub-fields of Machine Learning and Deep Learning) exist.

The “biodiversity” of definitions can be managed in many ways. One can manage different ideas at the same time in one’s head. It is as one can think of black and white and a mix of the two, in various degrees, and that done simultaneously; while also introducing a plethora of additional colors. This can still offer harmony in one’s thinking. If that doesn’t work, one can give more importance to one definition over another, depending on some parameters befitting the aim of the learning and the usage of the definition (i.e. one’s practical bias of that moment in spacetime). One can prefer to start simple, with a reduced model as offered in a modest definition, while (willingly) ignoring a number of attributes. One could then remind oneself not to equate this simplified model / definition with the larger complexities of that which it only begins to define.

One can apply a certain quality standard to allow the usage of one definition over another. One could ask a number of questions to decide on a definition. For instance: Can I still find out who made the definition? Was this definition made by an academic expert or not, or is it unknown? Was it made a long time ago or not; and is it still relevant to my aims? Is it defining the entire field or only a small section? What is intended to be achieved with the definition? Do some people disagree with the definition; why? Does this (part of the) definition aid me in understanding, thinking about or building on the field of AI or does it rather give me a limiting view that does not allow me to continue (a passion for) learning? Does the definition help me initiate creativity, grow eagerness towards research, development and innovation in or with the field of AI? Does this definition allow me to understand one or other AI expert’s work better? If one’s answer is satisfactory at that moment, then use the definition until proven inadequate. When inadequate, reflect, adapt and move on.

With this approach in mind, the text here offers a further 10 considerations and “definitions” of the concept of “Artificial Intelligence”. For sure, other and perhaps “better” ones can be identified or constructed.


“AI” Definitions & Considerations

#1 An AI Definition and its Issues.
The problem with many definitions of Artificial Intelligence (AI) is that they are riddled with what are called “suitcase words”. They are “…terms that carry a whole bunch of different meanings that come along even if we intend only one of them. Using such terms increases the risk of misinterpretations…”.[2] This term, “suitcase words”, was created by a world-famous computer scientist who is considered one of the leading figures in the development of AI technologies and the field itself: Professor Marvin Minsky.

#2 The Absence of a Unified Definition.
On the global stage or among all AI researchers combined, there is no official (unified) definition of what Artificial Intelligence is. It is perhaps better to state that the definition is continuously changing with every invention, discovery or innovation in the realm of Artificial Intelligence. It is also interesting to note that what was once seen as an application of AI is (by some) now no longer seen as such (and sometimes “simply” seen as statistics or as a computer program like any other). On the other end of the spectrum, there are those (mostly non-experts or those with narrowed commercial aims) who will identify almost any computerized process as an AI application.

#3 AI Definitions and its Attributes.
Perhaps a large number of researchers might agree that an AI method or application has been defined as “AI” due to the combination of the following 3 attributes:

it is made by humans or it is the result of a technological process that was originally created by humans,

it has the ability to operate autonomously (without the support of an operator; it has ‘agency’[3]) and

it has the ability to adapt (behaviors) to, and improve within, changing contexts (i.e. changes in the environment); and this by means of a kind of technological process that could be understood as a process of “learning”. Such “learning” can occur in a number of ways. One way is to “learn” by trial-and-error or by “rote learning” (e.g. the storing in memory of a solution to a problem). A more complex way of applying “learning” is by means of “generalization”. This means the system can “come up” with a solution, by generalizing some mathematical rule or set of rules from given examples (i.e. data), to a problem that was not previously encountered. The latter would be more supportive towards being adaptable in changing and uncertain environments.
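
This contrast between rote learning and generalization might be sketched in a few hypothetical lines of Python (the example numbers and the fitted rule are invented for illustration): rote learning only replays stored solutions, while generalization extracts a rule from the same examples and applies it to an unseen problem.

    # Rote "learning": store seen solutions; fail on anything not yet encountered.
    memory = {1: 2, 2: 4, 3: 6}       # invented problem -> solution examples
    print(memory.get(4))              # -> None: no stored solution for 4

    # Generalization: infer a rule from the same examples, apply it to new input.
    xs, ys = list(memory), list(memory.values())
    slope = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    print(slope * 4)                  # -> 8.0: a "solution" to an unseen problem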

#4 AI Definitions by Example.
Artificial Intelligence could, alternatively, also be defined by listing examples of its applications and methods. As such, some might define AI by listing its methods (which are individual methods in the category of AI methods; see also, below, one listing of types and methods towards defining the AI framework): AI then, for instance, includes Machine Learning, Deep Learning and so on.

Others might define AI by means of its applications whereby AI is, for instance, a system that can “recognize”, locate or identify specific patterns or distinct objects in (extra-large, digital or digitized) data sets where such data sets could, for instance, be an image or a video of any objects (within a set), a set or string of (linguistic) sounds, be it prerecorded or in real-time, via a camera or other sensor. These objects could be a drawing, some handwriting, a bird sound, a photo of a butterfly, a person uttering a request, a vibration of a tectonic plate, and so on (note: the list is, literally, endless).

#5 AI Defined by referencing Human Thought.
Other definitions define AI as a technology that can “think” as the average human does (yet, perhaps, with far more processing power and speed)… These would be “…machines with minds, in the full and literal sense… [such] AI clearly aims at genuine intelligence, not a fake imitation.”[4] Such a definition creates AI research and developments driven by “observations and hypothesis about human behavior”; as is done in the empirical sciences.[5] At the moment of this writing, the practical execution of this definition has not yet been achieved.

#6 AI Defined by Referencing Human Actions.
Further definitions of what AI is do not necessarily focus on the ability of thought. Rather, some definitions for AI focus on the act that can be performed by an AI technology. Then definitions are something like: an AI application is a technology that can act as the average human can act, or do things, with perhaps far more power, strength, speed and without getting tired, bored, annoyed or hurt by features of the act or the context of the act (e.g. work inside a nuclear reactor). Ray Kurzweil, a famous futurist and inventor in technological areas such as AI, defined the field of AI as: “The art of creating machines that perform functions that require intelligence when performed by people.”[6]

#7 Rational Thinking at the Core of AI Definitions.
Different from the 5th definition is that thought does not necessarily have to be defined through a human lens or anthropocentrically. As humans we tend to anthropomorphize some of our technologies (i.e. give a human-like shape, function, process, etc. to a technology). Though, AI does not need to take on a human-like form, function nor process; unless we want it to. In effect, an AI solution does not need to take on any corporal / physical form at all. An AI solution is not a robot; it could be embedded into a robot.

One could define the study of AI as a study of “mental faculties through the use of computational models.”[7] Another manner of defining the field in this way is stating that it is the study of the “computations that make it possible to perceive, reason and act.”[8] [9]

The idea of rational thought goes all the way back to Aristotle and his aim to formalize reasoning. This could be seen as a beginning of logic. This was adopted early on as one of the possible methods in AI research towards creating AI solutions. It is, however, difficult to implement. This is the case since not everything can be expressed in a formal logic notation and not everything is perfectly certain. Moreover, not all problems are practically solvable by logic principles, even if via such logic principles they might seem solved.[10]

#8 Rational Action at the Core of AI Definitions.
A system is rational if “it does the ‘right thing’, given what it knows.” Here, a ‘rational’ approach is an approach driven by mathematics and engineering. As such “Computational Intelligence is the study of the design of intelligent agents…”[11] To have ‘agency’ means to have the autonomous ability and to be enabled to act / do / communicate with the aim to perform a (collective) task.[12] Scientists, with this focus in the field of AI, research “intelligent behavior in artifacts”.[13]

Such an AI solution that can function as a ‘rational agent’ applies a form of logical reasoning and would be an agent that can act according to given guidelines (i.e. input) yet do so autonomously, adapt to environmental changes, and work towards a goal (i.e. output) with the best achievable results (i.e. outcome) over a duration of time, and this in a given (changing) space influenced by uncertainties. The application of this definition would not always result in a useful AI application. Some complex situations would, for instance, be better responded to with a reflex rather than with rational deliberation. Think about a hand on a hot stove…[14]
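
The reflex-versus-deliberation point might be sketched as follows (a hypothetical Python sketch; the percepts, actions and utilities are invented for illustration): a hard-wired reflex and a deliberating agent can reach the same act, at very different costs.

    def reflex_agent(percept):
        # condition-action rule: no model, no deliberation, immediate response
        return "withdraw hand" if percept == "burning heat" else "do nothing"

    def deliberative_agent(percept):
        # weigh the (assumed) utility of each candidate action before acting
        utilities = {
            ("burning heat", "withdraw hand"): 0,
            ("burning heat", "keep hand in place"): -100,
        }
        candidates = [a for (p, a) in utilities if p == percept]
        return max(candidates, key=lambda a: utilities[(percept, a)])

    print(reflex_agent("burning heat"))        # fast, hard-wired, here also correct
    print(deliberative_agent("burning heat"))  # same act, but only after "thinking"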

#9 Artificial Intelligence methods as goal-oriented agents.
“Artificial Intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience and decision theory.”[15]

#10 AI Defined by Specific Research and Development Methods.
We can somewhat understand the possible meaning of the concept “AI” by looking at what some consider the different types or methods of AI, or the different future visions of such types of AI (in alphabetic order)[16]:

Activity Recognition

  • a system that knows what you are doing and acts accordingly. For instance: it senses that you carry many bags, so it automatically opens the door for you (without you needing to verbalize the need).

Affective Computing

  • a system that can identify the emotion someone showcases

Artificial Creativity

  • A system that can output something that is considered creative (e.g. a painting, a music composition, a written work, a joke, etc.)

Artificial Immune System

  • A system that functions in the likes of a biological immune system or that mimics its processes of learning and memorizing.

Artificial Life

  • A system that models a living organism

Artificial Stupidity

  • A system that adapts to the intellectual capacity of the form (life form, human) it interacts with or to the needs in a given context.

Automation

  • The adaptable mechanical acts coordinated by a system without the intervention of a human

Blockhead

  • A “fake” AI that simulates intelligence by referencing (vast) data repositories and regurgitating the information at the appropriate time. This system however does not learn.

Bot

  • A system that functions as a bodiless robot

ChatBot / ChatterBot

  • A system that can communicate with humans via text or speech giving the perception to the human (user) that it is itself also human. Ideally it would pass the Turing test.

Committee Machine

  • A system that combines the output from various neural networks. This could create a large-scale system.

Computer Automated Design

  • A system that can be put to use in areas of creativity, design and architecture that allow and need automation and calculation of complexities

Computer Vision

  • A system that via visual data can identify (specific) objects

Decision Support System

  • A system that adapts to contextual changes and supports human decision making

Deep Learning

  • A system operating on a sub-type of Machine Learning methods (see a future blog post for more info)

Embodied Agent

  • A system that operates in a physical or simulated “body”

Ensemble Learning

  • A system that applies many algorithms for learning at once.

Evolutionary Algorithms

  • A system that mimics biological evolutionary processes: birth, reproduction, mutation, decay, selection, death, etc. (see a future blog post for more info)

Friendly Artificial Intelligence

  • A system that is devoid of existential risk to humans (or other life forms)

Intelligence Amplification

  • A system that increases human intelligence

Machine Learning

  • A system of algorithms that learns from data sets and which is strikingly different from a traditional program (fixed by its code). (see a future blog post for more info)

Natural Language Processing

  • A system that can identify, understand and create speech patterns in a given language. (see a future blog post for more info)

Neural Network

  • A system that historically mimicked a brain’s structure and function (neurons in a network), though is now driven by statistical and signal processing. (see another of my blog posts for more info here)

Neuro Fuzzy

  • A system that applies a neural network operating on fuzzy logic: a non-linear, non-Boolean logic (values between 0 and 1, and not only 0 or 1). It allows for further interpretation of vagueness and uncertainty (a minimal membership sketch follows this list).

Recursive Self-Improvement

  • A system that allows for software to write its own code in cycles of self-improvement.

Self-replicating Systems

  • A system that can copy itself (hardware and/or software copies). This is researched for (interstellar) space exploration.

Sentiment Analysis

  • A system that can identify emotions and attitudes embedded into human media (e.g. text)

Strong Artificial Intelligence

  • A system that has a general intelligence as a human does. This is also referred to as AGI or Artificial General Intelligence. This does not yet exist and might, if we continue to pursue it, take decades to come to fruition. When it does, it might start recursive self-improvement and autonomous reprogramming, creating an exponential expansion in intelligence well beyond the confines of human understanding. (see a future blog post for more info)

Superhuman

  • A system that can do something far better than humans can

Swarm Intelligence

  • A system that can operate across a large number of individual (hardware) units and organizes them to function as a collective

Symbolic Artificial Intelligence

  • An approach used between 1950 and 1980 that limits computations to the manipulation of a defined set of symbols, resembling a language of logic.

Technological Singularity

  • A hypothetical system of super-intelligence and rapid self-improvement out of the control and beyond the understanding of any human. 

Weak Artificial Intelligence

  • A practical system of singular or narrow applications, highly focused on a problem that needs a solution via learning from given and existing data sets. This is also referred to as ANI or Artificial Narrow Intelligence.
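
As a small aside on the “Neuro Fuzzy” entry above, here is a minimal fuzzy-membership sketch in Python (the thresholds are invented for illustration): a statement such as “the room is warm” holds to a degree between 0 and 1 rather than being simply true or false.

    def warm_membership(temp_c):
        """Degree (0.0..1.0) to which temp_c counts as 'warm'."""
        if temp_c <= 15:
            return 0.0
        if temp_c >= 25:
            return 1.0
        return (temp_c - 15) / 10     # linear ramp between 15 and 25 degrees C

    for t in (10, 18, 22, 30):
        print(t, warm_membership(t))  # -> 0.0, 0.3, 0.7, 1.0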

Project Concept Examples

Mini Project #___: An Application of a Definition

  • Q&A: Do you know any program or technological system that (already) fits this 5th definition? How would you try to know whether or not it does?

Mini Project #___: Some Common Definitions of AI with Examples

  • Team work + Q&A: What is your team’s definition of AI? What seems to be the most accepted definition in your daily-life community and in a community of AI experts closest to you?
  • Reading + Q&A: Go through some popular and less popular definitions with examples.
  • Discussion: Which definition of AI feels more acceptable to your team; why? Which definition seems less acceptable to you and your team? Why? Has your personal and first definition of AI changed? How?
  • Objectives: The learner can bring together the history, context, types and meaning of AI into a number of coherent definitions.

References & URLs


[1] Krohn, J., et al. (2019). p.102: on the importance of context in meaning-giving; NLP through Machine Learning and Deep Learning techniques

[2] Retrieved from Ville Valtonen at Reaktor and Professor Teemu Roos at the University of Helsinki’s “Elements of AI”, https://www.elementsofai.com/ , on December 12, 2019

[3] agent’ is from Latin ‘agere’ which means ‘to manage’, ‘to drive’, ‘to conduct’, ‘to do’. To have ‘agency’ means to have the autonomous ability and to be enabled to act / do / communicate with the aim to perform a (collective) task.

[4] Haugeland, J. (Ed.). (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: The MIT Press. p. 2 and footnote #1.

[5] Russell, S. and Peter Norvig. (2016). Artificial Intelligence: A Modern Approach. Third Edition. Essex: Pearson Education. p.2

[6] Russell. (2016). pp.2

[7] Winston, P. H. (1992). Artificial Intelligence (Third edition). Addison-Wesley.

[8] These are two definitions, respectively from Charniak & McDermott (1985) and Winston (1992), as quoted in Russell, S. and Peter Norvig (2016).

[9] Charniak, E. and McDermott, D. (1985). Introduction to Artificial Intelligence. Addison-Wesley

[10] Russell (2016). pp.4

[11] Poole, D., Mackworth, A. K., and Goebel, R. (1998). Computational intelligence: A logical approach. Oxford University Press

[12] ‘agent’ is from Latin ‘agere’ which means ‘to manage’, ‘to drive’, ‘to conduct’, ‘to do’

[13] Russell. (2016). pp.2

[14] Russell (2016). pp.4

[15] Maini, V. (Aug 19, 2017). Machine Learning for Humans. Online: Medium.com. Retrieved November 2019 from e-Book https://www.dropbox.com/s/e38nil1dnl7481q/machine_learning.pdf?dl=0 or https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12

[16] Spacey, J. (2016, March 30). 33 Types of Artificial Intelligence. Retrieved from https://simplicable.com/new/types-of-artificial-intelligence  on February 10, 2020

Header image caption, credits & licensing:

Depicts the node connections of an artificial neural network

LearnDataSci / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)

source: https://www.learndatasci.com/

retrieved on May 6, 2020 from here