Tag Archives: education

<< Learning is Relational Entertainment; Entertainment is Shared Knowledge; Knowledge is... >>

context: Tangermann, Victor. (Feb 16, 2023). Microsoft: It’s Your Fault Our AI Is Going Insane. They’re not entirely wrong. In: Futurism (online). Last retrieved on 23 February 2023 from https://futurism.com/microsoft-your-fault-ai-going-insane

LLM-type technologies, and their spin-offs or augmentations, are made accessible in a different context than technologies whose operation requires regulation, training, (re)certification and controlled access.

If the end-user holds the (main) weight of duty-of-care, then such training, certification, regulation and limited access should be put into place. Do we have that, and more importantly: do we really want that?

If we do want that, then how would it be formulated, implemented and enforced? (Think: present-day technologies such as online proctoring, keystroke-recording spyware, Pegasus spyware, Foucault’s Panopticon or the more contextually pungent “1984”.)

If the end-user does not hold that weight and the manufacturer does, and/or if training, (re)certification, access controls and user-relatable laws that could define the “dos and don’ts” are not readily available, then… where is the duty-of-care?

Put this question of (shared) duty-of-care in the light of critical analysis, and of this company supposedly already knowing of these issues in November 2022, and then again… where is the duty-of-care? (Ref: https://garymarcus.substack.com/p/what-did-they-know-and-when-did-they?r=drb4o )

Thirdly, put these points in the context of disinformation versus information by, e.g., comparing the statistics used by an LLM-based product with the deliverables offered to the public by initiatives such as http://gapminder.org, http://ourworldindata.org or http://thedeep.io, to highlight but three instances of a differently systematized and methodological approach to the end-user (one can agree or disagree with these; that is another topic).

So, here are two systems, both applying statistics. One system aims at reducing our ignorance; the other at… increasing ignorance (for “entertainment” purposes… sure)? The latter has serious financial backing; the former has…?

Do we as a social collective and market-builders then have our priorities straight? Knowledge is no longer power. Knowledge is submission to “dis-“ packaged as democratized, auto-generating entertainment.

#entertainUS

Epilogue-1:

Questionable “generating” (see “auto-generating entertainment” above), while arguably standing on the shoulders of others; rather: mimicry, recycling, or verbatim copying without corroboration, reference, ode or attribution. Or “stochastic parroting,” as offered by Prof. Dr. Emily M. Bender, Dr. Timnit Gebru et al., is relevant here as well. Thank you, Dr. Walid Saba, for reminding us. (This, and they, perhaps suggest a fourth dimension of lacking duty-of-care.)

Epilogue-2:

To make a case: I ran an inquiry through ChatGPT requesting a list of books on abuses of statistics, and about 50% of the titles did not seem to exist, or are so obscure that no human search could easily reveal them. In addition, a few obvious titles were not offered. I tried to clean the list up and add to it here below.
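
Part of that cleanup can be automated. Below is a minimal sketch (assuming Python; the function and the toy catalog are illustrative, not an existing tool) of flagging model-suggested titles that a catalog lookup cannot confirm. The lookup is passed in as a parameter, so in practice it could wrap a real library API (the Open Library search endpoint is one option), while the filter itself stays testable offline.

```python
from typing import Callable, Iterable

def unverified_titles(
    titles: Iterable[str],
    lookup: Callable[[str], bool],
) -> list[str]:
    """Return the titles for which the catalog lookup finds no match.

    `lookup` takes a title string and returns True when a catalog
    (e.g. a library API) confirms that the book exists.
    """
    return [t for t in titles if not lookup(t)]

# A toy in-memory catalog standing in for a real API call.
KNOWN = {
    "How to Lie with Statistics",
    "Damned Lies and Statistics",
}

def toy_lookup(title: str) -> bool:
    return title in KNOWN

suspect = unverified_titles(
    ["How to Lie with Statistics", "Statistics for Time Travelers"],
    toy_lookup,
)
print(suspect)  # ['Statistics for Time Travelers']
```

Such a filter only surfaces candidates for human review; an obscure-but-real title missing from the catalog would be flagged too, so it narrows the search rather than settling it.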

bibliography:

Baker, L. (2017). Truth, Lies & Statistics: How to Lie with Statistics.

Barker, H. (2020). Lying Numbers: How Maths & Statistics Are Twisted & Abused.

Best, J. (2001). Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. Berkeley, CA: University of California Press.

Best, J. (2004). More Damned Lies and Statistics.

Blastland, M., & Dilnot, A. (2007). The Tiger That Isn’t: Seeing Through a World of Numbers.

Ellenberg, J. (2014). How Not to Be Wrong.

Gelman, A., & Nolan, D. (2002). Teaching Statistics: A Bag of Tricks. New York, NY: Oxford University Press.

Huff, D. (1954). How to Lie with Statistics. New York, NY: W. W. Norton & Company.

Levitin, D. (2016). A Field Guide to Lies: Critical Thinking in the Information Age. Dutton.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown.

Rosling, H., Rosling, O., & Rosling Rönnlund, A. (2018). Factfulness: Ten Reasons We’re Wrong About the World–and Why Things Are Better Than You Think. New York, NY: Flatiron Books.

Seethaler, S. (2009). Lies, Damned Lies, and Science: How to Sort Through the Noise Around Global Warming, the Latest Health Claims, and Other Scientific Controversies. Upper Saddle River, NJ: FT Press.

Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. New York, NY: Penguin Press.

Stephens-Davidowitz, S. (2017). Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.

Tufte, E. R. (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

Wheeler, M. (1976). Lies, Damn Lies, and Statistics: The Manipulation of Public Opinion in America.

Ziliak, S. T., & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, MI: University of Michigan Press.


References

This post was triggered by:

https://www.linkedin.com/posts/katja-rausch-67a057134_microsoft-its-your-fault-our-ai-is-going-activity-7034151788802932736-xxM6?utm_source=share&utm_medium=member_desktop

Thank you, Katja Rausch.

and by:

https://www.linkedin.com/posts/marisa-tschopp-0233a026_microsoft-its-your-fault-our-ai-is-going-activity-7034176521183354880-BDB4?utm_source=share&utm_medium=member_desktop

Thank you, Marisa Tschopp.

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922

“Verbatim copying” in the above post’s epilogue was triggered by Dr. Walid Saba’s recent post on LinkedIn:

https://www.linkedin.com/posts/walidsaba_did-you-say-generative-ai-generative-ugcPost-7035419233631039488-tct_?utm_source=share&utm_medium=member_ios

This blog post on LinkedIn

<< Fair Technologies & Gatekeepers at the Fair >>

As summarized by Patton (1997), Sirotnik (1990) pointed at the importance of “equality, fairness and concern for the common welfare.” This applies to processes of evaluation, and to the implementation of interventions (in education), through the participation of those who will be most affected by what is being evaluated. These authors, among many others, offer various insights into practical methods and forms of evaluation, some more or less participatory in nature.

With this in mind, let us now divert our gaze to the stage of “AI”-labeled research, application, implementation and hype. Let us then look deeper into its evaluation (via expressing ethical concern or critique).

“AI” experts, evaluators and social media voices warn and call for fair “AI” application (in society at large, and thus also into education).

These calls occur while some raise ethical concerns related to fairness, and others dismiss those concerns while also discounting the very people who voice them. To an outsider looking in on the public polemics, it might seem “cut-throat.” And yet, if we are truly concerned about ethical implications, and about answering the needs of peoples, this violent image, and the (de)relational acts it collects in its imagery, have no place. Debate, evaluate, dialog: yes. Debase, depreciate, monologue: no. Yes, it does not go unnoticed that a post such as this one initially runs as a monologue, only quietly inviting dialog.

So long as experts, and perhaps any participant alike, are their own pawns, they are also violently placed according to their loyalties to very narrow tech formats. Submit to the theology of the day and thou shalt be received. Voices are tested, probed, inducted, stripped or denied. Evaluation as such is violent and questionably serves that “common welfare” (ibid.). Individuals are silenced or over-amplified, while others are simply not considered worthy to be heard, let alone understood. These processes, too, have been assigned to “AI”-relatable technologies (i.e., algorithmic designs and how messages are boosted or muted). So goes the human endeavor. So goes the larger human need, in place of the forces of market gain and the loudest noises on the information highways which we cynically label as “social.”

These polemics occur while, in almost the same breath, the debate is kept within the bubble of the same expert voices: the engineer, the scientist, the occasional policymaker, the business leader (at times narrated as if in their own echo chambers). The experts are “obviously” “multidisciplinary.” That is, many seem to come, tautologically, from within the fields of Engineering, Computer Science, Cognitive Science, Cybernetics, Data Science, Mathematics, Futurology, (some) Linguistics, Philosophy of Technology, or variations, sub-fields and combinations thereof. And yet, even there, the rational goes off the rails, and theologies and witches become but distastefully mislabeled dynamics, pitchforking human relations.

Some of the actors in these theatrical stagings have no industrial backing. This is while other characters are perceived as comfortably leaning against the strongholds of investors, financial and geopolitical forces or mega-sized corporations. This is known. Though, it is a taboo (and perhaps defeatist) to bundle it all as a (social) “reality.” Ethical considerations on these conditions are frequently hushed or more easily burned at the stake. Mind you, across human history, this witch never was a witch, yet was evaluated to better be known as a witch; or else…

In similarity to how some perceive the financially rich –as taking stages to claim insight into any aspect of human life, needs, urgency, and decisions on filters of what is important– so too could gatekeepers in the field of “AI” and its peripheries (Symbolic, ML or Neuro-symbolic) be perceived to be deciding for the global masses what is needed for these masses, and what is to be these masses’ contribution to guiding the benefit of “AI” applications for all. It is fair and understandable that such considerations start where those with insight wander. It is also fair to state that the “masses” cannot really be asked. And yet, who then? When then? How then? Considering the proclaimed impacts of output from the field of “AI,” is it desirable that the thinking and acting stay where the bickering industry and academic experts roam?

Let us give this last question some context:

For instance, in too broad and yet more concrete strokes: hardly anyone from the pre-18-year-old generations is asked, let alone prepared, to critically participate in the supposedly transformational thinking and deployment of “AI”-related, or hyped, output. This might possibly be because these young humans are hardly offered the tools to gain insight beyond the skill of building some “AI”-relatable deliverable. The techno-focused techniques are heralded, again understandably (yet questionably as universally agreeable), as a must-have before entering the job market. Again, to be fair, sprinkled approaches to critical and ethical thinking are presented to these youngsters (in some schools). Perhaps a curriculum or two, some designed at MIT, might come to mind. Nevertheless, seemingly, some of these attributes are only offered as mere echoes within techno-centrist curricula. Is this an offering that risks flavoring learning with ethics-washing? Is (re)considering where the voices of reflection come from, are nurtured and are located, a Luddite’s stance? As a witch once was, it too could easily be labeled as such.

AI applications are narrated as transforming, ubiquitous, methodological, universal, enriching and engulfing. Does ethical and critical skill-building similarly meander through the magical land of formalized learning? Ethical and critical processes (including computational thinking beyond “coding” alone), of thought and act, seem shunned and feared to become ubiquitous, methodological, enriching, engulfing or universal (even if diverse, relative, depending on context, or depending on multiple cultural lenses). Is this a systemic pattern, as an undercurrent in human societies? At times it seems that they who court such metaphorical creek are shunned as the village fool or its outskirt’s witch.

Besides youth, color and gender see their articulations staged while filtered through the few voices that represent them from within the above-mentioned fields. Are then more than some nuances silenced, yet scraped for handpicked data, and left to bite the dust?

Finally, looking at the conditions in further abstract (with practical and relational consequences): humans are mechanomorphized (i.e., seen as machines of not much more complexity than human-made machinery, and, like an old computer, dismissed if no longer befitting the mold or the preferred model). Simultaneously, while pulling in the opposite direction, the artificial machines are being anthropomorphized (i.e., made to act, look and feel as if of human flesh and psyche). Each of their applied, capitalized techniques is glorified. Some machines are promoted (through lenses of fear-mongering, one might add) as better at your human job, or as better than an increasing number of features you might perceive as part of becoming human(e).

Looking at the above, broadly brushed, scenarios:

Can we speak of fair technologies (e.g. fair “AI” applications) if the gatekeepers to fairness are mainly those who create the machinery, or those who sanction its creation? Can we speak of fair technologies if they who do create them have no constructive space to critique, to evaluate via various vectors, and to construct creatively through such feedback loops of thought-to-action? Can we speak of fair technology, or fair “AI” applications, if they who are influenced by its machinery, now and into the future, have few tools to question and evaluate? Should fairness, and its tools for evaluation, be kept aside for the initiated alone?

While a human is not the center of the universe (anymore / for an increasing number of us), the carefully nurtured tools to think, participatorily evaluate and (temporarily) place the implied transformations, are sorely missing, or remain in the realm of the mysterious, the magical, witches, mesmerizations, hero-worship or feared monstrosities.

References:

Patton, M.Q. (1997). Intended Process Uses: Impacts of Evaluation, Thinking and Experiences. In: Patton, M.Q. Utilization-focused Evaluation: The New Century Text (Ch. 5, pp. 87–113). London: Sage.

Sirotnik, Kenneth A. (ed.). (1990). Evaluation and Social Justice: Issues in Public Education. New Directions for Program Evaluation, No. 45. San Francisco: Jossey-Bass.

Looking into underpinnings of learning and tech in and beyond technology-imbued learning environments


Over the past 150 years, give or take, the most impactful theories of learning have been distilled into a few isms. According to UNESCO’s International Bureau of Education, these can be identified as: behaviorism, cognitive psychology, constructivism, social learning theory, socio-constructivism, experiential learning, multiple intelligences, situated learning theory, community of practice, and 21st century learning.

Intuitively, one might sense various filtering attributes within the above paragraph; for instance “150 years,” “most impactful,” “theories,” “isms.” While these listed theories might find their roots in yet other pedagogical theories and practices, the constraining parameters might either function as biases and blinders, or rather might be enriched or contextualized.

Tribal methods of knowledge transfer, for instance, are not mentioned in this list; a list that feels, ever so slightly, Eurocentric. If one were to imagine a child learning in a “remote” tribe (or in various less remote yet self-sustaining communities) where such theories have not penetrated the colorful, diverse foliage of community learning, is the child then learning via a less defined or less impactful theory? Stretching beyond the intended meaning: would the learning then be less impactful?

Even within flavors of the New or Old World, methods of learning that could be distilled from those described in classical settings remain hidden, or perhaps insufficiently explicit: Plato (in The Republic), as well as Socrates, and their respective methodologies.

Yet another example, stretching both the space and time of impactful theories: the thousands of years of highly elite and selective learning methods in China are absent from the list, as is the country’s present-day methodological implementation of Labor Education, one of the five pillars of educational and learning theory and practice there.

Comparisons, or cross-pollinatable attributes, of, for instance, Labor Education with scientifically corroborated methodologies, such as Montessorian methods and theory (or perhaps even links with character-building as hinted at in Plato), are, understandably, likewise not obvious in such a list of the influential few.

The potential of cross-pollination of methodology or theory, exploring what lies in between the impactful theories, is subdued by the mere mention of these as segregated in or by their impact. Sure, some might need highly creative, or perhaps construing, effort to be (partially) combined.

Arguably too imaginative: one could consider that if “the medium is the message,” then the technological structures and architectural frameworks could be implied to be a “theory of learning” as well. That is, if a medium (which is implicitly a technology) can define the message, then the structure (again, the technology) influences what is being learned. One could imagine that structures could hence be designed so as to define a process of learning. Though, are they consciously designed as such, or is the influence on learning often a collateral effect, or damage? One might herein consider the field of AI and Machine Learning, and how it is mapped into EdTech.

As Professor Luckin suggested in the Financial Times article of 14 August 2021, “English schools turn to AI to help students catch up after Covid,” AI systems should be challenged by teachers. It is perhaps not too far-fetched to assume such challenging might occur through ideological lenses and, secondarily, through pedagogical lenses, hence through theories of learning heralded by said teachers.

If, however, the learning theories being conjured up exclude those learning methods that are not recognized as sanctioned theories (conjectured by the idea of being insufficiently “impactful”), then it might again not be too far-fetched to assume that such a challenge might be biased, and exclusive of those who follow various other methods (or implicit theories, or not-yet-theorized practices). Here, hints of equity, or equality, or at least some degree of (minor) consideration might be felt.

Is it of high probability that those “others” who do not follow the “most impactful” methods count into the millions of learners? E.g., China alone accounts for about 200 million students per annum, most of whom are not influenced by these most impactful theories alone. Who out there is learning outside of these “most impactful” learning theories?

I cannot help but intuit that this group of humans might not be negligibly small. I can also intuit that, were I to question the theories that underpinned the majority of our learning, I would find it educational to see those diversities offered at least some footnote in a text promoted by a pan-national institution such as UNESCO.

Imagine looking into EdTech applications (and their ML-underpinnings) through such lenses of both impactful and other learning (and assessment) theories and practices. Let us just imagine…


Sources:

UNESCO. International Bureau of Education. “Most influential theories of learning”. Last retrieved on Monday, November 22, 2021 from http://www.ibe.unesco.org/en/geqaf/annexes/technical-notes/most-influential-theories-learning

Financial Times (14 August, 2021). “English schools turn to AI to help students catch up after Covid”. Retrieved on Monday, November 22, 2021 from https://www.ft.com/content/006ebaf6-a76c-4257-a343-f1db1f7b39e7

Ogilvy, J. (1971). “Socratic Method, Platonic Method, and Authority.” Wiley Online Library. Retrieved on Monday, November 22, 2021 from https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1741-5446.1971.tb00488.x

AI, Impact Investment, Ethics & Deeply Human-Centered Innovation
