Tag Archives: aieducation

Access. Accessible. Accessibility. Right of Access (GDPR).


Through a technological lens, mapped with efficiency and with AI, ‘accessible’ could refer to the ease with which data, applications, and services can be accessed and used by machines, without human intervention. This could imply the absence of a ‘human-in-the-loop.’

Such a system is one that could be optimized for efficiency and that could perform tasks quickly, accurately, and reliably.

From an interface-design perspective, mapped with consequentialist ethical perspectives, an accessible AI system could suggest that users, with empowering consideration of their abilities, vulnerabilities, or disabilities, could access and use the system with ease, or with means nuanced to their specific needs.

It could also refer to the degree to which a product, service, or technology is available, affordable, and designed to meet the needs of all individuals, including those from marginalized or otherwise disenfranchised communities.

Degrees of accessibility imply that access would not be constrained, or would be less constrained, by demographics, background, abilities, or socioeconomic status. This definition of accessibility could imply that some of the following concepts would improve, to some degree, with accessibility: agency, autonomy, plurality, diversity and diversification, equity, personalization, inclusivity, fairness, mindfulness, and compassion. Through such a perspective, this could be considered a ‘good’ system design. This could then lead one to consider concepts such as ‘ethical-by-design.’

An accessible AI system could then also be one that is transparent (a lack of transparency implies a lack of access, even if only access to the possibility of understanding the inner workings of the AI system), and, by extension, explainable and accountable, ensuring that the decisions made by the AI system are fair, unbiased, and aligned with ethical principles.

The ‘Right of Access’ (GDPR) is one of the eight rights of the individual user (also referred to as “data subjects”) as defined within the European Union’s General Data Protection Regulation (GDPR). It is Article 15 of the GDPR: “The data subject shall have the right to obtain from the controller confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data…” and access to a number of categories of information as further defined in that article.

This policy item aims “to empower individuals and give them control over their personal data.” The eight rights are the right to information, the right of access, the right to rectification, the right to erasure, the right to restrict processing, the right to data portability, the right to object, and the right not to be subject to a decision based solely on automated processing.
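For the technically inclined, the categories of information that Article 15 obliges a controller to disclose can be made concrete in a small sketch. The structure and field names below are my own illustrative assumptions, not an official schema or API; Article 15(1)(a)–(h) itself remains the authoritative enumeration.

```python
# A hypothetical sketch, not an official schema: the categories of
# information a controller must provide under Art. 15(1)(a)-(h) GDPR
# when a data subject exercises the Right of Access.
from dataclasses import dataclass, field

@dataclass
class AccessRequestResponse:
    processing_confirmed: bool  # is personal data being processed at all?
    purposes: list[str] = field(default_factory=list)         # Art. 15(1)(a)
    data_categories: list[str] = field(default_factory=list)  # Art. 15(1)(b)
    recipients: list[str] = field(default_factory=list)       # Art. 15(1)(c)
    retention_period: str = ""   # storage period or its criteria, Art. 15(1)(d)
    rights_notice: str = ""      # rectification/erasure/restriction/objection, Art. 15(1)(e)
    complaint_authority: str = ""  # right to lodge a complaint, Art. 15(1)(f)
    data_source: str = ""        # source, where data was not collected from the subject, Art. 15(1)(g)
    automated_decisions: str = ""  # automated decision-making incl. profiling, Art. 15(1)(h)

# Example: a controller confirming processing and disclosing the categories.
print(AccessRequestResponse(
    processing_confirmed=True,
    purposes=["newsletter delivery"],
    data_categories=["email address", "signup timestamp"],
    recipients=["mailing-list processor"],
    retention_period="until unsubscription",
))
```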

References

European Data Protection Supervisor. (n.d.). Rights of the Individual. Online: (an official EU website). Last retrieved on April 10, 2023 from https://edps.europa.eu/data-protection/our-work/subjects/rights-individual_en

Art. 15 GDPR Right of access by the data subject: https://gdpr-info.eu/art-15-gdpr/

Page, Matthew J, David Moher, Patrick M Bossuyt, Isabelle Boutron, Tammy C Hoffmann, Cynthia D Mulrow, Larissa Shamseer, et al. “PRISMA 2020 Explanation and Elaboration: Updated Guidance and Exemplars for Reporting Systematic Reviews.” BMJ, March 29, 2021, n160. https://doi.org/10.1136/bmj.n160

https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/?template=pdf&patch=17#

https://ethics-of-ai.mooc.fi/chapter-5/3-examples-of-human-rights

<< Learning is Relational Entertainment; Entertainment is Shared Knowledge; Knowledge is... >>

context: Tangermann, Victor. (Feb 16, 2023). Microsoft: It’s Your Fault Our AI Is Going Insane. They’re not entirely wrong. IN: FUTURISM (online). Last retrieved on 23 February 2023 from https://futurism.com/microsoft-your-fault-ai-going-insane

LLM-type technologies and their spin-offs or augmentations are made accessible in a different context than technologies whose operation requires regulation, training, (re)certification, and controlled access.

If the end-user holds the (main) weight of duty-of-care, then such training, certification, regulation and limited access should be put into place. Do we have that, and more importantly: do we really want that?

If we do want that, then how would it be formulated, implemented, and enforced? (Think: present-day technologies such as online proctoring, keystroke-recording spyware, Pegasus spyware, Foucault’s Panopticon, or the more contextually-pungent “1984.”)

If the end-user is not holding that weight and the manufacturer is, and/or if training, (re)certification, access, and user-relatable laws which could define the “dos-and-don’ts” are not readily available, then… Where is the duty-of-care?

Put this question of (shared) duty-of-care in the light of critical analysis, and of this company supposedly already knowing of these issues in November 2022, and then again… Where is the duty-of-care? (Ref: https://garymarcus.substack.com/p/what-did-they-know-and-when-did-they?r=drb4o )

Thirdly, put these points in the context of disinformation versus information by, for example, comparing statistics as used by an LLM-based product with the deliverables offered to the public by initiatives such as http://gapminder.org, http://ourworldindata.org, or http://thedeep.io, to highlight but three instances of a different, systematized, and methodological approach to the end-user (one can agree or disagree with these; that is another topic).

So, here are two systems, both applying statistics. One system aims at reducing our ignorance; the other at… increasing ignorance (for “entertainment” purposes… sure)? The latter has serious financial backing; the first has…?

Do we as a social collective and market-builders then have our priorities straight? Knowledge is no longer power. Knowledge is submission to “dis-“ packaged as democratized, auto-generating entertainment.

#entertainUS

Epilogue-1:

Questionably “generating” (see “auto-generating entertainment” above), while arguably standing on the shoulders of others; rather: mimicry, recycling, or verbatim copying without corroboration, reference, ode, or attribution. The “stochastic parroting” offered by Prof. Dr. Emily M. Bender, Dr. Timnit Gebru, et al. is relevant here as well. Thank you, Dr. Walid Saba, for reminding us. (This, and they, are perhaps suggesting a fourth dimension of lacking duty-of-care.)

Epilogue-2:

To make a case: I ran an inquiry through ChatGPT requesting a list of books on abuses of statistics, and about 50% of the titles did not seem to exist, or are so obscure that no human search could easily reveal them. In addition, a few obvious titles were not offered. I tried to clean up the list and add to it below.
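As an aside for the technically inclined: one way to triage such a list is to query a public bibliographic index. The sketch below uses Open Library’s public search endpoint; the matching heuristic and the sample titles are my own assumptions, and a missing match is only a flag for manual checking, not proof that a title is fabricated.

```python
# A minimal sketch: flag LLM-suggested book titles that a public
# bibliographic index (Open Library) cannot find. Absence of a match
# is a prompt for manual checking, not proof of fabrication.
import json
import urllib.parse
import urllib.request

def found_in_open_library(title: str, author: str) -> bool:
    """Return True if Open Library reports at least one match."""
    query = urllib.parse.urlencode({"title": title, "author": author, "limit": 5})
    url = f"https://openlibrary.org/search.json?{query}"
    with urllib.request.urlopen(url) as response:
        return json.load(response).get("numFound", 0) > 0

# Two titles from the cleaned-up bibliography below, as a smoke test.
claimed = [
    ("How to Lie with Statistics", "Huff"),
    ("Weapons of Math Destruction", "O'Neil"),
]
for title, author in claimed:
    flag = "found" if found_in_open_library(title, author) else "NOT FOUND: verify by hand"
    print(f"{title}: {flag}")
```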

bibliography:

Baker, L. (2017). Truth, Lies & Statistics: How to Lie with Statistics.

Barker, H. (2020). Lying Numbers: How Maths & Statistics Are Twisted & Abused.

Best, J. (2001). Damned Lies and Statistics: Untangling Numbers from the Media, Politicians, and Activists. Berkeley, CA: University of California Press.

Best, J. (2004). More Damned Lies and Statistics.

Blastland, M., & Dilnot, A. (2007). The Tiger That Isn’t: Seeing Through a World of Numbers. London: Profile Books.

Ellenberg, J. (2014). How Not to Be Wrong.

Gelman, A., & Nolan, D. (2002). Teaching Statistics: A Bag of Tricks. New York, NY: Oxford University Press.

Huff, D. (1954). How to Lie with Statistics. New York, NY: W. W. Norton & Company.

Levitin, D. (2016). A Field Guide to Lies: Critical Thinking in the Information Age. Dutton.

O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York, NY: Crown.

Rosling, H., Rosling, O., & Rosling Rönnlund, A. (2018). Factfulness: Ten Reasons We’re Wrong About the World – and Why Things Are Better Than You Think. New York, NY: Flatiron Books.

Seethaler, S. (2009). Lies, Damned Lies, and Science: How to Sort Through the Noise Around Global Warming, the Latest Health Claims, and Other Scientific Controversies. Upper Saddle River, NJ: FT Press.

Silver, N. (2012). The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t. New York, NY: Penguin Press.

Stephens-Davidowitz, S. (2017). Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.

Tufte, E. R. (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.

Wheeler, M. (1976). Lies, Damn Lies, and Statistics: The Manipulation of Public Opinion in America.

Ziliak, S. T., & McCloskey, D. N. (2008). The Cult of Statistical Significance: How the Standard Error Costs Us Jobs, Justice, and Lives. Ann Arbor, MI: University of Michigan Press.


References

this post was triggered by:

https://www.linkedin.com/posts/katja-rausch-67a057134_microsoft-its-your-fault-our-ai-is-going-activity-7034151788802932736-xxM6?utm_source=share&utm_medium=member_desktop

thank you Katja Rausch

and by:

https://www.linkedin.com/posts/marisa-tschopp-0233a026_microsoft-its-your-fault-our-ai-is-going-activity-7034176521183354880-BDB4?utm_source=share&utm_medium=member_desktop

thank you Marisa Tschopp

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922

“Verbatim copying” in the above post’s epilogue was triggered by Dr. Walid Saba’s recent post on LinkedIn:

https://www.linkedin.com/posts/walidsaba_did-you-say-generative-ai-generative-ugcPost-7035419233631039488-tct_?utm_source=share&utm_medium=member_ios

This blog post on LinkedIn

<< Demons and Demos >>


The New Yorker and NSO in some glorious spy-novel context here

…and further, as a cherry on this cake, one might quickly conjure up Cambridge Analytica or, singularly, Facebook with its clandestine 50,000-plus or so datapoints per milked data-cow (aka what I also lovingly refer to as humans as datacyborgs), which the company’s systems are said to distill through data collection. Yes, arguably the singularity is already here.

Then, more recently, one can enjoy the application of a facial-recognition service, Clearview AI, which uses its data mining to identify (read: “spy on”) dead individuals; a service which might seem very commendable (and which covers even individuals with no personal social media accounts: one simply has to appear in someone else’s visual material); and yet the tech has been applied for more.

This contextualization might aid one in having the narrative amount to:

“Alienation,” and that, if one were to wish, could be extended with the idea of the “uncanny” hinted at in my datacyborg poetics. “Alienation” here is meant somewhat as it is in the social sciences: the act of lifting one’s data outside of its intended use, by a third party. This questionable act of “alienation” is very much ignored or quietly accepted (since some confuse “public posting” with a “free-for-all”).

What personally disturbs me is that the above manner of writing makes me feel like a neurotic conspiratorial excuse of a person… one might then self-censor a bit more, just to not upset the balance with any demonizing push-back (after all, what is one’s sound, educated and rational “demos” anyway?). This one might do while others, in the shadows of our silently-extracted data, throw any censorship, in support of the hidden self (of the other), out of the proverbial window.

To contextualize this further: related to memory, one might also wish to consider the right to be forgotten besides the right to privacy. The above-mentioned actors, among a dozen others, rip this autonomous decision-making out of our hands. If one were then to consider ethics mapped with this lack of autonomy, one could be shiveringly delighted not to have to buy a ticket to a horror-spy movie, since we can all enjoy such narratives for “free” and in “real” life.

Thank you Dr. WSA for the trigger


Epilogue:

“Traditionally, technology development has typically revolved around the functionality, usability, efficiency and reliability of technologies. However, AI technology needs a broader discussion on its societal acceptability. It impacts on moral (and political) considerations. It shapes individuals, societies and their environments in a way that has ethical implications.”

https://ethics-of-ai.mooc.fi/chapter-1/4-a-framework-for-ai-ethics

…is ethics perhaps becoming, or still serving as, soothing bread for the demos in the games staged by the gazing, all-seeing, not-too-proverbial eye?

In extension to my above post (for those who enjoy interpretative poetics):

One might consider that the confusion of a “public posting” being equated with “free for all” (and hence falsely being perceived as forfeiting autonomy, integrity, and the like) is somewhat analogous to abuses of any “public” commons.

Expanding this critically, and to some perhaps provokingly, further: one might also see this confusion in the thinking that someone else’s body is touch- or grope-for-all simply because it is “available.”

Now let’s be truly “meta” about it all: one might consider that the human body is digital now (i.e., my datacyborg as the uber-avatar). Taking this to the extreme: if I were a datacyborg, then someone else’s extraction beyond my public flaneuring here, in my chosen setting, could poetically be labeled “datarape.”

As one might question the ethics of alienatingly ripping the biological cells from Henrietta Lacks, beyond the extraction of her cancer into labs around the world, one might wonder about the ethics of data being ripped and alienated into labs for market experimentation, and about the infinite panopticon of data-prying: someone’s (unwanted) data immortality.

https://en.m.wikipedia.org/wiki/Henrietta_Lacks