AI, Impact Investment, Ethics & Deeply Human-Centered Innovation: #2

Part 2: The Logic, Emotion & Ethics of Writing about Artificial Intelligence & Beyond

 

Following a somewhat extensive search of publications in various formats[4] surrounding AI, I found, rather unsurprisingly, that many expressed both potentials and concerns. These covered commercial, non-profit, psychological, social, and existential areas. It is hardly arguable that this finding is almost a truism, and that the same holds for expressions concerning most technologies, including AI.

These exposés offered potentials in the form of seemingly logical analyses. At times the publications presented concerns through emotional appeals and explorations of fear. At other times they were intertwined with ethical issues and considerations, as well as with varying degrees of emotional response, or an entire lack thereof, to the imagined prospects of AI.

Such rhetorical approaches, mixing logic with emotion and ethics, could still be imagined as the triangular reality of becoming human (as humans) and of interacting, in a human manner, with technologies such as Artificial Intelligence.

That stated, could we not approach the technology umbrella'ed as "AI" in more innovative and more penetrating ways, given that the technology itself is considered highly innovative and penetrating (and even disruptively paradigm-shifting)? I believe so. Below, under the last heading of this text, one can find such an attempted offer for social impact focused on humanity.

I attempt this because I found that some authors on, for instance, the topic of AI seemingly addressed ethics and AI, while very few gave any consideration to truly methodological support and ethically innovative solutions.[5] Interestingly, some even claimed an approach considerate of the human experience yet, I felt, failed to actually present such an approach.

Some of these authors, who claimed a consideration with the human and humanity at its center, contemplated consequences of AI, supported by (their) views on ethics or on social bias[6] in (input and/or output) data (and some even claim biases in the algorithms themselves, which others in turn question). I would like to guide you to a few of the many publications that serve as the main references for this writing:

1. The World Economic Forum (WEF)'s White Paper entitled AI Governance. A Holistic Approach to Implement Ethics into AI,[7] freely published in January 2019 [see endnote #12 for my views on this paper]

2. The United Nations Educational, Scientific and Cultural Organization (UNESCO)'s 2018 publication entitled "Human Decisions. Thoughts on AI"

3. The OECD's "Algorithms and Collusion: Competition Policy in the Digital Age"[8] and additional OECD publications[9]

4. The IEEE's 2019 publication entitled "Ethically Aligned Design. A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems"[10]

5. The European Parliament's 2016 publication entitled "Ethical Aspects of Cyber-Physical Systems. Scientific Foresight Study"[11]

In browsing through these and a dozen other publications, I could not help but sense a growing intuition that ethics was possibly not seriously considered, or was perhaps very seriously considered by some, while seemingly most authors ignored the human capability for innovation in relation to our own thought processes. Could we not consider improving how we think, and how that thinking affects ourselves, our surroundings, and the (life) forms and functions around us? Could we, by doing so, not improve our technological output, such as AI?[12] [This endnote offers my view on, specifically, the WEF paper.]

These publications, at least those I browsed, felt as if they offered ways to handle AI by fine-tuning the technology rather than by considering what we as humans could achieve within and between ourselves. These serious, and still very interesting and thought-provoking, publications felt techno-centric. Here "techno-centric" refers to a (problem-solving) approach that starts from the technology and works toward a solution (where the technology itself, at times in isolation, is perceived as the problem). Can a perceived technological problem be solved with a merely technological solution? Could a solution also lie in the thinking and interacting of humans? To me these publications felt, at times, veiled with mere words about considering the human and their experiences, rather than offering actual methods and processes to humans which could perhaps positively influence our technological designs and usages.

Could we develop human algorithms, prototypes and simulations that humans cognitively apply, and which aid them in producing, by means of design, ethical technologies such as AI? If such a method or solution existed, how would it differ from any existing educational or self-/group-improvement solution, process or ritual out there?

More importantly, to me at least, some did not offer writings immediately applicable to the very human layperson, who is claimed to be more or less dramatically influenced, yet who is very often (more or less intentionally, and perhaps understandably) excluded from the expert or in-depth conversation.


Contents
