
Six Fingers as Six Sigma

Some Reflections on Artificially-Generated Content (AGC) Based on Synchronously-occurring News Cycles and Suffering

The concept of “Six Sigma” relates to processes for identifying error and reducing defects; it is a method aimed at improving processes and their output. In this context, “Six Fingers” is an artifact found in visual Artificially-Generated Content (AGC). Identifying attributes that aid a critical view of AGC could, to some extent, allow the nurturing of a set of tools in support of well-being and of considering the right action to take, perhaps aiding the human processes of being human and becoming (even more) human(e)…

Could or should I identify errors or features that mark a piece of AGC as AGC? Can we humans identify AGC with our own senses? For how much longer? Do we want to identify it? Are there contexts within which we want to identify AGC more urgently than in others; e.g., highly emotionally-loaded content that occurs in one’s present-day news cycles, or where the AGC is used to (emotionally) augment, or to create a sensation of being, present-day news? What are the ethical implications?

This first post tries to bring together some of my initial thoughts and some of those I observed among others. Since this is a rather novel area, errors in this post can surely be identified, and improvements on this theme by others (or by myself) could follow.

Let me constrain this post by pointing to some examples of visual AGC.

A common visual attribute in the above is a hand with (at least) six fingers. Six fingers, at times found in graphic Generative AI output (a subset of AGC), are an attribute that recurs in this medium, and yet one that is possibly disappearing as the technology develops over time.

For now, it has been established by some as a tool to identify hyper-realistic imagery generated of an imaginable event: one that statistically could occur, and has a probability of occurring, in the tangible realm of human events and experiences. This is while fantastical settings that include six or more fingers can just as easily be generated.

And then there are artificial renditions of events that are shown in the news cycles; these renditions appear almost in synchronization with the events they follow. I am prompted by the above visuals, which are a few such artificial renditions. Some of these creations touch on the relations and lives of actual, physical people. This latter category of artificial renditions is especially sensitive, since it is intertwined with the understandable emotional and compassionate weight we humans attach to some events as they are occurring.

For now, the six fingers, among a few other and at times difficult-to-identify attributes, allow heuristic methods of identification. Such a process of judgement inherently lacks the features of more constrained and defined scientific techniques, or of mathematical accuracy. Hyper-realism is, in effect, one of those identifying categories: some realistic renditions are not just realistic, they are hyper-realistic. For instance, some smudges of dirt on someone’s face might seem a bit uncanny in their (at times gruesome) graphic perfection. Moreover, by comparing generated images of this sort, one can start to see similarities in these “perfections.”

This identification of attributes, intuitive as it is, can serve as a set of tools enabling one to distinguish generated visuals from the visuals found in, say, (digital) photographs taken of factual, tangible humans and their surrounding attributes. Digital photos (and, at times intuitively more so, analog photos) found their beginnings in capturing a single event occurring in the non-digital realm; in a sense, digital photos are the output of a digitization process. AI-generated realistic imagery does not simply capture the singular (or, at least, not in the ever so slightly more direct sense in which data is collected by means of digital photography).

Simultaneously, we want to continue realizing that (analog or digital) photography too can suffer from error and manipulation. Moreover, it too is very open to interpretation (i.e., via angle, focus, digital retouching, and other techniques). Note that error and interpretation are different processes; so too are transparency and the tricking of consumers of content. In the human process of (wanting to) believe a story, the creator of stories might best know that consumers of content are not always sharply focused, nor always on the lookout for little nuances that could give away diminished holistic views of the depicted and constructed reality or+and realities. Multi-billion dollar industries and entire nations are built on this very human and neurological set of facts: what we believe to see, or otherwise believe to sense, is not necessarily what is there. Some might argue this is a desirable feature. This opens up many venues for discussion, some of which are centuries old and lead us into metaphysics, ontology, reality and+or realities.

Reverting back to digits as fingers: in generated imagery the fingers are one attribute that, for now, can help burst the bubble (if such a bubble needs bursting). The anatomy of the hand (and other attributes), e.g., the position and length of the thumb, as well as, when zoomed in, the texture of the skin, can aid in creating doubt toward the authenticity of a “photo.” The type of pose and the closed eyes, which also recur in other, similar generated visuals, can aid in critically questioning the visual. The overall color theme and overall texture are a third, less obvious, set of attributes. The additional text and symbols (i.e., the spelling, composition or texture of a symbol’s accompanying text, their position, the lack of certain symbols, or the choice of symbols due to their suggestive combination with other symbols, versus the absence, or the probability of combination, of symbols) could, just perhaps and only heuristically, create desirable doubt as well.
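To make the heuristic nature of this explicit, here is a minimal, purely illustrative sketch (in Python) of how a human reviewer might record such a checklist of attributes and aggregate their own judgements into a rough “doubt score.” The attribute names, weights and interpretation are my assumptions for illustration only; nothing here detects anything automatically, and this is not a defined or scientific method.

```python
from dataclasses import dataclass

# A hypothetical checklist for a *human* reviewer. Attribute names and
# weights are assumptions, not an established method; the code only
# aggregates manual, heuristic judgements.

@dataclass
class Attribute:
    name: str        # e.g. "extra fingers", "uncanny skin texture"
    weight: float    # how strongly this attribute raises doubt (assumed)
    observed: bool   # did the reviewer notice it in this image?

def doubt_score(attributes: list[Attribute]) -> float:
    """Return a rough 0..1 score of how much doubt the checklist raises."""
    total = sum(a.weight for a in attributes)
    raised = sum(a.weight for a in attributes if a.observed)
    return raised / total if total else 0.0

checklist = [
    Attribute("hands with six or more fingers", weight=3.0, observed=True),
    Attribute("thumb position or length looks off", weight=2.0, observed=False),
    Attribute("skin texture too 'perfect' when zoomed in", weight=2.0, observed=True),
    Attribute("recurring pose / closed eyes", weight=1.0, observed=False),
    Attribute("uniform color theme and texture", weight=1.0, observed=True),
    Attribute("garbled or implausible text and symbols", weight=2.0, observed=False),
]

score = doubt_score(checklist)
print(f"doubt score: {score:.2f}")  # e.g. 0.55 -> treat the visual with extra care
```

Such a score is, of course, only as good as the attributes and the eye of the person filling it in; it offers desirable doubt, not proof.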

We might want to establish updated categorizations (if these do not already exist somewhere) that could aid they who wish to learn to see and to distinguish types of AGC, or types of content sources, with a specific focus on Generative AI. At the same time, it is good to remember that this is difficult due to the iterative nature of that which is to be categorized: technology, and thus its output, upgrades often and adapts quickly.
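As a thought experiment only, such a categorization of content sources might start as coarsely as the sketch below. The category names and the attribution rule are my own assumptions, not an existing standard, and would need revisiting as generative technology iterates.

```python
from enum import Enum, auto

# A hypothetical, deliberately coarse categorization of content sources.
# Names are assumptions for illustration, not an existing taxonomy.

class ContentSource(Enum):
    ANALOG_CAPTURE = auto()    # e.g. film photography of a tangible event
    DIGITAL_CAPTURE = auto()   # e.g. a digital photo; a digitization of the singular
    EDITED_CAPTURE = auto()    # captured, then retouched or recomposed
    GENERATIVE_AI = auto()     # AGC produced from data sets and prompting
    HYBRID = auto()            # captured material augmented with generated elements

def needs_explicit_attribution(source: ContentSource) -> bool:
    """One possible (assumed) rule: anything generated or hybrid is labeled
    as a 'generated image' rather than merely an 'image'."""
    return source in {ContentSource.GENERATIVE_AI, ContentSource.HYBRID}

print(needs_explicit_attribution(ContentSource.GENERATIVE_AI))  # True
```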

Nevertheless, it could be a worthy intent to identify tricks that increase the possibility of identification by humans, in human and heuristic ways. For instance, some might want to become able to ask and (temporarily) answer: is it, or is it not, created via Generative AI? Or, as has occurred in the history of newly-introduced modalities of content-generating media (e.g., the first films): is it, or is it not, a film of a train, or rather an actual train, approaching me? Is it, or is it not, an actual person helping someone in this photo? Or+and, is this a trigger feeding on my emotions, one which can aid me to take (more or less) constructive action? And do I, at all, want to care about these distinctions (or is something else more important and urgent to me)?

As with other areas of human experience (e.g., the meat industry versus vegetable-based “meats” versus cell-based, lab-generated meats): some people like to know whether the source of the juiciness of the beef steak into which they are biting comes from an animal, does not come from an animal, or comes from a cell-reproducing and repurposing lab. (Side-note: I do not often eat meat(-like) products, if any at all.) A particular 1999 science fiction film, The Matrix, plays with this topic of juicy, stimulating content as well. This in turn could bring us to wonder about ontological questions of the artificial, the simulation, and the simulacrum.

Marketing, advertising, PR, rhetoric and narration tools, and intents, can help make this more, or far less, transparent. Sauce this over with variations in types of ethics and one can imagine a sensitive process in making, using and evaluating an artificially generated hyper-real image.

Indeed, while the generated sentiment in such visuals could be sensed as important and as real (by they who sense it and they who share it), we might still want to tread with care. If such artificial generation is on a topic that is current, for instance a natural disaster, then the clarity of such a natural disaster as being very tangibly real makes it particularly unsettling for us humans. And yet natural disasters affecting lives and communities are (most often) not artificially generated (while some might be, and are, human-instigated). The use of artificial attributes toward that which is very tangible might, to some, increase distrust, desensitization, apathy or a sense of dehumanization.

Then there is the following question: why should I use an artificially generated image instead of one of actual people and (positive) actors actually aiding in an actual event? It is a fair question to ponder, so as to unveil possible layers of artificial narrative design implied in the making of a visual, or other, modality. So, then, what of the actual firefighter who actually rescued an actual child? Where is her representation to the world and in remembrance or commemoration?

Granted, an artificial image could touch on issues or needs relatable to a present-day event in very focused and controlled manners, such as the call for cooperation. It can guide the stimulation of emotion into positive action. It is also fair to assume that the sentiment found with such a visual can remind us that we need and want to work together, now and in the future, and to see actual humans relate with other humans while getting the urgent and important work done, no matter one’s narrated and believed differences generated through cultural constructs (digital, analog, imagined or otherwise imbued).

Simultaneously, we could also try to keep it real and humble, and still actionable. It is possible to tell the story of these physical, tangible and relational acts without artificially diminishing them, and simultaneously we can commemorate and trigger by means of artificially generated representations of what is happening as we speak. Then a question might be: is my aim with AGC to augment, confuse, distract, shift attention, shift the weight, change the narrative, commemorate, etc.?

Symbols are strong. For instance, that of a “firefighter” holding a rescued child with the care a loving mother, and father, would unconditionally offer. Ideally, we each are firefighters with a duty-of-care. These images can aid across ideologies, unless (and this is only hypothetical, of course) such imagery were used, via its additionally placed symbols, to reinforce ideological tension or other ideological attributes while misusing or abusing human suffering. Is such use then a form of misinformation or even disinformation? While an answer here is not yet clear to me, the question is a fair one to ask, intertwined with a duty-of-care.

Hence, openness of, and transparency toward, attribution of the one (e.g., we can state it is a “generated image” more explicitly than “image” versus “photo”) does not have to diminish the integrity of the other (e.g., shared human compassion via shared emotion and narration), nor of on-the-ground, physical action by actual human beings amidst actual disaster-stricken communities, or within other events that need aid. How can I decrease the risk that the AGC diminishes, for (some) consumers, that which the AGC aims at?

The manner of using Artificially Generated Content (AGC) is still extremely new and very complex. We collectively need time to figure out its preferred uses (if we were to want to use these at all). Also in this we can cross “borders” and help each other in our very human processes of trial and error.

This can be offered by balancing ethos (ethics, duty-of-care, etc.), pathos (emotion, passion, compassion, etc.) and logos (logic, reason, etc.) and now, perhaps more than ever, also techne (e.g., Generative AI and its AGC). One can include the other in nuanced design, sensibility, persistence, duty of care, recognition, and action, even and especially in moments of terrible events.

Expanding on this topic of artificially generated narration with positive and engaging aims, I for one wouldn’t mind seeing diversity in gender roles and (equity via) other diversities as well in some of these generated visuals of present-day events.

Reverting back to the artificial: if it must be, then diversity in poses, and in skin colors and textures as well, would be more than only “nice.” And yet, someone might wonder, all this fine-tuning and nuancing might perhaps decrease the ability to distinguish the digitally-generated (e.g., via data sets, a Generative AI system and prompting) from the digitized and digitally captured (e.g., a digital photo). That is, if the data set is not socially biased. Herein too, technology and its outputs are not neutral.

If the aim with a generated visual (and other modalities of AGC) of a present-day, urgent, important and sensitive event is to stimulate aid, support, compassion, constructive relations, positive acts, and inclusiveness (across ideology, nation and human diversities), then we could do so (while attributing it clearly). We can do so even if one thinks one does not need to, and thinks one is free to show only generated attributes derived from traditional, European, strong-male narratives. Or+and, we could do so much more. We could, while one does not need to exclude the other, be so much more nuanced, more inclusive, and increase integrity in our calls-to-action. Generated imagery makes that possible too, if its data set is designed to allow it.

reference

https://france3-regions.francetvinfo.fr/bretagne/ille-et-vilaine/rennes/intelligence-artificielle-ses-photos-faites-par-ia-font-le-tour-du-monde-2711210.html

note

it was fairly and wittily pointed out (on LinkedIn) that “Six Fingers” in this context is not to be confused with a critique of the human imagination via fairy tales (e.g.: “–Inigo Montoya: I do not mean to pry, but you don’t by any chance happen to have six fingers on your right hand? –Man in Black: Do you always begin conversations this way?”), nor with a denial or acceptance of human diversity such as is classified as (human) polydactyly.
