
The Field of AI (Part 06): “AI”; a Definition Machine?

Definitions beyond “AI”: an introduction.


“You shall know a word by the company it keeps.”

– Krohn, J.[1]

Definitions are artificial meaning-giving constructs. A definition is a specific linguistic form with a specific function. Definitions are patterns of weighted attributes, handpicked by means of (wanted and unwanted) biases. A definition can then be seen as a category of attributes referring to a given concept, which in turn aims at triggering a meaning of that targeted concept.

Definitions aim at controlling such meaning-giving: what a concept could refer to and what it can contain within its proverbial borders. The specified attributes, narrated into a set (i.e. a category), make up the construct of how some concept is potentially understood.

The preceding sentences could be seen as an attempt at a definition of the concept “definition”, with a hint of how some concepts in the field of AI itself are defined (hint: have a look at the definitions of “Artificial Neural Networks”, “Machine Learning” or “Supervised and Unsupervised Learning”). Let us continue looking through this lens and expand on it.

Definitions can be constructed in a number of ways. For instance: they can be constructed by identifying or deciding on, and giving a description of, the main attributes of a concept. This could be done, for instance, by analyzing and describing forms and functions of the concept. Definitions could, for instance, be constructed by means of giving examples of usage or application; by stating what some concept is (e.g. synonyms, analogies) and is not (e.g. antonyms); by referring to a historical or linguistic development (e.g. its etymology, grammatical features, historical and cultural or other contexts, etc.); by comparison with other concepts in terms of similarities and differentiators; by describing how the concept is experienced and how not; by describing its needed resources, its possible inputs, its possible outputs, intended aims (as a forecast), actual outcome and larger impact (in retrospect). There are many ways to construct a definition. So too is it with a definition for the concept of “Artificial Intelligence”.

For a moment, as another playful side-note, by using our imagination and by trying to make the link between the process of defining and the usage of AI applications stronger: one could imagine that an AI solution is like a “definition machine.”

One could then imagine that this machine gives definition to a data set –by offering recognized patterns from within the data set– at its output. This AI application could be imagined as organizing data via some techniques. Moreover, the application can be imagined to be collecting data as if these were attributes of a resulting pattern. To the human receiver this, in turn, could then define and offer meaning to a selected data set. Note, it also provides meaning to the data that is not selected into the given pattern at the output. For instance: the data is labelled as “cat” not “dog”, while some data has also been ignored (by filtering it out; e.g. the background “noise” around the ‘cat’). Did this imagination exercise allow one to make up a definition of AI? Perhaps. What do you think? Does this definition satisfy your needs? Does it do justice to the entire field of AI from its “birth”, its diversification process along the way, to “now”? Most likely not.

A human designer of a definition likely agrees with the selected attributes (though not necessarily), while those receiving the designed definition might agree that it offers a pattern but not necessarily the meaning-giving pattern they would construct. Hence, definitions tend to be contested, fine-tuned, altered, updated, or dismissed altogether over time and, depending on the perspective, used to review and qualify other, similar definitions. It almost seems that some definitions have a life of their own while others are, understandably, safely guarded to be maintained over time.

When learning about something and looking a bit deeper than the surface, one is quickly presented with numerous definitions of what was thought to be one and the same thing, yet which show variation and diversity across a field of study. This is OK. We, as individuals within our species, are able to handle, or at least live with, ambiguities, uncertainties and change. These, by the way, are also some of the reasons why, for instance and to some extent, the fields of Statistics, Data Science and AI (with presently the sub-fields of Machine Learning and Deep Learning) exist.

The “biodiversity” of definitions can be managed in many ways. One can hold different ideas at the same time in one’s head, much as one can think of black, white and mixes of the two in various degrees simultaneously, while also introducing a plethora of additional colors. This can still offer harmony in one’s thinking. If that does not work, one can give more importance to one definition over another, depending on parameters befitting the aim of the learning and the usage of the definition (i.e. one’s practical bias of that moment in spacetime). One can prefer to start simple, with a reduced model as offered in a modest definition, while (willingly) ignoring a number of attributes. One could then remind oneself not to equate this simplified model / definition with the larger complexities of what it only begins to define.

One can apply a certain quality standard to allow the usage of one definition over another. One could ask a number of questions to decide on a definition. For instance: Can I still find out who made the definition? Was this definition made by an academic expert or not, or is it unknown? Was it made a long time ago or not; and is it still relevant to my aims? Is it defining the entire field or only a small section? What is intended to be achieved with the definition?  Do some people disagree with the definition; why? Does this (part of the) definition aid me in understanding, thinking about or building on the field of AI or does it rather give me a limiting view that does not allow me to continue (a passion for) learning? Does the definition help me initiate creativity, grow eagerness towards research, development and innovation in or with the field of AI? Does this definition allow me to understand one or other AI expert’s work better? If one’s answer is satisfactory at that moment, then use the definition until proven inadequate. When inadequate, reflect, adapt and move on.

With this approach in mind, the text here offers 10 further considerations and “definitions” of the concept of “Artificial Intelligence”. For sure, others and perhaps “better” ones can be identified or constructed.


“AI” Definitions & Considerations

#1 An AI Definition and its Issues.
The problem with many definitions of Artificial Intelligence (AI) is that they are riddled with what are called “suitcase words”. These are “…terms that carry a whole bunch of different meanings that come along even if we intend only one of them. Using such terms increases the risk of misinterpretations…”.[2] The term “suitcase words” was coined by a world-famous computer scientist who is considered one of the leading figures in the development of AI technologies and of the field itself: Professor Marvin Minsky.

#2 The Absence of a Unified Definition.
On the global stage or among all AI researchers combined, there is no official (unified) definition of what Artificial Intelligence is. It is perhaps better to state that the definition is continuously changing with every invention, discovery or innovation in the realm of Artificial Intelligence. It is also interesting to note that what was once seen as an application of AI is (by some) now no longer seen as such (and sometimes “simply” seen as statistics or as a computer program like any other). On the other end of the spectrum, there are those (mostly non-experts or those with narrowed commercial aims) who will identify almost any computerized process as an AI application.

#3 AI Definitions and their Attributes.
Perhaps a large number of researchers might agree that an AI method or application has been defined as “AI” due to the combination of the following 3 attributes:

it is made by humans or it is the result of a technological process that was originally created by humans,

it has the ability to operate autonomously (without the support of an operator; it has ‘agency’[3]) and

it has the ability to adapt (behaviors) to, and improve within, changing contexts (i.e. changes in the environment), and this by means of a kind of technological process that could be understood as a process of “learning”. Such “learning” can occur in a number of ways. One way is to “learn” by trial-and-error or by “rote learning” (e.g. the storing in memory of a solution to a problem). A more complex way of applying “learning” is by means of “Generalization”. This means the system can “come up” with a solution to a problem that was not previously encountered, by generalizing some mathematical rule or set of rules from given examples (i.e. data). The latter is more supportive of being adaptable in changing and uncertain environments (a minimal illustrative sketch follows below this list).
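As a purely illustrative aside (the numbers and the fitted rule below are invented for the example, and real Machine Learning methods are far richer), the following minimal Python sketch shows what “generalizing a rule from given examples” could look like in its very simplest form: a rule is fitted to example pairs and then applied to an input the system has never encountered.

```python
# A minimal, hypothetical sketch of "learning by generalization":
# a rule is derived from example pairs and applied to an unseen input.
import numpy as np

# Assumed example data: inputs and observed outputs (roughly y = 2x).
examples_x = np.array([1.0, 2.0, 3.0, 4.0])
examples_y = np.array([2.1, 3.9, 6.2, 7.8])

# "Generalize" a simple mathematical rule (a straight line) from the examples.
slope, intercept = np.polyfit(examples_x, examples_y, deg=1)

# Apply the generalized rule to a previously unseen problem.
unseen_x = 10.0
prediction = slope * unseen_x + intercept
print(f"learned rule: y = {slope:.2f} * x + {intercept:.2f}")
print(f"prediction for unseen input {unseen_x}: {prediction:.2f}")
```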

#4 AI Definitions by Example.
Artificial Intelligence could, alternatively, also be defined by listing examples of its applications and methods. As such, some might define AI by listing its methods (individual methods in the category of AI methods; also see, below, one listing of types and methods towards defining the AI framework): AI then, for instance, includes Machine Learning, Deep Learning and so on.

Others might define AI by means of its applications, whereby AI is, for instance, a system that can “recognize”, locate or identify specific patterns or distinct objects in (extra-large, digital or digitized) data sets. Such data sets could, for instance, be an image or a video of any objects (within a set), or a set or string of (linguistic) sounds, be it prerecorded or in real-time, via a camera or other sensor. These objects could be a drawing, some handwriting, a bird sound, a photo of a butterfly, a person uttering a request, a vibration of a tectonic plate, and so on (note: the list is, literally, endless).

#5 AI Defined by referencing Human Thought.
Other definitions present AI as a technology that can “think” as the average human does (yet, perhaps, with far more processing power and speed)… These would be “…machines with minds, in the full and literal sense… [such] AI clearly aims at genuine intelligence, not a fake imitation.”[4] Such a definition creates AI research and developments driven by “observations and hypotheses about human behavior”, as is done in the empirical sciences.[5] At the moment of this writing, the practical execution of this definition has not yet been achieved.

#6 AI Defined by Referencing Human Actions.
Further definitions of what AI is do not necessarily focus on the ability of thought. Rather, some definitions of AI focus on the acts that can be performed by an AI technology. Such definitions read something like: an AI application is a technology that can act as the average human can act or do things, with perhaps far more power, strength and speed, and without getting tired, bored, annoyed or hurt by features of the act or the context of the act (e.g. work inside a nuclear reactor). Ray Kurzweil, a famous futurist and inventor in technological areas such as AI, defined the field of AI as: “The art of creating machines that perform functions that require intelligence when performed by people.”[6]

#7 Rational Thinking at the Core of AI Definitions.
Different from the 5th definition is that thought does not necessarily have to be defined through a human lens or anthropocentrically. As humans we tend to anthropomorphize some of our technologies (i.e. give a human-like shape, function, process, etc. to a technology). However, AI does not need to take on a human-like form, function or process, unless we want it to. In effect, an AI solution does not need to take on any corporeal / physical form at all. An AI solution is not a robot; it could be embedded into a robot.

One could define the study of AI as a study of “mental faculties through the use of computational models.”[7] Another manner of defining the field in this way is stating that it is the study of the “computations that make it possible to perceive, reason and act.”[8] [9]

The idea of rational thought goes all the way back to Aristotle and his aim to formalize reasoning. This could be seen as a beginning of logic. It was adopted early on as one of the possible methods in AI research towards creating AI solutions. It is, however, difficult to implement, since not everything can be expressed in a formal logic notation and not everything is perfectly certain. Moreover, not all problems that can in principle be solved by logic are practically solvable that way.[10]

#8 Rational Action at the Core of AI Definitions.
A system is rational if “it does the ‘right thing’, given what it knows.” Here, a ‘rational’ approach is an approach driven by mathematics and engineering. As such “Computational Intelligence is the study of the design of intelligent agents…”[11] To have ‘agency’ means to have the autonomous ability and to be enabled to act / do / communicate with the aim to perform a (collective) task.[12] Scientists, with this focus in the field of AI, research “intelligent behavior in artifacts”.[13]

Such an AI solution that can function as a ‘rational agent’ applies a form of logical reasoning and would be an agent that can act according to given guidelines (i.e. input), yet do so autonomously, adapt to environmental changes, and work towards a goal (i.e. output) with the best achievable results (i.e. outcome) over a duration of time, in a given (changing) space influenced by uncertainties. The application of this definition would not always result in a useful AI application. Some complex situations are, for instance, better responded to with a reflex rather than with rational deliberation. Think about a hand on a hot stove…[14]

#9 Artificial Intelligence methods as goal-oriented agents.
“Artificial Intelligence is the study of agents that perceive the world around them, form plans, and make decisions to achieve their goals. Its foundations include mathematics, logic, philosophy, probability, linguistics, neuroscience and decision theory.”[15]

#10 AI Defined by Specific Research and Development Methods.
We can somewhat understand the possible meaning of the concept “AI” by looking at what some consider the different types or methods of AI, or the different future visions of such types of AI (in alphabetical order)[16]:

Activity Recognition

  • A system that knows what you are doing and acts accordingly. For instance: it senses that you carry many bags, so it automatically opens the door for you (without you needing to verbalize the need).

Affective Computing

  • a system that can identify the emotion someone showcases

Artificial Creativity

  • A system that can output something that is considered creative (e.g. a painting, a music composition, a written work, a joke, etc.)

Artificial Immune System

  • A system that functions like a biological immune system or that mimics its processes of learning and memorizing.

Artificial Life

  • A system that models a living organism

Artificial Stupidity

  • A system that adapts to the intellectual capacity of the form (life form, human) it interacts with or to the needs in a given context.

Automation

  • The adaptable mechanical acts coordinated by a system without the intervention of a human

Blockhead

  • A “fake” AI that simulates intelligence by referencing (vast) data repositories and regurgitating the information at the appropriate time. This system however does not learn.

Bot

  • A system that functions as a bodiless robot

ChatBot / ChatterBot

  • A system that can communicate with humans via text or speech giving the perception to the human (user) that it is itself also human. Ideally it would pass the Turing test.

Committee Machine

  • A system that combines the output from various neural networks. This could create a large-scale system.

Computer Automated Design

  • A system that can be put to use in areas of creativity, design and architecture that allow and need automation and calculation of complexities

Computer Vision

  • A system that can identify (specific) objects from visual data

Decision Support System

  • A system that adapts to contextual changes and supports human decision making

Deep Learning

  • A system operating on a sub-type of Machine Learning methods (see a future blog post for more info)

Embodied Agent

  • A system that operates in a physical or simulated “body”

Ensemble Learning

  • A system that applies many algorithms for learning at once.

Evolutionary Algorithms

  • A system that mimics biological evolutionary processes: birth, reproduction, mutation, decay, selection, death, etc. (see a future blog post for more info)

Friendly Artificial Intelligence

  • A system that is void of existential risk to humans (or other life forms)

Intelligence Amplification

  • A system that increases human intelligence

Machine Learning

  • A system of algorithms that learns from data sets and which is strikingly different from a traditional program (fixed by its code). (see a future blog post for more info)

Natural Language Processing

  • A system that can identify, understand and create speech patterns in a given language. (see a future blog post for more info)

Neural Network

  • A system that historically mimicked a brain’s structure and function (neurons in a network), though such systems are now driven by statistics and signal processing. (see another of my blog posts for more info here)

Neuro Fuzzy

  • A system that applies a neural network in combination with fuzzy logic, a non-linear, non-Boolean logic (values between 0 and 1, not only 0 or 1). It allows for further interpretation of vagueness and uncertainty

Recursive Self-Improvement

  • A system that allows for software to write its own code in cycles of self-improvement.

Self-replicating Systems

  • A system that can copy itself (hardware and or software copies). This is researched for (interstellar) space exploration.

Sentiment Analysis

  • A system that can identify emotions and attitudes embedded in human media (e.g. text)

Strong Artificial Intelligence

  • A system that has a general intelligence as a human does. This is also referred to as AGI or Artificial General Intelligence. This does not yet exist and might, if we continue to pursue it, take decades to come to fruition. When it does, it might start recursive self-improvement and autonomous reprogramming, creating an exponential expansion in intelligence well beyond the confines of human understanding. (see a future blog post for more info)

Superhuman

  • A system that can do something far better than humans can

Swarm Intelligence

  • A system that can operate across a large number of individual (hardware) units and organizes them to function as a collective

Symbolic Artificial Intelligence

  • An approach used between 1950 and 1980 that limits computations to the manipulation of a defined set of symbols, resembling a language of logic.

Technological Singularity

  • A hypothetical system of super-intelligence and rapid self-improvement out of the control and beyond the understanding of any human. 

Weak Artificial Intelligence

  • A practical system of singular or narrow applications, highly focused on a problem that needs a solution via learning from given and existing data sets. This is also referred to as ANI or Artificial Narrow Intelligence.

Project Concept Examples

Mini Project #___ : An Application of a Definition
Do you know any program or technological system that (already) fits this 5th definition?
How would you try to know whether or not it does?
Mini Project #___: Some Common Definitions of AI with Examples
Team work + Q&A: What is your team’s definition of AI? What seems to be the most accepted definition in your daily-life community and in a community of AI experts closest to you?
Reading + Q&A: Go through some popular and less popular definitions with examples.
Discussion: Which definition of AI feels more acceptable to your team; why? Which definition seems less acceptable to you and your team? Why? Has your personal and first definition of AI changed? How?
Objectives: The learner can bring together the history, context, types and meaning of AI into a number of coherent definitions.

References & URLs


[1] Krohn, J., et al. (2019, p. 102) on the importance of context in meaning-giving; NLP through Machine Learning and Deep Learning techniques

[2] Retrieved from Ville Valtonen at Reaktor and Professor Teemu Roos at the University of Helsinki’s “Elements of AI”, https://www.elementsofai.com/ , on December 12, 2019

[3] agent’ is from Latin ‘agere’ which means ‘to manage’, ‘to drive’, ‘to conduct’, ‘to do’. To have ‘agency’ means to have the autonomous ability and to be enabled to act / do / communicate with the aim to perform a (collective) task.

[4] Haugeland, J. (Ed.). (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: The MIT Press. p. 2 and footnote #1.

[5] Russell, S. and Peter Norvig. (2016). Artificial Intelligence: A Modern Approach. Third Edition. Essex: Pearson Education. p.2

[6] Russell. (2016). pp.2

[7] Winston, P. H. (1992). Artificial Intelligence (Third edition). Addison-Wesley.

[8] These are two definitions respectively from Charniak & McDermott (1985) and Winston (1992) as quoted in Russel, S. and Peter Norvig (2016).

[9] Charniak, E. and McDermott, D. (1985). Introduction to Artificial Intelligence. Addison-Wesley

[10] Russell (2016). pp.4

[11] Poole, D., Mackworth, A. K., and Goebel, R. (1998). Computational intelligence: A logical approach. Oxford University Press

[12] ‘agent’ is from Latin ‘agere’ which means ‘to manage’, ‘to drive’, ‘to conduct’, ‘to do’

[13] Russell. (2016). pp.2

[14] Russell (2016). pp.4

[15] Maini, V. (Aug 19, 2017). Machine Learning for Humans. Online: Medium.com. Retrieved November 2019 from e-Book https://www.dropbox.com/s/e38nil1dnl7481q/machine_learning.pdf?dl=0 or https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12

[16] Spacey, J. (2016, March 30). 33 Types of Artificial Intelligence. Retrieved from https://simplicable.com/new/types-of-artificial-intelligence  on February 10, 2020

Header image caption, credits & licensing:

Depicts the node connections of an artificial neural network

LearnDataSci / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)

source: https://www.learndatasci.com/

retrieved on May 6, 2020 from here

The Field of AI (Part 03): A Recent History


A Consideration on Stories of Recent Histories

This story is not a fixed point. Neither is this one story, here below, one that controls all possible AI stories. A history such as this one is a handpicking from a source that is richer than the story consequently put in front of you here, on a linear and engineered chronology. The entirety of the field of AI is more contextualized, with parallel storylines, faded-in trials and faded-out errors, with many small compounded successes and numerous complexities. Histories tend to be messy. This story does not pretend to do justice to that richness.

Just like a numerical dataset, history is a (swirling) pool of data. Just as an information processing unit, hopefully enabled to identify a relevant pattern, can still be prone to unwanted bias, ambiguities and probabilities with given uncertainties, so too is this narrative of a history of the dynamic field of AI studies, its research and its developments. In realizing this, one can only wish that the reader of this story shares the wish to grow towards an increased self-reliant literacy, nurtured with “data” (read the word “data” here as “historical resources” among more numerical sources) and analytical ability.

A Suggested Mini Project Concept for Students:

Mini Project #___ : 
Datasets & Pattern Recognition Opportunities are Everywhere
The above consideration could be part of any storytelling, and its implication is not an insurmountable problem. It’s an opportunity, especially in the area of data sciences associated with AI research and development. See this story here as an invitation to go out into the field and study more, to get a more nuanced sense of this history’s depths and the adventures within it. Try to see its datasets and their correlations, fluidities, contradictions and complexities. The abilities to do so are essential skills in the field of AI, as well as important skills for a human in a complex and changing world.
What “algorithm” might the author of this story here have used when recognizing a pattern from a given dataset in creating this story? (there is no right or wrong answer)
It’s almost obvious that a learner can only aspire toward something they know as something that existed, exists or could be imagined to exist. That is simultaneously true for learning from the data selection processes of another; for instance, the authoring of, and the applying of, the following history of the field of AI.
Can you create your own version of an AI history? What kind of filter, weight, bias or algorithm have you decided to use in creating your version of an AI history? 
Figure 01 Cells from a pigeon brain. Drawing made in 1899, of Purkinje cells (A) and granule cells (B) from pigeon cerebellum by Santiago Ramón y Cajal; Instituto Cajal, Madrid, Spain. Image: Public Domain Retrieved on March 27, 2020 from here


A Recent History of the Field of AI

Just like a present-day AI solution, a human learner too needs datasets to see the pattern of their own path within the larger field. Who knows, digging into the layers of AI history might spark a drive to innovate on an idea some researchers had touched on in the past yet had not further developed. This has been known to happen in a number of academic fields, of which the field of AI is no exception.[1] Here it is opted to present a recent history of the field of AI[2] with a few milestones from the 20th and the 21st century:

By the end of the 1930s and during the decade of 1940-1950, scientists and engineers joined with mathematicians, neurologists, psychologists, economists and political scientists to theoretically discuss the development of an artificial brain or of the comparison between the brain, intelligence and what computers could be (note, these did not yet exist in these earliest years of the 1940s).

In 1943, McCulloch & Pitts offered a theoretical proposal for a Boolean logic[3] circuit model of the brain.[4] These could be seen as the theoretical beginnings of what we know today as the Artificial Neural Networks.
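As a rough present-day illustration of that idea (a simplified sketch, not the formalism McCulloch & Pitts actually used), such a unit can be pictured as summing binary inputs and comparing the total against a threshold; with suitable weights and thresholds this already reproduces basic Boolean gates:

```python
# Illustrative sketch of a McCulloch-Pitts style unit: binary inputs,
# fixed weights, a threshold, and a Boolean (0/1) output.
def mcp_unit(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, the unit behaves like a logic gate.
AND = lambda a, b: mcp_unit([a, b], [1, 1], threshold=2)
OR = lambda a, b: mcp_unit([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```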

In 1950 Turing wrote his seminal paper entitled “Computing Machinery and Intelligence”.[5] Slightly later, in the 1950s, early AI programs included Samuel’s checkers game program, Newell & Simon’s Logic Theorist, and Gelernter’s Geometry Engine. It has been suggested that perhaps the first AI program was the Checkers game software. Games, such as Chess, Wéiqí (aka GO) and others (e.g. LěngPūDàshī, the poker-playing AI[6]) have been, and continue to be, important in the field of AI research and development.

While relatively little about it is said to have stood the test of time, in 1951 a predecessor of the first Artificial Neural Network was created by Marvin Minsky and Dean Edmonds.[7] It was named “SNARC”, which is short for “Stochastic Neural Analog Reinforcement Computer”.[8] The hardware system solved mazes: it simulated a rat finding its way through a maze. This machine was not yet a programmable computer as we know it today.

The academic field of “Artificial Intelligence Research” was created between 1955 and 1956.[9] In 1956, the term “AI” was suggested by John McCarthy; he is claimed to have done so during a conference that year at Dartmouth College in Hanover, New Hampshire, the USA. McCarthy defined AI as “… the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable…”[10]

At that same time, the “Logic Theorist” was introduced by other scientists as the first Artificial Intelligence application.[11] It was able to prove a number of mathematical theorems.

In January 1957 Frank Rosenblatt proposed the concept of a single-layered neural network. He invented the photoperceptron (“perceptron” for short), an electronic automaton and model analogous to the brain, in the simplest sense thinkable, that would have the ability to “learn” visual patterns and to process such “…human-like functions as perception, recognition, concept formation, and the ability to generalize from experience… [This system would get its inputs] directly from the physical world rather than requiring the intervention of a human agent to digest and code the necessary information.”[12] In short, the machine was aimed at recognizing and classifying a given geometric shape, following the input from a camera.
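For a flavor of the underlying mechanism, here is a minimal sketch (in Python, with invented toy data) of the classic perceptron learning rule, in which the weights are nudged whenever the unit misclassifies an example. It illustrates the principle only, not Rosenblatt’s actual photoperceptron hardware:

```python
# Minimal perceptron learning rule on made-up, linearly separable data.
import numpy as np

# Toy data (assumed): two features per example, labels are +1 or -1.
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, -1, -1])

w = np.zeros(2)    # weights
b = 0.0            # bias
learning_rate = 0.1

for epoch in range(10):
    for xi, target in zip(X, y):
        prediction = 1 if (np.dot(w, xi) + b) >= 0 else -1
        if prediction != target:           # update only on mistakes
            w += learning_rate * target * xi
            b += learning_rate * target

print("learned weights:", w, "bias:", b)
print("classifications:", [1 if np.dot(w, xi) + b >= 0 else -1 for xi in X])
```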

It is natural and healthy in the sciences to inquire with intellectual integrity and wonder, and to insist on verification, corroboration and falsifiability[13] of theories and designs. So too did the photoperceptron design not escape scrutiny and the common peer review.[14] At one point, the perceptron was perceived to be of debated applicability and of contested interest as a basis for further research and developments within AI.[15] Several decades later, following a couple of “AI Winters”, academic meanderings, and a substantial increase in computational power and processing techniques, it would turn out to be a fruitful basis for a specific area of AI research and development: Machine Learning, its area of Deep Learning and its multilayered Artificial Neural Networks.[16]

1959: The term “Machine Learning” was coined by the IBM electrical engineer and Stanford Professor Arthur Lee Samuel. He wrote the first successful self-learning program. It played a game of Checkers.[17] This was an early demonstration of an AI-type application of the kind that would become a hot topic in the 21st century and into present-day academic work.

The period from the nineteen fifties (1950s) into the earliest years of the nineteen seventies (early 1970s): during this two-decades-long period there was a lot of excitement around the promises suggested within the research and developments in the Artificial Intelligence field.

In 1965 Robinson invented an algorithm[18] for logical reasoning that laid the groundwork for a form of computer programming,[19] allowing for the automated proving of mathematical theorems.

Around 1970 Minsky and Papert considered the multilayered perceptron (MLP)[20] which could be seen as a theoretical predecessor to the multilayered neural networks as they are researched and developed today, in the AI sub-field of Machine Learning and its Deep Learning techniques.

Reflecting back on the years around 1973,[21] some voices speak of the first “AI Winter”,[22] while others don’t seem to mention this period at all.[23] Either way, during this time two forces are perceived to have collided: one was that of some academics and other people with a wish to do research in specific directions in the field of AI; they continued needing money. However, other academics with some understandable doubts,[24] and those controlling the funds,[25] no longer believed much in the (inflated) promises made within the AI research of that period in history. Since money as a resource became limited, research and development slowed down. More focused and result-oriented work was required to obtain funds. At least, so it seemed for a period of time, until after the mid-seventies or until the early Eighties (1980s).[26] Depending on the historical source, this time period has been demarcated rather differently (and perhaps views on what counts as significant might differ).[27]

Fading in from the early 1970s and lasting until the early 1990s, the AI research and developmental focus was on what are referred to as Knowledge-based approaches. Those designing these types of solutions sought to “hard-code knowledge about the world in formal languages…” However, “…none of these projects has led to a major success. One of the most famous such projects is Cyc…”[28] Humans had to physically code the solutions, which created a number of concerns and problems. The experts could not sufficiently and accurately code all the nuances of the reality of the world around the topic which the application was supposed to “intelligently” manage.

With the earliest introductions in 1965 by Edward Feigenbaum, one could continue finding further, yet still early, developments of these “Knowledge-based Systems”[29] (KBS). The development of these systems continued into the 1970s, and some of them then came to further (commercial) fruition during the 1980s in the form of what was by then called “Expert Systems” (ES). The two kinds of systems, KBS and ES, are not exactly the same, but they are historically connected. These later systems were claimed to represent how a human expert would think through a highly specific problem. In this case the processing was conducted by means of IF-THEN rules. During the 1980s, mainstream AI research and development focused on these “Logic-based, Programmed Expert Systems”. Prolog, a programming language initially aimed at Natural Language Processing,[30] has been one of the favorites for designing Expert Systems.[31] All expert systems are knowledge-based systems; the reverse is not true. By the mid-1980s Professor John McCarthy would criticize these systems as not living up to their promises.[32]
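Purely as a toy illustration of the IF-THEN idea (the domain and rules below are invented; historical Expert Systems were typically written in languages such as Prolog or Lisp and chained far larger, expert-curated rule bases), a hand-coded rule set might look like this:

```python
# Hypothetical, hand-coded IF-THEN rules in the spirit of an Expert System.
def diagnose(facts):
    """Return a conclusion from a set of observed facts (assumed example domain)."""
    if "fever" in facts and "cough" in facts:
        return "possible flu"
    if "sneezing" in facts and "itchy eyes" in facts:
        return "possible allergy"
    if "fever" in facts:
        return "possible infection"
    return "no conclusion from the known rules"

print(diagnose({"fever", "cough"}))          # -> possible flu
print(diagnose({"sneezing", "itchy eyes"}))  # -> possible allergy
print(diagnose({"headache"}))                # -> no conclusion from the known rules
```

The hard limitation the paragraphs above point to shows even in this toy: every nuance of the world must be anticipated and typed in by a human.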

In the late Eighties (late 1980s), Carver Mead[33] introduced the idea of mimicking the structure and functions of neuro-biological architectures (e.g. of brains, or of the eye’s retina and visual perception) in the research and development of AI solutions (both in hardware and software). This approach (especially in chip design) has increasingly become known as “Neuromorphic Engineering”. This is considered a sub-field of Electrical Engineering.

Jumping forward to present-day research and development, “neuromorphic computing” implies the promise of a processing of data in more of an analog manner rather than the digital manner traditionally known in our daily computers. It could, for instance, imply the implementation of artificial neural networks onto a computer chip.  This, for instance, could mean that the intensity of the signal is not bluntly on or off (read: 1 or 0) but rather that there could be a varying intensity.   One could read more in relation to this and some forms of artificial neural networks by, for instance, looking at gates, thresholds, and the practical application of the mathematical sigmoid function; to name but a few.
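As a small illustrative sketch of that contrast, the step function below is bluntly on or off, while the sigmoid function mentioned above varies smoothly between 0 and 1:

```python
# Contrast a blunt on/off (step) signal with the smoothly varying sigmoid.
import math

def step(x):
    return 1 if x >= 0 else 0          # output is only 0 or 1

def sigmoid(x):
    return 1 / (1 + math.exp(-x))      # output varies continuously between 0 and 1

for x in [-4, -1, 0, 1, 4]:
    print(f"x={x:>3}  step={step(x)}  sigmoid={sigmoid(x):.3f}")
```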

Simultaneously, these later years, from 1988 until about 1995, can, some claim, be referred to as the second surge of an “AI Winter”.[34] Some seem to put the period a few years earlier.[35] The accuracy of the years put aside, resources became temporarily limited again. Research and its output were perceived to be at a low. Concurrently, one might realize that this does not imply that all research and developments in both computing hardware and software halted during these proverbial winters. The work continued, albeit for some with additional constraints or under a different name or field (not as “AI”). One might agree that in science, research and development across academic fields seem to ebb, flow and meander, yet persist with grit.

From 1990 onward slowly, but surely, the concepts of probability and “uncertainty” took more of the center stage (i.e. Bayesian networks). Statistical approaches became increasingly important in work towards AI methods. “Evolution-based methods, such as genetic algorithms and genetic programming” helped to move AI research forward.[36] It was increasingly hypothesized that a learning agent could adapt to (or read: learn from) the changing attributes in its environment. Change implies that some events become more probable over time while other attributes, as events, become less probable.
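As a minimal illustration of that probabilistic turn (all numbers below are assumed purely for the example), Bayes’ rule updates a belief about an event once new evidence is observed:

```python
# Toy Bayesian update: revise the belief in an event after an observation.
prior = 0.30          # P(event): initial belief that the event occurs
likelihood = 0.80     # P(observation | event)
false_alarm = 0.10    # P(observation | no event)

evidence = likelihood * prior + false_alarm * (1 - prior)
posterior = likelihood * prior / evidence   # P(event | observation)

print(f"belief before the observation: {prior:.2f}")
print(f"belief after the observation:  {posterior:.2f}")
```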

AI solutions started to extract patterns from data sets rather than be guided by lines of code only. This probabilistic approach, in combination with further algorithmic developments, gradually heralded a radically different approach from the previous “Knowledge-based Systems”. This approach to AI solutions has ushered in what some refer to as the AI Spring,[37] which some perceive in the past few years up till the present day.[38]

In the twenty-first century to present day, Artificial Neural Networks have been explored in academic research with increasing success. More and more, it became clear that huge data sets of high quality were needed to make a specific area of research in AI, known as Machine Learning, more powerful.

During the first decade of this century, Professor Li Feifei oversaw the creation of such a huge, high-quality image dataset, which would become one of the milestones in boosting confidence back into the field of AI and into the quality of algorithm design.[39]

This story now arrived at the more recent years of AI Research and Development.

Following the first decade of this twenty-first century, an increasing usage of GPUs (graphical processing units) as hardware to power Machine Learning applications could be seen. Presently, even more advanced processing hardware is suggested and applied.

Special types of Machine Learning solutions are being developed and improved upon. Specifically, Deep Learning appeared on the proverbial stage. The developments in Artificial Neural Networks and the layering of these networks became another important boost in the perception of the potential of applications coming out of AI Research and Development (R&D).[40]

Deep Learning is increasingly becoming its own unique area of creative and innovative endeavor within the larger Field of AI.

Globally, major investments (in the tens of billions of dollars) have been made into AI R&D. There is a continued and even increasing hunger for academics, experts and professionals in various fields related to, or within, the field of AI.

The historical context of the field of AI, of which the above is a handpicked narrative, has brought us to where we are today. How this research is applied and will be developed for increased application will need to be studied, tried and reflected on, with continued care, considerate debate, creative spirit and innovative drive.


Footnotes & URLs to Additional Resources

[1] Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611–659. London: Sage Publications

[2]  Here loosely based on: Professor Dan Klein and Professor Pieter Abbee. (January 21st, 2014)  CS188 “Intro to AI” Lecture. UC Berkeley.

[3] George Boole (1815 – 1864) came up with a kind of algebraic logic that we now know as Boolean logic in his works entitled The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854). He also explored general methods in probability. A Boolean circuit is a mathematical model, with calculus of truth values (1 = true; 0 = false) and set membership, which can be applied to a (digital) logical electronic circuitry.

[4] McCulloch, W.. & Pitts, W. (1943; reprint: 1990). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, Vol. 5, pp.115-133. Retrieved online on February 20, 2020 from  https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf   

[5] Turing, A.M. (1950). Computing Machinery and Intelligence. Mind 49: 433-460. Retrieved November 13, 2019 from http://cogprints.org/499/1/turing.html and https://www.csee.umbc.edu/courses/471/papers/turing.pdf

[6] Spice, B. (April 11, 2017). Carnegie Mellon Artificial Intelligence Beats Chinese Poker Players. Online: Carnegie Mellon University. Retrieved January 7, 2020 from https://www.cmu.edu/news/stories/archives/2017/april/ai-beats-chinese.html 

[7] Martinez, E. (2019). History of AI. Retrieved on April 14, 2020 from https://historyof.ai/snarc/

[8] Minsky, M. (2011). Building my randomly wired neural network machine. Online: Web of Stories   Retrieved on April 14, 2020 from https://www.webofstories.com/play/marvin.minsky/136;jsessionid=E0C48D4B3D9635BA883747C9A925B064

[9] Russell, S. and Peter Norvig. (2016) and McCorduck, Pamela. (2004). Machines Who Think. Natick, MA.: A K Peters, Ltd.

[10] McCarthy, J. (2007). What is AI? Retrieved on December 5th, 2019 from http://www-formal.stanford.edu/jmc/whatisai/node1.html This webpage also offers a nice, foundational and simple conversation about intelligence, IQ and related matters. 

[11] McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick: A K Peters, Ltd

[12] Rosenblatt, F. (January, 1957). The Perceptron. A Perceiving and Recognizing Automaton. Report No. 85-460-1. Buffalo (NY): Cornell Aeronautical Laboratory, Inc. p. 1 & 30 Retrieved on January 17, 2020 from https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf  

[13] Popper, K. (1959, 2011). The Logic of Scientific Discovery. Taylor and Francis

[14] Minsky, M. and Papert, S.A. (1971). Artificial Intelligence Progress Report. Boston, MA:MIT Artificial Intelligence Laboratory. Memo No. 252.  pp. 32 -34 Retrieved on April 9, 2020 from https://web.media.mit.edu/~minsky/papers/PR1971.html or  http://bitsavers.trailing-edge.com/pdf/mit/ai/aim/AIM-252.pdf

[15] Minsky, M. and Papert, S.A. (1969, 1987). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: The MIT Press

[16] Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611–659. London: Sage Publications

[17] Samuel, A.L. (1959, 1967, 2000). Some Studies in Machine Learning Using the Game of Checkers. Online: IBM Journal of Research and Development, 44(1.2), 206–226. doi:10.1147/rd.441.0206 Retrieved February 18, 2020 from https://dl.acm.org/doi/10.1147/rd.33.0210 and  https://www.sciencedirect.com/science/article/abs/pii/0066413869900044 and https://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf

[18] It is known as the “unification algorithm”. Robinson, John Alan (January 1965). A Machine-Oriented Logic Based on the Resolution Principle. J. ACM. 12 (1): 23–41 Retrieved on March 24, 2020 from https://dl.acm.org/doi/10.1145/321250.321253 and https://web.stanford.edu/class/linguist289/robinson65.pdf

[19] The form is what could now be referred to as a logic-based declarative programming paradigm: the code tells a system what you want it to do, by means of formal logic facts and rules for some problem, and not exactly how, step by step, it needs to do it. There are at least 2 main paradigms, each with their own sub-categories. This logic-based one is a subcategory of the declarative programming set of coding patterns and standards. The other main paradigm (with its subsets) is imperative programming, which includes object-oriented and procedural programming. The latter includes the C language. See Online: Curlie. Retrieved on March 24, 2020 from https://curlie.org/Computers/Programming/Languages/Procedural  Examples of (class-based) object-oriented imperative programming languages are C++, Python and R. See: https://curlie.org/en/Computers/Programming/Languages/Object-Oriented/Class-based/

[20] Minsky, M. and Papert, S.A. (1969, 1987) p. 231 “Other Multilayer Machines”.

[21] Lighthill, Sir J. (1972). Lighthill Report: Artificial Intelligence: A General Survey. Retrieved on April 9, 2020 from http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm and https://pdfs.semanticscholar.org/b586/d050caa00a827fd2b318742dc80a304a3675.pdf and http://www.aiai.ed.ac.uk/events/lighthill1973/

[22] Bostrom, N. (2014). Superintelligence. Paths, Dangers, Strategies. Oxford: Oxford University Press. p. 22

[23] McCorduck, P. (2004). pp. xxviii – xxix

[24] Minsky, M. and Papert, S.A. (1969, 1987)

[25] Historic Examples: Pierce, J. R. et al (1966). Language and Machines: Computers in Translation and Linguistics. Washington D. C.: The Automatic Language Processing Advisory Committee (ALPAC). Retrieved on April 9, 2020 from The National Academies of Sciences, Engineering and Medicine at   https://www.nap.edu/read/9547/chapter/1 alternatively: http://www.mt-archive.info/ALPAC-1966.pdf

[26] Hutchins, W. J. (1995). Machine Translation: A Brief History. In Koerner, E. E.K. .et al (eds). (1995). Concise history of the language sciences: from the Sumerians to the cognitivists. Pages 431-445. Oxford: Pergamon, Elsevier Science Ltd. p. 436

[27] Russell, S. et al. (2016, p. 24) doesn’t seem to mention this first “AI Winter” and only mentions the later one, by the end of the 1980s; nor does McCorduck, Pamela. (2004) pp. xxviii – xxix. Ghatak, A. (2019, p. vii), however, identifies more than one, as do Maini, V., et al. (Aug 19, 2017), Mueller, J. P. et al. (2019, p. 133) and Chollet, F. (2018, p. 12). Perhaps these authors, who are mainly focusing on Deep Learning, see the absence of research following Rosenblatt’s perceptron as a “winter”.

[28] Goodfellow, I., et al. (2016, 2017). Deep Learning. Cambridge, MA: The MIT Press. p. 2

[29] More in-depth information can be found in the journal of the same name: https://www.journals.elsevier.com/knowledge-based-systems

[30] Hutchins, W. J. (1995). p. 436

[31] Some Prolog resources related to expert systems: https://www.metalevel.at/prolog/expertsystems AND https://en.wikibooks.org/wiki/Prolog

[32] McCarthy, J. (1996). Some Expert Systems need Common Sense. Online: Stanford University, Computer Science Department. Retrieved on April 7, 2020 from   http://www-formal.stanford.edu/jmc/someneed/someneed.html

[33] Mead, C. Information Retrieved on April 8, 2020 from http://carvermead.caltech.edu/ also see Mead, C. (1998). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley from https://dl.acm.org/doi/book/10.5555/64998

[34] Russell (2016) p. 24

[35] McCorduck (2004) p. 418

[36] Bostrom, N. (2014). Superintelligence. Paths, Dangers, Strategies. Oxford: Oxford University Press. pp.24

[37] Manyika, J. et al (2019). The Coming of AI Spring. Online: McKinsey Global Institute. Retrieved on April 9, 2020 from https://www.mckinsey.com/mgi/overview/in-the-news/the-coming-of-ai-spring

[38] Olhede, S., & Wolfe, P. (2018). The AI spring of 2018. Significance, 15(3), 6–7. Retrieved on April 9, 2020 from https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2018.01140.x

[39] Deng, J. et al. (2009). ImageNet: A Large-Scale Hierarchical Image Database. Online: Stanford Vision Lab, Stanford University & Princeton University Department of Computer Science. Retrieved April 7, 2020 from http://www.image-net.org/papers/imagenet_cvpr09.pdf

[40] Trask, A. W. (2019). Grokking Deep Learning. USA: Manning Publications Co. p. 170

Predicting Pandoflux: A Natural Shift in Artificial Sentiments on Emerging Planetary Patterns

While browsing LinkedIn one can quickly sense the site is filled with professionalized visionaries professing the future.

This is the wonderful imagination we can expect from entrepreneurs, inventors, innovators, some makers, a few or more artists, a number of artisans, a whole lot of movers & shakers and policy makers’ think-tank spokespersons, who all frequent this social platform.

These days and months, since the end of 2019 into 2020, I have been noticing a shift in how that “just-a-flu” morphed into an emergency for a relative few, into a pandemic for some more, and into a harbinger of dramatic change to the human species, mapped with or without climate change (as an instigator of epidemics).

For some this has “suddenly” appeared in the last two weeks or so. For others — that is, for those who are global nomads or global citizens, anyone from around the world living in China yet with loved ones around the world, any Chinese citizen living elsewhere in the world — this is now going into its 4th month and counting. For even fewer this was, now in retrospect, foreseeable; or so the power of probability theory offers us.

Making a forecast, in the spirit of this biased opinion piece, I foresee being influenced in an emotionally heightened manner, as I have been, for another 3 months. That’s a very personal event of 6 to 7 months for each one of those who fit the above category. That is, if I’m allowed to be a bit too self-centered, not anticipating the passing of anyone close in these coming months because of virus-related complications.

Then there will be the echoes and reflections (and hopefully as little fall-out as possible) following this. Perhaps adding another 6 months? I’m just using a wild unsubstantiated version of prediction. I will call this my not so impressive “prediction” of the “Pandoflux”. Is a world of, and a world in, change a progressive world? Or, is progress what we do with change in relation to others and their context?

My prediction is not impressive to me since I also sense that flux seems simply inherent, even at a cellular or deeper level, even if we are imposingly conserving. The latter too will pass, while its mechanism seems ever there?

Although I am very serious when I smile, is this attitude implied here too flippant or is it rather a watered-down version of a Taoist view on the world? At the least, I want you to think with me. Give it a moment.

“Things will never be the same again… we will never go back to how it was.” Previously, in a pre-pandemic sense, such statements seemed to come with an undertone of optimism and progressive thinking. Now, peri-pandemic, they sound as if driven by fear and loss. It does not have to be, though.

Again, without wanting to be callous or frivolous, nothing ever is the same and one can never ever go back to how something was before. That is, unless the effect of the memory of a change can be wiped from any mind that has been zapped back into a previous state. You know, like a reset button and a factory preset, as the one suffered by Buzz Lightyear in one of the Toy Story animation franchise films. Buzz too could not forget his previous setting.

Humanity and its events, however, are not a cartoon. It might seem like one, at times, but this tends to smell of sarcasm, disdain or at least of irony at the awkward moment. Indeed, perhaps this writing runs that risk as well.

When is the right moment to speak of change? Where and by who? When can we observe markers of change? What is such marker but a trigger of a parameter in a probability calculation of an environment that has always been in flux and has thrived on change?

In that regard, and as a side note, is a Machine Learning application an agent of change? Is it rather an agent in a process of corroboration that change is inherently part of the human experience and nature, as formalized via the field of advanced Calculus? Is perhaps such an AI application a neurotic obsession with control and its implied hanging onto a veil of pseudo-fixed and comforting insight?

After all, is a pattern not a pattern because it does not change? Or does it? What shall we call a pattern that is not to be recognized as a fixed pattern; chaos or rather, life?

I choose the latter.

When some individuals reminisce over the obvious how-it-was and the yet unknown changes to come, which dynamic pattern do they envision? A Chaotic one or one of LIFE?

In the struggles we face, whichever type, form, degree or function, we humans do want a sense of meaning as to the changes or the continuity these struggles imply. We make choices. We choose and recognize patterns.

This choice is there even if it is the meaning-giving idea of letting-go, breathing-out, moving-on and not looking for or clinging-on meaning in one attribute of a struggle in question. That is meaning. It could specifically be concerning if the meaning-giving labeling turns out as a painfully meaning-less one; driving one to the brink of or into madness and despair. That too is meaning. Meaning-giving is geared towards giving a future to a past event or to an event imagined becoming a past.

It is equally so as it is with communication: there is no such thing as no communication. One cannot not-communicate with one’s brain, that meaning-giving thing between our ears. Even if we try to delegate this meaning-creation to the artificial realm of Machine Learning, this meaning-giving is inescapable.

On the other multiple ends of this 4-dimensional spectrum (yes, try to imagine this in a 3D high fidelity manner with a variable changing attribute over time), we can either observe small-minded yet large-sounding conspiracies of contrasting flavors and we can also see analyses of large Geo-political potentials and paradigm shifts.

This morning I was presented with a snippet of just that; the latter that is. The former is too irritating to me, while I do care about its extreme dangers.

In the earliest hours of the morning (I wake up very early), I was listening to the BBC World Service and its Newshour show. In it, the astronomic numbers of applications for unemployment benefits in the USA were discussed. The data indicated about 10 million individuals were “shed” from their previous employment. Yes, “shed,” a word used in reporting as if humans were prickly needles from an evergreen pine tree that surprisingly loses its convoluted leaves. They, and those without health insurance in the USA, were discussed, and this was followed by an interview with Noam Chomsky. He was introduced as the academic who has “a radical solution to the economic shock”, yet who himself has repeatedly, and in this interview, rebutted this by stating that neo-liberalism is the radical paradigm here.

The episode, Newshour-20200402-USJoblessClaimsHitNewRecord, was retrieved on April 3, 2020 from http://www.bbc.co.uk/programmes/w172x2ylvg5rx9l

I suppose there is a reason why this morning the BBC, of all newscasters, suddenly interviewed Professor Noam Chomsky… no longer only Ms. Amy Goodman does so…

Later that same morning, I was sent a second item. It was an audio-video recording of an interview given by the present-day governor of New York State.


Yes, one competes in a free market construct. Is this “free,” though? Is the following forecast, here below, of not-so-much-change too esoteric? Could it, in the end, be the common USA citizens, with house loans and student loans in the billions, some of whom cannot afford insurance, who shall pay for this? Is this an attribute of the so-called change we have to see happen (from our distance)?

This pandemic could very well mark a massive shift in some human consciousness that previously did not see the issues we are facing. That is without even linking this to climate change, a link which had already been made before the pandemic, in that it was suggested that with climate alterations pandemics might become more frequent.

It might be that the idea of nothing ever being the same again, which some are talking about, is the re-delegation of education to parents turned teachers, on top of their in-house distant working, their gig-economy project, their home-cooked meals and their in-house floor-mopping.

Isn’t human civilization (at least quantitatively) perceived as great because its members have invented the process of delegation? At least, one person is not looking forward to this change in delegational power:

Or, the foreseen change might be that new EdTech APP we can innovate on with increased human-originated data collection and Machine Learning processes in support of the mother company and its marketing or advertising-placement strategies.

Will it be a never-seen-before change in child-like bickering, finger-pointing, belly-button staring and saddening forms of competing between (nation) states?

Or, is it a change in a form derived from that which people such as Noam Chomsky are speaking of ?

Humans are living proof of the possibility of a multitude of patterns in change and in changing patterns. Surely changing patterns of and in life are as well. Life and lives without meaning and meaning without life and lives, are not the changes a human needs.

Luckily for you, I cannot offer you any of these or other such changes, nor am I a forecaster.

I breathe out. At the least, I can offer one constant of hope: be well and do well, my fellow earthling.