
The Field of AI (Part 03): A Recent History


A Consideration on Stories of Recent Histories

This story is not a fixed point, nor is it the one story that governs all possible AI stories. A history such as this one is handpicked from a source far richer than the linear, engineered chronology put in front of you here. The field of AI in its entirety is contextualized by parallel storylines, faded-in trials and faded-out errors, many small compounded successes and numerous complexities. Histories tend to be messy. This story does not pretend to do justice to that richness.

Just like a numerical dataset, history is a (swirling) pool of data. Just as an information processing unit, hopefully able to identify a relevant pattern, can still be prone to unwanted bias, ambiguity and probabilities with given uncertainties, so too is this narrative of a history of the dynamic field of AI, its research and its developments. Realizing this, one can only hope that the reader of this story shares the wish to grow toward an increasingly self-reliant literacy, nurtured with “data” (read the word “data” here as “historical resources” alongside more numerical sources) and analytical ability.

A Suggested Mini Project Concept for Students:

Mini Project #___ : 
Datasets & Pattern Recognition Opportunities are Everywhere
The above consideration could be part of any storytelling, and its implication is not an insurmountable problem. It’s an opportunity, especially in the area of data sciences associated with AI research and development. See this story as an invitation to go out into the field and study more, to get a more nuanced sense of this history’s depths and the adventures within it. Try to see its datasets and their correlations, fluidities, contradictions and complexities. The ability to do so is an essential skill in the field of AI, as well as an important skill for a human in a complex and changing world.
What “algorithm” might the author of this story here have used when recognizing a pattern from a given dataset in creating this story? (there is no right or wrong answer)
It’s almost obvious that a learner can only aspire toward something they know as something that existed, exists or could be imagined to exist. That is simultaneously true for learning from the data selection processes of another, for instance, the authoring and the applying of the following history of the field of AI.
Can you create your own version of an AI history? What kind of filter, weight, bias or algorithm have you decided to use in creating your version of an AI history? 
Figure 01 Cells from a pigeon brain. Drawing made in 1899, of Purkinje cells (A) and granule cells (B) from pigeon cerebellum by Santiago Ramón y Cajal; Instituto Cajal, Madrid, Spain. Image: Public Domain Retrieved on March 27, 2020 from here


A Recent History of the Field of AI

Just like a present-day AI solution, a human learner needs datasets to see the pattern of their own path within the larger field. Who knows, digging into the layers of AI history might spark a drive to innovate on an idea some researchers touched on in the past yet did not develop further. This has been known to happen in a number of academic fields, of which the field of AI is no exception.[1] Here it is opted to present a recent history of the field of AI[2] with a few milestones from the 20th and 21st centuries:

By the end of the 1930s and during the decade 1940–1950, scientists and engineers joined with mathematicians, neurologists, psychologists, economists and political scientists to theoretically discuss the development of an artificial brain, or the comparison between the brain, intelligence and what computers could be (note: computers did not yet exist in these earliest years of the 1940s).

In 1943, McCulloch & Pitts offered a theoretical proposal for a Boolean logic[3] circuit model of the brain.[4] This could be seen as the theoretical beginning of what we know today as Artificial Neural Networks.
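
To make the idea concrete, here is a minimal sketch, in Python, of a McCulloch-Pitts style unit (a modern reading for illustration, not the notation of the 1943 paper): binary inputs are summed against a fixed threshold, and Boolean functions such as AND and OR emerge from the choice of weights and threshold.

```python
# A McCulloch-Pitts style threshold unit: inputs and outputs are 0 or 1,
# and the unit "fires" when the weighted sum of its inputs reaches the threshold.

def mcp_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Boolean AND: both inputs must be active before the unit fires.
AND = lambda a, b: mcp_unit([a, b], weights=[1, 1], threshold=2)
# Boolean OR: a single active input is enough.
OR = lambda a, b: mcp_unit([a, b], weights=[1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
```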

In 1950 Turing wrote his seminal paper entitled “Computing Machinery and Intelligence”.[5] Slightly later, in the 1950s, early AI programs included Samuel’s checkers game program, Newell & Simon’s Logic Theorist and Gelernter’s Geometry Engine. It has been suggested that perhaps the first AI program was the checkers game software. Games such as Chess, Wéiqí (aka GO) and others (e.g. LěngPūDàshī, the poker-playing AI[6]) have been, and continue to be, important in the field of AI research and development.

While relatively little about it is said to have stood the test of time, in 1951 a predecessor of the first Artificial Neural Network was created by Marvin Minsky and Dean Edmonds.[7] It was named “SNARC”, short for “Stochastic Neural Analog Reinforcement Computer”.[8] The hardware system solved mazes: it simulated a rat finding its way through a maze. This machine was not yet a programmable computer as we know it today.

The academic field of “Artificial Intelligence Research” was created between 1955 and 1956.[9] In 1956, the term “AI” was suggested by John McCarthy, reportedly during a Dartmouth College conference that same year in Hanover, New Hampshire, USA. McCarthy defined AI as “… the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable…”[10]

Around that same time, the “Logic Theorist” was introduced by other scientists as the first Artificial Intelligence application.[11] It was able to prove a number of mathematical theorems.

In January 1957 Frank Rosenblatt proposed the concept of a single-layered neural network. He invented the photoperceptron (“perceptron” for short), an electronic automaton and model analogous to the brain, in the simplest sense thinkable, that would have the ability to “learn” visual patterns and to process such “…human-like functions as perception, recognition, concept formation, and the ability to generalize from experience… [This system would get its inputs] directly from the physical world rather than requiring the intervention of a human agent to digest and code the necessary information.”[12] In short, the machine was aimed at recognizing and classifying a given geometric shape, following the input from a camera.
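
As a rough illustration, here is a minimal sketch, in Python, of the perceptron learning rule as it is usually presented today (a much-simplified modern reading, not Rosenblatt’s original hardware): the weights are nudged whenever the predicted class disagrees with the label.

```python
# A single perceptron trained on a toy, linearly separable dataset.

def predict(weights, bias, x):
    """Binary threshold unit: fire (1) if the weighted sum plus bias is positive."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    n = len(samples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, label in samples:
            error = label - predict(weights, bias, x)   # -1, 0 or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy stand-in for "shapes seen by a camera": 2-D points in two classes.
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.8, 0.9], 1), ([0.9, 0.8], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # expected: [0, 0, 1, 1]
```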

It is natural and healthy in the sciences to inquire with intellectual integrity and wonder, and to insist on verification, corroboration and falsifiability[13] of theories and designs. The photoperceptron design did not escape such scrutiny and common peer review either.[14] At one point, the perceptron was perceived to be of debated applicability and of contested interest as a basis for further research and development within AI.[15] Several decades later, following a couple of “AI Winters”, academic meanderings, and a substantial increase in computational power and processing techniques, it would turn out to be a fruitful basis for a specific area of AI research and development: Machine Learning, its area of Deep Learning and its multilayered Artificial Neural Networks.[16]

1959: The term “Machine Learning” was coined by the IBM electrical engineer and Stanford Professor Arthur Lee Samuel. He wrote the first successful self-learning program, which played checkers.[17] This was an early demonstration of an AI-type application of a kind that would become a hot topic in the 21st century and into present-day academic work.

From the 1950s into the earliest years of the 1970s: during this two-decade-long period there was a lot of excitement around the promises suggested within research and development in the Artificial Intelligence field.

In 1965 Robinson invented an algorithm[18] for logical reasoning that laid the groundwork for a form of computer programming,[19] allowing for the automated proving of mathematical theorems.

Around 1970 Minsky and Papert considered the multilayered perceptron (MLP),[20] which could be seen as a theoretical predecessor to the multilayered neural networks as they are researched and developed today in the AI sub-field of Machine Learning and its Deep Learning techniques.

Reflecting back on the years around 1973,[21] some voices speak of the first “AI Winter”,[22] while others do not seem to mention this period at all.[23] Either way, during this time two forces are perceived to have collided. On one side were academics and others who wished to do research in specific directions in the field of AI and who continued to need money. On the other side, academics with some understandable doubts[24] and those controlling the funds[25] no longer believed much in the (inflated) promises made within the AI research of that period. As money became limited, research and development slowed down. More focused, result-oriented work was required to obtain funds. At least, so it seemed for a period of time, until after the mid-seventies or until the early 1980s.[26] Depending on the historical source, this time period has been demarcated rather differently (and perhaps views on what counts as significant differ).[27]

Fading in from the early 1970s and lasting until the early 1990s, the AI research and development focus was on what are referred to as Knowledge-based approaches. Those designing these types of solutions sought to “hard-code knowledge about the world in formal languages…” However, “…none of these projects has led to a major success. One of the most famous such projects is Cyc…”[28] Humans had to manually code the solutions, which created a number of concerns and problems. The experts could not sufficiently and accurately code all the nuances of the reality of the world around the topic which the application was supposed to “intelligently” manage.

With the earliest introductions in 1965 by Edward Feigenbaum, one could continue finding further, yet still early, developments of these “Knowledge-based Systems”[29] (KBS). The development of these systems continued into the 1970s, some of which then came to further (commercial) fruition during the 1980s in the form of what were by then called “Expert Systems” (ES). The two systems, KBS and ES, are not exactly the same, but they are historically connected. These later systems were claimed to represent how a human expert would think through a highly specific problem. In this case the processing method was conducted by means of IF-THEN rules. During the 1980s, mainstream AI research and development focused on these “Logic-based, Programmed Expert Systems”. Prolog, a programming language initially aimed at Natural Language Processing,[30] has been one of the favorites for designing Expert Systems.[31] All expert systems are knowledge-based systems; the reverse is not true. By the mid-1980s Professor John McCarthy would criticize these systems as not living up to their promises.[32]
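
To give a flavor of the IF-THEN style of reasoning described above, here is a minimal sketch in Python rather than Prolog (the facts, rule names and “diagnosis” are hypothetical, chosen only to illustrate forward chaining over hand-coded rules):

```python
# A tiny forward-chaining rule engine: rules fire on the current facts and
# add new facts until nothing changes.

facts = {"engine_cranks": True, "fuel_in_tank": False}

rules = [
    # (rule name, condition over the facts, conclusion added when it fires)
    ("R1", lambda f: f.get("engine_cranks") and not f.get("fuel_in_tank"),
     ("diagnosis", "out of fuel")),
    ("R2", lambda f: not f.get("engine_cranks"),
     ("diagnosis", "check battery")),
]

changed = True
while changed:
    changed = False
    for name, condition, (key, value) in rules:
        if condition(facts) and facts.get(key) != value:
            facts[key] = value
            print(f"{name} fired -> {key} = {value!r}")
            changed = True

print(facts)
```

The hand-coded nature of such rules is exactly the limitation noted above: every nuance of the world has to be anticipated and written down by a human expert.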

In the late 1980s, Carver Mead[33] introduced the idea of mimicking the structure and functions of neuro-biological architecture (e.g. of brains, or of the eye’s retina and visual perception) in the research and development of AI solutions (both in hardware and software). This approach (especially in chip design) has increasingly become known as “Neuromorphic Engineering”. It is considered a sub-field of Electrical Engineering.

Jumping forward to present-day research and development, “neuromorphic computing” implies the promise of processing data in a more analog manner rather than the digital manner traditionally found in our daily computers. It could, for instance, imply the implementation of artificial neural networks onto a computer chip. This could mean that the intensity of a signal is not bluntly on or off (read: 1 or 0) but rather of varying intensity. One could read more in relation to this and some forms of artificial neural networks by, for instance, looking at gates, thresholds and the practical application of the mathematical sigmoid function, to name but a few.
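
As a small illustration of that last point, the sketch below contrasts a blunt on/off threshold with the graded response of the sigmoid function mentioned above; it is a generic numerical example, not a model of any particular neuromorphic chip:

```python
import math

def step(x, threshold=0.0):
    """Blunt gate: the signal is either fully off (0) or fully on (1)."""
    return 1 if x >= threshold else 0

def sigmoid(x):
    """Graded response: the signal passes with a varying intensity between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-x))

for x in (-4, -1, 0, 1, 4):
    print(f"x = {x:+d}  step = {step(x)}  sigmoid = {sigmoid(x):.3f}")
```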

Simultaneously, the later years, from 1988 until about 1995, are referred to by some as a second “AI Winter”.[34] Others seem to put the period a few years earlier.[35] The accuracy of the years put aside, resources became temporarily limited again and research output was perceived to be at a low. Concurrently, one might realize that this does not imply that all research and development in computing hardware and software halted during these proverbial winters. The work continued, albeit for some under additional constraints or under a different name or field (not as “AI”). One might agree that in science, research and development across academic fields seem to ebb, flow and meander, yet persist with grit.

From 1990 onward, slowly but surely, the concepts of probability and “uncertainty” took more of the center stage (e.g. Bayesian networks). Statistical approaches became increasingly important in work toward AI methods. “Evolution-based methods, such as genetic algorithms and genetic programming” helped to move AI research forward.[36] It was increasingly hypothesized that a learning agent could adapt to (read: learn from) the changing attributes in its environment. Change implies that the probability of some events occurring rises while that of others falls.
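
As a loose illustration of this probabilistic turn (a generic textbook-style example, not any specific historical system), Bayes’ theorem lets an agent update a belief as evidence arrives; the numbers below are made up purely for illustration:

```python
# Bayes' theorem: P(hypothesis | evidence) from the prior belief,
# the likelihood of the evidence under the hypothesis, and the
# likelihood of the same evidence when the hypothesis is false.

def bayes_update(prior, likelihood, false_alarm_rate):
    evidence = likelihood * prior + false_alarm_rate * (1.0 - prior)
    return likelihood * prior / evidence

# An agent starts 10% sure an event is occurring, then observes a sensor
# reading that is 8 times more likely when the event really is occurring.
belief = 0.10
belief = bayes_update(prior=belief, likelihood=0.8, false_alarm_rate=0.1)
print(round(belief, 3))  # roughly 0.47: the belief adapts to the evidence
```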

AI solutions started to extract patterns from datasets rather than being guided by lines of hand-written code only. This probabilistic approach, in combination with further algorithmic developments, gradually heralded a radically different approach from the previous Knowledge-based Systems. It has ushered in what some refer to as the “AI Spring”,[37] which some perceive over the past few years up to the present day.[38]

In the twenty-first century up to the present day, Artificial Neural Networks have been explored in academic research with increasing success. More and more, it became clear that huge, high-quality datasets were needed to make a specific area of AI research, known as Machine Learning, more powerful.

During the first decade of this century, Professor Li Feifei oversaw the creation of such a huge, high-quality image dataset (ImageNet), which would become one of the milestones in restoring confidence in the field of AI and in the quality of algorithm design.[39]

This story has now arrived at the more recent years of AI Research and Development.

Following the first decade of this twenty-first century, an increasing use of GPUs (graphics processing units) as hardware to power Machine Learning applications could be seen. Presently, even more advanced processing hardware is being suggested and applied.

Special types of Machine Learning solutions are being developed and improved upon. Specifically, Deep Learning appeared on the proverbial stage. Developments in Artificial Neural Networks and the layering of these networks became another important boost to the perceived potential of applications coming out of AI Research and Development (R&D).[40]

Deep Learning is increasingly becoming its own unique area of creative and innovative endeavor within the larger Field of AI.

Globally, major investments (in the tens of billions of dollars) have been made in AI R&D. There is a continued and even increasing hunger for academics, experts and professionals in various fields related to or within the field of AI.

The historical context of the field of AI, of which the above is a handpicked narrative, has brought us to where we are today. How this research is applied and will be developed for increased application will need to be studied, tried and reflected on, with continued care, considerate debate, creative spirit and innovative drive.


Footnotes & URLs to Additional Resources

[1] Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611–659. London: Sage Publications

[2] Here loosely based on: Professor Dan Klein and Professor Pieter Abbeel. (January 21, 2014). CS188 “Intro to AI” Lecture. UC Berkeley.

[3] George Boole (1815 – 1864) came up with a kind of algebraic logic that we now know as Boolean logic in his works entitled The Mathematical Analysis of Logic (1847) and An Investigation of the Laws of Thought (1854). He also explored general methods in probability. A Boolean circuit is a mathematical model, with calculus of truth values (1 = true; 0 = false) and set membership, which can be applied to a (digital) logical electronic circuitry.

[4] McCulloch, W. & Pitts, W. (1943; reprint: 1990). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, Vol. 5, pp. 115-133. Retrieved online on February 20, 2020 from https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf

[5] Turing, A.M. (1950). Computing Machinery and Intelligence. Mind 49: 433-460. Retrieved November 13, 2019 from http://cogprints.org/499/1/turing.html and https://www.csee.umbc.edu/courses/471/papers/turing.pdf

[6] Spice, B. (April 11, 2017). Carnegie Mellon Artificial Intelligence Beats Chinese Poker Players. Online: Carnegie Mellon University. Retrieved January 7, 2020 from https://www.cmu.edu/news/stories/archives/2017/april/ai-beats-chinese.html 

[7] Martinez, E. (2019). History of AI. Retrieved on April 14, 2020 from https://historyof.ai/snarc/

[8] Minsky, M. (2011). Building my randomly wired neural network machine. Online: Web of Stories   Retrieved on April 14, 2020 from https://www.webofstories.com/play/marvin.minsky/136;jsessionid=E0C48D4B3D9635BA883747C9A925B064

[9] Russell, S. & Norvig, P. (2016) and McCorduck, P. (2004). Machines Who Think. Natick, MA: A K Peters, Ltd.

[10] McCarthy, J. (2007). What is AI? Retrieved on December 5th, 2019 from http://www-formal.stanford.edu/jmc/whatisai/node1.html This webpage also offers a nice, foundational and simple conversation about intelligence, IQ and related matters. 

[11] McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick: A K Peters, Ltd

[12] Rosenblatt, F. (January, 1957). The Perceptron. A Perceiving and Recognizing Automaton. Report No. 85-460-1. Buffalo (NY): Cornell Aeronautical Laboratory, Inc. p. 1 & 30 Retrieved on January 17, 2020 from https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf  

[13] Popper, K. (1959, 2011). The Logic of Scientific Discovery. Taylor and Francis

[14] Minsky, M. and Papert, S.A. (1971). Artificial Intelligence Progress Report. Boston, MA:MIT Artificial Intelligence Laboratory. Memo No. 252.  pp. 32 -34 Retrieved on April 9, 2020 from https://web.media.mit.edu/~minsky/papers/PR1971.html or  http://bitsavers.trailing-edge.com/pdf/mit/ai/aim/AIM-252.pdf

[15] Minsky, M. and Papert, S.A. (1969, 1987). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: The MIT Press

[16] Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611–659. London: Sage Publications

[17] Samuel, A.L. (1959, 1967, 2000). Some Studies in Machine Learning Using the Game of Checkers. Online: IBM Journal of Research and Development, 44(1.2), 206–226. doi:10.1147/rd.441.0206 Retrieved February 18, 2020 from https://dl.acm.org/doi/10.1147/rd.33.0210 and  https://www.sciencedirect.com/science/article/abs/pii/0066413869900044 and https://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf

[18] It is known as the “unification algorithm”. Robinson, John Alan (January 1965). A Machine-Oriented Logic Based on the Resolution Principle. J. ACM. 12 (1): 23–41 Retrieved on March 24, 2020 from https://dl.acm.org/doi/10.1145/321250.321253 and https://web.stanford.edu/class/linguist289/robinson65.pdf

[19] The form is what could now be referred to as a logic-based declarative programming paradigm: the code tells a system what you want it to do, by means of formal logic facts and rules for some problem, rather than stating step by step how it needs to do it. There are at least two main paradigms, each with their own sub-categories. This logic-based one is a subcategory of the declarative programming set of coding patterns and standards. The other main paradigm (with its subsets) is imperative programming, which includes object-oriented and procedural programming. The latter includes the C language. See Online: Curlie. Retrieved on March 24, 2020 from https://curlie.org/Computers/Programming/Languages/Procedural  Examples of (class-based) object-oriented imperative programming languages are C++, Python and R. See: https://curlie.org/en/Computers/Programming/Languages/Object-Oriented/Class-based/

[20] Minsky, M. and Papert, S.A. (1969, 1987) p. 231 “Other Multilayer Machines”.

[21] Lighthill, Sir J. (1972). Lighthill Report: Artificial Intelligence: A General Survey. Retrieved on April 9, 2020 from http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm and https://pdfs.semanticscholar.org/b586/d050caa00a827fd2b318742dc80a304a3675.pdf and http://www.aiai.ed.ac.uk/events/lighthill1973/

[22] Bostrom, N. (2014). Superintelligence. Paths, Dangers, Strategies. Oxford: Oxford University Press. p. 22

[23] McCorduck, P. (2004). pp. xxviii – xxix

[24] Minsky, M. and Papert, S.A. (1969, 1987)

[25] Historic Examples: Pierce, J. R. et al (1966). Language and Machines: Computers in Translation and Linguistics. Washington D. C.: The Automatic Language Processing Advisory Committee (ALPAC). Retrieved on April 9, 2020 from The National Academies of Sciences, Engineering and Medicine at   https://www.nap.edu/read/9547/chapter/1 alternatively: http://www.mt-archive.info/ALPAC-1966.pdf

[26] Hutchins, W. J. (1995). Machine Translation: A Brief History. In Koerner, E. E.K. .et al (eds). (1995). Concise history of the language sciences: from the Sumerians to the cognitivists. Pages 431-445. Oxford: Pergamon, Elsevier Science Ltd. p. 436

[27] Russell, S. et al. (2016, p. 24) does not seem to mention this first “AI Winter” and only mentions the later one, by the end of the 1980s; nor does McCorduck, P. (2004), pp. xxviii–xxix. Ghatak, A. (2019, p. vii), however, identifies more than one, as do Maini, V. et al. (Aug 19, 2017), Mueller, J. P. et al. (2019, p. 133) and Chollet, F. (2018, p. 12). Perhaps these authors, who mainly focus on Deep Learning, see the absence of research following Rosenblatt’s perceptron as a “winter”.

[28] Goodfellow, I., et al. (2016, 2017). Deep Learning. Cambridge, MA: The MIT Press. p. 2

[29] More in-depth information can be found in the journal of the same name: https://www.journals.elsevier.com/knowledge-based-systems

[30] Hutchins, W. J. (1995). p. 436

[31] Some Prolog resources related to expert systems: https://www.metalevel.at/prolog/expertsystems AND https://en.wikibooks.org/wiki/Prolog

[32] McCarthy, J. (1996). Some Expert Systems need Common Sense. Online: Stanford University, Computer Science Department. Retrieved on April 7, 2020 from   http://www-formal.stanford.edu/jmc/someneed/someneed.html

[33] Mead, C. Information Retrieved on April 8, 2020 from http://carvermead.caltech.edu/ also see Mead, C. (1998). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley from https://dl.acm.org/doi/book/10.5555/64998

[34] Russell (2016) p. 24

[35] McCorduck (2004) p. 418

[36] Bostrom, N. (2014). Superintelligence. Paths, Dangers, Strategies. Oxford: Oxford University Press. pp.24

[37] Manyika, J. et al (2019). The Coming of AI Spring. Online: McKinsey Global Institute. Retrieved on April 9, 2020 from https://www.mckinsey.com/mgi/overview/in-the-news/the-coming-of-ai-spring

[38] Olhede, S., & Wolfe, P. (2018). The AI spring of 2018. Significance, 15(3), 6–7. Retrieved on April 9, 2020 from https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2018.01140.x

[39] Deng, J. et al. (2009). ImageNet: A Large-Scale Hierarchical Image Database. Online: Stanford Vision Lab, Stanford University & Princeton University Department of Computer Science. Retrieved April 7, 2020 from http://www.image-net.org/papers/imagenet_cvpr09.pdf

[40] Trask, A. W. (2019). Grokking Deep Learning. USA: Manning Publications Co. p. 170

The Field of AI (Part 02-6): A “Pre-History” & a Foundational Context

post version: 2 (April 28, 2020)

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link with Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following, one can read about a few attributes picked up from Control Theory as contextualizing the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


05 — The Field of AI: A Foundational Context: Mathematics & Statistics

Mathematics & Statistics

The word ‘mathematics’ comes from Ancient Greek and means as much as “fond of learning, study or knowledge”. Dr. G.H. Hardy (1877–1947), a famous mathematician, defined mathematics as the study and the making of patterns.[1] At least intuitively, as seen from these different perspectives, this might make a link between the fields of Cognitive Science, AI and mathematics a bit more obvious or exciting to some.

Looking at these two simple identifiers of math, one might come to appreciate math in itself even more, but one might also think slightly differently of “pattern recognition” in the field of “Artificial Intelligence” and its sub-study of “Machine Learning”.[2] Following this, one might wonder whether mathematics perhaps lies at the foundation of machine, or other, learning.

Mathematics[3] and its many areas cover formal proof, algorithms, computation and computational thinking, abstraction, probability, decidability, and so on. Many introductory K-16 resources are freely accessible on various mathematical topics,[4] such as statistics.[5]

Statistics, as a sub-field or branch of mathematics, is the academic area focused on data and their collection, analysis (e.g. preparation, interpretation, organization, comparison, etc.), and visualization (or other forms of presentation). The field studies models based on these processes imposed onto data. Some practitioners argue that Statistics stands separately from mathematics.

The following areas of study in mathematics (and more) lie at the foundation of Machine Learning (ML).[6] Yet, it should be noted, one never stops learning mathematics for specialized ML applications:

  • (Bayesian) Statistics[7]
    • Statistics.[8]
    • See a future post for more perceptions on probability
    • Probability[9] Theory,[10] which is applied to make assumptions about likelihoods in the given data (Bayes’ Theorem, distributions, MLE, regression, inference, …);[11]
    • Markov[12] Chains,[13] which model probability[14] in processes that may change from one state into another (and back) based on the present state (and not past states).[15]
    • Linear Algebra,[16] which is used to describe parameters and build algorithms and Neural Network structures;
      • Algebra for K-16.[17] Again, oversimplified, algebra is a major part of mathematics that studies the manipulation of mathematical symbols, with the use of letters, to form equations and more.
      • Vectors[18]            
      • Matrix Algebras[19]
    •  (Multivariate or multivariable) Calculus,[20] which is used to develop and improve learning-related attributes in Machine Learning (a minimal gradient-descent sketch follows this list).
      • Pre-Calculus & Calculus[21]: oversimplified, one can state that this is the mathematical study of change and thus also of motion.[22] Note: it might be advisable to first lay some foundations in (linear) algebra, geometry and trigonometry before calculus.
      • Multivariate (Multivariable) Calculus: instead of dealing with only one variable, here one focuses on calculus with many variables. Note: this does not seem to be commonly covered in high school settings, leaving aside the relatively few exceptional high school students who do study it.[23]
        • Vector[24] Calculus (i.e. Gradient, Divergence, Curl) and vector algebra:[25] of use in understanding the mathematics behind the Backpropagation Algorithm, used in present-day artificial neural networks, as part of research in Machine Learning or Deep Learning and the supervised learning technique.
      • Mathematical Series and Convergence, numerical methods for Analysis
    • Set Theory[26] or Type Theory: the latter is similar to the former, except that it eliminates some paradoxes found in Set Theory.
    • Basics of (Numerical) Optimization[27] (Linear / Quadratic)[28]
    • Other: discrete mathematics (e.g. proof, algorithms, set theory, graph theory), information theory, optimization, numerical and functional analysis, topology, combinatorics, computational geometry, complexity theory, mathematical modeling, …
    • Additional: Stochastic Models and Time Series Analysis; Differential Equations; Fourier Analysis and Wavelets; Random Fields;
    • Even More advanced: PDEs; Stochastic Differential Equations and Solutions; PCA; Dirichlet Processes; Uncertainty Quantification (Polynomial Chaos, Projections on vector space)
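
As referenced in the Calculus item above, here is a minimal gradient-descent sketch; the two-variable function and the learning rate are arbitrary choices made only to illustrate how gradients drive the kind of optimization used when training ML models:

```python
# Plain gradient descent on an arbitrary two-variable function.

def f(x, y):
    """The surface we want to minimize; its minimum sits at (3, -1)."""
    return (x - 3) ** 2 + (y + 1) ** 2

def grad_f(x, y):
    """The gradient of f, worked out by hand with basic multivariate calculus."""
    return 2 * (x - 3), 2 * (y + 1)

x, y, learning_rate = 0.0, 0.0, 0.1
for _ in range(100):
    gx, gy = grad_f(x, y)
    x, y = x - learning_rate * gx, y - learning_rate * gy  # step downhill

print(round(x, 3), round(y, 3), round(f(x, y), 6))  # approaches (3, -1) and 0
```
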
Mini Project #___ : 
Markov Chains 
Can you rework this Python project by Ms. Lindsey Bieda to use a Chinese or another language’s word list?
Project context: https://rarlindseysmash.com/posts/2009-11-21-making-sense-and-nonsense-of-markov-chains 
Code source: https://gist.github.com/3928224 
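
As a starting point, a minimal, generic word-level Markov chain can be sketched as follows (this is not Ms. Bieda’s code from the links above, and the toy corpus is made up): the next word is sampled using only the present word, echoing the “present state” idea above.

```python
import random

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain: each next word depends only on the current word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = chain.get(word)
        if not candidates:
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(build_chain(corpus), "the"))
```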

[1] Hardy, G.H. & Snow, C.P. (1941). A Mathematician’s Apology. London: Cambridge University Press

[2] More on “pattern recognition” in the field of “Artificial Intelligence” and its sub-study of  “Machine Learning” will follow elsewhere in future posts.

[3] Courant, R. et al. (1996). What Is Mathematics? An Elementary Approach to Ideas and Methods. USA: Oxford University Press  

[4] For instance (in alphabetical order):

[5] Meery, B. (2009). Probability and Statistics (Basic). FlexBook.  Online: CK-12 Foundation. Retrieved on March 31, 2020 from  http://cafreetextbooks.ck12.org/math/CK12_Prob_Stat_Basic.pdf

[6] a sub-field in the field of Artificial Intelligence research and development (more details later in a future post). A resource covering mathematics for Machine learning can be found here:

Deisenroth, M. P. et al. (2020). Mathematics for Machine Learning. Online: Cambridge University Press. Retrieved on April 28, 2020 from https://mml-book.github.io/book/mml-book.pdf AND https://github.com/mml-book/mml-book.github.io

Orland, P. (2020). Math for Programmers. Online: Manning Publications. Retrieved on April 28, 2020 from https://www.manning.com/books/math-for-programmers 

[7] Downey, A.B. (?). Think Stats. Exploratory Data Analysis in Python. Version 2.0.38. Online: Needham, MA: Green Tea Press. Retrieved on March 9, 2020 from http://greenteapress.com/thinkstats2/thinkstats2.pdf

[8] A basic High School introduction to Statistics (and on mathematics) can be freely found at Khan Academy. Retrieved on March 31, 2020 from https://www.khanacademy.org/math/probability

[9] Grinstead, C. M.; Snell, J. L. (1997). Introduction to Probability. USA: American Mathematical Society (AMS). Online: Dartmouth College. Retrieved on March 31, 2020 from https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/amsbook.mac.pdf AND solutions to the exercises retrieved from http://mathsdemo.cf.ac.uk/maths/resources/Probability_Answers.pdf

[10] Such as: Distributions, Expectations, Variance, Covariance, Random Variables, …

[11] Doyle, P. G. (2006). Grinstead and Snell’s Introduction to Probability. The CHANCE Project. Online: Dartmouth retrieved on March 31, 2020 from https://math.dartmouth.edu/~prob/prob/prob.pdf

[12] Norris, J. (1997). Markov Chains (Cambridge Series in Statistical and Probabilistic Mathematics). Cambridge: Cambridge University Press. Information retrieved on March 31, 2020 from https://www.cambridge.org/core/books/markov-chains/A3F966B10633A32C8F06F37158031739  AND http://www.statslab.cam.ac.uk/~james/Markov/  AND  http://www.statslab.cam.ac.uk/~rrw1/markov/    http://www.statslab.cam.ac.uk/~rrw1/markov/M.pdf AND https://books.google.com.hk/books/about/Markov_Chains.html?id=qM65VRmOJZAC&redir_esc=y

[13] Markov, A. A. (January 23, 1913). An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains. Lecture at the physical-mathematical faculty, Royal Academy of Sciences, St. Petersburg, Russia. In (2006, 2007). Science in Context 19(4), 591-600. UK: Cambridge University Press. Information retrieved on March 31, 2020 from https://www.cambridge.org/core/journals/science-in-context/article/an-example-of-statistical-investigation-of-the-text-eugene-onegin-concerning-the-connection-of-samples-in-chains/EA1E005FA0BC4522399A4E9DA0304862

[14] Doyle, P. G. (2006). Grinstead and Snell’s Introduction to Probability. Chapter 11, Markov Chains. Dartmouth retrieved on March 31, 2020 from https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter11.pdf

[15] A fun and fantasy-rich introduction to Markov Chains: Bieda, L. (2009). Making Sense and Nonsense of Markov Chains. Online, retrieved on March 31, 2020 from https://rarlindseysmash.com/posts/2009-11-21-making-sense-and-nonsense-of-markov-chains AND https://gist.github.com/LindseyB/3928224

[16] Such as: Scalars, Vectors, Matrices, Tensors….

See:

Lang, S. (2002). Algebra. Springer AND

Strang, G. (2016). Introduction to Linear Algebra. (Fifth Edition). Cambridge MA, USA: Wellesley-Cambridge & The MIT Press. Information retrieved on April 24, 2020 from https://math.mit.edu/~gs/linearalgebra/ AND https://math.mit.edu/~gs/ AND

Strang, G. (Fall 1999). Linear Algebra. Video Lectures (MIT OpenCourseWare). Online: MIT Center for Advanced Educational Services. Retrieved on March 9, 2020 from https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/ AND

Hefferon, J. Linear Algebra. http://joshua.smcvt.edu/linearalgebra/book.pdf  AND http://joshua.smcvt.edu/linearalgebra/#current_version  (teaching slides, answers to exercises, etc.)

[17] Algebra basics and beyond can be studied via these resources retrieved on March 31, 2020 from https://www.ck12.org/fbbrowse/list?Grade=All%20Grades&Language=All%20Languages&Subject=Algebra

[18] Roche, J. (2003). Introducing Vectors. Online Retrieved on April 9, 2020 from http://www.marco-learningsystems.com/pages/roche/introvectors.htm

[19] Petersen, K.B & Pedersen, M.S. (November 15, 2012). The Matrix Cookbook. Online Retrieved from http://matrixcookbook.com and https://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf

[20] Such as: Derivatives, Integrals, limits, Gradients, Differential Operators, Optimization. …See a leading text book for more details: Goodfellow, I. et al. (2017). Deep Learning. Cambridge, MA: MIT Press + online via www.deeplearningbook.org and its https://www.deeplearningbook.org/contents/linear_algebra.html Retrieved on March 2, 2020.

[21] Spong, M. et al. (2019). CK-12 Precalculus Concepts 2.0. Online: CK-12. Retrieved on March 31, 2020 from https://flexbooks.ck12.org/cbook/ck-12-precalculus-concepts-2.0/ and more at https://www.ck12.org/fbbrowse/list/?Subject=Calculus&Language=All%20Languages&Grade=All%20Grades

[22] Jerison, D. (2006, 2010). 18.01 SC Single Variable Calculus. Fall 2010. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu. License: Creative Commons BY-NC-SA. Retrieved on March 31, 2020 from https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/#

[23] A couple of anecdotal examples can be browsed here: https://talk.collegeconfidential.com/high-school-life/1607668-how-many-people-actually-take-multivariable-calc-in-high-school-p2.html and https://www.forbes.com/sites/johnewing/2020/02/15/should-i-take-calculus-in-high-school/#7360ae8a7625 .  In this latter article references to formal studies are provided; it is suggested to be cautious about taking Calculus, let alone the multivariable type. An online course on Multivariable Calculus for High school students is offered at John Hopkins’s Center for Talented Youth: Retrieved on March 31, 2020 from https://cty.jhu.edu/online/courses/mathematics/multivariable_calculus.html Alternatively, the MIT Open Courseware option is also available: https://ocw.mit.edu/courses/mathematics/18-02sc-multivariable-calculus-fall-2010/Syllabus/

[24] Enjoy mesmerizing play with vectors here: https://anvaka.github.io/fieldplay  

[25] Hubbard, J. H. et al. (2009). Vector Calculus, Linear Algebra, and Differential Forms A Unified Approach. Matrix Editions

[26] The study of collections of distinct objects or elements. The elements can be any kind of object (number or other)

[27] Boyd, S & Vandenberghe, L. (2009). Convex Optimization. Online: Cambridge University Press. Retrieved on March 9, 2020 from https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf

[28] Luke, S. (October 2015). Essentials of Metaheuristics. Online Version 2.2. Online: George Mason University. Retrieved on March 9, 2020 from https://cs.gmu.edu/~sean/book/metaheuristics/Essentials.pdf 

The Field of AI (Part 02-5): A “Pre-History” & a Foundational Context

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link with Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following, one can read about a few attributes picked up from Control Theory as contextualizing the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


04 — The Field of AI: A Foundational Context: Cognitive Science

Cognitive Science

Cognitive Science combines various fields of academic research into one.[1] It is therefore called an interdisciplinary field, or, when even more coherently integrated into one, a transdisciplinary field, possibly with the involvement of non-academic participants.[2] It touches on the fields of anthropology, psychology, neurology or the neurosciences, biology, the health sciences, philosophy, linguistics, computer science, and so on.

The work of Roger Shepard, Terry Winograd[3] and David Marr, among many others, is considered to have been crucial in the development of this academic field.[4] It is also claimed that Noam Chomsky, as well as the founders of the field of AI, had a tremendous influence on the development of Cognitive Science.[5] The links between the field of Cognitive Science and the field of AI are noticeable in a number of research projects (e.g. see a future post on AGI) and publications.[6]

It is the field that scientifically studies the biological “mental operations” (human and other), as well as the processes and attributes assigned to or associated with “thinking” and the acquisition or processes of “language”, “consciousness”, “perception”, “memory”, “learning”, “understanding”, “knowledge”, “creativity”, “emotions”, “mind”, “intelligence”, “motor control”, “vision”, models of intentional processes, the application of Bayesian methods to mental processes, and other intellectual functions.[7] Any of these and related terms, seen through scientific lenses (while seemingly obvious in meaning in daily use), is very complex, if not debated or contested.[8] The field also researches and develops models of the “mental architecture”, which includes a model both of “information processing and of how the mind is organized.”[9]

Hence the need for fields such as Cognitive Science. Since these areas imply different systems, drawing on various fields (or disciplines) as sources for Cognitive Science is not only inevitable, it is necessary. The context of each individual system (or field, or discipline) is potentially the core research area of a field covering another system. As suggested above, this implies an overlap and integration of other systems (or fields, or disciplines) into one. This, in turn, requires an increased scientific awareness and practice of interdependence between fields of research.

Cognitive Science has developed advances in computational modeling, the creation of cognitive models and the study of computational cognition.[10]

The field of AI, through its history, found inspiration in Cognitive Science for its study of artificial systems. One example is the loose analogy with neurons (i.e. some of the cells making up a brain) and with neural networks (i.e. the connection of such cells) for its mathematical models.

To some extent, an AI researcher could take the models distilled from research in Cognitive Science and use them for their own research in artificial systems. The bridge between the two is arguably the models, and specifically the mathematical models.

Figure 1 Cognitive Science is a multi-disciplinary academic field at the nexus of a number of other fields, including these shown here above. Image in the Public Domain Retrieved on March 18, 2020 from here

Simultaneously, researchers in Cognitive Science can also use solutions found in the field of AI to conduct their research.

Research in Artificial General Intelligence (AGI) partially aims to recreate functions, and the implied processes with their output, which Cognitive Science studies in biological neural networks (i.e. brains).

Some have argued that the field of AI is a sub-field of Cognitive Science, though many do not subscribe to this notion.[11] The argument has been made because in the field of AI one can find research into processes that are innate to the processes found in a brain: sound pattern recognition, speech recognition, object recognition, gesture recognition, and so on, which are in turn studied in other fields, such as Cognitive Science. It is more commonly agreed that AI is a sub-field of Computer Science. Still, as stated in the opening lines of this chapter, many do agree on the strong interdisciplinary or transdisciplinary links between the two.[12]


[1] Bermudez J.L.(2014). Cognitive Science. An Introduction to the Science of the Mind. Cambridge: Cambridge University Press. p. 2 Retrieved on March 23, 2020 from https://www.cambridge.org/us/academic/textbooks/cognitivescience

[2] https://semanticcomputing.wixsite.com/website-4

[3] He conducted some of his work at the Artificial Intelligence Laboratory, a Massachusetts Institute of Technology (MIT) research program. See Winograd, T. (1972). Understanding Natural Language. In Cognitive Psychology; Volume 3, Issue 1, January 1972, pp. 1 – 191. Boston: MIT; Online” Elsevier. Retrieved on March 25, 2020 from https://www.sciencedirect.com/science/article/abs/pii/0010028572900023   

[4] Bermudez J.L.(2014). pp. 3, 16, and on.

[5] Thagard, Paul, (Spring 2019 Edition). Cognitive Science. In Edward N. Zalta (ed.). The Stanford Encyclopedia of Philosophy. Online: Stanford University. Retrieved on March 23, 2020 from https://plato.stanford.edu/archives/spr2019/entries/cognitive-science/

[6] Gurumoorthy, S. et al. (2018). Cognitive Science and Artificial Intelligence: Advances and Applications. Springer

[7] Green, C. D. (2000). Dispelling the “Mystery” of Computational Cognitive Science. History of Psychology, 3(1), 62–66.

[8] Crowther-Heyck, H. (1999). George A. Miller, language, and the computer metaphor and mind. History of Psychology, 2(1), 37–64

[9] Bermudez J.L.(2014). p. xxix

[10] Houdé, O., et al (Ed.). (2004). Dictionary of cognitive science; neuroscience, psychology, artificial intelligence, linguistics, and philosophy. New York and Hove: Psychology Press;  Taylor & Francis Group.

[11] Zimbardo, P., et al. (2008). Psychologie. München: Pearson Education.

[12]An example thereof is the Bachelor of Science program in “Cognitive Science and Artificial Intelligence” at the Tilburg University, The Netherlands. Retrieved on March 23, 2020 from  https://www.tilburguniversity.edu/education/bachelors-programs/cognitive-science-and-artificial-intelligence

The Field of AI (Part 02-4): A “Pre-History” & a Foundational Context

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link with Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following, one can read about a few attributes picked up from Control Theory as contextualizing the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


03 — The Field of AI: A Foundational Context: Control Theory


Control Theory

When thinking at a daily and personal level, one can observe that one’s body, the human body’s physiology, seemingly has a number of controls in place for it to function properly.

Humans, among many other species, can be observed showing different types of control. One can observe control in the biological acts within the body, for instance in the physiological nature of the body’s processes, be they more or less autonomous or automatic. Besides, for instance, the beating of the heart or the workings of the intestines, one could also consider processes within, e.g., the brain, and the degrees of control exercised with and through the human senses.

Humans also exert control by means of, for instance, their perceptions, their interpretations, and by a set of rituals and habitual constraints which in turn might be controlled by a set of social, cultural  or in-group norms, rules, laws and other external or internalized constraints.

Really broadening one’s view of ‘control’: one can find the need for some form and degree of control not only within humans but also in any form of life, in any organism. In effect, an organism is an example of a system of cells working together, in an organized and cooperative manner, instrumental to their collective survival as unified into the organism. An organism can be considered sufficiently organized and working if some degree and form of shared, synchronized control underlies the cells’ cooperation.

Interestingly enough, to some perhaps, such control is shared, within the organism, with colonies of supportive bacteria: its microbiome (e.g. the human microbiome).[1] While this may seem very far from the topic of this text, analogies and links between Control Theory, Machine Learning and the biological world are at the foundation of the academic field of AI.[2]

If one were to somewhat abstract the thinking on the topic of ‘control’, then these controlling systems could be seen as supporting learning from sets of (exchanged) information or data. Such systems might engage in these acts of interchanged learning mainly to sustain forms and degrees of stability, through adaptations, depending on needs and contextual changes. At the very least, research surrounding complex dynamic systems can use insights from both Control Theory and, consequently, the processing potential promised by Machine Learning.

Control could imply the constraining of the influence of certain variables or attributes within, or in context of, a certain process. One attribute (e.g. a variable or constant) could control another attribute and vice versa. These interactions of attributes could then be found to be compounded into a more complex system.

Control most commonly allows for the reduction of risks, and can allow a given form and function (not) to exist. The existence of a certain form and function of control can allow a system (not) to act within its processes.

When one zooms in and focuses, one can consider that perhaps similar observations and reflections have brought researchers to construct what is known as “Control Theory.”

Control Theory is the mathematical field that studies control in systems: through the creation of mathematical models, it studies how dynamic systems optimize their behavior by controlling processes within a given, influencing environment.[3]

Through mathematics and engineering it allows for a dynamic system to perform in a desired manner (e.g. an AI system, an autonomous vehicle, a robotic system).  Control is exercised over the behavior of a system’s processes of any size, form, function or complexity. Control, as a sub-process, could be inherent to a system itself, controlling itself and learning from itself.

In a broader sense, Control Theory[4], can be found in a number of academic fields. For instance, it is found in the field of Linguistics with, for instance, Noam Chomsky[5] and the control of a grammatical contextual construct over a grammatical function. A deeper study of this aspect, while foundational to the fields of Cognitive Science and AI, is outside of the introductory spirit of this section.

As an extension to a human and their control within their own biological workings, humans and other species have created technologies and processes that allow them to exert more (perceived) control over certain aspects of (their perceived) reality and their experiences and interactions within it.

Looking closer, control is also found in the areas of biology and psychology, with the study of an organism’s processes and its (perceptions of) positive and negative feedback loops. These control processes allow a life form to maintain (or to control its perception of) a balance, where it is not too cold or too hot, not too hungry and so on, or to act on a changing situation (e.g. start running because fear is increasing).

As one might notice, “negative” is not something “bad” here. Here the word means that something is being reduced so that a system’s process (e.g. the heat of a body) and its balance can be maintained and stabilized (e.g. not too cold and not too hot). Likewise, “positive” here does not (always) mean something “good”; it means that something is being increased. Systems using these kinds of processes are called homeostatic systems.[6] Such systems, among others, have been studied in the field of Cybernetics,[7] the science of control.[8] This field, in simple terms, studies how a system regulates itself through its control and the communication of information[9] towards such control.

These processes (i.e. negative and positive feedback loops) can be activated if a system predicts (or imagines) that something will happen. Note: here is a loose link with probability, thus with data processing, and hence with some processes also found in AI solutions.

In a traditional sense, a loop in engineering and its Control Theory could, for instance, be understood as open-loop or closed-loop control. Closed-loop control has a feedback function: feedback is provided by means of data sent from a sensor back into the system, controlling the functioning of the system (e.g. some attribute within the system is stopped, started, increased or decreased, etc.).
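
As a minimal sketch of such a closed loop, the following toy proportional temperature controller (all values chosen purely for illustration) feeds the sensed error back into the system at every step:

```python
# A toy closed-loop (negative feedback) controller pushing a temperature
# toward a set point; the numbers are illustrative only.

set_point = 21.0      # desired temperature (degrees Celsius)
temperature = 15.0    # current sensed temperature
gain = 0.3            # proportional control gain

for _ in range(20):
    error = set_point - temperature   # feedback: compare sensor reading to target
    heater_output = gain * error      # proportional (negative feedback) response
    temperature += heater_output      # the controlled system reacts
    temperature -= 0.05               # constant heat loss to the surroundings

print(round(temperature, 2))  # settles close to the set point
```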

A feedback loop is one control technique. Artificial Intelligence applications, such as Machine Learning and its Artificial Neural Networks, can be applied to exert degrees of control over a changing and adapting system with these, similar, or more complex loops. These AI methods, too, use techniques that found their roots in Control Theory. These can be traced to the 1950s with the Perceptron system (a kind of Artificial Neural Network) built by Rosenblatt.[10] A number of researchers in Artificial Neural Networks, and in Machine Learning in general, found their creative stepping stones in Control Theory.

The field of AI has links with Cognitive Science and with some references to brain forms and brain functions (e.g. see the loose links with neurons). Feedback loops, as they are found in biological systems, or loops in general, have consequently been referenced and applied in fields of engineering as well. Here, in association with the field of AI, Control Theory and these loops mainly refer to the engineering and mathematics used in the field of AI. Since some researchers are exploring Artificial General Intelligence (AGI), it might also increasingly interest one to maintain some degree of awareness of these and other links between Biology and Artificial Intelligence as a basis for sparking one’s research and creative thinking, in context.


[1]  Huang, S. et al. (February 11, 2020). Human Skin, Oral, and Gut Microbiomes Predict Chronological Age. Retrieved on April 13, 2020 from https://msystems.asm.org/content/msys/5/1/e00630-19.full-text.pdf

[2] See for instance, Dr. Liu, Yang-Yu (刘洋彧). “…his current research efforts focus on the study of human microbiome from the community ecology, dynamic systems and control theory perspectives. His recent work on the universality of human microbial dynamics has been published in Nature…” Retrieved on April 13, 2020 from Harvard University, Harvard Medical School, The Boston Biology and Biotechnology (BBB) Association, The Boston Chapter of the Society of Chinese Bioscientists in America (SCBA; 美洲华人生物科学学会: 波士顿分会) at https://projects.iq.harvard.edu/bbb-scba/people/yang-yu-liu-%E5%88%98%E6%B4%8B%E5%BD%A7-phd and examples of papers at https://scholar.harvard.edu/yyl

[3] Kalman, R. E. (2005). Control Theory (mathematics). Online: Encyclopædia Britannica. Retrieved on March 30, 2020 from https://www.britannica.com/science/control-theory-mathematics

[4] Manzini M. R. (1983). On Control and Control Theory. In Linguistic Inquiry, 14(3), 421-446. Information Retrieved April 1, 2020, from www.jstor.org/stable/4178338

[5] Chomsky, N. (1981, 1993). Lectures on Government and Binding. Holland: Foris Publications. Reprint. 7th Edition. Berlin and New York: Mouton de Gruyter.

[6] Tsakiris, M. et al. (2018). The Interoceptive Mind: From Homeostasis to Awareness. USA: Oxford University Press

[7] Wiener, N. (1961). Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press

[8] The Editors of Encyclopaedia Britannica. (2014). Cybernetics. Retrieved on March 30, 2020 from https://www.britannica.com/science/cybernetics

[9] Kline, R. R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age. New Studies in American Intellectual and Cultural History Series. USA: Johns Hopkins University Press.

[10] Goodfellow, I., et al. (2017). Deep Learning. Cambridge, MA: MIT Press. p. 13



Image Caption:

A typical, single-input, single-output feedback loop with descriptions for its various parts.

Image source:

Retrieved on March 30, 2020 from here License & attribution: Orzetto / CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)

The Field of AI (Part 02-3): A “Pre-History” & a Foundational Context

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link with Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following that, one can read about a few attributes picked up from Control Theory as contextualizing the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


02 — The Field of AI: A Foundational Context:
Philosophy, Psychology & Linguistics

Philosophy:

From the early days of philosophy (often associated with the Ancient Greeks, yet surely found in comparable forms in many intellectual, knowledge-seeking communities throughout history) up to the present day, people have created forms of logic and have studied and thought about the (existence, development, meaning, processes, applications, … of) mind, consciousness, cognition, language, reasoning, rationality, learning, knowledge, and so on.

Logic, too, has often been claimed as an Old Greek invention, specifically by Aristotle (384 B.C. to 322 B.C.). It has, however, more or less independent traditions across the globe and across time. Logic lies at the basis of, for instance, Computational Thinking, of coding, of mathematics, of language, and of Artificial Intelligence. At its most basic (and etymologically), logic comes from the Ancient Greek “Logos” (λόγος), which simply means “speech”, “reasoning”, “word” or “study”. Logic can, traditionally, be understood as “a method of human thought that involves thinking in a linear, step-by-step manner about how a problem can be solved. Logic is the basis of many principles including the scientific method.”[1] Note that research and development (R&D) in fields associated with the field of AI, and within the field of AI itself, has shown that today logic, in its various forms, is not only a linear process. Moreover, at present, the study of logic is no longer limited to the field of philosophy alone and is pursued in various fields including computer science, linguistics and cognitive science as well.

One author covering a topic of AI tried to make the link between Philosophy and Artificial Intelligence starkly clear. As a discipline, AI is offered the consideration of possibly being “philosophical engineering.”[2] In this linkage, the field of AI is positioned as one that takes philosophical concepts from any field of science, and from Philosophy itself, and transcodes them into mathematical algorithms and artificial neural networks. This linkage proposes that philosophy covers ideas that are experienced as, for instance, ambiguous, complex or open for deep debate. Historically, philosophy tried to define, or at least explore, many concepts including ‘knowledge,’ ‘meaning,’ and ‘reasoning,’ which are broadly considered to be processes or states of a larger set known as “intelligence”. The latter itself has been a fertile topic for philosophy. The field of AI, as well, has been trying to explore or even solve some of these attributes. The moment it solved some expressions of these, it was often perceived as taking away not only the mystery but also the intelligence of the expressed form. The first checkers or chess “AI” application is hardly considered “intelligent” these days. The first AI solution beating a champion in such culturally established board games has later been shown to lack sufficient “intelligence” to beat a newer version of an AI application. Maybe that improved version might (or will) be beaten again, perhaps letting the AI applications race on and on? Or, perhaps contrary to “philosophical engineering,” is the field of AI practically engineering the philosophy out of some concepts?

Mini Project #___: Algorithms in Daily Life
Find out what “algorithm” in general (in a more non-mathematical or more non-coding sense) means. Can you find it has similarities with the meaning of “logic”? If so, which attributes seem similar?
What do you think ‘algorithm’ could mean and could be in daily life (outside of the realm of Computer Science)? Are there algorithms we use that are not found in a computer?

Mini Project #___: The Non-technological Core of AI
Collect references of what consciousness, intelligence, rationality, reasoning and mind have meant in the history of the communities and cultures around you. 
Share your findings in a collection of references from the entire class. 
Maybe add your findings to the collage (see the Literature project above).
Alternative: the teacher shares a few resources or references of philosophers that covered these topics and that are examples of the pre-history of AI.



Psychology

Psychology has influenced and is influenced by research in AI. To some degree and further developing this is still the case today.[3]

Psychology is not only related to cognitive science and to the study of the processes involving perception and motor control (i.e. control of muscles and movement); the experiments and findings from within the longer history of psychology have also been of influence in areas of AI.

It is important to note that while there are links between the field of AI and psychology, some attributes in this area of study have been contested, opposed and surpassed by cognitive science and computer science, with its subfield of AI.

An example of a method that can be said to have found its roots in psychology is called “Transfer Learning”. This refers to a process or method learned within one area that is used to solve an issue in an entirely different set of conditions. For a machine, the area and conditions are the data sets and how its artificial neural network model is balanced (i.e. “weighted”). The machine uses a method acquired while working within one data set to work in another data set. In this way the second data set does not have to be very large for the machine to return workable outputs.
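As a rough, hedged sketch of that idea (not of any specific library’s transfer-learning API), the pattern often looks like the toy Python below: a feature extractor whose weights are treated as already learned on a large task A is kept frozen, and only a small output layer is trained on the new, smaller task B. All weights, data and learning settings here are invented for illustration.

```python
# A conceptual sketch of transfer learning, not a specific library's API.
# A "feature extractor" whose weights were (pretend-)learned on a large task A
# is frozen and reused on a small task B, where only a tiny output layer is trained.

def extract_features(x, frozen_weights):
    """Pretend these weights were already learned on a large dataset (task A)."""
    return [x * w for w in frozen_weights]

def train_output_layer(examples, learning_rate=0.05, epochs=200):
    """Only this small part is trained on the new, smaller dataset (task B)."""
    output_weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for features, label in examples:
            prediction = sum(f * w for f, w in zip(features, output_weights))
            error = label - prediction
            output_weights = [w + learning_rate * error * f
                              for w, f in zip(output_weights, features)]
    return output_weights

# Hypothetical frozen weights, standing in for knowledge transferred from task A.
frozen_weights = [0.5, -0.2, 0.8]

# A tiny task-B dataset: a few inputs with target values (invented for illustration).
task_b = [(extract_features(x, frozen_weights), 2.0 * x) for x in [1.0, 2.0, 3.0]]
print("newly trained output weights:", train_output_layer(task_b))
```

The point of the sketch is only structural: the knowledge carried over sits in the frozen part, which is why the second data set can stay small.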


The AI method known as Reinforcement Learning could be said to have some similarities with experiments such as those historically conducted by Pavlov and B. F. Skinner. With Pavlov the process of “Classical Conditioning” was introduced. This milestone in the field of psychology is most famously remembered through Pavlov’s dog, which started to produce saliva the moment it heard the sound of a bell (i.e. the response which Pavlov desired to observe). This sound was initially associated with the offering of food: first the bell was introduced, then the food, and then the dog would produce saliva. Pavlov showed that the dog did indeed link the bell and the food. Eventually, the dog would produce saliva at the hearing of the bell without getting any food. What is important here is that the dog has no control over the production of saliva. That means the response was involuntary; it was automatic. This is, in an over-simplified explanation, Classical Conditioning.

That stated, Reinforcement Learning (RL) is a Machine Learning method where the machine is confronted with degrees of “reward”, or the lack thereof. See the section on RL for further details. Studies surrounding reward can be found in historical research conducted by B. F. Skinner and others. It is interesting to add that this research has been contested by Chomsky, who questioned its scientific validity and its transferability to human subjects.[4] Chomsky’s critique has been considered important in the growth of the fields of cognitive science and AI back in the 1950s. In these experiments a process called “Operant Conditioning” was being tested. The researchers were exploring voluntary responses (as opposed to the involuntary ones seen with Pavlov). That is to say, these were responses that were believed to be under the control of the test subject and that would lead to some form of learning, following some form of reward.
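To hint at how “degrees of reward” can steer a machine, here is a minimal reward-driven learning sketch: an epsilon-greedy agent that keeps a running estimate of the pay-off of each of two actions and gradually prefers the more rewarding one. The reward probabilities, the epsilon value and the number of steps are invented for illustration and are not tied to any specific system mentioned in this text.

```python
# A minimal sketch of reward-driven (reinforcement-style) learning:
# an epsilon-greedy agent learning which of two "levers" pays off more often.
# Reward probabilities and the epsilon value are invented for illustration.

import random

reward_probability = {"lever_a": 0.3, "lever_b": 0.7}   # hidden from the agent
estimates = {"lever_a": 0.0, "lever_b": 0.0}            # the agent's learned reward estimates
counts = {"lever_a": 0, "lever_b": 0}
epsilon = 0.1                                           # how often the agent explores at random

for _ in range(1000):
    if random.random() < epsilon:
        action = random.choice(list(estimates))         # explore
    else:
        action = max(estimates, key=estimates.get)      # exploit the current best estimate
    reward = 1.0 if random.random() < reward_probability[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print("learned reward estimates:", estimates)
```

Unlike the saliva in the Pavlov example, the agent’s choices here are “voluntary” in the loose sense that they are adjusted by the rewards it receives, which is the Operant-Conditioning-flavoured intuition behind RL.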

Again, these descriptions are too simplistic. They are here to nudge you towards further and deeper exploration, if this angle were to excite you positively towards your learning about areas in the academic field of AI.



Linguistics

With Linguistics come the studies of semiotics. Semiotics could be superficially defined as the study of symbols and of various systems for meaning-giving, including and beyond the natural languages. One can think of visual languages, such as icons or architecture, or of another form, such as music. Arguably, each sense can have its own meaning-giving system. Some argue that Linguistics is a subfield of semiotics, while others turn that around. Linguistics also comes with semantics, grammatical structures (see: Professor Noam Chomsky and the Chomsky Hierarchy)[5], meaning-giving, knowledge representation and so on.

Linguistics and Computer Science both study the formal properties of language (formal, programming or natural languages). Therefore any field within Computer Science, such as Artificial Intelligence, shares many concepts, terminologies and methods with the fields within Linguistics (e.g. grammar, syntax, semantics, and so on). The link between the two is studied via a theory known as “automata theory”,[6] the study of the mathematical properties of abstract machines. A Turing Machine is a famous example of such an abstract machine model, or automaton. It is a machine that can take a given input and, by executing rules expressed in a given language in a step-by-step manner (i.e. an algorithm), end up offering an output. Other “languages” that connect these fields are, for instance, Mathematics and Logic.
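As a small, hedged illustration of an automaton (a finite-state machine rather than Turing’s full model), the sketch below reads a string of a’s and b’s one symbol at a time and follows a fixed table of rules, step by step, to decide whether the string ends with “ab”; the states and the transition table are invented for this example.

```python
# A minimal sketch of an abstract machine: a finite automaton (not Turing's full model).
# It reads an input string one symbol at a time and follows a fixed transition table
# (the "rules"), step by step, to decide whether the string ends with "ab".
# States and transitions are invented for this example.

transitions = {
    ("start", "a"): "seen_a",
    ("start", "b"): "start",
    ("seen_a", "a"): "seen_a",
    ("seen_a", "b"): "seen_ab",
    ("seen_ab", "a"): "seen_a",
    ("seen_ab", "b"): "start",
}
accepting_states = {"seen_ab"}

def run_automaton(input_string):
    state = "start"
    for symbol in input_string:                  # one rule application per input symbol
        state = transitions[(state, symbol)]
    return state in accepting_states

for word in ["ab", "aab", "abb", "bbab"]:
    print(word, "->", run_automaton(word))
```

A Turing Machine adds a read/write tape and the ability to move back and forth over it, but the step-by-step, rule-following flavour is the same.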

Did you know that the word “automaton” is from Ancient Greek and means something like “self-making”, “self-moving”, or “self-willed”? That sounds like some attributes of an idealized Artificial Intelligence application, no?

Mini Project #___: What are Automatons?
What do you know of that you feel could be seen as an “automaton”?
Can you find any automaton in your society’s history?

[1] Retrieved on January 12, 2020 from https://en.wiktionary.org/wiki/logic

[2] Skansi, S. (2018). Introduction to Deep Learning. From Logical Calculus to Artificial Intelligence. In Mackie, I. et al. Undergraduate Topics in Computer Science Series (UTiCS). Switzerland: Springer. p. v. Retrieved on March 26, 2020 from http://www.springer.com/series/7592  AND https://github.com/skansi/dl_book

[3] Crowder, J. A. et al. (2020). Artificial Psychology: Psychological Modeling and Testing of AI Systems. Springer

[4] Among other texts, Chomsky, N. (1959). Reviews: Verbal behavior by B. F. Skinner. Language. 35 (1): 26–58. A 1967 version retrieved on March 26, 2020 from https://chomsky.info/1967____/

[5] Chomsky, N. (1956). Three models for the description of language. IEEE Transactions on Information Theory, 2(3), 113–124. doi:10.1109/tit.1956.1056813 AND Fitch, W. T., & Friederici, A. D. (2012). Artificial grammar learning meets formal language theory: an overview. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1598), 1933–1955. doi:10.1098/rstb.2012.0103

[6] The automata theory is the study of abstract machines (e.g. “automata”, “automatons”; notice the link with the word “automation”). This study also considers how automata can be used in solving computational problems.

The Field of AI (Part 02-2): A “Pre-History” & a Foundational Context

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • The post here is a first and very short link with Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following that, one can read about a few attributes picked up from Control Theory as contextualizing the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)


My very rough and severely flawed mapping of some of the fields and applications associated with the Field of AI. (use, for instance, CTRL and + to zoom in)


01 — The Field of AI: A Foundational Context: Literature, Mythology, Visual Arts

Figure 1 Talos or Talus, the artificial lifeform described in Greek mythology. Here depicted with the mythological character of Medea or Medeia, by Sybil Tawse. Image: public domain

The early Greek Myths (about 2500 years ago) showcase stories of artificially intelligent bronze automatons or statues that were brought to life and which then, in turn, exhibited degrees of “intelligence”. If you want to dig deeper, search for Pygmalion’s Galateia, or look up the imaginary stories of Talos (Talus).[1]

China’s literary classics, for instance Volume 5, “The Questions of Tang”, of the Lièzǐ, perhaps unwittingly also explored the imagination of Artificial Intelligence. See the example mentioned in the post on the Field of AI and a Pre-History.

The thousands-of-years-old Jewish myth of the “golem” (גולם‎) fantasized about a creature made of clay that magically came to life. It could be interpreted as an imagination of the raw material for a controllable automaton and an artificial form of some degree of intelligence. While its cultural symbolism is far richer than given justice here, it could be imagined as symbolizing a collective human capability to envision giving some form and function of intelligence to materials that we, in general, do not tend to equate with comparable capability (i.e. raw materials for engineered design).

Golem (Prague Golem reproduction) photo: public domain

It is suggested in some sources[2] that artificial intelligence (in the literary packaging of imagined automatons or other forms) was also explored in European literary works, such as in the 1816 German Der Sandmann (The Sandman) by Ernst Theodor Amadeus Hoffmann,[3] with the story’s character Olympia. The artificial is also explored by the fictional character Dr. Wagner, who creates Homunculus (a little man-like automaton), in Faust by Goethe,[4] and in Mary Shelley’s Frankenstein.[5] Much earlier yet, far less literary and rather philosophically, the artificial was suggested in the 1747 publication entitled L’Homme Machine (Man a Machine) by the French Julien Offray de la Mettrie, who posited the hypothesis that a human being, like any other animal, is an automaton or machine.

The next post will cover some hints of Philosophy in association with the Field of AI

Mini Project #___ : Exploring the Pre-History of AI in your own and your larger context.
Collect any other old stories from within China, Asia or elsewhere (from a location or culture that is not necessarily your own) that reference similar imaginations of “artificial intelligence” as constructed in the creative minds of our ancestors. 
Share your findings in a collection of references from the entire class. 
Maybe make a large collage that can be hung up on the wall, showing “artificial intelligence” from the past, through-out the ages.
Alternative: the teacher shares a few resources or references from the Arts (painting, sculpture, literature, mythology, etc.) that covered these topics and that are examples of the pre-history of AI.

[1] Parada, C. (Dec 10, 1993). Genealogical Guide to Greek Mythology. Studies in Mediterranean Archaeology, Vol 107. Coronet Books

[2] McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick: A K Peters, Ltd. p. xxv

[3]Hoffmann,  E.T.A. (1816). Der Sandmann. In Hoffmann (1817). Die Nachtstücke.  Retrieved on April 8, 2020 from https://germanstories.vcu.edu/hoffmann/sand_pics.html  translated here https://germanstories.vcu.edu/hoffmann/sand_e_pics.html additional information: http://self.gutenberg.org/articles/The_Sandman_(short_story) 

[4] Nielsen, W. C. (2016). Goethe, Faust, and Motherless Creations. Goethe Yearbook, 23(1), 59–75. North American Goethe Society.  Information retrieved on April 8, 2020 from https://muse.jhu.edu/article/619344/pdf

[5] An artistic interpretation of the link between the artificial life of the Frankenstein character and AI is explored here: http://frankenstein.ai/  Retrieved on April 8, 2020

The Field of AI (part 02): A “Pre-History” & a Foundational Context.

last update: Friday, April 24, 2020

URLs for A “Pre-History” & a Foundational Context:

  • This post is the main post on a Pre-History & a Foundational context of the Field of AI. In this post a narrative is constructed surrounding the “Pre-History”. It links with the following posts:
  • This post is a first and very short link with Literature, Mythology & Arts as one of the foundational contexts of the Field of AI
  • The second part in the contextualization is the post touching on a few attributes from Philosophy, Psychology and Linguistics
  • Following that, one can read about a few attributes picked up from Control Theory as contextualizing the Field of AI
  • Cognitive Science is the fourth field that is mapped with the Field of AI.
  • Mathematics & Statistics is in this writing the sixth area associated as a context to the Field of AI
  • Other fields contextualizing the Field of AI are being considered (e.g. Data Science & Statistics, Economy, Engineering fields)
According to a blog associated with the Institute for Marital Healing, parents can at times be the culprit: not by treating the deeprootsmag.org prescription de viagra canada child badly, but the opposite, being too permissive and indulging a child’s selfishness. With the help of male enhancement pills, you will cherish your decision throughout your cheapest viagra for sale life. The natural means of curing erectile dysfunction cialis online australia include minimizing fat in your diet, cutting down alcohol intake, reducing cholesterol levels in the body, and exercising regularly. Go For Only Genuine Products A safe and original medication helps to lead a successful ED treatment. viagra price uk

The Field of AI: A “Pre-History”.

A “pre-history” and a foundational context of Artificial Intelligence can arguably be traced back to a number of events in the past, as well as to a number of academic fields of study. In this post only a few have been handpicked.

This post will offer a very short “pre-history” while following posts will dig into individual academic fields that are believed to offer the historical and present-day context for the field of AI.

It is not too far-fetched to link the roots of AI, as the present-day field of study, with the human imagination of artificial creatures referred to as “automatons” (or what could be understood as predecessors to more complex robots).

While it will become clear here that the imaginary idea of automatons in China is remarkably older, it has often been claimed that the historic development towards the field of AI, as it is intellectually nurtured today, commenced more than 2000 years ago in Greece, with Aristotle and his formulation of the human thought activity known as “Logic”.

Presently, with logic, math and data one could make a machine appear to have some degree of “intelligence”. Note that the perception of such an appearance does not mean the machine is intelligent. What’s more, it could be refreshing to consider that not all intelligent activity is (intended to be seen as) logical.

It’s fun, yet important, to add that to some extent, initial studies into logic could asynchronously be found in China’s history with the work by Mòzǐ (墨子), who conducted his philosophical reflections a bit more than 2400 years ago. 

Coming back to the Ancient Greeks: besides their study of this mode of thinking, they also experimented with the creation of basic automatons.

Automatons (i.e. self-operating yet artificial mechanical creatures) were likewise envisioned in China, and some basic forms were created in its long history of science and technology.[1] An early mention can be found in Volume 5, “The Questions of Tang” (汤问; 卷第五 湯問篇), of the Lièzǐ (列子),[2] an important historical Daoist text.

In this work there is mention of these kinds of (imagined) technologies or “scientific illusions”.[3] The king in this story became upset by the appearance of intelligence and needed to be reassured that the automaton was only that: a machine…

Figure 1 King Mù of Zhōu (周穆王; Zhōu Mù Wáng), who reigned a little more than 2950 years ago, meeting an automaton (i.e. the figure depicted with straighter lines, top-left) introduced by Yen Shi, as mentioned in the fictional book Lièzǐ. Image retrieved on March 5, 2020 from here
Figure 2 Liè Yǔkòu (列圄寇/列禦寇), aka the Daoist philosopher Lièzĭ (列子), who imagined an (artificial) humanoid automaton. This visual was painted with “ink and light colors on gold-flecked paper” by Zhāng Lù (张路) during the Míng Dynasty (Míng cháo, 明朝; 1368–1644). Retrieved on January 12, 2020 from here; image license: public domain.

Jumping forward to the year 1206, the Arab inventor Al-Jazari supposedly designed the first programmable humanoid robot in the form of a boat, powered by water flow and carrying four mechanical musicians. He wrote about it in his work entitled “The Book of Knowledge of Ingenious Mechanical Devices”.

It is believed that Leonardo Da Vinci was strongly influenced by his work.[4] Al-Jazari additionally designed clocks with water or candles. Some of these clocks could be considered programmable in a most basic sense.

Figure 3 Al-Jazari’s mechanical musicians machine (1206). Photo retrieved on March 4, 2020 from here; image: public domain

One could argue that the further advances of the clock (around the 15th and 16th century), with its gear mechanisms that were used in the creation of automatons as well, were instrumental to the earliest foundations, moving us in the direction of where we are exploring AI and (robotic) automation or autonomous vehicles today.

Between the 16th and the 18th centuries, automatons became more and more common. René Descartes, in 1637, considered thinking machines in his book entitled “Discourse on the Method”. In 1642, Pascal created the first mechanical digital calculating machine.

Figure 4 René Descartes; oil on canvas, painted by Frans Hals the Elder (1582 – 1666), a painter from Flanders (now northern Belgium) working in Haarlem, the Netherlands. This work: circa 1649-1700; photographed by André Hatala. File retrieved on January 14, 2020 from here. Image license: public domain

Between 1801 and 1805 the first programmable machine was invented by Joseph-Marie Jacquard. He was strongly influenced by Jacques de Vaucanson and his work on automated looms and automata. Joseph-Marie’s loom was not even close to a computer as we know it today. It was a programmable loom that used punched paper cards to automate the textile-making actions of the loom. What is important here is the punched-card mechanism, which influenced the technique used to develop the first programmable computers.

Figure 5 Close-up view of the punch cards used by the Jacquard loom, on display at the Museum of Science and Industry in Manchester, England. This public domain photo was retrieved on March 12, 2020 from here; image: public domain

In the first half of the 1800s, the Belgian mathematician Pierre François Verhulst discovered the logistic function (i.e. the sigmoid function),[5] which would turn out to be quintessential in the early developments of Artificial Neural Networks, and specifically in those called “perceptrons”, where a threshold function is used to activate the output of a signal, operating in a more analog rather than digital manner and mimicking the biological brain’s neurons. It should be noted that present-day developments in this area do not only use the sigmoid function and might even prefer other activation functions instead.
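For reference, the logistic (sigmoid) function mentioned here can be written as σ(x) = 1 / (1 + e^(−x)). The short sketch below compares it with a hard threshold (step) function, showing why the sigmoid behaves in a more gradual, “analog” way; the sample inputs are arbitrary.

```python
# A small sketch comparing the logistic (sigmoid) function with a hard threshold.
# The sigmoid gives gradual values between 0 and 1, while the step function
# jumps abruptly from 0 to 1. The sample inputs are arbitrary.

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def step(x):
    return 1 if x >= 0 else 0

for x in [-4.0, -1.0, 0.0, 1.0, 4.0]:
    print(f"x = {x:+.1f}   sigmoid = {sigmoid(x):.3f}   step = {step(x)}")
```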



In 1936 Alan Turing proposed his Turing Machine. The Universal Turing Machine is accepted as the origin of the idea of a stored-program computer. This would later, in 1946, be used by John von Neumann for his “Electronic Computing Instrument“.[6] Around that same time the first general purpose computers started to be invented and designed. With these last events we could somewhat artificially and arbitrarily claim the departure from “pre-history” into the start of the (recent) history of AI.

Figure 6 Alan Turing at the age of 16. Image credit: PhotoColor [CC BY-SA (https://creativecommons.org/licenses/by-sa/4.0)]; image source retrieved April 10, 2020 from here


As for fields of study that have laid some “pre-historical” foundations for AI research and development, which continue to be enriched by AI or that enrich the field of AI, there are arguably a number of them. A few will be explored in following posts. The first posts will touch on a few hints of Literature, Mythology and the Arts.


[1] Needham, J. (1991). Science and Civilisation in China: Volume 2, History of Scientific Thought. Cambridge, UK: Cambridge University.

[2] Liè Yǔkòu (列圄寇 / 列禦寇). (5th Century BCE). 列子 (Lièzǐ). Retrieved on March 5, 2020 from https://www.gutenberg.org/cache/epub/7341/pg7341-images.html  and 卷第五 湯問篇 from https://chinesenotes.com/liezi/liezi005.html   and an English translation (not the latest) from  https://archive.org/details/taoistteachings00liehuoft/page/n6/mode/2up  

[3] Zhāng, Z. (张 朝 阳).  ( November 2005). “Allegories in ‘The Book of Master Liè’ and the Ancient Robots”. Online: Journal of Heilongjiang College of Education. Vol.24 #6. Retrieved March 5, 2020 from https://wenku.baidu.com/view/b178f219f18583d049645952.html

[4] McKenna, A. (September 26, 2013). Al-Jazarī Arab inventor. In The Editors of Encyclopaedia Britannica. Online: Encyclopaedia Britannica Retrieved on March 25, 2020 from https://www.britannica.com/biography/al-Jazari AND:

Al-Jazarī, Ismail al-Razzāz; Translated & annotated by Donald R. Hill. (1206). The Book of Knowledge of Ingenious Mechanical Devices. Dordrecht, The Netherlands: D. Reidel Publishing Company. Online Retrieved on March 25, 2020 from https://archive.org/details/TheBookOfKnowledgeOfIngeniousMechanicalDevices/mode/2up

[5] Bacaër, N. (2011). Verhulst and the logistic equation (1838). A Short History of Mathematical Population Dynamics. London: Springer. pp. 35–39.  Information retrieved from https://link.springer.com/chapter/10.1007%2F978-0-85729-115-8_6#citeas and from mathshistory.st-andrews.ac.uk/Biographies/Verhulst.html

[6] Davis, M. (2018). The Universal Computer: the road from Leibniz to Turing. Boca Raton, FL: CRC Press, Taylor & Francis Group

The Field of AI (part 01): Context, Learning & Evolution

One could state that Artificial Intelligence (AI) methods enable the finding of, and interaction with, patterns in the information available from the contexts of an event, object or fact. These can be shaped into data points and sets. Many of these sets are tremendously large data sets. So large, so interconnected and so changeable are these pools of data that it is not possible for any human to see the patterns that are actually there, that are meaningful, or that can be projected to anticipate the actuality of an imagined upcoming event.

While not promising that technologies coming out of the field of AI are the only answer, nor the answer to everything, one could get to know of their existence and perhaps apply some of the methods used in creating them. One could, furthermore, use aspects from within the field of AI to learn about a number of topics, even about the processes of learning itself, and about how to find unbiased or biased patterns in the information presented to us. Studying some basics about this field could offer yet another angle of meaning-giving in the world around and within us. What is a pattern, if not an artificial promise to offer some form of meaning?

It’s not too far-fetched to state that the study of Artificial Intelligence is partly the study of cognitive systems[1] as well as of the context within which these (could) operate. While considering AI,[2] one might want to briefly consider “context.”

Here “context” is the set of conditions and circumstances preceding, surrounding or following a cognitive system and relating to its processed, experienced, imagined or anticipated events. One might want to weigh how crucial conditions and circumstances are, or could be, to both machine and human.[3] The field of AI is one of the fields of study that could perhaps offer an opportunity to do so.

A context is a source from which a cognitive system collects its (hopefully relevant) information, or at least its data. Cognitive Computing (CC) systems are said to be those systems that try to simulate human thought processes, to solve problems, via computerized models.[4] It is understandable that some classify this as a subset of Computer Science, while others will classify CC as a (sometimes business-oriented) subset of the field of AI.[5] Others might link it more closely to the academic work done in Cognitive Science. Whether biological or artificial, to a number of researchers the brain-like potentials are the core concern.[6]

As can be seen in a few of the definitions, and as argued by some experts, technologies from the broad field of AI do not necessarily have to mimic *human* thought processes or human intelligence alone. As such, AI methods might solve a problem in a different way from how a human might do it.

However similar or different, the meaning-giving information obtained from a context is important to an AI solution as well as to a biological brain. One might even wonder whether this is their main reason for being: finding and offering meaning.

The contextual information an AI system collects could be (defined by or categorized as) time, locations, user profiles, rules, regulations, tasks, aims, sensory input, various other big to extremely huge data sets and the relationships between each of these data sets in terms of influencing or conflicting with one another. All of these sources for data are simultaneously creating increasing complexities, due to real-time changes (i.e. due to ambiguity, uncertainty, and shifts). AI technologies offer insights through their outputs of the *best* solution, rather than the one and only certain solution for a situation, in a context at a moment in spacetime.

The wish to understand and control “intelligence” has attracted humans for a long time. It is then reasonable to think that it will attract our species’ creative and innovative minds for a long time to come. It is in our nature to wonder in general, and to wonder about intelligence and wisdom in particular; whatever their possible interlocked or independent definitions might be(come) and whichever their technological answers might be.

In considering this, one might want to be reminded that the scientific name of our species itself is a bit of a give-away of this (idealized) intention or aspiration: “Homo Sapiens.” Somewhat loosely translated, it could be understood to mean: “Person of Wisdom”.

In the midst of some experts who think that presently our intelligence is larger than our wisdom, others feel that, if handled with care, consideration and contextualization, AI research and developments just might positively answer such a claim or promise, and might at least augment our human desires towards becoming wiser.[7] Just perhaps, some claim,[8] it might take us above and beyond[9] being Homo Sapiens.[10]

For now, we are humans exploring learning with and by machines in support of our daily yet global needs.

For you and me, the steps toward such an aim need to be practical. The resources to take those steps need to be graspable here and now.

At the foundation, to evaluate the validity or use of such claims, we need to understand a bit what we are dealing with. Besides the need for the nurturing of a number of dimensions in our human development, we might want to nurture our Technological Literacy (or “Technology Literacy”).[11]

A number of educators[12] seem to agree that,[13] while considering human experiences and their environments, this area of literacy is not too bad a place to start off with.[14] In doing so, we could specifically unveil a few points of insight associated with Artificial Intelligence; that human-made technological exploration of ambiguous intelligence.





[1] Sun F., Liu, H., Hu, D.  (eds). (2019). Cognitive Systems and Signal Processing: 4th International Conference, ICCSIP 2018, Beijing, China, November 29 – December 1, 2018, Revised Selected Papers, Part 1 & Part 2. Singapore: Springer

[2] DeAngelis, S. F. (April 2014). Will 2014 be the Year you Fall in Love with Cognitive Computing? Online: WIRED. Retrieved November 22, 2019 from https://www.wired.com/insights/2014/04/will-2014-year-fall-love-cognitive-computing/

[3] Desouza, K. (October 13, 2016). How can cognitive computing improve public services? Online Brookings Institute’s Techtank Retrieved November 22, 2019 from https://www.brookings.edu/blog/techtank/2016/10/13/how-can-cognitive-computing-improve-public-services/

[4] Gokani, J. (2017). Cognitive Computing: Augmenting Human Intelligence. Online: Stanford University; Stanford Management Science and Engineering; MS&E 238 Blog. Retrieved November 22, 2019 from https://www.datarobot.com/wiki/cognitive-computing/

[5] https://www.datarobot.com/wiki/cognitive-computing/

[6] One example is: Poo, Mu-ming. (November 2, 2016). China Brain Project: Basic Neuroscience, Brain Diseases, and Brain-Inspired Computing. Neuron 92, NeuroView, pp. 591-596.  Online: Elsevier Inc. Retrieved on February 25, 2020 from https://www.cell.com/neuron/pdf/S0896-6273(16)30800-5.pdf  . Another example is: The work engaged at China’s Research Center for Brain-Inspired Intelligence (RCBII), by the teams led by Dr XU, Bo and Dr. ZENG, Yi. Founded in April 2015, at the CAS’ Institute of Automation, the center contains 4 research teams: 1. The Cognitive Brain Modeling Group (aka Brain-Inspired Cognitive Computation); 2. The Brain-Inspired Information Processing Group; 3. The Neuro-robotics Group (aka Brain-Inspired Robotics and Interaction) and 4. Micro-Scale Brain Structure Reconstruction. Find some references here: bii.ia.ac.cn

[7] Harari, Y. N. (2015). Sapiens. A Brief History of Humankind. New York: HarperCollings Publisher

[8] Gillings, M. R., et al. (2016). Information in the Biosphere: Biological and Digital Worlds. Online: University California, Davis (UCD). Retrieved on March 25, 2020 from https://escholarship.org/uc/item/38f4b791

[9] (01 June 2008). Tech Luminaries Address Singularity. Online: Institute of Electrical and Electronics Engineers (IEEE Spectrum). Retrieved on March 25, 2020 from  https://spectrum.ieee.org/static/singularity

[10] Maynard Smith, J. et al. (1995). The Major Transitions in Evolution. Oxford, England: Oxford University Press  AND Calcott, B., et al. (2011). The Major Transitions in Evolution Revisited. The Vienna Series in Theoretical Biology. Boston, MA: The MIT Press.

[11]  National Academy of Engineering and National Research Council. (2002). Technically Speaking: Why All Americans Need to Know More About Technology. Washington, DC: The National Academies Press   Online: NAP Retrieved on March 25, 2020 from https://www.nap.edu/read/10250/chapter/3

[12] Search, for instance, the search string “Technological Literacy” through this online platform: The Education Resources Information Center (ERIC), USA https://eric.ed.gov/?q=Technological+Literacy

[13] Dugger, W. E. Jr. et al (2003). Advancing Excellence in Technology Literacy. In Phi Delta Kappan, v85 n4 p316-20 Dec 2003 Retrieved on March 25, 2020 from https://eric.ed.gov/?q=Technology+LIteracy&ff1=subTechnological+Literacy&ff2=autDugger%2c+William+E.%2c+Jr.&pg=2

[14] Cydis, S. (2015). Authentic Instruction and Technology Literacy. In Journal of Learning Design 2015 Vol. 8 No.1 pp. 68 – 78. Online: Institute of Education Science (IES) & The Education Resources Information Center (ERIC), USA. Retrieved on March 25, 2020 from https://files.eric.ed.gov/fulltext/EJ1060125.pdf


IMAGE CREDITS:

An example artificial neural network with a hidden layer.

en:User:Cburnett / CC BY-SA (http://creativecommons.org/licenses/by-sa/3.0/) Retrieved on March 12, 2020 from https://upload.wikimedia.org/wikipedia/commons/e/e4/Artificial_neural_network.svg


The Field of AI (Part 07): a bibliography with URLs

Compared to the previous posts, this post is very rough and messy. When time permits I will clean it up.


Here below are a few lists associated with my personal learning about the academic field of AI and its related fields.

I am not an expert in this field; rather the contrary. I am interested in considering questions related to bringing technology literacy into the thinking of educators and learners within the K-12 realm. The resources below and elsewhere in this blog are mainly catered to high school students (or those beyond). If some resources seem too advanced, they were added to show where one could aim to learn toward, or where one could end up if one continued studies in this area. Therefore, some resources are technical while others are not.

This post includes a name list. It is a list of what I understand to be leading voices in the field. Secondly, there is a bibliography of works I found online or offline.

I superficially or more profoundly browsed these, in search of a better understanding and contextualization (in a historical and trans-disciplinary setting).

Where found and available URLs are offered. Some additional or the same URLs or references can be found in other posts (see the one on mathematics and the field of AI, to name but one).

Lastly, a list of URLs to data sets is included at the end of this post.

An Incomplete List of Leading Voices

As a young learner on your educational path and as a young participant in your community you might like to find some references that can give you an idea or that can ignite your imagination of how and where and with whom to walk your path towards growing into tomorrow’s AI expert.

We learn on our own but better even by the guidance of others and by observing and learning from what they have done or are doing.

Here below is a list of historical figures and present-day scholars. The list is surely not complete and needs continued updating. The list does not imply a preference imposed by the author. It tries to highlight a few scholars from around the world, scholars from within China and scholars that are in pure academic research as well as in innovation and entrepreneurial areas. More scholars and innovators, involved in important work in the field of AI, are missing than those listed. It also is incomplete as to what all the work is that these experts have been involved in up till now. The list does however give you a spark, a hint and points you in directions you can further explore by means of your own research and study.

Maybe you can add a few names to this list. Perhaps one day your work will be a beacon as well for another young learner.

Following The Master Part 1: Some Historical, Foundational Key Figures or Technological Pioneers in the Earliest Development of the Field of AI.

·         BOOLE, George (1815 – 1864)
·         Dr. CHOMSKY, Noam (1928 – )
·         SHANNON, Claude
·         Dr. FEIGENBAUM, Edward (1936 –     )
·         Dr. FREGE, Gottlob (1848 – 1925)
·         Dr. GÖDEL, Kurt (1906 – 1978)
  • Mathematician and one of the most important logicians.
    • His work lies at the mathematical and logical foundation of, for instance, Turing’s work.
·         Dr. GOOD, Irving John (1916 – 2009)
    • Known for his speculations[2] on the “intelligence explosion,”[3] which are claimed to have led to today’s hypothetical concept of the “technological singularity”.[4] Similar to this present-day hypothesis, Dr. Good envisioned a potential future creation, in the likes of what we now call an Artificial General Intelligence (AGI) agent, that would be able to solve human concerns and that would far outweigh human (intellectual) ability. He felt it stood to reason that it would be the last invention humans would need to make.[5]
·         Dr. HOLLAND, John Henry (1929 – 2015)
    • Pioneered Genetic Algorithms (an idea earlier suggested by Turing[6]), together with other scientists such as Dr. GOLDBERG, David Edward.
·         Dr. Licklider, J. C. R. (1915 – 1990)
·         Dr. McCARTHY, John (1927 – 2011)
  • Inventor of the LISP programming language, used in the early research and development of AI systems
    • Co-founder of the academic field of Artificial Intelligence
    • Invented the term “Artificial Intelligence”
·         Dr. MINSKY, Marvin (1927 – 2016)
  • Co-founder of The MIT Artificial Intelligence Lab
    • Leading voice on and pioneer in Artificial Intelligence
    • Built the first neural network learning machine in 1951
·         Dr. NEWELL, Allen (1927 – 1992)
·         Dr. PAPERT, Seymour (1928 – 2016)
·         Dr. ROBINSON, John Alan
·         ROCHESTER, Nathaniel
·         Dr. SAMUEL, Arthur (1901 – 1990)
  • A pioneer in computer gaming and AI
    • He put the term and the research surrounding “Machine Learning” on the map in the 1950s with his paper: “Some Studies in Machine Learning Using the Game of Checkers“. IBM Journal of Research and Development. 44: 206–226
·         STRACHEY, Christopher
·         Dr. TURING, Alan (1912 – 1954)
  • By many referred to as the father of Artificial Intelligence.
    • Created fundamental theories in Computation and Computer Science; e.g. the idea of the binary digital language of ones and zeroes, and the (theoretical) Turing Machines
    • Created the Turing Test, allowing one to test whether or not a machine showcases intelligent behavior[8]
    • Contributed to the design of some of the first electronic computers
·         Dr. von NEUMANN, John (1903 – 1957)
    • Most computers, as we know them today, are based on his conceptualizations.
  • Dr. WÁNG Hào (王浩)(1921 – 1995)
    • Tsinghua University graduate
    • Mathematician, philosopher and logician.
    • Proved hundreds of mathematical theories with a computer program written in 1959
    • Inventor of the Wang Tiles; proved that any Turing Machine can be turned into Wang Tiles.
    • Inventor of a number of computational models.
  • Dr. ZADEH, Lotfi Aliasker (1921 – 2017)
    • AI researcher, mathematician, computer scientist
    • Inventor[9] of Fuzzy Mathematics, Fuzzy Algorithms, Fuzzy Sets, Fuzzy Logic and so on. Over-simplified, Fuzzy Logic is a generalization of Boolean logic and a form of many-valued logic, with values in-between 0 and 1 (whereas traditionally, logic operates with values that are either on or off; one or zero; right or wrong). A small illustration follows right after this list.
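As the small illustration promised above (using the common min/max formulation of fuzzy operators rather than Dr. Zadeh’s full framework), the sketch below evaluates a fuzzy AND, OR and NOT on degrees of truth between 0 and 1; the membership values are invented for the example.

```python
# A small sketch of fuzzy logic using the common min/max operators.
# Truth values may lie anywhere between 0 (false) and 1 (true).
# The membership degrees below are invented for illustration.

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

warm = 0.7      # "the room is warm" is 70% true
bright = 0.4    # "the room is bright" is 40% true

print("warm AND bright:", fuzzy_and(warm, bright))   # 0.4
print("warm OR bright: ", fuzzy_or(warm, bright))    # 0.7
print("NOT warm:       ", fuzzy_not(warm))           # approximately 0.3
```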
Mini Project #___ : Inspiration by Following the Old Masters
  • Which of these figures or their work are you most inspired by? Why?
  • Can you find some other historical figures associated with research & developments (R&D) in AI whom you are inspired by?

Following The Master Part 2: Some Present-day Leading Figures in the field of AI (in alphabetic order) with attention to Chinese Scholars

The following is an incomplete list composed as of January 2020. Please note, more experts or leading voices are mentioned across the text chapters (e.g. see footnotes and references; see section 05 of this text). The author apologizes to any expert or leading voice that might not yet be represented or that might be inaccurately represented here. Please contact the author so that refinements can be added. The aim is to give young learners some sort of direction, motivation, hints and passion for the field of AI. Thank you for your support:

  • ·         Dr. BENGIO, Yoshua:  Research into Artificial Neural Networks, Unsupervised Machine Learning and Deep Learning
  • ·         Dr. BLEI, David M.: research in Machine Learning, Statistical Models (e.g. Topic Modeling), algorithms, and related topics.
  • ·         Dr. BOSTROM, Nick: Research in ethics surrounding Artificial Superintelligence.
  • ·         Dr. BREAZEAL, Cynthia: Research in Human-Robot Interaction.
  • ·         Dr. BRYSON, Joanna: Research in AI ethics and in AI systems helping to understand biological intelligence
  • ·         Dr. Chén Dānqí 陈丹琦: a Tsinghua & Stanford Universities graduate. Assistant Professor at Princeton University. Main field of research in Natural Language Processing (NLP).
  • Graduate from and Professor at the Institute of Computing Technology, Chinese Academy of Sciences. Selected by MIT Technology Review as one of the 2015 top 35 innovators under 35 years old. Research and developments in large-scale Machine Learning solutions, Deep Learning, reduction of energy requirements or computational costs, brain-inspired processor chips, and related topics.
  • ·         Dr. DUAN Weiwen: AI Director of the Department of Philosophy of Science and Technology, The Institute of Philosophy, The Chinese Academy of Social Sciences (CASS). He specializes, among others, in ignorance in sciences, philosophy of IT, Big Data issues, and Artificial Intelligence. Dr. Duan is the deputy chairman of the Committee of Big Data Experts of China. His research is supported by the National Social Sciences Fund of China (NSSFC).
  • Research and developments in Emotional AI, subtle expression recognition, facial recognition.  Entrepreneur in related AI developments.
  • ·         Dr. ERKAN, Ayse Naz: Research in Content Understanding & Applied Deep Learning.
  • ·         Dr. FREUND, Yoav: Research and advances in algorithm design, Machine Learning and probability theory.
  • ·         Dr. FUNG, Pascale (馮雁): leading researcher in Natural Language Processing.
  • Research on Human-centered AI, autonomous vehicles, Deep Learning, robotics
  • Research on algorithm design, Bayesian Machine Learning, Computational Neuroscience, Bioinformatics, Statistics and other areas.
  • ·         Dr. GOERTZEL, Ben Research on and developments in AGI and robotics.
  • Research and developments in Machine Learning and Deep Learning
  • Neuroscientist. Research and developments in Artificial General Intelligence (AGI), Machine Learning (AlphaGo) and related topics.
  • ·         Dr. HINTON, Geoffrey E.: Leading AI academic, computer scientist and cognitive psychologist with a focus on artificial neural networks. The great-great-grandson of the logician George Boole (whose work on, among others, algorithms for logical deduction is still of computational importance). Dr. Hinton is considered one of the most prominent pioneers in Deep Learning.
  • ·         Dr. HUTTER, Marcus: researching the mathematical foundations of AI and Reinforcement Learning; a leading authority on theoretical models of superintelligent machines.
  • Trained Dr. Ng and other leading scholars in the field of AI. Research in Machine Learning, recurrent neural networks, Bayesian networks in Machine Learning and other links between Machine Learning and statistics
  • ·         Dr. KARPATHY, Andrej: Research on Deep Learning in Computer Vision, Generative Modeling, Reinforcement Learning, Convolutional Neural Networks, Recurrent Neural Networks, Natural Language Processing
  • Research on Artificial Intelligence (e.g. Learning platforms for humans; Content recommendation, etc.)1
  • ·         Dr. KURZWEIL, Ray: a legendary authority on AI and thinker on the technological singularity
  • ·         Dr. LAFFERTY, John D.: Research in Language and graphic models, semi supervised learning, information retrieval, speech recognition.
  • ·         Dr. LECUN, Yann: Research in computational neuroscience, Machine Learning, mobile robotics, and computer vision. Developed the ‘Convolutional Neural Networks’ (a model of image recognition mimicking biological processes)
  • ·         Dr. LI Fei-fei: AI scientist & Machine Learning expert with a focus on computational neuroscience, image / visual recognition and Big Data. Pushed the collection and creation of large, high-quality datasets towards the improvement of algorithm design. The result was ImageNet, containing more than 10 million hierarchically organized images.
  • ·         Dr. LI Kāifù: research on Machine Learning and pattern recognition; developed the world’s first speaker-independent, continuous speech recognition system; investor mainly in China’s AI R&D; Chairman of the World Economic Forum’s Global AI Council; author on AI and a leading force in supporting the training of AI-related engineers in China.
  • ·         Dr. LI, Sheng李生 : leading research on Natural Language Processing (NLP) and one of China’s pioneers in this field. Graduate from and professor at the Harbin Institute of Technology (HIT). President of Chinese Information Processing Society of China (CIPSC).
  • Former President of Beihang University and Professor of Computer Science at the Beihang University. Co-lead China’s National Engineering Laboratory of Deep Learning. Research in AI and network computing.
  • ·         Dr. LIM, Angelica: robotic development & human-styled learning
  • ·         Dr. LIN Dekang: A Tsinghua University graduate. Senior research scientist in Machine Learning, Natural Language Processing and more at a major AI lab.
  • ·         Dr. LIN Yuanqing: A Tsinghua University graduate. Research and developments in AI, Big Data, Deep Learning,
  • ·         Dr. LIU, Ting (刘挺): research and development in the area of Natural Language Processing.
  • Fudan University graduate. Leading AI researcher and developer with a focus on autonomous driving and other aspects in the field of AI. Leadership in related industry.
  • ·         Dr. MARCUS, Gary: Research on natural and artificial intelligence in areas of psychology, genetics and neuroscience.
  • ·         Dr. McCALLUM, Andrew: Research in Machine Learning (i.e. Semi Supervised Learning, natural language processing, information extraction, information integration, and social network analysis)
  • ·         Dr. MIN Wanli: A University of Science & Technology of China graduate. Research and developments in AI applications, traffic pattern recognition, Machine Learning and related aspects.
  • AI researcher, famous AI educator, worked on autonomous helicopters, Artificial Intelligence for robots and created the Robot Operating System (ROS). Research in Machine Learning (Reinforcement Learning, Supervised Learning, …). Is also well-known for his online course material.[10]
  • ·         Dr. Oliphant, Travis: Scientific Computing developer. Created NumPy, SciPy and Numba. Founded Anaconda and more
  • Data Scientist. Research in Big Data, Data Mining and Machine Learning. Editor of KDnugget (a source for Data Science and Machine Learning).
  • Research in AI for self-driving vehicles.
  • Dr. RUS, Daniela L.: Research in self-reconfiguring, distributed and collaborative robots (e.g. autonomous swarms) and in autonomous, environment-adaptable, shape-shifting machines (of use where conditions cannot be foreseen and therefore cannot be hard-coded).
  • Dr. RUSSELL, Stuart J.: Author (together with Dr. NORVIG, Peter) of the most cited book on AI, used as AI course material in about 1300 universities across about 116 countries. He did research on inductive and analogical reasoning. He founded the Center for Human-Compatible Artificial Intelligence.
  • Dr. SCHMIDHUBER, Jürgen: AI scientist with a focus on self-improving AI and (recurrent) neural networks used for speech recognition. He works on AI for finance and autonomous vehicles.
  • Dr. SCHÖLKOPF, Bernhard: Research in Machine Learning, brain-computer interfaces, and other related areas.
  • Dr. SCHAPIRE, Robert: Research and developments in Machine Learning, decision trees, and game theory.
  • Dr. SHEN Xiangyang: Industry leader in research and development in the field of Artificial Intelligence.
  • AI scientist with a focus on learning. One of the creators behind the AI method known as modern Computational Reinforcement Learning.
  • Dr. SWEENEY, Latanya: Research in the area of biases in Machine Learning algorithms.
  • Dr. TANG Xiaoou: Research in computer vision, pattern recognition, and video processing. A leading entrepreneur.
  • Dr. TEGMARK, Max: Investigates existential risk from advanced artificial intelligence.
  • Dr. TEH, Yee Whye: Research in Machine Learning (Deep Learning), Statistical Machine Learning and face recognition.
  • Dr. THRUN, Sebastian: Research in Machine Learning, autonomous vehicles, probabilistic algorithms, and robotic mapping.
  • Dr. VALIANT, Leslie: Research and advances in computational theory, complexity theory, algorithms and machine learning.
  • Dr. VAPNIK, Vladimir: Research in the area of Machine Learning and Statistical Learning. Co-invented the Support-Vector Machine (SVM) method.
  • Research in the area of Machine Learning, Deep Learning and Reinforcement Learning
  • Dr. WANG Hǎifēng (王海峰): A Harbin Institute of Technology graduate. Leadership in AI developments with foci on Deep Learning, Big Data, computer vision, Natural Language Processing (NLP), machine translation, speech recognition, personalized recommendations, and so on.
  • Dr. WU Hua: A graduate of the Chinese Academy of Sciences. Cutting-edge breakthroughs in Conversational AI and Natural Language Processing (NLP), dialog systems, Neural Machine Translation and related topics.
  • Dr. XU Wei: A Tsinghua University graduate. Received the title of “Distinguished Scientist”. Research and developments in areas of Deep Learning, image classification, autonomous vehicles, translation processes, and so on.
  • Dr. YANG Yiming: Research in Machine Learning.
  • Dr. YE Jieping: A Fudan University graduate. Research and developments in Big Data, Data Mining, Machine Learning, autonomous vehicles, and so on.
  • Dr. YU Dong: A Zhejiang University graduate. Research and developments in speech recognition, Natural Language Processing, natural language understanding, and related topics.
  • Dr. YU Kai: A Nanjing University graduate. Research and developments in Deep Learning, pervasive AI hardware systems, facial recognition, automatic ordering, driver-assistant systems, and related areas.
  • Dr. ZADEH, Reza: Research on discrete applied mathematics, AI, and Machine Learning.
  • Dr. ZENG, Yi: Research on technical models for Brain-inspired AI, AI Ethics and Governance. Professor and Deputy Director at the Research Center for Brain-inspired Intelligence (RCBII), Institute of Automation, Chinese Academy of Sciences. Director of the Research Center on AI Ethics and Governance, Beijing Academy of Artificial Intelligence. Dr. Zeng is a board member of the National Governance Committee for the New Generation Artificial Intelligence, Ministry of Science and Technology, China.
  • A Hefei University of Technology graduate. Professor of Machine Learning at the Peking University (aka Beida; 北京大学, PKU). Former Vice Dean of the School of EECS. Research in Machine Perception, computer vision and other related areas.
  • Dr. ZHANG Bo: Co-leads China’s National Engineering Laboratory of Deep Learning. A member of the Chinese Academy of Sciences. A graduate from and Professor at Tsinghua University. Research in Machine Learning, neural networks, task and motion planning, pattern recognition, image retrieval and classification, and other areas.
  • Dr. ZHANG, Min (张民): Research and development in the area of Natural Language Processing at Soochow University (in Sūzhōu, Jiāngsū Province, P.R. China; Sūzhōu Dàxué, 苏州大学).
  • Dr. ZHANG, Tong (张潼): Research focus on Machine Learning algorithms and theory, statistical methods for big data and their applications, computer vision, speech recognition, Natural Language Processing, and so on.
  • Dr. ZHAO, Tiejun (赵铁军): Research and development in the area of Natural Language Processing.
  • Graduated from 3 Chinese universities: Northeastern University, Beihang University and the Chinese Academy of Sciences, Institute of Automation. Conducts research on Machine Learning and Explainable AI at one of the front-running labs in AI development.
  • Dr. ZHOU Jingren: A graduate of the University of Science and Technology of China. Research and developments in AI, Big Data, large-scale Machine Learning, speech and language processing, image and video processing, and so on.
  • Dr. ZHOU, Ming: Cutting-edge research and development in the area of Natural Language Processing.
  • Doctor of Science at the Chinese Academy of Sciences. Research and Developments in ChatBots, conversational interfaces, and related areas.
Mini Project #___: Inspiration from Following Today’s Masters
  • Which of these figures or their work are you most inspired by? Why?
  • Can you find some more details and up-to-date information about your chosen role model?
  • Can you find some other leading figures you are inspired by that are also working in the field of AI and that are not (yet) in this list?
  • Can you figure out how your choice of leading figures relates to AI and to other researchers by creating an Entity-Relationship Model (see example here, and the minimal sketch just after this list)?
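For students who want a concrete starting point, below is a minimal sketch, in plain Python, of how such an entity-relationship structure could be written down. The researchers, fields, artifacts and relation names are only illustrative placeholders drawn from the list above; they are assumptions for the exercise, not a complete or authoritative model.

# Entities grouped by type; replace these with your own choices.
entities = {
    "researchers": ["Yann LeCun", "Fei-Fei Li", "Vladimir Vapnik"],
    "fields": ["computer vision", "Machine Learning", "statistical learning"],
    "artifacts": ["Convolutional Neural Networks", "ImageNet", "Support-Vector Machines"],
}

# Relationships written as (subject, relation, object) triples.
relationships = [
    ("Yann LeCun", "researches", "computer vision"),
    ("Yann LeCun", "developed", "Convolutional Neural Networks"),
    ("Fei-Fei Li", "created", "ImageNet"),
    ("Fei-Fei Li", "researches", "Machine Learning"),
    ("Vladimir Vapnik", "co-invented", "Support-Vector Machines"),
    ("Vladimir Vapnik", "researches", "statistical learning"),
]

def related_to(name):
    # Return every (relation, object) pair attached to a given entity.
    return [(relation, obj) for subject, relation, obj in relationships if subject == name]

print(related_to("Yann LeCun"))
# [('researches', 'computer vision'), ('developed', 'Convolutional Neural Networks')]

From triples like these, the same model can be drawn as a diagram (entities as boxes, relations as labelled lines) or, later on, loaded into a graph or database tool of your choice.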

[1] Retrieved on March 27, 2020 from https://library.stanford.edu/collections/edward-feigenbaum-papers

[2] Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. in F. L. Alt and M. Rubinoff (eds.). (1966). Advances in Computers Vol.6: pp. 31–88.

[3] Bostrom. N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press

[4] Shanahan, M. (2015). The Technological Singularity. The MIT Essential Knowledge Series. Cambridge, MA: The MIT Press.

[5] Good, I.J. (1965). p.33

[6] Turing, A. M. (October, 1950). “Computing Machinery and Intelligence”. Mind LIX(236): 433–460, here pp. 459–460.

[7] Robinson, John Alan (January 1965). A Machine-Oriented Logic Based on the Resolution Principle. J. ACM. 12 (1): 23–41 Retrieved on March 24, 2020 from https://web.stanford.edu/class/linguist289/robinson65.pdf

[8] Turing, A. (1948). Intelligent Machinery. http://www.turingarchive.org/viewer/?id=127&title=1 and https://weightagnostic.github.io/papers/turing1948.pdf see: Copeland, J. (2004). The Essential Turing. Oxford: Clarendon Press. pp. 411-432

[9] Zadeh, L. A. (1965). Fuzzy Sets. Information and Control, 8(3),pp. 338–353. Online: Elsevier Inc. ScienceDirect; Retrieved on March 18, 2020 from https://www.sciencedirect.com/journal/information-and-control/vol/8/issue/3

[10] An example of Dr. Ng’s online course material: https://www.coursera.org/learn/machine-learning

A List of Leading Academic Voices, Bibliography, References, Examples & URLs

AI Magazine. Online: Association for the Advancement of Artificial Intelligence. Retrieved on April 21, 2020 from:

Algebra basics and beyond can be studied via these resources retrieved on March 31, 2020 from https://www.ck12.org/fbbrowse/list?Grade=All%20Grades&Language=All%20Languages&Subject=Algebra

Al-Jazarī, Ismail al-Razzāz; Translated & annotated by Donald R. Hill. (1206). The Book of Knowledge of Ingenious Mechanical Devices. Dordrecht, The Netherlands: D. Reidel Publishing Company. Online Retrieved on March 25, 2020 from https://archive.org/details/TheBookOfKnowledgeOfIngeniousMechanicalDevices/mode/2up

Alpaydin, E. (2020). Introduction to Machine Learning. The MIT Press Essential Knowledge Series. Cambridge, MA: MIT Press. Retrieved an introduction on March 25, 2020 from https://mitpress.mit.edu/contributors/ethem-alpaydin Lecture notes to the 2014 print retrieved from https://www.cmpe.boun.edu.tr/~ethem/i2ml3e/

Angwin, J., et al. (2016). Machine Bias. There’s software used across the country to predict future criminals. And it’s biased against blacks. In Pro Publica Retrieved on July 23rd, 2019 from https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Anthony, S. (March 14, 2017). DeepMind in talks with the National Grid to reduce UK energy use by 10%. Online: ars technica. Retrieved February 14, 2020 from https://arstechnica.com/information-technology/2017/03/deepmind-national-grid-machine-learning/

Bacaër, N. (2011). Verhulst and the logistic equation (1838). A Short History of Mathematical Population Dynamics. London: Springer. pp. 35–39.  Information retrieved from https://link.springer.com/chapter/10.1007%2F978-0-85729-115-8_6#citeas

Barrat, J. (2013). Our Final Invention: Artificial Intelligence and the End of the Human Era. New York: Thomas Dunne Books

Baum, S. (2017). A Survey of Artificial General Intelligence Projects for Ethics, Risk and Policy. The Global Catastrophic Risk Institute Working Paper 17-1. Online: The Global Catastrophic Risk Institute. Retrieved on February 25, 2020 from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741

Bayes, T. (1763). An Essay towards solving a Problem in the Doctrine of Chances. Retrieved on March 13, 2020 from the University of California, Irvine, School of Social Sciences at https://www.socsci.uci.edu/~bskyrms/bio/readings/bayes_essay.pdf and with some additional annotations from MIT OpenCourseWare at https://ocw.mit.edu/courses/literature/21l-017-the-art-of-the-probable-literature-and-probability-spring-2008/readings/bayes_notes.pdf

BBC Bitesize. What is an algorithm? Retrieved on February 12, 2020 from https://www.bbc.co.uk/bitesize/topics/z3tbwmn/articles/z3whpv4

Bieda, L. (2009). Making Sense and Nonsense of Markov Chains. Online, retrieved on March 31, 2020 from https://rarlindseysmash.com/posts/2009-11-21-making-sense-and-nonsense-of-markov-chains  AND https://gist.github.com/LindseyB/3928224 

Berke, J.D. (2018). What does dopamine mean? Online: Nature Neuroscience 21, 787–793. Retrieved on March 27, 2020 from https://www.nature.com/articles/s41593-018-0152-y#citeas

Berridge, K. C. et al (1998). What is the Role of Dopamine in Reward: Hedonic Impact, Reward Learning or Incentive Salience? In Brain Research Reviews 28 (1998) 309-369. Elsevier

Bermúdez, J. L., (2014). Cognitive Science: An Introduction to the Science of the Mind. Cambridge: Cambridge University Press. Retrieved on March 23, 2020 from https://www.cambridge.org/us/academic/textbooks/cognitivescience 

Bird, S. et al. (2010). Natural Language Processing with Python — Analyzing Text with the Natural Language Toolkit. Retrieved on April 29, 2020 from https://www.nltk.org/book/ AND https://www.nltk.org/book_1ed/ AND https://www.nltk.org/nltk_data/

Bishop, C. M. (2006). Pattern Recognition and Machine Learning. Springer Retrieved March 1, 2020 from https://www.microsoft.com/en-us/research/people/cmbishop/prml-book/ AND https://www.microsoft.com/en-us/research/uploads/prod/2006/01/Bishop-Pattern-Recognition-and-Machine-Learning-2006.pdf   This book is aimed at “advanced undergraduates or first-year PhD students, as well as researchers and practitioners.” Information retrieved on April 24, 2020 from https://www.microsoft.com/en-us/research/publication/pattern-recognition-machine-learning/

Blum, A. et al. (2018). Foundations of Data Science. Online: Cornell University; Department of Computer Science. Retrieved on April 28, 2020 from https://www.cs.cornell.edu/jeh/book.pdf

Boole, G. (1847). The Mathematical Analysis of Logic. Cambridge: MacMillan, Barclay & Macmillan. Online: Internet Archive. Retrieved on March 25, 2020 from https://archive.org/details/mathematicalanal00booluoft . Alternatively from: https://history-computer.com/Library/boole1.pdf  and https://www.gutenberg.org/files/36884/36884-pdf.pdf . See Lifschitz, V. (2009) for lecture notes

Boole, G. (1854). An Investigation of the Laws of Thought. Cork: Queens College. Online: Auburn University, Samuel Ginn College of Engineering. Retrieved on March 25, 2020 from http://www.eng.auburn.edu/~agrawvd/COURSE/READING/DIGITAL/15114-pdf.pdf . Alternatively: https://www.gutenberg.org/files/15114/15114-pdf.pdf

Bostrom, N. (2014). Superintelligence. Paths, Dangers, Strategies. Oxford: Oxford University Press. A book review: Brundage, M. (2015). Taking Superintelligence seriously. Superintelligence: Paths, dangers, strategies by Nick Bostrom. In Futures 72 (2015) 32 – 35. Online: University of Oxford; Future of Humanity Institute. Retrieved on March 25, 2020 from https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328715000932-main.pdf

Brownlee, J. (2016). Machine Learning Mastery with Python. Vermont Victoria, Australia: Machine Learning Mastery Pty. Ltd

Boyd, S. & Vandenberghe, L. (2009). Convex Optimization. Online: Cambridge University Press. Retrieved on March 9, 2020 from https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf

Butz, M. V. et al. (2017). How the Mind Comes into Being: Introducing Cognitive Science from a Functional and Computational Perspective. Oxford, UK: Oxford University Press.

Calcott, B., et al. (2011). The Major Transitions in Evolution Revisited. The Vienna Series in Theoretical Biology. Cambridge, MA: The MIT Press.

Calculus basics and beyond can be studied via these resources retrieved on March 31, 2020 from https://www.ck12.org/fbbrowse/list?Grade=All%20Grades&Language=All%20Languages&Subject=Calculus

Charniak, E. and McDermott, D. (1985). Introduction to Artificial Intelligence. Addison-Wesley

Charniak, E. (2018). Introduction to Deep Learning. Cambridge, MA: The MIT Press

Chen N. (2016).  China Brain Project to Launch Soon, Aiming to Develop Effective Tools for Early Diagnosis of Brain Diseases. Online: CAS. Retrieved on February 25, 2020 from The Chinese Academy of Sciences English site at http://english.cas.cn/newsroom/archive/news_archive/nu2016/201606/t20160617_164529.shtml  

Chollet, F. ( ). Deep Learning with R.

Chollet, F. (2018). Deep Learning with Python. USA: Manning Publications. Retrieved on April 21, 2020 from https://livebook.manning.com/book/deep-learning-with-python?origin=product-liveaudio-upsell   Information Retrieved from https://github.com/fchollet/deep-learning-with-python-notebooks

Chomsky, N. (1956). Three models for the description of language. IEEE Transactions on Information Theory, 2(3), 113–124. doi:10.1109/tit.1956.1056813 

Chomsky, N. (1965). Aspects of the Theory of Syntax. Cambridge, MA: The MIT Press.

Chomsky, N. (1981, 1993). Lectures on Government and Binding. Holland: Foris Publications. Reprint. 7th Edition. Berlin and New York: Mouton de Gruyter,

Chomsky, N. (2000). New Horizons in the Study of Language and Mind. Cambridge, UK: Cambridge University Press.

Chomsky, N. (2002). Syntactic Structures. Berlin and New York: Mouton de Gruyter.

Copeland, J. (May, 2000). What is Artificial Intelligence? Sections: Chess. Online: AlanTuring.net Retrieved February 14, 2020 from http://www.alanturing.net/turing_archive/pages/Reference%20Articles/what_is_AI/What%20is%20AI12.html

Copeland, J. (2004). The Essential Turing. Oxford: Clarendon Press.

Courant, R. et al. (1996). What Is Mathematics? An Elementary Approach to Ideas and Methods. USA: Oxford University Press  

Courtland, R. (June, 2018). Bias Detectives: The Researchers Striving to Make Algorithms Fair, in Nature 558, no. 7710 (June 2018): 357–60. Retrieved on July 23, 2019 from https://doi.org/10.1038/d41586-018-05469-3

Crowder, J. A. et al. (2020). Artificial Psychology: Psychological Modeling and Testing of AI Systems. Springer

Crowther-Heyck, H. (1999). George A. Miller, language, and the computer metaphor of mind. History of Psychology, 2(1), 37–64. Retrieved on March 23, 2020 from https://psycnet.apa.org/doiLanding?doi=10.1037%2F1093-4510.2.1.37

Cydis, S. (2015). Authentic Instruction and Technology Literacy. In Journal of Learning Design 2015 Vol. 8 No.1 pp. 68 – 78. Online: Institute of Education Science (IES) & The Education Resources Information Center (ERIC), USA. Retrieved on March 25, 2020 from https://files.eric.ed.gov/fulltext/EJ1060125.pdf

Dalbey. J. (2003). Pseudocode Standard. Online California Polytechnic State University (Cal Poly); Cal Poly College of Engineering; Department of Computer Science and Software Engineering. Retrieved on February 21, 2020 from https://users.csc.calpoly.edu/~jdalbey/SWE/pdl_std.html

Davenport, T. H. (2018). The AI Advantage. How to Put the Artificial Intelligence Revolution to Work. Management on the Cutting Edge Series. Cambridge, MA: MIT Press. Information retrieved on May 5, 2020 from https://mitpress.mit.edu/books/ai-advantage  

Davis, M. (2018). The Universal Computer: the road from Leibniz to Turing. Boca Raton, FL: CRC Press, Taylor & Francis Group.

DeAngelis, S. F. (April 2014). Will 2014 be the Year you Fall in Love with Cognitive Computing? Online: WIRED. Retrieved November 22, 2019 from https://www.wired.com/insights/2014/04/will-2014-year-fall-love-cognitive-computing/

Dechter, R. (1986). Learning while Searching in Constraint-Satisfaction Problems. AAAI-86: Proceedings of the Fifth AAAI National Conference on Artificial Intelligence, August 1986. pp. 178–183 The first mentioning of “Deep Learning” on p. 180 Retrieved on April 15, 2020 from https://www.aaai.org/Papers/AAAI/1986/AAAI86-029.pdf

De Marchi, L. et al. (2019). Hands-on Neural Networks. Learn How to Build and Train Your First Neural Network Model Using Python. Birmingham & Mumbai: Packt Publishing.

Deng, J. et al. (2009). ImageNet: A Large-Scale Hierarchical Image Database. Online: Stanford Vision Lab, Stanford University & Princeton University Department of Computer Science. Retrieved April 7, 2020 from http://www.image-net.org/papers/imagenet_cvpr09.pdf

Desouza, K. (October 13, 2016). How can cognitive computing improve public services? Online Brookings Institute’s Techtank Retrieved November 22, 2019 from https://www.brookings.edu/blog/techtank/2016/10/13/how-can-cognitive-computing-improve-public-services/

Deisenroth, M. P. et al. (2020). Mathematics for Machine Learning. Online: Cambridge University Press. Retrieved on April 28, 2020 from https://mml-book.github.io/book/mml-book.pdf AND https://github.com/mml-book/mml-book.github.io  

Dignum, V. (2019). Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way. Cham: Springer Nature Switzerland AG. p. 3

Domingos, P. (2015). The Master Algorithm. How the Quest for the Ultimate Learning Machine will remake our World. Basic Books

Downey, A.B. Think Stats. Exploratory Data Analysis in Python. Version 2.0.38 Online: Needham, MA: Green Tea Press. Retrieved on March 9, 2020 from http://greenteapress.com/thinkstats2/thinkstats2.pdf 

Doyle, P. G. (2006). Grinstead and Snell’s Introduction to Probability. The CHANCE Project. Online: Dartmouth College. Retrieved on March 31, 2020 from https://math.dartmouth.edu/~prob/prob/prob.pdf

Dugger, W. E. Jr. et al (2003). Advancing Excellence in Technology Literacy. In Phi Delta Kappan, v85 n4 p316-20 Dec 2003 Retrieved on March 25, 2020 from https://eric.ed.gov/?q=Technology+LIteracy&ff1=subTechnological+Literacy&ff2=autDugger%2c+William+E.%2c+Jr.&pg=2

Du Sautoy, M. (2019). The Creativity Code. How AI is Learning to Write, Paint and Think. London: 4th Estate, HarperCollins Publishers.

Duda, R.O. (1973, 2000). Pattern Classification.

OECD. (2019). Algorithms and Collusion: Competition Policy in the Digital Age. http://www.oecd.org/daf/competition/Algorithms-and-colllusion-competition-policy-in-the-digital-age.pdf

Eisenstein, J. (November 13, 2018). Natural Language Processing. Online: Github. Retrieved on April 21, 2020 from https://github.com/jacobeisenstein/gt-nlp-class/blob/master/notes/eisenstein-nlp-notes.pdf   AND https://github.com/jacobeisenstein/gt-nlp-class/blob/master/notes/errata.md

Eliasmith, C. (2013). How to Build a Brain: A Neural Architecture for Biological Cognition. UK: Oxford University Press. 

Eliasmith, C. et al. (2003). Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. Cambridge, MA: The MIT Press.

European Parliament. (2016). Ethical Aspects of Cyber-Physical Systems. Scientific Foresight study. Retrieved June 5, 2019 from http://www.europarl.europa.eu/RegData/etudes/STUD/2016/563501/EPRS_STU%282016%29563501_EN.pdf

Ewing, J. (February 15, 2020). Should I take Calculus in High School? Online: Forbes Retrieved on March 31, 2020 from https://www.forbes.com/sites/johnewing/2020/02/15/should-i-take-calculus-in-high-school/#7360ae8a7625 

Fitch, W. T., & Friederici, A. D. (2012). Artificial grammar learning meets formal language theory: an overview. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1598), 1933–1955. doi:10.1098/rstb.2012.0103

Flach, P. ( ). Machine Learning: The Art and Science of Algorithms that Make Sense of Data

Gelman, A., et al. ( ). Bayesian Data Analysis.

Géron, A. (2017). Capsule Networks (CapsNets) – Tutorial (video). Retrieved on April 22, 2020 from https://www.bilibili.com/video/av17961595/  AND https://www.youtube.com/watch?v=pPN8d0E3900 

Géron, A. (February, 2018). Introducing capsule networks. How CapsNets can overcome some shortcomings of CNNs, including requiring less training data, preserving image details, and handling ambiguity. Online: O’Reilly Media. Retrieved on April 22, 2020 from https://www.oreilly.com/content/introducing-capsule-networks/

Géron, A. (2019). Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. USA: O’Reilly Media. Information retrieved on April 22, 2020 from https://github.com/ageron/handson-ml2 (CN title: 机器学习实战:基于Scikit-Learn和TensorFlow)

Gerrish, S. (2018). How Smart Machines Think. Cambridge, MA: The MIT Press. p. 18

Ghatak, A. (2019). Deep Learning with R. Singapore: Springer Nature.

Ginsparg, P., et al. (1991). arXiv e-Print Archive. Online: Cornell University & The Simons Foundation. “a free distribution service and an open archive for scholarly articles in the fields of physics, mathematics, computer science, quantitative biology, quantitative finance, statistics, electrical engineering and systems science, and economics.” Retrieved May 11, 2020 from https://arxiv.org/about AND  https://arxiv.org/about/people/leadership_team AND https://arxiv.org/about/ourmembers AND https://arxiv.org/

Gillings, M. R., et al. (2016). Information in the Biosphere: Biological and Digital Worlds. Online: University of California, Davis (UCD). Retrieved on March 25, 2020 from https://escholarship.org/uc/item/38f4b791

Gokani, J. (2017). Cognitive Computing: Augmenting Human Intelligence. Online: Stanford University; Stanford Management Science and Engineering; MS&E 238 Blog. Retrieved November 22, 2019 from https://www.datarobot.com/wiki/cognitive-computing/

Gonick, L. (1983). The Cartoon Guide to Computer Science. New York: Barnes & Noble.

Goldberg, Y. (2017). Neural Network Methods in Natural Language Processing (Synthesis Lectures on Human Language Technologies; Book 37). Morgan & Claypool Publishing. p.1

Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. in F. L. Alt and M. Rubinoff (eds.). (1966). Advances in Computers Vol.6: pp. 31–88.

Goodfellow, I., et al. (2016, 2017). Deep Learning. Cambridge, USA: The MIT Press. Retrieved on March 27, 2020 from https://www.deeplearningbook.org/ AND https://www.commonlounge.com/community/e531572c319542e8bc1a28658ef85cb1 AND https://www.youtube.com/channel/UCF9O8Vj-FEbRDA5DcDGz-Pg/videos . Additional information: https://search.bilibili.com/all?keyword=Ian%20Goodfellow Note: this book is said to be a good sequel to Skansi’s introduction.

Graham, A. C. (2003). Later Mohist Logic, Ethics and Science. Hong Kong: Chinese University Press.

Green, C. D. (2000). Dispelling the “Mystery” of Computational Cognitive Science. History of Psychology, 3(1), 62–66.

Greenhalgh, T. (2019). How to Read a Paper: The Basics of Evidence-based Medicine and Healthcare. [NOTE: while some attributes are not relevant to the field of AI, some items can be generalized to any field and its output of papers]. Some content of this book is available online. Retrieved on April 29, 2020 from https://www.bmj.com/about-bmj/resources-readers/publications/how-read-paper

Grimmett, G.; et al. (1992). Probability and Random Processes. Oxford Science Publications. Oxford: Oxford University Press

Grinstead, C. M.; Snell, J. L. (1997). Introduction to Probability. USA: American Mathematical Society (AMS). Online: Dartmouth. Retrieved on March 31, 2020 from https://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/amsbook.mac.pdf  AND Solutions to the Exercises retrieved from http://mathsdemo.cf.ac.uk/maths/resources/Probability_Answers.pdf 

Grishman, R. (1986). Computational Linguistics: An Introduction. (Studies in Natural Language Processing). UK: Cambridge University Press.

Gulli, A. et al. (2017). Deep Learning with Keras. Birmingham: Packt Publishing. Information retrieved on March 27, 2020 from https://github.com/PacktPublishing/Deep-Learning-with-Keras . This is said to be more advanced than Goodfellow’s (according to Skansi).

Gurumoorthy, S. et al. (2018). Cognitive Science and Artificial Intelligence: Advances and Applications. Springer

Guyon, I. et al. (2003). An Introduction to Variable and Feature Selection. Online: Journal of Machine Learning Research 3 (2003) 1157-1182. Retrieved April 28, 2020 from https://dl.acm.org/doi/10.5555/944919.944968

Harari, Y. N. (2015). Sapiens. A Brief History of Humankind. New York: HarperCollings Publisher

Harari, Y. N. (2017). Homo Deus: A Brief History of Tomorrow. New York: HarperCollings Publisher

Hardy, G.H. & Snow, C.P. (1941). A Mathematician’s Apology. London: Cambridge University Press

Hastie, T., et al. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Retrieved on April 21, 2020 from https://web.stanford.edu/~hastie/ElemStatLearn/  AND https://web.stanford.edu/~hastie/ElemStatLearn//printings/ESLII_print10.pdf

Haugeland, J. (Ed.). (1985). Artificial Intelligence: The Very Idea. Cambridge, MA: MIT Press.

Haykin, S. (2008). Neural Networks and Learning Machines. New York: Pearson Prentice Hall.

Hebb, D. O. (1949, 2002). The Organization of Behavior: A Neuropsychological Theory

Heffernan, M. (2020). Uncharted: How to Map the Future Together. Simon & Schuster

Hefferon, J. Linear Algebra. http://joshua.smcvt.edu/linearalgebra/book.pdf AND http://joshua.smcvt.edu/linearalgebra/#current_version (teaching slides, answers to exercises, etc.)

Houdé, O., et al (Ed.). (2004). Dictionary of cognitive science; neuroscience, psychology, artificial intelligence, linguistics, and philosophy. New York and Hove: Psychology Press; Taylor & Francis Group.

Hubbard, J. H. et al. (2009). Vector Calculus, Linear Algebra, and Differential Forms A Unified Approach. Matrix Editions

Hutchins, J. (2014). Publications on the History of Machine Translation. Online. Retrieved on April 9, 2020 from http://www.hutchinsweb.me.uk/ and http://www.hutchinsweb.me.uk/history.htm and http://www.hutchinsweb.me.uk/MTNI-14-1996.pdf  

Hutchins, J. (2017). Machine Translation Archive. Retrieved on April 9, 2020 from http://www.mt-archive.info/

Hutter, F. et al. (2019). Automated Machine Learning. Methods, Systems, Challenges. Retrieved on April 21, 2020 from https://link.springer.com/book/10.1007/978-3-030-05318-5 AND https://link.springer.com/content/pdf/10.1007%2F978-3-030-05318-5.pdf AND https://www.automl.org/book/

Huyen, C. (?). Machine Learning Interviews. Machine Learning Systems Design. Online: Github. Retrieved on April 21, 2020 from https://github.com/chiphuyen/machine-learning-systems-design/blob/master/build/build1/consolidated.pdf AND https://github.com/chiphuyen/machine-learning-systems-design

IEEE. (2019). Ethically Aligned Design. A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems. Retrieved June 4, 2019 from https://standards.ieee.org/news/2019/ieee-ead1e.html

James, G. et al. (2014). An Introduction to Statistical Learning with Applications in R (ISLR). Online: Springer. Retrieved on April 28, 2020 from https://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf   AND https://faculty.marshall.usc.edu/gareth-james/ISL/  AND https://www.alsharif.info/iom530  AND https://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos/ AND https://cran.r-project.org/web/packages/ISLR/index.html

Jaynes, E.T. ( ). Probability Theory: The Logic of Science. + see Aubrey Clayton’s lectures based on this book.

Jerison, D. (2006, 2010). 18.01 SC Single Variable Calculus. Fall 2010. Massachusetts Institute of Technology: MIT OpenCourseWare, https://ocw.mit.edu . License: Creative Commons BY-NC-SA. Retrieved on March 31, 2020 from https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/# 

Johnson, M. (March, 2009). How the Statistical Revolution Changes (Computational) Linguistics. Online: (US) Association for Computational Linguistics & ACM Digital Library. Retrieved on February 21, 2020 from https://dl.acm.org/doi/10.5555/1642038.1642041 

Joyce, J. (1999). The Foundations of Causal Decision Theory. New York: Cambridge University Press.

Joyce, J. (2003, Spring 2019). Bayes Theorem. Online: Stanford Encyclopedia of Philosophy. Retrieved on March 13, 2020 from https://plato.stanford.edu/archives/spr2019/entries/bayes-theorem/

Kalman, R. E. (2005). Control Theory (mathematics). Online: Encyclopædia Britannica. Retrieved on March 30, 2020 from https://www.britannica.com/science/control-theory-mathematics

Kapoor, A. (2019). Hands-on Artificial Intelligence for IoT. Packt Publishing.

Kasabov, K. (2019). Time-Space, Spiking-Neural Networks and Brain-inspired Artificial Intelligence. Germany: Springer-Verlag.

Kasparov, G. (March 25, 1996). The Day I Sensed a New Kind of Intelligence. Online: Time Retrieved February 14, 2020 from http://content.time.com/time/subscriber/article/0,33009,984305-1,00.html

Khan, S., et al. (?). High School Statistics. Online: Khan Academy. Retrieved on March 31, 2020 from https://www.khanacademy.org/math/probability

King, B. (2016). Augmented Life in the Smart Lane. Singapore: Marshall Cavendish International.

Kline, R. R. (2015). The Cybernetics Moment: Or Why We Call Our Age the Information Age. New Studies in American Intellectual and Cultural History Series. USA: Johns Hopkins University Press.

Knight, W. (Apr 11, 2017). The Dark Secret at the Heart of AI. MIT Technology Review, May/June 2017. Retrieved on July 23rd, 2019 from https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/?set=604193

Krohn, J. et al. (2019). Deep Learning Illustrated. A Visual Interactive Guide to Artificial Intelligence. Addison Wesley Data Analytics Series. London: Addison Wesley, Pearson Education.

Kruschke, J. ( ). Doing Bayesian Data Analysis .

Kunin, D. et al (?). Seeing Theory (making “…statistics more accessible through interactive visualizations”). Online: Retrieved on March 13, 2020 from Brown University at https://seeing-theory.brown.edu/index.html#3rdPage and in Chinese from https://seeing-theory.brown.edu/cn.html#3rdPage  

Kurt, W. (2019). Bayesian Statistics the Fun Way: Understanding Statistics and Probability with Star Wars, Lego and Rubber Ducks. San Francisco: No Starch Press. Also see: https://www.countbayesie.com

Kurzweil, R. ( ). How to Create a Mind: The Secret of Human Thought Revealed.

Lane, D. (2017). Machine Learning for Kids. Online. Information retrieved on April 28, 2020 from https://machinelearningforkids.co.uk/

Lang, S. (2002). Algebra. Springer

Lecun, Y. (?). LeNet-5, Convolutional Neural Networks. Retrieved on April 15, 2020 from http://yann.lecun.com/exdb/lenet/

Lee, J. A. N. (1995, 2019). Computer Pioneers. Online: IEEE Computer Society and the Institute of Electrical and Electronics Engineers Inc. Retrieved April 9, 2020 from  https://history.computer.org/pioneers/index.html

Lee, K. (2019). AI Superpowers: China, Silicon Valley and The New World Order.  New York: Houghton Mifflin Harcourt

Lighthill, Sir J. (1972). Lighthill Report: Artificial Intelligence: A General Survey. Retrieved on April 9, 2020 from http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/p001.htm  and https://pdfs.semanticscholar.org/b586/d050caa00a827fd2b318742dc80a304a3675.pdf  and http://www.aiai.ed.ac.uk/events/lighthill1973/

Lin, P. et al. (2011). Robot Ethics: The Ethical and Social Implications of Robotics (Intelligent Robotics and Autonomous Agents series). Cambridge, MA: The MIT Press

Lifschitz, V. (2009). Lecture Notes on Mathematical Logic. (see Boole). Online: University of Texas at Austin; Computer Science. Retrieved on March 25, 2020 from https://www.cs.utexas.edu/users/vl/teaching/388Lnotes.pdf

Luke, S. (October 2015). Essentials of Metaheuristics. Online Version 2.2. Online: George Mason University. Retrieved on March 9, 2020 from https://cs.gmu.edu/~sean/book/metaheuristics/ AND https://cs.gmu.edu/~sean/book/metaheuristics/Essentials.pdf   

Liè Yǔkòu (列圄寇 / 列禦寇). (5th Century BCE). 列子 (Lièzǐ). Retrieved on March 5, 2020 from https://www.gutenberg.org/cache/epub/7341/pg7341-images.html  and 卷第五 湯問篇 from https://chinesenotes.com/liezi/liezi005.html  and an English translation (not the latest) from  https://archive.org/details/taoistteachings00liehuoft/page/n6/mode/2up 

MacKay, D.J.C. (2008). Sustainable Energy – without the hot air. Online: UIT Cambridge. Retrieved on March 20, 2020 from www.withouthotair.com

Maini, V., et al. (Aug 19, 2017). Machine Learning for Humans. Online: Medium.com. Retrieved November 2019 from the e-Book at https://www.dropbox.com/s/e38nil1dnl7481q/machine_learning.pdf?dl=0 or https://medium.com/machine-learning-for-humans/why-machine-learning-matters-6164faf1df12

Manning, C. D. et al.  (2014). The Stanford CoreNLP Natural Language Processing Toolkit In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pp. 55-60. Online: https://stanfordnlp.github.io/CoreNLP/ AND https://nlp.stanford.edu/pubs/StanfordCoreNlp2014.pdf AND https://nlp.stanford.edu/software/

Manyika, J. et al (2019). The Coming of AI Spring. Online: McKinsey Global Institute. Retrieved on April 9, 2020 from https://www.mckinsey.com/mgi/overview/in-the-news/the-coming-of-ai-spring or https://www.project-syndicate.org/commentary/artificial-intelligence-spring-is-coming-by-james-manyika-and-jacques-bughin-2019-10?barrier=accesspaylog

Marcus, G. (Feb, 2020). The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence. Online: arXiv e-Print Archive; Cornell University.  Retrieved on May 11, 2020 from https://arxiv.org/abs/2002.06177  

Markov, A. A. (January 23, 1913). An Example of Statistical Investigation of the Text Eugene Onegin Concerning the Connection of Samples in Chains. Lecture at the physical-mathematical faculty, Royal Academy of Sciences, St. Petersburg, Russia. In (2006, 2007). Science in Context 19(4), 591-600. UK: Cambridge University Press.  Information retrieved on March 31, 2020 from https://www.cambridge.org/core/journals/science-in-context/article/an-example-of-statistical-investigation-of-the-text-eugene-onegin-concerning-the-connection-of-samples-in-chains/EA1E005FA0BC4522399A4E9DA0304862

Marsland, S. (2015). Machine Learning. An Algorithmic Perspective. Boca Raton, FL, USA: CRC Press.

Martinez, E. (2019). History of AI. SNARC. Retrieved on April 14, 2020 from https://historyof.ai/snarc/ 

Maynard Smith, J. et al. (1995). The Major Transitions in Evolution. Oxford, England: Oxford University Press

McCarthy, J. (1959). Programs with Common Sense. Online: Stanford. Retrieved on April 7, 2020 from http://www-formal.stanford.edu/jmc/mcc59.pdf

McCarthy, J. (1996). Some Expert Systems need Common Sense. Online: Stanford University, Computer Science Department. Retrieved on April 7, 2020 from   http://www-formal.stanford.edu/jmc/someneed/someneed.html 

McCarthy, J. (2007). What is AI? Retrieved on December 5th, 2019 from http://www-formal.stanford.edu/jmc/whatisai/node1.html 

McCorduck, P. (2004). Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence. Natick: A K Peters, Ltd

McCulloch, W. & Pitts, W. (1943; reprint: 1990). A Logical Calculus of the Ideas Immanent in Nervous Activity. Bulletin of Mathematical Biophysics, Vol. 5, pp.115-133. Retrieved online on February 20, 2020 from  https://www.cs.cmu.edu/~./epxing/Class/10715/reading/McCulloch.and.Pitts.pdf   

McKenna, A (September 26, 2013). Al-Jazarī Arab inventor. In The Editors of Encyclopaedia Britannica. Online: Encyclopaedia Britannica Retrieved on March 25, 2020 from https://www.britannica.com/biography/al-Jazari 

Mead, C. (1998). Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley. Information retrieved on April 8, 2020 from https://dl.acm.org/doi/book/10.5555/64998

Meery, B. (2009). Probability and Statistics (Basic). FlexBook.  Online: CK-12 Foundation. Retrieved on March 31, 2020 from  http://cafreetextbooks.ck12.org/math/CK12_Prob_Stat_Basic.pdf 

Minsky, M. and Papert, S.A. (1969, 1987). Perceptrons: An Introduction to Computational Geometry. Cambridge, MA: The MIT Press

Minsky, M. and Papert, S.A. (1971). Artificial Intelligence Progress Report. Boston, MA: MIT Artificial Intelligence Laboratory. Memo No. 252.  pp. 32 -34 Retrieved on April 9, 2020 from https://web.media.mit.edu/~minsky/papers/PR1971.html or  http://bitsavers.trailing-edge.com/pdf/mit/ai/aim/AIM-252.pdf

Minsky, M. (1988). The Society of Mind. New York, NY: Simon and Schuster.

Minsky, M. (1991). Logical Versus Analogical or Symbolic Versus Connectionist or Neat versus Scruffy. In AI Magazine Vol.12 Number. 2 (1991). AAAI. Retrieved April 21, 2020 from https://web.mit.edu/6.034/www/6.s966/Minsky-NeatVsScruffy.pdf

Minsky, M. (2006). The Emotion Machine. Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. New York: Simon & Schuster.

Minsky, M. (2011). Building my randomly wired neural network machine. (audio-video file). Online: Web of Stories   Retrieved on April 14, 2020 from https://www.webofstories.com/play/marvin.minsky/136;jsessionid=E0C48D4B3D9635BA883747C9A925B064

Mitkov, R. (2005). The Oxford Handbook of Computational Linguistics. (Oxford Handbooks). UK: Oxford University Press

Mohri, M. et al. (2018). Foundations of Machine Learning. Online: Cambridge, MA: The MIT Press. Retrieved on April 21, 2020 from https://cs.nyu.edu/~mohri/mlbook/

Moitra, A. (2014). Algorithmic Aspects of Machine Learning. Online: MIT Retrieved on April 28, 2020 from https://people.csail.mit.edu/moitra/docs/bookex.pdf

Montavon, G. et al. (2012). Neural Networks: Tricks of the Trade. New York: Springer. Retrieved on March 27, 2020 from https://link.springer.com/book/10.1007/978-3-642-35289-8 AND https://machinelearningmastery.com/neural-networks-tricks-of-the-trade-review/ . This publication is considered to be very advanced compared to Gulli, Goodfellow or Skansi (according to Skansi).

Mueller, J. P. et al. (2019). Deep Learning for Dummies. Hoboken, NJ: Wiley p. 133

Munro, R. et al. (2012). Tracking Epidemics with Natural Language Processing and Crowdsourcing. Online: Association for the Advancement of Artificial Intelligence. Retrieved on April 21, 2020 from http://www.robertmunro.com/research/munro12epidemics.pdf  

Murphy, K. P.  (2012). Machine Learning: A Probabilistic Perspective. In the Adaptive Computation and Machine Learning Series. Cambridge, MA: The MIT Press. Information retrieved on April 21, 2020 from https://www.cs.ubc.ca/~murphyk/MLbook/

Needham, J. (1991). Science and Civilisation in China: Volume 2, History of Scientific Thought. Cambridge: Cambridge University.

Newell, A et al. (1956). The Logic Theory Machine. A Complex Information Processing System. Retrieved April 15, 2020 from http://shelf1.library.cmu.edu/IMLS/MindModels/logictheorymachine.pdf

Nielsen, M. (2019). Neural Networks and Deep Learning. Online: Determination Press. Retrieved on April 24, 2020 from http://neuralnetworksanddeeplearning.com/ AND https://github.com/mnielsen/neural-networks-and-deep-learning AND http://michaelnielsen.org/

Nilsson, N. J. (2013). The Quest for Artificial Intelligence: A History of Ideas and Achievements. Cambridge University Press

Norris, J. (1997). Markov Chains (Cambridge Series in Statistical and Probabilistic Mathematics). Cambridge: Cambridge University Press. Information retrieved on March 31, 2020 from https://www.cambridge.org/core/books/markov-chains/A3F966B10633A32C8F06F37158031739 AND http://www.statslab.cam.ac.uk/~james/Markov/ AND  http://www.statslab.cam.ac.uk/~rrw1/markov/   http://www.statslab.cam.ac.uk/~rrw1/markov/M.pdf AND https://books.google.com.hk/books/about/Markov_Chains.html?id=qM65VRmOJZAC&redir_esc=y

OECD. (2019). Artificial Intelligence in Society. Retrieved on June 3, 2019 from http://www.oecd.org/going-digital/artificial-intelligence-in-society-eedfee77-en.htm

Olazaran, M. (1996). A Sociological Study of the Official History of the Perceptrons Controversy. Social Studies of Science, 26(3), 611–659. London: Sage Publications

Olhede, S., & Wolfe, P. (2018). The AI spring of 2018. Significance, 15(3), 6–7. Retrieved on April 9, 2020 from https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2018.01140.x 

Orland, P. (2020). Math for Programmers. Online: Manning Publications. Retrieved on April 28, 2020 from https://www.manning.com/books/math-for-programmers   

Paisley, J. (2016). Course Notes for Bayesian Models for Machine Learning. Online: Columbia University; Department of Electrical Engineering. Retrieved on April 21, 2020 from http://www.columbia.edu/~jwp2128/Teaching/E6720/BayesianModelsMachineLearning2016.pdf

Parada, C. (Dec 10, 1993). Genealogical Guide to Greek Mythology. Studies in Mediterranean Archaeology, Vol 107. Coronet Books.

Petersen, K.B & Pedersen, M.S. (November 15, 2012). The Matrix Cookbook. Online Retrieved from http://matrixcookbook.com  and https://www2.imm.dtu.dk/pubdb/views/edoc_download.php/3274/pdf/imm3274.pdf

Piatetsky-Shapiro, G. et al. (2020). KDnuggets. Knowledge Discovery Nuggets is a site on AI, Analytics, Big Data, Data Mining, Data Science, and Machine Learning. Here a selection for beginners: https://www.kdnuggets.com/tag/beginners

Pierce, J. R. et al. (1966). Language and Machines: Computers in Translation and Linguistics. Washington D. C.: The Automatic Language Processing Advisory Committee (ALPAC). Retrieved on April 9, 2020 from The National Academies of Sciences, Engineering and Medicine at   https://www.nap.edu/read/9547/chapter/1 , alternatively: http://www.mt-archive.info/ALPAC-1966.pdf

Pinker, S. (2018). Enlightenment Now: A Manifesto for Science, Reason, Humanism, and Progress. New York: Allen Lane

Polson, N. and James Scott. (2018). AIQ. How People and Machines Are Smarter Together. St. Martin’s Press.

Popper, K. (1959, 2011). The Logic of Scientific Discovery. Taylor and Francis

Poo, Mu-ming. (November 2, 2016). China Brain Project: Basic Neuroscience, Brain Diseases, and Brain-Inspired Computing. Neuron 92, NeuroView, pp. 591-596.  Online: Elsevier Inc. Retrieved on February 25, 2020 from https://www.cell.com/neuron/pdf/S0896-6273(16)30800-5.pdf  .

Poole, D., Mackworth, A. K., and Goebel, R. (1998). Computational intelligence: A logical approach. Oxford University Press

Qi, F. and Wu, W. (1 June 2019). Human-like Machine Thinking: Language Guided Imagination. Online: arXiv e-Print Archive; Cornell University; Retrieved December 10, 2019 from https://arxiv.org/abs/1905.07562v2

Roche, (2003). Introducing Vectors. Online Retrieved on April 9, 2020 from http://www.marco-learningsystems.com/pages/roche/introvectors.htm

Rashid, T.  (2016). Make Your Own Neural Network.  CreateSpace Independent Publishing Platform information retrieved on April 2, 2020 from http://makeyourownneuralnetwork.blogspot.com/

Rungta, K. (2018). TensorFlow in 1 Day: Make Your Own Neural Network

Raschka, S. ( ). Python Machine Learning. Packt Information retrieved on March 27, 2020 from https://github.com/rasbt/python-machine-learning-book

Rasmussen, C. E. et al. (2006). Gaussian Processes for Machine Learning. Online: The MIT Press. Retrieved on April 28, 2020 from http://www.gaussianprocess.org/gpml/ AND http://www.gaussianprocess.org/gpml/chapters/ AND http://www.gaussianprocess.org/gpml/chapters/RW.pdf

Reese, B. (2018). The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. New York: Atria Books

Rezzoug, N. et al. (2006). Robotic Grasping: A Generic Neural Network Architecture. In Aleksandar Lazinica (ed.) (2006). Mobile Robots Towards New Applications.  pp. 784, ARS/plV. London & Online: IntechOpen. Retrieved on April 29, 2020 from https://cdn.intechopen.com/pdfs/51/InTech-Robotic_grasping_a_generic_neural_network_architecture.pdf  AND https://www.intechopen.com/books/mobile_robots_towards_new_applications

Robinson, John Alan (January 1965). A Machine-Oriented Logic Based on the Resolution Principle. J. ACM. 12 (1): 23–41 Retrieved on March 24, 2020 from https://dl.acm.org/doi/10.1145/321250.321253  and https://web.stanford.edu/class/linguist289/robinson65.pdf   

Roggio, R. (2015). Pseudocode Examples. Online: University of North Florida. Retrieved on February 21, 2020 from https://www.unf.edu/~broggio/cop2221/2221pseu.htm      

Rosasco, L. (2017). Introductory Machine Learning Notes. Online: MIT. Retrieved on April 28, 2020 from http://lcsl.mit.edu/courses/ml/1718/MLNotes.pdf

Rosenblatt, F. (January, 1957). The Perceptron. A Perceiving and Recognizing Automaton. Report No. 85-460-1. Buffalo (NY): Cornell Aeronautical Laboratory, Inc. Retrieved on January 17, 2020 from https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf 

Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J. (9 October 1986). “Learning representations by back-propagating errors”. Nature. 323 (6088): 533–536

Russell, S. and Peter Norvig. (2016). Artificial Intelligence: A Modern Approach. Third Edition. Essex: Pearson Education.

Russell, S. (2019). Human Compatible. Artificial Intelligence and the Problem of Control. Viking

Sabbatini, R. (Feb 2003). Neurons and Synapses. The History of Its Discovery. IV. The Discovery of the Synapse. Online: cerebromente.org. Retrieved on April 23, 2020 from http://cerebromente.org.br/n17/history/neurons4_i.htm 

Sabour, S. et al. (2017). Dynamic Routing Between Capsules. Online: arXiv e-Print Archive; Cornell University; Retrieved on April 22, 2020 from https://arxiv.org/pdf/1710.09829.pdf

Samuel, A.L. (1959, 1967, 2000). Some Studies in Machine Learning Using the Game of Checkers. Online: IBM Journal of Research and Development, 44(1.2), 206–226. doi:10.1147/rd.441.0206 Retrieved February 18, 2020 from https://dl.acm.org/doi/10.1147/rd.33.0210 and  https://www.sciencedirect.com/science/article/abs/pii/0066413869900044 and https://researcher.watson.ibm.com/researcher/files/us-beygel/samuel-checkers.pdf

Sanderson, G.  (? Post-2016).  3BLUE1BROWN SERIES. But what is a Neural Network? | Deep learning, chapter 1.  S3 • E1 (Video). Online. Retrieved on April 22, 2020 from https://www.bilibili.com/video/BV12t41157gx?from=search&seid=15254673027813667063  AND https://www.youtube.com/watch?v=aircAruvnKk Information Retrieved from https://www.3blue1brown.com/about

Sarkar, D. (2019). Text Analytics with Python. A Practitioner’s Guide to Natural Language Processing. Bangalore: Apress.

Shalev-Shwartz, S. et al. (2014). Understanding Machine Learning: From Theory to Algorithms. Cambridge:  Cambridge University Press Information retrieved on April 24, 2020 from https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/ AND https://www.cse.huji.ac.il/~shais/UnderstandingMachineLearning/understanding-machine-learning-theory-algorithms.pdf

Shashta,  et al. (2019). Application of Reinforcement Learning to a Robotic Drinking Assistant. In Robotics. Special Issue “Reinforcement Learning for Robotics Applications” Robotics 2020, 9(1), 1; Online: MDPI Publishing. Retrieved on April 29, 2020 from https://www.mdpi.com/journal/robotics  AND https://www.mdpi.com/2218-6581/9/1/1/htm  AND https://www.mdpi.com/2218-6581/9/1/1/pdf

Schrittwieser, J. et al. (2020). Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model. Online: arXiv e-Print Archive; Cornell University; Retrieved on April 1, 2020 from  https://arxiv.org/abs/1911.08265

Schultz, W. (2015). Neuronal Reward and Decision Signals: From Theories to Data. Physiological Reviews, 95(3), 853–951. Retrieved on March 27, 2020 from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4491543/

Sexton, C. (2020). CK-12 Interactive Algebra 1 for CCSS. Online: CK-12 retrieved on March 31, 2020 from https://flexbooks.ck12.org/cbook/ck-12-interactive-algebra-1-for-ccss/

Shanahan, M. (2015). The Technological Singularity. The MIT Essential Knowledge Series. Cambridge, MA: The MIT Press.

Simeone, O. (2018). A Brief Introduction to Machine Learning for Engineers. Online: King’s College London, UK; Department of Informatics. Retrieved on April 21, 2020 from Online: arXiv e-Print Archive; Cornell University;   https://arxiv.org/pdf/1709.02840.pdf  

Simon, H. A. (1996). The Sciences of the Artificial. Cambridge, MA: The MIT Press

Skansi, S. (2020). Guide to Deep Learning Basics: Logical, Historical and Philosophical Perspectives. Switzerland: Springer Nature.

Skansi, S. (2018). Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence. In Mackie, I. et al. (2018). Undergraduate Topics in Computer Science Series (UTiCS). Switzerland: Springer. Information retrieved on March 26, 2020 from http://www.springer.com/series/7592 AND https://github.com/skansi/dl_book

Spacey, J. (2016, March 30). 33 Types of Artificial Intelligence. Retrieved from https://simplicable.com/new/types-of-artificial-intelligence  on February 10, 2020

Spice, B. (April 11, 2017). Carnegie Mellon Artificial Intelligence Beats Chinese Poker Players. Online: Carnegie Mellon University. Retrieved January 7, 2020 from https://www.cmu.edu/news/stories/archives/2017/april/ai-beats-chinese.html   

Spielkamp, M. (June 12, 2017). “We need to shine more light on algorithms so they can help reduce bias, not perpetuate It.” MIT Technology Review. Retrieved on July 23, 2019 from https://www.technologyreview.com/s/607955/inspecting-algorithms-for-bias/

Spong, M. et al. (2019). CK-12 Precalculus Concepts Flexbook 2.0. Online: CK-12. Retrieved on March 31, 2020 from https://flexbooks.ck12.org/cbook/ck-12-precalculus-concepts-2.0/ and more at https://www.ck12.org/fbbrowse/list/?Subject=Calculus&Language=All%20Languages&Grade=All%20Grades

Statistics basics and beyond can be studied via these resources retrieved on March 31, 2020 from https://www.ck12.org/fbbrowse/list?Grade=All%20Grades&Language=All%20Languages&Subject=Statistics

Stevens (Stevenson), E. et al (2019). Deep Learning with PyTorch. Essential Excerpts. Online: Manning Publications. Retrieved on April 21, 2020 from https://pytorch.org/assets/deep-learning/Deep-Learning-with-PyTorch.pdf AND https://pytorch.org/deep-learning-with-pytorch

Stone, P. (Chair) et al. (2016). Artificial Intelligence and Life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016. Stanford, CA: Study Panel, Stanford University. Retrieved on March 23, 2020 from https://ai100.stanford.edu/sites/g/files/sbiybj9861/f/ai_100_report_0831fnl.pdf 

Strang, G. (Fall 1999). Linear Algebra. Video Lectures (MIT OpenCourseWare). Online: MIT Center for Advanced Educational Services. Retrieved on March 9, 2020 from https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/   AND https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/

Strang, G. (2016). Introduction to Linear Algebra. (Fifth Edition). Cambridge MA, USA: Wellesley-Cambridge & The MIT Press. Information retrieved on April 24, 2020 from https://math.mit.edu/~gs/linearalgebra/ AND https://math.mit.edu/~gs/  

Strogatz, S. ( ). Infinite Powers: How Calculus Reveals the Secrets of the Universe

Sun F., Liu, H., Hu, D.  (eds). (2019). Cognitive Systems and Signal Processing: 4th International Conference, ICCSIP 2018, Beijing, China, November 29 – December 1, 2018, Revised Selected Papers, Part 1 & Part 2. Singapore: Springer

Sutton, R. S. and Barto, A. G. (2018). Reinforcement Learning: An Introduction. Cambridge, MA: A Bradford Book, The MIT Press. Retrieved on March 26, 2020 from https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf  AND http://www.incompleteideas.net/book/RLbook2018.pdf

Tegmark, M. (?). Benefits & Risks of Artificial Intelligence. Online: Future of Life Institute. Retrieved on May 5, 2020 from https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/?cn-reloaded=1 AND in Chinese 中文: https://futureoflife.org/background/benefits-risks-artificial-intelligence-chinese/  

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Alfred A. Knopf

Thagard, Paul, (Spring 2019 Edition). Cognitive Science. In Edward N. Zalta (ed.). The Stanford Encyclopedia of Philosophy. Online: Stanford University. Retrieved on March 23, 2020 from https://plato.stanford.edu/archives/spr2019/entries/cognitive-science/

Trask, A. W. (2019). Grokking Deep Learning. USA: Manning Publications Co.

Tsakiris, M. et al. (2018). The Interoceptive Mind: From Homeostasis to Awareness. USA: Oxford University Press

Turing, A. (1948). Intelligent Machinery. http://www.turingarchive.org/viewer/?id=127&title=1 and https://weightagnostic.github.io/papers/turing1948.pdf also see: Copeland, J. (2004). The Essential Turing. Oxford: Clarendon Press. pp. 411-432

Turing, A.M. (1950). Computing Machinery and Intelligence. Mind LIX(236): 433–460. Retrieved November 13, 2019 from http://cogprints.org/499/1/turing.html and https://www.csee.umbc.edu/courses/471/papers/turing.pdf

UN. (16 May, 2018). 68% of the World Population projected to live in urban areas by 2050, says UN. Online: UN DESA. Retrieved on November 28, 2019 from  https://www.un.org/development/desa/en/news/population/2018-revision-of-world-urbanization-prospects.html

Vapnik, V. ( ). The Nature of Statistical Learning Theory.

Various authors. (2016, 2018). Wikijunior. Programming for Kids. Writing your own Algorithms. Online: Wikibooks Retrieved on April 7, 2020 from https://en.wikibooks.org/wiki/Wikijunior:Programming_for_Kids/Writing_Your_Algorithms

Vincent, J. (February 28, 2018). A Video game-playing AI beat Q*bert in a way no one’s ever seen before. Online: The Verge. Retrieved February 14, 2020 from https://www.theverge.com/tldr/2018/2/28/17062338/ai-agent-atari-q-bert-cracked-bug-cheat

Wang, Y., Kosinski, M. (2017, September 7). Deep neural networks are more accurate than humans at detecting sexual orientation from facial images. https://doi.org/10.31234/osf.io/hv28a

West, S.M., Whittaker, M. and Crawford, K. (2019). Discriminating Systems: Gender, Race and Power in AI. AI Now Institute. Retrieved from https://ainowinstitute.org/discriminatingsystems.html.

Wiener, N. (1961). Cybernetics: Or Control and Communication in the Animal and the Machine. Cambridge, MA: The MIT Press

Willman, M. (Nov 6, 2018). Logic and Language in Early Chinese Philosophy. In Edward N. Zalta (ed.). The Stanford Encyclopedia of Philosophy. Online: Metaphysics Research Lab, Stanford University. Retrieved March 5, 2020 from https://plato.stanford.edu/entries/chinese-logic-language/#DaoiReplCritLogi

Winn, J. et al. (2019). Model-Based Machine Learning (Early Access Copy). Retrieved on April 21, 2020 from http://mbmlbook.com/MBMLbook.pdf AND http://mbmlbook.com/

Winograd, T. (1972). Understanding Natural Language. In Cognitive Psychology, Volume 3, Issue 1, January 1972, pp. 1-191. Boston: MIT; Online: Elsevier. Retrieved on March 25, 2020 from https://www.sciencedirect.com/science/article/abs/pii/0010028572900023

Winston, P. H. (1992). Artificial Intelligence (Third edition). Reading, MA: Addison-Wesley.

World Economic Forum. (2019). AI Governance. A Holistic Approach to Implement Ethics into AI. Retrieved on June 3, 2019 from https://www.weforum.org/whitepapers/ai-governance-a-holistic-approach-to-implement-ethics-into-ai

Zeng, Y., Lu, E. and Huangfu, C. (12 Dec. 2018). Linking Artificial Intelligence Principles. Online: arXiv e-Print Archive; Cornell University. Retrieved December 7, 2019 from https://arxiv.org/ftp/arxiv/papers/1812/1812.04814.pdf

Zhang, A. et al. (April 20, 2020). Dive into Deep Learning. Retrieved on April 21, 2020 from https://d2l.ai/d2l-en.pdf AND https://d2l.ai/chapter_preliminaries/index.html AND https://github.com/d2l-ai/d2l-en AND Chinese 中文: https://zh.d2l.ai/

Zhāng, Z. (张 朝 阳). (November 2005). Allegories in 'The Book of Master Lie' and the Ancient Robots. Online: Journal of Heilongjiang College of Education, Vol. 24, No. 6. Retrieved March 5, 2020 from https://wenku.baidu.com/view/b178f219f18583d049645952.html

Zimbardo, P., et al. (2008). Psychologie. München, Germany: Pearson Education.

Additionally, an incomplete list of online data sets:

Data sets

Data set location and subject area
http://cricsheet.org/downloads/  
http://usfundamentals.com/download  
http://www.sports-reference.com/  
http://yann.lecun.com/exdb/mnist/  
  https://data.world/  
https://dev.twitter.com/streaming/overview  
https://stocktwits.com/developers/docs  
https://www.ehdp.com/vitalnet/datasets.htm  
https://www.quandl.com/data/FRED/documentation/documentation  
https://www.quandl.com/data/WIKI/documentation/bulk-download  
https://www.quandl.com/open-data  
https://www.quantopian.com/data  
https://github.com/awesomedata/awesome-public-datasets Agriculture, biology, climate, weather, earth sciences, economics, education, energy, finance, healthcare, …
http://datamarket.azure.com/browse/data?price=free Agriculture, weather and so on
https://openlibrary.org/developers/dumps Book library data sets
137.189.35.203/WebUI/CatDatabase/catData.html   Cat images
https://archive.org/details/2015_reddit_comments_corpus Chat data set
https://developer.twitter.com/en/docs/tweets/search/overview Chats-related content data
https://developers.facebook.com/products/instagram/ Chats-related content data
https://dataportals.org/search Civil, country, etc.
  http://gcmd.nasa.gov/ earth sciences and environmental sciences
https://nces.ed.gov/ Education data sets; education demographics in the USA and the world
http://www.mlopt.com/?p=6598 Electric Vehicle recharge points dataset from Northern Ireland and Republic of Ireland
http://www.cs.cmu.edu/~enron/      Email text as data set
http://open-data.europa.eu/en/data/ EU civil data sets
https://data.europa.eu/euodp/en/home EU civil, social
https://opendatamonitor.eu/frontend/web/index.php?r=dashboard%2Findex European data sets
http://vis-www.cs.umass.edu/lfw/ Faces data set
http://www.imdb.com/interfaces/ Film data
http://www.bfi.org.uk/education-research/film-industry-statistics-research Film data (UK)
https://markets.ft.com/data/ finance
https://www.imf.org/en/Data finances, debt rates, foreign exchange
https://cmu-perceptual-computing-lab.github.io/foot_keypoint_dataset/ Foot Keypoint Dataset (Carnegie Mellon University)
https://opencorporates.com/ Global database of companies
https://trends.google.com/trends/ Global internet searches
https://comtrade.un.org/ Global trade
http://yann.lecun.com/exdb/mnist/ Handwritten digits (MNIST): a training set of 60,000 examples and a test set of 10,000 examples
https://www.data.gov/health/ Health (USA)
http://data.nhm.ac.uk/ historical specimens in the London museum
https://www.glassdoor.com/developer/index.htm Human resource related data sets
https://archive.org/details/audio-covers image processing research data set
https://archive.ics.uci.edu/ml/datasets/Iris Iris flowers data set. This is claimed to be one of the better freely available classification sets and is used in beginner projects on machine learning and classification (see the short loading sketch after this list).
https://www.bjs.gov/index.cfm?ty=dca law enforcement in the USA
http://www.londonair.org.uk/london/asp/datadownload.asp London air quality data
http://archive.ics.uci.edu/ml/datasets.php Machine Learning
  http://archive.ics.uci.edu/ml/ Machine Learning data sets
http://mldata.org/ machine learning datasets for training systems
http://www.msmarco.org/ machine learning datasets for training systems in reading comprehension and question answering
https://aws.amazon.com/datasets/million-song-dataset/ Music data set
  https://opendata.cityofnewyork.us/ New York City data sets
https://data.worldbank.org/data-catalog/health-nutrition-and-population-statistics Nutrition, health, population
https://go.developer.ebay.com/ebay-marketplace-insights Online sales datasets (mainly USA)
http://opendata.cern.ch/ particle physics experiments data
https://exoplanetarchive.ipac.caltech.edu/ planets and stars
https://data.worldbank.org/ population demographics, economics
https://www.qlik.com/us/products/qlik-data-market population, currencies
https://deepmind.com/research/open-source/kinetics pose and action data sets
http://fivethirtyeight.com/ public opinion on sport and more
https://www.google.com/publicdata/directory public-interest data (USA)
https://scholar.google.com/ Scholarly text as data sets
https://science.mozilla.org/projects Sciences
https://cooldatasets.com/ Sciences, civil, entertainment, machine learning, etc.
https://www.ukdataservice.ac.uk/ social, economic population in the UK
http://www.databasesports.com/ Sports data
https://data.gov.uk/ UK civil, social
https://data.unicef.org/ UNICEF data sets, civil
http://data.un.org/ United Nations data sets
  http://data.humdata.org/ United Nations humanitarian data sets
https://archive.ics.uci.edu/ml/index.php University of California Machine Learning dataset
https://lodum.de/ University of Münster data sets
https://www.data.gov/ USA civil, social
https://ucr.fbi.gov/ USA crime statistics
http://www.census.gov/data.html USA population
https://www.yelp.com/dataset User data sets, (Yelp users)
http://opendataimpactmap.org/ Various
http://plenar.io/ Various
https://ckan.org/ Various
https://datahub.io/search Various
https://dataverse.org/ Various
https://github.com/datasets/ various
https://knoema.com/ various
https://opendatakit.org/ Various
https://opendatamonitor.eu/frontend/web/index.php?r=dashboard%2Findex Various
https://registry.opendata.aws/ Various
https://rs.io/100-interesting-data-sets-for-statistics/ various
https://www.columnfivemedia.com/100-best-free-data-sources-infographic Various
https://www.kaggle.com/datasets Various data sets
https://github.com/freeCodeCamp/open-data Various for coders
  http://datahub.io/ Various open data sets
https://www.kaggle.com/ Various, general data resource
https://www.reddit.com/r/datasets/comments/exnzrd/coronavirus_datasets/    https://github.com/CryptoKass/ncov-data Virus data set
https://wiki.dbpedia.org/ Wikipedia data sets
https://www.who.int/gho/database/en/ World Health Organization data sets
https://cdan.dot.gov/query   https://github.com/wgetsnaps/ftp.nhtsa.dot.gov–fars Traffic, USA, fatality data set
http://www.image-net.org/ Large-scale labeled image data set (ImageNet)
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.296.9477&rep=rep1&type=pdf  
http://xiaming.me/posts/2014/10/23/leveraging-open-data-to-understand-urban-lives/ New York (civil) data sets and more
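
As a small practical companion to the list above, the sketch below shows how one of these resources could be pulled into a first pattern-recognition experiment. It is a minimal sketch only, assuming Python with the pandas and scikit-learn libraries installed; the direct CSV address used for the Iris flowers data set is a commonly used UCI location and is an assumption of this sketch, since the list above links to the data set's description page.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Direct CSV address for the UCI Iris data set (assumed here; the list above
# points to the description page rather than the raw file).
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
COLUMNS = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]

# Read the comma-separated values straight from the repository and drop any
# incomplete rows.
df = pd.read_csv(URL, header=None, names=COLUMNS).dropna()

X = df[COLUMNS[:-1]].values  # four numeric measurements per flower
y = df["species"].values     # class label: setosa, versicolor or virginica

# Hold out 30% of the rows to check how well the learned pattern generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A k-nearest-neighbours classifier as a simple pattern-recognition baseline.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

Any of the other tabular resources in the list could be substituted in the same way; only the column names, the cleaning steps and the choice of model would change.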

AI application in social settings: Facial Recognition & Surgical Masks

In some Asian countries, the deployment of AI applications for facial recognition in the public sphere has been well under way. In these same countries, for other reasons, many people are used to wearing facial masks. Some wear them out of respect for their fellow citizens while going through a cold or other illness. Some wear them to protect themselves from pollutants in the air. At times, facial masks or coverings are used for protection against sunlight or sandstorms.

In European countries, for instance, masks have been used during festivities such as carnival and during acts of civil disobedience, such as demonstrations. Presently, though often reluctantly, a few more individuals use surgical masks or similar filtering masks to protect themselves from illness. In general, until recently, governments in the EU have not been promoting their use.

Now that surgical masks are worn far more widely among the general population, will this have an effect on investments made in AI research and development towards facial recognition in the public sphere? Will it affect the usability of the systems already in place?