Introduction to Artificial Intelligence with Applications to Public Health
Lecture 2 - Foundations
Dr. Anthony G. Francis, Jr.
January 31, 2005
· Artificial Intelligence: Preface, Chapters 1 and 2
· Machines Who Think: Preface, Chapters 1, 2 and 3
· Dive Into Python: Chapters 1 and 2
· Robot Dreams: The Prehistory of Artificial Intelligence
· Foundations of Machine Intelligence
· Mechanized Calculation
· Symbolic Representation
· Automated Reasoning
· Knowledge Systems
· Functional Specialization
· The Hard Problems: Language, Vision, Robotics
· Biological Simulation: Neurons, Learning, Animats
· Mechanized Calculation: ENIAC et al.
· Symbolic Representation: the Logic Theorist and LISP
· Automated Reasoning: SIR and the General Problem Solver
· Knowledge Systems: MYCIN
· Functional Specialization: Vision, et al.
· Language and Translation: MT, Eliza, SHRDLU
· Vision: Early Systems, Marr, Aaron and Kosslyn
· Robotics: Shakey, Robot Systems, Darpa Challenge
· Neurons: Minsky, Perceptrons, Neural Nets
· Learning: Samuel, Arch, Genetic Algorithms
· Animats: Braitenberg, Games, The Fish
Assignment 1: Introduction to Python
There are a lot of good introductions to the history of artificial intelligence: Machines Who Think, the first chapters of Russell & Norvig’s Artificial Intelligence, the first chapter of AI Application Programming, the Stottler Henke online text, and so on. But it’s just as important to consider the conceptual foundations of AI: not “what happened when”, though that will come up, but instead “what follows what”.
Artificial intelligence begins with inspiration: the idea that machine intelligence is possible, rather than unthinkable. It also needs inclination: the idea that artificial intelligence is desirable, perhaps even necessary, rather than heretical. Unfortunately, these two ideas are not enough: because of its complexity machine intelligence also requires a solid base of engineering to build intelligent artifacts. Engineering itself is not enough: with it you can build interesting toys, but you also need the scientific method to evaluate their performance and formal mathematical models to guide the development of more and more complex systems. With all of these tools, you can construct computing machinery that can perform increasingly sophisticated computations: mathematics, text manipulations, logical operations, and ultimately symbolic processing. While basic mathematics is sufficient for basic control systems and neural networks, symbolic processing is necessary for automated reasoning, problem solving, and knowledge systems. With these tools, AI had all that it needed to address problems in language, vision, planning, learning, and the control of robotic effectors. However, real robots in complex environments — as well as complete software agents, computer game characters, and models of human reasoning — need more than just isolated vision, planning, and action systems: they need agent architectures which integrate these separate systems into a whole which responds coherently to its environment.
As with any model, we run the risk of oversimplifying our target phenomenon. The real history of artificial intelligence includes fits and starts, complete robots built before the age of formal theory, and agent architectures devised far ahead of the component tasks they presumably would need to integrate. However, this model will enable us to focus on the kinds of computations that AI systems perform at each increasing stage of complexity, and examine what we need to learn about each stage.
In the last lecture we reviewed many of the modern inspirations of artificial intelligence — it’s common for students to cite HAL from 2001 or Commander Data from Star Trek as their inspiration. The quest for artificial intelligence is far older, however. We can break inspirations for AI down into three major groups: the supernatural imagination, the philosophical imagination, and the desire to save labor.
· The Supernatural Imagination: transfer of the idea of human agency onto a non-human agent
· Artificial Intelligence by Miracle Workers
· The Blacksmith as Miracle Worker — Hephaestus
· The Inventor as Miracle Worker — Daedalus
· The Philosopher as Miracle Worker — The Golden Head of Pope Sylvester II
· The Priest as Miracle Worker — the Golem
But these advances were often seen as blasphemous — the occult works of Sylvester II are said to have scandalized generations to come. Artificial intelligence needed more than just the idea of intelligence transferred to a non-human agent: it needed a philosophical system that encouraged such exploration.
· The Philosophical Imagination: formalizing thought
· The Zairja
· The Ars Magna
· Leibniz’ Language of thought
Just having the idea of artificial intelligence was not enough: according to some interpretations of religion the idea of creating a thinking machine is heretical (by suggesting that men are akin to machines) or blasphemous (by usurping God’s special domain). It’s no accident that some of the earliest ideas of formalizing thought, the Zairja and the Ars Magna, were products of their religious systems — proposing structures outside of the scope of approved religious thought could get you killed.
As science and mathematics progressed, however, it became fertile ground for rich systems of thought that cried out to be formalized. Some of the earliest attempts at “machine intelligence” came when mathematicians like Leibniz attempted to construct the first non-religious formal framework for mathematical thought … a thread that would continue for centuries and later culminate in the works of Boole, Frege, Peirce, Russell, Whitehead and Gödel.
· Desire to save labor: trying to eliminate work done by rote
· The Abacus
· The Pascaline and Leibniz’ calculator
· Charles Babbage’s Difference Engine
Tools to save labor have existed since the abacus. However, it wasn’t until modern mathematical methods were developed in the 1600s that more complicated calculators were developed, including Blaise Pascal’s Pascaline and Leibniz’s calculator. These remained toys until engineering, navigation and ballistics created a demand for vast mathematical tables, which were laborious to compute by hand and prone to error.
Charles Babbage was one of the pioneers of mechanizing mathematics. Originally inspired by the idea of saving the labor of calculating tables with his Difference Engine, Babbage was never able to complete his ideas in part because the mechanical engineering technology of the times did not permit it. Over the next century, other researchers would create a variety of calculating machines, but reliability remained a problem up until modern electronic components and the creation of the transistor.
Babbage is also notable for being sidetracked into the idea of creating a more general machine called the Analytical Engine and inspiring the first computer programmer (Lady Ada Lovelace). Babbage was not the last scientist to begin with the idea of saving labor who was later pulled into the idea of mechanizing thought: a number of researchers of Alan Turing’s time, originally interested in the idea of mechanizing ballistics, also became enamored with the idea of computing machines.
· Developing toys is not enough: you need engineering and the scientific method
· Early single-purpose automata
· Initial pre-electronic attempts at chess-playing machines
· Cybernetics: the dawn of formal methods for the study of intelligence
· Mathematical tools must exist which can support the tasks at hand
· Newton and Leibniz and the development of the calculus
· Subsequent scientific development pushed mathematics ahead of physics
· Both geometry and logic raced ahead of physics and AI in the 1800s
· Formal mathematics of logic fairly recent
· Boolean Algebra
· Propositional Logic
· Formalized Reasoning
· Modeling using the calculus
With these tools — the inspiration and inclination to create machine intelligence, sound engineering practice, a scientific method to build upon previous work, and a rich body of mathematical knowledge which could provide a foundation for detailed work — the foundations for machine intelligence were laid and work began in earnest.
To be blunt, I am not going to distinguish deeply between initial efforts to create computers and initial efforts towards artificial intelligence. I don’t want to appropriate all of computer science as part of artificial intelligence — or perhaps I do — but ALL computer science is an attempt to recreate at least one particular human job in a machine — the job of the human “computer”, or professional human calculator.
Looking at artificial intelligence and computer science as a whole, we can see a smooth progression between the foundations laid by early computer efforts and more complicated reasoning systems:
· Mechanized Calculation
· Symbolic Representation
· Automated Reasoning
· Knowledge Systems
· Functional Specialization
· Language
· Vision
· Learning
· Robotic Control
· Neural Simulation
· Evolutionary Programs
· Architectural Integration
· Robotic Architectures
· Agent Architectures
· Cognitive Architectures
While a vast array of mechanical devices exist to perform computations — differential gears, analogue wheels, hydraulic logic — it was the modern electronic computer that made practical computing machinery possible. While the first computers were intended as pure calculating machines, and some computer scientists like Dijkstra are careful to note that computers are not brains, it is also important to note that many aspects of early computers were either designed to perform functions done by brains or explicitly designed with the architecture of the brain in mind. Von Neumann, who was skeptical about the prospects of artificial intelligence, nevertheless modeled his machine architecture on that of the human brain, including input and output neurons, a memory store, and so on.
Modern digital calculating machines are based first and foremost on discrete information units — numbers, or at the most primitive, binary numbers or bits (binary digits). At their heart, almost all the information stored in a modern computer is simply a series of numbers.
· Computers as Calculators (illustrated in the Python sketch after this list)
· Designed to perform calculations
· Later designed in imitation of human brain
· Numerical Calculation
· Digital computers based on numerical representation in physical objects
· Numerical representations enable a wide range of mathematical calculation
· Usages – ballistics, census tabulation
· Text Encoding
· Text is not stored directly in most computers
· Mapping of letters or glyphs to numbers
· This mapping also applies to a variety of other data formats: images, etc.
· Usages - human readable output
· Logical encoding
· The computer making a choice
· True and False
· Branching and Switches
· Lists and Data Structures
· Numbers and text are difficult to process in large quantities
· Pointers enable one number in a computer to point to another
· Lists enable one data item to point to its next data item – the CONS cell
· Arrays are a sequence of data items with a size
· Hash tables and dictionaries are content based stores
· All lists, arrays, and so on can be seen as pointers to memory locations combined with size info
· Usages – more complex lookup operations
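A small Python sketch of the encodings listed above — numbers, characters mapped to numbers, truth values driving a choice, and lists and dictionaries as structured stores. The names and values are invented purely for illustration:
# Numbers: at bottom, everything in the machine is discrete numeric data.
count = 281421906               # an integer
rate = 3.7                      # a floating-point number

# Text encoding: characters are just numbers under the hood.
print(ord('A'))                 # 65 -- the numeric code for the letter 'A'
print(chr(65))                  # 'A' -- and back again

# Logical encoding: truth values let the computer make a choice (branching).
if rate > 2.0:
    print("above threshold")
else:
    print("below threshold")

# Lists: each item leads on to the next, like a chain of CONS cells.
counts = [12, 7, 30, 5]
print(counts[0])                # indexed lookup, as in an array

# Dictionaries: content-based lookup (a hash table).
cases = {"flu": 12, "measles": 2}
print(cases["flu"])             # look up by key rather than by position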
· Early artificial intelligences
· Control Systems - Cybernetics (see the feedback sketch after this list)
· Feedforward: Control Output
· Feedback: Output Monitoring
· Comparison
· Compensation
· Negative Feedback – Homeostasis – Governors
· Positive Feedback – Augmentation – Servomechanisms
· Hunting – unwanted fluctuations requiring smoothing of control
· Neural networks
· Node locations
· Node weights
· Update functions
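A toy sketch of negative feedback in the spirit of the governors above — the setpoint, gain, and heat-loss numbers are invented for illustration only:
# A toy thermostat: negative feedback drives the output toward a setpoint.
setpoint = 20.0        # desired temperature
temperature = 5.0      # measured output
gain = 0.3             # how strongly we compensate for the measured error

for step in range(20):
    error = setpoint - temperature        # feedback: compare output to goal
    temperature += gain * error - 0.1     # compensation, minus a little heat loss
    print("step %2d  temperature %.2f" % (step, temperature))

# With negative feedback the error shrinks toward zero (homeostasis);
# too large a gain would overshoot and oscillate -- the "hunting" noted above.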
· Formal version of the classical syllogism AEIO
· Universal Affirmative: All subjects are predicate
· Universal Negative: No subjects are predicate
· Particular Affirmative: Some subjects are predicate
· Particular Negative: Some subjects are not predicate
· Boolean Algebra
· True, False
· And, Or, and Not
· Laws: Commutative, Associative, Distributive, Identities, Redundancy
· De Morgan’s Theorem (see the truth-table sketch after these lists)
· Propositional Logic
· True, False, Terms
· And, Or, Not, Exclusive Or, Implication (if), Equivalence (If and Only If)
· Parenthetical groupings
· Sentences (compositions of the above)
· Predicate Calculus (First Order Predicate Calculus or First Order Logic)
· Objects in a domain, Variables, Quantifier Terms
· Well-formed formulas
· As distinguished from higher-order logic
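Boolean laws like De Morgan’s theorem can be checked mechanically by enumerating every truth assignment; the short sketch below does exactly that, and also writes implication and equivalence as ordinary functions (the helper names are mine, not part of any standard library):
# Check De Morgan's theorem by brute force over all truth assignments:
#   not (a and b)  ==  (not a) or (not b)
#   not (a or b)   ==  (not a) and (not b)
for a in (True, False):
    for b in (True, False):
        assert (not (a and b)) == ((not a) or (not b))
        assert (not (a or b)) == ((not a) and (not b))
print("De Morgan's laws hold for every assignment of a and b")

def implies(p, q):
    # "if p then q" is false only when p is true and q is false
    return (not p) or q

def iff(p, q):
    # "p if and only if q"
    return p == q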
· Models of Computation
· Logical Reasoning
· Definite Procedures
· Computable Functions
· Finite State Automata (see the sketch after this list)
· Turing Machines
· What they are
· What they can prove
· What can be reduced to them
· The Church-Turing Thesis
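To make the idea of a finite state automaton concrete, here is a minimal one in Python; the states, alphabet and accepting condition are an invented example that accepts binary strings containing an even number of 1s:
# A finite state automaton accepting binary strings with an even number of 1s.
transitions = {
    ("even", "0"): "even", ("even", "1"): "odd",
    ("odd",  "0"): "odd",  ("odd",  "1"): "even",
}

def accepts(string):
    state = "even"                            # start state
    for symbol in string:
        state = transitions[(state, symbol)]  # follow the transition table
    return state == "even"                    # accepting state

print(accepts("1101"))    # False -- three 1s
print(accepts("1001"))    # True  -- two 1s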
A physical symbol system is an actual physical object or system with the following components:
· Symbols – physical patterns that exist in some kind of system
· Expressions – groupings of symbols into larger structures
· Memory – collection of expressions in the system at any given time
· Processes – create, modify, reproduce and destroy expressions
Physical symbol systems are machines whose symbolic processes produce over time an evolving collection of symbolic expressions, and are connected to the world through expression and designation:
· Expressions designate objects when the system can manipulate or react to the object.
· The system can interpret expressions that designate processes the system can perform.
Physical symbol systems must satisfy the following features:
· A symbol can designate any expression realizable within the system.
· Each process in the system must be designated by some expression.
· Processes must exist to create and modify expressions in any way.
· Once created, expressions in memory persist until modified or deleted.
· The memory can hold an unbounded number of expressions.
The physical symbol system hypothesis is:
· A physical symbol system has the necessary and sufficient means for intelligent action.
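One crude way to picture this definition in code — only a cartoon of Newell and Simon’s idea, not an implementation of it — is to treat symbols as strings, expressions as tuples of symbols, memory as a growing collection of expressions, and processes as functions that create or modify expressions:
# A cartoon of a physical symbol system: symbols are strings, expressions are
# tuples of symbols, memory is a list of expressions, processes are functions.
memory = []

def create(expression):               # a process that creates an expression
    memory.append(expression)

def modify(old, new):                 # a process that modifies an expression
    memory[memory.index(old)] = new

create(("isa", "Socrates", "man"))
create(("all", "man", "mortal"))

# A trivial "interpretation": derive a new expression from two already in memory.
if ("isa", "Socrates", "man") in memory and ("all", "man", "mortal") in memory:
    create(("isa", "Socrates", "mortal"))
print(memory)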
What makes a knowledge system?
· Specializing – note Binet always was this way
· Robotic Architectures
· Agent Architectures
· Animats
· Cognitive Architectures
· Programming Systems
· Interpreters
· Data values
· Variables
· Functions
· Recursion
· Loops
· Input and output
· Lists
· Dictionaries
· String Processing
Simply put, programming systems are ways of specifying sets of instructions that the computer can understand — punched cards, wiring arrangements, loader switches, or modern systems — bits in files that are edited with the computers themselves. At the most basic level a machine takes a set of instructions; a programming system provides a notation to express a human task in ways that can be reliably translated into machine instructions.
These programming systems can be broken down into several varieties: direct machine codes, a human-readable language like assembly, or high-level languages like C, Basic, APL or Python. Machine codes are rarely used except for embedded devices. Assembly language is very close to the machine — it uses human-readable words to stand for machine instructions, plus macros and directives to enable convenient shorthand or provide access to operating system features.
The next stage is compiled programs — like assembly, these take a notation and transform it into machine code (or perhaps to assembly for further mechanical translation) but the machine level has largely disappeared. This includes languages which are very close to the machine, like FORTRAN and C, and languages which are farther away, such as COBOL and ALGOL.
Beyond this are interpreted programs — these include some form of runtime program mediating between the notation and the machine. One of the earliest such systems was LISP; others include Basic and Smalltalk. (Many interpreted languages also have compilers, so while Lisp has a reputation for being slow, compiled Lisp can be faster than C.)
Later systems began to blur these distinctions. The C language paired a small kernel compiler with many libraries — port the compiler, and you could port the whole language. Bytecode interpreters took this further by compiling their languages down to bytecode — a language for a “virtual machine” which runs independently of the underlying platforms, like UCSD Pascal or Java.
As systems got more complicated the need for running many different programs in sequence became important. Early scripting languages were purely interpreted — advanced job-control languages which scheduled the running of programs written in other languages. Modern scripting languages strike a balance — they contain runtime engines and libraries written in C and Assembler, and compile their programs into bytecodes which run on top of these runtimes, but they can also be run in interpreted mode and call out to arbitrary programs written in other languages. Perl is the best-known example of this.
Python is one of this last breed — a modern object-oriented scripting language, compiled on the fly to bytecodes executed on a runtime engine available for almost all modern computer systems. For fast execution it provides a large library of numeric routines and makes it easy to call out to C. Best of all, it is designed to be very easy to read and use.
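You can watch the “compile to bytecode, run on a runtime engine” step directly with Python’s standard dis module, which disassembles the bytecode a function compiles to:
import dis

def triple(n):
    return 3 * n

# Show the bytecode instructions the Python runtime engine actually executes.
dis.dis(triple)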
Numbers – 1, 1.0, 1253152251
Strings – 'a', 'Hello, World', 'This string has "quotes" in it', """This string
Spans multiple
Lines"""
Lists – [1, 2, 3]; ["this", "is", "a", "list"]
Tuples – (1, 2, 3)
Dictionaries – {1: "a", 2: "b"}
Objects – 'this string has quotes'.split(' ')
this = 1
this, that, theOther = 1, 2, 3
def square(n): return n * n
def factorial(n):
    if n < 1: return 1
    else: return n * factorial(n - 1)
for thing in range(1, 10): print thing
print "Hello"
print "hello",
this = raw_input()
f = open("this")
f.readline()
f.read()
word[value]
"hello"[0:3]
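Putting several of these constructs together — loops, string processing, and dictionaries — here is a small word-counting sketch; the sample text is made up:
# Count how often each word occurs in a string, using a dictionary.
text = "the quick brown fox jumps over the lazy dog the end"
counts = {}
for word in text.split(" "):     # string processing: split on spaces
    if word in counts:           # dictionary membership test
        counts[word] = counts[word] + 1
    else:
        counts[word] = 1
print(counts)                    # e.g. {'the': 3, 'quick': 1, ...}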