Donald Tveter
The Pattern Recognition Basis of Artificial Intelligence
- Paperback
The book is organized so that the reader develops an intuitive feeling for the principles of AI. Throughout the book, applications of basic principles are demonstrated by examining classic AI programs in detail. It can serve as a text for juniors, seniors, and first-year graduate students in computer science or psychology, and it includes sample problems and data for exercises as well as a list of frequently asked questions.
Other customers were also interested in
- Richard O. Duda: Pattern Classification, 2nd Edition with Computer Manual, 2nd Edition Set (203,99 €)
- Richard O. Duda: Pattern Classification (210,99 €)
- Ulisses M. Braga Neto: Error Estimation for Pattern Recognition (162,99 €)
- Ricardo Caferra: Logic for Computer Science and Artificial Intelligence (261,99 €)
- Jan Flusser: Moments and Moment Invariants in Pattern Recognition (138,99 €)
- Pradipta Maji: Rough-Fuzzy Pattern Recognition (140,99 €)
- Jacques Pitrat: Artificial Beings (189,99 €)
Note: This item can only be shipped to a German delivery address.
Product details
- Publisher: Wiley
- Number of pages: 388
- Publication date: 13 March 1998
- Language: English
- Dimensions: 235mm x 191mm x 21mm
- Weight: 723g
- ISBN-13: 9780818677960
- ISBN-10: 0818677961
- Item no.: 22078517
- Manufacturer information:
- Libri GmbH
- Europaallee 1
- 36244 Bad Hersfeld
- 06621 890
Donald Tveter is the author of The Pattern Recognition Basis of Artificial Intelligence, published by Wiley.
Preface.
1 Artificial Intelligence.
1.1 Artificial Intelligence and Intelligence.
1.1.1 Intelligence.
1.1.2 Thinking.
1.1.3 The Turing Test for Thinking.
1.1.4 The Chinese Room Argument.
1.1.5 Consciousness and Quantum Mechanics.
1.1.6 Dualism.
1.2 Association.
1.3 Neural Networking.
1.3.1 Artificial Neural Networks.
1.3.2 Biological Neural Networks.
1.4 Symbol Processing.
1.5 Heuristic Search.
1.6 The Problems with AI.
1.7 The New Proposals.
1.7.1 Real Numbers.
1.7.2 Picture Processing.
1.7.3 Memories.
1.7.4 Quantum Mechanics.
1.8 The Organization of the Book.
1.9 Exercises.
2 Pattern Recognition I.
2.1 A Simple Pattern Recognition Algorithm.
2.2 A Short Description of the Neocognitron.
2.2.1 Detecting Short Lines.
2.2.2 A Typical Neocognitron.
2.2.3 Training the Neocognitron.
2.2.4 Some Results.
2.3 Recognizing Words.
2.4 Expanding the Pattern Recognition Hierarchy.
2.4.1 Hearing.
2.4.2 Higher Levels.
2.4.3 The Hierarchy.
2.4.4 On the Hierarchy.
2.5 Additional Perspective.
2.5.1 Other Systems.
2.5.2 Realism.
2.5.3 Bigger Problems.
2.6 Exercises.
3 Pattern Recognition II.
3.1 Mathematics, Pattern Recognition, and the Linear Pattern Classifier.
3.1.1 The Linear Pattern Classifier.
3.1.2 ADALINEs and MADALINEs.
3.1.3 Perceptrons.
3.2 Separating Nonlinearly Separable Classes.
3.2.1 The Nearest Neighbor Algorithm.
3.2.2 Learning Vector Quantization Methods.
3.3 Hopfield Networks.
3.3.1 The Hopfield Network.
3.3.2 Storing Patterns.
3.3.3 The Boltzmann Machine.
3.3.4 Pattern Recognition.
3.3.5 Harmony.
3.3.6 Comparison with Human Thinking.
3.4 Back-Propagation.
3.4.1 History.
3.4.2 The Network.
3.4.3 Computing the Weights.
3.4.4 Speeding Up Back-Propagation.
3.4.5 Dealing with Local Minima.
3.4.6 Using Back-Propagation to Train Hopfield/Boltzmann Networks.
3.5 Pattern Recognition and Curve Fitting.
3.5.1 Pattern Recognition as Curve Fitting.
3.5.2 Approximating Real-Valued Functions.
3.5.3 Overfitting.
3.6 Associative Memory and Generalization.
3.6.1 Associative Memory.
3.6.2 Local and Distributed Representations.
3.6.3 Reasoning within a Network.
3.7 Applications of Back-Propagation.
3.7.1 Interpreting Sonar Returns.
3.7.2 Reading Text.
3.7.3 Speech Recognition.
3.7.4 Detecting Bombs.
3.7.5 Economic Analysis.
3.7.6 Learning to Drive.
3.7.7 DNA Analysis.
3.8 Additional Perspective.
3.9 Exercises.
4 Rule-Based Methods.
4.1 Introduction.
4.2 Some Elementary Prolog.
4.2.1 Stating Facts.
4.2.2 Syntax.
4.2.3 Asking Questions.
4.2.4 Rules.
4.2.5 Recursion.
4.2.6 List Processing.
4.2.7 Other Predicates.
4.3 Rules and Basic Rule Interpretation Methods.
4.3.1 A Small Rule-Based System.
4.3.2 Forward Chaining.
4.3.3 Backward Chaining.
4.4 Conflict Resolution.
4.5 More Sophisticated Rule Interpretation.
4.5.1 Dealing with Incomplete Data by Asking Questions.
4.5.2 Other Activation Functions.
4.5.3 Uncertain Input.
4.5.4 Extra Facilities for Rule Interpreters.
4.6 The Famous Expert Systems.
4.6.1 DENDRAL.
4.6.2 MYCIN.
4.6.3 PROSPECTOR.
4.6.4 ACE.
4.6.5 XCON.
4.7 Learning Rules in SOAR.
4.7.1 A Searching Example.
4.7.2 The Power Law of Practice.
4.8 Rules versus Networks.
4.9 Exercises.
5 Logic.
5.1 Standard Form and Clausal Form.
5.2 Basic Inference Rules.
5.2.1 Inference Rules.
5.2.2 Clauses with Variables.
5.3 Controlling Search.
5.3.1 The Problem with Blind Searching.
5.3.2 Proof by Contradiction.
5.3.3 The Set-of-Support Strategy.
5.3.4 Weighting.
5.3.5 Prolog's Strategy.
5.4 An Example Using Otter.
5.4.1 The Problem.
5.5 The Usefulness of Predicate Calculus.
5.6 Other Reasoning Methods.
5.7 Exercises.
6 Complex Architectures.
6.1 The Basic Human Architecture.
6.2 Flow of Control.
6.3 The Virtual Symbol Processing Machine Proposal.
6.4 Mental Representation and Computer Representation.
6.4.1 A Problem with Symbolic Representation.
6.4.2 Symbol Grounding as a Solution.
6.4.3 Structure and Operations on Structures.
6.5 Storing Sequential Events.
6.5.1 The Symbolic Solution.
6.5.2 Neural Solutions.
6.6 Structuring Individual Thoughts.
6.6.1 The Symbolic Methods.
6.6.2 Neural Methods.
6.7 Frames and Scripts.
6.7.1 Schemas and Frames.
6.7.2 Scripts.
6.8 Exercises.
7 Case-Based and Memory-Based Reasoning.
7.1 Condensed versus Uncondensed Knowledge.
7.1.1 Arguments For Condensed Knowledge.
7.1.2 Arguments Against Condensed Knowledge.
7.1.3 Problems with Condensed Representations.
7.2 Memory-Based Reasoning.
7.2.1 A Simple Example.
7.2.2 MBRtalk.
7.2.3 A HERBIE Solution to Reading.
7.2.4 JOHNNY.
7.2.5 PACE.
7.3 Case-Based Reasoning.
7.3.1 Case-Based Reasoning in People.
7.3.2 CHEF.
7.4 Other Case-Based Programs.
7.5 Exercises.
8 Problem Solving and Heuristic Search.
8.1 The 8-Puzzle.
8.1.1 The Blind Search Methods.
8.1.2 Heuristic Searches.
8.1.3 Other Methods.
8.2 A Geometry Theorem Prover.
8.3 Symbolic Integration and Heuristic Search.
8.3.1 SAINT.
8.3.2 A Symbolic Program to Learn Integration.
8.3.3 A Partial Back-Propagation Solution.
8.4 Other Heuristic Programs.
8.5 Exercises.
9 Game Playing.
9.1 General Game Playing Techniques.
9.1.1 Minimax.
9.1.2 More Sophisticated Searching Methods.
9.1.3 Using Experience.
9.2 Checkers.
9.2.1 Rote Learning.
9.2.2 Generalization Learning.
9.2.3 Samuel's Later Work.
9.2.4 Chinook.
9.3 Backgammon.
9.3.1 Berliner's BKG Program.
9.3.2 Backgammon Using Back-Propagation.
9.3.3 A Second Back-Propagation Approach.
9.3.4 Temporal Difference Learning.
9.4 Exercises.
10 Natural Language Processing.
10.1 Formal Languages.
10.2 The Transition Network Grammar.
10.2.1 A Simple Transition Network.
10.2.2 A Prolog Implementation.
10.2.3 A Neural Analog.
10.2.4 Syntax Is Not Enough.
10.3 Semantics-Based Methods.
10.3.1 Semantic Grammar.
10.3.2 Conceptual Dependency Notation.
10.4 Scripts and Short Stories.
10.5 A Neural-Network-Based Approach.
10.6 Defining Words by the Way They Are Used.
10.7 A Recurrent Network for Sentences.
10.8 Neural-Based Scripts.
10.9 Learning the Past Tense of Verbs.
10.9.1 Over-Regularization.
10.9.2 The Rumelhart and McClelland Network.
10.9.3 The Classical Rule-Based Model.
10.9.4 A Hybrid Model.
10.10 Other Positions on Language.
10.11 Exercises.
Afterword.
Appendix A.
A.1 A Derivation of Back-Propagation.
A.1.1 The Delta Rule.
A.1.2 The Generalized Delta Rule, or Back-Propagation.
Glossary.
Bibliography.
Index.