Knowing our World: An Artificial Intelligence Perspective considers the methodologies of science, computation, and artificial intelligence to explore how we humans come to understand and operate in our world. While humankind's history of articulating ideas and building machines that can replicate the activity of the human brain is impressive, Professor Luger focuses on understanding the skills that enable these goals.
Based on insights afforded by the challenges of AI design and program building, Knowing our World proposes a foundation for the science of epistemology. Taking an interdisciplinary perspective, the book demonstrates that AI technology offers many representational structures and reasoning strategies that support clarification of these epistemic foundations.
This monograph is organized into three parts. The first three chapters introduce the reader to the foundations of computing and the philosophical background that supports the AI tradition. These chapters describe the origins of AI, programming as iterative refinement, and the representations and very high-level language tools that support AI application building.
The book's second part introduces three of the four paradigms that have shaped research and development in AI over the past seventy years: symbol-based, connectionist, and complex adaptive systems. Luger presents several introductory programs in each area and demonstrates their use.
The final three chapters present the primary theme of the book: bringing together the rationalist, empiricist, and pragmatist philosophical traditions in the context of a Bayesian world view. Luger describes Bayes' theorem with a simple proof to demonstrate epistemic insights. He describes research in model building and refinement and several philosophical issues that constrain the future growth of AI. The book concludes with his proposal of the epistemic stance of an active, pragmatic, model-revising realism.
"This book is a 'must read' for anyone that has an interest in Artificial Intelligence and epistemologically related issues. Besides offering new insights into the philosophical foundations of epistemology, it is a veritable encyclopedia covering the history and core aspects of contemporary AI and Cognitive Science." (Daniel G. Schwartz, AI & SOCIETY, Vol. 39 (4), 2024)