
Algorithmic Randomness and Complexity
This book is concerned with the theory of computability and complexity over the real numbers. The theory was initiated by Turing, Grzegorczyk, Lacombe, Banach, and Mazur, and has seen rapid growth in recent years. Computability theory and complexity theory are two central areas of research in theoretical computer science. Until recently, most work in these areas concentrated on problems over discrete structures, but computability and complexity theory over the real numbers and other continuous structures have grown enormously, especially where concepts of "randomness" are involved. One reason for this growth is that more and more computational problems over the real numbers are being tackled by computer scientists, for example in computational geometry and in the modeling of dynamical and hybrid systems. Scientists working on these questions come from such diverse fields as theoretical computer science, domain theory, logic, constructive mathematics, computer arithmetic, numerical mathematics, and analysis. The book is an essential resource for all researchers in theoretical computer science, logic, computability theory, and complexity.
Intuitively, a sequence such as 101010101010101010... does not seem random, whereas 101101011101010100..., obtained using coin tosses, does. How can we reconcile this intuition with the fact that both are statistically equally likely? What does it mean to say that an individual mathematical object, such as a real number, is random, or to say that one real is more random than another? And what is the relationship between randomness and computational power?

The theory of algorithmic randomness uses tools from computability theory and algorithmic information theory to address questions such as these. Much of this theory can be seen as exploring the relationships between three fundamental concepts: relative computability, as measured by notions such as Turing reducibility; information content, as measured by notions such as Kolmogorov complexity; and randomness of individual objects, as first successfully defined by Martin-Löf. Although algorithmic randomness has been studied for several decades, a dramatic upsurge of interest in the area, starting in the late 1990s, has led to significant advances.

This book is the first comprehensive treatment of this important field, designed to be both a reference tool for experts and a guide for newcomers. It surveys a broad section of work in the area and presents most of its major results and techniques in depth. Its organization is designed to guide the reader through this large body of work, providing context for its many concepts and theorems, discussing their significance, and highlighting their interactions. It includes a discussion of effective dimension, which allows us to assign concepts like Hausdorff dimension to individual reals, as well as a focused but detailed introduction to computability theory. The book will be of interest to researchers and students in computability theory, algorithmic information theory, and theoretical computer science.
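
To make the compressibility intuition above concrete, here is a small illustrative sketch in Python (an addition for this page, not taken from the book). It uses zlib compression as a crude, computable stand-in for Kolmogorov complexity, which is itself not computable: the regular pattern 1010... admits a short description and compresses well, while simulated coin tosses leave the compressor essentially nothing to exploit.

import random
import zlib

def pack_bits(bits: str) -> bytes:
    # Pack a string of '0'/'1' characters into raw bytes.
    n = (len(bits) + 7) // 8
    return int(bits, 2).to_bytes(n, "big")

def compressed_size(bits: str) -> int:
    # Size in bytes of the zlib-compressed bit string: a rough,
    # computable proxy for its Kolmogorov complexity.
    return len(zlib.compress(pack_bits(bits), 9))

periodic = "10" * 512                                        # 1024 bits of 101010...
random.seed(0)                                               # fixed seed, for reproducibility
tosses = "".join(random.choice("01") for _ in range(1024))   # 1024 simulated coin tosses

print(len(pack_bits(periodic)), compressed_size(periodic))   # 128 raw bytes -> only a handful after compression
print(len(pack_bits(tosses)), compressed_size(tosses))       # 128 raw bytes -> about 128 or more: no pattern found

This is precisely the distinction that Martin-Löf randomness makes rigorous: by the Levin-Schnorr theorem, a sequence X is Martin-Löf random if and only if its prefixes are incompressible, in the sense that K(X↾n) ≥ n − c for some constant c, where K denotes prefix-free Kolmogorov complexity. The effective dimension mentioned above refines this picture: for instance, the effective Hausdorff dimension of X equals liminf_n K(X↾n)/n.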