This book grew out of previously published papers of mine composed over a period of years; they have been reworked (sometimes beyond recognition) so as to form a reasonably coherent whole. Part One treats of informative inference. I argue (Chapter 2) that the traditional principle of induction in its clearest formulation (that laws are confirmed by their positive cases) is clearly false. Other formulations in terms of the 'uniformity of nature' or the 'resemblance of the future to the past' seem to me hopelessly unclear. From a Bayesian point of view, 'learning from experience' goes by conditionalization (Bayes' rule). The traditional stumbling block for Bayesians has been to find objective probability inputs to conditionalize upon. Subjective Bayesians allow any probability inputs that do not violate the usual axioms of probability. Many subjectivists grant that this liberality seems prodigal but own themselves unable to think of additional constraints that might plausibly be imposed. To be sure, if we could agree on the correct probabilistic representation of 'ignorance' (or absence of pertinent data), then all probabilities obtained by applying Bayes' rule to an 'informationless' prior would be objective. But familiar contradictions, like the Bertrand paradox, are thought to vitiate all attempts to objectify 'ignorance'. Building on the earlier work of Sir Harold Jeffreys, E. T. Jaynes, and the more recent work of G. E. P. Box and G. E. Tiao, I have elected to bite this bullet. In Chapter 3, I develop and defend an objectivist Bayesian approach.
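Conditionalization, in the Bayesian sense referred to here, is the standard update rule: upon learning evidence E (and nothing stronger), one's new degree of belief in a hypothesis H is the old conditional probability given E,

$$
P_{\mathrm{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}, \qquad P(E) > 0,
$$

so that 'learning from experience' amounts to repeated applications of Bayes' rule as data accumulate; the dispute sketched above concerns where the prior P(H) is to come from.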