22,99 €
incl. VAT

Ready to ship in 6-10 days
  • Paperback


Product description
High Quality Content by WIKIPEDIA articles! In information theory (developed by Claude E. Shannon, 1948), self-information is a measure of the information content associated with the outcome of a random variable. It is expressed in a unit of information, for example bits, nats, or hartleys, depending on the base of the logarithm used in its calculation. The term self-information is also sometimes used as a synonym of entropy, i.e. the expected value of self-information in the first sense, because I(X;X) = H(X), where I(X;X) is the mutual information of X with itself. These two meanings are not equivalent, and this article covers the first sense only. For the other sense, see entropy. By definition, the amount of self-information contained in a probabilistic event depends only on the probability of that event: the smaller its probability, the larger the self-information associated with receiving the information that the event indeed occurred.
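For reference, the description above does not print the formula itself; the standard definition of self-information in information theory is

\[
I(x) \;=\; -\log_b P(x) \;=\; \log_b \frac{1}{P(x)},
\]

where P(x) is the probability of the event x and the base b of the logarithm determines the unit: b = 2 gives bits, b = e gives nats, and b = 10 gives hartleys. For example, an event with probability 1/8 carries I(x) = -log_2(1/8) = 3 bits of self-information, while a certain event (P(x) = 1) carries 0 bits.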