Signed zero is zero with an associated sign. In ordinary arithmetic, −0 = +0 = 0. However, in computing, some number representations allow for the existence of two zeros, often denoted by −0 (negative zero) and +0 (positive zero). This occurs in some signed number representations for integers, and in most floating-point number representations. The number 0 is usually encoded as +0, but it can be represented by either +0 or −0. The IEEE 754 standard for floating-point arithmetic (presently used by most computers and programming languages that support floating-point numbers) requires both +0 and −0. The two zeros can be considered a variant of the extended real number line, such that 1/−0 = −∞ and 1/+0 = +∞; division by zero is undefined only for ±0/±0.
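
As a concrete illustration, the following minimal C sketch (assuming an IEEE 754 double, which virtually all modern C implementations provide) shows that the two zeros compare equal yet behave differently under division; the exact text printed for infinities and NaN may vary by platform:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double pz = +0.0;   /* positive zero */
        double nz = -0.0;   /* negative zero */

        /* The two zeros compare equal in ordinary arithmetic comparisons. */
        printf("+0.0 == -0.0: %d\n", pz == nz);                /* 1 */

        /* Their signs differ, observable via signbit(). */
        printf("signbit(+0.0) = %d, signbit(-0.0) = %d\n",
               signbit(pz) ? 1 : 0, signbit(nz) ? 1 : 0);      /* 0, 1 */

        /* Division by a signed zero yields a correspondingly signed
           infinity, per IEEE 754. */
        printf("1.0 / +0.0 = %f\n", 1.0 / pz);                 /* inf  */
        printf("1.0 / -0.0 = %f\n", 1.0 / nz);                 /* -inf */

        /* 0/0 remains undefined: it produces NaN, not an infinity. */
        printf("0.0 / 0.0  = %f\n", pz / pz);                  /* nan  */

        return 0;
    }

Because +0.0 == -0.0 evaluates as true, ordinary comparisons cannot tell the two zeros apart; functions such as signbit or copysign must be used to inspect the sign.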