In recent decades, data collection and data harvesting have become established tools for improving production and sales, satisfying customers, and similar goals. As vast amounts of data are stored and processed, the topic of data quality has gained importance. Empirical studies show that maintaining good data quality is imperative to fully exploit the potential of data. Today, data management projects allocate substantial budgets to data maintenance. Yet the value of improving data quality remains obscure and intangible. How, then, can large investments in data maintenance be justified? This work proposes a definition of the value of data quality and a normative model for assessing that value. Approaches for calculating the normative value of data are adapted to develop a model based entirely on probability theory and statistical decision theory. By applying the model to different scenarios, the behaviour of the value of data quality, as proposed in this work, is studied in an axiomatic manner. The results of these studies illustrate and discuss the consequences of poor data quality. Finally, a formal way to assess and rank data quality improvement measures is demonstrated.