€44.99
incl. VAT
Free shipping*
Ships in 6-10 days
  • Paperback


Product description
Digital video communication has evolved tremendously in the past few years, experiencing significant advances in compression and transmission techniques. To quantify the performance of a video system, it is important to measure the quality of the video. Since humans are the ultimate receivers of a video signal, quality metrics must take into account the properties of the human visual system. So far, most of the metrics that have been proposed require access to the original video, which makes them unsuitable for real-time applications. We investigate how to estimate video quality in real-time applications using no-reference and reduced-reference metrics. To this end, we study the visibility, annoyance, and relative importance of different types of artifacts and how they combine to produce annoyance. The work uses synthetic artifacts, which are simpler, purer, and easier to describe, allowing a high degree of control over the amplitude, distribution, and mixture of different artifact types. We present metrics for estimating the strength of four types of artifacts. The outputs of the best artifact metrics are used to build a combination model for overall annoyance.
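
As an illustration only, and not the model presented in the book, a weighted Minkowski combination is one common way to map several artifact-strength estimates onto a single annoyance score. In the sketch below, the artifact names, weights, and exponent are assumptions chosen purely for the example.

def overall_annoyance(strengths, weights, p=2.0):
    """Combine per-artifact strength estimates into one annoyance score.

    strengths: mapping from artifact name to estimated strength (>= 0)
    weights:   mapping from artifact name to relative importance
    p:         Minkowski exponent; larger p lets the dominant artifact
               drive the overall score
    """
    total = sum((weights[name] * s) ** p for name, s in strengths.items())
    return total ** (1.0 / p)

# Illustrative strengths for four hypothetical artifact types
strengths = {"blockiness": 0.4, "blurriness": 0.2, "noisiness": 0.1, "ringing": 0.05}
weights = {"blockiness": 1.0, "blurriness": 0.8, "noisiness": 0.6, "ringing": 0.5}
print(overall_annoyance(strengths, weights))

With p = 1 the score reduces to a simple weighted sum, while larger exponents let the strongest artifact dominate; the actual combination rule and weights used in the book may differ.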