Product description
Real-time rendering imposes the challenging task of creating a new rendering of an input scene at least 60 times per second. Although computer graphics hardware has made staggering advances in speed and programmability in recent years, a number of algorithms remain too expensive to compute within this time budget, such as exact shadows or an exact global illumination solution. One way to work around this hard time limit is to capitalize on temporal coherence and formulate algorithms that are incremental in time. To this end, three algorithms that successfully incorporate temporal coherence are analysed in detail. To highlight the benefits provided by these new practical algorithms, the book also covers the relevant previous work, not only in the field of temporal coherence but also in the fields of real-time hard and soft shadows and discrete LOD blending. This book targets computer scientists and students with prior knowledge of real-time rendering.
About the author
Daniel Scherzer is a researcher at the Institute of Computer Graphics and Algorithms of the Vienna University of Technology, where he received an M.Sc. in 2005, an M.Soc.Ec.Sc. in 2008 and a Ph.D. in 2009. His current research interests include shadow algorithms, temporal coherence methods, modelling and level-of-detail approaches.