The fusion of information from sensors with different physical characteristics, such as sight, touch, and sound, enhances the understanding of our surroundings and provides the basis for planning, decision-making, and control of autonomous and intelligent machines. The minimal representation approach to multisensor fusion is based on the use of an information measure as a universal yardstick for fusion. Using models of sensor uncertainty, the representation size guides the integration of widely varying types of data and maximizes the information contributed to a consistent interpretation. In this book, the general theory of minimal representation multisensor fusion is developed and applied in a series of experimental studies of sensor-based robot manipulation. A novel application of differential evolutionary computation is introduced to achieve practical and effective solutions to this difficult computational problem.
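The optimization step mentioned above, differential evolution applied to minimize a fusion criterion, can be sketched roughly as follows. This is an illustrative implementation of the classic DE/rand/1/bin scheme, not the book's actual algorithm; the sphere objective is a toy stand-in for a representation-size measure, and all names and parameter values are assumptions:

```python
import random

def differential_evolution(objective, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimize `objective` over the box `bounds` using DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    # initialize a random population inside the bounds
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    scores = [objective(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            # pick three distinct individuals other than the target i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # mutation: donor = x_a + F * (x_b - x_c)
            donor = [pop[a][d] + F * (pop[b][d] - pop[c][d]) for d in range(dim)]
            # binomial crossover with one guaranteed donor coordinate
            jrand = rng.randrange(dim)
            trial = [donor[d] if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            # clip the trial vector back into the search box
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            # greedy selection: keep the trial if it is no worse
            s = objective(trial)
            if s <= scores[i]:
                pop[i], scores[i] = trial, s
    best = min(range(pop_size), key=scores.__getitem__)
    return pop[best], scores[best]

# toy objective: a sphere function standing in for a representation-size criterion
x, fx = differential_evolution(lambda v: sum(t * t for t in v), [(-5, 5)] * 3)
```

Greedy one-to-one replacement and the guaranteed donor coordinate (`jrand`) are the standard DE design choices; they keep the population from regressing while ensuring every trial vector differs from its target.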