88,99 €
incl. VAT
Free shipping*
Ready to ship in 6-10 days
  • Paperback

Product Description
Although sensor fusion is an essential prerequisite for autonomous driving, it entails a number of challenges and potential risks. For example, the commonly used deep fusion networks lack interpretability and robustness. To address these fundamental issues, this book introduces the mechanisms of deep fusion models from the perspective of uncertainty and models the initial risks in order to build a robust fusion architecture.

This book reviews the multi-sensor data fusion methods applied in autonomous driving; its main body is divided into three parts: Basic, Method, and Advance. Starting from the mechanism of data fusion, it reviews the development of automatic perception technology and data fusion technology and gives a comprehensive overview of the various perception tasks based on multimodal data fusion. The book then proposes a series of innovative algorithms for various autonomous driving perception tasks that effectively improve the accuracy and robustness of autonomous driving-related tasks and offer ideas for overcoming the challenges in multi-sensor fusion methods. Furthermore, to move from technical research toward intelligent connected collaboration applications, it explores topics such as practical fusion datasets, vehicle-road collaboration, and fusion mechanisms.

In contrast to the existing literature on data fusion and autonomous driving, this book focuses more on deep fusion methods for perception-related tasks, emphasizes the theoretical explanation of the fusion methods, and fully considers the relevant scenarios from engineering practice. Helping readers acquire an in-depth understanding of fusion methods and theories in autonomous driving, it can serve as a textbook for graduate students and scholars in related fields or as a reference guide for engineers who wish to apply deep fusion methods.

About the Authors
Prof. Xinyu Zhang is an associate professor at the School of Vehicle and Mobility, Tsinghua University. He was a research fellow at the University of Cambridge, UK, in 2008. Since 2014, he has served as Deputy Secretary General of the Chinese Association for Artificial Intelligence. As director of the Tsinghua Mengshi team, he invented the first amphibious autonomous flying car in China and proposed a new method of collaborative fusion of perception information and motion information in three-dimensional traffic. His research interests include multi-modal fusion, unmanned ground vehicles, and flying cars.

Prof. Jun Li is a professor at the School of Vehicle and Mobility, Tsinghua University. He is the President of the China Society of Automotive Engineers and an Academician of the Chinese Academy of Engineering. At the Intelligent Vehicle Design and Safety Technology Research Center, he has led the team's work on the core technologies of intelligent driving, mainly carrying out systems engineering research on the integration of smart city, smart transportation, and smart vehicle (SCSTSV). His research focuses on cutting-edge technologies such as intelligent shared vehicle design, safety of the intended functionality, 5G vehicle equipment, and fusion perception, addressing the core problems of intelligent driving and improving the core competitiveness of intelligent networked vehicles.

Dr. Zhiwei Li supervises master's students at Beijing University of Chemical Technology. In 2020, he was a postdoctoral fellow with Academician Jun Li at Tsinghua University. His main research interests include computer vision, intelligent perception and autonomous driving, and robot system architecture.

Prof. Huaping Liu is a professor at the Department of Computer Science and Technology, Tsinghua University. He serves as an associate editor for various journals, including IEEE Transactions on Automation Science and Engineering, IEEE Transactions on Industrial Informatics, IEEE Robotics and Automation Letters, Neurocomputing, and Cognitive Computation. He has served as an associate editor for ICRA and IROS and on the IJCAI, RSS, and IJCNN Program Committees. His main research interests are robotic perception and learning.

Mo Zhou is currently a doctoral candidate at the School of Vehicle and Mobility, Tsinghua University, supervised by Prof. Jun Li. She received her MS degree in image and video communications and signal processing from the University of Bristol, Bristol, UK. Her research interests include intelligent vehicles, deep learning, environmental perception, and driving safety.

Dr. Li Wang is a postdoctoral fellow in the State Key Laboratory of Automotive Safety and Energy and the School of Vehicle and Mobility, Tsinghua University. He received his PhD degree in mechatronic engineering from the State Key Laboratory of Robotics and System, Harbin Institute of Technology, in 2020. He was a visiting scholar at Nanyang Technological University for two years. He is the author of more than 20 SCI/EI articles. His research interests include autonomous-driving perception, 3D robot vision, and multi-modal fusion.

Zhenhong Zou is an assistant researcher at the School of Vehicle and Mobility, Tsinghua University. He received his BS degree in Information and Computation Science from Beihang University and was subsequently a visiting student at the University of California, Los Angeles, USA, supervised by Prof. Deanna Needell. His research interests include autonomous driving and multi-sensor fusion.