Machine Learning for Low-Latency Communications presents the principles and practice of deep learning methodologies for mitigating three critical latency components: access latency, transmission latency, and processing latency. In particular, the book develops learning-to-estimate methods based on algorithm unrolling and multi-armed bandits, which reduce access latency by enlarging the number of concurrent transmissions supported with a given pilot length. It then presents task-oriented learning-to-compress methods based on the information bottleneck principle, which reduce transmission latency by avoiding unnecessary data transmission. Lastly, it describes three learning-to-optimize methods for processing latency reduction, leveraging graph neural networks, multi-agent reinforcement learning, and domain knowledge.

Low-latency communications has attracted considerable attention from both academia and industry, given its potential to support emerging applications such as industrial automation, autonomous vehicles, augmented reality, and telesurgery. Despite this promise, achieving low-latency communications is critically challenging: supporting massive connectivity incurs long access latency, while transmitting high-volume data leads to substantial transmission latency.
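The "learning to estimate via algorithm unrolling" mentioned above refers to unfolding a fixed number of iterations of a classical sparse-recovery solver (such as ISTA) into network layers, whose step sizes and thresholds can then be trained end to end. A minimal sketch of the idea, with hand-set (untrained) parameters and illustrative names not taken from the book:

```python
import numpy as np

def soft_threshold(x, theta):
    # Proximal operator of the l1 norm: shrinks entries toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unrolled_ista(A, y, num_layers=200, theta=0.05):
    """Each 'layer' is one ISTA iteration; in learned unrolling the
    per-layer step size and threshold would be trainable parameters."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(num_layers):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * theta)
    return x

# Toy joint activity detection / channel estimation: recover a sparse
# activity vector (few active devices) from short pilot measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # pilot matrix
x_true = np.zeros(100)
x_true[[3, 27, 81]] = [1.0, -0.8, 0.6]            # 3 active devices
y = A @ x_true
x_hat = unrolled_ista(A, y)
active = np.argsort(-np.abs(x_hat))[:3]           # detected active devices
```

Because only a few devices are active, 40 pilot measurements suffice to identify the 3 nonzero entries among 100 candidates; unrolling lets a network learn parameters that reach this accuracy in far fewer layers than classical ISTA needs iterations.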
Yong Zhou received the B.Sc. and M.Eng. degrees from Shandong University, Jinan, China, in 2008 and 2011, respectively, and the Ph.D. degree from the University of Waterloo, Waterloo, ON, Canada, in 2015. From November 2015 to January 2018, he was a postdoctoral research fellow in the Department of Electrical and Computer Engineering at The University of British Columbia, Vancouver, Canada. He is currently an Assistant Professor in the School of Information Science and Technology, ShanghaiTech University, Shanghai, China. He served as a track co-chair of IEEE VTC 2020-Fall and VTC 2023-Spring, and as a general co-chair of the IEEE ICC 2022 Workshop on Edge Artificial Intelligence for 6G. He co-authored the book Mobile Edge Artificial Intelligence: Opportunities and Challenges (Elsevier, 2021). His research interests include 6G communications, edge intelligence, and the Internet of Things.
Table of Contents
Part 1: Introduction and Overview
1. Introduction and overview
Part 2: Learning to Estimate for Access Latency Reduction
2. Learning to estimate via group-sparse based algorithm unrolling
3. Learning to estimate via proximal gradient-based algorithm unrolling
4. Learning to detect via multiarmed bandit (MAB)
Part 3: Learning to Compress for Transmission Latency Reduction
5. Learning to compress via information bottleneck
6. Learning to compress via robust information bottleneck with digital modulation
7. Learning to compress for multi-device cooperative edge inference
Part 4: Learning to Optimize for Processing Latency Reduction
8. Learning to optimize via graph neural networks
9. Learning to optimize via knowledge guidance
10. Learning to optimize via decentralized multi-agent reinforcement learning
Part 5: Conclusions
11. Conclusions and Future Research Directions