This book explores and motivates the need for building homogeneous and heterogeneous multi-core systems for machine learning to enable flexibility and energy efficiency. Coverage focuses on a key challenge of (extreme-)edge computing: the design of energy-efficient and flexible hardware architectures, together with hardware-software co-optimization strategies that enable early design space exploration of such architectures. The authors investigate design solutions for single-core specialized hardware accelerators for machine learning and motivate the step to homogeneous and heterogeneous multi-core systems for greater flexibility and energy efficiency. The advantages of scaling to heterogeneous multi-core systems are demonstrated through the implementation of multiple test chips and architectural optimizations.
Vikram Jain received his M.Sc. degree in Embedded Electronics Systems Design (EESD) from Chalmers University of Technology, Sweden, in 2018, and his PhD degree in Electrical Engineering from KU Leuven, Belgium, in 2023. His PhD research focused on the implementation of energy-efficient digital acceleration and RISC-V processors for machine learning applications at the edge. He was also a visiting researcher at the IIS lab at ETH Zurich, working on the implementation of networks-on-chip. He is currently a postdoctoral researcher at the SpeciaLIzed Computing Ecosystems (SLICE) lab and the Berkeley Wireless Research Center (BWRC) at the University of California, Berkeley, working on heterogeneous integration and chiplet architectures for high-performance computing. He is a recipient of the SSCS Predoctoral Achievement Award in 2023, the SSCS travel grant in 2022, the Lars Pareto travel grant in 2019, and a prestigious research fellowship from the Swedish Institute (SI) in 2016 and 2017.
Marian Verhelst is a full professor at the MICAS laboratories of KU Leuven and a research director at IMEC. Her research focuses on embedded machine learning, hardware accelerators, HW-algorithm co-design and low-power edge processing. She received a PhD from KU Leuven in 2008 and worked as a research scientist at Intel Labs, Hillsboro, OR, from 2008 until 2010. Marian is a member of the board of directors of tinyML and active in the TPCs of DATE, ISSCC, VLSI and ESSCIRC; she was the chair of tinyML 2021 and TPC co-chair of AICAS 2020. Marian is an IEEE SSCS Distinguished Lecturer, was a member of the Young Academy of Belgium, an associate editor for TVLSI, TCAS-II and JSSC, and a member of the STEM advisory committee to the Flemish Government. Marian received the laureate prize of the Royal Academy of Belgium in 2016, the 2021 Intel Outstanding Researcher Award, and the André Mischke YAE Prize for Science and Policy in 2021. She is an IEEE Fellow and holds two ERC grants (the ERC Starting Grant Re-Sense and the ongoing ERC Consolidator Grant BINGO).
Table of Contents
Chapter 1: Introduction.- Chapter 2: Algorithmic Background for Machine Learning.- Chapter 3: Scoping the Landscape of (Extreme) Edge Machine Learning Processors.- Chapter 4: Hardware-Software Co-optimization through Design Space Exploration.- Chapter 5: Energy Efficient Single-core Hardware Acceleration.- Chapter 6: TinyVers: A Tiny Versatile All-Digital Heterogeneous Multi-core System-on-Chip.- Chapter 7: DIANA: Digital and ANAlog Heterogeneous Multi-core System-on-Chip.- Chapter 8: Networks-on-chip to Enable Large-scale Multi-core ML Acceleration.- Chapter 9: Conclusion.