This book discusses innovative ideas in the design, modelling, implementation, and optimization of hardware platforms for neural networks. The book provides an overview of this emerging field, from principles to applications, for researchers, postgraduate students and engineers who work on learning-based services and hardware platforms.
1. Part I: Deep learning and neural networks: concepts and models
   * Chapter 1: An introduction to artificial neural networks
   * Chapter 2: Hardware acceleration for recurrent neural networks
   * Chapter 3: Feedforward neural networks on massively parallel architectures
2. Part II: Deep learning and approximate data representation
   * Chapter 4: Stochastic-binary convolutional neural networks with deterministic bit-streams
   * Chapter 5: Binary neural networks
3. Part III: Deep learning and model sparsity
   * Chapter 6: Hardware and software techniques for sparse deep neural networks
   * Chapter 7: Computation reuse-aware accelerator for neural networks
4. Part IV: Convolutional neural networks for embedded systems
   * Chapter 8: CNN agnostic accelerator design for low latency inference on FPGAs
   * Chapter 9: Iterative convolutional neural network (ICNN): an iterative CNN solution for low power and real-time systems
5. Part V: Deep learning on analog accelerators
   * Chapter 10: Mixed-signal neuromorphic platform design for streaming biomedical signal processing
   * Chapter 11: Inverter-based memristive neuromorphic circuit for ultra-low-power IoT smart applications