- Hardcover
Presents the parallel implementation aspects of all major artificial neural network models. The text details implementations on various processor architectures built on different hardware platforms, ranging from large parallel computers to MIMD machines using transputers and DSPs.
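One of the paradigms covered is training-set parallelism for backpropagation, in which the training data is partitioned across processors, each processor computes gradients on its own shard, and the partial gradients are combined before a single weight update. The NumPy sketch below illustrates that general idea only; the toy network, data, and shard count are made-up assumptions, and the code is not taken from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer network: 4 inputs -> 8 hidden units (tanh) -> 1 output.
W1, b1 = rng.standard_normal((4, 8)) * 0.1, np.zeros(8)
W2, b2 = rng.standard_normal((8, 1)) * 0.1, np.zeros(1)

def shard_gradients(X, y):
    """Forward/backward pass over one shard; returns partial gradients and loss."""
    h = np.tanh(X @ W1 + b1)           # hidden activations
    out = h @ W2 + b2                  # linear output
    err = out - y                      # dL/dout for L = 0.5 * sum((out - y)^2)
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)   # backpropagate through tanh
    gW1 = X.T @ dh
    gb1 = dh.sum(axis=0)
    return gW1, gb1, gW2, gb2, 0.5 * float((err**2).sum())

# Synthetic training set, partitioned across P simulated processors.
X = rng.standard_normal((128, 4))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)
P = 4
shards = list(zip(np.array_split(X, P), np.array_split(y, P)))

lr = 0.01
for epoch in range(20):
    # Each "processor" computes gradients on its own shard independently ...
    partials = [shard_gradients(Xs, ys) for Xs, ys in shards]
    # ... then the partial gradients are summed (the communication step)
    # and applied in a single weight update, matching full-batch backprop.
    gW1, gb1, gW2, gb2 = (sum(p[i] for p in partials) for i in range(4))
    loss = sum(p[4] for p in partials)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(f"batch loss after training: {loss:.4f}")
```

Because the full-batch gradient is a sum over training examples, summing the shard gradients reproduces the sequential update exactly; any speedup then hinges on how cheaply the partial gradients can be communicated and combined.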
Other customers were also interested in
- Frédéric Magoules: Parallel Scientific Computing, 184,99 €
- Manish Parashar: Advanced Computational Infrastructures for Parallel and Distributed Adaptive Applications, 209,99 €
- Chao Wang: Domain-Specific Computer Architectures for Emerging Applications, 141,99 €
- Eric Bauer: Service Quality of Cloud-Based Applications, 104,99 €
- Richard M. Fujimoto: Parallel and Distributed Simulation Systems, 222,99 €
- Arslan Munir: Modeling and Optimization of Parallel and Distributed Embedded Systems, 142,99 €
- Robert Linggard: Neural Networks for Vision, Speech and Natural Language, 160,99 €
Note: This item can only be shipped to a German delivery address.
Product details
- Publisher: Wiley
- Number of pages: 412
- Publication date: 14 December 1998
- Language: English
- Dimensions: 260 mm x 183 mm x 27 mm
- Weight: 968 g
- ISBN-13: 9780818683992
- ISBN-10: 0818683996
- Item no.: 22116757
N. Sundararajan and P. Saratchandran are the authors of Parallel Architectures for Artificial Neural Networks: Paradigms and Implementations, published by Wiley.
1. Introduction (N. Sundararajan, P. Saratchandran, Jim Torresen).
2. A Review of Parallel Implementations of Backpropagation Neural Networks
(Jim Torresen, Olav Landsverk).
I: Analysis of Parallel Implementations.
3. Network Parallelism for Backpropagation Neural Networks on a
Heterogeneous Architecture (R. Arularasan, P. Saratchandran, N.
Sundararajan, Shou King Foo).
4. Training-Set Parallelism for Backpropagation Neural Networks on a
Heterogeneous Architecture (Shou King Foo, P. Saratchandran, N.
Sundararajan).
5. Parallel Real-Time Recurrent Algorithm for Training Large Fully
Recurrent Neural Networks (Elias S. Manolakos, George Kechriotis).
6. Parallel Implementation of ART1 Neural Networks on Processor Ring
Architectures (Elias S. Manolakos, Stylianos Markogiannakis).
II: Implementations on a Big General-Purpose Parallel Computer.
7. Implementation of Backpropagation Neural Networks on Large Parallel
Computers (Jim Torresen, Shinji Tomita).
III: Special Parallel Architectures and Application Case Studies.
8. Massively Parallel Architectures for Large-Scale Neural Network
Computations (Yoshiji Fujimoto).
9. Regularly Structured Neural Networks on the DREAM Machine (Soheil Shams,
Jean-Luc Gaudiot).
10. High-Performance Parallel Backpropagation Simulation with On-Line
Learning (Urs A. Müller, Patrick Spiess, Michael Kocheisen, Beat Flepp,
Anton Gunzinger, Walter Guggenbühl).
11. Training Neural Networks with SPERT-II (Krste Asanović, James Beck,
David Johnson, Brian Kingsbury, Nelson Morgan, John Wawrzynek).
12. Concluding Remarks (N. Sundararajan, P. Saratchandran).