€39.80
incl. VAT
Free shipping*
Ready to ship in over 4 weeks
  • Hardcover

Product description
In an era of complex deep learning architectures such as transformers, CNNs, and LSTM cells, a persistent challenge remains: the hunger for labeled data and high energy consumption. This dissertation explores the Echo State Network (ESN), a recurrent neural network variant whose readout is trained efficiently by linear regression. This efficiency, together with the architecture's simplicity, suggests pathways to resource-efficient, adaptable deep learning. Systematically deconstructing the ESN architecture into flexible modules, the work introduces basic ESN models with random weights and efficient deterministic ESN models as baselines. Diverse unsupervised pre-training methods for ESN components are evaluated against these baselines. Rigorous benchmarking across datasets, including time-series classification and audio recognition, shows that the ESN models perform competitively with state-of-the-art approaches. The identified use cases guiding model selection, together with the limitations of the training methods, highlight the role of the proposed ESN models in bridging reservoir computing and deep learning.
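
To make the core idea concrete for readers new to reservoir computing, below is a minimal sketch of an ESN of the kind the description refers to: a fixed random reservoir whose linear readout is fit in closed form by ridge regression. All names and hyperparameters here (reservoir_size, spectral_radius, ridge) are illustrative assumptions, not the specific models proposed in the dissertation.

    import numpy as np

    # Minimal ESN sketch: fixed random weights, only the readout is trained.
    # Hyperparameters are illustrative assumptions, not from the dissertation.
    rng = np.random.default_rng(0)
    n_in, reservoir_size, n_out = 1, 200, 1
    spectral_radius, ridge = 0.9, 1e-6

    # Random input and recurrent weights; never updated after initialization.
    W_in = rng.uniform(-0.5, 0.5, (reservoir_size, n_in))
    W = rng.uniform(-0.5, 0.5, (reservoir_size, reservoir_size))
    # Rescale to the target spectral radius (a common heuristic for the
    # echo state property).
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

    def run_reservoir(U):
        """Collect reservoir states for an input sequence U of shape (T, n_in)."""
        x = np.zeros(reservoir_size)
        states = []
        for u in U:
            x = np.tanh(W_in @ u + W @ x)
            states.append(x)
        return np.array(states)

    # Toy task: one-step-ahead prediction of a sine wave.
    t = np.linspace(0, 20 * np.pi, 2000)
    U, Y = np.sin(t)[:-1, None], np.sin(t)[1:, None]

    X = run_reservoir(U)
    # Closed-form ridge regression for the readout:
    # W_out = Y^T X (X^T X + ridge * I)^(-1)
    W_out = Y.T @ X @ np.linalg.inv(X.T @ X + ridge * np.eye(reservoir_size))
    print("train MSE:", np.mean((X @ W_out.T - Y) ** 2))

Because training reduces to a single linear solve rather than backpropagation through time, this setup illustrates the resource efficiency the description emphasizes.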