Originally published in 1991, this title was the result of a symposium held at Harvard University. It presents some of the exciting interdisciplinary developments of the time that clarify how animals and people learn to behave adaptively in a rapidly changing environment. The contributors focus on aspects of how recognition learning, reinforcement learning, and motor learning interact to generate adaptive goal-oriented behaviours that can satisfy internal needs - an area of inquiry as important for understanding brain function as it is for designing new types of freely moving autonomous robots. Since the authors agree that a dynamic analysis of system interactions is needed to understand these challenging phenomena - and neural network models provide a natural framework for representing and analysing such interactions - all the articles either develop neural network models or provide biological constraints for guiding and testing their design.
Michael L. Commons is Lecturer and Research Associate in the Department of Psychiatry at Harvard Medical School, Massachusetts Mental Health Center, and Director of the Dare Institute. He did his undergraduate work at the University of California, Berkeley, and then at the University of California, Los Angeles, where in 1965 he obtained a B.A. in mathematics and psychology. In 1967 he received his M.A., and in 1973 his Ph.D., in psychology from Columbia University. Before coming to Harvard University in 1977 as a postdoctoral fellow and then becoming a research associate in psychology, he was an assistant professor at Northern Michigan University. He has co-edited Quantitative Analyses of Behavior, volumes 1-11, and Beyond Formal Operations: Late Adolescent and Adult Cognitive Development. His research interest is the quantitative analysis of the construction and understanding of reality as it develops across the life span, especially as these elements affect decision processes, life-span attachment and alliance formation, and the ethical, social, cross-cultural, educational, legal, and private sectors. Stephen Grossberg received his graduate training at Stanford University and Rockefeller University, and was a Professor at M.I.T. He is Wang Professor of Cognitive and Neural Systems at Boston University, where he is the founder and Director of the Center for Adaptive Systems, as well as the founder and Co-Director of the graduate program in Cognitive and Neural Systems. He also organized the Boston Consortium for Behavioral and Neural Studies, which includes investigators from six Boston-area institutions. He founded and was first President of the International Neural Network Society, and is editor-in-chief of the Society's journal, Neural Networks.
During the past few decades, he and his colleagues at the Center for Adaptive Systems have pioneered and developed a number of the fundamental principles, mechanisms, and architectures that form the foundation for contemporary neural network research, including contributions to content-addressable memory; associative learning; biological vision and multidimensional image processing; cognitive information processing; adaptive pattern recognition; speech and language perception, learning, and production; adaptive robotics; conditioning and attention; development; biological rhythms; certain mental disorders; and their substrates in neurophysiological and anatomical mechanisms. John E. R. Staddon is James B. Duke Professor of Psychology, and Professor of Zoology and Neurobiology at Duke University, where he has taught since 1967. His research is on the evolution and mechanisms of learning in humans and animals. He is the author of numerous experimental and theoretical papers and two books, Adaptive Behavior and Learning (1983, Cambridge University Press) and Learning: An Introduction to the Principles of Adaptive Behavior (with R. Ettinger, 1989, Harcourt Brace Jovanovich).
Table of Contents
About the Editors
About the Contributors
Preface
Part 1: Models of Classical Conditioning
1. Memory Function in Neural and Artificial Networks (Daniel L. Alkon, Thomas P. Vogl, Kim T. Blackwell and David Tam)
2. Empirically Derived Adaptive Elements and Networks Simulate Associative Learning (Douglas A. Baxter, Dean V. Buonomano, Jennifer L. Raymond, David G. Cook, Frederick M. Kuenzi, Thomas J. Carew and John H. Byrne)
3. Adaptive Synaptogenesis Can Complement Associative Potentiation/Depression (William B. Levy and Costa M. Colbert)
4. A Neural Network Architecture for Pavlovian Conditioning: Reinforcement, Attention, Forgetting, Timing (Stephen Grossberg)
5. Simulations of Conditioned Perseveration and Novelty Preference from Frontal Lobe Damage (Daniel S. Levine and Paul S. Prueitt)
6. Neural Dynamics and Hippocampal Modulation of Classical Conditioning (Nestor A. Schmajuk and James J. DiCarlo)
7. Implementing Connectionist Algorithms for Classical Conditioning in the Brain (John W. Moore)
Part 2: Models of Instrumental Conditioning
8. Models of Acquisition and Preference (Michael L. Commons, Eric W. Bing, Charla C. Griffy and Edward J. Trudeau)
9. A Connectionist Model of Timing (Russell M. Church and Hilary Broadbent)
10. A Connectionist Approach to Conditional Discriminations: Learning, Short-Term Memory, and Attention (William S. Maki and Adel M. Abunawass)
11. On the Assignment-of-Credit Problem in Operant Learning (John E. R. Staddon and Y. Zhang)
12. Behavioral Diversity, Search and Stochastic Connectionist Systems (Stephen José Hanson)
Author Index
Subject Index