Matthew John Yee-King
Build AI-Enhanced Audio Plugins with C++
- Paperback
Build AI-Enhanced Audio Plugins with C++ explains how to embed artificial intelligence technology inside tools used by audio and music professionals. Through worked examples using Python, C++ and audio plug-in APIs, it demonstrates how to combine these technologies to produce professional, AI-enhanced creative tools.
Product details
- Publisher: Taylor & Francis Ltd
- Pages: 344
- Publication date: 21 June 2024
- Language: English
- Dimensions: 179mm x 254mm x 26mm
- Weight: 676g
- ISBN-13: 9781032430423
- ISBN-10: 1032430427
- Item no.: 70006241
Matthew John Yee-King is a professor in the department of computing at Goldsmiths, University of London. He is an experienced educator as well as the programme director for the University of London's online BSc Computer Science degree.
Part 1: Getting started
1. Introduction to the book
2. Setting up your development environment
3. Installing JUCE
4. Installing and using CMake
5. Set up libtorch
6. Python setup instructions
7. Common development environment setup problems
8. Basic plugin development
9. FM synthesizer plugin

Part 2: ML-powered plugin control: the meta-controller
10. Using regression for synthesizer control
11. Experiment with regression and libtorch
12. The meta-controller
13. Linear interpolating Superknob
14. Untrained Torchknob
15. Training the torchknob
16. Plugin meta-controller
17. Placing plugins in an AudioProcessGraph structure
18. Show a plugin's user interface
19. From plugin host to meta-controller

Part 3: The autonomous music improviser
20. Background: all about sequencers
21. Programming with Markov models
22. Starting the Improviser plugin
23. Modelling note onset times
24. Modelling note duration
25. Polyphonic Markov model

Part 4: Neural audio effects
26. Welcome to neural effects
27. Finite Impulse Responses, signals and systems
28. Convolution
29. Infinite Impulse Response filters
30. Waveshapers
31. Introduction to neural guitar amplifier emulation
32. Neural FX: LSTM network
33. JUCE LSTM plugin
34. Training the amp emulator: dataset
35. Data shapes, LSTM models and loss functions
36. The LSTM training loop
37. Operationalising the model in a plugin
38. Faster LSTM using RTNeural
39. Guide to the projects in the repository
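Part 3 of the book builds an improviser around Markov models of note sequences. As a rough illustration of the underlying idea (not code from the book, which works in C++ with JUCE), a first-order Markov chain over MIDI note numbers learns which note tends to follow which, then generates new sequences by sampling those transitions:

```python
import random
from collections import defaultdict

class MarkovNoteModel:
    """First-order Markov chain over note events (illustrative sketch)."""

    def __init__(self):
        self.transitions = defaultdict(list)

    def train(self, notes):
        # Record which note followed which in the training sequence.
        for current, nxt in zip(notes, notes[1:]):
            self.transitions[current].append(nxt)

    def generate(self, start, length, seed=None):
        rng = random.Random(seed)
        note = start
        out = [note]
        for _ in range(length - 1):
            choices = self.transitions.get(note)
            if not choices:
                # Dead end: this note was never followed by anything,
                # so restart from the start note.
                note = start
            else:
                note = rng.choice(choices)
            out.append(note)
        return out

model = MarkovNoteModel()
model.train([60, 62, 64, 62, 60, 62, 64, 65, 64, 62])  # a short melody
print(model.generate(start=60, length=8, seed=1))
```

A real improviser plugin also has to model onset times and durations (chapters 23-24) and handle polyphony (chapter 25); this sketch covers pitch only.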
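Part 4 grounds its neural effects in classical signal processing: finite impulse responses and convolution (chapters 27-28). The core operation, sketched here generically in Python rather than as the book's C++ code, is a moving weighted sum: each output sample is the kernel dotted with the most recent input samples:

```python
def fir_filter(signal, kernel):
    """Direct-form FIR filtering (illustrative sketch).

    Output sample n is sum over k of kernel[k] * signal[n - k],
    treating samples before the start of the signal as zero.
    """
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, h in enumerate(kernel):
            if n - k >= 0:
                acc += h * signal[n - k]
        out.append(acc)
    return out

# A 3-tap moving average spreads a unit impulse over three equal samples:
# the filter's output for an impulse input is its impulse response.
print(fir_filter([1.0, 0.0, 0.0, 0.0], [1/3, 1/3, 1/3]))
```

In a real-time plugin the same sum is computed per audio block with state carried between calls; the book then contrasts this with IIR filters, waveshapers and learned (LSTM) models of guitar amplifiers.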