  • Paperback


Product Description
Beyond simulation and algorithm development, many developers increasingly use MATLAB for product deployment in computationally heavy fields. This often demands that MATLAB code run faster by leveraging the distributed parallelism of Graphics Processing Units (GPUs). While MATLAB successfully provides high-level functions as a simulation tool for rapid prototyping, the underlying details and knowledge needed to utilize GPUs make MATLAB users hesitate to take that step. Accelerating MATLAB with GPUs offers a primer on bridging this gap.

Starting with the basics — setting up MATLAB for CUDA (on Windows, Linux, and Mac OS X) and profiling — it then guides users through advanced topics such as CUDA libraries. The authors share their experience developing algorithms using MATLAB, C++, and GPUs for huge datasets, modifying MATLAB code to better utilize the computational power of GPUs, and integrating it into commercial software products. Throughout the book, they present many examples that can serve as templates for C-MEX and CUDA code in readers' own projects. Example code can be downloaded from the publisher's website: http://booksite.elsevier.com/9780124080805/

Note: This item can only be shipped to a German delivery address.
About the Author
Jung W. Suh is a senior algorithm engineer and research scientist at KLA-Tencor. Dr. Suh received his Ph.D. from Virginia Tech in 2007 for his 3D medical image processing work. He was involved in the development of MPEG-4 and Digital Mobile Broadcasting (DMB) systems in Samsung Electronics. He was a senior scientist at HeartFlow, Inc., prior to joining KLA-Tencor. His research interests are in the fields of biomedical image processing, pattern recognition, machine learning and image/video compression. He has more than 30 journal and conference papers and 6 patents.
Reviews
"This truly is a practical primer. It is well written and delivers what it promises. Its main contribution is that it will assist 'naïve' programmers in advancing their code optimization capabilities for graphics processing units (GPUs) without any agonizing pain." --Computing Reviews, July 2, 2014

"Suh and Kim show graduate students and researchers in engineering, science, and technology how to use a graphics processing unit (GPU) and the NVIDIA company's Compute Unified Device Architecture (CUDA) to process huge amounts of data without losing the many benefits of MATLAB. Readers are assumed to have at least some experience programming MATLAB, but not sufficient background in programming or computer architecture for parallelization." --ProtoView.com, February 2014
