€32.99
incl. VAT
Free shipping*
Ready to ship in 6-10 days
  • Paperback

Product Description
With new processor families appearing every few years, it is increasingly difficult to sustain high performance in sparse matrix computations. This monograph studies new methods for sparse matrix factorizations and implements them efficiently while retaining the ease of use of existing solutions. The implementations are timed and analyzed on contemporary processors using a commonly accepted set of test matrices, and the new factorization techniques prove competitive with state-of-the-art software. In addition, an optimization effort is applied to an iterative algorithm that stands out for its numerical robustness; this, too, yields satisfactory performance improvements on the tested computing platforms. The same set of test matrices is used for both investigated techniques to enable an easy comparison, even though the two are customarily treated separately in the literature. Possible extensions of the presented work range from readily conceivable mergers with existing solutions to more evolved schemes that depend on hard-to-predict progress in theoretical and algorithmic research.
About the Author
Piotr's most recent work involves benchmarking, with a primary focus on codes for numerical linear algebra and their applications to software self-adaptation. He also investigates parallel language design issues related to scientific programmers' productivity and code performance, both in the US government's HPCS program and, commercially, at the MathWorks.