€42.95
incl. VAT
Free shipping*
Ready to ship in 1-2 weeks
  • Paperback


Product Description
Bachelor Thesis from the year 2024 in the subject Computer Sciences - Artificial Intelligence, grade: 100/100, Baden-Wuerttemberg Cooperative State University (DHBW) (Economics), course: Business Information Systems - Data Science, language: English, abstract: Large machine learning models, such as language models, have garnered increased attention due to their evolving capabilities. However, their performance gains are accompanied by growing sizes and deployment costs, necessitating effective inference optimization techniques. This thesis proposes MLP-Rank, a novel method utilizing centrality measures from graph theory to prune artificial neural networks, reducing both the computational requirements and the memory footprint of the model. Specifically, we devise a method for creating a weighted directed acyclic graph representation of multilayer perceptron models, upon which the weighted PageRank centrality measure is applied to compute node importance scores. Subsequently, the model is pruned based on these scores, resulting in structured sparsity patterns in its parameter matrices. Furthermore, we introduce a new implementation of weighted PageRank, tailored to the structure of neural network models, which converges within a single power iteration and ensures efficient computational complexity for MLP-Rank. We provide theoretical derivations and proofs underpinning the effectiveness of MLP-Rank pruning, along with extensive empirical evaluations comparing it to established baseline methods. MLP-Rank demonstrates superior performance over methods relying on random scores, L1-norm, activations, and structured Wanda in the majority of comparisons. Notably, it achieves the highest average accuracy retention across all experiments.
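
The listing reproduces only the abstract, but the pipeline it outlines (build a weighted graph over the MLP, score neurons with a weighted-PageRank centrality, prune the lowest-scoring neurons to obtain structured sparsity) can be sketched roughly as follows. This is a loose, hypothetical NumPy illustration of that idea, not the thesis's actual MLP-Rank implementation: the function names, the single per-layer power-iteration update, and the row/column zeroing scheme are all assumptions made here for illustration.

```python
import numpy as np

def mlp_rank_scores(weights, damping=0.85):
    """Hypothetical sketch: score the neurons of an MLP with a
    weighted-PageRank-style centrality over the network's weight graph.

    `weights` is a list of 2-D arrays; weights[l] has shape
    (n_out, n_in) and maps layer l to layer l+1.
    """
    scores = []
    # Start from a uniform importance distribution over the input layer.
    rank = np.full(weights[0].shape[1], 1.0 / weights[0].shape[1])
    for W in weights:
        A = np.abs(W)                    # edge weights = |w_ij|
        col_sums = A.sum(axis=0)
        col_sums[col_sums == 0] = 1.0    # guard against dead neurons
        P = A / col_sums                 # column-stochastic transition matrix
        n_out = W.shape[0]
        # One power-iteration step of weighted PageRank per layer; the
        # abstract states the thesis's tailored variant converges in a
        # single power iteration, which this loosely mimics.
        rank = (1 - damping) / n_out + damping * (P @ rank)
        scores.append(rank)
    return scores

def prune_structured(weights, scores, sparsity=0.5):
    """Zero out whole neurons (rows of W_l and columns of W_{l+1})
    with the lowest importance scores -> structured sparsity."""
    pruned = [W.copy() for W in weights]
    for l, s in enumerate(scores[:-1]):  # leave the output layer intact
        k = int(len(s) * sparsity)
        drop = np.argsort(s)[:k]         # least-important hidden neurons
        pruned[l][drop, :] = 0.0         # rows producing those neurons
        pruned[l + 1][:, drop] = 0.0     # columns consuming them
    return pruned

# Toy usage on a random 32-64-10 MLP:
rng = np.random.default_rng(0)
weights = [rng.standard_normal((64, 32)), rng.standard_normal((10, 64))]
pruned = prune_structured(weights, mlp_rank_scores(weights), sparsity=0.5)
```

Zeroing matched rows and columns removes entire neurons rather than scattered weights, which is what makes the resulting sparsity pattern structured and directly exploitable at inference time.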
Note: This item can only be shipped to a German delivery address.