This thesis presents research that expands the collective knowledge on the accountability and transparency of machine learning (ML) models developed for complex reasoning tasks over text. In particular, the presented results facilitate analysis of the reasons behind the outputs of ML models and assist in detecting and correcting potential harms. The thesis introduces two new methods for building accountable ML models; advances the state of the art in generating textual explanations, which are further improved to be fluent, easy to read, and to contain logically connected multi-chain arguments; and makes substantial contributions to diagnostics for explainability approaches. All results are empirically tested on complex reasoning tasks over text, including fact checking, question answering, and natural language inference.
This book is a revised version of the PhD dissertation written by the author to receive her PhD from the Faculty of Science, University of Copenhagen, Denmark. In 2023, it won the Informatics Europe Best Dissertation Award, granted to the most outstanding European PhD thesis in the field of computer science.