Correctly identifying semantic entities and successfully disambiguating the relations between them and their predicates is an important and necessary step for successful natural language processing applications, such as text summarization, question answering, and machine translation. Researchers have studied this problem, known as semantic role labeling (SRL), as a machine learning task since 2000. However, even after an optimal global inference algorithm was used to combine several SRL systems, the growth in SRL performance appears to have reached a plateau. Syntactic parsing is the bottleneck of semantic role labeling, and robustness is the ultimate goal. In this book, we investigate ways to train a better syntactic parser and to increase the robustness of SRL systems. We demonstrate that parse trees augmented with semantic role markups can serve as suitable training data for a parser used in an SRL system. For system robustness, we propose a new set of semantic roles that is easier to learn because the roles are less verb-dependent than the original PropBank roles. As a result, an SRL system trained on the new roles achieves significantly better robustness.