Machine learning has become a key component of decision-making processes across a wide array of applications, ranging from autonomous vehicles to malware detection. However, despite their high accuracy, these algorithms have been shown to exhibit vulnerabilities through which they can be deceived into returning predictions of an attacker's choosing. Carefully crafted adversarial objects can therefore undermine trust in machine learning systems by compromising the reliability of their predictions, irrespective of the field in which they are deployed. The goal of this book is to improve the understanding of adversarial attacks, particularly in the malware context, and to leverage that knowledge to explore defenses against adaptive adversaries. Furthermore, it aims to study systemic weaknesses whose identification can improve the resilience of machine learning models.