In this thesis, we study the robustness and generalization properties of Deep Neural Networks (DNNs) under various noise regimes arising from corrupted inputs or labels. Such corruptions can be either random or intentionally crafted to fool the target DNN. Inputs corrupted by maliciously designed perturbations are known as adversarial examples and have been shown to severely degrade the performance of DNNs. However, due to the non-linearity of DNNs, crafting such perturbations is non-trivial. [...]