Ensemble methods are based on the idea of combining the predictions of several classifiers in order to improve generalization and to compensate for the weaknesses of individual predictors. We distinguish two families of methods: parallel methods (bagging, random forests), in which several predictions are averaged in the hope of a better result through the reduction of the variance of the averaged estimator, and sequential methods (boosting), in which the parameters are adapted iteratively to produce a better mixture. In this work we argue that when the members of an ensemble make different errors, the number of misclassified examples can be reduced compared to a single predictor; a sketch of such a comparison is given below. The resulting performance will be compared using criteria such as classification rate, sensitivity, specificity, recall, etc.
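The following is a minimal sketch, not the experimental setup of this work, illustrating the comparison described above: a single decision tree against a parallel ensemble (bagging, random forest) and a sequential ensemble (AdaBoost), evaluated with classification rate, sensitivity, and specificity. The synthetic dataset and all hyperparameters are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier, AdaBoostClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic binary classification data standing in for a real dataset (assumption).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

models = {
    "single tree": DecisionTreeClassifier(random_state=0),
    "bagging": BaggingClassifier(n_estimators=100, random_state=0),        # parallel
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),  # parallel
    "boosting (AdaBoost)": AdaBoostClassifier(n_estimators=100, random_state=0),  # sequential
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    accuracy = accuracy_score(y_test, y_pred)   # classification rate
    sensitivity = tp / (tp + fn)                # recall on the positive class
    specificity = tn / (tn + fp)                # recall on the negative class
    print(f"{name:20s} acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```

Because the base trees are trained on different bootstrap samples (bagging, random forests) or reweighted data (boosting), their individual errors differ, which is exactly the condition under which averaging or mixing their votes can reduce the number of misclassified examples relative to the single tree.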