Our project comprises an analysis of existing data compression algorithms so that we can draw on their useful features to build a more efficient algorithm. We have also carried out a detailed study of how efficiently these algorithms perform on different document types, with the aim of building a comprehensive data compression technique. Each algorithm must be studied and analyzed separately so that we can incorporate its strengths and attempt to rectify its flaws. To elucidate the underlying data compression concepts clearly, we discuss each standard algorithm in detail.

There is a close connection between machine learning and compression: a system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by applying arithmetic coding to its output distribution), while an optimal compressor can be used for prediction (by finding the symbol that compresses best given the previous history). This equivalence has been used as justification for data compression as a benchmark for "general intelligence".
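To make the prediction-to-compression direction of this equivalence concrete, the following is a minimal sketch (not part of the project's implementation): a toy arithmetic coder driven by a predictive model. The AdaptiveModel class here is a hypothetical stand-in predictor based on simple frequency counts with Laplace smoothing; any model that outputs P(next symbol | history) could be substituted. Exact fractions are used to keep the example self-contained, whereas practical coders use fixed-precision integers with renormalization.

```python
from fractions import Fraction
from collections import Counter

class AdaptiveModel:
    """Predicts P(symbol | history) from running counts (Laplace smoothing)."""
    def __init__(self, alphabet):
        self.alphabet = list(alphabet)
        self.counts = Counter({s: 1 for s in self.alphabet})

    def distribution(self):
        total = sum(self.counts.values())
        return {s: Fraction(self.counts[s], total) for s in self.alphabet}

    def update(self, symbol):
        self.counts[symbol] += 1

def encode(message, alphabet):
    """Narrow the interval [low, high) once per symbol using the model's predictions."""
    model = AdaptiveModel(alphabet)
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        dist = model.distribution()
        width = high - low
        cum = Fraction(0)
        for s in model.alphabet:
            if s == sym:
                high = low + width * (cum + dist[s])
                low = low + width * cum
                break
            cum += dist[s]
        model.update(sym)
    # Any number inside the final interval identifies the message.
    return (low + high) / 2

def decode(code, length, alphabet):
    """Replay the same predictions to recover the message from the code point."""
    model = AdaptiveModel(alphabet)
    low, high = Fraction(0), Fraction(1)
    out = []
    for _ in range(length):
        dist = model.distribution()
        width = high - low
        cum = Fraction(0)
        for s in model.alphabet:
            nxt = cum + dist[s]
            if low + width * cum <= code < low + width * nxt:
                out.append(s)
                high = low + width * nxt
                low = low + width * cum
                model.update(s)
                break
            cum = nxt
    return ''.join(out)

if __name__ == "__main__":
    msg = "abracadabra"
    code = encode(msg, "abcdr")
    assert decode(code, len(msg), "abcdr") == msg
```

Because the encoder and decoder share the same predictor and update it with the same symbols, the interval arithmetic assigns roughly -log2 P(symbol) bits to each symbol, so a better predictive model yields a shorter code, which is exactly the link to compression performance described above.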