Artificial Neural Networks (ANNs) have become a powerful tool in decision-making. They have many qualities that attract users: the ability to learn from dynamic data through internal adjustment of weights, fast and easy computation, robust solutions in the presence of noise, and accurate predictions on previously unseen examples from the problem domain. However, ANNs have one major drawback: they operate as a "black box" technology, in which input is supplied to a trained network and processed opaquely. Because of this missing transparency, network structures are hard to interpret. Multilayer and recurrent networks complicate the problem further, especially when genetic algorithms produce the weights of an ANN, since direct knowledge of the network's concrete workings is then necessary. Techniques such as CART and C4.5 produce transparent models that yield comprehensible results, but they are generally less accurate. We present a perspective on rule extraction (RE), through which explanation facilities can be added to ANNs.
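As a concrete illustration of what RE can look like in practice, the sketch below takes the pedagogical (black-box) approach: a CART-style decision tree is fitted to the input/output behaviour of a trained network, and its if-then rules serve as the explanation. The library (scikit-learn), dataset, and hyperparameters are illustrative assumptions, not a method prescribed by this paper.

```python
# A minimal sketch of pedagogical rule extraction (an assumption for
# illustration, not the paper's specific method): a shallow CART tree
# is fitted to the predictions of a trained network, so its if-then
# rules approximate the network's behaviour in a comprehensible form.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Train the opaque model: a multilayer perceptron.
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(X_train, y_train)

# 2. Re-label the training inputs with the network's own outputs,
#    so the surrogate mimics the network rather than the raw data.
ann_labels = ann.predict(X_train)

# 3. Fit a shallow CART tree to the network's input/output behaviour.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, ann_labels)

# 4. Print the extracted if-then rules and measure fidelity: how often
#    the rules reproduce the network's predictions on unseen data.
print(export_text(surrogate, feature_names=load_iris().feature_names))
fidelity = (surrogate.predict(X_test) == ann.predict(X_test)).mean()
print(f"Fidelity to the ANN on the test set: {fidelity:.2%}")
```

The key design choice in this family of techniques is that the tree is trained on the network's outputs rather than the original labels, so the extracted rules explain the network itself; fidelity to the network, not raw accuracy, is the quantity being maximized.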