Edges define the boundaries between regions in an image, which helps with segmentation and object recognition. They mark where shadows fall or where any other distinct change in image intensity occurs. Edges arise from variations in physical properties of the scene, such as surface illumination, shadows, geometry, and the reflectance of objects. The process of extracting these feature points is called edge detection. Edge features play an important role in object identification for machine vision, image segmentation, 3D reconstruction, and other image processing systems. Morphological edge detectors rely on simple addition/subtraction and max/min operations. Since different edge detectors work better under different conditions, it would be ideal to have an algorithm that employs multiple edge detectors, applying each one when the scene conditions best suit its method of detection. Building such a system first requires knowing which edge detectors perform better under which conditions.
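As an illustration of the morphological approach mentioned above, the following is a minimal sketch of one common morphological edge detector, the morphological gradient (grey dilation minus grey erosion). It assumes a grayscale image stored as a NumPy array and uses SciPy's `ndimage` routines; the function name, neighbourhood size, and synthetic test image are illustrative choices, not part of the original text.

```python
import numpy as np
from scipy import ndimage


def morphological_gradient(image: np.ndarray, size: int = 3) -> np.ndarray:
    """Edge map via the morphological gradient: dilation minus erosion.

    Grey dilation takes the local maximum and grey erosion the local
    minimum over a size x size neighbourhood, so their difference is
    large wherever the intensity changes sharply, i.e. at edges.
    """
    dilated = ndimage.grey_dilation(image, size=(size, size))
    eroded = ndimage.grey_erosion(image, size=(size, size))
    # Cast to a signed type before subtracting to avoid unsigned wrap-around.
    return dilated.astype(np.int32) - eroded.astype(np.int32)


if __name__ == "__main__":
    # Synthetic test image: a bright square on a dark background.
    img = np.zeros((64, 64), dtype=np.uint8)
    img[16:48, 16:48] = 200
    edges = morphological_gradient(img)
    print("non-zero edge pixels:", np.count_nonzero(edges))
```

The same max/min structure underlies other morphological detectors (e.g. using only dilation or only erosion against the original image), which is why they remain computationally cheap compared with gradient- or convolution-based operators.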