Driver inattention is a major contributing factor in road crashes, and inattention is often caused by fatigue. A person's alertness is typically reflected in visual cues such as eyelid movement, gaze movement, head movement, and facial expression, which can be extracted and combined to estimate a drowsiness level. The driver's mental state can also be inferred from EEG signals. This project therefore focuses on combining multiple visual and non-visual cues simultaneously to yield a more robust characterization of fatigue than any single input. The approach combines facial cues such as eyelid movement and yawning for fatigue detection: facial features are first detected, interest points are then traced using Harris corner point detection, and the area covered by these interest points indicates the presence or absence of drowsiness. EEG signals are also used to infer the driver's mental state, making fatigue detection more reliable. The system thus combines these characteristics to help prevent accidents caused by driver fatigue.
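
As a rough illustration of the corner-based area measure described above, the following is a minimal sketch assuming OpenCV's `cv2.cornerHarris` on a pre-cropped grayscale eye or mouth region; the response threshold (`response_frac`) and the open-eye area threshold (`open_area_threshold`) are hypothetical values that would need calibration, and this is not presented as the project's actual implementation.

```python
import cv2
import numpy as np

def corner_coverage_area(region_gray, block_size=2, ksize=3, k=0.04,
                         response_frac=0.01):
    """Detect Harris corner points in a grayscale facial region (e.g. an eye
    or mouth crop) and return the area of their convex hull as a rough
    measure of how widely the interest points are spread."""
    region = np.float32(region_gray)
    response = cv2.cornerHarris(region, block_size, ksize, k)
    # Keep only strong corner responses (threshold fraction is an assumption).
    ys, xs = np.where(response > response_frac * response.max())
    if len(xs) < 3:
        return 0.0  # too few points to enclose any area
    points = np.stack([xs, ys], axis=1).astype(np.int32)
    hull = cv2.convexHull(points)
    return float(cv2.contourArea(hull))

def is_drowsy(eye_region_gray, open_area_threshold=150.0):
    """Flag drowsiness when the corner-point coverage of the eye region
    falls below a calibrated open-eye threshold (hypothetical value)."""
    return corner_coverage_area(eye_region_gray) < open_area_threshold
```

In practice the thresholds would be calibrated per camera setup and per driver, for example by measuring the coverage area over a short window of known alert driving and flagging sustained drops below that baseline.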