16,10 €
incl. VAT
Available immediately via download
  • Format: PDF


Product Description
This dissertation focuses on developing a facial analysis system under near-infrared (NIR) illumination. The use of near-infrared imagery alleviates the difficulty that illumination variation poses for face recognition and enables the system to work under changing ambient lighting conditions. First, a multi-modal database for person authentication under near-infrared illumination is built, giving the research community the possibility to test algorithms for audio- and video-based person authentication under NIR. So far, audio and video data from 74 subjects have been recorded. Ground truths, namely the coordinates of the eyes and lip corners as well as six distinct viseme (mouth shape) classes, were collected manually. All experiments in this work are carried out on this database.

The main part of the face analysis system consists of three key modules: eye detection, feature extraction, and classification. For an automatic face recognition system, reliable detection of the eyes is crucial: on the one hand, the eyes are the most salient and stable facial features; on the other hand, the eye positions are generally used to align and normalize the face. The algorithm presented in this work achieves robust eye detection under NIR. In most cases it exploits the bright pupil effect and relies on a novel thresholding algorithm to select pupil candidates. To increase robustness against eyeglasses and varying pupil intensity when selecting pupil candidates, a symmetry transform is incorporated. An appearance model is then employed for eye verification.

This dissertation proposes Discrete Cosine Transform (DCT)-based features for face recognition. A multi-block fusion scheme is put forward whose aim is to combine global and local features for face recognition in a simple way; the scheme can be used with any algorithm to improve performance in general. The DCT-based features are optimized step by step: block-wise DCT, which emphasizes local information, outperforms holistic DCT, and the AdaBoost algorithm is then employed to select features, which yields better results than plain block-wise DCT. A novel boosting algorithm, CorrAdaBoost, is proposed, which incorporates correlation into the original AdaBoost algorithm. CorrAdaBoost reduces the redundancy between selected features and is more effective than AdaBoost; a cascaded CorrAdaBoost further improves performance and efficiency. The proposed normalization strategy generally improves the performance of every kind of feature.

This work evaluates different classification approaches, namely nearest neighbor, nearest center, Linear Discriminant Analysis (LDA), and Support Vector Machine (SVM). Among all approaches, LDA yields the best performance, followed by the nearest neighbor classifier based on the Manhattan distance.

The work also realizes video-based recognition using the proposed approaches to face representation and classification. An exemplar learning algorithm is implemented, which is a direct way to build a probabilistic representation of each individual from a gallery video. A natural way to make use of multiple frames is temporal integration; three schemes of temporal integration, i.e. the max rule, the majority voting rule, and the probabilistic voting rule, are evaluated. A simultaneous tracking and recognition approach based on the Condensation algorithm is adopted for true video-based face recognition.
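The feature pipeline is only named at a high level above. Purely as an illustration (not the author's implementation), the following minimal Python sketch shows block-wise DCT feature extraction with a simple global/local fusion and nearest-neighbor matching under the Manhattan distance; the block size, coefficient counts, and the low-frequency coefficient ordering are assumptions.

# Hypothetical sketch of block-wise DCT features with global/local fusion.
import numpy as np
from scipy.fftpack import dct

def dct2(block):
    """2-D type-II DCT with orthonormal scaling."""
    return dct(dct(block, axis=0, norm="ortho"), axis=1, norm="ortho")

def low_freq_coeffs(coeffs, k):
    """Return the k lowest-frequency coefficients (anti-diagonal order)."""
    h, w = coeffs.shape
    order = sorted((i + j, i, j) for i in range(h) for j in range(w))
    return np.array([coeffs[i, j] for _, i, j in order[:k]])

def dct_features(face, block=8, k_local=10, k_global=64):
    """Fuse holistic DCT coefficients with block-wise local coefficients."""
    face = face.astype(np.float64)
    feats = [low_freq_coeffs(dct2(face), k_global)]        # global appearance
    h, w = face.shape
    for y in range(0, h - block + 1, block):               # local blocks
        for x in range(0, w - block + 1, block):
            feats.append(low_freq_coeffs(dct2(face[y:y + block, x:x + block]), k_local))
    return np.concatenate(feats)

def nearest_neighbor(probe, gallery_feats, gallery_ids):
    """Nearest-neighbor matching with the Manhattan (L1) distance."""
    d = np.abs(gallery_feats - probe).sum(axis=1)
    return gallery_ids[int(np.argmin(d))]

In the dissertation the coefficients are further selected and weighted by AdaBoost/CorrAdaBoost rather than taken as a fixed low-frequency set; the sketch only illustrates the multi-block fusion idea.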
Exhaustive experiments have been conducted, and the results demonstrate the effectiveness of the proposed algorithms.
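To complement the video-based part of the description, here is a minimal, hypothetical sketch of two of the temporal integration rules mentioned (majority voting and the max rule). The helpers classify_frame and score_frame are assumed per-frame classifiers, not part of the original work; the probabilistic voting rule and the Condensation-based tracking are not shown.

# Hypothetical sketch of temporal integration over video frames.
from collections import Counter

def majority_vote(frames, classify_frame):
    """Majority voting: each frame votes for one identity; most votes wins."""
    votes = Counter(classify_frame(f) for f in frames)
    return votes.most_common(1)[0][0]

def max_rule(frames, score_frame):
    """Max rule: pick the identity with the single highest per-frame score.
    score_frame(frame) is assumed to return a dict {identity: score}."""
    best_id, best_score = None, float("-inf")
    for f in frames:
        for ident, s in score_frame(f).items():
            if s > best_score:
                best_id, best_score = ident, s
    return best_id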

For legal reasons, this download can only be delivered to a billing address in A, B, BG, CY, CZ, D, DK, EW, E, FIN, F, GR, HR, H, IRL, I, LT, L, LR, M, NL, PL, P, R, S, SLO, SK.