A biometric recognition system is developed that uses a static digital image of the human face as the biometric feature. Detecting and recognizing human faces in photographs and video sequences is a growing problem in the field of computer vision, with many practical applications at present, such as surveillance, videoconferencing, and access control. The objective is to return as a result the five people in the database who most closely resemble the person in the test image.

The problem of face recognition can be divided into two phases: detection of the face within the image, and recognition. The detection phase is based mainly on detecting skin in the image. Candidate skin regions are then selected as possible faces and validated through "maps" of the eyes and mouth. In addition, an alternative face-detection method is applied if the previous one has not detected any face: it takes the largest skin region found in the image, fits an ellipse to its characteristics, and returns as the face the part of the image that coincides with the ellipse.

In the recognition phase, the detected image regions are taken as persons. PCA is used to extract the features that represent the images, and these features are then used to train and simulate neural networks. From the outputs of the neural networks, the images in the database that most closely resemble the face in the test image are selected. The evaluation of the implemented system shows the great influence of the type of images used for recognition, with much better results when the images meet certain characteristics.
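As an illustration of the recognition stage just described (PCA features feeding a neural network whose outputs rank the database identities), a minimal sketch follows. The abstract does not specify the number of retained components, the network architecture, or the software used, so the choices below (scikit-learn's PCA and MLPClassifier, 50 components, one hidden layer) and the function names are assumptions made purely for illustration.

    # Minimal sketch of the PCA + neural-network recognition stage (illustrative only;
    # the component count, network architecture and library choices are assumptions,
    # not taken from the book). Faces are assumed already detected and cropped to a
    # common size, then flattened into row vectors.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    def train_recognizer(faces, labels, n_components=50):
        # faces: (n_samples, height*width) array of flattened face crops
        pca = PCA(n_components=n_components).fit(faces)   # eigenface-style projection
        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)
        net.fit(pca.transform(faces), labels)
        return pca, net

    def five_best_matches(pca, net, test_face):
        # Rank database identities by the network's output and keep the five highest.
        scores = net.predict_proba(pca.transform(test_face.reshape(1, -1)))[0]
        best = np.argsort(scores)[::-1][:5]
        return [(net.classes_[i], scores[i]) for i in best]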
The framework of this book is the digital restoration of images, that is, the process of recovering an original image that has been degraded by imperfections of the acquisition system: blur and noise. Undoing this degradation is an ill-conditioned problem, because direct inversion by least squares amplifies the noise at high frequencies. Mathematical regularization is therefore used as a means of including a priori information about the image, which stabilizes the solution. The first part of the report reviews certain state-of-the-art algorithms, which are later used as methods of comparison in the experiments.

To solve the problem by means of regularization, image restoration has some prerequisites. First, it is necessary to model the behavior of the image outside its borders, owing to the non-local nature of the convolution that models the degradation. Ignoring the borders in the restoration gives rise to the artifact known as boundary ringing. Second, the restoration algorithms depend on a significant number of parameters, divided into three groups: parameters related to the degradation, to the noise, and to the original image. All of them require sufficiently precise a priori estimates, because small errors with respect to their real values produce significant deviations in the restoration results. The boundary problem and the sensitivity to estimates are the objectives addressed in this book by means of iterative algorithms.

The first of these algorithms tackles the boundary problem, taking as the real observation an image truncated to the field of view. To solve this nonlinearity, a neural network is used that minimizes a cost function defined mainly by total-variation regularization, without including any kind of boundary information or requiring prior training. As a result, a restored image free of ringing effects is obtained within the field of view and, in addition, the truncated borders are reconstructed up to the original size. The algorithm is based on the technique of energy back-propagation, by which the network becomes an iterative cycle of two processes, forward and backward, that simulate a restoration and a degradation at each iteration.

Following the same iterative restoration-degradation concept, a second algorithm works in the frequency domain to reduce the dependence on parameter estimates. To this end, a new desensitized restoration filter is obtained as the result of applying an iterative algorithm to an original filter. By studying the sensitivity properties of this filter and establishing a criterion for the number of iterations, an expression for the desensitization algorithm is derived and particularized to the Wiener and Tikhonov filters. The results of the experiments demonstrate the good behavior of the filter with respect to the noise-dependent error: the estimate made most robust is the one corresponding to the noise parameters, although the desensitization also extends to the other estimates.
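To make the restoration-degradation cycle of the first algorithm more concrete, the sketch below shows a generic, untrained iterative scheme that descends a data-fidelity plus total-variation cost by alternating a simulated degradation (forward) with a correction of the estimate (backward). It is only a conventional gradient-descent formulation under assumed periodic boundaries; it does not reproduce the book's network or its reconstruction of the truncated borders, and the function name, step size and regularization weight are illustrative assumptions.

    # Generic sketch of iterative restoration-degradation with total-variation
    # regularization: gradient descent on 0.5*||h*f - g||^2 + lam*TV(f).
    # Assumptions: periodic boundaries via FFT/np.roll, arbitrary step and lam;
    # this is not the book's neural-network formulation or its border handling.
    import numpy as np

    def tv_restore(g, psf, lam=0.01, step=0.2, n_iter=200, eps=1e-6):
        # g: degraded image; psf: blur kernel padded to g.shape, centered at [0, 0]
        H = np.fft.fft2(psf)
        f = np.array(g, dtype=float)
        for _ in range(n_iter):
            # forward pass: simulate the degradation h*f and compare with the observation
            residual = np.real(np.fft.ifft2(H * np.fft.fft2(f))) - g
            grad_data = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(residual)))
            # total-variation gradient: -div( grad(f) / |grad(f)| )
            fx = np.roll(f, -1, axis=1) - f
            fy = np.roll(f, -1, axis=0) - f
            norm = np.sqrt(fx**2 + fy**2 + eps)
            px, py = fx / norm, fy / norm
            grad_tv = -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))
            # backward pass: correct the current estimate by descending the total cost
            f -= step * (grad_data + lam * grad_tv)
        return f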
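The second algorithm starts from classical frequency-domain filters; the fragment below gives only the standard Wiener filter, to show the noise-to-signal parameter whose misestimation the desensitization is designed to tolerate. The iterative desensitization itself is not reproduced, since the abstract does not give its expression, and the function and parameter names are illustrative assumptions.

    # Standard frequency-domain Wiener restoration (not the book's desensitized
    # filter): its output depends strongly on the assumed noise-to-signal ratio,
    # which is the sensitivity the desensitization algorithm aims to reduce.
    import numpy as np

    def wiener_restore(g, psf, nsr):
        # g: degraded image; psf: blur kernel padded to g.shape, centered at [0, 0]
        # nsr: assumed noise-to-signal power ratio (a priori estimate)
        G = np.fft.fft2(g)
        H = np.fft.fft2(psf)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener transfer function
        return np.real(np.fft.ifft2(W * G))

The Tikhonov filter is analogous, with the constant nsr replaced by a regularization term of the form alpha*|L(u,v)|^2 built from a high-pass operator L such as the Laplacian; in both cases small errors in the a priori estimates visibly change the restoration, which is what motivates the desensitized version.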