
# Eigenfaces Tutorial

We're going to discuss a popular technique for face recognition called eigenfaces, at the heart of which is an unsupervised dimensionality-reduction method. The basic idea behind the Eigenfaces algorithm is that face images can be represented as combinations of a small set of basis images. For the purposes of this tutorial we'll use a dataset of approximately aligned face images. Eigenfaces is a basic facial recognition method introduced by M. Turk and A. Pentland.


February 11, by Shubhendu Trivedi. Eigenfaces is probably one of the simplest face recognition methods, and also rather old, so why worry about it at all? Because, while it is simple, it works quite well.

I was thinking of writing a post on face recognition in bees next, so this should serve as a basis for that post too. The idea of this post is to give a tutorial introduction to the topic, with an emphasis on building intuition. For more rigorous treatments, look at the references. We are not yet even close to an understanding of how we humans manage to recognize faces.

Damage to the temporal lobe can result in a condition (prosopagnosia) in which the affected person loses the ability to recognize faces. In one of my previous posts, which had links to a series of lectures by Dr Vilayanur Ramachandran, I did link to one lecture in which he talks in detail about this condition. All this aside, not much is known about how the information for a face is coded in the brain either.

Eigenfaces has a parallel to one of the most fundamental ideas in mathematics and signal processing: the Fourier Series. This parallel is also very helpful for building an intuition about what Eigenfaces (or PCA) does, and hence is worth exploiting.

Hence we review the Fourier Series in a few sentences. Representation of a periodic signal $f(t)$ as a linear combination of complex sinusoids is called the Fourier Series:

$$f(t) = \sum_{n=-\infty}^{\infty} c_n e^{i n \omega_0 t}$$

The coefficients are given as:

$$c_n = \frac{1}{T} \int_{T} f(t)\, e^{-i n \omega_0 t}\, dt$$

It is common to define the above using the fundamental frequency $\omega_0 = 2\pi/T$, where $T$ is the period of the signal. An example that illustrates the Fourier series is the square wave.

A square wave (shown in black) can be approximated by a series of sines and cosines (the result of this summation shown in blue). Clearly, in the limiting case, we could reconstruct the square wave exactly with simply sines and cosines. Though not exactly the same, the idea behind Eigenfaces is similar. The aim is to represent a face as a linear combination of a set of basis images (in the Fourier Series the bases were simply sines and cosines):

$$\Phi = \sum_{i=1}^{K} w_i u_i$$

where $\Phi$ represents the face with the mean subtracted from it, $w_i$ represent the weights, and $u_i$ the eigenvectors.
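The square-wave approximation can be sketched numerically in a few lines. This is a minimal NumPy illustration, not from the original post; the function name and term counts are my own choices:

```python
import numpy as np

def square_wave_partial_sum(t, n_terms):
    """Partial Fourier sum of a unit square wave: (4/pi) * sum over odd k of sin(k*t)/k."""
    result = np.zeros_like(t, dtype=float)
    for k in range(1, 2 * n_terms, 2):   # odd harmonics 1, 3, 5, ...
        result += (4.0 / np.pi) * np.sin(k * t) / k
    return result

# On (0, pi) the square wave equals +1; adding more terms tightens the fit.
t = np.linspace(0.5, 2.5, 200)
coarse = square_wave_partial_sum(t, 3)
fine = square_wave_partial_sum(t, 100)
print(np.abs(coarse - 1.0).max(), np.abs(fine - 1.0).max())
```

Away from the jump discontinuities the error shrinks steadily as terms are added, which is exactly the "more basis elements, better reconstruction" intuition that carries over to eigenfaces.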

This was just like stating at the start what we have to do. The big idea is that you want to find a set of images, called Eigenfaces, which are nothing but the eigenvectors of the training data, such that if you weigh them and add them together you get back an image that you are interested in. Adding images together should give you back an image, right?


The way you weight these basis images, i.e. the weight vector, can be used as a compact representation of a face. In the above figure, a face that was in the training database was reconstructed by taking a weighted summation of all the basis faces and then adding to them the mean face. The basis faces shown have just been picked randomly from a pool of 70 by me. An Information Theory approach: first of all, the idea of Eigenfaces considers face recognition as a 2-D recognition problem; this is based on the assumption that at the time of recognition, faces will be mostly upright and frontal.


Because of this, detailed 3-D information about the face is not needed. This reduces complexity by a significant amount. Before the method for face recognition using Eigenfaces was introduced, most of the face recognition literature dealt with local and intuitive features, such as the distance between the eyes, the ears, and similar other features.

Eigenfaces (inspired by a method used in an earlier paper) was a significant departure from the idea of using only intuitive features. It uses an Information Theory approach wherein the most relevant face information is encoded in a group of faces that will best distinguish the faces.

## Face Recognition with Eigenfaces

It transforms the face images into a set of basis faces, which essentially are the principal components of the face images. This is particularly useful for reducing the computational effort. This is illustrated by this figure:

Such an information theory approach will encode not only the local features but also the global features. Such features may or may not be intuitively understandable. When we find the principal components, or the eigenvectors, of the image set, each eigenvector has some contribution from EACH face used in the training set. So the eigenvectors also have a face-like appearance. These look ghost-like and are called ghost images or Eigenfaces.

Every image in the training set can be represented as a weighted linear combination of these basis faces. The number of Eigenfaces that we would obtain would therefore be equal to the number of images in the training set; let us take this number to be $M$. Like I mentioned a paragraph ago, some of these Eigenfaces are more important in encoding the variation in face images than others, so we could also approximate faces using only the most significant Eigenfaces. To summarize: there are $M$ images in the training set.

There are $K$ most significant Eigenfaces, using which we can satisfactorily approximate a face; note that $K < M$. All images are $N \times N$ matrices, which can be represented as $N^2$-dimensional vectors. The same logic would apply to images that are not of equal length and breadth. To take an example: an image of size $N \times N$ can be represented as a vector of dimension $N^2$, or simply as a point in an $N^2$-dimensional space.
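The image-to-vector step is just a reshape; a tiny NumPy sketch (toy data, not from the original post):

```python
import numpy as np

# A toy 4 x 4 "image": flattening turns the N x N matrix into an N^2-vector,
# and reshaping recovers the original matrix exactly.
N = 4
image = np.arange(N * N).reshape(N, N)
vector = image.flatten()            # shape (16,) -- a point in R^(N^2)
restored = vector.reshape(N, N)     # the reverse process, used later to view eigenfaces
```

The round trip is lossless, which is why eigenvectors computed on flattened vectors can later be reshaped back into viewable face-like images.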

Algorithm for Finding Eigenfaces:

STEP 1: Obtain $M$ training face images $I_1, I_2, \ldots, I_M$. It is very important that the images are centered. (Due to a recent WordPress bug, there is some trouble with constructing matrices with multiple columns. The same goes for some formulae below in the post.)

STEP 2: Represent every image $I_i$ as a flattened vector $\Gamma_i$ of dimension $N^2$.

STEP 3: Find the average face vector $\Psi$:

$$\Psi = \frac{1}{M} \sum_{i=1}^{M} \Gamma_i$$

STEP 4: Subtract the mean face from each face vector to get a set of vectors $\Phi_i$:

$$\Phi_i = \Gamma_i - \Psi$$

STEP 5: Find the covariance matrix $C$:

$$C = \frac{1}{M} \sum_{i=1}^{M} \Phi_i \Phi_i^T = \frac{1}{M} A A^T, \quad \text{where } A = [\Phi_1\; \Phi_2\; \ldots\; \Phi_M]$$

Also note that $C$ is an $N^2 \times N^2$ matrix and $A$ is an $N^2 \times M$ matrix.
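The mean-subtraction and covariance construction can be sketched with synthetic data; the array names mirror the symbols above, but the random "faces" are stand-ins of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 8, 16                          # 8 synthetic "faces", each 16 x 16, flattened
gammas = rng.random((M, N * N))       # row i is the face vector Gamma_i

psi = gammas.mean(axis=0)             # average face Psi, shape (N^2,)
A = (gammas - psi).T                  # columns are Phi_i = Gamma_i - Psi; shape (N^2, M)
C = (A @ A.T) / M                     # covariance matrix, shape (N^2, N^2)
```

Even at this toy scale the shapes tell the story: $A$ is tall and thin ($N^2 \times M$) while $C$ is enormous ($N^2 \times N^2$), which motivates the trick in the next step.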

STEP 6: We now need to calculate the eigenvectors $u_i$ of $C$. However, note that $C$ is an $N^2 \times N^2$ matrix, so this would return $N^2$ eigenvectors, each being $N^2$-dimensional. For an image of even moderate size this number is HUGE. The computations required would easily make your system run out of memory. How do we get around this problem?


Instead of the matrix $A A^T$, consider the matrix $L = A^T A$. Remember $A$ is an $N^2 \times M$ matrix, thus $L$ is an $M \times M$ matrix. Now, from some properties of matrices, it follows that if $v_i$ is an eigenvector of $A^T A$ with eigenvalue $\lambda_i$, then $A v_i$ is an eigenvector of $A A^T$ with the same eigenvalue:

$$A^T A v_i = \lambda_i v_i \implies A A^T (A v_i) = \lambda_i (A v_i)$$

This implies that using $L = A^T A$ we can calculate the $M$ largest eigenvectors of $A A^T$. Remember that $M \ll N^2$, as $M$ is simply the number of training images.

STEP 7: Find the best $M$ eigenvectors of $C = A A^T$ by using the relation discussed above, i.e. $u_i = A v_i$. Also keep in mind that $\|u_i\| = 1$.

STEP 8: Select the best $K$ eigenvectors; the selection of these eigenvectors is done heuristically.
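The whole trick fits in a short function. This is a minimal NumPy sketch assuming the mean-subtracted faces are already the columns of `A`; the function name is my own, not from the post:

```python
import numpy as np

def eigenfaces(A, k):
    """Top-k eigenfaces from A (N^2 x M, columns are mean-subtracted face vectors).

    Works with the small M x M matrix L = A^T A instead of the huge
    N^2 x N^2 matrix A A^T, then maps eigenvectors back via u_i = A v_i.
    """
    L = A.T @ A                              # M x M
    eigvals, V = np.linalg.eigh(L)           # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:k]    # indices of the k largest
    U = A @ V[:, order]                      # u_i = A v_i, eigenvectors of A A^T
    U /= np.linalg.norm(U, axis=0)           # normalize so ||u_i|| = 1
    return U                                 # shape (N^2, k)
```

`np.linalg.eigh` is used because $L$ is symmetric; the explicit re-normalization enforces the $\|u_i\| = 1$ condition from STEP 7.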

The eigenvectors found at the end of the previous section, when converted to a matrix in a process that is the reverse of that in STEP 2, have a face-like appearance. Since these are eigenvectors and have a face-like appearance, they are called Eigenfaces. Sometimes, they are also called Ghost Images because of their weird appearance.

Now each face in the training set (minus the mean), $\Phi_i$, can be represented as a linear combination of these eigenvectors:

$$\Phi_i = \sum_{j=1}^{K} w_j u_j, \quad \text{where } w_j = u_j^T \Phi_i$$

Each normalized training image is represented in this basis as a vector $\Omega_i = [w_1\; w_2\; \ldots\; w_K]^T$.

This means we have to calculate such a vector corresponding to every image in the training set and store them as templates. Now consider that we have found the Eigenfaces for the training images and their associated weights (after selecting a set of most relevant Eigenfaces), and have stored these vectors corresponding to each training image. If an unknown probe face $\Gamma$ is to be recognized, then: we normalize the incoming probe as $\Phi = \Gamma - \Psi$. The normalized probe can then simply be represented as:

$$\Omega = [w_1\; w_2\; \ldots\; w_K]^T, \quad \text{where } w_j = u_j^T \Phi$$
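Projection onto the eigenface basis is one matrix-vector product. A minimal sketch using a random orthonormal basis as a stand-in for real eigenfaces (all names and data here are my own illustration):

```python
import numpy as np

def project(face, psi, U):
    """Weights of a flattened face in the eigenface basis: w_j = u_j^T (face - psi)."""
    return U.T @ (face - psi)

rng = np.random.default_rng(2)
U, _ = np.linalg.qr(rng.random((100, 4)))   # stand-in orthonormal basis: 4 "eigenfaces"
psi = rng.random(100)                        # stand-in mean face
probe = rng.random(100)

omega = project(probe, psi, U)               # Omega = [w_1 ... w_K]^T, shape (4,)
reconstruction = psi + U @ omega             # best approximation within the eigenface span
```

The reconstruction residual is orthogonal to every eigenface, which is precisely the least-squares property that makes this compact representation work.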

After the feature vector (weight vector) for the probe has been found, we simply need to classify it. For the classification task we could use some distance measure, or a classifier like Support Vector Machines (something that I would cover in an upcoming post).

In case we use distance measures, classification is done as follows: we take the weight vector of the probe we have just found and compute its distance to the weight vectors associated with each of the training images:

$$e_r = \min_i \|\Omega - \Omega_i\|$$


And if $e_r < \Theta$, where $\Theta$ is a threshold chosen heuristically, then we can say that the probe image is recognized as the training image with which it gives the lowest score. If, however, $e_r > \Theta$, then the probe does not belong to the database.
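The nearest-neighbour-with-threshold rule can be sketched as below; the function name and the toy templates are my own, not from the post:

```python
import numpy as np

def classify(omega_probe, templates, theta):
    """Nearest neighbour in eigenface space; None means 'not in the database'.

    templates: array of shape (n_train, K), one stored weight vector per training face.
    theta: heuristic acceptance threshold on the smallest distance.
    """
    dists = np.linalg.norm(templates - omega_probe, axis=1)   # Euclidean distances
    best = int(np.argmin(dists))
    return best if dists[best] < theta else None
```

Returning `None` for distant probes is what gives the system an explicit "unknown face" outcome rather than forcing a match.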

I will come to the point of how the threshold should be chosen later. For distance measures, the most commonly used is the Euclidean distance; the other is the Mahalanobis distance, which generally gives superior performance. The Euclidean distance is probably the most widely used distance metric. It is a special case of a general class of norms and is given as:

$$\|x - y\|_2 = \sqrt{\sum_{i} (x_i - y_i)^2}$$

The Mahalanobis distance is a better distance measure when it comes to pattern recognition problems.

It takes into account the covariance between the variables and hence removes the problems related to scale and correlation that are inherent in the Euclidean distance. It is given as:

$$d(x, y) = \sqrt{(x - y)^T C^{-1} (x - y)}$$

where $C$ is the covariance matrix of the variables involved.
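A direct translation of the formula; solving the linear system instead of explicitly inverting $C$ is a standard numerical-stability choice (the function name is my own):

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance sqrt((x - y)^T C^{-1} (x - y)).

    With cov = identity this reduces to the plain Euclidean distance.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    # solve(cov, d) computes C^{-1} d without forming the explicit inverse
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

For example, with `cov = np.diag([4.0, 1.0])` a displacement of 2 units along the high-variance first axis scores the same distance as 1 unit along the second, which is exactly the scale-correction the text describes.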