by Prof Tim Dodwell
General Linear Models
11. Looking through the Right Lens - Principal Component Analysis
In this explainer we look at our first unsupervised learning algorithm, used for dimension reduction, called Principal Component Analysis (PCA). By the end of this section you will
- Understand the key principles of PCA: what it is and why it would be used.
- Understand the key concepts of what is actually calculated when PCA is run.
- Be introduced to the key concept of explained variance (ratio) for choosing the amount of model reduction.
- (Optional) Understand why the principal components are eigenvectors of the covariance matrix of your data set.
- Walk through an example of PCA on a real-world data set.
Principal Component Analysis (PCA) is a linear dimension reduction method. The method is unsupervised, and so we just have a set of data
$$X = \{\mathbf{x}_1, \mathbf{x}_2, \ldots, \mathbf{x}_N\}$$
and no labels. Each of the samples is a vector representation of the input, so that $\mathbf{x}_i \in \mathbb{R}^D$. This means each sample is represented by $D$ numbers, or put another way, lives in a $D$-dimensional input / sample space. Often input data can be very high dimensional, i.e. $D$ is very large.
A good example of this, which one often encounters, is images. Take for example a picture made up of 640 by 480 pixels, in which each pixel is represented by 3 color values - Red, Green, Blue (RGB). This gives a total number of dimensions of $D = 640 \times 480 \times 3 = 921{,}600$.
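As a quick illustration (a minimal numpy sketch, with a randomly generated array standing in for a real photograph), flattening such an image into a single input vector gives exactly that dimension:

```python
import numpy as np

# A stand-in 640 x 480 RGB image: height x width x 3 colour channels.
image = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)

# Flatten the pixel grid into a single input vector x.
x = image.reshape(-1)

print(x.shape)  # (921600,)  i.e. D = 640 * 480 * 3 = 921,600
```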
High dimensional data is often challenging to analyse, interpret, visualise and store. Whilst data might be represented in a high-dimensional way, the data is often much more structured and correlated than this representation suggests.
Dimension reduction methods seek to exploit this structure and these correlations to give a compact, lower-dimensional representation of the data, without losing significant information.
We are interested in finding a compact representation $\mathbf{z}_i \in \mathbb{R}^K$ of the data, where $K$ is much less than the original dimension $D$. This dimension reduction is achieved by a matrix operation, such that
$$\mathbf{z}_i = W^\top \mathbf{x}_i,$$
where $W \in \mathbb{R}^{D \times K}$, and the reconstruction is given by the matrix operation
$$\tilde{\mathbf{x}}_i = W \mathbf{z}_i.$$
To visualise these matrix operations: $W^\top$ is a wide $K \times D$ matrix that compresses each $D$-dimensional sample down to $K$ numbers, and $W$ is a tall $D \times K$ matrix that maps those $K$ numbers back up to $D$ dimensions.
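A minimal numpy sketch of these two operations (the sample $\mathbf{x}$ and the orthonormal columns of $W$ below are randomly generated, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

D, K = 10, 2                      # original and reduced dimensions
x = rng.normal(size=D)            # one D-dimensional sample

# An illustrative W with K orthonormal columns (obtained here via QR).
W, _ = np.linalg.qr(rng.normal(size=(D, K)))

z = W.T @ x                       # compact K-dimensional representation
x_tilde = W @ z                   # reconstruction back in D dimensions

print(z.shape, x_tilde.shape)     # (2,) (10,)
```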
PCA is a linear dimension reduction method, since the components of the new representation $\mathbf{z}$ are linear combinations of the full representation $\mathbf{x}$. The question now is how to pick the columns $\mathbf{w}_1, \ldots, \mathbf{w}_K$ of $W$. We choose them according to two criteria:
- Choose the directions $\mathbf{w}_i$ so they capture the maximum variance of the original data set $X$.
- Choose the columns $\mathbf{w}_i$ and $\mathbf{w}_j$ so they are orthogonal, i.e. $\mathbf{w}_i^\top \mathbf{w}_j = 0$ for $i \neq j$.
To visualise what this looks like on a simple data set, consider two principal components: the first principal component points in the direction of the greatest variance, and the second component is orthogonal to it (i.e. at 90 degrees to the first).
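To make these two criteria concrete, here is a minimal sketch on a simple synthetic 2D data set (the correlated Gaussian samples below are made up purely for illustration), checking that the components returned by sklearn's PCA are orthonormal:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Correlated 2D data: the second coordinate roughly follows the first.
x1 = rng.normal(size=500)
x2 = 0.8 * x1 + 0.3 * rng.normal(size=500)
X = np.column_stack([x1, x2])

pca = PCA(n_components=2).fit(X)

print(pca.components_)                       # rows are the principal directions
print(pca.components_ @ pca.components_.T)   # ~ identity, i.e. orthonormal
```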
The principal components are actually eigenvectors of the covariance matrix
$$S = \frac{1}{N} \sum_{i=1}^{N} (\mathbf{x}_i - \bar{\mathbf{x}})(\mathbf{x}_i - \bar{\mathbf{x}})^\top, \quad \text{where } \bar{\mathbf{x}} = \frac{1}{N}\sum_{i=1}^{N}\mathbf{x}_i,$$
i.e. they are solutions of the equation
$$S \mathbf{w} = \lambda \mathbf{w}.$$
The matrix $S$ is a symmetric $D \times D$ matrix, with $D$ eigenvectors $\mathbf{w}_1, \ldots, \mathbf{w}_D$ and associated eigenvalues $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_D \geq 0$, such that
$$S \mathbf{w}_i = \lambda_i \mathbf{w}_i \quad \text{for } i = 1, \ldots, D.$$
The eigenvalues give us a measure of the contribution each eigenvector makes towards the total variance; this is also called the explained variance. The explained variance ratio of the $i$-th component is
$$\frac{\lambda_i}{\sum_{j=1}^{D} \lambda_j}.$$
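To see this numerically, here is a minimal sketch (on a randomly generated, correlated stand-in data set) that builds the covariance matrix and compares its eigenvalues with the variances reported by sklearn's PCA:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))   # correlated 3D stand-in data

# Covariance of the mean-centred data (1/(N-1) convention, as used by sklearn).
Xc = X - X.mean(axis=0)
S = Xc.T @ Xc / (len(X) - 1)

# Eigen-decomposition: eigenvalues measure the variance along each eigenvector.
eigvals, eigvecs = np.linalg.eigh(S)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]         # sort into descending order

pca = PCA().fit(X)
print(eigvals)                   # agrees with the variances sklearn reports
print(pca.explained_variance_)
```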
Plotting the eigenvalues (or their explained variance ratios) shows a natural cutoff for how many principal modes should be included in the reduced-order model without losing too much information.
To show a conceptual plot of what this looks like, see below.
The plot shows the individual contributions of the principal components to the explained variance (as a percentage). The orange curve shows the cumulative effect. What we mean by this is that if you include the first three components you will retain 90% of the variance of the original data set.
We will show a practical example of this in the Python example walkthrough for PCA.
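Ahead of that walkthrough, a minimal sketch of how such a plot can be produced with sklearn's explained_variance_ratio_ (the data matrix below is just a randomly generated, correlated stand-in for a real data set):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10)) @ rng.normal(size=(10, 10))   # correlated stand-in data

pca = PCA().fit(X)
ratios = pca.explained_variance_ratio_          # individual contributions
cumulative = np.cumsum(ratios)                  # running total

plt.bar(range(1, 11), ratios, label="individual")
plt.step(range(1, 11), cumulative, where="mid", color="orange", label="cumulative")
plt.xlabel("principal component")
plt.ylabel("explained variance ratio")
plt.legend()
plt.show()
```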
PCA and General Linear Models
In linear models, we extended the concept away from simple linear functions, e.g. $f(\mathbf{x}) = \mathbf{w}^\top \mathbf{x} + b$, to more general representations built from linear combinations of general functions, so that
$$f(\mathbf{x}) = \sum_{j=1}^{M} w_j \, \phi_j(\mathbf{x}),$$
where we call the $\phi_j(\mathbf{x})$ the basis functions.
An interesting question is then how to choose the basis functions. You could engineer them from experience: for example, if you were modelling a system that depended on the tide, you could easily put in some sensible periodic functions.
In general you might not know what to choose, and here the principal components become a natural choice. This means we take
$$\phi_j(\mathbf{x}) = \mathbf{w}_j^\top \mathbf{x} \quad \text{for } j = 1, \ldots, K,$$
so this would be a sum of just linear features, but it could be extended to higher-order polynomials by taking, for example,
$$\phi_{j,k}(\mathbf{x}) = \left(\mathbf{w}_j^\top \mathbf{x}\right)^k.$$
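As a sketch of how this might look in practice, the illustrative pipeline below uses PCA projections as the (linear) basis functions for a linear regression, with an optional polynomial expansion of the reduced coordinates; the toy data and targets are made up for the example:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20)) @ rng.normal(size=(20, 20))   # correlated inputs
y = X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=300)     # toy targets

# Linear features: the basis functions are projections onto the first K components.
linear_model = make_pipeline(PCA(n_components=3), LinearRegression()).fit(X, y)

# Higher-order extension: polynomials of the reduced coordinates.
poly_model = make_pipeline(
    PCA(n_components=3), PolynomialFeatures(degree=2), LinearRegression()
).fit(X, y)

print(linear_model.score(X, y), poly_model.score(X, y))
```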
Concluding Remarks
PCA is an unsupervised learning algorithm. It is a parametric model, since it imposes only a linear projection / transformation of the data.
It is a very useful method which is widely used in the data preparation, input representation and exploration stages of a machine learning workflow. As an AI practitioner, I use it to tell me how correlated the inputs / features I am going to use are.
We see that in many machine learning workflows the algorithms are neither purely unsupervised nor purely supervised, but combine the good bits of both! For example, we have shown how we can use PCA to inform the basis functions we use in a General Linear Model.
Bottom line: PCA is a very useful tool to have in your toolbox. Implementations in packages like sklearn mean that using it takes just a few lines of code, as shown below.
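For example, reducing a placeholder data matrix down to two components really is only a few lines:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 50)            # placeholder data: 100 samples, 50 features

pca = PCA(n_components=2)
Z = pca.fit_transform(X)               # compact 2D representation
X_tilde = pca.inverse_transform(Z)     # reconstruction back in 50 dimensions

print(Z.shape, pca.explained_variance_ratio_.sum())
```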
Bonus Information
A Bit of Maths - not required for using PCA. There are packages that implement PCA, so you don't have to worry about how to calculate the eigenvectors and eigenvalues of $S$ if you don't want to (this becomes more involved when data sets or dimensions are large). For those interested in the question "Why are the principal components the eigenvectors of $S$?"... here it is.
We wish to maximise the variance of the first reduced coordinate of $\mathbf{z}$, that is, we want the direction $\mathbf{w}_1$ for which
$$\mathrm{Var}(z_1) = \frac{1}{N} \sum_{i=1}^{N} \left(\mathbf{w}_1^\top \mathbf{x}_i - \mathbf{w}_1^\top \bar{\mathbf{x}}\right)^2$$
is maximised. Note that $z_{1,i} = \mathbf{w}_1^\top \mathbf{x}_i$; this is the projection of the sample $\mathbf{x}_i$ onto the direction $\mathbf{w}_1$. So we can rewrite the variance as
$$\mathrm{Var}(z_1) = \mathbf{w}_1^\top S \mathbf{w}_1.$$
We wish to find the $\mathbf{w}_1$ which maximises this variance, but note that we could just make the magnitude of $\mathbf{w}_1$ arbitrarily large, so as stated the problem is ill-posed. We therefore solve a constrained maximisation problem,
$$\max_{\mathbf{w}_1} \; \mathbf{w}_1^\top S \mathbf{w}_1 \quad \text{subject to} \quad \mathbf{w}_1^\top \mathbf{w}_1 = 1.$$
For this constrained optimisation problem we write down the Lagrangian
$$\mathcal{L}(\mathbf{w}_1, \lambda_1) = \mathbf{w}_1^\top S \mathbf{w}_1 + \lambda_1 \left(1 - \mathbf{w}_1^\top \mathbf{w}_1\right).$$
We can then find maximisers of $\mathcal{L}$ by differentiating with respect to $\mathbf{w}_1$ and $\lambda_1$, so that
$$\frac{\partial \mathcal{L}}{\partial \mathbf{w}_1} = 2 S \mathbf{w}_1 - 2 \lambda_1 \mathbf{w}_1 \quad \text{and} \quad \frac{\partial \mathcal{L}}{\partial \lambda_1} = 1 - \mathbf{w}_1^\top \mathbf{w}_1.$$
Setting these partial derivatives to zero gives us our eigenproblem
$$S \mathbf{w}_1 = \lambda_1 \mathbf{w}_1, \quad \text{with} \quad \mathbf{w}_1^\top \mathbf{w}_1 = 1.$$
Subsequent eigenvectors will then give us maximisers of the variance orthogonal to $\mathbf{w}_1$ (and to each other). Hence we get the ordered sequence of eigenvectors (principal components)
$$\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_D \quad \text{with} \quad \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_D.$$
We don't show it here, but it is possible to show that maximising the variance of the projected data is the same as minimising the reconstruction error over the data set, i.e. minimising
$$\frac{1}{N} \sum_{i=1}^{N} \left\| \mathbf{x}_i - \tilde{\mathbf{x}}_i \right\|^2, \quad \text{where} \quad \tilde{\mathbf{x}}_i = W \mathbf{z}_i = W W^\top \mathbf{x}_i.$$
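As a small numerical check of this equivalence (a sketch on randomly generated data), the mean squared reconstruction error using the top $K$ eigenvectors equals the sum of the discarded eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 6)) @ rng.normal(size=(6, 6))   # correlated stand-in data
Xc = X - X.mean(axis=0)                                    # mean-centred samples

S = Xc.T @ Xc / len(X)                                     # covariance (1/N convention)
eigvals, eigvecs = np.linalg.eigh(S)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]         # descending order

K = 2
W = eigvecs[:, :K]                                         # top-K principal components
X_tilde = Xc @ W @ W.T                                     # project then reconstruct

recon_error = np.mean(np.sum((Xc - X_tilde) ** 2, axis=1))
print(recon_error, eigvals[K:].sum())                      # the two numbers agree
```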