<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>linalg | Nalin Gadihoke</title><link>https://www.nalingadihoke.com/category/linalg/</link><atom:link href="https://www.nalingadihoke.com/category/linalg/index.xml" rel="self" type="application/rss+xml"/><description>linalg</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© Nalin Gadihoke, 2020</copyright><lastBuildDate>Mon, 30 Nov 2020 00:00:00 +0000</lastBuildDate><image><url>https://www.nalingadihoke.com/images/icon_huf9971291de093faa6aa59cd65f433195_5940_512x512_fill_lanczos_center_3.png</url><title>linalg</title><link>https://www.nalingadihoke.com/category/linalg/</link></image><item><title>Pythonic applications of Linear Algebra</title><link>https://www.nalingadihoke.com/post/linalg/</link><pubDate>Mon, 30 Nov 2020 00:00:00 +0000</pubDate><guid>https://www.nalingadihoke.com/post/linalg/</guid><description>&lt;p>As the title suggests, this project saw me extend some of my linear algebra knowledge with inspiration from a &lt;a href="https://faculty.math.illinois.edu/~phierony/math415-2020.html" target="_blank" rel="noopener">course&lt;/a> I took in 2020. Here, simple face recognition is demonstrated, a timeseries is analyzed and other interesting applications are discussed.&lt;/p>
&lt;p>Principal Component Analysis (&lt;a href="https://en.wikipedia.org/wiki/Principal_component_analysis" target="_blank" rel="noopener">PCA&lt;/a>) is a way of capturing most of the variance in the data in an orthogonal basis. It reduces dimensionality by linearly transforming the data into new, uncorrelated features. Singular Value Decomposition (&lt;a href="https://jonathan-hui.medium.com/machine-learning-singular-value-decomposition-svd-principal-component-analysis-pca-1d45e885e491" target="_blank" rel="noopener">SVD&lt;/a>) will be used to factor the data matrix; truncating the resulting basis vectors gives us our principal components.&lt;/p>
&lt;h1 id="equations">Equations&lt;/h1>
&lt;p>Given an $m \times n$ matrix $X$, the principal components are defined as the eigenvectors of the dataset’s covariance matrix. Letting $\hat{X}$ be the dataset centered at the origin, $\hat{X}^{T}\hat{X}$ is proportional to the covariance matrix, so it suffices to find the eigenvectors of $\hat{X}^{T}\hat{X}$. These are the columns of $V$ in the reduced SVD of $\hat{X}$,&lt;/p>
&lt;p>$$\hat{X} = U\Sigma V^{T}$$&lt;/p>
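&lt;p>As a quick sanity check (a minimal sketch on random stand-in data, not the project’s code), the columns of $V$ returned by the SVD can be verified to be eigenvectors of $\hat{X}^{T}\hat{X}$, with eigenvalues equal to the squared singular values:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
X_hat = X - X.mean(axis=0)  # zero-center each column

U, S, Vt = np.linalg.svd(X_hat, full_matrices=False)
V = Vt.T

# each column of V is an eigenvector of X_hat.T @ X_hat,
# with eigenvalue equal to the corresponding squared singular value
lhs = X_hat.T @ X_hat @ V
rhs = V * S**2
assert np.allclose(lhs, rhs)
```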
&lt;h1 id="a-namefaceaface-recognition">&lt;a name="face">&lt;/a>Face Recognition&lt;/h1>
&lt;p>For this analysis I used AT&amp;amp;T Laboratories Cambridge&amp;rsquo;s “&lt;a href="https://web.archive.org/web/20180802044943/http:/www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html" target="_blank" rel="noopener">Database of Faces&lt;/a>”, a set of grayscale face images (like the one above) normalized to the same resolution. Each image is a flattened row of a larger ‘faces’ matrix. First, to zero center the faces matrix, an average face is computed and subtracted from each row of the matrix. The ‘average’ face is shown below.&lt;/p>
&lt;figure >
&lt;a data-fancybox="" href="https://www.nalingadihoke.com/post/linalg/combined_face_hu857b26d4a62b18915171cd4a12d2a09d_34142_2000x2000_fit_lanczos_3.png" >
&lt;img data-src="https://www.nalingadihoke.com/post/linalg/combined_face_hu857b26d4a62b18915171cd4a12d2a09d_34142_2000x2000_fit_lanczos_3.png" class="lazyload" alt="" width="353" height="413">
&lt;/a>
&lt;/figure>
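&lt;p>That zero-centering step can be sketched as follows (using a tiny stand-in array, since the real faces matrix comes from the dataset):&lt;/p>

```python
import numpy as np

# tiny stand-in for the flattened image matrix: 4 images, 3 pixels each
faces = np.arange(12.0).reshape(4, 3)

face_avg = faces.mean(axis=0)           # the average face
faces_zero_centered = faces - face_avg  # subtract it from every row
```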
&lt;p>Using &lt;code>SVD&lt;/code>, we can calculate the eigenbasis of the desired covariance matrix.&lt;/p>
&lt;pre>&lt;code class="language-sh">U, S, Vt = la.svd(faces_zero_centered, full_matrices=False)
V = Vt.T
&lt;/code>&lt;/pre>
&lt;p>As a side note, the principal-component coordinates of any image in the training set can be obtained by projecting it onto the eigenface basis using the matrix $V$ found above.&lt;/p>
&lt;figure >
&lt;a data-fancybox="" href="https://www.nalingadihoke.com/post/linalg/not_trained_hu56e2c3c43ec906b7851cb15fdbcaeef8_124232_2000x2000_fit_lanczos_3.png" >
&lt;img data-src="https://www.nalingadihoke.com/post/linalg/not_trained_hu56e2c3c43ec906b7851cb15fdbcaeef8_124232_2000x2000_fit_lanczos_3.png" class="lazyload" alt="" width="1412" height="826">
&lt;/a>
&lt;/figure>
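&lt;p>Projecting into the eigenface basis and back makes the dimensionality reduction explicit: keeping only the top $k$ columns of $V$ gives a rank-$k$ approximation of each face. A sketch on random stand-in data, mirroring the steps above:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(1)
faces_zero_centered = rng.normal(size=(10, 64))  # toy zero-centered faces

U, S, Vt = np.linalg.svd(faces_zero_centered, full_matrices=False)
V = Vt.T

k = 5
coords = faces_zero_centered @ V[:, :k]  # coordinates in the eigenface basis
approx = coords @ V[:, :k].T             # rank-k reconstruction

# using all components reproduces the data exactly
exact = (faces_zero_centered @ V) @ V.T
```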
&lt;p>Now, given an unknown face (left) that the model was not trained on, we first subtract the average face from it. The resulting image (right) is then converted to the eigenface basis by a simple &lt;a href="https://textbooks.math.gatech.edu/ila/linear-transformations.html#:~:text=A%20linear%20transformation%20is%20a,n%20and%20all%20scalars%20c%20." target="_blank" rel="noopener">linear transformation&lt;/a>.&lt;/p>
&lt;pre>&lt;code class="language-sh">unknown_zero_centered = face_unknown - face_avg
unknown_basis = unknown_zero_centered @ V
faces_basis = faces_zero_centered @ V
&lt;/code>&lt;/pre>
&lt;p>To match against the training set, the “closest” face in the eigenface basis is selected by minimizing the Euclidean distance between coordinate vectors.&lt;/p>
&lt;pre>&lt;code class="language-sh">n = 0
differences= la.norm(faces_basis - unknown_basis, axis=1)
n = np.argmin(differences)
plt.imshow(faces[n].reshape(face_shape), cmap=&amp;quot;gray&amp;quot;)
&lt;/code>&lt;/pre>
&lt;figure id="figure-prediction">
&lt;a data-fancybox="" href="https://www.nalingadihoke.com/post/linalg/prediction_hu445b35a498ac64e373d08042fbf7af03_53133_2000x2000_fit_lanczos_3.png" data-caption="prediction">
&lt;img data-src="https://www.nalingadihoke.com/post/linalg/prediction_hu445b35a498ac64e373d08042fbf7af03_53133_2000x2000_fit_lanczos_3.png" class="lazyload" alt="" width="353" height="413">
&lt;/a>
&lt;figcaption>
prediction
&lt;/figcaption>
&lt;/figure>
&lt;p>The above demonstration was a simple implementation of PCA on a dataset of images where it is assumed each face occupies a similar area of the image. Check out this &lt;a href="https://pythonmachinelearning.pro/face-recognition-with-eigenfaces/" target="_blank" rel="noopener">link&lt;/a> for a more complex implementation involving a trained neural network and scikit-learn.&lt;/p>
&lt;h1 id="short-example-of-timeseries">Short Example of Timeseries&lt;/h1>
&lt;figure >
&lt;a data-fancybox="" href="https://www.nalingadihoke.com/post/linalg/time_combined_huabc0c6f2de722beb42ed8042a784a60a_5753847_2000x2000_fit_lanczos_3.png" >
&lt;img data-src="https://www.nalingadihoke.com/post/linalg/time_combined_huabc0c6f2de722beb42ed8042a784a60a_5753847_2000x2000_fit_lanczos_3.png" class="lazyload" alt="" width="4688" height="4024">
&lt;/a>
&lt;/figure>
&lt;p>&lt;code>PCA&lt;/code> can be used to decompose timeseries too. In the image above, the temperature data for six US cities is plotted (top left). Next, the average is subtracted to zero-center the data, as in the steps above (top right). Finally, the data is projected onto its top two principal components (bottom).&lt;/p>
&lt;pre>&lt;code class="language-sh"># zero center the data
temp_avg = np.mean(temperature,axis=0)
temp_zero_center = temperature - temp_avg
# SVD breakdown
U,S, Vt = la.svd(temp_noavg)
V = Vt.T
# plotting the first two eigenvectors
plt.figure(figsize=(20,10))
lines = plt.plot((V[:,:2] ), '-', )
plt.legend(iter(lines), map(lambda x: f&amp;quot;PC {x}&amp;quot;, range(1,6)))
&lt;/code>&lt;/pre>
&lt;p>Since average temperature dips in the winter and peaks in the summer, the first component captures this shared seasonal cycle; climates that remain relatively static year-round project only weakly onto it.&lt;/p>
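&lt;p>How much each component matters can be read off the singular values: the fraction of variance explained by component $i$ is $\sigma_i^{2} / \sum_j \sigma_j^{2}$. A sketch on random stand-in data rather than the actual temperature series:&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(2)
temp_zero_center = rng.normal(size=(365, 6))  # stand-in zero-centered data

U, S, Vt = np.linalg.svd(temp_zero_center, full_matrices=False)

# singular values come back sorted, so this is largest component first
explained = S**2 / np.sum(S**2)  # variance fraction per component
```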
&lt;p>Further Reading:&lt;/p>
&lt;ol>
&lt;li>Markov matrices were one of the coolest takeaways in linalg. &lt;a href="https://towardsdatascience.com/brief-introduction-to-markov-chains-2c8cab9c98ab" target="_blank" rel="noopener">This&lt;/a> article breaks down the concept and its applications in data science.&lt;/li>
&lt;li>Briefly mentioned in the above article, Google &lt;a href="https://en.wikipedia.org/wiki/PageRank" target="_blank" rel="noopener">PageRank&lt;/a> utilizes a special kind of square matrix called the &lt;a href="https://en.wikipedia.org/wiki/Google_matrix" target="_blank" rel="noopener">Google Matrix&lt;/a>.&lt;/li>
&lt;li>Sandeep Khurana explains linear regression quite eloquently &lt;a href="https://towardsdatascience.com/linear-regression-with-example-8daf6205bd49" target="_blank" rel="noopener">here&lt;/a>.&lt;/li>
&lt;/ol></description></item></channel></rss>