
Linear algebra application in Machine Learning [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.

We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.

Closed 1 year ago.


I am learning Linear Algebra (started recently) and was curious to know its applications in Machine Learning. Where can I read about this?

Thank you


Linear Algebra provides the computational engine for the majority of Machine Learning algorithms.

For instance, probably the most conspicuous and most frequent application of ML is the recommendation engine.

Aside from data retrieval, the real crux of these algorithms is often 'reconstruction' of the ridiculously sparse data used as input for these engines. The raw data supplied to Amazon.com's user-based recommendation engine is (probably) a massive data matrix in which the users are the rows and the products are the columns. For this matrix to be fully populated, every customer would have to purchase every product Amazon.com sells; in practice the matrix is overwhelmingly sparse, and linear algebra-based techniques are used to reconstruct the missing entries.

All of the techniques in current use involve some type of matrix decomposition, a fundamental class of linear algebra techniques. Non-negative matrix factorization and maximum-margin matrix factorization (warning: link to PDF!) are perhaps the two most common.
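As a minimal sketch of the reconstruction idea: a truncated SVD gives the best low-rank approximation of the ratings matrix, and the approximation "fills in" the missing entries with predicted affinities. The matrix and the rank below are made up for illustration.

```python
import numpy as np

# Hypothetical 5-user x 4-product ratings matrix; 0 marks "not purchased/rated".
R = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
    [0, 1, 5, 4],
], dtype=float)

# Truncated SVD: keep only the k strongest singular values/vectors to get
# a rank-k approximation that estimates the unobserved entries.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# R_hat[0, 2] is now a predicted score for user 0 on product 2,
# even though that entry was 0 (unobserved) in the raw matrix.
print(R_hat.round(2))
```

Real recommenders factor matrices with millions of rows and handle the missing entries more carefully (e.g., by only fitting observed cells), but the low-rank structure is the same.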

Second, many if not most ML techniques rely on a numerical optimization technique. E.g., most supervised ML algorithms involve creation of a trained classifier/regressor by minimizing the delta between the value calculated by the nascent classifier and the actual value from the training data. This can be done either iteratively or using linear algebra techniques. If the latter, then the technique is usually SVD or some variant.
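To make the optimization point concrete, here is linear regression solved in closed form rather than iteratively: `np.linalg.lstsq` minimizes the squared delta between predictions and training targets using an SVD-based solver. The synthetic data and coefficients are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: y = 2*x1 - 3*x2 + 1, plus a little noise.
X = rng.normal(size=(100, 2))
y = 2 * X[:, 0] - 3 * X[:, 1] + 1 + 0.01 * rng.normal(size=100)

# Append a bias column, then minimize ||A w - y||_2 in one shot.
# NumPy's lstsq uses an SVD-based solver under the hood.
A = np.hstack([X, np.ones((100, 1))])
w, residuals, rank, sv = np.linalg.lstsq(A, y, rcond=None)

print(w)  # approximately [2, -3, 1]
```

An iterative method (e.g., gradient descent) would converge to the same minimizer; the linear-algebra route just gets there directly when the loss is quadratic.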

Third, the spectral-based decompositions--PCA (principal component analysis) and kernel PCA--are perhaps the most commonly used dimension-reduction techniques, often applied in a pre-processing step just ahead of the ML algorithm in the data flow; for instance, PCA is often used in a Kohonen Map to initialize the lattice. The principal insight underlying these techniques is that the eigenvectors of the covariance matrix (a square, symmetric matrix with the variance of each feature down the main diagonal, prepared from the original data matrix) are unit length and are orthogonal to each other.
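A short sketch of that insight: build the covariance matrix from centered data, eigendecompose it, and verify that the eigenvectors form an orthonormal set; projecting onto the top eigenvector gives the one-dimensional PCA reduction. The data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Correlated 2-D data, stretched along one direction.
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])
Xc = X - X.mean(axis=0)  # PCA requires centered data

# Covariance matrix: square, symmetric, variances on the diagonal.
C = np.cov(Xc, rowvar=False)

# For a symmetric matrix, eigh returns real eigenvalues in ascending
# order and orthonormal eigenvectors (the principal components).
eigvals, eigvecs = np.linalg.eigh(C)

# Orthonormality check: V^T V = I.
print(np.allclose(eigvecs.T @ eigvecs, np.eye(2)))  # True

# Dimension reduction: project onto the top component (last column).
X_reduced = Xc @ eigvecs[:, -1]
```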


In machine learning, we generally deal with data in the form of vectors and matrices. Any statistical method used involves linear algebra as an integral part. It is also useful in data mining.
SVD and PCA are well-known dimensionality reduction techniques involving linear algebra.
Bayesian decision theory also involves a significant amount of linear algebra; you can look into that as well.
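As one illustration of linear algebra inside Bayesian decision theory: classifying a point between two Gaussian classes with a shared covariance reduces to comparing quadratic forms built from the inverse covariance matrix (Mahalanobis distances). The means, covariance, and test point below are made up.

```python
import numpy as np

# Two Gaussian class models with a shared covariance matrix.
mu0, mu1 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def discriminant(x, mu):
    # log N(x | mu, Sigma), dropping terms common to both classes:
    # the Bayes rule compares these quadratic forms.
    return -0.5 * (x - mu) @ Sigma_inv @ (x - mu)

x = np.array([1.8, 1.9])
label = int(discriminant(x, mu1) > discriminant(x, mu0))
print(label)  # 1: x is closer, in Mahalanobis distance, to class 1
```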


Singular value decomposition (SVD) is a classic method widely used in Machine Learning.

I find this article fairly easy; it explains an SVD-based recommendation system: http://www.igvita.com/2007/01/15/svd-recommendation-system-in-ruby/ .

And Strang's linear algebra book contains a section on the application of SVD to ranking web pages (the HITS algorithm); see Google Books.

