Ben Lau

Latent Space


A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items that resemble each other are positioned closer to one another. A position within the latent space can be viewed as being defined by a set of latent variables that emerge from the resemblances among the objects. (Wikipedia)

In most cases, the dimensionality of the latent space is chosen to be lower than the dimensionality of the feature space from which the data points are drawn, making the construction of a latent space an example of dimensionality reduction, which can also be viewed as a form of data compression. Latent spaces are usually fit via machine learning, and they can then be used as feature spaces in machine learning models, including classifiers and other supervised predictors.

PCA

The most vivid PCA: understanding and applying principal component analysis intuitively: Compress the features into a lower-dimensional space by finding the most representative eigenvectors. The elbow method can be used to determine the number of principal components to keep, and the projection can be computed via SVD or eigendecomposition.
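The projection described above can be sketched in a few lines of numpy. This is a minimal illustration (the toy data and `k = 2` choice are assumptions, not taken from the linked posts): center the data, take the SVD, inspect the explained-variance ratios for an elbow, and project onto the top components.

```python
import numpy as np

# Toy data: 100 samples in 3-D that mostly vary along one direction,
# so the first principal component captures nearly all the variance.
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 1)) @ np.array([[2.0, 1.0, 0.5]]) \
       + 0.05 * rng.normal(size=(100, 3))

# Center the data, then factor with SVD (equivalent to an
# eigendecomposition of the covariance matrix).
centered = data - data.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)

# Explained-variance ratio per component; look for the elbow here.
explained = S**2 / np.sum(S**2)
print(explained.round(3))

# Keep k components: project onto the top-k right singular vectors.
k = 2
latent = centered @ Vt[:k].T   # shape (100, 2): the latent representation
print(latent.shape)
```

For real data you would typically standardize each feature before the SVD, since PCA is sensitive to feature scale.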

Into the wild: applying PCA to analyze real-world data

t-SNE

Embedding

We can also use neural networks to learn the latent space; the learned representation is called an embedding. The idea is to learn a mapping from the input space to the latent space such that the latent space is a more compact representation of the input. This is useful for tasks like recommendation systems, where similar items can be found by nearest-neighbor search in the latent space.
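As a minimal sketch of the recommendation use case (a toy example, not from the article): learn item embeddings by factorizing a small user-item rating matrix R ≈ U Vᵀ with plain gradient descent. Items liked by the same users end up close together in the latent space, so cosine similarity between item vectors finds similar items.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 6, 5, 2     # dim = size of the latent space

# Toy ratings: users 0-2 like items 0-1, users 3-5 like items 3-4.
R = np.zeros((n_users, n_items))
R[:3, :2] = 5.0
R[3:, 3:] = 5.0

U = 0.1 * rng.normal(size=(n_users, dim))   # user embeddings
V = 0.1 * rng.normal(size=(n_items, dim))   # item embeddings

# Gradient descent on the squared reconstruction error ||R - U V^T||^2.
lr = 0.05
for _ in range(500):
    err = R - U @ V.T          # reconstruction error
    U += lr * err @ V          # gradient step on user embeddings
    V += lr * err.T @ U        # gradient step on item embeddings

def cos(a, b):
    """Cosine similarity between two embedding vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(V[0], V[1]))  # co-liked items: high similarity
print(cos(V[0], V[3]))  # items liked by different users: low similarity
```

In practice this is usually done with a framework's embedding layer (e.g. a lookup table trained by backpropagation) rather than explicit matrix updates, but the principle is the same: positions in the latent space are learned so that related items land near each other.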