Comments (7)
I would certainly not apply K-Means to the output of UMAP (or t-SNE), since they rarely produce nice spherical clusters. On the other hand, I feel the linked answer is perhaps too cautious -- it isn't that you can't apply a density-based clustering algorithm to the results of t-SNE, so much as that one needs to be careful in interpreting the results. t-SNE can certainly "create" sub-clusters that aren't really there (by separating parts of a cluster), and it does discard some density information, so again, care is needed. In this sense I believe it is perfectly acceptable to perform clustering on the result, provided you are going to submit the clusters to further analysis and verification. As long as you are not simply taking the results of clustering at face value (and you shouldn't ever do that anyway), the results can provide useful information about your data.
Now, having said all of that: UMAP does offer some improvements over t-SNE on this front. It is significantly less likely to create sub-clusters in the way t-SNE does, and it will do a better job of preserving density (though it is far from perfect, and requires small min_dist values). Thus you can have more confidence in the results of clustering UMAP output than t-SNE output, but I would still strongly encourage actual analysis of the clusters.
If you want evidence that this can work, using HDBSCAN on a UMAP embedding of the MNIST digits dataset (with suitable parameter choices for each algorithm) gave me an ARI of 0.92, which is remarkably good for a purely unsupervised approach, and is clearly capturing real information about the data.
My biggest caveat is with regard to noise in the data: UMAP and t-SNE will both tend to contract noise into clusters. If you have noisy data then UMAP and t-SNE will hide that from you, so it pays to have some awareness of what your data is like before just trusting a clustering (again, as is true of all clustering).
from umap.
It is certainly true that small n_neighbors values will tend to break up clusters, so larger values are probably better if you want to do clustering. Of course, too large and you homogenize everything, so this is where one wants to do some exploratory work before the clustering (and on the resulting clusters) to provide some confidence that there aren't any significant pitfalls.
A low min_dist also tends to be better for clustering, since concentrating points together, while potentially bad for visualisation, is exactly what you want for clustering.
With regard to clustering parameters, I would suggest it would be useful to use a low min_samples parameter and quite a large min_cluster_size. Once again, this is something you want to verify with some exploratory work on the clusters you get out.
In fun news, I think I can now describe HDBSCAN in the same primitives as UMAP, so the two may be more connected than one might think.
from umap.
Another question: what about the dimension of the embedding for clustering? Can we use higher than 2, like 3, 4, ...? Any impact on the clustering?
from umap.
Thank you so much for the deep answer. Very useful!
from umap.
Since order of samples is preserved under UMAP and then clustering, you can assign cluster labels directly to the original source data and interpret clusters there -- this would be the recommended approach really.
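Because row order is preserved, labels can be joined back to the source data by position. A minimal illustration (the embedding and labels below are hand-written stand-ins for real UMAP/HDBSCAN output, just to show the index alignment):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(6, 4)), columns=[f"feat{i}" for i in range(4)])

# Stand-in for: embedding = umap.UMAP(...).fit_transform(X)
embedding = rng.normal(size=(6, 2))
# Stand-in for: labels = hdbscan.HDBSCAN(...).fit_predict(embedding)
labels = np.array([0, 0, 1, 1, -1, 0])  # -1 is HDBSCAN's noise label

# Row i of X, embedding, and labels all refer to the same sample, so the
# cluster labels attach directly to the original, interpretable features.
X["cluster"] = labels
print(X.groupby("cluster").mean())  # summarize clusters in the original space
```

This is the answer to the interpretability question below as well: the clusters are found in the embedding, but described and validated in the original feature space.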
from umap.
I have a related question: my intuition suggests using a large n_neighbors makes sense if using UMAP prior to clustering, because it will better preserve the global structure. Do you agree? Do you have any other preliminary thoughts on parameter choices for combining UMAP with HDBSCAN?
from umap.
edit: dumb question:
For the sake of interpreting the results, if we use UMAP to reduce dimensionality before clustering, is it possible to retrieve the original labels of points after clustering?
Put another way: what is the point of clustering data in the UMAP subspace, since the subspace vectors cannot be interpreted?
from umap.