Comments (22)
@jd155 Thanks for your comment and code sample.
First of all, note that Huang's examples are somewhat contrived: he starts with 2 numerical variables and shows the effect of a third categorical dimension by plotting in the 2 original numerical dimensions. The value of this visualization is evident, but there are many possible applications of k-modes or k-prototypes where this might not be so -- or where the visualization may be downright misleading!
Applying PCA to categorical variables is generally regarded as unwise, given their non-Gaussian nature. There are alternatives (e.g., correspondence analysis), but figuring out the best way of plotting is a research question in itself. Given the limitations of PCA (or any of the many other dimensionality reduction techniques), I'm doubtful I want to give users the illusion that what they are plotting is a faithful 2D representation of their data and clusters. This is especially so in the case of k-modes, and in cases where there are many categorical variables and few numerical ones.
Given the above, I'd rather let users come up with their own insights into proper visualization methods for their data.
More discussion is welcome.
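For what it's worth, a rough sketch of the correspondence-analysis direction (an illustration only, not an endorsed method; all data here is synthetic): multiple correspondence analysis is, loosely, an SVD of the centered one-hot indicator matrix, so a crude 2-D view can be produced with scikit-learn alone. Note this omits MCA's column rescaling, so it inherits many of the caveats above.

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder
from sklearn.decomposition import TruncatedSVD

# Toy categorical data: 50 rows, 5 attributes with 4 levels each.
rng = np.random.default_rng(0)
X = rng.integers(0, 4, size=(50, 5)).astype(str)

# One-hot indicator matrix, centered; a truncated SVD of this is a
# rough stand-in for MCA (which additionally rescales columns).
Z = OneHotEncoder().fit_transform(X).toarray()
coords = TruncatedSVD(n_components=2).fit_transform(Z - Z.mean(axis=0))
```

Whether such a projection is a faithful view of the clusters is exactly the open question raised above.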
from kmodes.
Really like this k-modes implementation, intend to use it a lot. Thanks @nicodv.
I agree that plotting functionality would be instructive, particularly in diagnosing model fit and determining how many clusters and centroids to use. I note the way Huang plots the results of his simulations - see page 9 onwards here: http://grid.cs.gsu.edu/~wkim/index_files/papers/kprototype.pdf - to determine the interactions and influences of numeric and categorical data, which seems advisable given the mixed data types. Scatterplots similar to these would be very useful IMO. Presumably it would be possible to use sklearn's PCA for dimensionality reduction.
To give you a steer, this is a snippet of code I use to visualise k-means model fits using PCA. (I haven't included all the variables and the model as I'm sure you'll get the idea.)
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

pca_2 = PCA(n_components=2)
plot_columns = pca_2.fit_transform(clus_train)
plt.scatter(x=plot_columns[:, 0], y=plot_columns[:, 1], c=model3.labels_)
plt.show()
Would a silhouette plot make sense? Wouldn't we just need to produce a distance matrix to be on our way? If not, what metric should be used to evaluate the performance of the model?
@avilacabs , it's not currently available from the trained model object, but it's probably doable to set it as a post-training attribute (similar to cost_, for example).
@hugo-pires Since this package does not cluster hierarchically, I don't see how a dendrogram would help.
@bahung I've modified the silhouette_samples function by using the mode (from scipy.stats) rather than the mean (there are two instances where this is needed). I pass the precomputed distance matrix (based on the dissimilarity metric) to the function.
I feel this is getting slightly off topic though!
I generally would consider it outside the scope of this package.
Well, just asking. I was thinking about a 2-D scatter plot of the data points, colored by cluster, with the centroids shown at a different size. After some kind of dimensionality reduction, of course.
Something along these lines could be added to the examples. Not a priority for me, but feel free to make a pull request.
I am looking for some Seaborn examples, like the Seaborn factor plot.
Could a dendrogram be a better choice?
@Jomonsugi , yes, a silhouette plot would work well. Scikit-learn gives an example that could be adapted: http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html
Since the silhouette score can be computed based on a pre-computed distance matrix, that is all that would be needed to leverage scikit-learn's existing silhouette functions in combination with kmodes.
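To make that concrete, here is a minimal sketch (synthetic data and random labels purely for illustration; in practice the labels would come from a fitted model, e.g. `KModes(...).fit_predict(X)`). scipy's 'hamming' metric is exactly the simple matching dissimilarity (the proportion of mismatched attributes), and the resulting square matrix can be passed straight to scikit-learn's silhouette_score with metric='precomputed'.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.metrics import silhouette_score

# Synthetic categorical data as integer codes; labels here are random
# stand-ins for the output of a fitted k-modes model.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(40, 5))
labels = rng.integers(0, 2, size=40)

# 'hamming' computes the simple matching dissimilarity: the proportion
# of attributes on which two rows disagree.
D = squareform(pdist(X, metric="hamming"))

score = silhouette_score(D, labels, metric="precomputed")
```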
I would argue it's not that easy to use the existing silhouette score in scikit-learn, as the algorithm calculates the distances between the cluster centres and each point using the mean rather than the mode. Passing pre-computed distances is not enough.
It is, however, an easy fix, and that's how I've been using it.
In terms of calculating a mixed (numeric and categorical) silhouette score, would one use the same gamma as in k-prototypes? I've been using the average silhouette regardless of gamma.
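One way to build such a mixed distance matrix, assuming one mirrors the k-prototypes cost (a sketch with made-up data and an arbitrary gamma, not a definitive recipe): combine squared Euclidean distances on the numeric attributes with gamma times the count of categorical mismatches, then treat the result as precomputed.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
X_num = rng.normal(size=(40, 2))
X_cat = rng.integers(0, 3, size=(40, 3))
labels = rng.integers(0, 2, size=40)  # stand-in for fitted cluster labels
gamma = 0.5  # placeholder; k-prototypes derives gamma from the numeric spread

# k-prototypes cost combines squared Euclidean distance on numeric
# attributes with gamma times the number of categorical mismatches.
D_num = squareform(pdist(X_num, metric="sqeuclidean"))
D_cat = squareform(pdist(X_cat, metric="hamming")) * X_cat.shape[1]
D = D_num + gamma * D_cat

score = silhouette_score(D, labels, metric="precomputed")
```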
@mpikoula Did you have to write a new function to import? Could you please share how you fixed it?
@mpikoula Can you please post your solution?
Thanks.
@mpikoula I'm trying to figure out how to obtain the dissimilarity metric so I can pass it into the modified silhouette_score function. It requires the "label values for each sample." Any pointers? Thanks.
@royzawadzki , have you found a solution? If so, any chance you share the solution?
Thank you.
Hello and apologies for the late response. The dissimilarity metric I have been using is either a simple matching dissimilarity (obtained using the Hamming distance) or the Jaccard distance. Both are available through scipy.spatial.distance.
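A small illustration of those two scipy metrics on toy data (note that scipy's 'jaccard' interprets values as booleans, so it only matches the categorical notion of Jaccard for binary data):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

X = np.array([[1, 0, 2],
              [1, 1, 2],
              [0, 0, 0]])

# 'hamming' = proportion of mismatched positions (simple matching).
# Rows 0 and 1 differ only in the middle attribute, so D_hamming[0, 1] = 1/3.
D_hamming = squareform(pdist(X, metric="hamming"))

# 'jaccard' treats entries as booleans (nonzero = True), so it is only
# appropriate for binary categorical data.
D_jaccard = squareform(pdist(X != 0, metric="jaccard"))
```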
Hi @mpikoula can you please pass a code example to compute silhouette for kmodes?
@mpikoula so in the silhouette_score function you use one of those distances (Hamming or Jaccard), and instead of the mean you use the mode on return, right?
Are you sure this works well for a mixed (numerical + categorical) dataset?
@nicodv how do I get a pre-computed distance matrix from kprototypes?
> @bahung I've modified the silhouette_samples function by using the mode (from scipy.stats) rather than the mean (there are two instances where this is needed). I pass the precomputed distance matrix (based on the dissimilarity metric) to the function.

Hi @mpikoula , could you point to the two instances where that is needed?