
Comments (22)

nicodv commented on July 25, 2024

@jd155 Thanks for your comment and code sample.

First of all, note that Huang's examples are somewhat contrived: he starts with 2 numerical variables and shows the effect of a third categorical dimension by plotting in the 2 original numerical dimensions. The value of this visualization is evident, but there are many possible applications of k-modes or k-prototypes where this might not be so -- or where the visualization may be downright misleading!

Applying PCA to categorical variables is generally regarded as unwise, given their non-Gaussian nature. There are other alternatives (e.g., correspondence analysis), but figuring out the best way of plotting is a research question in itself. Given the limitations of PCA (or any of the many other dimensionality reduction techniques), I'm doubtful I want to give users the illusion that what they are plotting is a faithful 2D representation of their data and clusters. This is especially so in the case of k-modes, and in cases where there are many categorical variables and few numerical ones.

Given the above, I'd rather let the user come up with their own insights into proper visualization methods for their data.

More discussion is welcome.


jd155 commented on July 25, 2024

Really like this k-modes implementation, intend to use it a lot. Thanks @nicodv.

I agree that plotting functionality would be instructive, particularly in diagnosing model fit and determining how many clusters and centroids to use. I note the way Huang plots the results of his simulations - see page 9 onwards here: http://grid.cs.gsu.edu/~wkim/index_files/papers/kprototype.pdf - to determine the interactions and influences of numeric and categorical data, which seems advisable given the mixed data types. Scatterplots similar to these would be very useful IMO. Presumably it would be possible to use sklearn's PCA for dimensionality reduction.

To give you a steer, this is a snippet of code I use to visualise k-means model fits using PCA. (I haven't included all the variables and the model as I'm sure you'll get the idea.)

import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

# Project the training data onto its first two principal components
pca_2 = PCA(2)
plot_columns = pca_2.fit_transform(clus_train)

# Colour each point by the cluster label assigned by the fitted model
plt.scatter(x=plot_columns[:, 0], y=plot_columns[:, 1], c=model3.labels_)
plt.show()


Jomonsugi commented on July 25, 2024

Would a silhouette plot make sense? Wouldn't we just need to produce a distance matrix to be on our way? If not, what metric should be used to evaluate the performance of the model?


nicodv commented on July 25, 2024

@avilacabs , it's not currently available from the trained model object, but it's probably doable to set it as a post-training attribute (similar to cost_, for example).


nicodv commented on July 25, 2024

@hugo-pires Since this package does not cluster hierarchically, I don't see how a dendrogram would help.


mpikoula commented on July 25, 2024

@bahung I've modified the silhouette_samples function by using the mode (from scipy.stats) rather than the mean (there are two instances where this is needed). I pass the precomputed distance matrix (based on the dissimilarity metric) to the function.

I feel this is getting slightly off topic though!


nicodv commented on July 25, 2024

I generally would consider it outside the scope of this package.


hugo-pires commented on July 25, 2024

Well, just asking. I was thinking about a 2-D scatter plot of the data points, with different color by cluster and the centroid with different size. After some kind of dimensionality reduction, of course.


nicodv commented on July 25, 2024

Something along these lines could be added to the examples. Not a priority for me, but feel free to make a pull request.


hugo-pires commented on July 25, 2024

I am looking for some Seaborn examples like:
Seaborn factor plot


hugo-pires commented on July 25, 2024

Could a dendrogram be a better choice?


nicodv commented on July 25, 2024

@Jomonsugi , yes, a silhouette plot would work well. Scikit-learn gives an example here that could be adapted: http://scikit-learn.org/stable/auto_examples/cluster/plot_kmeans_silhouette_analysis.html

Since the silhouette score can be computed based on a pre-computed distance matrix, that is all that would be needed to leverage scikit-learn's existing silhouette functions in combination with kmodes.
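For illustration, here is a minimal sketch of how that could look for purely categorical data: build a simple matching-dissimilarity matrix and hand it to scikit-learn's silhouette_score with metric='precomputed'. The helper matching_dissim_matrix and the names X and km below are illustrative assumptions, not part of kmodes.

import numpy as np
from sklearn.metrics import silhouette_score
from kmodes.kmodes import KModes

def matching_dissim_matrix(X):
    # Pairwise simple matching dissimilarity: number of mismatching attributes
    X = np.asarray(X)
    dist = np.zeros((X.shape[0], X.shape[0]))
    for i in range(X.shape[0]):
        dist[i] = (X != X[i]).sum(axis=1)
    return dist

# km = KModes(n_clusters=4).fit(X)   # X: 2-D array of categorical attributes (illustrative)
# score = silhouette_score(matching_dissim_matrix(X), km.labels_, metric='precomputed')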


mpikoula commented on July 25, 2024

I would argue it's not as easy as that to use the existing silhouette score in scikit-learn, as the algorithm calculates the distances between the cluster centres and each point using the mean rather than the mode. Passing pre-computed distances is not enough.

It is however an easy fix and that's how I've been using it.

In terms of calculating a mixed (numeric and categorical) silhouette score, would one use the same gamma as in k-prototypes? I've been using the average silhouette regardless of gamma.
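To make the gamma question concrete, here is one hedged sketch (not an official kmodes utility) of a mixed dissimilarity matrix in the spirit of k-prototypes: squared Euclidean distance on the numeric columns plus gamma times the number of categorical mismatches. Xnum, Xcat (integer-encoded) and gamma are assumptions for illustration; gamma could, for example, be the value used when fitting KPrototypes.

from scipy.spatial.distance import pdist, squareform
from sklearn.metrics import silhouette_score

def mixed_dissim_matrix(Xnum, Xcat, gamma):
    # Numeric part: pairwise squared Euclidean distances
    num_d = squareform(pdist(Xnum, metric='sqeuclidean'))
    # Categorical part: 'hamming' returns the fraction of mismatches,
    # so multiply by the number of categorical columns to get counts
    cat_d = squareform(pdist(Xcat, metric='hamming')) * Xcat.shape[1]
    return num_d + gamma * cat_d

# labels = kp.fit_predict(...)  # kp: a fitted KPrototypes model (illustrative)
# score = silhouette_score(mixed_dissim_matrix(Xnum, Xcat, gamma), labels,
#                          metric='precomputed')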


bahung commented on July 25, 2024

@mpikoula Did you have to write a new function to import? Could you please share how you fixed this function?


delilio commented on July 25, 2024

@mpikoula Can you please post your solution?

Thanks.


royzawadzki commented on July 25, 2024

@mpikoula I'm trying to figure out how to obtain the dissimilarity metric so I can pass it into the modified silhouette_score function. It requires the "label values for each sample." Any pointers? Thanks.


loukach commented on July 25, 2024

@royzawadzki , have you found a solution? If so, any chance you could share it?
Thank you.


mpikoula commented on July 25, 2024

Hello and apologies for the late response. The dissimilarity metric I have been using is either simple matching dissimilarity (obtained using the Hamming distance) or the Jaccard distance. Both are available through scipy.spatial.distance.
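For example, assuming X is a 2-D array of encoded categorical attributes (the names here are illustrative):

from scipy.spatial.distance import pdist, squareform

# Fraction of mismatching attributes between every pair of rows
hamming_d = squareform(pdist(X, metric='hamming'))
# Jaccard distance, appropriate when X is boolean / one-hot encoded
jaccard_d = squareform(pdist(X, metric='jaccard'))
# Either square matrix can then be passed to silhouette functions
# with metric='precomputed'.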


LorenzoBottaccioli commented on July 25, 2024

Hi @mpikoula, can you please share a code example to compute the silhouette for kmodes?


avilacabs commented on July 25, 2024

@mpikoula so in the silhouette_score function you use one of those distances (Hamming or Jaccard) and, instead of the mean, you use the mode, right?
Are you sure this works well for a mixed (numerical + categorical) dataset?


avilacabs commented on July 25, 2024

@nicodv how do I get a precomputed distance matrix from kprototypes?


rosskempner commented on July 25, 2024

> @bahung I've modified the silhouette_samples function by using the mode (from scipy.stats) rather than the mean (there are two instances where this is needed). I pass the precomputed distance matrix (based on the dissimilarity metric) to the function.
>
> I feel this is getting slightly off topic though!

Hi @mpikoula, could you help point to those two instances where that change is needed?

