
diseaseprogressionmodeling-hmm's Introduction

DiseaseProgressionModeling-HMM

Code to implement a personalized input-output hidden Markov model (PIOHMM) and other hidden Markov model variants. PIOHMMs are described in K.A. Severson, L.M. Chahine, L. Smolensky, K. Ng, J. Hu, and S. Ghosh, 'Personalized Input-Output Hidden Markov Models for Disease Progression Modeling', MLHC 2020. Full details are available here. The PIOHMM model class is in piohmm.py.

Running the code

See the Jupyter notebook 'Sample Model' for a simple example of the model. There are three primary components to using a PIOHMM (a minimal usage sketch follows the list):

  • HMM to specify the particular model; see __init__ for a description of the options
  • learn_model to perform inference
  • predict_sequence to use the Viterbi algorithm to make state predictions
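
The snippet below is a minimal sketch of these three steps. The constructor and method arguments are illustrative assumptions, not the exact piohmm API; the actual options are documented in __init__ in piohmm.py.

```python
# Minimal usage sketch. Argument names and data shapes here are
# illustrative assumptions, not the exact piohmm API; see __init__
# in piohmm.py for the real options.
import numpy as np
from piohmm import HMM

y = np.random.randn(10, 31, 1)  # hypothetical data: 10 patients, 31 visits, 1 feature

model = HMM(y, k=8)                # 1. specify the particular model (here, 8 states)
model.learn_model()                # 2. perform inference
states = model.predict_sequence()  # 3. Viterbi state predictions
```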

diseaseprogressionmodeling-hmm's People

Contributors

kseverso

diseaseprogressionmodeling-hmm's Issues

Problems related to model loss function

Dear kseverso:
Hello, I am trying to reproduce the PD progression model, but I have encountered some difficulties and hope to get some help from you.

I followed the preprocessing steps from "Discovery-of-PD-States-using-ML", processed the PPMI dataset (with some changes due to changes in the dataset), and then applied it to the PIOHMM. However, even when I set k=8 hoping to obtain 8 states, I only recover 3-4 states on the training set, and sequences often reach their final state (for example, states 3 and 4) by t<5 (T=31). This effect becomes more pronounced with more iterations and can end with only two states.
Also, I am not sure exactly what the model parameter learning steps mean. Are the ELBO and log_prob (self.ll) reported at each iteration analogous to the 'loss' in neural networks? In my runs, the ELBO was around 30,000 after the first iteration, changed to around -10,000 after the second, and then remained negative between -100,000 and -120,000. log_prob, on the other hand, stays at about 110,000 and, with a learning rate of 1e-18, fluctuates around 120,000 after about 20 iterations (convergence is impossible even using the usE_CC convergence criterion). How do the ELBO and log_prob behave when you apply the model to the PPMI dataset, and what orders of magnitude are they?
Looking forward to your reply!
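
As general context for the question above, the sketch below illustrates how an ELBO-based stopping rule typically works in EM-style training; it is a generic illustration, not the repository's actual convergence check, and the tolerance value is an assumption. Under exact EM updates the ELBO is non-decreasing, so large sustained drops like those described usually indicate a numerical or modeling issue rather than ordinary training noise.

```python
def has_converged(elbo_history, tol=1e-4):
    """Generic EM-style stopping rule (illustrative, not piohmm's):
    stop when the relative change in the ELBO between successive
    iterations falls below `tol`."""
    if len(elbo_history) < 2:
        return False
    prev, curr = elbo_history[-2], elbo_history[-1]
    # Exact EM updates never decrease the ELBO, so a sustained drop
    # (e.g., from +30,000 to -100,000) suggests a bug or numerical issue.
    return abs(curr - prev) / (abs(prev) + 1e-12) < tol
```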
