
mtan's Issues

Not able to reproduce results for Physionet dataset

Hi,

I downloaded the code from the git repository and ran the mTAND-Full classifier on my local machine with the hyperparameters given. My results do not match the published ones: the paper reports a mean AUROC of 0.858, while I get 0.830.

I could not install torch==1.4.0, so I used torch==1.9 instead. I am attaching the log for reference.
physionet-cnn-0-569442.log

Results are not reproducible on MIMIC-III

I used the provided data extraction process to build the MIMIC-III dataset, which contains 53,211 records, and the given code to split it into train, validation, and test sets. Running mTAND-Full with the given hyperparameters, I do not reach the reported 0.8544 AUROC on MIMIC-III; the highest I get is ~0.838.

This is achieved with this command:
python3 tan_classification.py --alpha 5 --niters 300 --lr 0.0001 --batch-size 128 --rec-hidden 256 --gen-hidden 50 --latent-dim 128 --enc mtan_rnn --dec mtan_rnn --save 1 --classif --norm --learn-emb --k-iwae 1 --dataset mimiciii

Classification Task on MIMIC-III Dataset (mTAND-Full).log

Command for executing "Classification Task on Human Activity Dataset (mTAND-Full)"

Thanks a lot for making the code available with clear instructions for running the different experiments.
Could you please also include on the GitHub page the command for "Classification Task on Human Activity Dataset (mTAND-Full)"? Currently, the page only contains the mTAND-Enc command for the Human Activity dataset ("7. Classification Task on Human Activity Dataset (mTAND-Enc)").

Results are not reproducible at all

I was able to run the code without problems, but the hyperparameters given for mTAND-Full do not reach 0.858 AUROC on PhysioNet 2012. The highest I get is ~0.827 AUROC with ~0.45 AUPRC.

This is achieved with this command:
python3 tan_classification.py --alpha 100 --niters 300 --lr 0.0001 --batch-size 50 --rec-hidden 256 --gen-hidden 50 --latent-dim 20 --enc mtan_rnn --dec mtan_rnn --n 8000 --quantization 0.016 --save 1 --classif --norm --kl --learn-emb --k-iwae 1 --dataset physionet

Doubts about the results of Interpolation on PhysioNet

The reported MSE for interpolation on PhysioNet with 90% of the data points observed is 4.798 ± 0.036 ×10^−3, but I get 1.2 ×10^−3 after running:
python3 tan_interpolation.py --niters 500 --lr 0.001 --batch-size 32 --rec-hidden 64 --latent-dim 16 --quantization 0.016 --enc mtan_rnn --dec mtan_rnn --n 8000 --gen-hidden 50 --save 1 --k-iwae 5 --std 0.01 --norm --learn-emb --kl --seed 0 --num-ref-points 64 --dataset physionet --sample-tp 0.9

If the test data and predictions are mapped back to the original scale by undoing the normalization applied during preprocessing, I get an MSE of 4225.6895.

How can I reproduce the results presented in the paper?
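
For context, here is a small sketch of why undoing the normalization inflates the MSE (the values and the standardization form are hypothetical; the repository's preprocessing may differ): reversing a per-feature standardization multiplies the squared error by the square of that feature's standard deviation, so an MSE on the order of 10^−3 in normalized units can correspond to a very large MSE in original units.

import numpy as np

# Hypothetical illustration: MSE in normalized units vs. original units.
# Assumes per-feature standardization x_norm = (x - mean) / std; the actual
# preprocessing in the repository may use a different scheme.
rng = np.random.default_rng(0)
mean, std = 80.0, 50.0                       # hypothetical feature statistics
target_norm = rng.normal(size=1000)          # normalized ground truth
pred_norm = target_norm + rng.normal(scale=0.035, size=1000)  # normalized predictions

mse_norm = np.mean((pred_norm - target_norm) ** 2)

# Undo the normalization before computing the error: the MSE grows by std**2.
target_orig = target_norm * std + mean
pred_orig = pred_norm * std + mean
mse_orig = np.mean((pred_orig - target_orig) ** 2)

print(mse_norm, mse_orig, mse_orig / mse_norm)  # ratio equals std**2 = 2500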

Not executable without NVIDIA GPU

When running the tan_interpolation.py command, I get the following output:

50078
(1000, 20) (1000, 20) (1000, 100)
(1000, 20, 3)
[[ 0.8516054 1. 0.09 ]
[ 0.95830714 1. 0.2 ]
[ 1.33468433 1. 0.25 ]
[ 1.95121209 1. 0.37 ]
[ 1.88823672 1. 0.39 ]
[ 1.18921462 1. 0.46 ]
[ 1.03273212 1. 0.47 ]
[ 0.70050108 1. 0.49 ]
[ 0.26575831 1. 0.64 ]
[ 0.42114019 1. 0.69 ]
[ 0.33637057 1. 0.72 ]
[ 0.08979964 1. 0.77 ]
[ 0.01292361 1. 0.79 ]
[-0.01584657 1. 0.8 ]
[-0.03790797 1. 0.81 ]
[-0.05342294 1. 0.82 ]
[-0.04604355 1. 0.87 ]
[-0.03038688 1. 0.88 ]
[-0.03038688 1. 0.88 ]
[ 0.27021355 1. 0.99 ]]
(800, 20, 3) (200, 20, 3)
parameters: 49400 64381
Traceback (most recent call last):
File "tan_interpolation.py", line 129, in
out = rec(torch.cat((subsampled_data, subsampled_mask), 2), subsampled_tp)

(further traceback frames omitted)

and then,

AssertionError:
Found no NVIDIA driver on your system. Please check that you
have an NVIDIA GPU and installed a driver from
http://www.nvidia.com/Download/index.aspx

Is there a way to run these scripts without an NVIDIA GPU?
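
One common workaround is to select the device conditionally and fall back to CPU when no CUDA driver is present. The sketch below assumes the scripts hard-code `.cuda()` calls (which the error suggests); it is not the repository's code, and the tensor shapes are arbitrary.

import torch
import torch.nn as nn

# Minimal sketch: use CUDA only when a working driver/device is available,
# otherwise fall back to CPU, and move the model and tensors with .to(device)
# instead of calling .cuda() unconditionally.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(41, 2).to(device)           # instead of nn.Linear(41, 2).cuda()
batch = torch.randn(8, 41, device=device)     # instead of torch.randn(8, 41).cuda()

out = model(batch)
print(out.shape, device)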

Experimental Setup on Activity dataset

In your results table, the per-time-point classification accuracies on the Activity dataset for the RNN and ODE baselines differ significantly from those reported in the Latent-ODE paper (https://arxiv.org/pdf/1907.03907.pdf). For instance, you report 88.5% accuracy for ODE-RNN, while the original paper reports 82.9%. Is there any difference in the experimental setup or metrics you follow? Thanks.
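
For reference, a sketch of how per-time-point accuracy is often computed on the Activity dataset (this is an assumption about the metric, not code from either repository): every observed time step contributes one prediction, and correctness is averaged over all observed time points rather than over sequences.

import torch

# Hypothetical shapes: batch of 4 sequences, 50 time steps, 11 activity classes.
logits = torch.randn(4, 50, 11)                 # per-time-point class scores
labels = torch.randint(0, 11, (4, 50))          # per-time-point ground-truth labels
observed = torch.rand(4, 50) > 0.3              # mask of observed time points

pred = logits.argmax(dim=-1)
# Average correctness over observed time points only, pooled across sequences.
acc = (pred[observed] == labels[observed]).float().mean()
print(acc.item())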
