
Comments (7)

JeanKaddour avatar JeanKaddour commented on June 30, 2024

Thank you very much for your interest in our work, again!

  • For the TCGA dataset, we use a bias strength of 0.1 for the results in Section 6.1, and when we tested robustness to different bias strengths in Section 6.3, we went up to 0.6. I remember our propensity policy being very sensitive to the bias strength for the TCGA covariates, so bias=1.0 might produce something degenerate (e.g., always assigning the same treatment for all covariates). That said, as of now it is not clear to me why this should hurt SIN's performance, particularly in comparison to the baselines. Could you please try again with lower bias strengths?
  • The provided default hyper-parameters should produce sensible results. However, we tuned them over 10 runs with random search and values sampled from the ranges provided in Table 4 in the Appendix to ensure a fair comparison across methods. Also, please note that we averaged the errors in the paper across 10 random seeds.
  • The above SIN results look strange; generally speaking, for all methods, the errors should increase as more treatments are added to the test set (i.e., as K increases). Specifically for SIN, the learned propensity features are less informative for test treatments that are very unlikely to be selected for certain covariates.
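The role of the bias strength can be sketched with a toy softmax-style assignment policy. Everything below (the function name, the score vector, the softmax form) is an illustrative assumption, not the repo's actual propensity implementation:

```python
import numpy as np

def propensity(scores, bias):
    """Toy softmax policy over treatments.

    `scores` are covariate-dependent treatment scores; `bias` scales how
    strongly assignment depends on them. bias=0 gives a uniform policy;
    a large bias makes the policy near-deterministic, i.e., it almost
    always selects the highest-scoring treatment.
    """
    logits = bias * np.asarray(scores, dtype=float)
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    return p / p.sum()

# bias=0.0 -> uniform over treatments; bias=10.0 -> treatment 1 dominates.
uniform = propensity([0.1, 2.0, 0.5], 0.0)
peaked = propensity([0.1, 2.0, 0.5], 10.0)
```

This illustrates the degenerate regime mentioned above: as `bias` grows, the same treatment is chosen for essentially all covariates.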

from sin.

yfzhang114 avatar yfzhang114 commented on June 30, 2024

Thanks for your kind response! I will have a try.


JeanKaddour avatar JeanKaddour commented on June 30, 2024

I just ran the TCGA experiment with all default values, as specified in the argument parser (the default bias = 0.3).
SIN:
[screenshot: SIN results, 2021-11-29]
GraphITE:
[screenshot: GraphITE results, 2021-11-29]


yfzhang114 avatar yfzhang114 commented on June 30, 2024

For bias=0.3, I got almost the same results as you; however, when setting bias=0.1, the performance is as I described above. 😂


JeanKaddour avatar JeanKaddour commented on June 30, 2024

Thank you for sharing that insight!

  • I set up a new machine, installed all libraries from scratch, and now, for bias=0.1, I'm actually running into errors. I suspect these are caused by numerical issues due to changes in the libraries used or in the QM9 dataset. When I ran the experiments more than half a year ago on a different machine, I didn't experience them. I will try to debug this soon; my apologies for the issue.
  • For bias=0.2, I get these results with the default hyper-parameters:
    SIN: [screenshot: SIN results, bias=0.2]
    GraphITE: [screenshot: GraphITE results, bias=0.2]
    These results still deviate from those reported in the paper, because there we chose the best runs from our hyper-parameter search based on the validation loss. I hope this helps.
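Choosing the best run by validation loss can be sketched as a small random search. The function `train_and_eval`, the parameter ranges, and all names here are hypothetical stand-ins; the actual search ranges are listed in Table 4 of the paper:

```python
import random

def random_search(train_and_eval, param_ranges, n_runs=10, seed=0):
    """Sample n_runs configs uniformly from `param_ranges`
    (name -> (low, high)) and keep the one with the lowest
    validation loss. `train_and_eval(config) -> val_loss` is an
    assumed user-supplied training routine.
    """
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_runs):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        loss = train_and_eval(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

With defaults instead of a tuned configuration, the reported errors are naturally somewhat worse than the paper's best-run numbers.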


yfzhang114 avatar yfzhang114 commented on June 30, 2024

Got it! Thanks!


JeanKaddour avatar JeanKaddour commented on June 30, 2024

I found and fixed the runtime error I experienced for TCGA, bias=0.1. The cause was that very few QM9 graphs have at most one edge type, whereas the majority have three. For some reason, this broke the minibatch graph collation implemented in PyG 2, raising a non-informative exception. I suspect I didn't experience this when I ran the experiments almost one year ago because that was before the switch from PyG 1 to PyG 2.
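One defensive workaround for this kind of collation failure is to drop the outlier graphs before batching. The sketch below uses plain dicts with an `edge_types` list as a stand-in for PyG `Data` objects; it is illustrative, not the actual fix applied in the repo:

```python
from collections import Counter

def filter_by_majority_edge_types(graphs):
    """Keep only graphs whose number of distinct edge types matches the
    dataset-wide majority, so minibatch collation sees a consistent
    edge-type dimension.

    `graphs` is a list of dicts with an 'edge_types' list (an assumed
    stand-in for PyG Data objects).
    """
    counts = [len(set(g["edge_types"])) for g in graphs]
    majority = Counter(counts).most_common(1)[0][0]
    return [g for g, c in zip(graphs, counts) if c == majority]
```

For QM9 as described above, this would discard the rare graphs with zero or one edge types and keep the three-edge-type majority.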

Simply running

python generate_data.py
python run_model_training.py

yields the results below.

In-Sample
[screenshot: in-sample results]

Out-Sample
[screenshot: out-of-sample results]

