Comments (11)

pjsjongsung avatar pjsjongsung commented on June 10, 2024 1

It might be due to randomness, right?

The test_icm_square function uses the square_1 variable.
This variable is random data with added noise, which, in theory, could produce values for which the icm_ising model shows no difference between beta values of 0 and 2.

I am not sure whether comparing the two beta values is necessary, but we could change the test to something like test_square_iter, which compares the initial segmentation with the output of the icm_ising model.

from dipy.

pjsjongsung avatar pjsjongsung commented on June 10, 2024 1

It shouldn't take too long. I can do that.

skoudoro avatar skoudoro commented on June 10, 2024

Thank you for this report.

I agree, this is hard to reproduce locally, but I am pretty sure this is due to a cast.

jhlegarreta avatar jhlegarreta commented on June 10, 2024

It might be the case that it is due to a cast, but results should also be consistent across runs, shouldn't they?

skoudoro avatar skoudoro commented on June 10, 2024

Do you have time to take the lead on that @pjsjongsung? I am ok with the proposed solution.

jhlegarreta avatar jhlegarreta commented on June 10, 2024

I'm fine with the result being 0 for a given random value, but this value should be fixed across runs, the very same way we get the exact same result when fixing a random seed, if I am not missing something.
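A minimal sketch of what fixing the seed looks like with NumPy (the array shape, seed value, and noise scale are illustrative, not taken from the actual test):

```python
import numpy as np

# Fixing the seed makes the "random" noisy square identical on every run,
# so the test outcome cannot change between CI runs.
rng = np.random.default_rng(42)  # seed value is arbitrary/illustrative
noisy_square = np.zeros((16, 16))
noisy_square[4:12, 4:12] = 1.0
noisy_square += rng.normal(scale=0.1, size=noisy_square.shape)

# Re-creating the generator with the same seed reproduces the data exactly.
rng2 = np.random.default_rng(42)
square_again = np.zeros((16, 16))
square_again[4:12, 4:12] = 1.0
square_again += rng2.normal(scale=0.1, size=square_again.shape)

assert np.array_equal(noisy_square, square_again)
```

Within a single NumPy version, the same seed is guaranteed to produce the same stream of values, which is exactly the "same result when fixing a random seed" behavior described above.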

Changing the check of the test will not resolve the underlying issue (provided that the above assumption is correct). Fixing it and adding the new additional check may be more appropriate IMO.

pjsjongsung avatar pjsjongsung commented on June 10, 2024

It seems we can set a seed, which will be the same across platforms, but not across NumPy versions.

However, it might still be stable enough, and all we would have to do is change the seed if it does not work on a newer NumPy version.

Alternatively, we could give the test a few more chances by running it in a for loop with, say, 10 iterations, randomizing the data on every iteration. If the test does not take a lot of time, it won't be a bottleneck but would still make the test much more stable.
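The retry idea could look roughly like this (a sketch; `noisy_check` is a hypothetical stand-in for the real assertion, not the DIPY test):

```python
import numpy as np

def noisy_check(rng):
    # Hypothetical stand-in for the real comparison; passes with
    # overwhelming probability but can, in principle, fail by chance.
    data = rng.normal(size=100)
    return abs(data.mean()) < 1.0

# Give the check up to 10 attempts, drawing fresh random data each time;
# the test passes if any single attempt succeeds.
rng = np.random.default_rng()
passed = any(noisy_check(rng) for _ in range(10))
assert passed
```

The trade-off is that this only drives the failure probability down (roughly to p**10 for a per-attempt failure probability p); it does not make the test deterministic the way a fixed seed does.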

pjsjongsung avatar pjsjongsung commented on June 10, 2024

The ultimate problem is that the chance of the result being 0 is noticeably higher than expected.

I am not entirely sure what the test is supposed to check, but if such a comparison between two beta values is necessary, and we want to completely remove any possibility of a random assertion failure, we could also load a file with some random values that we know will pass the check.
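In sketch form, the fixture approach means generating the data once, verifying it passes, freezing it to a file, and having the test load that file instead of generating new data (file name and shapes here are illustrative):

```python
import os
import tempfile
import numpy as np

# One-time step: generate data known to pass the check and freeze it.
rng = np.random.default_rng(0)
square = np.zeros((16, 16))
square[4:12, 4:12] = 1.0
square += rng.normal(scale=0.1, size=square.shape)

path = os.path.join(tempfile.mkdtemp(), "square_fixture.npy")
np.save(path, square)

# In the test: load the frozen data; the .npy round-trip is bit-exact,
# so the test input never varies across runs or NumPy versions.
loaded = np.load(path)
assert np.array_equal(loaded, square)
```

In practice the frozen array would live in a data repository (as mentioned above for the DL models on figshare) rather than a temp directory.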

I did a similar thing with the DL models using figshare.

jhlegarreta avatar jhlegarreta commented on June 10, 2024

I have not studied the behavior closely, but having consistent results on one platform, for a given NumPy version, seems to me the first step to getting the test to pass. Then, if the result can still be 0, either there is a bug, the check needs to be changed using your suggestions, or it is not the right check to have.

pjsjongsung avatar pjsjongsung commented on June 10, 2024

OK. To summarize, what we can do for now is just include seeds in the random functions. This would be a temporary fix, because as NumPy versions change, the errors could come back.

In later versions, we could think about why the test exists and what we want to do about it.

Sounds reasonable? @jhlegarreta @skoudoro ?

jhlegarreta avatar jhlegarreta commented on June 10, 2024

This would be a temporary fix, because as NumPy versions change, the errors could come back.

IMO fixing the seed is always a must, unless some intended, documented, and expected randomness (including the appropriate assert checks in such a case) is necessary.

It can even fail now if the seed happens to provide a 0 result.

But yes, IMO this is a first step @pjsjongsung 💯. Thanks.

Maybe a decorator can be written so that we do not need to repeat the same two Python statements in every function that uses some randomness, and the behavior becomes apparent at first sight.
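Such a decorator could be sketched as follows (`with_seed` and the test body are hypothetical names for illustration; this uses NumPy's legacy global RNG, which is what `np.random.seed` controls):

```python
import functools
import numpy as np

def with_seed(seed):
    """Run the decorated test with NumPy's global RNG fixed to `seed`,
    making the use of randomness visible at the definition site."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            np.random.seed(seed)  # reset the legacy global RNG state
            return func(*args, **kwargs)
        return wrapper
    return decorator

@with_seed(123)
def test_square_iter():
    # Any np.random call inside now produces the same values every run.
    return np.random.normal(scale=0.1, size=(4, 4))

# Because the seed is reset on every call, repeated calls are identical.
first = test_square_iter()
second = test_square_iter()
assert np.array_equal(first, second)
```

Tests that use the newer `np.random.default_rng` API would instead need the generator (or its seed) passed in explicitly, since `np.random.seed` does not affect `Generator` instances.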
