glia-net's People

Contributors

meteorshub

glia-net's Issues

Can't connect to the dataset

Hey, I have tried many times but I still can't download your datasets following the tips you offered on 25 Jan.
Could you please provide another IP?
Thanks a lot!

How is the overall metrics value computed?

When I use your evaluate_per_case.py and metrics.py to calculate the metrics on the internal test set, I found that the overall value is different from the actual average of the per-case metrics:

ap auc precision recall sensitivity specificity dsc
0.55 0.78 0.83 0.55 0.55 1.00 0.66
0.48 0.77 0.59 0.53 0.53 1.00 0.51

The first row of the table above is the overall value I get from the logging file using your evaluate_per_case.py and metrics.py code.
The second row is the average of the metrics over all 152 internal cases.

I wonder why this inconsistency happens.
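
Not the author, but one common source of this kind of gap is that an "overall" value is computed by pooling counts over all cases (micro-average), while the per-case average treats every case equally (macro-average). I don't know whether that is exactly what the repository does; the hypothetical sketch below only illustrates that the two numbers are generally different for Dice:

import numpy as np

def macro_vs_micro_dsc(preds, gts):
    # preds, gts: lists of binary numpy arrays, one pair per case (hypothetical helper)
    per_case = []
    tp_sum = pred_sum = gt_sum = 0
    for p, g in zip(preds, gts):
        tp = np.logical_and(p, g).sum()
        per_case.append(2.0 * tp / (p.sum() + g.sum() + 1e-8))  # this case's DSC
        tp_sum, pred_sum, gt_sum = tp_sum + tp, pred_sum + p.sum(), gt_sum + g.sum()
    macro = float(np.mean(per_case))                    # average of per-case DSC
    micro = 2.0 * tp_sum / (pred_sum + gt_sum + 1e-8)   # DSC over pooled voxels
    return macro, micro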

CPU RAM usage keeps growing during evaluation

I am training the model on GPU with batch size = 16, and everything is fine.
However, CPU memory keeps growing during evaluation. How can I fix it?
I've tried torch.no_grad() and torch.cuda.empty_cache(), but they don't help.
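
Not the author, but a frequent cause of growing host memory in PyTorch evaluation loops is keeping whole tensors alive in a Python list (per-batch predictions or metric tensors) instead of plain floats. This may not be what happens in this repository; the self-contained sketch below only illustrates the pattern (the tiny model and loader are stand-ins, not names from this code base):

import torch
import torch.nn as nn

# Hypothetical stand-ins for illustration only.
model = nn.Conv3d(1, 1, kernel_size=3, padding=1).eval()
loader = [(torch.randn(1, 1, 32, 32, 32), torch.randint(0, 2, (1, 1, 32, 32, 32)).float())
          for _ in range(4)]

dice_scores = []
with torch.no_grad():
    for image, label in loader:
        prob = torch.sigmoid(model(image))
        inter = (prob.round() * label).sum()
        dsc = 2 * inter / (prob.round().sum() + label.sum() + 1e-8)
        # Appending the tensor `dsc` (or `prob`) would keep its buffers alive for
        # every batch of the loop; store a plain Python float instead.
        dice_scores.append(dsc.item())
        del prob, dsc  # drop tensor references before the next batch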

Initialization with shared reference leads to averaging the losses after each epoch.

Hey,

I believe I have found a bug in this project.

When I train a GLIA-Net network, the total, the local and the global average losses are all equal after each epoch.

An example follows:

2021-09-04 23:39:19 [MainThread] INFO [TaskAneurysmSegTrainer] - (Time epoch: 6081.78)train epoch 57/66 finished. total_loss_avg: 0.1874 local_loss_avg: 0.1874 global_loss_avg: 0.1874 ap: 0.1771 auc: 0.9058 precision: 0.6890 recall: 0.0313 dsc: 0.0598 hd95: 20.5659 per_target_precision: 0.0385 per_target_recall: 0.0057

I believe the problem originates from the initialization of the OrderedDicts avg_losses and eval_avg_losses, created in the following lines:

avg_losses = OrderedDict(zip(list(losses.keys()), [RunningAverage()] * len(losses)))

eval_avg_losses = OrderedDict(zip(list(losses.keys()), [RunningAverage()] * len(losses)))

The exact problem is that the list containing the 3 RunningAverages is created using the notation [a]*n. This notation fills the list with n references to the same object, so the 3 entries share a single RunningAverage. Therefore, when we update any one of them, all three losses are updated together.
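
A minimal standalone demonstration of the pitfall (the RunningAverage below is a simplified stand-in, not the project's actual class):

class RunningAverage:
    # Simplified stand-in for the project's RunningAverage
    def __init__(self):
        self.count, self.total = 0, 0.0
    def update(self, value):
        self.count += 1
        self.total += value
    @property
    def avg(self):
        return self.total / max(self.count, 1)

shared = [RunningAverage()] * 3                   # three names, one shared object
separate = [RunningAverage() for _ in range(3)]   # three independent objects

shared[0].update(1.0)
separate[0].update(1.0)
print([a.avg for a in shared])    # [1.0, 1.0, 1.0] -> all three "losses" move together
print([a.avg for a in separate])  # [1.0, 0.0, 0.0] -> only the first one changes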

A solution for this problem would be to use this initialization [RunningAverage() for _ in range(len(losses))] instead of [RunningAverage()] * len(losses).

This solution seems to work for me.

An example follows:

2021-09-08 20:58:40 [MainThread] INFO [TaskAneurysmSegTrainer] - (Time epoch: 6101.42)train epoch 86/86 finished. total_loss_avg: 0.2528 local_loss_avg: 0.2200 global_loss_avg: 0.0328 ap: 0.4419 auc: 0.9313 precision: 0.5942 recall: 0.3963 dsc: 0.4755 hd95: 12.1462 per_target_precision: 0.1321 per_target_recall: 0.1637

And as we can see the total average loss is equal to the sum of the local and global average losses.

Great work again!

Best regards,

Gabriel

152 internal test dataset missing

Thanks for sharing the dataset and for your work. I have downloaded the 1186 internal training cases from the server, but I did not find the 152 internal test cases. Could you share the test data on the server as well? Thank you so much!

Can I use MR data to train your model?

I have tried many times to use my MRA-TOF data for training, but I still haven't managed to train a model that works. The metrics cannot be calculated correctly because the fp, tp, and so on are wrong (for example, the precision is always 0.0000 or 0.5000).

So, how can I change the code to train on my MRA data successfully? Maybe the spacing? Normalization? Or something else?
I think the main issue is the difference between MRA and CTA data; I have already checked other possible causes such as num_classes. I tried to identify the differences between my data and your CTA data and to transform the MRA data to look like CTA data, but it still failed to work.

I need some help :( Maybe you can give me some suggestions on how to preprocess the MRA data or how to change the code?
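
Not the author, but since the pipeline was built around CTA intensities (roughly a Hounsfield-unit scale), one thing worth checking is the intensity normalization: a fixed CT window applied to MRA-TOF values can map almost everything to the same number, which would explain degenerate precision values. A hypothetical per-volume normalization for MRA might look like the sketch below (the function name and percentiles are my own choices, not part of the repository):

import numpy as np

def normalize_mra(volume, low_pct=0.5, high_pct=99.5):
    # MRA-TOF intensities are scanner/protocol dependent, unlike CT Hounsfield
    # units, so clip per-volume percentiles and z-score instead of a fixed window.
    lo, hi = np.percentile(volume, [low_pct, high_pct])
    v = np.clip(volume, lo, hi).astype(np.float32)
    return (v - v.mean()) / (v.std() + 1e-8)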

How to decide whether a predicted target matches the ground truth?

Hi, when evaluating performance at the target level, what rule do you use to decide whether a prediction and a ground-truth target are matched?
I found the code related to this judgement in metrics.py, but what exactly does it measure?

def _is_match(center_1, area_1, center_2, area_2):
    ndim = len(center_1)
    if sum([(center_1[i] - center_2[i]) ** 2 for i in range(ndim)]) ** 0.5 < (
            0.62 * (area_1 ** (1 / ndim) + area_2 ** (1 / ndim))):  # for 3d case using 0.62 factor
        return True
    return False
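
Not an official answer, but my reading of this function: two targets are counted as a match when the Euclidean distance between their centers is smaller than 0.62 * (area_1 ** (1/ndim) + area_2 ** (1/ndim)), where area appears to be each target's voxel count. In 3D, area ** (1/3) is the cube root of the volume, and 0.62 is approximately (3 / (4 * pi)) ** (1/3), so 0.62 * area ** (1/3) is roughly the radius of a sphere of that volume; the rule presumably says the two equivalent spheres must overlap (center distance less than the sum of their radii). A hypothetical numeric check:

# Hypothetical 3D example: targets of 1000 and 512 voxels have equivalent radii of
# about 0.62 * 10.0 = 6.2 and 0.62 * 8.0 = 5.0, so the match threshold is ~11.16 voxels.
print(_is_match((0, 0, 0), 1000, (5, 5, 5), 512))     # distance ~8.66 < 11.16 -> True
print(_is_match((0, 0, 0), 1000, (10, 10, 10), 512))  # distance ~17.32 > 11.16 -> False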
