
rgmp's People

Contributors

seoungwugoh


rgmp's Issues

Accuracy without pre-training

Hi, I would like to ask about the ablation study without pre-training (-PT).
How many epochs did you train for, and how did you set the learning rate in the fine-tuning stage, to reach the accuracy in Table 5 (J mean 68.6, F mean 68.9)?
Thanks for your attention; I hope to hear from you soon.

How about the one-stage training strategy?

Hi, I note that all of your models are trained with a two-stage strategy, whose first stage uses a larger additional dataset. However, the models you compare against in Table 1 of your paper are trained only on DAVIS 2016, which makes the comparison unfair to them. What are the J mean and F mean of your models when trained only on DAVIS 2016?

Why use softmax instead of sigmoid?

line 66: msv_E2[sc] = upsample(F.softmax(e2[0], dim=1)[:,1].data.cpu(), (h,w))

line 71: msv_E2[sc] = F.softmax(e2[0], dim=1)[:,1].data.cpu()

If you run these lines once per object (Propagate_MS is called inside the loop `for o in range(num_objects)`), why use softmax instead of sigmoid?
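For a two-channel (background/foreground) head, the two are mathematically interchangeable: taking channel 1 of a channel-wise softmax equals a sigmoid of the logit difference. A small numpy sketch (the tensor shapes here are illustrative, not taken from the repo):

```python
import numpy as np

def softmax_ch(x):
    """Numerically stable softmax over the channel axis (axis=1)."""
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# e2 stands in for the 2-channel logits in the quoted lines
rng = np.random.default_rng(0)
e2 = rng.normal(size=(1, 2, 4, 4))

fg_softmax = softmax_ch(e2)[:, 1]          # what the quoted code computes
fg_sigmoid = sigmoid(e2[:, 1] - e2[:, 0])  # sigmoid of the logit difference

print(np.allclose(fg_softmax, fg_sigmoid))  # True
```

So with exactly two channels the choice is a matter of convention; softmax only behaves differently once there are more than two output channels.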

Implement three measures

Hello! I am a newcomer to video segmentation. Did you implement the three measures: region similarity, contour accuracy, and temporal instability? Thank you!
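For reference, the three measures come from the DAVIS benchmark: region similarity J is the intersection-over-union of masks, contour accuracy F is a boundary precision/recall F-score, and temporal instability compares shape descriptors across frames. The J measure is simple to re-implement; a minimal sketch assuming binary numpy masks:

```python
import numpy as np

def region_similarity(pred, gt):
    """J measure: intersection-over-union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 1.0 if union == 0 else inter / union

pred = np.zeros((4, 4), dtype=np.uint8); pred[1:3, 1:3] = 1  # 4 foreground px
gt = np.zeros((4, 4), dtype=np.uint8);   gt[1:3, 1:4] = 1    # 6 foreground px
print(region_similarity(pred, gt))  # 4 / 6
```

For F and the temporal measure, the DAVIS organizers publish an official evaluation toolkit, which is the safest reference implementation to compare against.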

Train with youtube-vos

Sorry to bother you. Have you tried the YouTube-VOS dataset? Can it work? I tried it, but I could not make it work.

Propagate_MS() missing 1 required positional argument: 'P2'

Hello.
I tried the RGMP code to understand it, using the version made by xanderchf, but an argument is missing in the middle of the code. How can I solve this?

The error occurred here:
for f in range(0, num_bptt - 1):
output, ms = Propagate_MS(ms, all_F[:,:,f+1], all_E[:,0,f])

TypeError: Propagate_MS() missing 1 required positional argument: 'P2'

Thank you in advance.
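I do not know xanderchf's fork, but the error itself just means the call site passes one fewer positional argument than the function definition requires; `inspect.signature` makes the mismatch visible. A generic way to diagnose it (the `Propagate_MS` below is a hypothetical stand-in, not the fork's real code):

```python
import inspect

# Hypothetical stand-in: a definition that requires a fourth argument P2,
# which would reproduce the TypeError from the traceback above.
def Propagate_MS(ms, F, P, P2):
    return ms, F, P, P2

# List the parameters that have no default: these must all be supplied.
required = [
    name for name, p in inspect.signature(Propagate_MS).parameters.items()
    if p.default is inspect.Parameter.empty
]
print(required)  # ['ms', 'F', 'P', 'P2']

# Calling with only three arguments raises the same TypeError.
try:
    Propagate_MS(None, None, None)
except TypeError as e:
    print(type(e).__name__)  # TypeError
```

Once you know which parameter is missing, check what the fork's definition of `Propagate_MS` expects for `P2` and pass that tensor at the call site; I cannot say from the snippet alone which tensor it should be.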

Real-time video analysis

Thank you for the implementation.

Can you give me any idea of how I can test this on real-time video?

Fine-Tuning input size

Hi,
The paper states that the input size is 256x512, but in the code the patch input size is 512x864. Could you tell me what I am missing in the code?
Thanks


the problem of train.py

Why don't you use bptt_hsm in train.py? And what is the meaning of ntokens on line 134 of train.py? Looking forward to your reply! Thank you very much.

Training & data generation code?

Thanks for releasing the demo code! I'm wondering if you are planning to release the training code and the code to generate simulated data?

Questions about the logit function

In Sec 3.3 of your RGMP paper, you describe the soft logit function, but I am a little confused by its implementation in your code.

Questions:

  1. During training, do you treat the background as one instance and propagate it? If so, the probability (mask) of the background is the sum of the other instance labels, which really confuses me.
  2. Could you please provide more details about training? (Is the logit function with softmax used just as in the inference stage?)

Thanks in advance!
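For context, the merging step in Sec 3.3 can be sketched as: clamp each per-object probability, convert it to a logit, and renormalize with a softmax across objects. How the background channel is formed varies between implementations; the sketch below assumes it is the product of (1 - p) over objects, which is one common reading and not necessarily the authors' exact code:

```python
import numpy as np

def soft_aggregate(probs, eps=1e-7):
    """Merge per-object foreground probabilities of shape (K, H, W) into a
    (K+1, H, W) distribution over background + K objects.
    Assumption: background = product of (1 - p_k) over objects; each channel
    is clamped, converted to a logit, and renormalized with a softmax."""
    bg = np.prod(1.0 - probs, axis=0, keepdims=True)
    stacked = np.clip(np.concatenate([bg, probs], axis=0), eps, 1.0 - eps)
    logits = np.log(stacked / (1.0 - stacked))
    e = np.exp(logits - logits.max(axis=0, keepdims=True))
    return e / e.sum(axis=0, keepdims=True)

probs = np.array([[[0.9, 0.2]], [[0.1, 0.7]]])  # 2 objects, 1x2 "image"
merged = soft_aggregate(probs)
print(merged.sum(axis=0))  # each pixel sums to 1
```

Under this reading the background is not the sum of the other instance labels; the softmax at the end is what forces the K+1 channels to form a valid distribution at every pixel.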

Sorry for my mistake.

Davis annotation

Hi, how do you handle the multiple-instance labels of the DAVIS 2017 dataset? Are they converted into two classes? And how do you visualize the DAVIS 2017 annotations? DAVIS does not give different categories different colors; for example, the label color of bear is the same as that of bicycle. Thank you!
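Background that may help with this question: DAVIS 2017 annotations are palette-indexed PNGs, where each pixel stores a per-sequence instance ID (0 = background, 1..N = instances) and a shared palette maps IDs to colors. That is why the first object of every sequence renders in the same color. A minimal numpy sketch (the palette entries below follow the standard PASCAL-VOC-style palette, which I believe DAVIS reuses):

```python
import numpy as np

# First entries of a PASCAL-VOC-style palette: index -> RGB.
PALETTE = np.array([
    [0, 0, 0],      # 0: background
    [128, 0, 0],    # 1: first instance of any sequence
    [0, 128, 0],    # 2: second instance
    [128, 128, 0],  # 3: third instance
], dtype=np.uint8)

ids = np.array([[0, 1], [2, 1]], dtype=np.uint8)  # tiny instance-ID mask
rgb = PALETTE[ids]                                # (H, W) -> (H, W, 3)
print(rgb.shape)  # (2, 2, 3)
```

To recover the IDs from an annotation file, open the PNG without converting it to RGB (e.g. `np.array(Image.open(path))` with PIL keeps the raw palette indices), so no color-to-instance decoding is needed.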
