Comments (21)
Actually the model is not learning anything. I ran the pre-trained model of the Theano implementation and it reproduces the expected numbers. But the funny thing is that all the scores generated by the model are close to 0.5. If you change a single line to replace the model's prediction array with a constant array of 0.5 (same shape), you get the same results!
from pytorch-vsumm-reinforce.
For the TVSum dataset, the 0.5-vector baseline actually scores better than the RL model. I strongly believe this baseline should have been included in the paper.
sorry for not responding promptly, too busy with deadlines.
typically, learning with RL is quite tricky and it is hard to judge how well the agent has learned unless we have a deterministic metric to exactly measure each action it takes. to improve performance, you might wanna try more epochs (e.g. 200). as the learning process in video summarization is essentially a combinatorial optimization problem where the agent tries different combinations to see which one is rewarded the most, it is natural to increase the training epochs.
i also found the range of scores is not diverse. however, what i found was that the score curve is actually more important: from it we can see which parts are scored higher and which parts are relatively less important (we have a score plot in the aaai'18 paper). the scale wouldn't matter too much as long as the important frames are scored higher. it is normal that random scores could produce reasonably good results, because the shot summaries are obtained by a post-processing algorithm, i.e. knapsack. if you feed a score vector of all 0.5, the first few shots will be selected by the knapsack, which could bring some (perhaps good) results as expected (pls refer to the evaluation code, which explains more clearly how the performance is measured).
the pytorch implementation doesn't strictly follow the 5-fold setting, that is, different folds overlap, so please use this code for further research rather than for reproducing the paper results.
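To see why a flat 0.5 score vector can still produce a plausible summary, here is a minimal sketch of the knapsack post-processing described above (my own simplified version, not the repo's exact code; I assume each shot's value is its mean frame score, its weight is its length in frames, and the budget is 15% of the video):

```python
import numpy as np

def knapsack_select(values, weights, capacity):
    """0/1 knapsack by dynamic programming; returns indices of selected shots."""
    n = len(values)
    # dp[i][c]: best total value using the first i shots within length budget c
    dp = np.zeros((n + 1, capacity + 1))
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if weights[i - 1] <= c:
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    # backtrack to recover which shots were taken
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return sorted(chosen)

# five hypothetical shots: lengths in frames, and a constant 0.5 mean score each
lengths = [40, 10, 60, 10, 30]
scores = [0.5] * len(lengths)
budget = int(0.15 * sum(lengths))   # 15% summary budget = 22 frames
print(knapsack_select(scores, lengths, budget))  # -> [1, 3], the two shortest shots
```

With all shot values equal, the knapsack simply maximizes the number of shots that fit the budget, so the shortest shots win; this is one reason constant or random scores can still look competitive after this step.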
That's a bit strange that a naive algorithm that assigns the same weight to every frame achieves close to state-of-the-art results... Does anyone have an explanation for this?
And here is the answer (from CVPR 2019):
Rethinking the Evaluation of Video Summaries
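For context, the metric being questioned here is the keyframe F-score between the machine summary and a user summary. A simplified sketch with hypothetical binary frame vectors (the real evaluation averages or maximizes over multiple user summaries per video, which is exactly what the CVPR'19 paper scrutinizes):

```python
import numpy as np

def f_score(machine, user):
    """F-measure between two binary per-frame selections."""
    overlap = np.logical_and(machine, user).sum()
    if overlap == 0:
        return 0.0
    precision = overlap / machine.sum()
    recall = overlap / user.sum()
    return 2 * precision * recall / (precision + recall)

machine = np.array([1, 1, 0, 0, 1, 0])   # frames selected by the model
user = np.array([1, 0, 0, 1, 1, 0])      # frames selected by a human annotator
print(round(f_score(machine, user), 4))  # -> 0.6667
```

Because both summaries are constrained to roughly the same 15% length budget, even loosely correlated (or random) selections get a substantial overlap by chance, which inflates this metric.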
The Theano model reproduces the numbers shown in the paper correctly. But not using the RL model (the 0.5 baseline) gives the same results as well.
If I understand correctly, wouldn't this also mean that the knapsack algorithm favors short shots? If the score curve looks good but the scores all stay close to 0.5, a short shot with a mean score near 0.5 will always be preferred over a long shot with a mean score greater than 0.5, won't it?
@divamgupta Is the snippet below the reason it learns nothing?
pytorch-vsumm-reinforce/rewards.py
Lines 15 to 16 in fdd03be
detach...
Have you tried it without detach? Does your evaluation score go up during training? The dataset is too small: 20 videos for training, 5 for testing. As a result, the scores on the 5 test videos vary a lot.
I also get only a 35.54% average F-score for 5-fold on SumMe!
Wow
Any idea @KaiyangZhou ?
Does the pretrained Theano model also only give 0.5 scores or only the PyTorch model you trained yourself?
pytorch implementation doesn't strictly follow the 5-fold setting,
Can you elaborate?
Yes, using their pretrained Theano model only.
https://github.com/KaiyangZhou/vsumm-reinforce/blob/master/vsum_test.py
After line 80, replace probs (the values predicted by the model) by adding:
probs = np.zeros_like(probs).astype(float) + 0.5
So this actually means that neither the Theano nor the PyTorch model is able to reproduce the results of the paper or train correctly?
Does it also reproduce the numbers when trained from scratch with Theano?
I tried the pre-trained model
When using the PyTorch implementation without folds (training and testing on the same videos) I can get 40.1% on SumMe!
@KaiyangZhou I find this model learns nothing. Evaluating with a randomly initialized model I got 41.7, but after 200 epochs of training I got only 41.2.
Could you please release the code, especially the part showing how to extract the features and get the change points?
@divamgupta Is this the reason below for learning nothing?
pytorch-vsumm-reinforce/rewards.py
Lines 15 to 16 in fdd03be
detach...
The actions come from the line below.
pytorch-vsumm-reinforce/main.py
Line 125 in fdd03be
Although the Bernoulli sample has a grad_fn in PyTorch, its gradient is zero.
So even if you remove detach, it does not help...
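To illustrate why removing detach cannot help: in REINFORCE, the gradient never flows through the sampled action at all; it comes from the score function of the policy's log-probability, with the reward held constant (i.e. detached). A minimal NumPy sketch with a toy reward and a hypothetical action probability:

```python
import numpy as np

# Score-function (REINFORCE) gradient estimate for a Bernoulli policy.
# Sampling an action is not differentiable; gradients flow through
# log pi(a) instead, with the reward treated as a constant.
rng = np.random.default_rng(0)
p = 0.3                                  # hypothetical action probability
actions = rng.random(100_000) < p        # sampled actions (non-differentiable)
rewards = actions.astype(float)          # toy reward: 1 if the action is taken
# d/dp log Bern(a; p) = a/p - (1 - a)/(1 - p)
score = actions / p - (~actions) / (1 - p)
grad_est = np.mean(rewards * score)      # estimates d E[R] / dp
print(grad_est)                          # close to the true gradient, 1.0
```

Here E[R] = p, so the true gradient is 1, and the estimate lands close to it. This is the mechanism behind the usual `-log_prob(actions) * reward` loss, which is presumably what this repo implements as well.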