tencent / actiondetection-dbg
Code for AAAI2020 paper "Fast Learning of Temporal Action Proposal via Dense Boundary Generator"
License: Other
Thank you very much. Our research group has recently been following your work. If you could share the THUMOS14 code and feature files with us, it would make our comparative experiments much easier.
I admire your experiments very much, but I ran into some trouble when trying to implement the algorithm on THUMOS14. Could you send me your code and the THUMOS14 features?
My email address: [email protected]
There is no train.py
Can someone share a proposal.txt or proposal.json produced by DBG? My lab hardware cannot generate it. If you have generated the proposals, could you send me a copy? Thanks.
My email is [email protected]
Hi,
I followed the instructions to install TF 1.9.0 and Python 3.6 and compiled the "Proposal Feature Generation Layer" successfully, which generated "prop_tcfg.so". But when I run "bash auto_run.sh", the following errors occur:
loading config file: ./config.yaml
loading config file: ./config.yaml
2019-11-23 17:13:54.840138: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2019-11-23 17:13:54.841548: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
Runing DBG model ...
0%| | 0/296 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/anaconda2/envs/tf19/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "/home/anaconda2/envs/tf19/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/home/anaconda2/envs/tf19/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: No OpKernel was registered to support Op 'PropTcfg' with these attrs. Registered devices: [CPU], Registered kernels:
device='GPU'; T in [DT_FLOAT]
[[Node: model/PropTcfg = PropTcfg[T=DT_FLOAT, center_num=16, end_num=8, mode=0, start_num=8](model/strided_slice)]]
It seems the op 'PropTcfg' was not registered correctly. How can I fix it? Thank you!
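The log line "Registered devices: [CPU]" indicates that the running TensorFlow build has no GPU support, while the compiled PropTcfg kernel is registered for GPU only. As a first sanity check (a minimal sketch; the availability and output format of nvidia-smi are assumptions), you can confirm whether a GPU is even visible to the driver before reinstalling the GPU build:

```python
# The traceback says "Registered devices: [CPU]": the TensorFlow binary in
# use has no GPU support, but the custom PropTcfg kernel only registers a
# GPU implementation (device='GPU'). Check whether a GPU and driver are
# visible at all before switching to tensorflow-gpu.
import shutil
import subprocess

def gpu_visible():
    """Return True if nvidia-smi exists and reports at least one GPU."""
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False
    out = subprocess.run([smi, "-L"], capture_output=True, text=True).stdout
    return "GPU" in out

print(gpu_visible())
```

If this prints False, the machine has no usable GPU or driver; if True, the usual fix is installing the GPU build of TensorFlow that matches the local CUDA version.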
Thank you for your work. ActivityNet features are also provided with the BSN code. Is there any difference between those and the features you provide?
CUDA, TensorFlow, etc. are all installed. But when I run
cd tensorflow/custom_op
make
I get:
make: nvcc: Command not found....
makefile:6: recipe for target 'all' failed
Why does this happen?
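For reference, `make: nvcc: Command not found` only means the CUDA compiler is not on PATH for the shell running make. A stdlib sketch of the check (the default CUDA install path below is an assumption and may differ on your machine):

```python
# Locate nvcc: first on PATH, then in the conventional CUDA install dir.
import os
import shutil

def find_nvcc(extra_dirs=("/usr/local/cuda/bin",)):
    """Return a path to nvcc if it is on PATH or in a common CUDA dir."""
    found = shutil.which("nvcc")
    if found:
        return found
    for d in extra_dirs:
        candidate = os.path.join(d, "nvcc")
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None

# None from PATH but a hit under /usr/local/cuda/bin means the fix is:
#   export PATH=/usr/local/cuda/bin:$PATH   (then rerun make)
print(find_nvcc())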
Thanks for sharing the project!
Recently I have been studying the DBG paper and code, and I would really like to try training on my own dataset. May I ask about the schedule for releasing the training code (train.py)?
@ideaRunner
After you use TSN to extract video features, the feature lengths differ from video to video. You must reshape these features to a fixed length using fitting and interpolation functions.
Originally posted by @lijiannuist in https://github.com/TencentYoutuResearch/ActionDetection-DBG/issues/1#issuecomment-554289967
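The reshaping step described above can be sketched as follows — a minimal example assuming per-channel linear interpolation to a fixed length of 100; the function name and shapes are mine, not from the repository:

```python
# Rescale a variable-length (T, C) feature matrix to a fixed temporal
# length by linear interpolation, one channel at a time.
import numpy as np

def rescale_feature(feat, target_len=100):
    """Linearly interpolate a (T, C) feature matrix to (target_len, C)."""
    T = feat.shape[0]
    src = np.arange(T, dtype=np.float64)          # original positions
    dst = np.linspace(0, T - 1, target_len)       # resampled positions
    # np.interp is 1-D, so interpolate each channel independently
    return np.stack(
        [np.interp(dst, src, feat[:, c]) for c in range(feat.shape[1])],
        axis=1,
    )

feat = np.random.rand(87, 400)      # e.g. a TSN feature: 87 snippets, 400 dims
print(rescale_feature(feat).shape)  # (100, 400)
```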
How do I extract features with TSN?
Thanks for your contribution, I wonder how to extract 100 frames from my own videos.
Hi,
Thanks for your great work! Could we also have extracted features of THUMOS'14?
Thanks for your great work. Could you provide the original tsn feature before rescaling?
Hello! Thanks for your work. Could you send me your THUMOS14 code, please? It would help me a lot. Thank you very much!
My email address: [email protected]
Brilliant work and thanks for the open source code.
I'm now reading the paper and code about the DBG model and I have a question about the PFG layer.
According to the paper and the code, the parameters w_l and w_r for the PFG layer's output at location (t_s, t_e, n, c), which correspond to w, h, t, c in the code if I have understood correctly, can be computed directly from formulas (1)-(4) in the "Proposal feature generation layer" section. Yet both the paper and the code present this layer as trainable. So are w_l and w_r updated during the backward pass? To me they look like fixed values that need no training rather than trainable parameters, and this confuses me. Could you please explain? Thank you very much!
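For what it's worth, under ordinary linear-interpolation sampling the weights depend only on the sampling grid, not on any learned parameter, which would explain why they need no gradient. A small sketch (variable names are mine and only paraphrase the role of the paper's Eq. (1)-(4)):

```python
# For a proposal (t_s, t_e) the PFG layer samples N points uniformly
# inside it; each sample at a non-integer position t is a fixed linear
# blend of its two neighbouring feature columns. The blend weights are
# purely geometric, so there is nothing to train.
import numpy as np

def sample_weights(t_s, t_e, n_samples):
    """Left/right neighbour indices and interpolation weights for one proposal."""
    t = np.linspace(t_s, t_e, n_samples)   # uniform sampling positions
    left = np.floor(t).astype(int)
    w_r = t - left                          # distance to left neighbour
    w_l = 1.0 - w_r
    # (a real implementation clamps left + 1 to the last valid index)
    return left, left + 1, w_l, w_r

left, right, w_l, w_r = sample_weights(2.0, 5.0, 4)
print(w_l + w_r)  # each pair sums to 1: fixed weights, no gradient needed
```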
Number of ground truth instances: 0
So how can I obtain results on the testing set?
Greetings! Since there is only evaluation code for proposals, do you plan to release the detection code? Thanks.
After I downloaded the 14.2 GB ActivityNet features (very hard; in my network environment I had to use Tencent Weiyun VIP), I ran auto_run.sh and got the following results:
[INIT] Loaded annotations from validation subset.
Number of ground truth instances: 7292
Number of proposals: 472700
Fixed threshold for tiou score: [0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95]
[RESULTS] Performance on ActivityNet proposal task.
Area Under the AR vs AN curve: 47.82460230389468%
AR@1 is 0.27855183763027974
AR@5 is 0.3704470652770159
AR@10 is 0.4054306088864509
AR@100 is 0.5950630828304992
@lijiannuist
I wonder whether the result I got above is correct. Why does it look worse than expected? Thanks.
Do you plan to release the code for the THUMOS14 experiments?
Thanks for your work!
Can you please send me your code and the features on Thumos14?
My email address: [email protected]
THANKS
Hello! Thanks for your work. I have been looking for features extracted from the THUMOS14 dataset, but I haven't seen anyone release them. I want to know whether you can release the features; it would help me a lot.
@lijiannuist
Hello, I'd like to ask: are the proposals generated by the PFG layer? And the generated temporal intervals will necessarily overlap, right?
Thanks for your work, but I hit an error when running this code: python pytorch/test.py config/config_pretrained.yaml. The error:
Traceback (most recent call last):
File "pytorch/test.py", line 11, in
from model import DBG
File "/mnt/songyan/10519_xuminhuang/ActionDetection-DBG-master/pytorch/model.py", line 4, in
from custom_op.prop_tcfg_op import PropTcfg
File "/mnt/songyan/10519_xuminhuang/ActionDetection-DBG-master/pytorch/custom_op/prop_tcfg_op.py", line 4, in
import prop_tcfg_cuda
ImportError: /mnt/songyan/10519_xuminhuang/anaconda3/lib/python3.7/site-packages/prop_tcfg_cuda-0.0.0-py3.7-linux-x86_64.egg/prop_tcfg_cuda.cpython-37m-x86_64-linux-gnu.so: undefined symbol: __cudaRegisterFatBinaryEnd
My environment:
pytorch1.1
cuda9
python3.7
gcc5.4
Can someone help me solve this problem? Thanks.
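`__cudaRegisterFatBinaryEnd` is a runtime symbol introduced in newer CUDA releases (CUDA 10.x), so this error typically means the extension was built with a newer nvcc than the CUDA 9 runtime PyTorch is using. A quick version check (a sketch; it degrades gracefully if nvcc or torch is absent):

```python
# Compare the nvcc release used to build the extension with the CUDA
# version PyTorch was built against; they should match before rebuilding
# prop_tcfg_cuda.
import re
import subprocess

def nvcc_release():
    """Return the nvcc release string, e.g. '9.0', or None if nvcc is absent."""
    try:
        out = subprocess.run(["nvcc", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    m = re.search(r"release (\d+\.\d+)", out)
    return m.group(1) if m else None

def torch_cuda():
    """Return the CUDA version PyTorch was built with, or None."""
    try:
        import torch
        return torch.version.cuda  # e.g. '10.0'
    except ImportError:
        return None

print(nvcc_release(), torch_cuda())  # a mismatch here explains the error
```

If the two differ, rebuild the extension with the nvcc matching your PyTorch CUDA version (or install a PyTorch build for the local CUDA).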
Wonderful work!
I saw that the result in your code is about 68%, but at http://activity-net.org/challenges/2019/evaluation.html your method achieves about 73%. May I ask what accounts for the gap? Thanks a lot.
The Google Drive link no longer works.
Hi, I just downloaded the features from Google Drive.
But when I use tar to extract them, it reports:
tar: This does not look like a tar archive
Do you know whether something is wrong with the feature file? I downloaded it twice from Google Drive.
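For anyone hitting the same error: "This does not look like a tar archive" on a Google Drive download often means Drive returned an HTML quota or confirmation page instead of the file, or the transfer was truncated. A stdlib sketch to tell the cases apart (the classification strings are mine):

```python
# Distinguish a valid tar, an HTML error page saved under the tar's
# name, and a plain corrupt/truncated download.
import tarfile

def diagnose(path):
    if tarfile.is_tarfile(path):
        return "valid tar archive"
    with open(path, "rb") as f:
        head = f.read(512)
    if head.lstrip().startswith(b"<"):
        return "looks like an HTML page (Drive quota/confirm page), not the archive"
    return "corrupt or truncated download; re-download and compare file sizes"
```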
Hello! Thanks for your work. The existing link is dead. I want to know whether you can release the features extracted from THUMOS14 or ActivityNet-1.3; it would help me a lot.
Great work! Thanks for sharing!!
To generate optical flow, do you use denseFlow from https://github.com/yjxiong/anet2016-cuhk, or https://github.com/feichtenhofer/gpu_flow as the BSN authors do? I wonder which one is better.
@ideaRunner
After you use TSN to extract video features, the feature lengths differ from video to video. You must reshape these features to a fixed length using fitting and interpolation functions.
Originally posted by @lijiannuist in https://github.com/TencentYoutuResearch/ActionDetection-DBG/issues/1#issuecomment-554289967
Could the authors update the Google Drive and Weiyun links for the feature downloads? The links have expired. Thank you.
Thank you for your open-source work. Optical-flow feature extraction is too slow to be practical in engineering, so I want to try features extracted by C3D. Your paper mentions this; what are the details?
Dear author, I can't find train.py. Did you release it?
Thanks for your great work!
We used tools/run.sh to get the RGB and flow feature CSVs for our videos, and then used tools/data_process.py to join them together. But we found that the joined CSVs differ from the data you provided.
Any suggestions? Thanks!
The zip file contains three CSVs:
*_t.csv temporal
*_s.csv spatial
*_a.csv joined
ca.zip
I compiled the PFG layer into prop_tcfg.so. After that, I tried to train a new network with T == 500, for long videos. However, when I define the model, a "Segmentation fault" occurs; the full error output is below.
Could you please tell me what the problem is? Thanks!
error:
WARNING:tensorflow:From ***************/DBG/model.py:19: conv1d (from tensorflow.python.layers.convolutional) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.keras.layers.Conv1D
instead.
WARNING:tensorflow:From ****************/lib/python3.6/site-packages/tensorflow/python/ops/init_ops.py:1251: calling VarianceScaling.init (from tensorflow.python.ops.init_ops) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Call initializer instance with the dtype argument instead of passing it to the constructor
Segmentation fault
Thanks for sharing!
Inference is much faster, with better results.
I'm thinking about how to reproduce the result on other datasets.
Could you share some details or insight on how to use linear interpolation to rescale the feature length of all videos to the same length of 100? Any method, function, reference paper, or code would be appreciated.
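As a starting point, the usual approach is to sample 100 evenly spaced positions over the original temporal axis and linearly blend the two nearest feature columns at each position. A minimal sketch (assuming features shaped (T, C); names are mine):

```python
# Rescale any (T, C) feature to (100, C): compute 100 evenly spaced
# fractional positions over [0, T-1], then blend the floor/ceil columns
# by the fractional distance.
import numpy as np

def rescale_to_100(feat):
    """Linearly resample a (T, C) feature matrix to (100, C)."""
    T, _ = feat.shape
    pos = np.linspace(0, T - 1, 100)      # target sampling positions
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, T - 1)        # clamp right neighbour
    w = (pos - lo)[:, None]               # fractional part = right weight
    return (1.0 - w) * feat[lo] + w * feat[hi]

for T in (87, 100, 203):
    print(rescale_to_100(np.random.rand(T, 400)).shape)  # (100, 400) each time
```

When T is already 100 the sampling positions are exact integers, so the function is the identity, which is a handy sanity check.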