Comments (13)
@dreamerlin Maybe we should do a dummy run of demo.py in the tests to make sure it works.
from mmaction2.
Currently demo.py is written for videos, so you need to change the dataset type to video, as in https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py#L21, and also change the test pipeline to use a video loader such as decord, as in https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py#L70.
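The two changes follow the linked video config. As a sketch (transform names are taken from mmaction2's video configs; treat exact fields as illustrative rather than copy-paste ready):

```python
# Sketch of the two changes described above, following mmaction2's video
# configs: (1) switch the dataset type to video, (2) load/decode the .mp4
# with the decord backend instead of reading pre-extracted rawframes.
dataset_type = 'VideoDataset'  # was 'RawframeDataset' in frame-based configs

test_pipeline = [
    dict(type='DecordInit'),   # open the .mp4 with the decord backend
    dict(type='SampleFrames',
         clip_len=1, frame_interval=1, num_clips=25, test_mode=True),
    dict(type='DecordDecode'),  # decode only the sampled frame indices
    # ...resize / crop / normalize transforms stay the same as before...
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs']),
]
```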
Thanks! I will try that.
Rawframe inference in the demo scripts will be supported in this PR.
Sorry, I have run into another problem. I know that demo.py is written for videos, so I put my own video, named test1.mp4, in the demo folder.
I run the command as:
python demo/demo.py configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py demo/checkpoints/tsn_r50_video_1x1x8_100e_kinetics400_rgb_20200702-568cde33.pth demo/test1.mp4 demo/label_map.txt
(I first wanted to use the SlowFast model, but I couldn't find any video checkpoints in the model zoo, so I followed your advice and used TSN.)
But it still doesn't work.
I have provided the video path in the command as demo/test1.mp4. Should I modify the config?
You can run videos on models that were trained with rawframes. Video/rawframe are input formats, and they are not tied to particular models.
@dreamerlin could you please check why tsn_r50_video_1x1x8_100e_kinetics400_rgb.py does not work?
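The point above can be illustrated by contrasting the front of the two pipelines; only the loading/decoding steps differ, while everything the model actually consumes is shared (transform names follow mmaction2 configs and are illustrative):

```python
# Only the data-loading front of the pipeline differs between video and
# rawframe input; the model receives identical tensors either way, so a
# checkpoint trained with one front-end can be run with the other.
video_front = [
    dict(type='DecordInit'),
    dict(type='SampleFrames',
         clip_len=1, frame_interval=1, num_clips=8, test_mode=True),
    dict(type='DecordDecode'),
]
rawframe_front = [
    dict(type='SampleFrames',
         clip_len=1, frame_interval=1, num_clips=8, test_mode=True),
    dict(type='RawFrameDecode'),
]
# Everything downstream (resize, crop, Collect, ToTensor) is shared.
shared_tail = [
    dict(type='Resize', scale=(-1, 256)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='Collect', keys=['imgs'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs']),
]
```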
This is the error message I got when I used TSN:
Traceback (most recent call last):
  File "demo/demo.py", line 35, in <module>
    main()
  File "demo/demo.py", line 27, in main
    results = inference_recognizer(model, args.video, args.label)
  File "/dat01/wangbo2/ZT/mmaction2/mmaction/apis/inference.py", line 64, in inference_recognizer
    data = collate([data], samples_per_gpu=1)
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 82, in collate
    for key in batch[0]
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 82, in <dictcomp>
    for key in batch[0]
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in collate
    return [collate(samples, samples_per_gpu) for samples in transposed]
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in <listcomp>
    return [collate(samples, samples_per_gpu) for samples in transposed]
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in collate
    return [collate(samples, samples_per_gpu) for samples in transposed]
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in <listcomp>
.....
(repeats like the above for nearly 2000 lines)
.....
    if not isinstance(batch, Sequence):
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/abc.py", line 184, in __instancecheck__
    if subclass in cls._abc_cache:
  File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/_weakrefset.py", line 75, in __contains__
    return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison
You can try to write a tsn_r50_video_inference_1x1x8_100e_kinetics400_rgb.py, like tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py, by taking the testing-related settings from tsn_r50_video_1x1x8_100e_kinetics400_rgb.py.
Since it is for inference on a single video, here are some hints on which params to modify:
- Change dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]) to dict(type='Collect', keys=['imgs'], meta_keys=[]), removing 'label' from keys, since we don't need to calculate the top_k_accuracy.
- Set ann_file to None.
- Set data_prefix to None, since the video path you pass, demo/test1.mp4, already includes its directory prefix.
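Put together, the inference-only tweaks listed above could look roughly like this (field names follow mmaction2 0.x configs; treat it as a template, not a verbatim config):

```python
# Sketch of an inference-only test config: 'label' removed from Collect,
# and no annotation file / data prefix, since a single full path is given.
test_pipeline = [
    dict(type='DecordInit'),
    dict(type='SampleFrames',
         clip_len=1, frame_interval=1, num_clips=8, test_mode=True),
    dict(type='DecordDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='Normalize', mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375], to_bgr=False),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs'], meta_keys=[]),  # 'label' removed
    dict(type='ToTensor', keys=['imgs']),
]

data = dict(
    test=dict(
        type='VideoDataset',
        ann_file=None,     # no annotation file for single-video inference
        data_prefix=None,  # demo/test1.mp4 already contains its directory
        pipeline=test_pipeline))
```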
This is due to dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]); you can change it to dict(type='Collect', keys=['imgs'], meta_keys=[]) by removing the unused 'label'. BTW, we will hardcode the label to -1 to avoid this case in #59.
Thanks for your report!
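The hardcoding mentioned above might look like the sketch below; the function name and exact fields are illustrative, not mmaction2's verbatim API:

```python
# Hedged sketch of the planned fix: give the inference sample a dummy label
# so any transform that expects a 'label' key (e.g. Collect) still succeeds,
# even though no ground truth exists for a single demo video.
def build_inference_sample(video_path):
    # -1 is a sentinel value: never used for accuracy computation,
    # it only keeps the data dict's schema consistent with training.
    return dict(filename=video_path, label=-1, modality='RGB')


sample = build_inference_sample('demo/test1.mp4')
assert sample['label'] == -1
```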
Thank you for answering my doubts. TSN works now!
I will follow your advice and test other models.
Another question: how can I get the output in video format, like the gif (with the labels appearing in the video)?
demo.py just returns the top-5 recognitions as text. That's useful, but the visualization is not great.
One way to do it is to paint the label into the frames using OpenCV, save them as mp4, and convert the mp4 to gif using ffmpeg or an online converter.
Maybe supporting mp4 output with a label overlay is an option. You may request this feature in the roadmap issue #19.
Outputting a video or gif file is already supported in this PR.