Comments (11)
@yzfly Please use the following steps to run the code in this repo. These steps were recently validated by another user of the repo, and they should also answer your questions above.
FOR UCF DATASET:
For Training
1. Download the UCF dataset and unzip it. Rename the data folder to ‘ucf24’.
2. Create a new folder named ‘datasets’. Put the renamed ‘ucf24’ folder inside it.
3. Download the code from the GitHub page. The code folder is called YOWO.
4. Open ‘cfg/ucf24.data’ in the YOWO folder. Update the paths of the base, training, and validation datasets.
5. Download the ‘trainlist.txt’ and ‘testlist.txt’ files and put them inside the ‘ucf24’ folder from step 1.
6. Create a folder called ‘weights’ in the YOWO folder.
7. Download the pre-trained YOLO weights (details are given on the GitHub page) and put them in the ‘weights’ folder from step 6.
8. Download ‘resnext-101-kinetics.pth’ from the GitHub page and put it in the ‘weights’ folder from step 6.
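The folder layout from the training steps above can be sketched as a couple of shell commands. Paths are illustrative; the dataset itself, the trainlist/testlist files, and the weight files still have to be downloaded as described in the steps:

```shell
# Illustrative layout only; the actual dataset and weights
# must be downloaded separately per the steps above.
mkdir -p datasets/ucf24    # steps 1-2: renamed dataset folder inside 'datasets'
mkdir -p YOWO/weights      # step 6: holds the YOLO weights and resnext-101-kinetics.pth
```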
For Validation
1. Go to YOWO/evaluation/Object-Detection-Metrics.
2. Unzip the file ‘groundtruths_ucf.zip’.
3. Go back to the YOWO folder. Create a folder called ‘ucf_detections/detections_0’. Create further subdirectories ‘detections_n’ for n = 0, 1, 2, 3, ... for later validation runs.
4. Run both training and validation using the script (change the ‘detections_n’ portion in the script file for each validation run):
$ sh run_ucf101-24.sh
5. Run the evaluation with:
python ./evaluation/Object-Detection-Metrics/pascalvoc.py --gtfolder groundtruths_ucf --detfolder ../../ucf_detections/detections_0
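The numbered detection folders described above can be created in a loop; the bound of 4 runs here is arbitrary:

```shell
# Create one output folder per validation run, following the
# 'detections_n' convention from the validation steps above.
for n in 0 1 2 3; do
  mkdir -p ucf_detections/detections_"$n"
done
# Each run's detections are then scored with pascalvoc.py, e.g.:
#   python ./evaluation/Object-Detection-Metrics/pascalvoc.py \
#     --gtfolder groundtruths_ucf --detfolder ../../ucf_detections/detections_0
```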
Thank you for your detailed guidance. I have successfully trained on the ucf24 dataset.
But training on ucf24 is more time-consuming than on jhmdb21. What troubles me is training on the jhmdb21 dataset, since I can find no bounding-box ground-truth files.
I downloaded the whole jhmdb21 dataset and didn't find training annotations like the ones you provided for ucf24. After searching the internet, I didn't find any existing bounding-box annotation files either.
To address this, I generated the bounding-box ground-truth files myself from the jhmdb21 puppet mask files: one .txt file per frame, with contents [label, xmin, ymin, xmax, ymax].
I have checked my ground-truth files against those provided by YOWO. In most cases they match, with a difference of one or two pixels. If anyone needs them, you can download them from jhmdb21_labels.
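The puppet-mask-to-box conversion described above can be sketched in a few lines. Loading the masks (JHMDB ships them as .mat files) is left out, and the function name and label handling are my own illustration, not code from the repo:

```python
# Sketch: derive a [label, xmin, ymin, xmax, ymax] ground-truth entry
# from a binary puppet mask. Mask loading from JHMDB's .mat files is
# omitted; the function name and label id are illustrative.
import numpy as np

def mask_to_bbox(mask: np.ndarray, label: int):
    """Return [label, xmin, ymin, xmax, ymax] for a binary foreground mask."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # empty mask: no actor in this frame
    return [label, int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())]

# Toy 6x6 mask with a 2x3 foreground block
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 1:4] = 1
print(mask_to_bbox(mask, label=0))  # [0, 1, 2, 3, 3]
```

One such entry would then be written per frame into its own .txt file, matching the format above.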
from yowo.
I found a related issue: issue18
I downloaded all the files you shared on Dropbox and didn't find any training dataset annotations.
@yzfly The annotations in groundtruths_ucf.zip are only used for calculating the frame-mAP scores, which is why there are no training annotations there. Training annotations can be downloaded from the dataset sources.
@yzfly Hi, have you reproduced the results in Table 2 of the paper?
> @yzfly Hi, have you reproduced the results in Table 2 of the paper?

My results are 2-3 percentage points behind those reported in the paper.
> @yzfly Hi, have you reproduced the results in Table 2 of the paper?
>
> My results are 2-3 percentage points behind those reported in the paper.

When you evaluate video_mAP for ucf-101, do you find 'testlist_video.txt'?
> @yzfly Hi, have you reproduced the results in Table 2 of the paper?
>
> My results are 2-3 percentage points behind those reported in the paper.
>
> When you evaluate video_mAP for ucf-101, do you find 'testlist_video.txt'?

I generated it myself. In fact, I have given up on this project: too many open questions, and it is not a good baseline.
> @yzfly Hi, have you reproduced the results in Table 2 of the paper?
>
> My results are 2-3 percentage points behind those reported in the paper.

Can you tell me what your jhmdb experiment's cfg looks like? My frame mAP is only about 59% for jhmdb.
----------jhmdb.yaml----------
TRAIN:
  DATASET: jhmdb21  # ava, ucf24 or jhmdb21
  BATCH_SIZE: 24
  TOTAL_BATCH_SIZE: 128
  LEARNING_RATE: 1e-4
  EVALUATE: False
  FINE_TUNE: False
  BEGIN_EPOCH: 1
  END_EPOCH: 10
SOLVER:
  MOMENTUM: 0.9
  WEIGHT_DECAY: 5e-4
  STEPS: [3, 4, 5, 6]
  LR_DECAY_RATE: 0.5
  ANCHORS: [0.95878, 3.10197, 1.67204, 4.0040, 1.75482, 5.64937, 3.09299, 5.80857, 4.91803, 6.25225]
  NUM_ANCHORS: 5
  OBJECT_SCALE: 5
  NOOBJECT_SCALE: 1
  CLASS_SCALE: 1
  COORD_SCALE: 1
DATA:
  NUM_FRAMES: 16
  SAMPLING_RATE: 1
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 224
  MEAN: [0.4345, 0.4051, 0.3775]
  STD: [0.2768, 0.2713, 0.2737]
MODEL:
  NUM_CLASSES: 21
  BACKBONE_3D: resnext101
  BACKBONE_2D: darknet
WEIGHTS:
  BACKBONE_3D: "weights/resnext-101-kinetics-hmdb51_split1.pth"
  BACKBONE_2D: "weights/yolo.weights"
  FREEZE_BACKBONE_3D: True
  FREEZE_BACKBONE_2D: True
LISTDATA:
  BASE_PTH: "/data1/su/datasets/JHMDB-YOWO"
  TRAIN_FILE: "/data1/su/datasets/JHMDB-YOWO/trainlist.txt"
  TEST_FILE: "/data1/su/datasets/JHMDB-YOWO/testlist.txt"
  TEST_VIDEO_FILE: "/data1/su/datasets/JHMDB-YOWO/testlist_video.txt"
  MAX_OBJS: 1
  CLASS_NAMES: [
    "brush_hair", "catch", "clap", "climb_stairs", "golf",
    "jump", "kick_ball", "pick", "pour", "pullup", "push",
    "run", "shoot_ball", "shoot_bow", "shoot_gun", "sit",
    "stand", "swing_baseball", "throw", "walk", "wave"
  ]
BACKUP_DIR: "backup/jhmdb"
RNG_SEED: 1
NUM_GPUS: 4
VISBLE_GPUS: '"0, 1, 2, 3"'
GPUS_ID: [0, 1, 2, 3]