
Train on jhmdb21 (yowo, 11 comments, closed)

yzfly commented on July 19, 2024
Train on jhmdb21

from yowo.

Comments (11)

yzfly commented on July 19, 2024

@yzfly Please use the following steps to use the code in this repo. These steps have recently been validated by another user of the repo, and they will also answer your questions above.

FOR UCF DATASET:

For Training

  1. Download the UCF dataset and unzip it. Rename the data folder to ‘ucf24’.
  2. Create a new folder named ‘datasets’. Put the renamed ‘ucf24’ folder inside the datasets folder.
  3. Download the code from the GitHub page. The code folder is called YOWO.
  4. Open ‘cfg/ucf24.data’ in the YOWO folder. Update the paths of the base, training, and validation datasets.
  5. Download the ‘trainlist.txt’ and ‘testlist.txt’ files and put them inside the ‘ucf24’ folder created in step 1.
  6. Create a folder called ‘weights’ in the YOWO folder.
  7. Download the pre-trained YOLO weights (details are given on the GitHub page) and put them in the folder created in step 6.
  8. Download ‘resnext-101-kinetics.pth’ from the GitHub page and put it in the ‘weights’ folder created in step 6.
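The directory layout from these steps can be sketched as shell commands. The download URLs live on the YOWO GitHub page, so downloads are shown only as comments; folder names follow the steps above:

```shell
# Steps 1-2: unzipped dataset folder, renamed to ucf24, placed under datasets/
mkdir -p datasets/ucf24

# Step 3: get the code (repo folder is YOWO)
# git clone https://github.com/wei-tim/YOWO.git

# Step 6: weights folder inside YOWO
mkdir -p YOWO/weights

# Steps 5, 7, 8: put the list files and weights in place once downloaded
# mv trainlist.txt testlist.txt datasets/ucf24/
# mv yolo.weights resnext-101-kinetics.pth YOWO/weights/
```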

For Validation

  1. Go to YOWO/evaluation/Object-Detection-Metrics.
  2. Unzip the file ‘groundtruths_ucf.zip’.
  3. Go back to the YOWO folder. Create a folder called ‘ucf_detections/detections_0’, and create further subdirectories ‘detections_n’ for n = 1, 2, 3, … for subsequent validation runs.
  4. Run both training and validation using the script:

$ sh run_ucf101-24.sh

  5. Change the ‘detections_n’ portion in the script file for multiple validation tasks:
    python ./evaluation/Object-Detection-Metrics/pascalvoc.py --gtfolder groundtruths_ucf --detfolder ../../ucf_detections/detections_0
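Creating the ‘detections_n’ subdirectories for successive validation runs can be scripted, for example:

```shell
# Create detections_0 .. detections_3 for successive validation runs
for n in 0 1 2 3; do
  mkdir -p ucf_detections/detections_$n
done

# Then point the evaluation at the run you want, e.g. run 0:
# python ./evaluation/Object-Detection-Metrics/pascalvoc.py \
#   --gtfolder groundtruths_ucf --detfolder ../../ucf_detections/detections_0
```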

Thank you for your detailed guidance. I have successfully trained on the ucf24 dataset.
But training on ucf24 is more time-consuming than on jhmdb21. What troubles me is training on the jhmdb21 dataset, since I can find no bounding-box ground-truth files.

I have downloaded the whole jhmdb21 dataset and did not find training annotations like the ones you provided for ucf24. After searching the internet, I found no existing bounding-box annotation files.

To address this problem, I have generated bounding-box ground-truth files myself from the jhmdb21 puppet-mask files: one .txt file per frame, with contents [label, xmin, ymin, xmax, ymax].

I have checked my ground-truth files against those provided with YOWO. In most cases they match, with a difference of one or two pixels. If anyone needs them, they can be downloaded from jhmdb21_labels.
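A minimal sketch of deriving one such bounding box from a binary puppet mask (loading the actual .mat mask files and mapping videos to class labels are omitted; the toy mask and label below are placeholders):

```python
import numpy as np

def mask_to_bbox(mask):
    """Tightest [xmin, ymin, xmax, ymax] around the nonzero pixels of a mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy puppet mask: a blob covering rows 50-99 and columns 60-119 of a 240x320 frame
mask = np.zeros((240, 320), dtype=np.uint8)
mask[50:100, 60:120] = 1

label = 5  # hypothetical class index for this video's action
xmin, ymin, xmax, ymax = mask_to_bbox(mask)
line = f"{label} {xmin} {ymin} {xmax} {ymax}"  # one such line per frame's .txt file
```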


okankop commented on July 19, 2024

@yzfly Please use the following steps to use the code in this repo. These steps have recently been validated by another user of the repo, and they will also answer your questions above.

FOR UCF DATASET:

For Training

  1. Download the UCF dataset and unzip it. Rename the data folder to ‘ucf24’.
  2. Create a new folder named ‘datasets’. Put the renamed ‘ucf24’ folder inside the datasets folder.
  3. Download the code from the GitHub page. The code folder is called YOWO.
  4. Open ‘cfg/ucf24.data’ in the YOWO folder. Update the paths of the base, training, and validation datasets.
  5. Download the ‘trainlist.txt’ and ‘testlist.txt’ files and put them inside the ‘ucf24’ folder created in step 1.
  6. Create a folder called ‘weights’ in the YOWO folder.
  7. Download the pre-trained YOLO weights (details are given on the GitHub page) and put them in the folder created in step 6.
  8. Download ‘resnext-101-kinetics.pth’ from the GitHub page and put it in the ‘weights’ folder created in step 6.

For Validation

  1. Go to YOWO/evaluation/Object-Detection-Metrics.
  2. Unzip the file ‘groundtruths_ucf.zip’.
  3. Go back to the YOWO folder. Create a folder called ‘ucf_detections/detections_0’, and create further subdirectories ‘detections_n’ for n = 1, 2, 3, … for subsequent validation runs.
  4. Run both training and validation using the script:

$ sh run_ucf101-24.sh

  5. Change the ‘detections_n’ portion in the script file for multiple validation tasks:
    python ./evaluation/Object-Detection-Metrics/pascalvoc.py --gtfolder groundtruths_ucf --detfolder ../../ucf_detections/detections_0


yzfly commented on July 19, 2024

I found a related issue: issue18
I downloaded all the files you shared on Dropbox and didn't find the training dataset annotations, as shown in the attached screenshot.


okankop commented on July 19, 2024

@yzfly The annotations in groundtruths_ucf.zip are only used for calculating the Frame_mAP scores. This is why there are no training annotations there. Training annotations can be downloaded from the dataset sources.


Holmes-GU commented on July 19, 2024

@yzfly Hi, have you reproduced the results in Table 2 of the paper?


yzfly commented on July 19, 2024

@yzfly Hi, have you reproduced the results in Table 2 of the paper?

My results are 2-3 percentage points behind those reported in the paper.


Holmes-GU commented on July 19, 2024


Holmes-GU commented on July 19, 2024

@yzfly Hi, have you reproduced the results in Table 2 of the paper?

My results are 2-3 percentage points behind those reported in the paper.

When you evaluate video_mAP for ucf-101, did you find ‘testlist_video.txt’?


yzfly commented on July 19, 2024

@yzfly Hi, have you reproduced the results in Table 2 of the paper?

My results are 2-3 percentage points behind those reported in the paper.

When you evaluate video_mAP for ucf-101, did you find ‘testlist_video.txt’?

I generated it myself. In fact, I have given up my efforts on this project; there are too many open questions and it is not a good baseline.


Holmes-GU commented on July 19, 2024


Riiick2011 commented on July 19, 2024

@yzfly Hi, have you reproduced the results in Table 2 of the paper?

My results are 2-3 percentage points behind those reported in the paper.

Can you tell me what your jhmdb experiment's cfg looks like?
My frame mAP is only about 59% for jhmdb.
----------jhmdb.yaml----------
TRAIN:
  DATASET: jhmdb21 # ava, ucf24 or jhmdb21
  BATCH_SIZE: 24
  TOTAL_BATCH_SIZE: 128
  LEARNING_RATE: 1e-4
  EVALUATE: False
  FINE_TUNE: False
  BEGIN_EPOCH: 1
  END_EPOCH: 10
SOLVER:
  MOMENTUM: 0.9
  WEIGHT_DECAY: 5e-4
  STEPS: [3, 4, 5, 6]
  LR_DECAY_RATE: 0.5
  ANCHORS: [0.95878, 3.10197, 1.67204, 4.0040, 1.75482, 5.64937, 3.09299, 5.80857, 4.91803, 6.25225]
  NUM_ANCHORS: 5
  OBJECT_SCALE: 5
  NOOBJECT_SCALE: 1
  CLASS_SCALE: 1
  COORD_SCALE: 1
DATA:
  NUM_FRAMES: 16
  SAMPLING_RATE: 1
  TRAIN_JITTER_SCALES: [256, 320]
  TRAIN_CROP_SIZE: 224
  TEST_CROP_SIZE: 224
  MEAN: [0.4345, 0.4051, 0.3775]
  STD: [0.2768, 0.2713, 0.2737]
MODEL:
  NUM_CLASSES: 21
  BACKBONE_3D: resnext101
  BACKBONE_2D: darknet
WEIGHTS:
  BACKBONE_3D: "weights/resnext-101-kinetics-hmdb51_split1.pth"
  BACKBONE_2D: "weights/yolo.weights"
  FREEZE_BACKBONE_3D: True
  FREEZE_BACKBONE_2D: True
LISTDATA:
  BASE_PTH: "/data1/su/datasets/JHMDB-YOWO"
  TRAIN_FILE: "/data1/su/datasets/JHMDB-YOWO/trainlist.txt"
  TEST_FILE: "/data1/su/datasets/JHMDB-YOWO/testlist.txt"
  TEST_VIDEO_FILE: "/data1/su/datasets/JHMDB-YOWO/testlist_video.txt"
  MAX_OBJS: 1
  CLASS_NAMES: [
    "brush_hair", "catch", "clap", "climb_stairs", "golf",
    "jump", "kick_ball", "pick", "pour", "pullup", "push",
    "run", "shoot_ball", "shoot_bow", "shoot_gun", "sit",
    "stand", "swing_baseball", "throw", "walk", "wave"
  ]
BACKUP_DIR: "backup/jhmdb"
RNG_SEED: 1
NUM_GPUS: 4
VISBLE_GPUS: '"0, 1, 2, 3"'
GPUS_ID: [0, 1, 2, 3]
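Under the usual reading of STEPS and LR_DECAY_RATE (the learning rate is multiplied by the decay factor at each listed epoch boundary; this is an assumption, so check the solver code in the repo), the schedule in this cfg works out as:

```python
def lr_at_epoch(epoch, base_lr=1e-4, steps=(3, 4, 5, 6), decay=0.5):
    # Multiply the base LR by `decay` once for every step boundary already passed
    return base_lr * decay ** sum(epoch >= s for s in steps)

# Epochs 1-2 train at 1e-4; by epoch 6 the LR has been halved four times
schedule = [lr_at_epoch(e) for e in range(1, 11)]
```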

