
poseval's Introduction

Poseval

Created by Leonid Pishchulin. Adapted by Sven Kreiss.

Install directly from GitHub:

pip install git+https://github.com/svenkreiss/poseval.git

Install from a local clone:

git clone https://github.com/svenkreiss/poseval.git
cd poseval
pip install -e .  # install the local package ('.') in editable mode ('-e')

Changes:

  • Python 3
  • uses the latest motmetrics from PyPI (much faster); removed the py-motmetrics git submodule

Test command with small test data:

python -m poseval.evaluate \
    --groundTruth test_data/gt/ \
    --predictions test_data/pred/ \
    --evalPoseTracking \
    --evalPoseEstimation \
    --saveEvalPerSequence

Lint: pylint poseval.


Evaluation of Multi-Person Pose Estimation and Tracking

Created by Leonid Pishchulin

Introduction

This README provides instructions on how to evaluate your method's predictions on the PoseTrack dataset, either locally or using the evaluation server.

Prerequisites

  • numpy>=1.12.1
  • pandas>=0.19.2
  • scipy>=0.19.0
  • tqdm>=4.24.0
  • click>=6.7

Install

$ git clone https://github.com/leonid-pishchulin/poseval.git --recursive
$ cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH

Data preparation

Evaluation requires ground truth (GT) annotations, available at PoseTrack, and your method's predictions. Both the GT annotations and your predictions must be saved in JSON format. Following the GT annotations, predictions must be stored per sequence, with entries for each frame of the sequence, using the same structure and the same filenames as the GT annotations. For evaluation on PoseTrack 2017, predictions must follow the PoseTrack 2017 annotation format, while for evaluation on PoseTrack 2018 the corresponding 2018 format should be used. Example of the JSON prediction structure for the PoseTrack 2017 format:

{
   "annolist": [
       {
           "image": [
               {
                   "name": "images\/bonn_5sec\/000342_mpii\/00000001.jpg"
               }
           ],
           "annorect": [
               {
                   "x1": [625],
                   "y1": [94],
                   "x2": [681],
                   "y2": [178],
                   "score": [0.9],
                   "track_id": [0],
                   "annopoints": [
                       {
                           "point": [
                               {
                                   "id": [0],
                                   "x": [394],
                                   "y": [173],
                                   "score": [0.7]
                               },
                               { ... }
                           ]
                       }
                   ]
               },
               { ... }
           ]
       },
       { ... }
   ]
}

Note: values of track_id must be integers from the interval [0, 999]. For an example of the PoseTrack 2018 annotation format, please refer to the corresponding GT annotations.
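
For illustration, the minimal sketch below writes a single-sequence prediction file in this structure from Python. The image name, the output file name, and all numeric values are placeholders rather than actual predictions, and a real file would contain one "image"/"annorect" entry per frame of the sequence.

import json

# Minimal sketch of a PoseTrack 2017 prediction file (placeholder values).
prediction = {
    "annolist": [
        {
            "image": [{"name": "images/bonn_5sec/000342_mpii/00000001.jpg"}],
            "annorect": [
                {
                    "x1": [625], "y1": [94], "x2": [681], "y2": [178],
                    "score": [0.9],
                    "track_id": [0],  # integer in [0, 999]
                    "annopoints": [
                        {"point": [
                            {"id": [0], "x": [394], "y": [173], "score": [0.7]},
                        ]},
                    ],
                },
            ],
        },
    ]
}

# The prediction file must have the same filename as the corresponding GT
# annotation file of the sequence (placeholder name here).
with open("000342_mpii.json", "w") as f:
    json.dump(prediction, f)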

We also provide a script to convert a Matlab structure into JSON format:

$ cd poseval/matlab
$ matlab -nodisplay -nodesktop -r "mat2json('/path/to/dir/with/mat/files/'); quit"

Metrics

This code performs evaluation of per-frame multi-person pose estimation and of video-based multi-person pose tracking.

Per-frame multi-person pose estimation

The Average Precision (AP) metric is used to evaluate per-frame multi-person pose estimation. Our implementation follows the measure proposed in [1] and requires predicted body poses with body joint detection scores as input. First, multiple body pose predictions are greedily assigned to the ground truth (GT) based on the highest PCKh [3]. Only a single pose can be assigned to each GT pose, and unassigned predictions are counted as false positives. Finally, the part detection score is used to compute AP for each body part. The mean AP over all body parts is reported as well.
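
As a rough illustration of this procedure (a simplified sketch, not the actual evaluation code), the snippet below greedily matches predictions to GT poses by highest PCKh and then computes AP for one body part from the scored matches. The pckh() helper, the pose representation, and the 0.5 threshold are assumptions.

import numpy as np

def greedy_assign(pred_poses, gt_poses, pckh, thresh=0.5):
    # Greedily assign predicted poses to GT poses by highest PCKh.
    # pckh(pred, gt) is an assumed helper returning the PCKh score of a
    # predicted pose with respect to a GT pose.
    pairs = sorted(((pckh(p, g), pi, gi)
                    for pi, p in enumerate(pred_poses)
                    for gi, g in enumerate(gt_poses)), reverse=True)
    assigned, taken = {}, set()
    for score, pi, gi in pairs:
        if score < thresh or pi in assigned or gi in taken:
            continue
        assigned[pi] = gi          # at most one prediction per GT pose
        taken.add(gi)
    # predictions left without a GT partner are false positives
    return [(pi, assigned.get(pi)) for pi in range(len(pred_poses))]

def average_precision(scores, is_tp, n_gt):
    # AP for one body part, from per-detection scores and true-positive flags.
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=bool)[order]
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(~tp)
    recall = cum_tp / max(n_gt, 1)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1)
    ap, prev_recall = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_recall)   # discrete precision-recall integration
        prev_recall = r
    return ap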

Video-based pose tracking

Multiple Object Tracking (MOT) metrics [2] are used to evaluate video-based pose tracking. Our implementation builds on the MOT evaluation code [4] and requires predicted body poses with tracklet IDs as input. First, for each frame and each body joint class, distances between predicted locations and GT locations are computed. Then, predicted tracklet IDs and GT tracklet IDs are taken into account, and all (prediction, GT) pairs whose distances do not exceed the PCKh [3] threshold are considered during the global matching of predicted tracklets to GT tracklets for each body joint. Global matching minimizes the total assignment distance. Finally, Multiple Object Tracking Accuracy (MOTA), Multiple Object Tracking Precision (MOTP), precision, and recall are computed. We report MOTA for each body joint class as well as the average over all body joints, while for MOTP, precision, and recall we report only the averages.
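
A minimal sketch of the underlying bookkeeping with the motmetrics package is shown below. The actual evaluation keeps one accumulator per body joint and fills the distance matrices from the PCKh-gated joint distances; the tracklet IDs and distances here are made-up placeholders.

import numpy as np
import motmetrics as mm

acc = mm.MOTAccumulator(auto_id=True)   # one accumulator per body joint

# Frame 1: two GT tracklets, two predicted tracklets. NaN entries in the
# distance matrix forbid a match (distance exceeds the PCKh threshold).
acc.update([0, 1], [0, 1], [[0.1, np.nan],
                            [np.nan, 0.2]])

# Frame 2: GT tracklet 1 cannot be matched, so it counts as a miss, and
# predicted tracklet 1 becomes a false positive.
acc.update([0, 1], [0, 1], [[0.1, np.nan],
                            [np.nan, np.nan]])

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=['mota', 'motp', 'precision', 'recall'],
                     name='joint')
print(summary)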

Evaluation (local)

The evaluation code has been tested on Linux (Ubuntu). Evaluation takes as input the path to a directory with GT annotations and the path to a directory with predictions. See "Data preparation" for details on the prediction format.

$ git clone https://github.com/leonid-pishchulin/poseval.git --recursive
$ cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH
$ python evaluate.py \
  --groundTruth=/path/to/annotations/val/ \
  --predictions=/path/to/predictions \
  --evalPoseTracking \
  --evalPoseEstimation

Evaluation of multi-person pose estimation requires joint detection scores, while evaluation of pose tracking requires predicted tracklet IDs per pose.

Evaluation (server)

In order to evaluate using the evaluation server, zip your directory containing the JSON prediction files and submit it at https://posetrack.net. Shortly afterwards you will receive an email containing the evaluation results. Prior to submitting your results to the evaluation server, make sure you are able to evaluate locally on the val set, to avoid issues due to incorrectly formatted predictions.
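
For example, a minimal way to build such an archive from Python, keeping the prediction files at the root of the zip (directory and file names below are placeholders):

import glob
import os
import zipfile

# Minimal sketch: put every prediction json at the root of the archive
# (no directory prefix inside the zip). Paths are placeholders.
with zipfile.ZipFile("predictions.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for path in glob.glob("predictions/*.json"):
        zf.write(path, arcname=os.path.basename(path))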

References

[1] DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation. L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, and B. Schiele. In CVPR'16

[2] Evaluating multiple object tracking performance: the CLEAR MOT metrics. K. Bernardin and R. Stiefelhagen. EURASIP J. Image Vide.'08

[3] 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. In CVPR'14

[4] https://github.com/cheind/py-motmetrics

For further questions and details, contact the PoseTrack team: mailto:[email protected]

poseval's People

Contributors

guohengkai, leonid-pishchulin, svenkreiss


poseval's Issues

different evaluate result using different format

Hi @leonid-pishchulin,

The evaluation code supports both the 2018 format and the 2017 format, but I find that I get different results in some situations depending on which format I use. To investigate the issue, I used only one video sequence, and only two frames' worth of ground truth and prediction results.

Using ground truth in 2017 format and predictions in 2017 format, I got this result:

Average Precision (AP) metric:
& Head & Shou & Elb  & Wri  & Hip  & Knee & Ankl & Total\\
& 94.4 & 83.3 &100.0 &100.0 &100.0 & 90.0 & 77.5 & 92.3 \\

Using ground truth in 2018 format and predictions in 2018 format, I got this result:

Evaluation of per-frame multi-person pose estimation
saving results to ./out/total_AP_metrics.json
Average Precision (AP) metric:
& Head & Shou & Elb  & Wri  & Hip  & Knee & Ankl & Total\\
& 33.3 & 37.5 & 37.5 & 50.0 & 37.5 & 50.0 & 37.5 & 40.0 \\

I have uploaded my test JSON files at https://drive.google.com/open?id=1jfNa75EKAIMHd8TUrOGp44XxXYANLUm9

Posetrack2017 local validation in Python

Hello @leonid-pishchulin, thanks for your eval tool. I have a PoseTrack 2018 submission and would like to submit to PoseTrack 2017 for comparison with prior work. I can write another data exporter for 2017. Can you confirm that 2018 is a strict superset, so that I get all the annotations?

How can I verify locally? The 2017 ground truth annotations are Matlab files, and I don't have Matlab. The readme references a mat2json tool. Is that for the ground truth? Is the output of that conversion for the ground truth available somewhere? Would your Python evaluator then work with the converted ground truth and my new 2017 annotations?

Prediction json

Is it possible to get a set of prediction JSON files that correspond to the annotated val JSON files, for testing purposes?

'convert.py' is unable to convert COCO ground truth annotation file 'person_keypoints_minival2014.json'

I've noticed that poseval recently started to support COCO datasets, due to the newly added file 'convert.py'. Unfortunately, when I try to evaluate some Detectron results from https://github.com/facebookresearch/Detectron/blob/master/MODEL_ZOO.md, it reports:

Traceback (most recent call last):
  File "evaluate.py", line 72, in <module>
    main()
  File "evaluate.py", line 32, in main
    gtFramesAll,prFramesAll = eval_helpers.load_data_dir(argv)
  File "/home/tujun/poseval/py/eval_helpers.py", line 385, in load_data_dir
    data = convert_videos(data)[0]
  File "/home/tujun/poseval/py/convert.py", line 620, in convert_videos
    videos = Video.from_new(track_data)
  File "/home/tujun/poseval/py/convert.py", line 185, in from_new
    assert lm_idx in conversion_table, "Landmark `%s` not found." % (lm_name)
AssertionError: Landmark `head_bottom` not found.

Upon further examination, I found that the process stops when it tries to convert the ground truth annotation file (namely 'person_keypoints_minival2014.json') to the PoseTrack 2017 format.
Am I handling it in the wrong way, or is poseval's support for COCO not complete yet?

Cannot install via pip install https://github.com/leonid-pishchulin/poseval.git

Hi, the recommended way to install poseval does not work locally.

(pose) C:\Users\admin\pose>pip install --no-cache https://github.com/svenkreiss/poseval.git
Collecting https://github.com/svenkreiss/poseval.git
  Downloading https://github.com/svenkreiss/poseval.git
     - 205.0 kB 3.1 MB/s 0:00:00
  ERROR: Cannot unpack file C:\Users\admin\AppData\Local\Temp\pip-unpack-1uy31h6h\poseval.git (downloaded from C:\Users\admin\AppData\Local\Temp\pip-req-build-rbv6emii, content-type: text/html; charset=utf-8); cannot detect archive format
ERROR: Cannot determine archive format of C:\Users\admin\AppData\Local\Temp\pip-req-build-rbv6emii

Some pictures' ground truths have problems

[attached screenshot of the image with the questionable keypoints]
I don't think this picture should have these two keypoints as ground truth.

The image name is: 'images/bonn_mpii_test_v2_5sec/17839_mpii/00000143.jpg'
The annotation file containing its ground truth is: 17839_mpii_relpath_5sec_testsub.mat

This is the one I spotted; I don't know if there are more.

MOT Error

Hi all,

When I enable the MOT mode, I get the following error:

Traceback (most recent call last):
  File "evaluate.py", line 67, in <module>
    main()
  File "evaluate.py", line 53, in main
    metricsAll = evaluateTracking(gtFramesAll,prFramesAll)
  File "/media/raaj/Storage/video_datasets/posetrack_valscripts/py/evaluateTracking.py", line 121, in evaluateTracking
    metricsAll = computeMetrics(gtFramesAll, motAll)
  File "/media/raaj/Storage/video_datasets/posetrack_valscripts/py/evaluateTracking.py", line 82, in computeMetrics
    metricsMid = mh.compute(accAll[i], metrics=metricsMidNames, return_dataframe=False, name='acc')
  File "/usr/local/lib/python2.7/dist-packages/motmetrics/metrics.py", line 127, in compute
    df = df.events
  File "/usr/local/lib/python2.7/dist-packages/motmetrics/mot.py", line 231, in events
    self.cached_events_df = MOTAccumulator.new_event_dataframe_with_data(self._indices, self._events)
  File "/usr/local/lib/python2.7/dist-packages/motmetrics/mot.py", line 271, in new_event_dataframe_with_data
    raw_type = pd.Categorical(tevents[0], categories=['RAW', 'FP', 'MISS', 'SWITCH', 'MATCH'], ordered=False)
IndexError: list index out of range

I checked, and it looks like this function makes motAll all NaNs:

distThresh = 0.5
# assign predicted poses to GT poses
_, _, _, motAll = eval_helpers.assignGTmulti(gtFramesAll, prFramesAll, distThresh)

The values that I input to that function are definitely correct, as the -e evaluation gives me 100%.

Is there a BUG?

When both 'trackidxGT' and 'trackidxPr' are empty, the evaluation code breaks.
A runtime error is raised: "IndexError: list index out of range". The error happens in "mh.compute(accAll[i], ...)".

In this case, ['num_misses', 'num_switches', 'num_false_positives', 'num_objects', 'num_detections'] should be [0, 0, 0, 0, 0].

I hope the authors can fix this bug.

Weird output

Hi, I tried evaluating my model on the validation set using evaluate.py. Everything went well and there weren't any errors, but the output seemed wrong. I even got negative numbers and NaN (see below). I have no idea what is going wrong. Can someone share some thoughts?

& MOTA & MOTA & MOTA & MOTA & MOTA & MOTA & MOTA & MOTA & MOTP & Prec & Rec  \\
& Head & Shou & Elb  & Wri  & Hip  & Knee & Ankl & Total& Total& Total& Total\\
&-61.0 &-98.8 &-94.0 &-78.9 &-98.6 &-86.8 &-79.9 &-83.8 & 61.3 & nan  &  0.7 \\

GT frames only consist of frames with annotations during evaluation

It seems that this happens when converting GT annotations to the 2017 format, in the function to_old in convert.py, line 129.

The predictions contain results for all of the frames in a video, but the ground truth only includes frames with annotations, so the following error may occur:

Traceback (most recent call last):
  File "evaluate.py", line 72, in <module>
    main()
  File "evaluate.py", line 32, in main
    gtFramesAll,prFramesAll = eval_helpers.load_data_dir(argv)
  File "/poseval/py/eval_helpers.py", line 407, in load_data_dir
    raise Exception('# prediction frames %d <> # GT frames %d for %s' % (len(pr),len(gt),predFilename))
Exception: # prediction frames 117 <> # GT frames 52 for /001735_mpii_test.json

Python 2 and Python 3 do not give the same result

I changed the 'print' syntax from Python 2 to Python 3, but under the same conditions the MOTP/Total metric differs. I have checked that MOTP for some joints is 0 with Python 2, but not with Python 3. Which result should I believe?

Convert from 2018 to 2017

Hi,

I'm trying to evaluate results in the 2017 format against the 2018 eval JSON. The convert function does not merge the annotations belonging to the same image, so the number of frames in the predictions and in the ground truth do not match.

Cannot find the evaluation server

Hi, I want to submit my result file to the online evaluation server. However, I can't find the address. I can only find the DensePose-PoseTrack evaluation server address on the official website. Can you please tell me where I can submit my result file? Thank you very much!

Submit the test set result for posetrack2018 and posetrack2017

I have run all my experiments on the PoseTrack 2018 validation set and the PoseTrack 2017 validation set.
Now I want to submit the test set results. Does the PoseTrack 2018 test set consist of all of the PoseTrack 2017 test set?
If I submit the PoseTrack 2018 results, could you please send me back both the PoseTrack 2018 and PoseTrack 2017 (for comparison with others) test set results?

Evaluation without Head Bounding Box

Good day,

Is it possible to run the single-frame keypoint evaluation without the head bounding box? My estimator only estimates keypoints. If I set the head bounding box size to 0, it gives 0 percent accuracy.

Question about annotations of PoseTrack2018

On the download page, the annotation data contains only 250 videos for training and 50 for validation, but there are 792 and 170 image sequences for training and validation respectively (and 375 for test). Where can I get the rest of the annotations? Besides, the 2018 annotations contain the same videos as the 2017 annotations, which seems strange. After I registered, I could not find additional annotations either.

MOT Error not solved

Hi, the closed issue "MOT error" (1 Jul 2018) seems to be unfixed.
When running the following 3 videos -> 017839_mpii_test, 004707_mpii_test, 004712_mpii_test,
an error occurred:
[attached screenshot of the error]

Also, when I use the ground truth as the result and run a single video:
Estimation result of 004707_mpii_test
& Head & Shou & Elb  & Wri  & Hip  & Knee & Ankl & Total\\
&100.0 &100.0 &100.0 &100.0 &100.0 &  0.0 &  0.0 & 73.3 \\

Estimation result of 004712_mpii_test
& Head & Shou & Elb  & Wri  & Hip  & Knee & Ankl & Total\\
&100.0 &100.0 &100.0 &100.0 &100.0 &  0.0 &  0.0 & 73.3 \\

Estimation result of 017839_mpii_test
& Head & Shou & Elb  & Wri  & Hip  & Knee & Ankl & Total\\
&100.0 &100.0 & 50.0 & 50.0 &100.0 &  0.0 &  0.0 & 60.0 \\

Could you please look into that?

how to handle cropped images in test set.

Does anyone happen to know how to handle the cropped images in the test set? For some sequences there are duplicate frames: one is the full frame and the other is cropped. Will the cropped ones be used during evaluation? If not, can I just ignore those cropped frames when generating the JSON files for evaluation?

Thanks!

evaluation error

Hi,
I just want to try evaluate.py to get the result. So I used the annotations of the train and val sets, e.g. 000001_bonn_relpath_5sec_trainsub.mat, converted to JSON using the mat2json.m function. Then I used the same JSON file as both prediction and ground truth, and used the following command to test:

python evaluate.py --groundTruth=/mnt/data/dataset/posetrack/posetrack_data/annotations/myval/ --predictions=/mnt/data/dataset/posetrack/posetrack_data/annotations/predictions/ --evalPoseEstimation

However, there is an error as follows:
[attached screenshot of the error]
Do you have any ideas? Thank you!

The total MOTA does not equal to the 'average joint-level MOTA'

I added the following lines to eval_helpers.py :: def getCum(vals) to compute the average joint-level MOTA:

cum += [(vals[[Joint().right_ankle, Joint().right_knee, Joint().right_hip, Joint().left_hip,
               Joint().left_knee, Joint().left_ankle, Joint().right_wrist, Joint().right_elbow,
               Joint().right_shoulder, Joint().left_shoulder, Joint().left_elbow, Joint().left_wrist,
               Joint().neck, Joint().nose, Joint().head_top], 0].mean())]

I find that it does not equal the total MOTA! The total MOTA is much lower (by about 5%).

Is there any direction on how to use the new posetrack/coco annotation converter?

I wrote a PoseTrack-to-COCO dataset converter, but it only works on the ground truth annotations, and only from PoseTrack to COCO. I'm still a little fuzzy on Python, so I can't determine how to use your converter, or whether it works on prediction results, just by reading the source code.

Could you provide a guide as to how to use this converter? Many thanks!

-999999

I find values like '-999997', '-999982', ... in the result JSON.

R2016b (9.1.0.441655) 64-bit (glnxa64)
Python 2.7.13
json 2.0.9

MOT Error if a sequence doesn't have a particular limb

17839_mpii_relpath_5sec_testsub (copy).txt

You can attempt to run the evaluation of this file against itself. It will crash immediately. Why is this the case? Immediate debugging will tell you that the MOT accumulator's events list is empty.

File "evaluate.py", line 68, in
main()
File "evaluate.py", line 54, in main
metricsAll = evaluateTracking(gtFramesAll,prFramesAll)
File "/media/raaj/Storage/video_datasets/posetrack_valscripts/py/evaluateTracking.py", line 134, in evaluateTracking
metricsAll = computeMetrics(gtFramesAll, motAll)
File "/media/raaj/Storage/video_datasets/posetrack_valscripts/py/evaluateTracking.py", line 95, in computeMetrics
metricsMid = mh.compute(accAll[i], metrics=metricsMidNames, return_dataframe=False, name='acc')
File "/usr/local/lib/python2.7/dist-packages/motmetrics/metrics.py", line 127, in compute
df = df.events
File "/usr/local/lib/python2.7/dist-packages/motmetrics/mot.py", line 231, in events
self.cached_events_df = MOTAccumulator.new_event_dataframe_with_data(self._indices, self._events)
File "/usr/local/lib/python2.7/dist-packages/motmetrics/mot.py", line 271, in new_event_dataframe_with_data
raw_type = pd.Categorical(tevents[0], categories=['RAW', 'FP', 'MISS', 'SWITCH', 'MATCH'], ordered=False)
IndexError: list index out of rang

    totalJ = [0]*nJoints

    for j in range(len(imgidxs)):
        imgidx = imgidxs[j,0]
        # iterate over joints
        for i in range(nJoints):
            # GT tracking ID
            trackidxGT = motAll[imgidx][i]["trackidxGT"]
            # prediction tracking ID
            trackidxPr = motAll[imgidx][i]["trackidxPr"]
            # distance GT <-> pred part to compute MOT metrics
            # 'NaN' means force no match
            dist = motAll[imgidx][i]["dist"]
            # Call update once per frame
            #if si == 25:
            if len(trackidxGT): totalJ[i] += 1
            accAll[i].update(
                trackidxGT,                 # Ground truth objects in this frame
                trackidxPr,                 # Detector hypotheses in this frame
                dist                        # Distances from objects to hypotheses
            )

    print totalJ

This will give [0, 0, 34, 34, 0, 0, 0, 0, 34, 34, 34, 34, 34, 34, 34], and it crashes. But some sequences in PoseTrack don't have leg poses, for example; that's how the ground truth is. So why is it crashing?

Convert *.mat to *.json

Dear Leonid,
I'm a master's student interested in taking part in the PoseTrack Challenge, and I find your code best suited for evaluating that task.
However, the official website only provides the *.mat version of the labels, not the *.json format your code uses, and some keys like "img_idx" don't seem to be provided in the *.mat version.
It would be greatly helpful if you could give me information about how to obtain the labels in *.json format. :)

I really appreciate your work!
Thank you!

Is the 'name' of 'image' really needed in evaluation process?

If I'm reading the source code correctly, it seems the 'name' of the 'image' is not necessary for the evaluation process. I'm having trouble reconciling the result from the validation set with the result from the test server; there is too big a gap between the two.

Test submission example

Could you please give a submission example for the PoseTrack 2017 dataset? It seems that the 2018 format is a little different from it.

about the Neck

After running evaluate.py on my PoseTrack 2018 val results,
I got the pose estimation results, but I found the keypoint "neck" among the "names" items,
while the PoseTrack 2018 val set has no neck keypoint. Where does this value come from? It is very low compared with the other values.

Posetrack 2018

Good day,

My code currently converts the PoseTrack 2018 results automatically into exactly the PoseTrack 2017 format (same evaluation as last year).

So I have this format basically:

POSETRACK_MAPPING = dict()
POSETRACK_MAPPING["RANKLE"] = 0
POSETRACK_MAPPING["RKNEE"] = 1
POSETRACK_MAPPING["RHIP"] = 2
POSETRACK_MAPPING["LHIP"] = 3
POSETRACK_MAPPING["LKNEE"] = 4
POSETRACK_MAPPING["LANKLE"] = 5
POSETRACK_MAPPING["RWRIST"] = 6
POSETRACK_MAPPING["RELBOW"] = 7
POSETRACK_MAPPING["RSHOULDER"] = 8
POSETRACK_MAPPING["LSHOULDER"] = 9
POSETRACK_MAPPING["LELBOW"] = 10
POSETRACK_MAPPING["LWRIST"] = 11
POSETRACK_MAPPING["NECK"] = 12
POSETRACK_MAPPING["NOSE"] = 13 # Have nose
POSETRACK_MAPPING["TOP"] = 14

If so, will this still work? Have you changed anything in your code that might affect this?

Also, are you evaluating the ears at all then?

Can you confirm which dataset versions were used in the ICCV'17 and ECCV'18 challenges?

Hi, thank you for the effort you put into creating this large-scale dataset. I would like to make fair comparisons to the papers presented in the PoseTrack workshops. Which version of PoseTrack 2017 was used in the ICCV'17 workshop? Which version of PoseTrack 2018 was used in the ECCV'18 workshop? I have looked at some of the papers which participated, but did not see a dataset version explicitly stated.

Also, the participating papers seem to mention that the test set they used is a subset of the full test set. If I were to submit to your evaluation server today (July 11, 2019), would I be using the same test sets as those papers? Thank you.

Head_box

Hi @leonid-pishchulin, is it necessary to include the position of the head box in the prediction file, following the ground truth format? My method does not perform head detection and cannot provide the head box position. Thank you.

PoseTrack2017 upload failure

I've used the code to get predictions for the PoseTrack 2017 test set. After getting 214 predicted JSON files, I zipped them and uploaded the archive to the official website. The errors are as follows:

Wrong number of result files! 0 results found in archive, but 214 needed. Please make sure that all files are present in root folder of the zip archive.
Uploaded successfully.
zip archive unpacked successfully!

But I have placed the 214 predicted JSON files of the test set in the root folder, like this:

results/

"00001_mpii_new_step1_relpath_5sec_testsub.json"
"00002_mpii_new_step1_relpath_5sec_testsub.json"
***
"**.json"

and zipped it on Ubuntu using the following command

zip -r results.zip results/

or zipped it directly in Windows.

I don't know why it fails and hope you can provide a solution.
