Thank you for your excellent work! I was just wondering about some issues with the dataset -
Intrinsics are not consistent with the frame sizes. For example, mpii-bicycling-bicycling,BMX-056019255-0 from translation_train has frame sizes of [[520 595], [525 595], [529 595], [532 595], [536 595], [537 595], [537 595], [538 595], [543 595], [551 595], [561 595], [572 595], [584 595], [595 582]], but the intrinsic annotations give a principal point of (640, 360), which would correspond to a 1280x720 frame. This does not make sense.
Frame sizes differ across frames within a single video, as in the example above.
SMPL annotations are only partial: some videos that contain humans throughout do not have SMPL annotations for every frame.
Not all sequences have annotations. For example, panorama_train has 76 sequences but only 47 are annotated.
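For reference, here is a minimal sketch of the sanity check I used to surface the intrinsics mismatch in point 1. The function name and tolerance are my own; it simply tests whether the annotated principal point lies near the frame center, which is what one would expect for unaltered intrinsics.

```python
def principal_point_consistent(frame_hw, principal_point, tol=2.0):
    """Return True if (cx, cy) is within `tol` pixels of the frame center.

    frame_hw: (height, width) of one frame
    principal_point: (cx, cy) from the intrinsics annotation
    """
    h, w = frame_hw
    cx, cy = principal_point
    return abs(cx - w / 2) <= tol and abs(cy - h / 2) <= tol

# (640, 360) matches the center of a 1280x720 frame...
print(principal_point_consistent((720, 1280), (640, 360)))  # True
# ...but not any of the frame sizes in the example sequence above.
print(principal_point_consistent((520, 595), (640, 360)))   # False
```

If the intrinsics were instead annotated at a different resolution and the frames later cropped or resized, it would help to know the original resolution and the crop parameters so the intrinsics can be rescaled accordingly.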
Thanks in advance for your clarification. Looking forward to your reply.
Thanks a lot for your interesting work. Could you provide a description of the dataset annotations and how they were computed? Also, what is the difference between panorama and translation frames?
Thank you in advance for your help.