
Comments (9)

jackd commented on July 30, 2024

Just pushed a fix. I changed interfaces at some point to use SkeletonConverters but didn't fix the example code scattered around the place. It should have used the parent directory's skeleton.s20_to_s16_converter().convert(native) - native is the native skeleton (i.e. the skeleton provided by the original dataset) with 20 joints, whereas s14/s16 have 14/16 joints.
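For reference, a minimal sketch of the corrected call; the import path is an assumption, since only the method chain itself appears in the comment:

```python
# Sketch of the corrected call described above; the import path is an
# assumption - only skeleton.s20_to_s16_converter().convert(native)
# appears in the comment.
from human_pose_util import skeleton


def native_to_s16(native):
    """Convert a 20-joint native skeleton to the 16-joint s16 layout."""
    return skeleton.s20_to_s16_converter().convert(native)
```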

Disclaimer: you'll probably find a fair few issues like this. Feel free to file them and I'll get to them when I get a chance, but it won't be a high priority for the next week. Good luck.

jackd commented on July 30, 2024

I know I tried doing that once, but I ended up concluding it was a bad idea - video compression is best, and if you try to save raw data it will explode to an unmanageable size. It might work if you only wanted a subset of the data - every 10th frame or something - but the size still ends up being quite unmanageable if you're not smart about it. I haven't revisited it since I've done some work with imagenet and learned some things (feel free to check out this script from my imagenet repo that saves externally compressed image data as vlen hdf5 data. Don't try to save frames in individual datasets - you'll get this behaviour), but I can guarantee I haven't implemented anything like that in here.
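A minimal sketch of that vlen approach, assuming h5py and hypothetical frame paths (the linked script is the authoritative version):

```python
# Sketch of the approach described above: keep frames externally
# compressed (e.g. jpeg) and store the raw bytes as variable-length
# uint8 data in a single HDF5 dataset. Paths and names are illustrative.
import h5py
import numpy as np

jpeg_paths = ["frames/frame_000001.jpg", "frames/frame_000002.jpg"]

with h5py.File("frames.h5", "w") as f:
    dt = h5py.vlen_dtype(np.uint8)
    # One dataset holding all frames - a dataset per frame inflates
    # HDF5 metadata overhead badly (the "behaviour" warned about above).
    ds = f.create_dataset("frames", shape=(len(jpeg_paths),), dtype=dt)
    for i, path in enumerate(jpeg_paths):
        with open(path, "rb") as fp:
            ds[i] = np.frombuffer(fp.read(), dtype=np.uint8)
```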

jackd commented on July 30, 2024

Hi Pavanteja, I've seen this and promise I'll get back to it - things are pretty hectic at work for the rest of the week though, sorry for the delay. If you're desperate: it's probably something I accidentally deleted after I'd done the conversion, so it'll likely be in the git history somewhere - otherwise I'll sort it out in a week or so.

pavanteja295 commented on July 30, 2024

Hey Jack,
Thanks for the quick reply. Can you at least tell me what the function does on the whole? If I understand correctly, the HumanEva annotations use different joint names than the ones used in general, and you want to convert the given joints into the general joints? Let me know if this is the case.

pavanteja295 commented on July 30, 2024

Thanks a lot for such a quick fix. Just one doubt I have: how do you convert Image_data, which holds the video files, into images that I can keep for future use? Also, can I use hdf5_tree.py to convert the uncompressed files to an hdf5 file?

pavanteja295 commented on July 30, 2024

Hey, thanks a lot for the information and such interactive issue resolving. The last question I have: I think you haven't downsampled any of the annotations stored in hdf5, but when I extracted frames from the provided videos using ffmpeg at the frame rate of 60 given in the paper, surprisingly the number of frames in the hdf5 file does not match the number of images extracted from the video. Any idea about this?
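For context, the extraction described here would look roughly like the following; the video path is a hypothetical HumanEva-style example, and -r 60 matches the frame rate quoted from the paper:

```python
# Rough equivalent of the ffmpeg extraction described above, wrapped in
# Python; the video path is a hypothetical example, not a verified one.
import subprocess

video_path = "S1/Image_Data/Walking_1_(C1).avi"  # hypothetical path
subprocess.run(
    ["ffmpeg", "-i", video_path, "-r", "60", "out/frame_%06d.png"],
    check=True,
)
```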

jackd commented on July 30, 2024

I observed the same thing, but the difference was only a few frames if I recall correctly. I can't remember exactly how I reconciled it - I think I just trimmed the last few frames after visually verifying I couldn't really tell the difference between trimming start and end frames.
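A minimal sketch of that reconciliation, with hypothetical frames (extracted images) and poses (hdf5 annotations) sequences:

```python
def trim_to_match(frames, poses):
    """Drop trailing entries so frame and annotation counts agree.

    End-trimming was chosen because, as noted above, start- vs
    end-trimming looked visually indistinguishable.
    """
    n = min(len(frames), len(poses))
    return frames[:n], poses[:n]
```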

pavanteja295 commented on July 30, 2024

Yeah, thanks for your suggestion - I was able to create it. One doubt I have: in meta.py you have the partition, which shows a partition of the frames. Is this the training and validation partition? If not, how can I find the train and validation split?

jackd commented on July 30, 2024

... yep, should have documented that better. 36 hours to a (different) deadline so I won't address it properly now, but I recall the numbers coming straight from the original EVA paper. From memory, and based on the limited comments I have there, S1/Walking/Trial 1 frames[:590] were validation and frames[590:] were training, while trial 2 was entirely for testing and trial 3 entirely for training (total frame counts: 1180, 980, 3238).
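As a hedged sketch only (the dict structure and key names are assumptions, not the actual contents of meta.py), the split quoted above might be written as:

```python
# Sketch of the S1/Walking split using the frame indices quoted in the
# comment; structure and key names are assumptions, not meta.py itself.
eva_partition = {
    ("S1", "Walking", 1): {"val": (0, 590), "train": (590, 1180)},
    ("S1", "Walking", 2): {"test": (0, 980)},    # trial 2: all testing
    ("S1", "Walking", 3): {"train": (0, 3238)},  # trial 3: all training
}
```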
