
jackyjsy / sam-slr-v2

30 stars · 1 watcher · 8 forks · 196 KB

SAM-SLR-v2 is an improved version of SAM-SLR for sign language recognition.

License: Creative Commons Zero v1.0 Universal

Python 100.00%
Topics: sign-language-recognition, sign-language-recognition-system, graph-convolutional-networks, multi-modal-learning

sam-slr-v2's People

Contributors

jackyjsy


sam-slr-v2's Issues

Incorrect performance

Hello, I have trained the proposed model myself and found that the performance is lower than reported in the paper on all datasets. I also found that the performance is sensitive to the preprocessing and the hyper-parameters.

File name and directory structure for WLAS dataset preparation

Hi, I am interested in applying the SL-GCN algorithm to the WLASL dataset, but I have not understood the steps to take and the folder structure needed to use your code correctly. First I downloaded the WLASL dataset following the instructions at https://github.com/dxli94/WLASL. After processing it with the preprocess.py script, I have all the videos in one folder named 'videos', with each file named like unique_id.mp4. At this point I should use the demo.py script in data-prepare/wholepose (https://github.com/jackyjsy/data-prepare/tree/89b556b0cb49a5a401ed939e3977c101df912257/wholepose), but how should I rename the files? Should the sign performed in the video appear in the file name? Also, should I separate the files into train, test, and validation folders and apply the demo.py script to each folder separately?

Could you explain in more detail the steps to take, how to structure the directories, the file naming, and the commands to run?

Thanks
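One plausible layout, not confirmed by the repository authors, is to keep the unique_id.mp4 file names unchanged and sort the videos into train/val/test subfolders using the per-instance "split" field in the WLASL annotation file (WLASL_v0.3.json in the dxli94/WLASL repo). A minimal sketch under that assumption:

```python
import json
import shutil
from pathlib import Path

def split_videos(annotation_path, videos_dir, out_dir):
    """Copy <video_id>.mp4 files into train/val/test subfolders according
    to the per-instance 'split' field in the WLASL annotation JSON.
    The JSON layout (list of glosses, each with an 'instances' list) is
    assumed from https://github.com/dxli94/WLASL."""
    videos_dir, out_dir = Path(videos_dir), Path(out_dir)
    with open(annotation_path) as f:
        entries = json.load(f)
    copied = {}
    for entry in entries:                      # one entry per gloss (sign)
        for inst in entry["instances"]:        # one instance per video clip
            src = videos_dir / f"{inst['video_id']}.mp4"
            if not src.exists():               # some clips fail to download
                continue
            dst_dir = out_dir / inst["split"]  # 'train', 'val', or 'test'
            dst_dir.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst_dir / src.name)
            copied[inst["video_id"]] = inst["split"]
    return copied
```

With that structure in place, demo.py could then be pointed at each split folder in turn; the gloss label would come from the annotation file rather than the file name, so renaming the videos should not be necessary under this assumption.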

Normalization of 2D keypoints

Hello,

In the paper it is mentioned that the 2D keypoints are normalized to [-1, 1], but when I print the keypoints after the feeder they do not look normalized to [-1, 1].

Is this correct, or have I done something wrong?

Thank you in advance
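For reference, mapping pixel coordinates into [-1, 1] is usually done relative to the frame size. The function below is an illustrative sketch of that convention, not the repository's actual feeder code; the frame_w/frame_h parameters and the (num_joints, 2) array shape are assumptions:

```python
import numpy as np

def normalize_keypoints(kpts, frame_w, frame_h):
    """Map pixel-space (x, y) keypoints into [-1, 1] per axis.
    kpts: array of shape (num_joints, 2) in pixel coordinates.
    x = 0 maps to -1, x = frame_w maps to +1 (likewise for y)."""
    kpts = np.asarray(kpts, dtype=np.float64)
    scale = np.array([frame_w, frame_h], dtype=np.float64)
    return 2.0 * kpts / scale - 1.0
```

If the printed values fall outside [-1, 1], one possibility worth checking is that the feeder applies a different centering (e.g. subtracting a reference joint) rather than this whole-frame scaling.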
