sinc-rppg's Issues
Would you mind sharing the pretrained model weights for PURE?
Thanks for sharing the code for this work; it's really amazing! I want to test it on real-world video including head movement, but I don't currently have permission to access the PURE dataset. Would you mind sharing the model weights trained on PURE?
Thanks for sharing
Where is the SiNC model?
Thank you very much for your work. Where can I find the SiNC model? I only found PhysNet and RPNet in the downloaded code. I would also like to ask about the approximate FLOPs of this model.
Experiment on UBFC does not work
UBFC.py is unfinished; __getitem__ is left as a stub:
def __getitem__(self, idx):
raise NotImplementedError
After preprocessing the UBFC-rPPG dataset and running train.py with UBFC, the following error occurs:
File "../SiNC/src/datasets/UBFC.py", line 95, in set_augmentations
raise NotImplementedError
NotImplementedError
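Since the loader is a stub, anyone hitting this error has to fill in __getitem__ themselves. Below is a minimal sketch of what a completed loader might look like, assuming the preprocessing step produces face clips shaped (N, T, H, W, 3) and PPG waveforms shaped (N, T); the class and attribute names (UBFCClips, clips, waves) are illustrative, not the repo's actual fields, and in the real code this class would subclass torch.utils.data.Dataset.

```python
import numpy as np

class UBFCClips:
    """Illustrative stand-in for the unfinished UBFC loader.

    In the actual repo this would subclass torch.utils.data.Dataset;
    plain NumPy is used here to keep the sketch self-contained.
    """

    def __init__(self, clips, waves):
        self.clips = clips   # (N, T, H, W, 3) preprocessed face clips
        self.waves = waves   # (N, T) PPG waveforms (used for evaluation only)

    def __len__(self):
        return len(self.clips)

    def __getitem__(self, idx):
        clip = self.clips[idx].astype(np.float32)
        # Channels-first (3, T, H, W), the layout PhysNet-style models expect.
        clip = np.transpose(clip, (3, 0, 1, 2))
        return clip, self.waves[idx].astype(np.float32)
```

Note that the set_augmentations stub in the traceback would also need to be implemented (or bypassed) before training runs end to end.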
Where can I find the .csv file needed by make_matadata.py?
Is this really non-contrastive learning?
Hi,
Impressive work, it looks so simple.
In my view, the core of the work consists of three loss terms: bandwidth, sparsity, and variance, referred to in the code as IPR, SNR, and EMD. IPR and SNR are weakly supervised terms based on prior knowledge; the EMD term is introduced to prevent the model from collapsing into trivial solutions.
EMD essentially constructs negative pairs by forcing the model's output PSDs to be uniformly distributed across a batch. This implicitly compares samples within a batch, pushing their distances to be as large as possible. Consider the degenerate case: with a batch size of 1, EMD cannot function at all, because no negative pairs can be constructed.
Moreover, EMD does not completely solve the collapse problem. It cannot guarantee that the PSD distributions the model produces correlate positively with the ground-truth labels; sometimes the model simply distributes its outputs randomly within the allowed bandwidth.
Based on my replication results, convergence is not very stable: sometimes it achieves good outcomes, and other times they are quite poor.
Looking forward to more code releases from the author!
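To make the three-term structure described above concrete, here is a hedged NumPy sketch of what IPR-, SNR-, and EMD-style losses over a batch of power spectral densities might look like. The band limits (0.66-3.0 Hz, i.e. roughly 40-180 BPM), the peak half-width delta, and the function names are illustrative assumptions, not the repo's actual implementation.

```python
import numpy as np

def bandwidth_loss(psd, freqs, low=0.66, high=3.0):
    """IPR-style term: fraction of spectral power outside the pulse band."""
    band = (freqs >= low) & (freqs <= high)
    total = psd.sum(axis=-1)
    in_band = psd[:, band].sum(axis=-1)
    return float(np.mean((total - in_band) / (total + 1e-8)))

def sparsity_loss(psd, freqs, low=0.66, high=3.0, delta=0.1):
    """SNR-style term: in-band power far from each sample's dominant peak."""
    band = (freqs >= low) & (freqs <= high)
    psd_b, f_b = psd[:, band], freqs[band]
    peak_f = f_b[np.argmax(psd_b, axis=-1)]            # (N,) peak frequencies
    near = np.abs(f_b[None, :] - peak_f[:, None]) <= delta
    signal = (psd_b * near).sum(axis=-1)
    return float(np.mean(1.0 - signal / (psd_b.sum(axis=-1) + 1e-8)))

def variance_loss(psd, freqs, low=0.66, high=3.0):
    """EMD-style term: 1-D Wasserstein distance (via cumulative sums)
    between the batch-averaged in-band spectrum and a uniform prior."""
    band = (freqs >= low) & (freqs <= high)
    p = psd[:, band].mean(axis=0)                      # average over the batch
    p = p / p.sum()
    u = np.full_like(p, 1.0 / p.size)
    return float(np.abs(np.cumsum(p) - np.cumsum(u)).sum())
```

The sketch also makes the batch-size-1 objection above visible: variance_loss averages over axis 0, so with N = 1 it reduces to pushing a single sample's spectrum toward uniform, which directly conflicts with the sparsity term.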
The results of the PURE test are different from those in the paper
After training and testing on PURE, my test results differ significantly from those in the paper. The results I got are shown below. I followed the README step by step, so I don't know where I made a mistake. Could you explain why?
[pure_testing screenshot: table of ME, MAE, RMSE, r; only the value -1.00 is recoverable]