Data Partition Problem about labstreaminglayer (CLOSED)

sccn commented on May 6, 2024
Data Partition Problem

Comments (4)

cboulay commented on May 6, 2024

Sending and receiving data are decoupled. The inlet doesn't care about the outlet's chunk size, and vice versa. The inlet's pull_chunk will pull as many samples as are available. You may be wondering: if your outlet only pushes 256-sample chunks at a time, why are you able to pull fewer than 256 samples? It's because push_chunk is just an optimized loop of push_sample, and your inlet happens to be asking for samples in the middle of this loop.

You can try playing around with the chunk_size parameter on the outlet, or max_chunklen on the inlet.
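
To make those two knobs concrete, here is a minimal pylsl sketch; the stream name, channel count, and chunk sizes are illustrative placeholders, not values from this issue:

```python
from pylsl import StreamInfo, StreamOutlet, StreamInlet, resolve_byprop

# Outlet side: chunk_size is a transmission granularity hint, not a guarantee.
info = StreamInfo('MyEEG', 'EEG', 8, 256, 'float32', 'myuid1234')
outlet = StreamOutlet(info, chunk_size=32)

# Inlet side: max_chunklen caps the chunk granularity on the receiving end.
streams = resolve_byprop('name', 'MyEEG', timeout=5.0)
inlet = StreamInlet(streams[0], max_chunklen=32)

# pull_chunk returns whatever has arrived so far -- possibly fewer samples
# than the outlet pushed in its last push_chunk call.
samples, timestamps = inlet.pull_chunk(timeout=0.0)
```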

However, this isn't really what we'd consider "best practice". Your approach adds an artificial 1.0-second delay between the time of the first sample in a chunk and the time that sample is sent. The preferred way is to send samples as fast as possible and let the receiving applications decide how to deal with them.

If you're accumulating 1 second's worth of samples because you want to do an FFT on them, a better approach is to maintain a FIFO buffer of 256 samples. Whenever a new chunk comes in, whether it has 2 samples or 172 samples, you discard the oldest samples, add the new samples to the buffer, and then calculate the FFT on the entire buffer. This allows you to update your spectra as fast as possible. If you want the spectral updates to occur at a constant step size, you can do something similar, but you need a slightly more complicated buffering strategy (e.g., a "full" buffer and an "on deck" buffer).
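
A minimal numpy sketch of that FIFO-plus-FFT idea (the 256-sample buffer comes from the question; the channel count is a placeholder):

```python
import numpy as np

FS = 256          # sampling rate (Hz)
BUF_LEN = 256     # 1 second of samples
N_CH = 14         # channel count -- a placeholder, set to match your stream

buf = np.zeros((BUF_LEN, N_CH))  # FIFO buffer, oldest sample in row 0

def on_new_chunk(chunk):
    """chunk: (n_samples, N_CH) list or array, e.g. from pull_chunk."""
    global buf
    # Append the new samples and keep only the newest BUF_LEN rows,
    # which discards the same number of oldest samples.
    buf = np.vstack([buf, np.asarray(chunk)])[-BUF_LEN:]
    # FFT over the whole 1-second buffer, per channel.
    spectra = np.abs(np.fft.rfft(buf, axis=0))
    freqs = np.fft.rfftfreq(BUF_LEN, d=1.0 / FS)
    return freqs, spectra
```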

If you can tell me more about why you need exactly 256 samples then I might be able to help you come up with a different strategy.

ufukuyan commented on May 6, 2024

Thank you so much for answering my question. I want to start by giving brief information about my work. In our study, we want to detect the visual discomfort of VR users by using EEG signals. We trained a model using raw EEG data from the participants. In our real-time application, we plan to collect simultaneous EEG data while users are exposed to the VR environment and offer a more comfortable scene if discomfort is detected.
Therefore, while the user is exposed to the VR environment, we intend to collect 1 second of data (256 samples), pass the collected data through a bandpass filter, scale it, and input it into the prediction function. In our study, we use a 14-channel Emotiv EPOC+ device to collect EEG data, so I need to collect 256 samples from each of the 14 channels to use in preprocessing and prediction. I need to do this at fixed steps in my real-time application.
Your suggestions are very valuable to me. Thank you so much.

cboulay commented on May 6, 2024

There's no reason for the 1-second windows you're analyzing to be non-overlapping, right?

For the bandpass filtering step, you can do that on as few samples as you want; you just have to save the filter state after every iteration and pass it into the next iteration. See the zi argument in scipy's sosfilt, and sosfilt_zi for the very beginning, before the filter has seen any data.
(Note that if you need to merge the filter output with other parallel processing steps, you should use an FIR filter design so you can compensate for its constant delay; if you are not doing so, then I think IIR filters are better.)
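
Here is a sketch of that stateful-filtering pattern with scipy; the 256 Hz rate and 14 channels come from the question above, but the 1-40 Hz Butterworth passband is an illustrative assumption:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfilt_zi

FS, N_CH = 256, 14
# Illustrative 4th-order Butterworth bandpass; pick the band your model needs.
sos = butter(4, [1, 40], btype='bandpass', fs=FS, output='sos')

# sosfilt_zi gives per-section initial conditions for one channel;
# replicate across channels: shape (n_sections, 2, N_CH) for axis=0 filtering.
zi = np.repeat(sosfilt_zi(sos)[:, :, np.newaxis], N_CH, axis=2)

def filter_chunk(chunk, zi):
    """Filter one (n_samples, N_CH) chunk, carrying filter state across calls."""
    y, zi = sosfilt(sos, np.asarray(chunk), axis=0, zi=zi)
    return y, zi
```

Because the returned zi is fed back in on the next call, the concatenated chunk outputs are identical to filtering the whole recording in one pass.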

I would then take the output of the filter and add it to a FIFO buffer as described above, and run the entire buffer through the rest of the model, even though the buffer has only been updated by N samples. In this way, you'll be updating your output as fast as your signal processing speed allows, which should lead to a better user experience.
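
Putting the pieces together, a rough end-to-end loop under the same assumptions (the stream type, passband, and the prediction model are placeholders):

```python
import numpy as np
from pylsl import StreamInlet, resolve_byprop
from scipy.signal import butter, sosfilt, sosfilt_zi

FS, BUF_LEN, N_CH = 256, 256, 14
sos = butter(4, [1, 40], btype='bandpass', fs=FS, output='sos')
zi = np.repeat(sosfilt_zi(sos)[:, :, np.newaxis], N_CH, axis=2)
buf = np.zeros((BUF_LEN, N_CH))

inlet = StreamInlet(resolve_byprop('type', 'EEG', timeout=5.0)[0])

while True:
    samples, _ = inlet.pull_chunk(timeout=1.0)
    if not samples:
        continue
    # Filter however many samples arrived, carrying the filter state.
    filtered, zi = sosfilt(sos, np.asarray(samples), axis=0, zi=zi)
    # FIFO update: drop the oldest samples, keep the newest second of data.
    buf = np.vstack([buf, filtered])[-BUF_LEN:]
    # 'model' is hypothetical -- whatever trained predictor you use.
    # prediction = model.predict(buf)
```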

ufukuyan commented on May 6, 2024

The data must be 1 second long to make a prediction, and the windows can overlap. Can I continue to store new data in the buffer while using the 1 second of data I've collected for prediction? If I can't add incoming data to the buffer while I'm processing the last 1 second of data, will I get wrong data if I continue adding to the FIFO buffer after the processing is finished?
