
labstreaminglayer's Introduction

Quick Start

The lab streaming layer (LSL) is a system for the unified collection of measurement time series in research experiments. It handles the networking, time-synchronization, and (near-)real-time access, as well as, optionally, the centralized collection, viewing, and disk recording of the data.

The most up-to-date version of this document can always be found in the main repository README and the online documentation.

The most common way to use LSL is to use one or more applications with integrated LSL functionality to stream data from one or more devices (e.g., an EEG amplifier and an eye tracker) and from a task application (NBS Presentation, PsychoPy, etc.) over the local network, and to record them with the LabRecorder.

Most LSL applications come bundled with their own copy of the LSL library (i.e., lsl.dll for a Windows application). However, many applications and interfaces (e.g., pylsl) do not ship with liblsl.dylib or liblsl.so on Mac or Linux, respectively. In those cases, it is necessary to install liblsl separately and make it available to the application or interface. See the liblsl repo for more info.

  • Take a look at the list of supported devices and follow the instructions to start streaming data from your device. If your device is not in the list then see the Getting Help section below.
  • Download LabRecorder from its release page. (Note that LabRecorder saves data to Extensible Data Format (xdf) which has its own set of tools for loading data after finishing recording.)
  • Go through the Tutorials.
  • Use LSL from your scientific computing environment. LSL has many language interfaces, including Python and Matlab; they all follow the same resolve → inlet → pull pattern sketched just after this list.
    • Python users need to pip install pylsl then try some of the provided examples.
    • The Matlab interface is also popular but requires a little more work to get started; please see its README for more info.
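For orientation, here is a minimal sketch of that resolve → inlet → pull pattern using the C++ API (the stream type "EEG" and the sample count are illustrative; the Python and Matlab interfaces mirror these calls):

// Sketch: resolve a stream on the network, open an inlet, and pull samples.
#include <iostream>
#include <vector>
#include <lsl_cpp.h>

int main() {
    // Find streams that declare type "EEG" (blocks until at least one is found).
    std::vector<lsl::stream_info> results = lsl::resolve_stream("type", "EEG");
    lsl::stream_inlet inlet(results[0]);

    std::vector<float> sample;
    for (int i = 0; i < 10; ++i) {
        double timestamp = inlet.pull_sample(sample);  // blocking pull of one sample
        std::cout << timestamp << ": " << sample[0] << "\n";
    }
    return 0;
}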

If you are not sure what you are looking for, try browsing through the code, which has submodule links to different repositories for tools and devices (Apps) and language interfaces (LSL). When you land in a new repository, be sure to read its README and look at its Releases page.

Getting Help

If you are having trouble with LSL then there are a few things you can do to get help.

  • Read the docs
  • Search GitHub issues in the main repository, in the old archived repository, and in the submodule for your App or language interface of interest.
  • Create a new GitHub issue. Please use the repository specific to the item you are having difficulty with; e.g., if you are having trouble with LabRecorder, open a new issue in its repository. If you don't know which repository is best, you can use the parent sccn/labstreaminglayer repository.
  • Join the LabStreamingLayer #users channel on Slack. Invite Link. Someone there may be able to get to the bottom of your problem through conversation.
  • You may also wish to try the very new labstreaminglayer.org forum

labstreaminglayer's People

Contributors

a-mark-mento agricolab arnodelorme behinger brifsttar cboulay chausner chkothe cjo5 cll008 dmedine eugene345 gat-marek-karwacki glokta0 guiomar horschig insane-dreamer pstdavid robertoostenveld trmullen tstenner wsoussou

labstreaminglayer's Issues

LabVIEW Interface

Hey there.

I require a LabVIEW interface to liblsl. When trying to create a wrapper for liblsl64.dll in LabVIEW, I noticed that LabVIEW can't interface with methods that use C++ classes like xml_element, stream_info, stream_outlet, or stream_inlet as parameters or return types. My current idea is to create a C wrapper for the C++ code (like so: https://stackoverflow.com/questions/32572505/labview-dll-import ), which I would then compile to a DLL and wrap again with LabVIEW (like so: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019Ls1SAE&l=de-DE ).

From quickly looking over, e.g., the Matlab interface, I noticed that those, to my understanding, fully rewrite the classes. I am not sure whether that is a better approach; it seems it would be harder to maintain when LSL is updated.

Before starting to try to implement the wrapper-wrapper, I wanted to ask for advice: is a LabVIEW interface already available or planned, or is pure C wrapper code (which would be useful for other language interfaces as well) already available?
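For reference, liblsl also ships a plain C API (declared in lsl_c.h) whose free functions take only opaque handles and primitive types, which is the kind of flat interface a LabVIEW "Call Library Function" node can typically import without an extra wrapper layer. A minimal sketch, with illustrative stream parameters:

// Sketch: drive the liblsl C API without touching any C++ classes.
#include <lsl_c.h>

int main() {
    // Describe a 4-channel float stream sampled at 100 Hz ("Demo" is an illustrative name).
    lsl_streaminfo info =
        lsl_create_streaminfo("Demo", "EEG", 4, 100.0, cft_float32, "demo-uid");
    lsl_outlet outlet = lsl_create_outlet(info, 0, 360);

    float sample[4] = {0.f, 0.f, 0.f, 0.f};
    lsl_push_sample_f(outlet, sample);  // one sample, timestamped automatically

    lsl_destroy_outlet(outlet);  // handle cleanup kept minimal in this sketch
    return 0;
}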

Thank you very much for your time!

libLSL for WASM

Hi. Looking to open a dialog - Has libLSL on WASM been discussed?

Having done a little research to see whether we could bring LSL into our web technologies project ( www.biosignal.network ), to my untrained eye, the answer is yes. The big question might have been as to whether the emscripten transpiler supports threading and it seems that for the past year or so it does (including boost.threads), and Chrome and FF support (on desktop) the pre-requisite shared memory buffer standard.

I am aware that there are one or two node implementations, however IMO a pure browser approach gives a clean solution compared to, say, the Electron route. (We looked into Electron but have rejected it)

I would see our own evolution as starting with the LSL Push Interface, evolving to Push & Pull.

Appreciate any thoughts on this.

Format of timestamp in created_at field

I was wondering which format is used for the timestamp stored in the created_at field of a stream header - I've checked some files, and they contain floats that I cannot convert to Unix timestamps (e.g. 1717048.5769142229 or 29519.438182786, which can't be seconds since 1970-1-1).

I assume these LSL timestamps are seconds since the computer was turned on - is this correct? If yes, obviously I cannot use this to infer the exact datetime when the recording was started. In this case, is there another field in an XDF file that stores the creation time of the recording?
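For context: LSL timestamps, including created_at, come from the library's local clock (lsl_local_clock() / lsl::local_clock()), which counts seconds from an arbitrary local reference — on many systems roughly the time since boot — rather than from the Unix epoch. One workaround is to record the offset between the LSL clock and wall-clock time yourself when a recording starts; a minimal sketch (whether your recording tool already stores something equivalent depends on the tool):

// Sketch: capture the LSL-clock-to-Unix-time offset at the start of a session.
#include <chrono>
#include <iostream>
#include <lsl_cpp.h>

int main() {
    double lsl_now = lsl::local_clock();  // seconds since an arbitrary local origin
    double unix_now = std::chrono::duration<double>(
        std::chrono::system_clock::now().time_since_epoch()).count();

    // Store this alongside the recording; later: datetime = unix_offset + lsl_timestamp
    double unix_offset = unix_now - lsl_now;
    std::cout << "unix_offset = " << unix_offset << " s\n";
    return 0;
}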

Output in LSL

Apart from Matlab, how is it possible to stream output data to LSL?

OverflowError when HD-EEG data streamed to SigVisualizer

The LSL SigVisualizer repeatedly crashes when it receives a streamed EEG inlet from a 256-channel EGI amplifier. The program starts properly and visualizes a very short part of the initial EEG signals from the 256 channels, then crashes.

Traceback (most recent call last):
  File "/Users/User/Downloads/App-SigVisualizer-64fe663e88246c14fbd60b14eff84c228f5a85a6/paintwidget.py", line 216, in paintEvent
    -self.dataBuffer[0][ch_idx] + chan_offset)
OverflowError: argument 4 overflowed: value must be in the range -2147483648 to 2147483647
Abort trap: 6

Unhelpful error message when building without checked out submodules

Hello, I have two questions.

I tried building LSL according to BUILD.md and ran into an error when attempting to configure. Running

cd build
cmake-gui

led to the following in CMake:

Selecting Windows SDK version 10.0.17763.0 to target Windows 10.0.17134.
The CXX compiler identification is MSVC 19.16.27025.1
Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x86/cl.exe
Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/Hostx86/x86/cl.exe -- works
Detecting CXX compiler ABI info
Detecting CXX compiler ABI info - done
Detecting CXX compile features
Detecting CXX compile features - done

CMake Error at CMakeLists.txt:7 (add_subdirectory):
The source directory
C:/Users/RiftKun/Repos/labstreaminglayer-master/LSL/liblsl
does not contain a CMakeLists.txt file.

CMake Error at CMakeLists.txt:15 (include):
include could not find load file:
LSLCMake

CMake Error at CMakeLists.txt:22 (LSLGenerateCPackConfig):
Unknown CMake command "LSLGenerateCPackConfig".
Configuring incomplete, errors occurred!
See also "C:/Users/RiftKun/Repos/labstreaminglayer-master/build/CMakeFiles/CMakeOutput.log".

I also tried MSVC 2017 with CMake and received the following error:

1> Command line: c:\program files (x86)\microsoft visual studio\2017\community\common7\ide\commonextensions\microsoft\cmake\CMake\bin\cmake.exe -G "Ninja" -DCMAKE_INSTALL_PREFIX:PATH="C:\Users\RiftKun\CMakeBuilds\f2af7fce-6c7b-963a-85da-4f3686ef0529\install\x64-Debug (default)" -DCMAKE_CXX_COMPILER="C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/HostX64/x64/cl.exe" -DCMAKE_C_COMPILER="C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/HostX64/x64/cl.exe" -DCMAKE_BUILD_TYPE="Debug" -DCMAKE_MAKE_PROGRAM="c:\program files (x86)\microsoft visual studio\2017\community\common7\ide\commonextensions\microsoft\cmake\Ninja\ninja.exe" "C:\Users\RiftKun\Repos\labstreaminglayer-master"
1> Working directory: C:\Users\RiftKun\CMakeBuilds\f2af7fce-6c7b-963a-85da-4f3686ef0529\build\x64-Debug (default)
1> -- Configuring incomplete, errors occurred!
1> See also "C:/Users/RiftKun/CMakeBuilds/f2af7fce-6c7b-963a-85da-4f3686ef0529/build/x64-Debug (default)/CMakeFiles/CMakeOutput.log".
1> CMake Error at C:\Users\RiftKun\Repos\labstreaminglayer-master\CMakeLists.txt:7 (add_subdirectory):
1> The source directory
1>
1> C:/Users/RiftKun/Repos/labstreaminglayer-master/LSL/liblsl
1>
1> does not contain a CMakeLists.txt file.
1>
1>
1> CMake Error at C:\Users\RiftKun\Repos\labstreaminglayer-master\CMakeLists.txt:15 (include):
1> include could not find load file:
1>
1> LSLCMake
1>
1>
1> CMake Error at C:\Users\RiftKun\Repos\labstreaminglayer-master\CMakeLists.txt:22 (LSLGenerateCPackConfig):
1> Unknown CMake command "LSLGenerateCPackConfig".
1>
1>
1> c:\program files (x86)\microsoft visual studio\2017\community\common7\ide\commonextensions\microsoft\cmake\CMake\bin\cmake.exe -G "Ninja" -DCMAKE_INSTALL_PREFIX:PATH="C:\Users\RiftKun\CMakeBuilds\f2af7fce-6c7b-963a-85da-4f3686ef0529\install\x64-Debug (default)" -DCMAKE_CXX_COMPILER="C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/HostX64/x64/cl.exe" -DCMAKE_C_COMPILER="C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.16.27023/bin/HostX64/x64/cl.exe" -DCMAKE_BUILD_TYPE="Debug" -DCMAKE_MAKE_PROGRAM="c:\program files (x86)\microsoft visual studio\2017\community\common7\ide\commonextensions\microsoft\cmake\Ninja\ninja.exe" "C:\Users\RiftKun\Repos\labstreaminglayer-master" returned with exit code: 1
CMake Error at CMakeLists.txt:7 (add_subdirectory):
The source directory
C:/Users/RiftKun/Repos/labstreaminglayer-master/LSL/liblsl
does not contain a CMakeLists.txt file.
CMake Error at CMakeLists.txt:15 (include):
include could not find load file:
LSLCMake
CMake Error at CMakeLists.txt:22 (LSLGenerateCPackConfig):
Unknown CMake command "LSLGenerateCPackConfig".

I have both Qt5.12.2 and boost 1.70.0 installed with paths added as "Qt5_DIR" and "DBOOST_ROOT" respectively. Am I missing a step after cloning the LSL repo and creating the build folder, causing the missing CMakeLists.txt error?

Background:
My main purpose with LSL is for recording synchronized eye movements and EEG. Following the tutorials was fine except using vis_stream demonstrated that the MATLAB Viewer from the ftp was out-of-date. I tried using the prebuilt App-MATLABViewer from Git but was missing the liblsl64.dll from liblsl-MATLAB - hence why I tried building LSL.

Intended use cases of channel_format and functions like lsl_push_sample_*

Hello.

I do not understand the intended use of specifying a channel_format while creating the streaminfo for an outlet, together with the function groups for different types like lsl_push_sample_*.

There is the channel_format enum

typedef enum {
    cft_float32 = 1,    /* For up to 24-bit precision measurements in the appropriate physical unit */
                        /* (e.g., microvolts). Integers from -16777216 to 16777216 are represented accurately. */
                        /* LabVIEW: 4-byte single */
                        /* lsl_push_sample_f */
    cft_double64 = 2,   /* For universal numeric data as long as permitted by network & disk budget. */
                        /* The largest representable integer is 53-bit. */
                        /* LabVIEW: 8-byte double */
                        /* lsl_push_sample_d */
    cft_string = 3,     /* For variable-length ASCII strings or data blobs, such as video frames, */
                        /* complex event descriptions, etc. */
    cft_int32 = 4,      /* For high-rate digitized formats that require 32-bit precision. Depends critically on */
                        /* meta-data to represent meaningful units. Useful for application event codes or other coded data. */
                        /* LabVIEW: I32 */
                        /* lsl_push_sample_i */
    cft_int16 = 5,      /* For very high rate signals (40 kHz+) or consumer-grade audio */
                        /* (for professional audio float is recommended). */
                        /* LabVIEW: I16 */
                        /* lsl_push_sample_s */
    cft_int8 = 6,       /* For binary signals or other coded data. */
                        /* Not recommended for encoding string data. */
                        /* LabVIEW: I8 */
                        /* lsl_push_sample_??? */
    cft_int64 = 7,      /* For now only for future compatibility. Support for this type is not yet exposed in all languages. */
                        /* Also, some builds of liblsl will not be able to send or receive data of this type. */
                        /* LabVIEW: I64 */
                        /* lsl_push_sample_v ? */
    cft_undefined = 0   /* Cannot be transmitted. */
} lsl_channel_format_t;

to define which data type the outlet uses. Then there are the individual functions to push that type of data into the outlet:

lsl_push_sample_f
lsl_push_sample_d
lsl_push_sample_s
lsl_push_sample_i
lsl_push_sample_l
lsl_push_sample_c
lsl_push_sample_str
lsl_push_sample_buf
lsl_push_sample_v

  1. I don't understand why there is not just a single lsl_push_sample that maps its input data to the type defined by the channel_format of the corresponding streaminfo. This seems redundant to me. It could be intended that I can, e.g., define the channel format as cft_int16 but push uint16 data to perform a type cast. I doubt that this is the intended use case, though, as it would be easier and cleaner to just cast uint16 to int16 directly before putting the data into the outlet.

  2. I don't understand which channel_format is to be used with which lsl_push_sample function. There are 7+ channel formats but 9 different lsl_push_sample functions. Here is a table of the mapping as I think it is correct:

lsl_push_sample_f; cft_float32
lsl_push_sample_d; cft_double64
lsl_push_sample_s; cft_int16
lsl_push_sample_i; cft_int32
lsl_push_sample_l; cft_int64
lsl_push_sample_c; unknown
lsl_push_sample_str; cft_string
lsl_push_sample_buf; unknown
lsl_push_sample_v; unknown
unknown; cft_int8

  3. I don't understand how I would transfer the basic types uint8, uint16, uint32, and uint64 (and int8). There seems to be no channel format and no function for the unsigned integers, and no function (apart from maybe lsl_push_sample_c) for the int8 type. I could just use signed int and assume that the receiver knows to cast it back to unsigned int after pulling the data from the inlet, but that would not be very clean.

This should probably be one of the main chapters in the developers' guide, but I couldn't find it.
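For reference, the usual pattern is that the suffix of the push function matches the element type of the declared channel format; a minimal sketch pairing cft_int16 with lsl_push_sample_s (stream name and parameters are illustrative):

// Sketch: the declared channel format and the push function's element type agree.
#include <cstdint>
#include <lsl_c.h>

int main() {
    // 8 channels of int16 data at 1000 Hz ("Digital" is an illustrative name).
    lsl_streaminfo info =
        lsl_create_streaminfo("Digital", "Misc", 8, 1000.0, cft_int16, "digital-uid");
    lsl_outlet outlet = lsl_create_outlet(info, 0, 360);

    int16_t sample[8] = {0};
    lsl_push_sample_s(outlet, sample);  // the _s variant pushes int16 samples

    lsl_destroy_outlet(outlet);  // handle cleanup kept minimal in this sketch
    return 0;
}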

Thank you very much for your time!

lsl_resolve_all doesn't resolve multiple streams via the Matlab interface

I already created an issue in the Matlab interface section, but I am no longer sure it's a Matlab problem: labstreaminglayer/liblsl-Matlab#4

When trying to resolve multiple streams from Matlab using

lib = lsl_loadlib();
streaminfos= lsl_resolve_all(lib,2)

streaminfos always just contains a single pointer to a stream, although there are multiple streams visible from LabRecorder or my LabVIEW inlet implementation.

Digging deeper into the Matlab interface and using

lib = lsl_loadlib();
tmp_results= lsl_resolve_all_(lib,2)

as well as modifying and recompiling (re-mexing) LSL/liblsl-Matlab/mex/lsl_loadlib_.c, it seems to me that lsl_loadlib_.c is fine the way it is. Initially I thought the problem was related to the type of "buffer". I verified that only the first element of buffer is ever written; in that sense the value of 1 for return in lsl_loadlib_.c is correct. I messed around with the type of buffer without success.

I then tried to use lsl_resolve_byprop.m instead, to work around the issue that "all" just returns a single element. The intention was that if the Matlab interface only returns a single pointer to a streaminfo, I would just ask for a single, uniquely named one. However, if lsl_resolve_all.m returns, e.g., "My_Outlet_5" out of 10 different names, lsl_resolve_byprop does not return anything unless I ask specifically for "My_Outlet_5" - the one and only outlet being detected by lsl_resolve_all.m. This makes me believe that the problem is actually being caused somewhere in LSL/liblsl/src/lsl_freefuncs_c.cpp or below. I didn't dig deeper. After ~10 h I'm out of ideas; help would be appreciated.

Add support for variable-length typed data

We are using LSL to send variable-length data (in particular a point cloud). Unless we are mistaken, the current interface leaves copying the data into a std::string as the only option to do this (or using a fixed size and padding the data, which is very inflexible). This is not a very good interface, as it violates the ordinary semantics of a std::string (and it also leads to issues when trying to receive the data in Python).
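For context, a minimal sketch of the std::string workaround described above, sending a variable-length float buffer over a single string-formatted channel with the C++ API (the stream name and byte-packing scheme are illustrative; the receiver has to reinterpret the bytes itself):

// Sketch: pack a variable-length float point cloud into a string channel.
#include <string>
#include <vector>
#include <lsl_cpp.h>

int main() {
    lsl::stream_info info("PointCloud", "PointCloud", 1,
                          lsl::IRREGULAR_RATE, lsl::cf_string, "pointcloud-uid");
    lsl::stream_outlet outlet(info);

    std::vector<float> points = {0.1f, 0.2f, 0.3f, 1.1f, 1.2f, 1.3f};  // x,y,z triplets

    // Copy the raw bytes into a string "blob"; the receiver must decode them again.
    std::string blob(reinterpret_cast<const char*>(points.data()),
                     points.size() * sizeof(float));
    std::vector<std::string> sample{blob};
    outlet.push_sample(sample);
    return 0;
}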

Since the system can already handle sending variable size string buffers, it would appear to be a relatively small interface change to add support for variable size float, etc.

Building on Mac OS 10.14.6 with macports

Just an FYI: to get this to build on macOS 10.14.6 with MacPorts, I followed the build instructions for the Mac but ran into errors like

CMake Warning (dev) in Apps/Examples/CMakeLists.txt:
  AUTOGEN: No valid Qt version found for target ReceiveDataC.  AUTOMOC,
  AUTOUIC and AUTORCC disabled.  Consider adding:

    find_package(Qt<QTVERSION> COMPONENTS Widgets)

  to your CMakeLists.txt file.

To fix these issues I added find_package(Qt5 COMPONENTS Widgets) at Line 11 in labstreaminglayer/CMakeLists.txt. The project then appears to build correctly.

Documentation

Track the documentation progress:

  • Migrate the wiki pages to README files in the respective folders
  • Remove outdated documentation
  • Replace links to the old repository / Google Code
  • Create a "Getting Started"-page for developers
  • Create a "Getting Started"-page for users

Very specific LSL sync issues

Hi LSL team,
I have a specific issue that I'd like help with; it's not really a bug, so apologies for raising an issue here.

I recorded 20 subjects' worth of EEG data while they watched videos.
There were two LSL streams: one for video start markers, and one for EEG recorded with a Muse headset, with the help of bluemuse to create the LSL stream.
The issue is that bluemuse (and the Muse stream) was started on one computer, and the marker stream was started on another. Usually this would not be an issue, and LSL would automatically sync these, but bluemuse writes timestamps as time since epoch, and not as time since local computer startup. This means that the xdf file which stores both streams (recorded with "lab recorder" from the Neuropype software suite) cannot be adjusted on load, because all the internal LSL syncing information is based on time since local machine startup and does not correspond with the actual timestamps (in time since epoch).
I've tried to dig into the internals of the LSL stream object and see if firsttimestamp, lasttimestamp and created_at could help, but unfortunately, I cannot see a way to sync.
Can you help me sync these two streams?
If there is a timestamp written when an lsl stream starts to be written to file, then this would help sync things.

load_xdf() suddenly not properly working anymore

Dear colleagues,
we have been using LSL for a while and it's a really awesome tool! Many thanks for that! For a few days now, however, we have been encountering a really strange problem on all of our lab computers: when trying to load some arbitrary .xdf file into Matlab (we use Matlab R2018b) with load_xdf(), the following warning always pops up:

Warning: Escaped character '\U' is not valid. See 'doc sprintf' for supported special characters.
In load_xdf (line 327)
Warning: Invalid MEX-file 'C:
In load_xdf (line 327)

As a consequence, only the metadata of the .xdf file are read in correctly, while no time_series and time_stamps are read in.

The strange thing is that we are not aware of any changes made to our MATLAB viewer folder.
Since we urgently need the load_xdf() function for all of our experiments, we would be very thankful for any advice on how to solve this problem.

matlab - pull_chunk for eeg - timestamps not equidistant

Hey everybody,
I'm working with EEG data that I want to process while reading it. Therefore I use the pull_chunk function to read the data chunk by chunk. My problem is that the difference between the timestamps is not constant. To break my problem down to this issue, I used the example code "ReceiveDataInChunks.m" provided together with liblsl-Matlab.

% instantiate the library
disp('Loading the library...');
lib = lsl_loadlib();

% resolve a stream...
disp('Resolving an EEG stream...');
result = {};
while isempty(result)
    result = lsl_resolve_byprop(lib,'type','EEG');
end

% create a new inlet
disp('Opening an inlet...');
inlet = lsl_inlet(result{1});
time = [];
ende = 0;
disp('Now receiving chunked data...');
while true
    % get chunk from the inlet
    [chunk,stamps] = inlet.pull_chunk();
    for s=1:length(stamps)
        time(ende+s) = stamps(s);
    end
    ende = ende + length(stamps);
    pause(0.05);
end

I let it run for some time and then I plot

plot(diff(time))

[Plot of diff(time) attached.]

So normally that should be a straight line at y = 0.0004883 (~1/2048 Hz).
I stream the EEG data via a BioSemi device, if this is of interest.
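For reference: pull_chunk returns whatever samples have arrived since the last call, and the raw timestamps reflect how the sender chunked and stamped the data, so some jitter between consecutive stamps is expected. liblsl can smooth this on the inlet side; a minimal C++ sketch, assuming a liblsl version that exposes set_postprocessing (the Matlab binding has gained a corresponding set_postprocessing as well):

// Sketch: ask liblsl to dejitter incoming timestamps on the inlet side.
#include <vector>
#include <lsl_cpp.h>

int main() {
    std::vector<lsl::stream_info> results = lsl::resolve_stream("type", "EEG");
    lsl::stream_inlet inlet(results[0]);

    // Smooth received timestamps (and map them to the local clock) before use.
    inlet.set_postprocessing(lsl::post_clocksync | lsl::post_dejitter);

    std::vector<float> sample;
    double ts = inlet.pull_sample(sample);  // timestamps now arrive dejittered
    (void)ts;
    return 0;
}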
Does someone know why I get these discontinuities and how I can solve them?
Thanks a lot in advance
Marvin

LSL and Simulink

Hey,

do you know a way to use LSL in a Simulink model, or do you have ideas on how to do this?

I only found a thing called SimBSI, but it's unstable and crashes all the time.

Data Partition Problem

I want to send 14-channel EEG data in chunks of 256 samples from C# (for collecting Emotiv raw data) to my Python program.

C# Code

class Program
{
    public static liblsl.StreamInfo info = new liblsl.StreamInfo("Emotiv", "EEG", 14, 256,
        liblsl.channel_format_t.cf_double64, "epoc+");
    public static liblsl.StreamOutlet outlet = new liblsl.StreamOutlet(info);
    private static int sample = 0;
    // ...

    private static void WriteDataToFile(ArrayList data)
    {
        double[] temp = (double[])data.GetRange(3, 14).ToArray(typeof(double));
        eegData256Sample[sample, 0] = temp[0];
        eegData256Sample[sample, 1] = temp[1];
        eegData256Sample[sample, 2] = temp[2];
        eegData256Sample[sample, 3] = temp[3];
        eegData256Sample[sample, 4] = temp[4];
        eegData256Sample[sample, 5] = temp[5];
        eegData256Sample[sample, 6] = temp[6];
        eegData256Sample[sample, 7] = temp[7];
        eegData256Sample[sample, 8] = temp[8];
        eegData256Sample[sample, 9] = temp[9];
        eegData256Sample[sample, 10] = temp[10];
        eegData256Sample[sample, 11] = temp[11];
        eegData256Sample[sample, 12] = temp[12];
        eegData256Sample[sample, 13] = temp[13];

        sample++;
        if (sample == 256)
        {
            outlet.push_chunk(eegData256Sample);
            sample = 0;
            Array.Clear(eegData256Sample, 0, eegData256Sample.Length);
        }
    }
}

Python Code:

while True:
    chunk, timestamps = inlet.pull_chunk()
    if timestamps:
        if len(timestamps)==256:
 .......

However, the data collected in the Python code is mostly divided into two parts like (241, 14) and (15, 14). How can I get chunk data with the shape 256x14?
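For context: pull_chunk hands back whatever samples have arrived since the last call, so the 256-sample blocks pushed by the sender are not guaranteed to arrive as single chunks; the receiver typically reassembles fixed-size blocks itself. A minimal C++ sketch of that accumulation pattern (the same idea applies to pylsl):

// Sketch: reassemble fixed-size 256x14 blocks from an LSL inlet,
// regardless of how the data was chunked in transit.
#include <cstddef>
#include <vector>
#include <lsl_cpp.h>

int main() {
    std::vector<lsl::stream_info> results = lsl::resolve_stream("type", "EEG");
    lsl::stream_inlet inlet(results[0]);

    const std::size_t block_size = 256;
    std::vector<std::vector<float>> block;  // accumulated samples, each with 14 channels

    while (true) {
        std::vector<float> sample;
        inlet.pull_sample(sample);  // blocking pull of one multi-channel sample
        block.push_back(sample);
        if (block.size() == block_size) {
            // process the complete 256x14 block here ...
            block.clear();
        }
    }
}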

Empty folders?

For some reason, all the folders are showing up empty.
labstreaminglayer-master---->LSL------>liblsl
and the liblsl folder has nothing in it. I turned off Windows firewall for private networks and still nothing?

Any clue as to what's going on?

Second international LSL Workshop Wish List

I will just post the list here, maybe single issues can be transferred to their own thread if the developers/the community starts working on them.

  • Android LabRecorder
  • Guinea Pigs: People that try out new things. Preferably those people are collected in a list with hardware specifications
  • programmatic app start
  • Logo
  • include current time and date in the xdf header
  • documentation of all available config file options (lsl_api.cfg)
  • blocking pull_chunk(N) which always pulls N samples for MATLAB
  • ASIO support or VST plugin
  • Interface with ROS

Some structural refactoring

At IEEE SMC in Bari, there was a meeting including Christian, Tim, David, and myself. One of the action items that came out of the meeting was a set of changes to improve upon small points of dissatisfaction with the current organization of the repos.

  • The language-specific example code should sit next to the LSL source code for that language. In other words, the C/C++ examples should be moved into a folder within the liblsl repository.
  • There are a mix of build instructions. The proposed solution is to put the build instructions only in a BUILD.md doc and remove any other build instructions, including the Wiki. Language-specific build instructions (java, Matlab) should sit with the language binding code.
  • There is some redundancy between the main README.md and the Wiki. Any information that needs to be front and centre should go into the README.md and be removed from the Wiki.
  • The Wiki landing page (HOME) should be nothing more than a table of contents.
  • Having both a within-tree and out-of-tree build system is confusing and prone to break. Out-of-tree is necessary so we will abandon the within-tree build system.*

There are some consequences to abandoning the within-tree build system:

  • Each app will have to have its own copy of the cmake code required to find liblsl.
  • The default way to find liblsl should be to download the latest version of the compiled library from the GitHub release pages. The second way would be to search in known folders.
  • For liblsl developer friendliness, the test applications can either move within liblsl, or if they stay separate then its cmake file can default to finding liblsl locally.

unable to link BrainAmpSeries

I'm assuming the issue is with the drivers because the device is turned on and plugged in. How do I make sure that the driver is installed correctly?

But this is the error message I get when I try to link after opening the BrainAmpSeries Connector:
XXXCould not initialize BrainAmpSeries interface: Could not open USB device. Please make sure that the device is plugged in, turned on, and that the driver is installed correctly. (driver message: could not retrieve error message because the device is closed)XXX

Use LSL with BrainMaster Discovery 24E which is not listed in Supported Devices Section

Hello,
This is Mubin from Bangladesh. I use BrainMaster Discovery 24E device to collect EEG data on my computer. I want to use LSL with my device. But, I didn't find my device listed on the supported devices section here: https://labstreaminglayer.readthedocs.io/info/supported_devices.html

I also didn't find any support email/contact from which I can get any support to connect my device with LSL.

This is the link for BrainMaster Discovery 24E: https://www.brainmaster.com/product/discovery-24e

Thank you.

Labstreaminglayer documentation for the InteraXon Muse incomplete

Issue # 30 was about the fact that Interaxon, the manufacturer of the Muse headbands did not offer the download of the the Muse SDK anymore.

For the Labstreaminglayer Example use with Matlab for InteraXon Muse:

https://labstreaminglayer.readthedocs.io/info/matlab_example_with_muse.html

the InteraXon Muse SDK is necessary. The InteraXon Muse SDK is now available again, and the Issue # 30 was closed.

This issue is about the fact that the Labstreaminglayer documentation for the InteraXon Muse is incomplete:

https://labstreaminglayer.readthedocs.io/info/supported_devices.html

The reader of the Labstreaminglayer documentation of the InteraXon Muse concludes that Labstreaminglayer support is there for all InteraXon Muse headband models for everyone.

This is not true.

  1. The InteraXon Muse SDK is only available for the following persons:

“If you are hosting a hackathon, an academic or institutional research project, or working on an educational application for neuroscience or neurotech, we will continue to offer support for your projects to the best of our ability (including the legacy SDK for Muse 2016). Please reach out to our R&D team at [email protected]. “

https://choosemuse.com/development/

  2. The Labstreaminglayer Example use with Matlab for the InteraXon Muse only works with the model

Muse MU-01 - Muse - Released 2014 - Bluetooth.

In step 2 of the Labstreaminglayer Example use with Matlab for InteraXon Muse it says:

  1. Pair your computer with your Muse headset and connect to your Muse using muse-io (replace Muse-XXXX with the name of your Muse device as it shows in the Bluetooth settings) .

According to James Clutterbuck, the developer of the Mind Monitor android application, bluetooth pairing of a computer with the Muse headband only works with:

MU-01 - Muse - Released 2014 - Bluetooth. (the first Muse headband releases in 2014)

James Clutterbuck, developer of the Mind Monitor:

«The Muse 2 is almost impossible to pair on PC. I've tried many different Bluetooth USB devices and none of them work. It was similar with the 2016 Muse. Only the 2014 Muse is easy to pair on PC. This is why I only made Muse Monitor for Android/iOS. «

https://musemonitor.com/forums/viewtopic.php?f=15&t=749

Since the Labstreaminglayer Example use with Matlab for InteraXon Muse needs to pair the computer over bluetooth, the example only works with:

MU-01 - Muse - Released 2014 - Bluetooth.

The example does not work with:

MU-02 - Muse - Released 2016 - Bluetooth LE.
MU-03 - Muse 2 - Released 2018 - Bluetooth LE.

some issues using cmake with visual studio for lsl app development

Just thought I would report my experience in case anyone has time to deal with these little issues or experiences similar roadblocks.

I just had to make a few updates to the BrainVisionRDA and I experienced some non-ideal behavior from CMake. Everything worked pretty well in the end, but I had to do a few things by hand because CMake got it wrong.

First of all, I added cmake support for BrainVisionRDA. The behavior when I do an in-tree build is now identical to that of LiveAmp, LabRecorder, etc. But some of this might be due to dumb things that I did in my CMakeLists files.

First off, I have to do a little bit of tweaking to the 'input' field for the linker. When I specify 64-bit and use <path_to_qt5>\msvc2015_64 as my Qt5_dir, for example, the .lib files it points to are given as <path_to_qt5>\msvc2015, which are obviously libraries with the wrong architecture.

I also don't know how to set the path to the desired lsl directory. I installed lsl in C:\lsl, but the project has some ../../blah_blah path to liblsl which might sometimes be there, but isn't at the moment.

These are minor inconveniences. The more disturbing thing is that after making some changes to the .ui file, I couldn't get the project to regenerate the machine generated qt files. I even tried erasing labstreaminglayer/build and regenerated the whole project, but it still didn't reflect the changes I made to the GUI. In the end I simply called moc and uic manually and added the generated files to my project.

first_timestamp is relative to what windows variable?

Hi LSL,
I am hoping you can help me with a rather specific issue.
An LSL stream, when started, writes a first_timestamp to the stream. This is in seconds since the computer was turned on. I need to sync up another non-LSL device. What Windows variable does LSL read (or what library/method does it call) to get the time since startup?
Thanks!

LSL and Tobii TX300 interactions

I've been trying to get my Tobii TX300 to sync with LSL using the Tobii app and have run into a problem with the app hanging after I attempt to initialise the link.

I've tried running the app in different configurations (with the Tobii disconnected, with the Tobii connected but incorrect server information provided etc.) and the app reports the error and then continues to function as expected. Adding a link to the calibration file provided by Tobii then seems to allow the linking process to start, but the connection is never made and I have to force quit the app.

I've been able to use all the other apps I've tried (Keyboard, Mouse, Sound recording, Matlab, etc.) without issue with my current setup, and haven't had any difficulty syncing data streams between different computers.

As I can't open and view the Tobii calibration files I suspect they're written in some proprietary format (the TX300 doesn't have any issues reading the calibration file I had output using the SDK), but I'm not sure of that. Tracing how the calibration file is passed through the app I've been stopped at the call from Tet to the dll and so can't check whether there might be some disconnect between what the calibration file is providing and what the LSL app expects, or if there is some other problem occurring.

Is this an issue that has arisen before? I haven't been able to find any mention of people using or having difficulty with Tobii products and LSL in the reported issues, but obviously it's difficult for me to tell if that's because the problem is unique to my setup or if the app is, as it says in the wiki, still untested with the hardware.

Too many devices for the stream?

Hello,

Currently I am working on a project in which we are hooking up a lot of different devices to the lab streaming layer: respiratory belt, EKG, EEG (with 64 channels), eye tracking, perhaps GSR. We were concerned that, given the number of inputs in the stream, it could potentially create a noticeable lag in the receiving machine getting all the signals within a time limit (in seconds). Is this something that we might expect given the number of inputs? Could the potential lag create issues with the accuracy of the timestamps?

Cannot download the whole folder

Hi,

I tried to download the whole project here, but when I opened the Apps folder there was a bunch of folders with no contents in them. Could you help me with how to clone the subfolders as well?

Does the program also work if I clone only a selected folder, e.g., the Audio folder?

Thanks

liblsl-Matlab is missing

So I, er, uh, seem to have caused this.

I implemented set_postprocessing in the Matlab API, then added, committed, and pushed to origin within my local pull of the submodule. Then I backed up into labstreaminglayer and (following the instructions) did

add -A
commit -m "message"
push origin master

Now there is exactly the problem described here:
https://stackoverflow.com/questions/41499714/how-to-fix-git-submodule-resulting-in-404-not-found

I am not quite sure how to fix this and I don't want to have to undo a push to the master repo, so I don't want to screw this up. Indeed the commit I made is not in the actual submodule (https://github.com/labstreaminglayer/liblsl-Matlab) but I don't know how to get it there.

Any advice is much appreciated.

Wiki links for example code broken

I just noticed that most of the links in https://github.com/sccn/labstreaminglayer/wiki/ExampleCode are broken due to moving the repo. As they are now submodules, the links within /sccn/labstreaminglayer use the commit hash. It is probably best to link to the respective repos in /labstreaminglayer. I started to update ExampleCode.md, but do not know how to fix the wiki without write permissions. Please note that I could not find the liblsl-C or liblsl-Cpp repos in /labstreaminglayer, so I skipped them.

Better option handling in Apps/CMakeLists.txt

Right now, there are three parts to registering an App for an in-tree-build in Apps/CMakeLists.txt:

option(LSLAPPS_AppName "Build AppName app" OFF) # (1)
if(LSLAPPS_AppName) # (2)
    add_subdirectory(AppName) # (3)
endif()

But, since an App can exist as a subrepository directory that's not checked out, we'd need to add a check for that:

if(EXISTS AppName/CMakeLists.txt) # (4)
    add_subdirectory(AppName)
else()
    message(WARNING "AppName not checked out") # (5)
endif()

So, all in all eight lines per app with the same app name repeated five times. Registering an App should be easier, I just need to find out how.

Any cmake option for MINDO app (for Linux OS)?

I'm using a Linux OS and trying to connect a MINDO device with LabRecorder, but there is no cmake option provided for this App. How can I get around this?
Or is it not possible to use MINDO on the Linux platform at all?

LSL in MacOs

Is there a version of LSL already compiled for Mac?

Thanks

CMake MSVC Generator error - `Evaluation file to be written multiple times with different content.`

Dear all,
I am currently building LSL to synchronize a BrainVision Recorder ECG stream, a NIRStar stream, and a video stream. When building the Brain Amp app I encountered an error in CMake. These were the steps I took until I encountered the error:

  1. firstly, I cloned the repository in my LSL folder with git clone --recursive https://github.com/sccn/labstreaminglayer.git
  2. I created a "build" folder and moved into that directory
  3. I typed in "cmake-gui .." (Cmake 3.12.3)
  4. then I configured the whole thing with Brain Amp and LabRecorder, which resulted in this:

Git version information: HEAD/1.13.0-b3-20-g2a9dbeaa
Included LSL CMake helpers, rev. 9
Found target lsl in current build tree
Included LSL CMake helpers, rev. 9
Found target lsl in current build tree
CMake Warning at Apps/CMakeLists.txt:28 (message):
LSLAPPS_BrainAmp was enabled, but its folder is either nonexistent or not
checked out

Little endian: 1
IEC559: TRUE
MENU appended: LabRecorder
MENU appended: BrainAmpSeries
Boost version: 1.65.1
Found the following Boost libraries:
system
thread
chrono
date_time
atomic
MENU appended: BrainVisionRDA
MENU: LabRecorder/LabRecorder.exe;LabRecorder;BrainAmpSeries/BrainAmpSeries.exe;BrainAmpSeries;BrainVisionRDA/BrainVisionRDA.exe;BrainVisionRDA
Setting packages name for BRAINAMPSERIES
File name: BrainAmpSeries-1.12.0-Win32
Setting packages name for BRAINVISIONRDA
File name: BrainVisionRDA-1.12.0-Win32
Setting packages name for LABRECORDER
File name: LabRecorder-1.12.0-Win32
Setting packages name for LIBLSL
File name: liblsl-1.12.0-Win32
Installing Components: BrainAmpSeries;BrainVisionRDA;LabRecorder;liblsl
Configuring done

When I clicked on Generate, I got this error:

  Evaluation file to be written multiple times with different content.  This
  is generally caused by the content evaluating the configuration type,
  language, or location of object files:

   C:/Users/User/Documents/labstreaminglayer/build/Apps/LabRecorder/LabRecorder_path


CMake Error in Apps/BrainAmpSeries/CMakeLists.txt:
  Evaluation file to be written multiple times with different content.  This
  is generally caused by the content evaluating the configuration type,
  language, or location of object files:

   C:/Users/User/Documents/labstreaminglayer/build/Apps/BrainAmpSeries/BrainAmpSeries_path


CMake Error in Apps/BrainAmpSeries/CMakeLists.txt:
  Evaluation file to be written multiple times with different content.  This
  is generally caused by the content evaluating the configuration type,
  language, or location of object files:

   C:/Users/User/Documents/labstreaminglayer/build/Apps/BrainAmpSeries/BrainAmpSeries_path


CMake Error in Apps/BrainProducts/BrainVisionRDA/CMakeLists.txt:
  Evaluation file to be written multiple times with different content.  This
  is generally caused by the content evaluating the configuration type,
  language, or location of object files:

   C:/Users/User/Documents/labstreaminglayer/build/Apps/BrainProducts/BrainVisionRDA/BrainVisionRDA_path


CMake Error in Apps/BrainProducts/BrainVisionRDA/CMakeLists.txt:
  Evaluation file to be written multiple times with different content.  This
  is generally caused by the content evaluating the configuration type,
  language, or location of object files:

   C:/Users/User/Documents/labstreaminglayer/build/Apps/BrainProducts/BrainVisionRDA/BrainVisionRDA_path


Generating done

Can I do something different to resolve this issue?

Thanks so much in advance!

The worst case happened: Interaxon no longer offers the Muse SDK!!!

I contacted the Muse support to get a copy of the Muse SDK. No luck!

Alternatively, it is possible to stream Muse data to Matlab with Muse Monitor's built-in OSC streaming.

http://www.musemonitor.com/forums/viewtopic.php?f=15&t=749&sid=1f538175b41101875e0f88024e281950

but we first have to write a Matlab interface for it:

https://ch.mathworks.com/matlabcentral/answers/49153-receiving-osc-messages-in-matlab-64-bit

Here the answer of the Muse support:

Hi Peter,

Thank you for contacting Muse Customer Care and thank you for your enthusiasm about working with Muse.
We greatly value the important links you just sent us.

Unfortunately, as you might be aware, we are no longer offering the SDK. The Muse SDK and research tools have helped make Muse into the bestselling EEG system in the world. They have also required a volume of technical support that has been difficult for us to sustain. As we’ve grown, we’ve struggled to provide the kind of support for developers that we believe our community deserves. For this reason, we’ve had to make the difficult decision to stop actively supporting the Muse software development kit (SDK).

As a possibility, we do offer Muse Direct (iOS). We've developed Muse Direct to allow you to record, stream, and visualize the raw data from the Muse. You can connect your Muse to Muse Direct on an iOS device and then stream the data via OSC (using WiFi) to a computer or another device where you can process the data in whatever programming language you'd like.

You can view the data live and also export it to excel as a CSV file. It is important to note however, this raw data can be difficult to interpret without the assistance of multiple samples, an experienced neuroscientist, and written script (code) to analyze it. You may want to consider getting in touch with an experienced EEG Researcher for your project as the processing of this data is quite complex and the support we are able to offer for individuals is limited (why we have chosen to pull the Muse SDK).

We'll continue to work toward providing a more sustainable solution to help make Muse better than ever and to give our community the best possible tools. Until then, thank you sincerely for your patience and your interest in Muse.

Muse Direct Webpage - https://choosemuse.com/muse-direct/
Download for iOS - https://itunes.apple.com/app/muse-direct/id1326341722?mt=8

Thank you,
Gabby

Muse Customer Care
Interaxon | Brain Sensing Technologies
http://www.choosemuse.com
T: 1 888 508 MUSE (6873)

If you contact Muse Customer Care via phone, please reference Muse: the Brain Sensing Headband - Support Case # 00116105

ref:_00D41g4dE._5002Murf8L:ref

Sending data from one PC to another PC using LabRecorder and LSL4Unity

Dear Chad,

hi, my name is Akash and I am a student in Germany. Currently I am using your C-File for LSL4Unity for a thesis. But I have a problem: I want to stream a trigger from Unity to LabRecorder, but on another PC.

That means I want to send data from Unity on one PC to LabRecorder, which is running on another PC. The problem is that I can only receive data from Unity on the same PC on which Unity is running, so LabRecorder on the other PC does not show me the Unity data I want to receive.

I hope you really can help me out.

Best regards
Akash

Is it possible to start recording LabRecorder from a shortcut?

I think the title says it all but here goes:
I have written a config file to pre-load all the required streams (with argument -c config_filename.cfg) but I would also like LabRecorder to start recording as soon as it is called (from batch file).
Is this possible? What is the program argument?

Test for moving issues

Issue by tstenner
Friday Nov 02, 2018 at 17:34 GMT
Originally opened as https://github.com/sccn/labstreaminglayer/issues/358


As @cboulay found out, since yesterday admins can move issues from one repository to another (https://help.github.com/articles/transferring-an-issue-to-another-repository/), but apparently only within the same organization. Most of the target repositories are in the labstreaminglayer organization, so we'd need to check if moving an issue works and if it still works if the target repository itself is moved.
If that works, we can move this repository to the labstreaminglayer organization, move the issues, and in parallel create the new superrepo (named labstreaminglayer) in the sccn organization. Afterwards, we can move this repository back and archive it.
To check this, @dmedine could move this issue to a newly created sccn repository, then move that repository to the labstreaminglayer organization, and we'll see what happens.

How to properly stream band power values

Hello,

I am working on the implementation of a neurofeedback system. I am specifically focusing on alpha waves to enhance participants' meditation, and the OpenBCI GUI provides band powers for alpha, beta, theta, and gamma. I am thinking that I can simply connect to the alpha band here and stream those values.

Then I face a problem. I gave it a try and output the values as CSV, and the values seem quite random. It seems to me that the columns represent channels (I use a Cyton, so a total of 8) and the rows represent each band type. I don't understand what the values represent. I would appreciate any advice. I will also attach screenshots of my code and the CSV files. Thank you so much!

[Screenshots of the code and the CSV output attached.]

please update path to LSLCMake.cmake in documentation

Reporting just a minor problem: the build documentation states that the path to "LSLCMake.cmake" is:

build/install/LSL/cmake/

but it now seems to be

build/install/LSL/share/LSL/

If that's the new path, it would be helpful to have it consistent with the documentation.

Cheers!

Use of QCoreApplication interferes with outlet creation

I create an LSL outlet with the following code:

#include <QCoreApplication>
#include <lsl_cpp.h>
int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    lsl::stream_info _EEGInfo("my-outlet", "EEG");
    lsl::stream_outlet _EEGOutlet(_EEGInfo);
    return a.exec();
}

While running that code, I scan the available LSL outlets with this python code:

from pylsl import StreamInlet, resolve_stream
streams = resolve_stream('type','EEG')
for stream in streams:
    print( ' - ', stream.name() )

And I get this output:

Listing available outlets:
 -  (invalid: bad lexical cast: source type value could not be interpreted as target)
 -  closeloop01-EEG
2 outlets found

where the first (invalid) outlet corresponds to the one I just created. If there are other outlets in my network I can see them properly listed here.

Interestingly, the following slightly modified code does produce a valid outlet:

#include <QCoreApplication>
#include <lsl_cpp.h>
int main(int argc, char *argv[])
{
    lsl::stream_info _EEGInfo("my-outlet", "EEG");
    lsl::stream_outlet _EEGOutlet(_EEGInfo);
    QCoreApplication a(argc, argv);
    return a.exec();
}

This time the output is:

Listing available outlets:
 -  my-outlet
 -  closeloop01-EEG
2 outlets found

Do you guys have any idea on what could explain this behavior? I can't understand why declaring the QCoreApplication before creating the outlet makes it invalid.

In my real application, of course, I need to create the outlets after the QCoreApplication has been declared. So, this is a stopper for me.

I have also tested using the C-API instead of the C++ one, and obtained the same behavior.

My environment:

samuel@patient-X200CA:~/Documents$ uname -a
Linux patient-X200CA 4.15.0-20-generic #21-Ubuntu SMP Tue Apr 24 06:16:15 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
samuel@patient-X200CA:~/Documents$ g++ --version
g++ (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
samuel@patient-X200CA:~/dev/build-qt/out/bin$ ./qmake --version
QMake version 3.1
Using Qt version 5.9.2 in /home/samuel/dev/build-qt/out/lib

LSL was compiled from this version:

samuel@patient-X200CA:~/dev/labstreaminglayer/build/install/LSL$ git log --pretty -n 1
commit af01ef850b4b0fa3523cfa364376ff6fa197e299 (HEAD -> master, origin/master, origin/HEAD)
Author: Tristan Stenner <[email protected]>
Date:   Wed Jun 19 09:29:44 2019 +0200

    Update LSLCMake path in documentation. Thanks to @afkrause!

and with CMake version 3.10.2 (and the same g++ specified above).

Please, let me know if you need more information.

Any ideas are very welcome! :-)

LSL on Mac Catalina iOS and Matlab

Hi SCCN-

I have been using LSL successfully to stream data from a new type of high-density fNIRS device (www.gowerlabs.co.uk/lumo) to my Mac, but last week I updated the Mac OS to Catalina and Matlab to 2019b. The LSL scripts seem to run fine and do not yield any errors, but when I try to resolve a stream it finds nothing. I have checked with a Windows laptop and the stream is definitely visible, and nothing else has changed on my Mac.

I have given Matlab all the security permissions I can, and have made sure the OS will run apps downloaded from anywhere, but nothing is visible.

Are there any known issues with Catalina and LSL? Has anyone else seen this issue?

Compiled lsl for gtec nautilus for win64 64

Hi,
The gtec nautilus app exists in the code, but there is no compiled version in the ftp archive.
Any chance you could upload a compiled version for gtec's Nautilus system?
Many thanks
