
Create real-time BCIs with the LSL, PyTorch, scikit-learn and TensorFlow packages.

Home Page: https://pybci.readthedocs.io/en/latest/

License: MIT License


pybci's Introduction


pybci

A Python package for creating real-time Brain-Computer Interfaces (BCIs). Data synchronisation and pipelining are handled by the Lab Streaming Layer, and machine learning by PyTorch, scikit-learn or TensorFlow, leveraging packages like AntroPy, SciPy and NumPy for generic time- and/or frequency-based feature extraction; optionally, the user's own custom feature extraction class can be used instead.

The goal of PyBCI is to enable quick iteration when creating pipelines for testing human-machine and brain-computer interfaces, namely testing applied data-processing and feature-extraction techniques on custom machine learning models. Training the BCI requires LSL-enabled devices and an LSL marker stream for timing stimuli.

All the examples found on the GitHub repository that are not in a dedicated folder have a pseudo LSL data generator enabled by default (createPseudoDevice=True), so the examples can run without the need for LSL-capable hardware. Any generic LSL viewer can be used to view the generated data; example viewers are found on this link.

If samples have been collected previously and a model has already been made, the user can set clf, model, or torchModel to their scikit-learn, TensorFlow or PyTorch classifier and immediately call bci.TestMode().
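As an illustrative sketch (not taken from the PyBCI docs), a scikit-learn classifier trained offline on previously collected features can be serialised and reloaded before being handed to PyBCI; the PyBCI call itself is left as a comment since it requires live LSL streams:

```python
import pickle

from sklearn.datasets import make_classification
from sklearn.svm import SVC

# train offline on previously collected feature data (synthetic here)
X, y = make_classification(n_samples=100, n_features=5, random_state=0)
clf = SVC().fit(X, y)

# persist the fitted model, then reload it for the real-time session
blob = pickle.dumps(clf)
clf = pickle.loads(blob)

# hypothetical usage, mirroring the text above:
# bci = PyBCI(clf=clf, createPseudoDevice=True)
# bci.TestMode()
```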

Official paper here!

ReadTheDocs available here!

Examples found here!

Examples of supported LSL hardware here!

TODO:

  • Add an optional LSL outlet configuration for the class estimator (either send every classification or send only on classification change, to help reduce classification spam when the estimator period is very short)
  • Add an example showing the use of previously saved models.
  • Add example showing how feature data can be saved and used to build models so model creation can be done offline whilst data collection and classification can be done online.
  • Update and verify via AppVeyor for Python 3.12.

Installation

For stable releases use: pip install pybci-package

For development versions use: pip install git+https://github.com/LMBooth/pybci.git or

git clone https://github.com/LMBooth/pybci.git
cd pybci
pip install -e .

Optional: Virtual Environment

Alternatively, install and run in a virtual environment:

Windows:

python -m venv my_env
.\my_env\Scripts\Activate
pip install pybci-package  # For stable releases
# OR
pip install git+https://github.com/LMBooth/pybci.git  # For development version

Linux/macOS:

python3 -m venv my_env
source my_env/bin/activate
pip install pybci-package  # For stable releases
# OR
pip install git+https://github.com/LMBooth/pybci.git  # For development version

Prerequisite for Non-Windows Users

If you are not using Windows, there is a prerequisite stipulated by the pylsl repository: a liblsl shared library must be obtained. See the liblsl repository documentation for more information. Once the liblsl library has been installed, pip install pybci-package should work.

(The package is currently published as pybci-package because the name pybci is too similar to another package on PyPI; issue here.)

There have been issues raised with Linux successfully running all pytests and examples; a Dockerfile is included in the repository root outlining what should be a successful build on Ubuntu 22.04.

Dockerfile

There is an Ubuntu 22.04 setup in the Dockerfile at the root of the repository, which can be used in conjunction with Docker.

Once docker is installed call the following in the root directory:

sudo docker build -t pybci .
sudo docker run -it -p 4000:8080 pybci

Then either run the pybci CLI command or run pytest Tests to verify functionality.


Running Pytest Locally

After installing pybci and downloading and extracting the pybci git repository, navigate to the extracted location and run pip install -r requirements-devel.txt to install pytest. Then call pytest -vv -s Tests\ to run all the automated tests and ensure all 10 tests pass (this should take approximately 15 minutes); passing tests confirm pybci functions as desired.

Python Package Dependencies Version Minimums

Tested on Python 3.9, 3.10 & 3.11 (appveyor.yml)

The following package versions define the minimum supported by PyBCI, also defined in setup.py:

"pylsl>=1.16.1",
"scipy>=1.11.1",
"numpy>=1.24.3",
"antropy>=0.1.6",
"tensorflow>=2.13.0",
"scikit-learn>=1.3.0",
"torch>=2.0.1"

Earlier package versions may work but are not guaranteed to be supported.
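A quick stdlib-only check of installed versions against these minimums (a naive sketch; it assumes plain "X.Y.Z" version strings):

```python
from importlib.metadata import PackageNotFoundError, version

MINIMUMS = {"pylsl": "1.16.1", "scipy": "1.11.1", "numpy": "1.24.3"}

def as_tuple(v):
    # naive numeric parse, sufficient for plain "X.Y.Z" strings
    return tuple(int(part) for part in v.split(".")[:3])

for pkg, minimum in MINIMUMS.items():
    try:
        installed = version(pkg)
        status = "ok" if as_tuple(installed) >= as_tuple(minimum) else "too old"
        print(f"{pkg} {installed}: {status}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```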

Basic implementation

import time
from pybci import PyBCI

if __name__ == '__main__':
    bci = PyBCI(createPseudoDevice=True) # set default epoch timing, look for the first available LSL marker stream and all data streams
    while not bci.connected: # check whether an LSL marker stream and data stream are available
        bci.Connect()
        time.sleep(1)
    bci.TrainMode() # both marker and data streams available, start training on received epochs
    accuracy = 0
    try:
        while True:
            currentMarkers = bci.ReceivedMarkerCount() # check how many epochs have been received; markers sent too close together are ignored until processing is done
            time.sleep(0.5) # wait for marker updates
            print("Markers received: " + str(currentMarkers) + " Accuracy: " + str(round(accuracy, 2)), end="         \r")
            if len(currentMarkers) > 1: # check that more than one marker type has been received
                if min([currentMarkers[key][1] for key in currentMarkers]) > bci.minimumEpochsRequired:
                    classInfo = bci.CurrentClassifierInfo() # hangs if called too early
                    accuracy = classInfo["accuracy"]
                if min([currentMarkers[key][1] for key in currentMarkers]) > bci.minimumEpochsRequired + 10:
                    bci.TestMode()
                    break
        while True:
            markerGuess = bci.CurrentClassifierMarkerGuess() # in test mode only y_pred is returned
            guess = [key for key, value in currentMarkers.items() if value[0] == markerGuess]
            print("Current marker estimation: " + str(guess), end="           \r")
            time.sleep(0.2)
    except KeyboardInterrupt: # allow the user to break the loop
        print("\nLoop interrupted by user.")

Background Information

PyBCI is Python brain-computer interface software designed to receive any number of Lab Streaming Layer enabled data streams, be it one or many. An understanding of time-series data analysis, the Lab Streaming Layer protocol, and machine learning techniques is a must to integrate innovative ideas with this interface.

An LSL marker stream is required to train the model: a received marker epochs the data received on the accepted data streams based on a configurable time window around set markers. Custom marker strings can optionally have their epoch time window split and overlapped so that they count as more than one marker. For example, in training mode a baseline marker may be sent once for a 60-second window, whereas target actions may only be ~0.5 s long. When testing the model, data is constantly analysed, so it is desirable to standardise the window length; we do this by splitting the 60 s window after the received baseline marker into ~0.5 s windows. PyBCI allows optional overlapping of time windows to account for potentially missed signal patterns/aliasing; as a rule of thumb, when testing a model it is advisable to use a time-window overlap >= 50% (Shannon-Nyquist criterion). See here for more information on epoch timing.
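The baseline-splitting described above can be sketched with NumPy; the sample rate, window length and overlap below are illustrative values, not PyBCI parameters:

```python
import numpy as np

fs = 250                                 # illustrative sample rate (Hz)
baseline = np.random.randn(60 * fs)      # one 60 s baseline epoch
win = int(0.5 * fs)                      # target 0.5 s window length (125 samples)
step = win // 2                          # 50% overlap between consecutive windows

# split the long baseline epoch into overlapping fixed-length windows
starts = range(0, len(baseline) - win + 1, step)
windows = np.stack([baseline[s:s + win] for s in starts])

print(windows.shape)  # (240, 125): 240 overlapping 0.5 s windows
```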

Once the data has been epoched it is sent for feature extraction. There is a general feature extraction class which can be configured for general time- and/or frequency-analysis-based features, ideal for data stream types like "EEG" and "EMG". Since data analysis, preprocessing and feature extraction techniques can vary greatly between device data inputs, a custom feature extraction class can be created for each data stream type. See here for more information on feature extraction.
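As an illustration of such generic features (not PyBCI's actual extraction class), a small time- and frequency-domain extractor with SciPy might look like:

```python
import numpy as np
from scipy.signal import welch

def extract_features(epoch, fs=250):
    """Toy features for one 1-D epoch: RMS, variance, and alpha-band power."""
    rms = np.sqrt(np.mean(epoch ** 2))                      # time domain
    var = np.var(epoch)                                     # time domain
    freqs, psd = welch(epoch, fs=fs, nperseg=min(256, len(epoch)))
    alpha = (freqs >= 8) & (freqs <= 12)                    # frequency domain
    alpha_power = float(np.sum(psd[alpha]))
    return np.array([rms, var, alpha_power])

features = extract_features(np.random.randn(500))
print(features.shape)  # (3,)
```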

Finally, a passable PyTorch, scikit-learn or TensorFlow classifier can be given to the BCI class. Once a defined number of epochs has been obtained for each received epoch/marker type, the classifier can begin to fit the model. It is advised to use bci.ReceivedMarkerCount() to get the number of training epochs received; once the minimum number of epochs received of each type is >= pybci.minimumEpochsRequired (default 10 of each epoch type), the model will begin to fit. Once fit, the classifier info can be queried with CurrentClassifierInfo, which returns the model used and its accuracy. If enough epochs are received or a high enough accuracy is obtained, TestMode() can be called. Once in test mode you can query what PyBCI estimates the current BCI epoch to be (typically baseline is used for no state). Review the examples for scikit-learn and model implementations.

All issues, recommendations, pull-requests and suggestions are welcome and encouraged!


pybci's Issues

TEST: there are no tests or CI implementation

There aren't any unit or integration tests, e.g. with pytest, and no continuous integration implementation, e.g. with AppVeyor. Adding these components is essential to ensure that the package runs successfully on multiple platforms and that new code contributions don't cause unintended breakage. It is also useful and encouraged for developers to run tests on their own systems before creating a PR with their contributions to your code base.

Ping openjournals/joss-reviews#5706

LSL installation instructions are missing

Hi,

I followed the instructions on the README and got this error.

>>> import pybci

RuntimeError: LSL binary library file was not found. Please make sure that the binary file can be found in the package lib folder
 (/my-env-path/pybci/lib/python3.10/site-packages/pylsl/lib)
 or the system search path. Alternatively, specify the PYLSL_LIB environment variable.
 You can install the LSL library with conda: `conda install -c conda-forge liblsl`
or otherwise download it from the liblsl releases page assets: https://github.com/sccn/liblsl/releases
  • Could you mention in the installation section that LSL is a dependency and needs to be installed explicitly?
  • If there are any recommended installation instructions, could you provide those for the supported operating systems?
  • Could you also outline the supported operating systems e.g. linux windows mac?

This issue is part of openjournals/joss-reviews#5706.

Out of place TensorFlow message on command line help call

I get the following when I run pybci --help from the command line:

> pybci --help

2023-10-27 09:53:10.009809: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
usage: pybci [-h] {testSimple,testSklearn,testTensorflow,testPyTorch,createPseudoStreams} ...

It seems a bit out of context, perhaps it's useful to handle this programmatically and either catch+ignore it or add more context for the user?

Additionally, it takes several seconds after executing the call and before this message appears. I suspect some calls to tensorflow or some imports are being made (I haven't looked at the code). But it seems like these calls might be unnecessary if the only goal is to show a help docstring with this specific command line call.
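One common remedy (a sketch, not pybci's actual CLI code) is to defer the heavy imports until a subcommand actually needs them, so that --help parses quickly and prints nothing from TensorFlow:

```python
import argparse

def run_tensorflow_test():
    # heavy import deferred: only paid when this subcommand actually runs
    import tensorflow as tf  # hypothetical use inside the subcommand
    print(tf.__version__)

def build_parser():
    parser = argparse.ArgumentParser(prog="pybci")
    sub = parser.add_subparsers(dest="command")
    sub.add_parser("testTensorflow")  # name taken from the usage line above
    return parser

# --help and argument parsing never trigger the TensorFlow import
args = build_parser().parse_args(["testTensorflow"])
print(args.command)  # testTensorflow
```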

Fail to build docs locally

After local installation of requirements for the docs, I try to build it but get an error

> make -C docs html

Application error:
Cannot find source directory (/Users/jsheunis/Documents/psyinf/pybci/docs/source)
make: *** [html] Error 2

Looks like the docs don't follow a standard layout expected with a sphinx setup. The content should all be in a source directory, or the default source directory that sphinx looks for should be changed. I haven't looked at why readthedocs build doesn't fail because of this, perhaps rtd does some internal corrections before building.

I want to do a PR to check the CI run in any case, so I'll do a PR to fix this issue. Then we'll see if rtd also build correctly after the layout change.

JOSS: several paper comments

  • The summary section states "integration into other systems..." - which other systems, could this statement be a bit more specific?
  • The software functionality section states "end-to-end solution for BCI research" but also ""no additional visualization...". These statements seem contradictory, since I would think an end to end solution includes everything one needs to conduct the research. Relatedly, the paper states researchers can focus on their experiments and don't need extensive software development skills. If they have to integrate it with visualization tools or other systems in order to have the whole system function correctly, this statement might not be completely true.

Ping openjournals/joss-reviews#5706

PyPI package name

I could only find your package on PyPI: https://pypi.org/search/?q=pybci

Is the conflict with a different package still a problem? If not, I would suggest renaming the package to pybci, because the current installation process is slightly confusing with the double "install": https://github.com/LMBooth/pybci/blob/main/docs/BackgroundInformation/Introduction.rst?plain=1#L12-L14

If you can't rename the package, perhaps consider naming it pybci-installer instead.

openjournals/joss-reviews#5706

DOC: Introduction section seems unintuitive

The Introduction section of the docs seems a bit unintuitive. To me, "Introduction" is more about an overview of the tool, and not installation and usage. Currently it contains "Installation", "Simple Implementation", and "What is PyBCI?". I suggest reordering it at the top level to a "Get started" section, which contains all the information a user would need to get started. This would be Installation and Simple Implementation. Then I would move the "What is PyBCI?" into its own section, maybe "Introduction", "Overview", or even "What is PyBCI".

Ping openjournals/joss-reviews#5706

DOC: inconsistent syntax highlighting

I've noticed on several pages of the docs that class names or Python expressions or similar are not always rendered with the expected syntax highlighting, e.g. `className` vs className, or `X = 1` vs X = 1. Please run through all the docs again to make sure you use this consistently.

Ping openjournals/joss-reviews#5706

DOC: Simple Implementation section suggestions

This section is currently too minimal to bring users up to speed on what they should be doing to get started. The only text is "For example". What should a user do? Should they always first write a Python script to get started? Should they copy the sample script and update it with their own parameters? What exactly should be in place on the hardware level before they implement a script?

This section is also a good place to mention and link to the pseudo-device description, so that users can know that they don't necessarily need a hardware setup in order to start using pybci.

openjournals/joss-reviews#5706

ENH: consider implementing a command line interface

The first thing I tried after installing pybci was running pybci via the command line, but this command was not recognized. Perhaps I have a bias towards CLIs, but I think it would be useful to use argparse to implement a simple command or two. At the least, I think it would be useful to be able to type pybci --help in order to get a description of what the tool is and what it can do.

In addition, this is up to your discretion, but I think it would be useful to have some sort of toy example that can be run via the command line. E.g. pybci hello-world, which then starts the pseudo-device and runs pybci with some sort of minimal visualization.

If something like this is implemented, a relevant description could also form part of the "Getting started" section in the docs.

Ping openjournals/joss-reviews#5706

Installation process suggestions

I suggest adding the following to the readme and to the docs:

The use of a virtual environment for installing the package, e.g.:

python -m venv my_env
source my_env/bin/activate

followed by pip install

Installing from the source on github, for those wanting latest un-released features or for developers:

pip install git+https://github.com/LMBooth/pybci.git

or

git clone https://github.com/LMBooth/pybci.git
cd pybci
pip install -e .

Ping openjournals/joss-reviews#5706

JOSS: paper section comments

The review checklist states:

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?

My comments:

  • Summary: there are a number of references to machine learning tools in the summary that, in my opinion, do not convey the message optimally for a "diverse, non-specialist audience". I suggest moving those references to the next paragraph where the tools are also mentioned and where they do fit the message. It still makes sense to perhaps mention some of them in the summary, but more as examples of integrated tools so as to illustrate your tool's strong point of interoperability with common machine learning tools.
  • A statement of need: the part about "[pybci's] relation to other work" is missing in this section.
  • State of the field: this whole section is missing in the paper.

some import errors such as NameError

I've noticed a potential oversight in PseudoDevice.py where certain lines are likely to raise NameErrors due to the absence of the multiprocessing import. Specifically, these lines are:

  1. if self.log_queue is not None and isinstance(self.log_queue, type(multiprocessing.Queue)):
  2. if isinstance(self.stop_signal, multiprocessing.synchronize.Event):

For clarity, here's a snippet illustrating the error:

>>> from multiprocessing import Queue
>>> multiprocessing.Queue
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'multiprocessing' is not defined
>>> 

This brings up three key concerns:

  1. NameError due to the missing import statements in two locations.
  2. The tests currently do not capture this NameError, suggesting potential gaps in our test coverage.
  3. Additionally, there are several imports present in the codebase that are not in use.
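A minimal sketch of the fix for the first concern: import the module itself so the qualified names resolve:

```python
import multiprocessing
import multiprocessing.synchronize  # submodule used in the isinstance check

log_queue = multiprocessing.Queue()
stop_signal = multiprocessing.Event()

# with `import multiprocessing` in place, these checks no longer raise NameError
print(isinstance(stop_signal, multiprocessing.synchronize.Event))  # True
```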

To mitigate the first and third issues, utilizing code linters, such as ruff, could be beneficial:

pip install ruff
ruff .
pybci/ThreadClasses/OptimisedDataReceiverThread.py:1:19: F401 [*] `time` imported but unused
pybci/ThreadClasses/OptimisedDataReceiverThread.py:2:25: F401 [*] `collections.deque` imported but unused
pybci/ThreadClasses/OptimisedDataReceiverThread.py:3:8: F401 [*] `itertools` imported but unused
pybci/ThreadClasses/OptimisedDataReceiverThread.py:5:20: F401 [*] `bisect.bisect_left` imported but unused
pybci/ThreadClasses/OptimisedDataReceiverThread.py:128:35: E712 Comparison to `False` should be `cond is False` or `if not cond:`
pybci/Utils/Classifier.py:4:8: F401 [*] `tensorflow` imported but unused

Incorporating a linter into the CI pipeline will help prevent such oversights in future commits.

For the second concern, generating a code coverage report would provide insights into the areas currently untested.

This issue is a part of openjournals/joss-reviews#5706.

DOC: statement of need

According to the JOSS requirements, the documentation should contain the following:

Do the authors clearly state what problems the software is designed to solve and who the target audience is?

Please make sure this goes into the overview / "What is PyBCI" section.

Ping openjournals/joss-reviews#5706

JOSS: paper title seems redundant

PyBCI: A Python Package for Brain-Computer Interface (BCI) Design/An Open Source Brain-Computer Interface Framework in Python

These two titles seem like different phrasing of the same idea/tool. Is there a reason why both are provided in the title field? I would propose using a single title.

Ping openjournals/joss-reviews#5706

DOC: Community guidelines

Ideally, the software should contain guidelines for contributors to allow them to contribute productively to the code, docs or any other aspect of the package. This would include info such as:

  • how/where to ask questions
  • what process to follow to contribute code
  • how to run tests
  • anything else worth knowing when contributing

This usually takes the form of a CONTRIBUTING.md document at the root of the repo as well as a section with the same content in the docs.

Ping openjournals/joss-reviews#5706

RuntimeError: LSL binary library file was not found

This is not a problem with the package, it's user error. I didn't install lsl separately before running pybci --help via command line. This was fixed by installing lsl on my Mac by running brew install labstreaminglayer/tap/lsl.

Just leaving this trace here for future reference in case other mac-users come across the same:

> pybci --help

Traceback (most recent call last):
  File "/Users/jsheunis/opt/miniconda3/envs/pybci/lib/python3.11/site-packages/pylsl/pylsl.py", line 1287, in <module>
    libpath = next(find_liblsl_libraries())
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/jsheunis/opt/miniconda3/envs/pybci/bin/pybci", line 5, in <module>
    from pybci.cli import main
  File "/Users/jsheunis/opt/miniconda3/envs/pybci/lib/python3.11/site-packages/pybci/__init__.py", line 1, in <module>
    from .pybci import PyBCI
  File "/Users/jsheunis/opt/miniconda3/envs/pybci/lib/python3.11/site-packages/pybci/pybci.py", line 1, in <module>
    from .Utils.LSLScanner import LSLScanner
  File "/Users/jsheunis/opt/miniconda3/envs/pybci/lib/python3.11/site-packages/pybci/Utils/LSLScanner.py", line 1, in <module>
    from pylsl import StreamInlet, resolve_stream
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jsheunis/opt/miniconda3/envs/pybci/lib/python3.11/site-packages/pylsl/__init__.py", line 2, in <module>
    from .pylsl import IRREGULAR_RATE, DEDUCED_TIMESTAMP, FOREVER, cf_float32,\
  File "/Users/jsheunis/opt/miniconda3/envs/pybci/lib/python3.11/site-packages/pylsl/pylsl.py", line 1296, in <module>
    raise RuntimeError(err_msg + __dload_msg)
RuntimeError: LSL binary library file was not found. Please make sure that the binary file can be found in the package lib folder
 (/Users/jsheunis/opt/miniconda3/envs/pybci/lib/python3.11/site-packages/pylsl/lib)
 or the system search path. Alternatively, specify the PYLSL_LIB environment variable.
 You can install the LSL library with conda: `conda install -c conda-forge liblsl`
or with homebrew: `brew install labstreaminglayer/tap/lsl`
or otherwise download it from the liblsl releases page assets: https://github.com/sccn/liblsl/releases
On modern MacOS (>= 10.15) it is further necessary to set the DYLD_LIBRARY_PATH environment variable. e.g. `>DYLD_LIBRARY_PATH=/opt/homebrew/lib python path/to/my_lsl_script.py`

JOSS: clarify hardware capabilities

Generally, BCIs could work with any number of related acquisition hardware, including MRI, EEG, fNIRS, etc. People who aren't familiar with LSL might not know which hardware is being referred to and thus with which hardware PyBCI can be used. You do mention:

The software uses the Lab Streaming Layer (LSL) [@lsl] protocol for data acquisition on various LSL enabled data streams...

but I think this is still too generic. I suggest adding some reference to hardware capabilities in the summary already to make it clearer from the start.

Ping openjournals/joss-reviews#5706

Constant warning messages while running the exercises

Hi, I am having difficulty running the exercises.

I installed pybci and tried the following tests under the Examples directory

  • testSimple.py
  • testSklearn.py

Both of them keep giving the following warnings in a loop.

PyBCI: [WARNING] - No Marker streams available, make sure your accepted marker data Type have been set in bci.lslScanner.markerTypes correctly.
PyBCI: [WARNING] - No data streams available, make sure your streamTypes have been set in bci.lslScanner.dataStream correctly.

In testSimple.py only the first 2 print messages below are displayed. The rest of them do not get displayed.

Starting bci
attemting connect to bci

Am I missing some dependency/configuration? Are the examples running fine on the CI?

The full log is below. The system I am testing is an Ubuntu 22.04 LTS

❯ python testSimple.py
Starting bci
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'lo' (status: 1, multicast: 0, broadcast: 0)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'enp0s31f6' (status: 1, multicast: 4096, broadcast: 2)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'lo' (status: 1, multicast: 0, broadcast: 0)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'enp0s31f6' (status: 1, multicast: 4096, broadcast: 2)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:102   INFO|       IPv4 addr: c0a80108
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'lo' (status: 1, multicast: 0, broadcast: 0)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'enp0s31f6' (status: 1, multicast: 4096, broadcast: 2)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:105   INFO|       IPv6 addr: 2a04:ee41:82:7519:76d7:8d91:667c:f269
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'enp0s31f6' (status: 1, multicast: 4096, broadcast: 2)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:105   INFO|       IPv6 addr: 2a04:ee41:82:7519:55b3:6db:9a75:4d6b
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:89    INFO| netif 'enp0s31f6' (status: 1, multicast: 4096, broadcast: 2)
2023-10-29 20:13:50.492 (   1.384s) [python          ]      netinterfaces.cpp:105   INFO|       IPv6 addr: fe80::f07c:2a0a:1f5d:1f06%enp0s31f6
2023-10-29 20:13:50.492 (   1.384s) [python          ]         api_config.cpp:270   INFO| Loaded default config
PyBCI: [INFO] - Invalid or no sklearn classifier passed to clf. Checking tensorflow model... 
2023-10-29 20:13:52.689121: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2023-10-29 20:13:52.689167: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2023-10-29 20:13:52.689206: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2023-10-29 20:13:52.695314: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-29 20:13:53.352814: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
PyBCI: [INFO] - Invalid or no tensorflow model passed to model.  Checking pytorch torchModel...
PyBCI: [INFO] - Invalid or no PyTorch model passed to model. Defaulting to SVM by SkLearn
attemting connect to bci
PyBCI: [WARNING] - No Marker streams available, make sure your accepted marker data Type have been set in bci.lslScanner.markerTypes correctly.
PyBCI: [WARNING] - No data streams available, make sure your streamTypes have been set in bci.lslScanner.dataStream correctly.
PyBCI: [WARNING] - No Marker streams available, make sure your accepted marker data Type have been set in bci.lslScanner.markerTypes correctly.
PyBCI: [WARNING] - No data streams available, make sure your streamTypes have been set in bci.lslScanner.dataStream correctly.
PyBCI: [WARNING] - No Marker streams available, make sure your accepted marker data Type have been set in bci.lslScanner.markerTypes correctly.
...

This is part of the openjournals/joss-reviews#5706 review.

DOC: examples

This section of the documentation is too minimal IMO, and it's a section that users will be likely to visit.

Generally, the same comments I made in #10 also apply here, i.e. what does a user have to do to get any individual example running? Just linking them to an online script does not provide an intuitive user experience.

In addition:

  • The table does not seem like a good way to represent the different examples; I had to scroll horizontally to be able to read each paragraph. I suggest putting all examples into their own subsection.
  • You could consider adding a tag (e.g. using https://shields.io/) to each example that shows whether it can be used with the pseudodevice or not. If this is useful, then these tags can be used for other categorizations of examples as well.

Regarding the statement:

if using with own LSL capable hardware you may need to adjust the scripts accordingly, namely set createPseudoDevice=False.

I think it's suboptimal if users have to change scripts in order to make examples work. I think passing an argument to the script itself would make it easier for them. This is another indication that a command line API would be useful (see #12).

Ping openjournals/joss-reviews#5706

Testing notes

I ran pytest, but turns out this package is not installed.

I think you should add a requirements file (e.g. requirements-devel.txt) to your repo root directory containing any requirements that developers might need for local use, including pytest. In addition, I suggest adding a note on the readme and in the contributing docs to instruct people what to do in order to run tests locally.
