task_stimuli

PsychoPy scripts for stimuli presentation

This software is a set of cognitive tasks developed in PsychoPy, together with a system to schedule sets of tasks within a session.

Tasks are classes defined in src/tasks; they are instantiated in src/sessions files, each of which describes the set of tasks for a session.

Task material (images, movies, lists, ...) is stored mainly in data.

Outputs (logs, responses) are stored in the output folder and follow a BIDS-like structure.

When run with the --fmri option, tasks wait for a TTL character before starting.

When run with the --eyetracking option, this software starts Pupil and triggers the recording of the eye movie and detected pupil positions, which are written to the output folder in a BIDS-like way. Note that eyetracking data requires offline post-/re-processing before it can be used and shared.

The utils folder contains scripts to prepare movies in a reproducible way using the melt command-line video editor run in Singularity.

License: MIT

Attributions

...

INSTALL

apt install python3-pip git
mkdir git
cd git

# this section is optional, only if using eyetracking
git clone https://github.com/pupil-labs/pupil.git
# follow instructions at https://docs.pupil-labs.com/#linux-dependencies

pip3 install git+https://github.com/psychopy/psychopy.git
# modify the file in psychopy that crashes
pip3 install scikit-video

git clone git@github.com:courtois-neuromod/task_stimuli.git
cd task_stimuli
mkdir output

How to launch a session

python3 main.py --subject test --session video003 --tasks videoshorttest --eyetracking --fmri -o /path/to/dataset/

  • --subject: any identifier; it is used to save the data in a BIDS-like structure

  • --session: a session identifier, also used to save the data in the BIDS-like structure

  • --tasks: must match the name of a session script src/sessions/ses-<tasks_name>.py, which contains the tasks to be run in that session

  • --eyetracking: turns on eyetracking, starting the Pupil software and the recording of the eye video

  • -o: specifies the path to the root of the dataset where the data is output (in sourcedata or BIDS)

  • --fmri: waits for the TTL (which can be emulated by pressing 5 on the keyboard) before starting the tasks labeled as fMRI-dependent; without this flag, tasks run back to back. It also plays a video loop at the beginning of the session so that the participant has sound and visual stimuli to test the setup (then skip it to start the session). A sketch of the TTL wait is shown after the next paragraph.

  • --meg: TODO!

There is no risk of overwriting if you run this command multiple times: the data files are suffixed with the date and time at which the session started.
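
For reference, here is a minimal sketch of what the TTL wait referenced above could look like, assuming the keyboard emulation described in the --fmri bullet; the actual logic lives in src/shared/fmri.py and may differ.

from psychopy import event

def wait_for_ttl(ttl_key="5"):
    # Block until the scanner TTL (or its keyboard emulation) arrives.
    event.clearEvents()                # drop any stale keypresses
    event.waitKeys(keyList=[ttl_key])  # returns once the TTL key is received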

Creating session files

You can create new sessions by adding a ses-xxxx.py file in the src/sessions folder. Each file only has to create a TASKS list of task subclass instances, which is loaded by the main script and run in the provided order. Check the existing files for examples, or the sketch below.
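
A minimal sketch of a session file follows; the task module, class, and arguments are hypothetical, so check the existing ses-*.py files for the real classes and parameters.

# src/sessions/ses-example.py
from ..tasks import video  # hypothetical task module

TASKS = [
    video.SingleVideo(              # hypothetical task class
        "data/videos/example.mp4",  # stimulus file (hypothetical path)
        name="task-example_run-01", # name used for BIDS-like output files
    ),
]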

How to interact with the software:

stimuli

  • 5: emulates the MRI trigger and starts the task "by hand" (the key can be changed in src/shared/fmri.py)
  • <ctrl>-c : aborts and skips the current task and moves to the next one
  • <ctrl>-n : aborts the task and restarts it, showing the instructions again
  • <ctrl>-q : quits the session, saving data and closing the eyetracking software

If (and only if) the software stops responding and you cannot quit, switch to the terminal and kill the software with <ctrl>-c.

eyetracking

There are "hotkeys in the pupil software to trigger actions", use the buttons with these letters or type. C (-c): launch the calibration of the eyetracking, showing markers to the participant T (-t): a test of the calibration accuracy, also showing markers on the screen

Important: there are two pieces of software running, PsychoPy and Pupil. When done with the calibration, click on the Stimuli window to give the focus back to PsychoPy; otherwise it will not get the TTL and the task will not start with the scanner.

This was a problem that had to be fixed to avoid failed acquisition starts. Update: it should be fixed now; the software takes focus when a task is loaded.
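
For reference, a hedged sketch of how a PsychoPy window can grab keyboard focus with the default pyglet backend; whether task_stimuli does exactly this is an assumption.

from psychopy import visual

win = visual.Window()
win.winHandle.activate()  # pyglet call: raises the window and takes keyboard focus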

source code


src/tasks contains the scripts for the tasks.

The src/shared folder factorizes the code common across tasks.

eyetracking

The eyetracking part is managed by launching the Pupil Capture software and starting a single recording for the whole session.

calibration

Runs a short calibration task in which the subject has to look at points shown on the screen.

gazemapping

Once the calibration has been run (though Pupil seems to reload the previous calibration), Pupil produces gaze information corresponding to positions on the screen. That information is then displayed in near real-time on the experimenter's screen.
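
As an illustration, here is a generic sketch of reading mapped gaze data over Pupil's network API (Pupil Remote listening on its default port 50020); the actual integration in task_stimuli may differ.

import zmq
import msgpack

ctx = zmq.Context()
req = ctx.socket(zmq.REQ)              # ask Pupil Remote for the SUB port
req.connect("tcp://127.0.0.1:50020")
req.send_string("SUB_PORT")
sub_port = req.recv_string()

sub = ctx.socket(zmq.SUB)              # subscribe to gaze datums mapped by the calibration
sub.connect("tcp://127.0.0.1:%s" % sub_port)
sub.subscribe("gaze.")

topic, payload = sub.recv_multipart()
gaze = msgpack.loads(payload)
print(gaze["norm_pos"])                # normalized screen position, (0, 0) = bottom left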

Contributors

bpinsard, francoispgm, htwangtw, hyruuk, maellef, mariestlaurent, me-pic, rainyfields, vborghe


Issues

add a flag similar to skip_n_tasks but more intuitive

I was thinking it might be easy to add an argument that would replace skip_n_tasks for video segments, the difference being that you tell it which segment to start with rather than the number of segments to skip.

For example, for friends-s6, instead of passing --skip_n_tasks 12 you would pass --start_with s06e07a.

It's not that big a deal, but if it is quick to implement it would enhance the API at minimal cost.
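
A hedged sketch of how the proposed flag could work, assuming main.py's existing argparse parser and that each entry of the session's TASKS list exposes a name attribute containing the segment id (the attribute name is an assumption):

# parser is assumed to be main.py's existing argparse.ArgumentParser
parser.add_argument("--start_with",
                    help="first segment to run, e.g. s06e07a")

def skip_to_segment(tasks, segment):
    # Return the task list starting at the first task whose name contains segment.
    idx = next(i for i, task in enumerate(tasks) if segment in task.name)
    return tasks[idx:]

# after parsing: TASKS = skip_to_segment(TASKS, parsed.start_with)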

add a button to signal a problem in the cozmofriends task

Add a button on the controller ("X" for example) that the participant can press when encountering a problem, such as climbing up a wall and not being able to get back down on its own.
The person supervising will then resolve the problem.
That person will be advised not to press any button until Cozmo is back on the ground, facing the maze normally with a clear view.
When the problem is resolved, the person will press another button to start driving again.

The duration of this event will run from the press of the 'problem' ('X') button until the press of another button.
@FrancoisPgm

Irregular timestamps encoded in the .log file for videogame tasks

When running the videogame task, the timestamps associated with Keyrelease events differ from the other timestamps and cannot be interpreted easily. Example (from the mario3 sub-01 ses-001 dataset):

853299.8402 DATA Keypress: b
853300.0108 EXP level step: 60
853300.0735 DATA Keypress: y
25.9670 DATA Keyrelease: b
26.0501 DATA Keyrelease: y

How to run videogame tasks?

Hello Neuromod people,

I am working on the Individual Brain Charting (IBC) project, and we would like to implement your videogame protocols for our project. I was trying to figure out how it would work, but it seems certain ROM files are required? If you could provide a walkthrough on how one could replicate your videogame tasks, that would be great!

I am looking forward to hearing from you!

Thanks!

New triggers need to be added to the MEG version of videogames tasks

In order to record frame events as well as button-press events with maximal precision, we would need to send them separately on different parallel ports. Additionally, we would need separate triggers for button presses and releases.

Here is the strategy I suggest:

For frame events:

  • Send triggers to this address: "/dev/parport1"
  • Use this dictionary of trigger codes:

    frame_triggers = {
        "run_start": int("00000101", 2),  # corresponds to the "INFO GO" events in psychopy's .log
        "rep_start": int("00000111", 2),  # corresponds to "level step: 0" in psychopy's .log, and to the first frame of the associated .bk2
        "rep_stop": int("00000011", 2),   # corresponds to the last frame of the .bk2
        "frame": int("00000001", 2),      # corresponds to each frame displayed by psychopy
    }

Note: we don't need the "run_stop" trigger anymore because we will stop the recording manually. The "rep_stop" trigger is important though, because it would help us recover any missing frame trigger that might have occurred during the repetition.

For button events:

  • Send triggers to this address: "/dev/parport0"
  • Use this dictionary of trigger codes:

    button_triggers = {
        "D_press": int("00000001", 2),
        "L_press": int("00000010", 2),
        "U_press": int("00000011", 2),
        "R_press": int("00000100", 2),
        "B_press": int("00000101", 2),
        "Y_press": int("00000110", 2),
        "X_press": int("00000111", 2),
        "A_press": int("00001000", 2),
        "D_release": int("00010001", 2),
        "L_release": int("00010010", 2),
        "U_release": int("00010011", 2),
        "R_release": int("00010100", 2),
        "B_release": int("00010101", 2),
        "Y_release": int("00010110", 2),
        "X_release": int("00010111", 2),
        "A_release": int("00011000", 2),
    }

New calibration

Design a new calibration task with dynamic fixation detection.

fix scenarii

Fix the scenarii of levels 4 and 5 of Shinobi to avoid going into untrained bonus levels.

Error message when failing to import session tasks is deceptive

In:

task_stimuli/main.py

Lines 11 to 15 in 44f1227

try:
    ses_mod = importlib.import_module('src.sessions.ses-%s' % parsed.tasks)
    tasks = ses_mod.get_tasks(parsed) if hasattr(ses_mod, 'get_tasks') else ses_mod.TASKS
except ImportError:
    raise(ValueError('session tasks file cannot be found for %s' % parsed.session))

the error message includes parsed.session, but I think it should instead include parsed.tasks, since that is what we are trying to import. I noticed this because I mistyped "shinobi3levels" (forgetting the "s" at the end) and got ValueError: session tasks file cannot be found for shinobi011 as the error message, which confused me.
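
The fix the reporter suggests amounts to swapping the variable in the message:

except ImportError:
    raise(ValueError('session tasks file cannot be found for %s' % parsed.tasks))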
