
A Python package for the validation of heart rate and heart rate variability in wearables

License: MIT License

Topics: heart-rate, heart-rate-variability, hrv, psychophysiology, python, wearables, validation

wearable-hrv's Introduction

cover

License: MIT Current version at PyPI Supported Python Versions Last Commit Twitter Follow Binder

wearablehrv is a Python package that comes in handy if you want to validate wearables and establish their accuracy in terms of heart rate (HR) and heart rate variability (HRV). It provides a complete, comprehensive pipeline that takes you from your recorded raw data through all the necessary pre-processing steps and data analysis, together with many visualization tools built around graphical user interfaces.

Documentation

For the complete documentation of the API and modules, visit:

Documentation Status

Examples

Getting Started

Individual Pipeline

Group Pipeline

You can also explore the example notebooks directly in your browser without installing any packages by using Binder. Simply click the badge below to get started:

Binder

User Installation

The package can be easily installed using pip:

pip install wearablehrv

Alternatively, the repository can be cloned:

git clone https://github.com/Aminsinichi/wearable-hrv.git
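If you work from a clone, an editable install keeps the checkout importable. A minimal sketch, assuming the repository's setup.py is used as-is:

cd wearable-hrv
pip install -e .

You can then confirm the install with python -c "import wearablehrv".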

Development

wearablehrv was developed by Amin Sinichi (https://orcid.org/0009-0008-2491-1542) during his PhD in Psychophysiology and Neuropsychology at Vrije Universiteit Amsterdam.

Contributors

Overview

The package is divided into two broad ranges of functionalities:

  • Individual Pipeline: used to process the raw data of a single participant.
  • Group Pipeline: used when you have multiple participants whose data have already been processed through the Individual Pipeline.

Below, we offer a quick overview of the main functionalities.

Data Collection

To establish the validity of a wearable that records heart rate and heart rate variability, let's say a smartwatch, you need a "ground truth" device. This is usually a gold-standard electrocardiograph (ECG) that measures HR and HRV accurately.

Note: We call this gold-standard a "criterion" device in our pipeline.

A participant then wears this ECG together with the smartwatch and records data from both simultaneously. It is beneficial to test the participant in various conditions, so we get a better sense of how well the device performs.

Usually, validating multiple devices at once is a cumbersome task, requiring a lot of data preparation, processing, different alignments, and so on. A powerful feature of wearablehrv is that it does not matter how many devices you want to test, or in how many conditions: you just record your data, and the pipeline walks you from that data to the final decision on whether a device is accurate compared to the ground truth.

This is what your experiment may look like: a participant wears a few wearables named Kyto, Heartmath, Empatica, and Rhythm, together with a gold-standard ECG (VU-AMS) with electrodes on the chest, and performs different tasks in different conditions (e.g., sitting for 5 minutes, standing for 3 minutes, walking for 3 minutes, and biking for 3 minutes, while keeping all the devices on):

Sensor Placement

1. Individual Pipeline

1.1 Prepare Data

It is easy to read your data and experimental events with the pipeline from all your devices in one go.

# Import the module
import wearablehrv

# Download some example data
path = wearablehrv.data.download_data_and_get_path()
# Define the participant ID 
pp = "test" 
# Define your experimental conditions, for instance, sitting, standing, walking, and biking
conditions = ['sitting', 'standing', 'walking', 'biking'] 

# Define the devices you want to validate against the criterion. 
devices = ["kyto", "heartmath", "rhythm", "empatica", "vu"] 

# Define the name of the criterion device
criterion = "vu" 

# Read data, experimental events, and segment the continuous data into smaller chunks
data = wearablehrv.individual.import_data(path, pp, devices)
events = wearablehrv.individual.define_events(path, pp, conditions, already_saved=True, save_as_csv=False)
data_chopped = wearablehrv.individual.chop_data(data, conditions, events, devices)

1.2 Preprocess Data

You have various methods to properly preprocess your raw data.

Correct the Lag, Trim Data

With a user-friendly GUI, correct the lag between devices, align data by cropping the beginning and the end of each of your devices, and have full control over each device and condition.

wearablehrv.individual.visual_inspection(data_chopped, devices, conditions, criterion)

visual_inspection

Detect Outliers and Ectopic Beats

Easily apply different detection methods for each device and in each condition. An important advantage is that you can run this within a single condition, for a specific device, keeping the preprocessing of each segment independent.

data_pp, data_chopped = wearablehrv.individual.pre_processing(data_chopped, devices, conditions, method="karlsson", custom_removing_rule=0.25, low_rri=300, high_rri=2000)

Diagnostic Plots

Check how well you performed the preprocessing by comparing the detected outliers in the criterion and your selected device.

wearablehrv.individual.ibi_comparison_plot(data_chopped, data_pp, devices, conditions, criterion, width=20, height=10)

comparison_plot

1.3 Analyze and Plot

Easily calculate all relevant outcome variables (e.g., RMSSD, mean HR, frequency domain measures) in all your devices and conditions, and use various plotting options.

time_domain_features, frequency_domain_features = wearablehrv.individual.data_analysis(data_pp, devices, conditions)
wearablehrv.individual.bar_plot(time_domain_features, frequency_domain_features, devices, conditions, width=20, height=25, bar_width=0.15)

bar_plot

2. Group Pipeline

2.1 Prepare Data

Easily load all processed data that you have put through the Individual Pipeline.

wearablehrv.data.clear_wearablehrv_cache() 
path = wearablehrv.data.download_data_and_get_path(["P01.csv", "P02.csv", "P03.csv", "P04.csv", "P05.csv", "P06.csv", "P07.csv", "P08.csv", "P09.csv", "P10.csv"])
conditions = ['sitting', 'standing', 'walking', 'biking'] 
devices = ["kyto", "heartmath", "rhythm", "empatica", "vu"] 
criterion = "vu" 
features = ["rmssd", 'mean_hr', 'nibi_after_cropping', 'artefact'] 
data, file_names = wearablehrv.group.import_data(path, conditions, devices, features) # Select the features you are interested in
data = wearablehrv.group.nan_handling(data, devices, features, conditions) 

2.2 Signal Quality

A powerful tool to assess and report signal quality in all your wearables, in all conditions. You just need to define a few thresholds.

data, features, summary_df, quality_df = wearablehrv.group.signal_quality(data, path, conditions, devices, features, criterion, file_names, ibi_threshold=0.30, artefact_threshold=0.30)
wearablehrv.group.signal_quality_plot2(summary_df, condition_selection=False, condition=None)

signal_quality

2.3 Statistical Analysis

Perform four of the most common statistical methods for validation and create the corresponding plots, again for all your devices in all conditions, just by running a few functions.

Mean Absolute Percentage Error

mape_data = wearablehrv.group.mape_analysis(data, criterion, devices, conditions, features)
wearablehrv.group.mape_plot(mape_data, features, conditions, devices)

mape

Regression Analysis

regression_data = wearablehrv.group.regression_analysis(data, criterion, conditions, devices, features, path)
wearablehrv.group.regression_plot(regression_data, data, criterion, conditions, devices, features, marker_color='red', width=10, height_per_condition=4)

regression

Intraclass Correlation Coefficient

icc_data = wearablehrv.group.icc_analysis(data, criterion, devices, conditions, features, path, save_as_csv=False)
wearablehrv.group.icc_plot(icc_data, conditions, devices, features)

icc

Bland-Altman Analysis

blandaltman_data = wearablehrv.group.blandaltman_analysis(data, criterion, devices, conditions, features, path, save_as_csv=False)
wearablehrv.group.blandaltman_plot(data, criterion, conditions, devices, features)

bland_altman

2.4 Descriptive Plots

There are many options for you to meaningfully plot your group data and make an informed decision on the accuracy of your devices.

wearablehrv.group.violin_plot(data, conditions, features, devices)

violin plot

Questions

For any questions regarding the package, please contact:

wearable-hrv's People

Contributors

aminsinichi


wearable-hrv's Issues

Joss Paper: Comparison to "state-of-the-art" and references for the process

While reading through the paper, I was missing a little bit more thorough comparison to other available packages to more clearly highlight what this package adds, e.g. how much easier using this package is than using the hrv-analysis package directly.

Also, it would be great if you could add citations to the paper that provide support for the evaluation metrics that the package provides; basically, to provide some justification for why you chose the set of metrics and plots that are included in the package for comparison of the systems.

Potentially Confusing naming: CRITERION DEVICE

In the paper and the package, the term CRITERION DEVICE is used. At least for me (and I also couldn't find anything with a quick Google search), the use of the term in this context was new. I think a more common name would be "reference device" or "ground-truth device".

Some small comments regarding Code Structure and Code Formatting

Overall the code quality looks great, but here are a couple of smaller things that could be improved:

  • Use of Path instead of strings for file/folder paths -> This will solve a bunch of issues caused by differences between operating systems. For example, it would make .replace('\\', '/') in the example obsolete (see the sketch after this list).
  • At the moment the files are huge. It might help overall readability to split the large files into logical chunks
  • "random" and repeated comments (For example at the beginning of the individual file)
  • Inconsistent and unusuale formatting choices (in particular the space before the opening bracket of a function) can be confusing. I would highly recommend to stick to official code guidelines and enforce them by using ruff or black
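A minimal sketch of the Path suggestion (the folder and file names below are hypothetical, not taken from the package):

from pathlib import Path

# Build an OS-independent path instead of manipulating strings
data_dir = Path("data") / "participants"   # joins with the correct separator on any OS
csv_file = data_dir / "test_kyto.csv"      # hypothetical file name
print(csv_file.as_posix())                 # always forward slashes, no .replace('\\', '/') needed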

Some comments about Packaging and Versioning

Great job publishing the package on PyPI!

Here are a couple of suggestions to further improve packaging and distribution:

  • Your package should have an internal version identifier (usually a version constant in the __init__) that corresponds to the package version, so that users can check which version they have installed (a sketch follows this list)
  • You should associate each PyPI release with a git tag (and push the tag to GitHub) to make sure that people can browse the code of exactly the version they are currently running (e.g. when reporting bugs)
  • Optionally, you can create a GitHub release for every version
  • Optionally, you can streamline the process by using GitHub Actions to automatically publish on release or on tag
  • You should consider adding a changelog. Otherwise, new versions and their impact are meaningless to users.
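A minimal sketch of the version-identifier suggestion (the layout and version number are illustrative, not the package's actual ones):

# wearablehrv/__init__.py
__version__ = "0.1.0"  # keep in sync with setup.py and the PyPI release

# Users can then check the installed version:
import wearablehrv
print(wearablehrv.__version__)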

Fix warning messages

The following functions generate warning messages due to dependency package updates:

  • individual.import_data()
  • individual.chop_data()
  • individual.display_changes()
  • group.matrix_plot()
  • group.density_plot()
  • group.heatmap_plot() (does not produce a warning, but it seems like something has changed)
  • group.bonferroni_correction_icc()

Tests rely on hardcoded path

At the moment the test files try to add hardcoded paths to sys.path:

sys.path.append(r"C:\Users\msi401\OneDrive - Vrije Universiteit Amsterdam\PhD\Data\Coding\Validation Study\wearable-hrv")

That should be adapted so that the tests can run on every system without modifying the files.
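One possible fix, sketched under the assumption that the test files live in a tests/ folder directly below the repository root, is to derive the path from the test file's own location:

import sys
from pathlib import Path

# Resolve the repository root relative to this test file instead of hardcoding an absolute path
REPO_ROOT = Path(__file__).resolve().parents[1]
sys.path.insert(0, str(REPO_ROOT))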

Installation Problem - Use of deprecated astropy API

Using pip install to install the package in a clean env results in errors during import.

Specifically, ImportError: cannot import name 'LombScargle' from 'astropy.stats'. This is likely related to the deprecation mentioned here (lightkurve/lightkurve#535).

It seems the package does not support recent astropy versions. This should be specified in setup.py to ensure that compatible versions are installed (or, even better, newer versions of astropy should be supported).

Downgrading astropy to <6.0 fixed the issue for now.
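A hedged sketch of an import that works across astropy versions (LombScargle moved to astropy.timeseries and was later dropped from astropy.stats); whether this matches the package's internals is an assumption:

try:
    # astropy >= 3.2 ships LombScargle in astropy.timeseries
    from astropy.timeseries import LombScargle
except ImportError:
    # Older astropy versions only provide it in astropy.stats
    from astropy.stats import LombScargle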

Some comments about Docstrings and documentation

  • The docstring formatting is inconsistent. Some functions use the numpydoc format while others use a Google-style (?) format (e.g. )
  • It would be great, if a rendered API docu (e.g. using RTD) would exist to easier browse the available functions
  • The main documentation example should be restructured so that it can be run end-to-end without modification, letting new users play around with the functionality right away. For this, I would suggest replacing the "undefined" paths with paths to the example file and splitting the docs up into multiple files (one showing the main functionality and multiple others showing specific functionality, e.g. pre-processing for a specific system)
  • Ideally, rendered versions of the example notebook should be hosted somewhere (e.g. as part of RTD page), so that potential users can see the functionality "in action" without installing the package
  • If you want to be "fancy" with the handling of your example files, consider using https://pypi.org/project/pooch/ to handle downloading them (a sketch follows this list)
  • The current documentation notebook has a syntax error in the first cell (a missing ! to mark the first line as a shell command)
  • Ideally, add a test that executes your documentation example to make sure that it works as expected
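A minimal pooch sketch (the base URL, file name, and skipped hash are placeholders, not the package's real example-data locations):

import pooch

# Registry of example files; pinning a sha256 per file is recommended instead of None
example_data = pooch.create(
    path=pooch.os_cache("wearablehrv"),
    base_url="https://github.com/Aminsinichi/wearable-hrv/raw/master/docs/examples/",  # placeholder URL
    registry={"test_kyto.csv": None},  # placeholder file name; None skips hash verification
)

csv_path = example_data.fetch("test_kyto.csv")  # downloads on first call, then reuses the local cache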

Package is missing Contribution Guidelines

As per JOSS requirements, a package should have Contribution Guidelines:

Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
