
Toolbox for MM-Fi Dataset

Introduction

MM-Fi is the first multi-modal non-intrusive 4D human pose estimation dataset with 27 daily or rehabilitation action categories for high-level wireless human sensing tasks. MM-Fi consists of over 320k synchronized frames of five modalities from 40 human subjects in four domains. The annotations include 2D/3D human pose keypoints, 3D position, 3D dense pose, and the category of action.

For more details and demos of the MM-Fi dataset, please refer to the [Project Page] and [Paper].

Please download the dataset through [Google Drive] or [Baidu Netdisk].

Quick Start for MMFi Toolbox

To get started, follow the instructions in this section. We introduce the basic steps and how you can customize the configuration.

Step 1

Dependencies

Please make sure you have installed the following dependencies before using the MMFi dataset.

  • Python 3+ distribution
  • PyTorch >= 1.1.0

Quick installation of dependencies (in one local or virtual environment):

pip install torch torchvision pyyaml numpy scipy opencv-python
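
If you want to double-check the environment, a minimal sketch like the one below verifies the PyTorch requirement (nothing toolbox-specific is assumed):

# Minimal sanity check that the installed PyTorch satisfies the >= 1.1.0 requirement.
import torch

major, minor = (int(x) for x in torch.__version__.split('+')[0].split('.')[:2])
assert (major, minor) >= (1, 1), f'PyTorch {torch.__version__} is older than 1.1.0'
print('PyTorch', torch.__version__, 'is OK')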

Step 2

Once the environment is set up successfully, download the dataset.

After unzipping all four parts, the dataset directory structure should be as follows.

Dataset Directory Structure

${DATASET_ROOT}
|-- E01
|   |-- S01
|   |   |-- A01
|   |   |   |-- rgb
|   |   |   |-- mmwave
|   |   |   |-- wifi-csi
|   |   |   |-- ...
|   |   |-- A02
|   |   |-- ...
|   |   |-- A27
|   |-- S02
|   |-- ...
|   |-- S10
|-- E02
|......
|-- E03
|......
|-- E04
|......
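
To quickly confirm that the unzipped dataset matches the tree above, you can run a small check like the following sketch (the dataset_root path is a placeholder for your own path):

# Sketch: count subjects and action folders per environment (E01-E04).
import os

dataset_root = '/path/to/MMFi_Dataset'  # placeholder; change to your own path
for env in sorted(os.listdir(dataset_root)):            # E01, E02, E03, E04
    env_dir = os.path.join(dataset_root, env)
    if not os.path.isdir(env_dir):
        continue
    subjects = [s for s in sorted(os.listdir(env_dir))
                if os.path.isdir(os.path.join(env_dir, s))]   # S01, S02, ...
    actions = sum(len(os.listdir(os.path.join(env_dir, s))) for s in subjects)
    print(f'{env}: {len(subjects)} subjects, {actions} action folders')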

Step 3

Edit your code and configuration file (.yaml file) carefully before running. For details of the configuration, please check the description of keys below.

Here we take the code snippet from example.py as an example.

import yaml
import numpy as np
import torch

# Please add the downloaded mmfi directory into your python project. 
from mmfi import make_dataset, make_dataloader

dataset_root = '/data3/MMFi_Dataset'  # The path will differ on your machine.
with open('config.yaml', 'r') as fd:  # Replace with the path to your own .yaml config file.
    config = yaml.load(fd, Loader=yaml.FullLoader)

train_dataset, val_dataset = make_dataset(dataset_root, config)
rng_generator = torch.manual_seed(config['init_rand_seed'])
train_loader = make_dataloader(train_dataset, is_training=True, generator=rng_generator, **config['train_loader'])
val_loader = make_dataloader(val_dataset, is_training=False, generator=rng_generator, **config['validation_loader'])

# Your training / evaluation code goes here.
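
As a minimal sketch of what could replace the coding placeholder above, the snippet below fetches a single batch and prints its fields; the exact keys depend on the modality and data_unit chosen in your config, so nothing specific is assumed here:

# Sketch: inspect one batch from the training loader.
# Field names vary with the selected modalities, so we only print what is there.
for batch in train_loader:
    if isinstance(batch, dict):
        for key, value in batch.items():
            print(key, getattr(value, 'shape', type(value)))
    else:
        print(type(batch))
    break  # one batch is enough for a sanity check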

Step 4

Now you can start the implementation, for example with the commands below:

cd your_project_dir
# run your script
python your_script_name.py path_to_dataset_dir your_config.yaml
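
For reference, here is a hedged sketch of what your_script_name.py could look like so that it accepts the two positional arguments used above; the script itself is illustrative, not part of the toolbox:

# your_script_name.py -- illustrative entry point, not part of the toolbox.
import argparse
import yaml
import torch

from mmfi import make_dataset, make_dataloader

def main():
    parser = argparse.ArgumentParser(description='MM-Fi example runner (sketch)')
    parser.add_argument('dataset_root', help='path_to_dataset_dir')
    parser.add_argument('config_file', help='your_config.yaml')
    args = parser.parse_args()

    with open(args.config_file, 'r') as fd:
        config = yaml.load(fd, Loader=yaml.FullLoader)

    train_dataset, val_dataset = make_dataset(args.dataset_root, config)
    rng_generator = torch.manual_seed(config['init_rand_seed'])
    train_loader = make_dataloader(train_dataset, is_training=True,
                                   generator=rng_generator, **config['train_loader'])
    val_loader = make_dataloader(val_dataset, is_training=False,
                                 generator=rng_generator, **config['validation_loader'])
    # ... your training / evaluation code ...

if __name__ == '__main__':
    main()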

Description of Keys in Configuration

modality

  • Single modality

    Please use one of the following:

    rgb, infra1, infra2, depth, lidar, mmwave, wifi-csi

    Note that every modality should be in lowercase.

    Currently, the raw images (rgb, infra1, and infra2) of subjects are not publicly available due to privacy concerns. Instead, we provide the 17 body keypoints extracted from the images using the ResNet-48 model.

  • Multiple modalities:

    Please use | to connect different modalities.

    Note that spaces are not allowed between modalities. For example, wifi-csi|mmwave is accepted, but wifi-csi | mmwave is not.

data_unit

  • sequence

    The data generator will return data with sequence as the unit, e.g., each sample contains 297 frames.

  • frame

    The data generator will return data with frame as the unit, e.g., each sample only has 1 frame.

protocol

This key defines which activities are enabled in your training/testing.

  • protocol 1: Only the daily activities are enabled.
  • protocol 2: Only the rehabilitation activities are enabled.
  • protocol 3: All activities are enabled.

split

The train/test split used by your code. Three predefined splits from our paper are already provided.

  • manual_split: Please refer to the example in the .yaml file and customize your own dataset split here (i.e., which subjects and actions are regarded as testing data).
  • split_to_use: Specify the split you want.

train_loader and validation_loader

These two options define the parameters used to construct your dataloaders. We keep them open so that you can customize them freely.
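
Putting the keys above together, here is a hedged sketch of what the loaded configuration could look like as a Python dict; the key names come from this README and example.py, while the concrete values are only illustrative and should be taken from the provided config.yaml:

# Illustrative only: key names follow this README / example.py, values are placeholders.
config = {
    'modality': 'wifi-csi|mmwave',      # single modality or several joined with '|'
    'data_unit': 'sequence',            # 'sequence' or 'frame'
    'protocol': 'protocol 1',           # protocol 1 / 2 / 3 (exact value format per config.yaml)
    'split_to_use': 'manual_split',     # which split definition to apply (illustrative)
    'manual_split': {},                 # your own subject/action split, if used
    'init_rand_seed': 0,
    'train_loader': {'batch_size': 8},       # forwarded to the training dataloader
    'validation_loader': {'batch_size': 8},  # forwarded to the validation dataloader
}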

More Details about MMFi Dataset

Activities Included

The MMFi dataset contains two types of actions: daily activities and rehabilitation activities.

| Activity | Description | Category |
|----------|-------------|----------|
| A01 | Stretching and relaxing | Rehabilitation activities |
| A02 | Chest expansion (horizontal) | Daily activities |
| A03 | Chest expansion (vertical) | Daily activities |
| A04 | Twist (left) | Daily activities |
| A05 | Twist (right) | Daily activities |
| A06 | Mark time | Rehabilitation activities |
| A07 | Limb extension (left) | Rehabilitation activities |
| A08 | Limb extension (right) | Rehabilitation activities |
| A09 | Lunge (toward left-front) | Rehabilitation activities |
| A10 | Lunge (toward right-front) | Rehabilitation activities |
| A11 | Limb extension (both) | Rehabilitation activities |
| A12 | Squat | Rehabilitation activities |
| A13 | Raising hand (left) | Daily activities |
| A14 | Raising hand (right) | Daily activities |
| A15 | Lunge (toward left side) | Rehabilitation activities |
| A16 | Lunge (toward right side) | Rehabilitation activities |
| A17 | Waving hand (left) | Daily activities |
| A18 | Waving hand (right) | Daily activities |
| A19 | Picking up things | Daily activities |
| A20 | Throwing (toward left side) | Daily activities |
| A21 | Throwing (toward right side) | Daily activities |
| A22 | Kicking (toward left side) | Daily activities |
| A23 | Kicking (toward right side) | Daily activities |
| A24 | Body extension (left) | Rehabilitation activities |
| A25 | Body extension (right) | Rehabilitation activities |
| A26 | Jumping up | Rehabilitation activities |
| A27 | Bowing | Daily activities |

Subjects and Environments

40 volunteers (11 females and 29 males), aged from 23 to 40, participated in the data collection of MM-Fi. We appreciate their kind assistance in the completion of this work!

In addition, the 40 volunteers were divided into 4 groups corresponding to 4 different environmental settings so that cross-domain research can be conducted for WiFi sensing.

Reference

Please cite the following paper if you find that the MM-Fi dataset and toolbox benefit your research. Thank you for your support!

@article{yang2023mmfi,
      title={MM-Fi: Multi-Modal Non-Intrusive 4D Human Dataset for Versatile Wireless Sensing},
      author={Yang, Jianfei and Huang, He and Zhou, Yunjiao and Chen, Xinyan and Xu, Yuecong and Yuan, Shenghai and Zou, Han and Lu, Chris Xiaoxuan and Xie, Lihua},
      journal={arXiv preprint arXiv:2305.10345},
      year={2023}
}
