Getting Started with Lifting Monocular Events to 3D Human Poses

In brief

  • Train classification and reconstruction models based on ResNet34 and ResNet50

  • Train 2D and 3D HPE models

  • Two datasets (DHP19, Events-H3m)

  • Different event representations (constant-count, spatiotemporal voxelgrid)
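For orientation, the sketch below shows what the two representations compute, assuming events arrive as (x, y, t, p) arrays. The function names are illustrative, not the repository's API: a constant-count frame histograms a fixed number of the most recent events per pixel, while a voxel grid spreads each event's polarity across temporal bins with linear interpolation.

```python
import numpy as np

def constant_count_frame(x, y, num_events, height, width):
    """Histogram the most recent `num_events` events into a single frame."""
    frame = np.zeros((height, width), dtype=np.float32)
    np.add.at(frame, (y[-num_events:], x[-num_events:]), 1.0)
    return frame

def voxel_grid(x, y, t, p, num_bins, height, width):
    """Spatiotemporal voxel grid: polarity spread over `num_bins` time bins."""
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (num_bins - 1)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    pol = np.where(p > 0, 1.0, -1.0)
    np.add.at(grid, (left, y, x), pol * (1.0 - w_right))
    np.add.at(grid, (right, y, x), pol * w_right)
    return grid

# Example on 5000 synthetic events for a 260x346 DAVIS-sized sensor.
rng = np.random.default_rng(0)
n = 5000
x, y = rng.integers(0, 346, n), rng.integers(0, 260, n)
t, p = np.sort(rng.random(n)), rng.integers(0, 2, n)
print(constant_count_frame(x, y, 2000, 260, 346).shape)  # (260, 346)
print(voxel_grid(x, y, t, p, 4, 260, 346).shape)         # (4, 260, 346)
```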

Environment

Create a virtual environment from requirements.txt. Using pipenv:

```bash
pipenv install -r requirements.txt
pipenv shell
python -m pip install .
```

Data

Generate DHP19

We provide two methods to generate frames from the DHP19 dataset. Both require downloading the ground truth, DAVIS data, and camera projection matrices from the download page. We call this directory rootDataFolder.

Generate frames

Method 1

  1. Use the DHP19 tools (you can find them at https://github.com/SensorsINI/DHP19). This generates `*events.h5` and `*labels.h5` files in outDatasetFolder

  2. Use the python script generate_dataset_frames.py --input_dir outDatasetFolder --out_dir output_frames_dir to separate frames for different cameras (a loading sketch follows this list). You should get:

    +-- output_frames_dir
    | ...
    | +-- S1_session_5_mov_7_frame_86_cam_0.npy
    | +-- S1_session_5_mov_7_frame_86_cam_1.npy
    | +-- S1_session_5_mov_7_frame_86_cam_2.npy
    | +-- S1_session_5_mov_7_frame_86_cam_3.npy
    | ...

  3. To generate the labels, see Generate labels below.
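Each .npy file stores the event frame of one camera. A minimal sketch for inspecting one of the generated frames (the file name comes from the tree above):

```python
import numpy as np

# Load a single-camera event frame produced by generate_dataset_frames.py.
frame = np.load("output_frames_dir/S1_session_5_mov_7_frame_86_cam_0.npy")
print(frame.shape, frame.dtype, frame.min(), frame.max())
```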

Method 2

We provide a toolset for generating .mat event frames from the DHP19 dataset. The supported representations are spatiotemporal voxelgrid, time surfaces, and constant count. In Generate_DHP19.m, set rootCodeFolder, rootDataFolder, and outDatasetFolder according to your setup.

You must modify Generate_DHP19.m according to your needs:

  1. You can generate constant-count frames along with labels by setting the extract function to ExtractEventsToFramesAndMeanLabels.

  2. You can generate other representations with ExtractEventsToVoxel or ExtractEventsToTimeSurface.

After setting the extract function in the script, launch:

```bash
matlab -r "run('./scripts/dhp19/generate_DHP19/Generate_DHP19.m')"
```
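Once generated, you can inspect a .mat frame from Python; a minimal sketch assuming scipy is installed (the file name is a placeholder, and the variable names inside each file depend on the chosen extract function):

```python
from scipy.io import loadmat

mat = loadmat("outDatasetFolder/some_frame.mat")  # placeholder file name
# Keys starting with '__' are MATLAB metadata; the rest are the stored arrays.
for key, value in mat.items():
    if not key.startswith("__"):
        print(key, getattr(value, "shape", type(value)))
```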

To generate the labels, see Generate labels below.

Generate labels

Use the python script generate_joints.py --input_dir outDatasetFolder --out_dir output_labels_dir --p_matrices_dir rootDataFolder/P_matrices to generate .npz files from DHP19 labels. The output tree should be:

+-- output_labels_dir
| ...
| +-- S1_session_1_mov_1_frame_86_cam_0_2dhm.npz
| +-- S1_session_1_mov_1_frame_86_cam_1_2dhm.npz
| +-- S1_session_1_mov_1_frame_86_cam_2_2dhm.npz
| +-- S1_session_1_mov_1_frame_86_cam_3_2dhm.npz
| ...
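The 2D labels come from projecting DHP19's 3D joints through the camera projection matrices in P_matrices; generate_joints.py handles this for you. Below is only an illustrative sketch of the underlying pinhole projection (the function and the dummy inputs are hypothetical, not the script's code):

```python
import numpy as np

def project_joints(joints_3d, P):
    """Project Nx3 world-space joints to Nx2 pixel coordinates with a 3x4 P."""
    homogeneous = np.hstack([joints_3d, np.ones((len(joints_3d), 1))])  # Nx4
    uvw = homogeneous @ P.T                                             # Nx3
    return uvw[:, :2] / uvw[:, 2:3]                                     # Nx2

# Dummy usage: an identity-like camera matrix and a 13-joint skeleton.
P = np.hstack([np.eye(3), np.zeros((3, 1))])
joints_2d = project_joints(np.random.rand(13, 3) + [0.0, 0.0, 2.0], P)
print(joints_2d.shape)  # (13, 2)
```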

Help

You can ask for help either by contacting me at gianluca.scarpellini[at]iit.it or by opening an issue!

Generate events-Human3.6m

First steps

  1. Request a LICENSE for Human3.6m for academic purposes.

  2. Download and extract the human3.6m dataset. You can use the tool inside human3.6m_downloader, or download Videos and Raw_Angles from http://vision.imar.ro/human3.6m/description.php (see also https://github.com/facebookresearch/VideoPose3D). NOTE: in either case, you need to request a LICENSE from the authors. NOTE: you DON'T need D3_Positions or any other data from human3.6m; joint positions are generated from raw values in the following steps. Remove any D3_Positions subdirectory from MyPoseFeatures.

Joints

In order to generate good-quality labels, you need the full joint positions (D3_Positions). These data are not distributed with human3.6m, which gives access to Raw_Angles only. We provide a MATLAB script to convert Raw_Angles into FULL_D3_Positions.

  1. Get a valid MATLAB installation. We tested the script on MATLAB R2021a.

  2. Launch MATLAB inside Generate_Full_D3_Positions. You should see the paths for the experiments being added.

  3. Specify your folder directory.

  4. Run the script generate_data_h36m.m. Afterwards, you should have a FULL_D3_Positions directory in your dataset (you can verify this with the sketch below).
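A quick way to confirm the conversion worked is to look for the new directories from Python; a minimal sketch (the dataset root path is yours to fill in):

```python
from pathlib import Path

root = Path("path/to/human3.6m")  # your extracted dataset root
hits = sorted(root.rglob("*D3_Positions*"))
print(f"found {len(hits)} D3_Positions entries")
```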

Process

  1. Use the event_library generator script to generate raw events from the mp4 files:

```bash
python event_library/tools/generate.py frames_dir=path/to/dataset out_dir=out upsample=true emulate=true search=false representation=raw
```

  2. Launch prepare_data_h3m.py to generate a .npz file containing FULL_D3_Positions (a quick inspection sketch follows this list).

  3. Launch generate_datasets.py to generate constant_count frames and joints.
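To sanity-check the .npz archive produced by prepare_data_h3m.py, you can list its contents; a minimal sketch (the file name below is a placeholder for whatever the script wrote):

```python
import numpy as np

data = np.load("out/h36m_full_d3_positions.npz", allow_pickle=True)  # placeholder
for name in data.files:
    print(name, getattr(data[name], "shape", type(data[name])))
```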

Docker

We provide a docker image at https://hub.docker.com/repository/docker/gianscarpe/event-based-hpe containing human3.6m_downloader, event_library, and their dependencies. You still need to generate FULL_D3_Positions using a local MATLAB installation.


Model zoo

A model zoo of backbones and models for constant_count and voxelgrid representations, trained on both DHP19 and Events-H3m, is publicly accessible at [work in progress].

Agents

Train and evaluate agents for different tasks. If you want to launch an experiment with default parameters (backbone ResNet50, DHP19 with constant-count representation; see the paper for details), you can simply run the train script below after setup and data generation.

Train

A complete configuration is provided at ./confs/train/config.yaml. In particular, refer to ./confs/train/dataset/… for dataset configuration (including path specification), and to ./confs/train/training for different tasks.
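The overrides used throughout this section (dataset=..., training.load_path=...) follow a Hydra-style layout, where config.yaml composes the dataset and training groups. To peek at the top-level file without starting a run, a minimal sketch assuming omegaconf is available (Hydra resolves the groups only at run time, so they won't be expanded here):

```python
from omegaconf import OmegaConf

# Load only the top-level file; the dataset/training groups are composed
# by Hydra when train.py actually runs.
cfg = OmegaConf.load("confs/train/config.yaml")
print(OmegaConf.to_yaml(cfg))
```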

```bash
python train.py
```

If you want to resume a finished experiment, you can set training.load_training to true and provide a checkpoint path:

```bash
python train.py training.load_training=true training.load_path={YOUR_MODEL_CHECKPOINT}
```

To initialize a model with a checkpoint from a finished experiment (load only the model, not the trainer nor the optimizer status):

```bash
python train.py training.load_training=false training.load_path={YOUR_MODEL_CHECKPOINT}
```

To train a margipose_estimator agent:

```bash
python scripts/train.py training=margipose dataset=$DATASET training.model=$MODEL \
    training.batch_size=$BATCH_SIZE training.stages=$N_STAGES
```

Supported datasets are: constantcount_h3m, voxelgrid_h3m, constantcount_dhp19, voxelgrid_dhp19

Test

To evaluate a model, you can do the following (it generates a results.json file with the outputs):

```bash
python scripts/evaluate.py training.load_path={YOUR_MODEL_CHECKPOINT} dataset=$DATASET
```
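A minimal sketch for inspecting the resulting file (the exact schema depends on the agent, so this just pretty-prints whatever was written):

```python
import json

# Pretty-print the evaluation outputs written by evaluate.py.
with open("results.json") as f:
    print(json.dumps(json.load(f), indent=2))
```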

To evaluate a model with the per-movement protocol for DHP19, you can do:

```bash
python scripts/evaluate_dhp19_per_movement.py \
    training.load_path={YOUR_MODEL_CHECKPOINT} dataset=$DATASET
```