
Support for Human3.6M Dataset

Developer: Somedaywilldo (somedaywilldo@foxmail.com)

Inverse Kinematics and Reference Rendering

If you want to learn movements directly from 3D joint coordinates, you can use inverse_kinematics.py and render_reference.py. Currently they only support the dataset format created by VideoPose3D.

Download the pre-processed Human3.6M dataset from VideoPose3D here.
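To sanity-check the download, you can inspect the archive with NumPy. This is only a sketch: it assumes the layout produced by VideoPose3D's preprocessing script, i.e. a pickled dictionary stored under the 'positions_3d' key, mapping subject -> action -> an array of 3D joint positions.

import numpy as np

# Quick check of the downloaded archive (assumed layout: a pickled dict under
# 'positions_3d', subject -> action -> (num_frames, num_joints, 3) array).
data = np.load('data_3d_h36m.npz', allow_pickle=True)['positions_3d'].item()
for subject, actions in data.items():
    for action, positions in actions.items():
        print(subject, action, positions.shape)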

After downloading the data_3d_h36m.npz file to this directory, you can run this command:

$ python render_reference.py \
    --dataset_path=<path to data_3d_h36m.npz> \
    --subject=S11 \
    --action=Walking \
    --json_path=<path to save Walking.json> \
    --fps=24 \
    --loop=wrap \
    --draw_gt
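The file written to --json_path is a DeepMimic-style motion clip. As a rough, hand-written illustration of the assumed layout (a loop mode plus a list of frames, each frame starting with its duration followed by the root position, root rotation, and joint rotations), the snippet below writes a placeholder clip; the single frame is made up and omits the joint rotations a real clip would contain.

import json

# Placeholder sketch of the assumed DeepMimic motion-clip layout; not real mocap data.
frame = [1.0 / 24]             # frame duration at 24 fps
frame += [0.0, 0.9, 0.0]       # root position
frame += [1.0, 0.0, 0.0, 0.0]  # root rotation quaternion
# ... joint rotations would follow here in a real clip ...

motion = {"Loop": "wrap", "Frames": [frame]}
with open("Walking.json", "w") as f:
    json.dump(motion, f, indent=2)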

"--draw_gt" will draw the ground truth using pybullet.addUserDebugLine(), the right part of the humanoid lines will be red, other parts will be black. This is just for debugging, the render process will be much faster without the '--draw_gt' flag.

If no errors show up, it should look like this video.

Contact

The inverse kinematics and reference rendering modules were developed by Somedaywilldo.

Email: somedaywilldo@foxmail.com

Reference

[1] DeepMimic: Example-Guided Deep Reinforcement Learning of Physics-Based Character Skills [Link]

[2] 3D human pose estimation in video with temporal convolutions and semi-supervised training [Link]