PeopleCap 2018

ECCV 2018 Workshop
Afternoon, September 14th

What is PeopleCap?


Accurately tracking, reconstructing, capturing and animating the human body in 3D is critical for human-computer interaction, games, special effects and virtual reality. In the past, this has required extensive manual animation.
 
Today, research in this area allows us to capture and learn realistic models of people and hands from real measurements coming from scans, depth cameras, color cameras and inertial sensors. Such a model is, ultimately, a compact parameterization of surface geometry that can be deformed to generalize to novel poses and shapes. The model can then be used to track bodies and hands from noisy sensors by optimizing the model parameters so as to best fit noisy and incomplete image observations.
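To make the fitting idea concrete, here is a minimal, hypothetical sketch: a toy linear blendshape model (a stand-in for a real body or hand model such as those discussed at the workshop) is fit to noisy synthetic observations by least-squares optimization of its shape parameters. The model, basis, and parameter names are illustrative assumptions, not any specific published model.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy linear shape model (hypothetical): deformed vertices are the
# template plus a weighted sum of blendshape basis directions.
n_vertices, n_params = 50, 3
template = rng.normal(size=(n_vertices, 2))
basis = rng.normal(size=(n_params, n_vertices, 2))

def model(params):
    # Deform the template by the weighted blendshapes.
    return template + np.tensordot(params, basis, axes=1)

# Synthesize noisy "sensor observations" from known ground-truth parameters.
true_params = np.array([0.5, -1.0, 0.25])
observations = model(true_params) + 0.01 * rng.normal(size=(n_vertices, 2))

def residuals(params):
    # Per-vertex discrepancy between the deformed model and the observations.
    return (model(params) - observations).ravel()

# Recover the shape parameters by nonlinear least squares.
fit = least_squares(residuals, x0=np.zeros(n_params))
print(fit.x)  # close to true_params despite the observation noise
```

Real systems follow the same pattern, but with far richer models (pose, shape and soft-tissue parameters), robust losses, and image-based residuals such as silhouettes or keypoints.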

The workshop is intended to offer a meeting and discussion platform for researchers with diverse backgrounds in computer graphics, computer vision, optimization and machine learning. We hope this will push the state of the art in capturing and modeling humans in terms of models, methods and datasets.
 
The call for papers covers the following areas:
  • 3D Human pose and shape estimation from images, depth cameras or inertial sensors
  • 3D Hand pose estimation and tracking
  • Human body, hand and face modeling
  • 3D/4D Performance capture of bodies, faces and hands
  • Capture of people and clothing
  • Human body and hand models
  • Models of human soft-tissue
  • Registration of bodies, hands and faces
 
While the computer vision community has seen a lot of work on detecting and tracking people in 2D, much less work has focused on reasoning directly in 3D. Hence, PeopleCap places special emphasis on methods that work in 3D and on methods that use a generative model.

 
 

Submission

  • Submission Deadline: July 24
  • Reviews Due: August 3
  • Notification of Acceptance: August 7
  • Camera-Ready Submission: August 17
  • Workshop: September 14
- Note: camera-ready submission is now open.
- Note: the submission deadline has been extended to July 24th.
- All deadlines are at 5 PM Pacific time.
- Paper submissions should follow the same guidelines as the ECCV main conference: 14 pages plus references.
- Submissions can be uploaded to the CMT: https://cmt3.research.microsoft.com/PeopleCap2018/
  1. If you do not have one already, create a CMT3 account and log in.
  2. If you are not directed to the PeopleCap submission page, type "PeopleCap" in the search box to find it.
  3. Create a new submission and upload the main paper (and supplementary material, if any).
- Accepted papers will be published in the proceedings of ECCV workshops. 
 
 

Invited Speakers

Lourdes Agapito, University College London
Yaser Sheikh, Carnegie Mellon University
Adrian Hilton, University of Surrey
Jamie Shotton, Microsoft
Franziska Mueller, MPI for Informatics, Saarbrücken
Stefanie Wuhrer, INRIA
 
 

Program

  • 13:30  Welcome and introduction
  • 13:40  Weak Supervision for 3D Human Pose Estimation (Lourdes Agapito)
  • 14:20  4D Performance Capture in the Wild (Adrian Hilton)
  • 15:00  Towards Real-time Hand Tracking from In-the-wild Video (Franziska Mueller)
  • 15:40  Poster session and coffee break
  • 16:40  Photorealistic Telepresence (Yaser Sheikh)
  • 17:10  Generative Models of 3D Human Faces (Stefanie Wuhrer)
  • 17:40  Closing remarks and best paper announcement

Accepted papers:

Nikolay N Chinaev (VisionLabs)*; Chigorin Alexander (VisionLabs); Ivan Laptev (INRIA Paris)
MobileFace: 3D Face Reconstruction with Efficient CNN Regression

Bastian Wandt (Leibniz University Hannover)*; Hanno Ackermann (Leibniz University Hannover); Bodo Rosenhahn (Leibniz University Hannover)
A Kinematic Chain Space for Monocular Motion Capture

Hang Dai (University of York)*; Nick Pears (University of York, UK); William Smith (University of York)
Non-rigid 3D Shape Registration using an Adaptive Template

Aaron Jackson (University of Nottingham)*; Chris Manafas (2B3D); Georgios Tzimiropoulos (University of Nottingham)
3D Human Body Reconstruction from a Single Image via Volumetric Regression

Dylan Drover (Amazon Lab126)*; Rohith MV (Amazon Lab126); Ching-Hang Chen (Amazon Inc.); Ambrish Tyagi (Amazon Lab126); Amit Agrawal (Amazon Lab126); Cong Phuoc Huynh (Amazon)
Can 3D Pose be Learned from 2D Projections Alone?
 
 

Organizers

Gerard Pons-Moll

gpons !at! mpi-inf.mpg.de

Research Group Leader, MPI for Informatics

Jonathan Taylor

jtaylor !at! cs.toronto.edu

Research Scientist, Google

Previous edition: PeopleCap 2017
 