Data-Driven Animation Techniques (D2AT)

November 27th to 30th, 2017 – at BITEC, Bangkok, Thailand

Aims and Scope:

The aim of this full-day workshop, held in conjunction with SIGGRAPH Asia 2017, is to bring together researchers from diverse backgrounds, such as computer graphics, computer vision, virtual reality, human-computer interaction and machine learning, with a common interest in data-driven, realistic animation. Interest in this emerging topic may stem from a variety of data sources, e.g. point clouds collected from laser scanners or RGBD cameras such as the Kinect, high-resolution geometry reconstructed by Structure from Motion (SfM), motion capture devices including high frame-rate optical trackers and IMUs, GPS data from millions of mobile devices, etc. Despite the high dimensionality and huge volume of such datasets, advances in machine learning and big data technologies are allowing researchers to extract important features from them, which can be applied to data analysis, synthesis and editing.

This workshop will present attendees with the latest approaches to data acquisition, data processing, feature extraction, data analysis, and synthesis for applications in computer graphics and animation. Moreover, we plan to bring together people from computer graphics, computer animation, computer vision, big data analysis and machine learning to create a synergy in applying machine learning techniques to animation production.

We call for high-quality work that falls within the topics of data-driven techniques for computer graphics and animation. Novel ideas and results are highly welcome even if they are at a preliminary stage. We are also interested in papers that discuss existing techniques applied in a novel context.

Some specific problems of interest include, but are not limited to:

  1. data-driven character animation,

  2. data-driven cloth animation,

  3. data-driven hair animation,

  4. data-driven fluid animation,

  5. data-driven facial animation,

  6. data-driven motion editing,

  7. data-driven motion retargeting and synthesis,

  8. data-driven physics-based animation,

  9. data-driven techniques for virtual reality/augmented reality applications,

  10. machine learning techniques for computer animation,

  11. machine learning techniques for non-photorealistic rendering,

  12. machine learning techniques for character control,

  13. machine learning techniques for human computer interaction,

  14. video-based human motion analysis and tracking,

  15. image/video-based facial recognition,

  16. image/video-based human localization,

  17. image/video-based 3D reconstruction, and

  18. statistical, structural or syntactic pattern recognition methods for motion analysis and synthesis.


Submissions should conform to the ACM SIGGRAPH Asia 2017 proceedings style (Template), with up to 8 pages. Papers must be original, unpublished work, written and presented in English. They must be submitted online through the EasyChair submission system. Authors of selected papers appearing in the workshop will be invited to submit extended versions of their papers to special sections of journals (Computers & Graphics, and Computer Animation and Virtual Worlds). The proceedings of the workshop will be included in the ACM Digital Library.

Presenters are expected to cover their own travel costs, which are not covered by SIGGRAPH Asia. One contributor per selected paper must register with a Basic Conference or Full Conference registration in order for the work to be published and presented at SIGGRAPH Asia.



Deadline for paper submissions: 15 August 2017

Notification of acceptance: 31 August 2017

Camera-ready copy of papers: 15 September 2017

Workshop website: www.euh2020aniage.org/workshop-siggraph-asia

Contact: hyu@bournemouth.ac.uk


Organizing Committee

Workshop Co-Chairs:

Dr Hongchuan Yu, National Centre for Computer Animation, Bournemouth University, UK

Dr Taku Komura, School of Informatics, University of Edinburgh, UK

Prof. Jian Jun Zhang, National Centre for Computer Animation, Bournemouth University, UK


Program Committee

Prof. Yizhou Yu, Dept. of Computer Science, The University of Hong Kong, China

Prof. Tong-Yee Lee, Dept. of Computer Science and Information Engineering, National Cheng-Kung University, Tainan, Taiwan

Prof. Chang-Tsun Li, Dept. of Computer Science, Charles Sturt University, NSW, Australia

Dr Bui The Duy, Faculty of Information Technology, University of Technology – Vietnam National University, Hanoi, Vietnam

Dr Le Thanh Ha, Faculty of Information Technology, University of Technology – Vietnam National University, Hanoi, Vietnam

Prof Salem Benferhat, Centre de Recherche en Informatique de Lens-CNRS, University of Artois, Arras, France

Dr Karim Tabia, Centre de Recherche en Informatique de Lens-CNRS, University of Artois, Arras, France

Dr Sylvain Lagrue, Centre de Recherche en Informatique de Lens-CNRS, University of Artois, Arras, France

Dr Masaki Oshita, Kyushu Institute of Technology, Japan

Dr Xi Zhao, Xian Jiaotong University, China

Dr Hubert Shum, Northumbria University, UK

Dr Shigeo Morishima, Waseda University, Japan

Dr Edmond Ho, Northumbria University, UK

Dr He Wang, University of Leeds, UK


Keynote 1

Title: An Overview of Procedural Urban Modeling in Academia and Industry

Speaker: Dr Tom Kelly (ucactke@ucl.ac.uk)


Manually creating objects to fill our virtual 3D worlds can be tedious and time-consuming; ‘procedural modeling’ is the study of programs that do this for us. Of particular interest today is urban procedural modeling – creating systems to automatically generate cities. These systems are of interest to city designers, architects, SFX companies, and video game developers. Procedural modeling is a relatively new discipline, but intersects many existing fields, including User Interfaces, Geometry, and Perceptual Studies. In this talk, Tom will discuss his experiences of procedural modeling in industry and academia.

One example is that users are easily confused by the many input parameters that 3D procedural models have, and are therefore unable to interact with them; presenting these parameters to users requires advances in User Interfaces. Another field is Geometry – here the problem is finding geometric primitives that can be used to create realistic models; ideally these should be simple, yet useful when modeling a wide range of real-world buildings. Finally, it is important to understand what makes a procedural model realistic. This is a challenging perceptual problem because a single urban procedural model can create many different buildings. Tom will present and discuss his research into each of these problems, and how the results have been used in industry.

Bio sketch:

Tom is a postdoc at UCL under Niloy Mitra; he studies the modeling and reconstruction of large urban environments, fusing techniques from geometry processing, procedural modeling, and video games to create state-of-the-art solutions to real-world problems. Previously, Tom worked at animation and video start-ups and as a software engineer at Esri.


Keynote 2

Title: Simulating the natural motion of living creatures

Speaker: Dr Jungdam Won (nonaxis@gmail.com)


Simulating the natural motion of living creatures has always been at the heart of research interest in computer graphics and animation. Recent movies and video games have featured realistic, computer-generated creatures based on special-effects technology. Flying creatures have attracted particular attention because of their unique and beautiful motions. Physics-based control of flying creatures such as birds, which guarantees the physical plausibility of the resulting motion, has not been widely studied due to several technical difficulties such as under-actuation, complex musculoskeletal interactions, and high dimensionality. In this talk, several different approaches to tackling these challenges will be introduced. First, we recorded the motion of a dove using marker-based optical motion capture and high-speed video cameras. The bird flight data thus acquired allow us to parameterize natural wingbeat cycles and provide the simulated bird with reference trajectories to track in physics simulation. A data augmentation method is also introduced to construct a regression-based controller. Second, we trained deep neural networks that generate appropriate control signals given the state of the flying creature. Starting from a user-provided keyframe animation, learning proceeds automatically via deep reinforcement learning, equipped with evolutionary strategies to improve the convergence rate and the quality of the control.

Bio sketch:

Jungdam Won is a post-doctoral researcher in the Movement Research Lab at Seoul National University. He received his Ph.D. and B.S. in Computer Science and Engineering from Seoul National University, Korea, in 2017 and 2011, respectively. He worked at Disney Research Los Angeles as a Lab Associate Intern with Jehee Lee, Carol O’Sullivan, and Jessica K. Hodgins in 2013. His current research interests are in character animation, where he applies physics-based control, motion capture, and machine learning approaches.