You may associate motion capture with some of Hollywood’s biggest and most expensive films, like Avatar, Planet of the Apes, or Lord of the Rings. We’ve all seen videos of Andy Serkis with tracking dots all over his face or actors wearing an intricate arrangement of ping-pong-like spheres all over their bodies. But motion capture can also be a cost-efficient production tool on smaller or indie projects.
Motion capture technology is being used to create expressive and lifelike 3D animated characters across films, TV shows, games, commercials, social media, and more. As the capabilities of the technology have grown, so have its usability and accessibility, and it may be an invaluable tool for your next project.
Motion capture, or “mocap” for short, and also known as performance capture, is a production technique used to track and record the movement of people and convert it into digital data. This data can then be applied to digital characters or objects to create lifelike, realistic animation.
Mocap systems are most commonly used to track body, face, or hand movements. They range from multi-camera tracking setups, such as OptiTrack and Vicon, to wearable systems with individual inertial sensors, such as Rokoko and Xsens.
Setting up your animation pipeline starts with determining which motion capture system you plan to use.
A camera system like OptiTrack or Vicon uses high-speed cameras and reflective markers to capture the movement of objects or people in 3D space with high precision and accuracy. The markers are attached to the object or person being tracked, and the cameras track the markers in real-time. While this system outputs highly precise data, camera tracking is usually specific to one production location due to long set-up times.
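To make the underlying math concrete, here is a minimal Python sketch of the triangulation step an optical system performs for each marker: given two calibrated camera views of the same reflective marker, it recovers the marker’s 3D position with a standard least-squares solve. The function name and the two-camera setup are illustrative assumptions; commercial systems like OptiTrack and Vicon use many more cameras and their own calibration and solving pipelines.

```python
import numpy as np

def triangulate_marker(P1, P2, uv1, uv2):
    """Estimate a marker's 3D position from two calibrated camera views
    using the direct linear transform (least-squares) method.

    P1, P2   : 3x4 camera projection matrices
    uv1, uv2 : (u, v) pixel coordinates of the same marker in each view
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Solve A @ X = 0 via SVD; the solution is the last right singular vector.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize to (x, y, z)
```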
On the other hand, a wearable mocap system such as Rokoko or Xsens uses inertial sensors to capture and track the movements of the body. The suit is equipped with a number of small, lightweight sensors containing accelerometers, gyroscopes, and magnetometers.
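As a rough illustration of how those sensor readings become motion data, the sketch below fuses gyroscope and accelerometer samples into a single joint angle with a basic complementary filter. It is a simplified, hypothetical example of the kind of per-segment sensor fusion an inertial suit performs; Rokoko and Xsens rely on far more sophisticated proprietary algorithms.

```python
import numpy as np

def complementary_filter(gyro_rates, accels, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into a pitch-angle estimate,
    a simplified stand-in for the sensor fusion an inertial mocap suit
    performs for every body segment.

    gyro_rates : angular velocity about one axis, rad/s, per sample
    accels     : (ax, az) accelerometer readings per sample, m/s^2
    dt         : time between samples, in seconds
    """
    pitch = 0.0
    estimates = []
    for rate, (ax, az) in zip(gyro_rates, accels):
        # Gyroscope: integrate angular velocity (smooth, but drifts over time).
        gyro_pitch = pitch + rate * dt
        # Accelerometer: gravity direction gives an absolute but noisy tilt.
        accel_pitch = np.arctan2(ax, az)
        # Blend the two: trust the gyro short-term, the accelerometer long-term.
        pitch = alpha * gyro_pitch + (1 - alpha) * accel_pitch
        estimates.append(pitch)
    return estimates
```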
Hand capture systems, such as Xsens Gloves by Manus, can also be paired with the suit to get precise finger and hand movements when needed. Unlike other systems that require a dedicated studio space with multiple cameras, Xsens is easily transported and used on location.
For capturing facial movements and expressions, Faceware is a wearable headset that captures the facial expression of an actor, such as movement of the eyebrows or the opening and closing of the mouth. The data collected from Faceware can be applied to a 3D rigged character in real-time or in post-production.
For a more cost-effective and accessible facial capture solution, some indie directors and artists are using the iPhone’s built-in face tracking through Apple’s ARKit, which can track 52 distinct facial expressions as well as head movements.
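Under the hood, facial capture systems like these typically drive a character through blendshapes: per-frame weights (ARKit exposes 52 of them, with names like jawOpen and browInnerUp) that blend sculpted expression shapes on top of a neutral face. The short Python sketch below shows the basic idea; the function and the tiny two-vertex “mesh” are purely illustrative.

```python
import numpy as np

def apply_blendshapes(neutral_mesh, blendshape_deltas, weights):
    """Deform a neutral face mesh with weighted blendshapes, the way per-frame
    facial capture coefficients drive a 3D face rig.

    neutral_mesh      : (V, 3) array of resting vertex positions
    blendshape_deltas : dict of name -> (V, 3) per-vertex offsets
    weights           : dict of name -> float in [0, 1], one per expression
    """
    deformed = neutral_mesh.copy()
    for name, weight in weights.items():
        deformed += weight * blendshape_deltas[name]
    return deformed

# Hypothetical single-frame example with a two-vertex "mesh":
neutral = np.zeros((2, 3))
deltas = {"jawOpen": np.array([[0.0, -1.0, 0.0], [0.0, 0.0, 0.0]])}
frame = apply_blendshapes(neutral, deltas, {"jawOpen": 0.6})
```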
The combination of body, facial, and hand motion capture can create a complete performance that accurately captures the full range of human movement and expression. This makes it a powerful tool for film and video game production, virtual reality experiences, and other applications that require realistic and immersive character animation.
Although it is most commonly associated with film, TV, and games, mocap is now a useful tool for music videos, short films, social media, VR/AR and more. It can be used in any 3D animation pipeline, including fully animated projects or even VFX in live action projects. It is often used in situations where realistic and accurate human movement is required or when it is important to capture subtle nuances in movement that are difficult to replicate with keyframe animation.
Traditionally, all 3D animation was done with keyframe animation. This labor-intensive process requires animators to meticulously manipulate a character's movements frame by frame. Considering that a single project can involve hundreds of thousands of frames, the complexity and tedium of this method are significant. Each frame represents a moment in time, and the animator's skill in crafting these moments is crucial to bringing characters to life.
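The core mechanic of keyframing is simple: the animator sets values at a handful of frames and the software interpolates everything in between. A minimal sketch, assuming linear interpolation (production tools use spline curves with adjustable tangents), might look like this:

```python
def interpolate_keyframes(keyframes, frame):
    """Linearly interpolate a value between hand-set keyframes, the basic
    operation underlying a keyframe animation curve.

    keyframes : sorted list of (frame_number, value) pairs set by the animator
    frame     : the frame being evaluated
    """
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    # Outside the keyed range: hold the first or last keyed value.
    return keyframes[0][1] if frame < keyframes[0][0] else keyframes[-1][1]

# e.g. an elbow rotation keyed at frames 0, 12, and 24:
curve = [(0, 0.0), (12, 45.0), (24, 90.0)]
print(interpolate_keyframes(curve, 6))   # 22.5 degrees
```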
Motion capture technology offers a transformative alternative to this traditional approach. It captures real-life movements, providing a dataset that accurately reflects the natural motion of the human body. This data serves as a foundation for animators, allowing them to start with movements that are inherently realistic and fluid.
However, motion capture is not simply a plug-and-play solution. The raw data captured often requires meticulous review and refinement. Animators step in to correct any errors and adjust the captured motions to align with the intended character performance. This process, known as clean-up animation, blends the authenticity of real movement with the creative vision of the animator. It streamlines the animation process while maintaining the artistry and nuance that are hallmarks of high-quality 3D animation. This combination of technology and human expertise is what makes motion capture an invaluable tool in the modern animator's toolkit.
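As one concrete, if simplified, example of a clean-up pass, the hypothetical function below smooths sensor jitter on a single animation channel with a plain moving average; in practice, clean-up also involves the manual corrections and creative adjustments by animators described above.

```python
import numpy as np

def smooth_channel(samples, window=5):
    """Remove high-frequency jitter from one captured animation channel
    (e.g. a joint's rotation over time) with a simple moving average,
    one of the most basic clean-up passes applied to raw mocap data.

    samples : 1D array of per-frame values for a single channel
    window  : number of frames to average over (odd values keep the
              result centered on the current frame)
    """
    kernel = np.ones(window) / window
    # 'same' keeps the curve its original length; the edges are zero-padded,
    # so production tools handle the boundary frames more carefully.
    return np.convolve(samples, kernel, mode="same")
```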
For a motion capture-based production, each project follows a unique pipeline, yet there are essential steps that are fundamental to achieving success. The process begins with pre-production planning, a phase critical to the overall outcome of the project. During initial meetings, the production team, comprising directors, producers, technical directors, motion capture actors, animators, and compositors, uses the script to outline the shot list, assign responsibilities, and iron out other logistical details. This stage sets the groundwork for a smooth production flow.
On the day of the shoot, the team's first task is to ensure all hardware is functioning optimally and to set up the environment with the necessary props. As the shooting progresses, the team can access a real-time visualization, commonly known as pre-visualization, of the character within the environment. This tool is invaluable for directors, who can make immediate on-set adjustments, and for actors, who gain insights into how their performances will translate on screen.
Once the recording is complete, the post-production phase begins. The production team processes the captured data, addressing any errors or glitches that may have occurred. This refined data is then passed on to the animators for cleanup animation, a crucial step in enhancing the motion capture's quality and realism. Finally, the compositor steps in to add visual effects (VFX) and render the final version, bringing the project to its polished, final form. This streamlined yet detailed process is key to the successful creation of motion capture-based productions.
Motion capture technology has undeniably transformed the landscape of 3D character animation within the entertainment sector. This technology, once exclusive to high-budget productions, is becoming increasingly accessible, opening new avenues for a diverse range of projects. As we progress, this trend of technological advancement and broader availability is expected to persist, allowing more productions to leverage the full potential of motion capture.
The integration of artificial intelligence (AI) in motion capture is another exciting development. Tools like Plask and DeepMotion are gaining traction, simplifying and enhancing the animation process. AI integration not only streamlines workflows but also brings a new level of sophistication and realism to character animations.
While it's true that big-budget films are more deeply embedding motion capture into their production pipelines, its impact is not limited to the traditional realms of cinema. Niche and emerging industries are also embracing this technology. Virtual influencers, a rapidly growing sector, rely on motion capture to bring digital personalities to life in a convincing manner. Similarly, the fields of virtual and augmented reality (VR/AR) are employing motion capture to create immersive and interactive 3D characters, significantly enhancing user experience.