There’s No Place Like Home

Cinematographer Robert Presley leads a team of ICG camera operators for Disney’s new 3D performance capture kid-flick, Mars Needs Moms

Minnesota-born writer Margaret Culkin Banning once observed that a mother “never leaves her children at home, even when she doesn’t take them along.” That heartfelt fact certainly hits home for the young hero of Disney’s latest 3D adventure tale, Milo (Seth Green), when he sees his mom (Joan Cusack) shanghaied from his home by Martians. Following her to the Red Planet, Milo meets up with a likeable fellow named Gribble (Dan Fogler) and initiates a rescue effort, while surviving encounters with all manner of extraterrestrial oddities.

Experienced in both live-action (The Time Machine) and animated features (The Prince of Egypt), director/co-writer Simon Wells was a perfect fit for the first 3D motion-capture animated feature from ImageMovers Digital (IMD) made without director Robert Zemeckis at the helm. Zemeckis, who formed ImageMovers in the 1990s and created The Polar Express, Beowulf and A Christmas Carol [see ICG November 2009], would act as producer this time out.

After developing the screenplay based on the popular Berkeley Breathed children’s book, Wells undertook a study of other 3D movies. “Frankly I’m not a fan of things flying out at you,” the filmmaker states, “though there are times it is appropriate. But I am a big fan of depth effects. Not just huge yawning depth, but the kind of depth that reveals itself as you dolly around an object. I thought How to Train Your Dragon was successful in utilizing a [live-action] cinematographer [Roger Deakins, ASC] to make the film more cinematic, and since I like strong light sources and deep shadows, getting our scenes properly lit was a major objective for me as well as for our DP.”

If Zemeckis can be considered the commander-in-chief of the mocap wars, then director of photography Robert Presley must rank as his most distinguished field general. The cinematographer’s involvement with the unique 3D workflow dates back to The Polar Express. The mocap process, often referred to as “performance capture,” utilizes an array of Vicon digital cameras as well as operator-controlled reference cameras set up on a gray-curtained stage (the “volume”) laid out only with tape to indicate walls, and wood framing on the floor to suggest scene objects. Overhead Kino Flos typically illuminate performers in the volume, though this time out Presley sought to embellish those with lights directed horizontally. “With facial capture, computers can be fooled by shadows,” he explains. “These additional Kino Flos filled in the faces, which helped with head cam data.”

Each new setup in the mocap volume takes only 15 minutes to mount, so a talented cast can blow through large page counts in a short amount of time, not unlike actors rehearsing for a stage play.

“The beauty of mocap,” recalls Wells, “is that it doesn’t slow down the actors’ process – they’re free to have at the full scene, giving it all their energy and focus.” The actors, garbed in black and crowned with a head rig for facial capture, each wear their own color patch to aid identification for the animators.

On past 3D Zemeckis shoots, Presley operated a Steadicam®-mounted roving reference camera – dubbed ‘mo-cam’ – while a team of Local 600 camera operators controlled sticks-mounted units. Wells opted to dispense with mo-cam in favor of caster-mounted tripods and the occasional handheld shot. “Simon understood and accepted that if one actor blocked off another, the data on a camera would fall apart,” Presley continues. “We’d just have to establish coverage that let us pick up that data from another camera, or do another take. With motion capture, we’re gathering data for the animators, so it doesn’t have to be picture-perfect.”

Wells’ approach to the 3D workflow was unique – a kind of live-action previsualization before animation even began. While mocap was underway, Guild operators shot video reference with a general notion of the camera angle Wells had in mind. Then editor Wayne Wahrman and Wells did a performance assembly before the data got turned into a 3D render. “I’d do thumbnail drawings of each shot and take them to the Director’s Layout [DLO] unit next door,” Wells recounts, “where Eric Carney and his crew [from L.A.-based previs firm The Third Floor] made up low-res shots, which then came back to editorial. It was like being able to do reshoots the day after filming, but with the knowledge that the performances would always be perfect.”

The responsibilities of Guild camera operators were twofold. “As always, our main objective is to provide solid reference, especially of faces, for the animators,” states lead operator Brian Garbellini. “But when Simon wanted all the characters looking screen left, we’d have to reflect that decision with most of our camera placements, which might leave us with incomplete data. So we’d hold a few cameras back when possible to move around and maintain good frontal coverage on faces. Then there was the matter of framing to Simon’s requirements while also pulling focus, so you’ve got some left-brain/right-brain action going on, and if you’ve just come off a traditional show, it takes a day or two to get right with it.”

Garbellini found Presley’s instincts for coverage invaluable. “Bob reminded me that details like hand gestures are difficult to animate from scratch, so capturing those little moments was important,” Garbellini continues. “Most of our operators were mocap veterans, so we all watched each other’s back. An operator might tell me that an actor in the group was squatting down in rehearsal, so I’d add a low locked-off camera to cover that action.” If there were insufficient cameras to capture all key details, Garbellini would reassign a certain number of cameras on take two, knowing that, if necessary, animators could blend between the takes.
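The article doesn’t describe IMD’s blending tools, but a take-to-take blend is commonly a crossfade of each joint’s rotation over a transition window. Below is a minimal sketch under that assumption, with per-joint quaternion curves; the data layout and function names are illustrative, not IMD’s pipeline.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # flip one input to take the short arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: lerp, then renormalize
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def blend_takes(take_a, take_b, start, length):
    """Crossfade joint rotations from take_a into take_b over `length` frames.

    take_a, take_b: dicts of joint name -> (num_frames, 4) quaternion arrays,
    assumed time-aligned. Frames before `start` come from take_a, frames after
    the window from take_b, with a slerp ramp in between.
    """
    out = {}
    for joint, rots_a in take_a.items():
        rots_b = take_b[joint]
        blended = rots_a.copy()
        end = min(start + length, len(rots_a))
        for f in range(start, end):
            blended[f] = slerp(rots_a[f], rots_b[f], (f - start) / float(length))
        blended[end:] = rots_b[end:]
        out[joint] = blended
    return out
```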

Processing of motion-capture material involved data cleanup and then application to digital characters, which were then incorporated into virtual sets. IMD animation supervisor Huck Wirtz says his team relied on Autodesk® Maya® for primary animation. “MotionBuilder® was good for plugging in and editing the mocap, and for getting feedback on lower-res models, which could be viewed at or near real time,” Wirtz confirms. “After Simon went through DLO and had this video-game-resolution cut of the movie, it could go to animation. That made things a lot more efficient, because when cleaning up motion capture to make it really high fidelity, we worked only on the stuff he chose. That gave us solid reference for composition, from eyelines to how the camera would move.”
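“Cleaning up” mocap covers many operations, and Wirtz doesn’t enumerate IMD’s; one common step is filling the gaps left when markers were hidden from the capture cameras. A minimal sketch of that single step (the array layout is an assumption, not IMD’s format):

```python
import numpy as np

def fill_gaps(marker, valid):
    """Linearly interpolate a marker's position across occluded frames.

    marker: (num_frames, 3) positions; valid: boolean mask of trusted frames.
    Real cleanup also relabels swapped markers, filters jitter and re-solves
    the skeleton; this shows only the gap-filling idea.
    """
    frames = np.arange(len(marker))
    out = marker.copy()
    for axis in range(3):
        out[~valid, axis] = np.interp(frames[~valid], frames[valid], marker[valid, axis])
    return out
```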

Another reference used for full-res animation was a digital “Kabuki Mask” process, which allowed animators to project actors’ faces back onto their characters. “Simon had fallen in love with what the performers did on set,” Wirtz recounts. “While their features were obviously different from those of Milo, Gribble and the Martians, doing a check on their expressions could at times give us a good lead or inspiration.”

Early on, Wells assembled a color script indicating a palette for each scene, then adjusted those colors to suit the emotional effect of any given moment. But he needed live-action-style lighting to get the color script to play as he wanted in CG animation. “Bob wondered why we couldn’t build virtual lights to cast onto characters and objects in a believable, real-world way,” the director states.

To achieve the impression of bounced light sources on A Christmas Carol, Presley had ImageMovers artists create and position extra virtual lights, essentially faking every desired bounced effect. For Mars, the DP’s goal was a more naturalistic lighting scheme.

“Our virtual lights, which IMD called indirect lighting, bounce off walls and objects to create much more realistic effects,” Presley points out. “When illuminating a character’s face, light would bounce off the bridge of his nose and fill the eye sockets a bit.”
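In rendering terms, what Presley describes is bounced diffuse illumination: each surface the key light strikes re-emits a fraction of it onto its neighbors, which is what fills those eye sockets. A minimal single-bounce sketch (the patch representation and names are textbook stand-ins, not IMD’s renderer):

```python
import numpy as np

def lambert(normal, to_light, light_color, albedo):
    """Direct diffuse term: light arriving straight from the source."""
    return albedo * light_color * max(float(np.dot(normal, to_light)), 0.0)

def one_bounce(point, normal, albedo, patches, light_dir, light_color):
    """Gather a single diffuse bounce from surrounding surface patches.

    patches: list of (position, normal, albedo, area) tuples standing in for
    walls and props; each re-emits the direct light it receives. light_dir is
    the direction the light travels, so -light_dir points back at the source.
    """
    indirect = np.zeros(3)
    for p_pos, p_nrm, p_alb, p_area in patches:
        emitted = lambert(p_nrm, -light_dir, light_color, p_alb)
        to_patch = p_pos - point
        dist2 = float(np.dot(to_patch, to_patch))
        to_patch = to_patch / np.sqrt(dist2)
        # geometry term: mutually facing surfaces exchange the most light
        g = max(float(np.dot(normal, to_patch)), 0.0) * \
            max(float(np.dot(p_nrm, -to_patch)), 0.0)
        indirect += emitted * g * p_area / (np.pi * dist2)
    return albedo * indirect
```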

Wells confirms the efficacy of Presley’s approach: “We had a scene with two characters going through corridors, and they decided to render the scene without the characters, just to see what it would look like with these lights on. It came out near-enough perfect, proving that when you have a live-action cinematographer setting lights that work like real lights, the scene will be properly lit.”

The duties of IMD’s visual effects supervisor Kevin Baillie ranged from checking if “flesh” looked properly human (or Martian) in hue to ensuring that CG eyes registered in a way that didn’t land the character square in the middle of the so-called uncanny valley. “Each department had between 10 and 40 people,” Baillie explains, “and most of these artists had their own specialties: one would build the character, another would do the skin and yet another the texturing.”

Since Mars’ action takes place in a variety of locales, the art direction challenges were wide-ranging. “There’s a chase on the surface [of the planet],” Baillie continues. “But most of the time we are underground, either in the squeaky-clean, austere Martian cities or beneath them in a gritty trash world where Milo meets Gribble. There was a lot of complex geometry in these environments, but we had tools that let us know what the camera would see, so we didn’t build or detail anything that wasn’t going to show up.”
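Baillie doesn’t name the visibility tools, but the principle is camera-frustum culling: test each set piece’s bounds against the shot camera and skip detailing anything outside the view. A crude cone-test sketch under that assumption (all names hypothetical):

```python
import numpy as np

def camera_sees(center, radius, cam_pos, cam_dir, fov_deg, far):
    """Approximate the camera's frustum as a cone and test a bounding sphere.

    cam_dir is a unit view vector; fov_deg the full field of view. Far cruder
    than production tooling, but it captures the build-only-what-shows idea.
    """
    to_obj = np.asarray(center, float) - np.asarray(cam_pos, float)
    dist = float(np.linalg.norm(to_obj))
    if dist < 1e-6:
        return True                        # camera is inside the object
    if dist - radius > far:
        return False                       # entirely beyond the far plane
    angle = np.degrees(np.arccos(np.clip(np.dot(to_obj / dist, cam_dir), -1.0, 1.0)))
    widen = np.degrees(np.arcsin(min(radius / dist, 1.0)))  # sphere's angular size
    return angle <= fov_deg / 2.0 + widen
```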

The cities were mostly shiny metal and plastic – a computational nightmare – so IMD staffers devised cheats to avoid impossibly long render times. One of the show’s biggest artistic challenges was a scene of Milo and Gribble falling into a river.

“Our [senior look development] artist Robert Marinic spent months on a technique to grow these glowing, barnacle-encrusted lichen,” Baillie adds. “They cast light onto the terrain, and we could define and control the lichen placement either by painting textures or by having us generate an ambient occlusion pass for the whole environment; wherever objects were close together, you’d get the darkness that comes from partial shadowing, and it would add lichen in those areas where rocks butted up against each other, just like moss in nature.”
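An ambient occlusion pass stores, per surface point, how shadowed that point is by nearby geometry, so the crevices where rocks meet read as dark. Driving growth from that value might look like the sketch below; the ramp values and helper names are illustrative, not Marinic’s actual setup.

```python
import numpy as np

def smoothstep(edge0, edge1, x):
    """Standard cubic ease between two thresholds."""
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def lichen_density(ao):
    """Map occlusion (0 = fully open, 1 = fully shadowed) to growth density:
    the most occluded points, where rocks butt together, grow the most."""
    return smoothstep(0.4, 0.8, ao)        # ramp chosen for illustration

def scatter_lichen(points, ao_values, rng=None):
    """Keep each candidate scatter point with probability equal to its density."""
    if rng is None:
        rng = np.random.default_rng(0)
    density = lichen_density(np.asarray(ao_values))
    keep = rng.random(len(points)) < density
    return [p for p, k in zip(points, keep) if k]
```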

Stereo supervisor Anthony Shafer broke the script down with Wells into units of depth on a per-scene basis. “When the narrative goes strong, depth increases, and when the narrative softens, the depth usually backs off,” Shafer observes. “Our research revealed that matching dimensional depth to emotional levels worked very well; in fact people are tuned to see other humans at a particular level of roundness, so the flatter a character appears, the less engaged a viewer becomes. Our settings for character roundness related to how we felt the character played for the audience in a particular scene.”

Shafer’s bag of tricks relies on fooling the human visual cortex by tapping into emotional dials and changing audience perception of camera perspective. “A [3D animation] stereographer can force the eye into seeing the character as smaller or larger by varying the convergence from each eye,” he adds. “So when Milo felt small, we could actually push him visually in that direction for the audience.”
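Schematically, the dials Shafer describes are the virtual camera pair’s interaxial separation, which controls roundness, and its convergence distance, which sets where the screen plane falls relative to a character. A toy sketch of both relationships, with illustrative numbers rather than production values:

```python
def screen_parallax(depth, interaxial, convergence):
    """On-screen horizontal parallax of a point for a converged camera pair:
    zero at the convergence distance (the screen plane), negative in front
    of it (toward the audience), positive behind it."""
    return interaxial * (depth - convergence) / depth

def depth_budget(intensity, flat=0.02, full=0.08):
    """Map a 0-1 narrative intensity to an interaxial value, echoing Shafer's
    rule that depth swells as a scene intensifies and backs off as it softens."""
    return flat + intensity * (full - flat)

# Converging well in front of Milo pushes him behind the screen plane --
# one way to nudge him toward reading small, per Shafer's description.
print(screen_parallax(depth=4.0, interaxial=depth_budget(0.3), convergence=2.5))
```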

The neatest trick involved putting the proscenium arch on a virtual gimbal. When the audience sees stereo in RealD or Dolby®, they see a world that at times comes out toward them. But, Shafer adds, there’s always awareness of the screen, so a cardinal stereography rule is “never break the frame, because the stereo effect is ruined when one eye has a view that hits this edge. To avoid this, we developed a means to float the screen further out into the audience, thus avoiding edge error and tricking viewers into thinking the images are still behind the screen. This floating window gives us more depth to play with, increasing the dynamic range of stereo, and can affect audience physiology and emotions in other ways. By tilting the window forward, you create an illusion of falling when Milo looks down into a cavern; tilting it back as he looks up at a building causes viewers to sit back in their seats.”
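A floating window is typically built by masking opposite edges of the two eye images: trimming the left edge of the left eye and the right edge of the right eye gives the frame itself negative parallax, floating it off the screen toward the audience, and unequal offsets at top and bottom tilt it the way Shafer describes. A sketch under those conventions (names and units are illustrative):

```python
import numpy as np

def floating_window_masks(width, height, offset_top, offset_bottom):
    """Per-eye edge masks (1 = keep, 0 = black) for a floating stereo window.

    offset_top/offset_bottom: crop widths in pixels at the top and bottom
    rows; equal values float the window forward, unequal values tilt it.
    Multiply each eye's image by its mask before display.
    """
    rows = np.linspace(0.0, 1.0, height)[:, None]
    crop = (1.0 - rows) * offset_top + rows * offset_bottom   # per-row crop
    cols = np.arange(width)[None, :]
    left_eye = (cols >= crop).astype(float)                   # trim left edge
    right_eye = (cols <= (width - 1) - crop).astype(float)    # trim right edge
    return left_eye, right_eye
```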

Shafer worked with Hugh Murray of IMAX to produce a slightly altered version of the film. “The geometry of IMAX theaters and screen dimensions don’t let you set floating windows, so we wrote software allowing us to create a slightly deeper version for the whole film.”

Adds Wells: “There are shots I wish we could have held on longer, but I didn’t anticipate the stereo effect would be so successful. The current cut works best in 2D, but a longer version of the film, one that played a bit more slowly, would have been ideal for 3D venues.”

When Disney announced in early 2010 that ImageMovers Digital was closing, the news came as a surprise to many long-time staffers. Presley says the move heralds a sea change, as he returns to conventional 2D filmmaking and its shorter tenures of involvement.

“When I started on Polar Express, what I thought was a two-month job became two years,” the DP smiles. “And it was a similar commitment for Beowulf and Christmas Carol, because we’re talking about more involvement than just the motion-capture process up front and timing at the end. With Mars, the [virtual] indirect lighting took a long while to work out, and I’m still timing the various releases. I hope to be able to use my motion-capture experience again, but I’m sure getting back to the live-action world will be an interesting experience.”

By Kevin H. Martin / photos by Joseph Lederer / Walt Disney Pictures