Bare Necessities

Avatar meets Rudyard Kipling as Bill Pope, ASC, takes a romp through the high-tech jungle in the classic children’s tale

When cinematographer Bill Pope, ASC, first met with Jon Favreau to discuss the filmmaker’s latest project for Disney, a new adaptation of Rudyard Kipling’s classic The Jungle Book, his reaction was (like the film that ultimately followed) well off the grid. “They explained the process, and I said, ‘I’m sorry, but I’m not the right person for this job. I’m a live-action cinematographer, and I don’t have the tools for working in a purely digital space.’” In the strictest sense, Pope was right. Despite the cinematographer’s extensive credits shooting complex VFX projects like The Matrix franchise and Spider-Man 2 and 3, The Jungle Book was something entirely different. Nevertheless, by project’s end, Pope described the film as “the purest filmmaking experience I’ve ever had.”

Favreau’s initial response wasn’t all that different from Pope’s. “The last thing I wanted to do was be in the middle of the jungle filming with a child actor,” he recounts. “Short hours on location and conditions that don’t lend themselves to filming easily. When you bring a camera into the jungle, it looks beautiful to the naked eye, but on the screen, it just looks like salad.”

With urging from a passionate supporter in Walt Disney Studios Chairman Alan F. Horn, Favreau, of course, did sign on, realizing the only way the project could be done was to use technology first developed by Director James Cameron and Visual Effects Supervisor Rob Legato, ASC, for Cameron’s Avatar, released in 2009. And unlike Disney’s 1967 cel-animated version of The Jungle Book, Favreau wanted to create a photo-real environment and characters.

“Jon wanted the audience to feel like we had shot in India,” Pope remembers. “Part of our goal was to shoot it like it was shot with real cameras – with lens flares, lens aberrations, camera movements and a true handheld feel. It was the opposite of earlier CG films, in that we wanted no shot that couldn’t be captured by a human being.”

Enter Rob Legato, who began working with Favreau and Executive Producer Pete Tobyansen to create a CGI pipeline that looked like cinema shot with real (not virtual) cameras. “I needed to come up with a vocabulary that would translate the filmmaking disciplines that Jon and Bill were familiar with from the analog world, but which would make use of CG animation techniques,” the two-time Oscar winner explains.

“They needed to be able to make intuitive, not intellectual, choices – looking through the lens and moving to the right or left, or raising up a foot. That’s different from telling an animator, ‘Move me up 6.7 inches to the right.’ I had to create a methodology that drove the computer, which was precise but not intuitive, via an analog input device like the ones Bill’s used to.”

The combined production and VFX teams included Pope, Legato, Production Designer Christopher Glass, Animation Supervisor Andrew Jones, On-Set Visual Effects Supervisor Michael Kennedy and VFX Supervisor Adam Valdez, of London-based MPC, the main animation/visual-effects house on the project. MPC created 90 percent of the film’s animation, with Weta Digital, under the supervision of Dan Lemmon, handling any animation involving apes (based on Weta’s stunning success with the Planet of the Apes franchise). Playa Vista-based Digital Domain was also heavily involved in the process, creating with Legato the virtual camera system that would allow the lifelike capture.

Storyboards and conceptual art were created by Favreau and Storyboard Artist/Head of Story Dave Lowery and his team of artists, from which MPC’s digital artists, under the direction of Glass, Jones and Digital Domain’s Virtual Production Supervisor, Gary Roberts – operating as the “Virtual Art Department,” or “VAD” – began building the first level of virtual sets. These included combinations of environments, some based on photography of temples and forests taken during visits to India, along with simple “chess pieces” of the film’s characters to create so-called “decimated assets” – those simple enough to function in a video game–like environment.

The video-game approach was intentional. It allowed Favreau to do an initial virtual scout of the entire film using a controller/joystick; this low-res version made use of a modified version of the Unity video-game engine, with The Jungle Book being the first major project to use a game engine for visualization, according to Roberts.

“It enables Jon to see and manipulate the virtual sets in real time,” says Legato.

“We can actually set cameras and whatever lens is being called out,” adds Roberts, “to allow Jon and the team to explore the set from those positions,” which are saved for later use in the pipeline (as in motion capture). Very basic lighting tools were also available to Pope at this point, though only for global lighting choices.

Favreau walked through each scene on a large monitor with a joystick and noted his preferences. “Jon likes a lot of people in the room with a lot of ideas – and then he chooses what he wants and guides you,” Pope shares. “And it was just like a real scout. He’d move the camera around and say, ‘You know, this isn’t a bad angle,’ or ‘He could come down this path.’” In fact, Favreau might determine that a tree or a river was put in an inopportune spot, and, as Roberts recalls, “because it’s rendered in the game engine pipeline, we can make the change in real time, right there.”
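
What that real-time scout amounts to under the hood can be pictured in a few lines of code – rendered here in Python for readability (a Unity tool would actually be written in C#), with every name hypothetical: joystick axes move a free camera each frame, and any setup the director likes is bookmarked for downstream use.

```python
# Hypothetical sketch of a virtual-scout camera: joystick axes roam the
# set in real time, and favored setups are saved for later pipeline use.
import math
from dataclasses import dataclass, field

@dataclass
class ScoutCamera:
    position: list = field(default_factory=lambda: [0.0, 1.7, 0.0])  # meters
    yaw: float = 0.0       # look left/right, degrees
    pitch: float = 0.0     # look up/down, degrees
    focal_mm: float = 35.0

    def update(self, move_x, move_z, turn, tilt, dt, speed=2.0):
        """Apply one frame of joystick input (each axis in -1..1)."""
        heading = math.radians(self.yaw)
        # Translate in the camera's ground-plane heading.
        self.position[0] += (math.sin(heading) * move_z
                             + math.cos(heading) * move_x) * speed * dt
        self.position[2] += (math.cos(heading) * move_z
                             - math.sin(heading) * move_x) * speed * dt
        self.yaw += turn * 60.0 * dt
        self.pitch = max(-89.0, min(89.0, self.pitch + tilt * 60.0 * dt))

bookmarks = []  # camera setups saved for reuse downstream

cam = ScoutCamera()
cam.update(move_x=0.0, move_z=1.0, turn=0.2, tilt=0.0, dt=1 / 60)
bookmarks.append((list(cam.position), cam.yaw, cam.pitch, cam.focal_mm))
```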

After the virtual settings were agreed upon, the team captured then-10-year-old actor Neel Sethi (playing Mowgli) and stunt actors/troupe players (performing the animal characters with whom Mowgli interacts) on Digital Domain’s motion-capture stage in Playa Vista. Rough versions of the set – ramps, slopes and other features – were constructed by the art department to approximate the terrain in the virtual sets, and then installed on the 60-foot-by-80-foot stage. Taking advantage of the Unity game engine and the already-designed virtual sets, Favreau and Pope could change blocking or sets in real time. “It looked like a video game,” Favreau smiles. “But I could walk in the virtual set and direct my actors.”

For about ten shots in the film, Legato and Kennedy went to a rocky exterior location with parkour performers acting out chases, leaps and other actions. The scenes, mostly for the film’s opening and closing where Mowgli is chased by his mentor, Bagheera, the black panther, were photographed using a dozen video cameras, placed in varying positions and angles along the actors’ paths, in a process known as rotocapture.

“The actors are wearing tracking dots, but the cameras are video, not infrared,” explains Kennedy. Digital Domain then used a “RotoMotion” system to extract movement and create rough moving characters, similar to those created by motion capture.
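
Digital Domain hasn’t published RotoMotion’s internals, but the core problem – recovering a tracking dot’s 3D position from several calibrated video views – is classic linear triangulation. A minimal sketch, assuming each camera’s 3×4 projection matrix is known from calibration:

```python
import numpy as np

def triangulate(projections, points_2d):
    """Least-squares 3D position of one tracking dot seen by several
    calibrated cameras (direct linear transform).

    projections : list of 3x4 numpy projection matrices P = K[R|t]
    points_2d   : list of (x, y) pixel positions of the dot in each view
    """
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        # Each view contributes two linear constraints on the 3D point.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # Homogeneous solution: right singular vector of the smallest value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```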

The mocap footage was then edited into a single “master scene” for each sequence, composed of multiple performances and takes, placed into the virtual sets as blocked by Favreau and Pope. In addition, Jones had his Digital Domain LAB animators create a library of walk cycles, sitting cycles and other moves, which were then substituted for the rough performances done for the animal characters on the mocap stage. “We’d use mocap as a blocking tool, and then slot in the proper animal walk,” he explains.
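
Jones’ approach can be pictured as a simple substitution: the mocap pass supplies where the animal’s root travels, the library clip supplies how it moves, and the distance traveled along the path sets the phase of the loop. A hypothetical sketch of that slotting step:

```python
# Illustrative only: slot a library walk cycle under a mocap blocking
# path by advancing the clip's phase with distance traveled.
import numpy as np

def apply_walk_cycle(blocking_root, clip_poses, stride_length):
    """blocking_root : (N, 3) root positions per frame from the mocap stage
    clip_poses      : (M, ...) one loop of a library walk cycle
    stride_length   : ground distance covered by one loop of the clip
    Returns a clip pose index for every blocking frame."""
    steps = np.linalg.norm(np.diff(blocking_root, axis=0), axis=1)
    distance = np.concatenate([[0.0], np.cumsum(steps)])
    phase = (distance / stride_length) % 1.0      # 0..1 through the loop
    return (phase * len(clip_poses)).astype(int) % len(clip_poses)

# e.g., a 24-frame walk loop covering 1.4m per stride:
idx = apply_walk_cycle(np.array([[0., 0., 0.], [0., 0., .1], [0., 0., .25]]),
                       np.zeros((24, 3)), stride_length=1.4)
```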

It was at this point that Pope placed his virtual camera, utilizing the digital-and-analog device system Legato developed with Digital Domain, which combined Autodesk MotionBuilder character-animation software with a Unity-based application called Photon – a collaborative development effort between Digital Domain and Microsoft.

“MotionBuilder works well in a real-time domain,” Roberts details. “But it has rendering limitations.” Adding Photon allowed Pope to make use of the game engine’s ability to render quickly, as well as offering him the ability to light the scene in real time. “As I’m working, I can tell it where the sun is, where the moon is, and add fill lights and bounce lights. I can actually see the lighting and how it’s behaving,” the DP relates.
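
What makes that live feedback possible, in game-engine terms, is a shading model cheap enough to re-evaluate every frame, so dragging the sun updates every visible surface instantly. A toy illustration of the idea (not Photon’s actual shader):

```python
# Toy per-point shading: a directional key (sun or moon) plus flat fill
# and bounce terms. Vectors are assumed normalized. Illustrative only.
def shade(normal, sun_dir, sun_rgb, fill_rgb, bounce_rgb):
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, sun_dir)))
    return tuple(ndotl * k + f + b
                 for k, f, b in zip(sun_rgb, fill_rgb, bounce_rgb))

# Moving the 'sun' re-lights every surface on the next frame:
print(shade((0, 1, 0), (0.0, 0.707, 0.707), (1.0, 0.95, 0.9),
            (0.08, 0.09, 0.12), (0.05, 0.06, 0.04)))
```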

This virtual camera (or “camera capture”) was used on a 65-foot-by-35-foot stage at Digital Domain, as well as on a 20-foot-by-30-foot stage at the film’s production office in Playa Vista, and then in a smaller volume on Stage 2 at Los Angeles Center Studios (LACS) in downtown L.A., where additional shots could be created adjacent to where live-action filming was taking place.

Roberts says the virtual camera capture process allowed Pope to direct the exact path of the camera move, just as he would on a live-action set. “With each shot recorded in the game engine, Bill was able to pull focus dynamically as well as play with how light and shadow falls on the characters and environment as a virtual gaffer/lighter,” Roberts explains. “He could work with the production designers and art directors to evoke the feeling of each sequence by adding effects such as atmospherics, bloom, fire, rain, and dust.”

Pope (and sometimes Legato) would operate inside the “shark cage,” working with two Digital Domain technical directors, Girish Balakrishnan and John Brennan, who operated Photon and MotionBuilder, respectively, under the direction of April Warren, Digital Domain’s virtual production LAB supervisor on site, and with the help of Pipeline Supervisor Ryan Beagan. A virtual production editor would live-cut the sequence and review it with Pope in real time. The “physical virtual camera” was a custom carbon-fiber and aluminum housing with tracking features, allowing its position and orientation to be tracked by a mini mocap system. On either side were custom proportional joystick inputs, each with four or five buttons; an 8-inch OLED field monitor let Pope view what his “camera” saw in the virtual environment.

Using joysticks, Pope could roam anywhere within the environment to set up shots. “He can offset himself in the digital world, just like in a video game,” Roberts adds. “He can boom up, down, push in and out, track left and track right.” The joystick buttons were customized by Digital Domain for whatever Pope preferred.

“Within Unity and MotionBuilder, we simulated the Alexa camera Bill used onstage,” Roberts continues, “along with the prime lenses he prefers, as accurately as we could, including modeling depth of field, so that the camera and lighting react the way he would expect them to in the real world.”
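
Modeling depth of field for a matched lens comes down to thin-lens math: the diameter of the blur circle on the sensor falls out of focal length, stop and focus distance. A back-of-the-envelope version, with hypothetical (not production) values:

```python
import math

def blur_circle_mm(focal_mm, f_stop, focus_m, subject_m):
    """Diameter (mm, on the sensor) of the blur circle for a subject at
    subject_m when the lens is focused at focus_m -- the thin-lens
    depth-of-field model a virtual camera can evaluate per pixel."""
    f = focal_mm / 1000.0          # work in meters
    aperture = f / f_stop          # entrance-pupil diameter
    magnification = f / (focus_m - f)
    blur_m = aperture * magnification * abs(subject_m - focus_m) / subject_m
    return blur_m * 1000.0

# A 75mm prime at T2, focused at 3m: a tree 10m back renders clearly soft.
print(blur_circle_mm(75, 2.0, 3.0, 10.0))   # ~0.67 mm on the sensor
```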

Pope developed camera moves, recorded in the Unity system, in two ways. One, known as “parenting,” allowed Pope to instruct the operators to place the camera a fixed distance, say, behind Mowgli as the boy traversed the scene. “Bill could raise or lower his camera, or move left or right, kind of like raising the arm on a camera car,” Legato explains. “It’s like a hand-operated override of a pre-programmed path.”
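
In transform terms, parenting composes a fixed local offset with the character’s moving root, with the operator’s live input added on top. A hypothetical sketch:

```python
import numpy as np

def parented_camera(char_pos, char_yaw, offset_local, operator_nudge):
    """Keep the camera at a fixed local offset from a moving character,
    then add the operator's live joystick nudge on top -- a hand-operated
    override of a pre-programmed path."""
    c, s = np.cos(char_yaw), np.sin(char_yaw)
    rot_y = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])   # rotation about the vertical axis
    return np.asarray(char_pos) + rot_y @ np.asarray(offset_local) \
                                + np.asarray(operator_nudge)

# Two meters behind and half a meter above the character, boomed up 0.3m:
cam_pos = parented_camera([5.0, 0.0, 12.0], 0.0,
                          [0.0, 0.5, -2.0], [0.0, 0.3, 0.0])
```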

The other method allowed Pope to set key frames by moving his camera (with the joysticks) into key positions along the camera’s path and having Brennan store each location before moving on to the next. Pope could then specify the type, speed and other parameters of the movement between those points when the shot was run – all of which could be adjusted.
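
One standard way to turn stored key positions into a smooth, adjustable move – sketched here for illustration, not as Digital Domain’s actual solver – is spline interpolation with a separate timing curve:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Smooth position between keys p1 and p2 at parameter t in [0, 1],
    with p0 and p3 as the neighboring keys shaping the curve."""
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1) + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t**2
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t**3)

def ease_in_out(t):
    """Remap time so the move accelerates and settles, not lurches."""
    return t * t * (3 - 2 * t)   # smoothstep

# usage: pos = catmull_rom(k0, k1, k2, k3, ease_in_out(0.4))
```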

“They’d play the path back to him,” describes Roberts, “and he’d say, ‘Smooth it out, flatten the curve here,’ and then he’d operate on top of it,” adding the natural camera movements he would on a real set, as if the camera were simply sitting on a fluid head on a dolly being pushed to his instructions. (Pope, in fact, did place his V-Cam rig on top of a fluid head, itself on a modified tripod in the shark cage.)

The Digital Domain Virtual Production team also created a system that allowed Pope to create, essentially, a Steadicam move by use of a two-man system. A grip would move a carbon fiber stick bearing mocap-tracking markers along the path of motion. Pope’s V-Cam rig was electronically tied to the top of that stick, effectively making the two pieces operate as one. “One person is essentially acting as the dolly operator, or the head of the crane, and viewing their movement through Bill’s camera,” Legato explains. “And Bill then operates his camera on top of it. It’s like taking apart a Steadicam.” After shooting in Camera Capture, Pope would sit with the Digital Domain team and make further camera adjustments. Working in Photon, he would then finalize his lighting scheme with MPC’s lead lighter, Michael Hipp, with the final image rendered out in Pixar’s RenderMan.

After scenes were cut together by Favreau and editor Mark Livolsi, the creative team reviewed again for possible changes in blocking, lighting and camera movement. “There are scenes I shot 20 times,” Pope recalls. “I think there was only one time where they said, ‘I think this is it.’ And I almost fell off my chair.”

From the completed shots, a TechViz pass was performed by MPC in preparation for filming the live-action elements. The TechViz information allowed the art department to prepare and construct physical set pieces, and provided lighting and grip data for those teams. “When we arrived on set,” notes Roberts, “everything on the stage was pretty accurate to what was shot digitally.”

Twenty-eight live-action sequences were shot at LACS, all of them involving Mowgli. “I’m not sure I can tell one raccoon from another, but raccoons can,” Animation Supervisor Jones offers. “And it’s the same with humans. From the time we’re babies, we study human faces constantly and look for little mood changes in the details. If the slightest thing is off, it feels unnatural and not living. It gets creepy really quickly.” Thus, Mowgli was shot live, in what Favreau describes as “a glorified element shoot,” to complement what had already been created for the rest of the film digitally.

Favreau and Pope shot in native 3D with a Cameron Pace Alexa M Smart rig. “[Post-conversion] is great for a robot,” Favreau states. “But for the subtleties of a child’s face, especially in close-up – those are things where conversion starts to show its hand.”

Pope ran blind tests during prep for cameras and lenses, and, he says, “Jon, Rob and Chris always chose the Arri Alexa and Panavision Primo lenses, mainly for the level of sharpness and color rendition.” On the suggestion of 1st AC E.J. Misisco, veteran Steadicam/camera operator Roberto De Angelis came aboard to help. He and Pope would shoot on Stage 1, while Legato ran a 2nd Unit on Stage 2, borrowing De Angelis for the more technically demanding shots. “It became kind of a tag team, with Bill on one stage and me on the other,” Legato recalls. “I would set something up, and he would come over and approve the lighting or make a suggestion.”

Local 600 members included Misisco, pulling focus for De Angelis, backed up by 2nd AC Liam Sinnott. When Pope was running B-camera, he was assisted by 1st AC Mike Klimchak and 2nd AC Billy McConnell, Jr., who would also work with Legato on 2nd Unit. Robin Charters served as DIT, with experienced 3D technician Jeff Rios pulling convergence on set.

The day began at 7:00 a.m. and would need to wrap by 4:30 p.m. due to the limitations of working with a child actor. After the 4:30 wrap, Pope would switch to shooting virtual camera until around 7:00 p.m. Throughout, the crew would view animatics (i.e., the finished edit of the Photon version of the scene), with comments from Favreau.

“That was especially helpful for framing,” De Angelis shares. “If Mowgli is talking to Baloo, the bear, who has a very large head, I would know to leave two-thirds of the frame for Baloo, and put Mowgli in one-third.” Kennedy handed out a book with more TechViz data, including everything from lighting to locations of crane bases.

Detailed set pieces, involving anything that Sethi had to interact with physically, were also built. Favreau brought in puppeteers from Henson to interact with Sethi, either in a full-size two-man rubber suit (for Baloo) or simply with eyes attached to a hand.

“We put the most care into those scenes,” Favreau explains. “I’ve acted myself with enough tennis balls, and they don’t make the best scene partners.” Jones was also on hand to offer suggestions for reactions from the young actor, knowing how the animal character being addressed – to be animated later – would ultimately behave in the scene.

For long-running chase scenes, runs were broken into segments, with De Angelis covering Sethi handheld or on an electric cart with a Technocrane on top. “Whatever combination that was needed to achieve the best result,” he says. Pope also had the operator purposely build in imperfections to help sell the “shot in the wild” sense of reality. “I’d tell Roberto, ‘Let him get ahead of you,’ so it would look like, ‘Oh, the operator couldn’t keep up.’”

The team made use of an ingenious method of casting shadows onto Sethi, both from his animated scene partners and from dapple by non-existent overhead jungle foliage. MPC pre-animated the shadows of the characters and leaves, and pairs of high-powered projectors shot black mattes of those shadows onto Sethi. (Unlike painting out key lights later, this method maintained skin detail throughout.) For a walk-and-talk with Baloo, Sethi was placed on a 30-foot-diameter turntable, with leaves and Baloo’s shadows appearing at pre-determined spots along their path.

As on Avatar, Glenn Derry and Technoprops operated a Simulcam system on set, to provide a real-time combination of the in-camera image with the Photon/Unity settings and animation. Cameras were outfitted with LED tracking markers, picked up by four or five mocap camera towers spread around the set, as well as IMU sensors, to provide accurate rotational data for the cameras as they were operated.
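
Combining the two data streams is a textbook sensor-fusion problem: the gyro is smooth but drifts, while the optical markers are drift-free but noisier and lower-rate. A complementary filter on a single rotation axis shows the idea (illustrative only):

```python
def fuse_yaw(optical_yaw, gyro_rate, prev_yaw, dt, blend=0.02):
    """Complementary-filter sketch for one rotation axis: integrate the
    IMU gyro for smooth high-rate motion, and bleed in the optical
    (marker-based) yaw to cancel gyro drift. Angles in degrees,
    gyro_rate in degrees/second. Illustrative only."""
    predicted = prev_yaw + gyro_rate * dt        # smooth but drifts
    return (1 - blend) * predicted + blend * optical_yaw
```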

Digital Domain and MPC created 360-degree environment spheres, or “bubblecam” images, that Gaffer Bob Finley could view on set with an iPad. Finley could see, from any position, the lighting schemes Pope and Hipp had decided upon after the Camera Capture step, displayed in grayscale via MPC’s completed animation, and adjust his on-set instruments accordingly. “We might tilt it upwards and see that there were more trees above in the set, and realize we needed to add more leaves,” courtesy of blue-painted set foliage, “to create more dappling,” Pope describes.
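
If those spheres were standard latitude-longitude panoramas – a common choice, though the article doesn’t specify the projection – the iPad view reduces to mapping a view direction to pixel coordinates:

```python
import math

def latlong_uv(direction):
    """Map a 3D view direction to (u, v) in an equirectangular
    ('bubblecam'-style) panorama, u and v in [0, 1]. Assumes a
    standard latitude-longitude projection."""
    x, y, z = direction
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)   # longitude
    v = 0.5 - math.asin(y) / math.pi              # latitude
    return u, v
```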

The iterative camera process required Pope to dash across to Stage 2 and create a new move in the shark cage (with the V-cam) if an on-stage setup was off. “It was like live-action dailies,” Legato laughs. “We’d just keep shooting from different angles until we were happy.”

Though obviously complex and tedious, the process was one of the most liberating Pope has experienced in his long and stellar career with VFX-driven franchise films.

“I had the leisure of a perfectly repeatable action that I could shoot over and over until I got it where I wanted it,” he concludes. “It was like making your thoughts come alive, right in front of you.”

“Combining photo-real environments and animals, shot in a way that replicates the true analog feel of a real camera,” adds Legato, “made it feel like every shot was something that could only be shot live. It blends all the art forms together in a way that belies how it was done.”

by Matt Hurwitz / Behind-the-scenes photos by Glen Wilson / Frame grabs courtesy of Walt Disney Pictures