Jim Berney – Divergent

After obtaining a master’s in computer science and working at DARPA (part of the U.S. Department of Defense), Jim Berney entered the industry at Metrolight Studios in 1995 as a technical director. Berney then spent a decade-and-a-half at Sony Pictures Imageworks, rising to become CG supervisor and then visual effects supervisor. Along the way, he helped launch a certain young wizard on his way to Hogwarts in Harry Potter and the Sorcerer’s Stone, took a trip to Middle-earth for The Two Towers, toiled on the Matrix sequels and nabbed an Oscar nomination for The Chronicles of Narnia: The Lion, the Witch and the Wardrobe. His latest effort, Divergent, is (surprisingly) his first that originated digitally. The Chicago-set feature’s amalgamation of in-camera and CGI reflects both Berney’s strong technical background and his on-the-job experience with traditional film effects, as writer Kevin H. Martin discovered.

ICG: Over the past two decades, a “whole cloth CG” approach has become the default mindset, but there are still those who take the “right tool for the right job” approach. Is there a balancing point between the two? Jim Berney: I think we might be getting there. I’m kind of second-generation visual effects, whereas the people I first worked with [at Sony Pictures Imageworks], like Ken Ralston, Rob Legato, ASC, and John Dykstra, ASC, are first generation, and are about getting it in camera. Now there are third-generation guys who want to do it all CG. I have a really strong tech background in digital. But if I can get it in camera, that is still my preference. That arose out of the great relationship I had while working with John [Dykstra]. His experience with practical and visual effects was immense, but his experience with computers was mostly in motion-control applications. It was a mutually beneficial process, educating him while he educated us. Jerome Chen was another supervisor at SPI who taught me a lot.

That would have been on Stuart Little, back when the issue of lighting and rendering fur was extremely problematic. Not just fur, but fabric as well. What kind of material drapes well on a mouse? [Laughs.] The tech heads on Stuart focused on real, mathematically based lighting. My major was in algorithms for just such lighting. But being true to reality didn’t work for the film. When Stuart was over against a wall in the corner, John wanted a rim light on him. Well, there was no justification for that, except that this is your lead character and you need to make him look good. The argument made by other parties was to be true to the environment, but John said, “I don’t care! I want a Doris Day light on him.” And he had a point: if we had been shooting a real mouse for the movie, the DP would have tricked out some special light to somehow put him in backlight. It took a while for some folks to realize that kind of cheat is just being cinematic.

Between Stuart and Harry Potter and Narnia, you had lots of experience with digital characters. That all came out of my tech background, but involved learning character rigging and musculature as well as animation, including facial performance for dialog. Then I got into environmental work for I Am Legend, making the film’s New York look like New York. It was largely a matter of deleting signs of life, and then aging things somewhat. We treated elements of the city in different ways rather than just painting over every inch of it, which would have made the whole thing look like a matte painting, thus sacrificing the very qualities that sell the shot. This was usually a matter of set dressing the hero foreground, and then selectively adding digital in the mid-ground.

Was your Legend work instrumental in getting you onto Divergent? I think so. For Divergent there was an early argument that we stay on stage and not go to Chicago, but we’d have lost all the built-in advantages of a real environment, with all that sense of history and feeling of reality, one that could be dressed to make it camera ready. Mostly we took out modern-day stuff, then added fans, cables and damage plus some new buildings. We left 80 percent of the location untouched. If you’re not getting the full benefit of location, then you might as well just shoot it all green screen.

The sequence in Tris’ mind with the mirror room sounds as though it required a lot of preplanning and conceptual gymnastics. The mirror was my biggest challenge and terrified me for months. [Director] Neil [Burger] wanted everything to look like we shot it for real, so we were on the same page right away, but it hurt your head to think about what he wanted to happen with these mirrors. She looks at a mirror on one wall of the room, and then as she looks around, the whole room fills in with mirrors, becoming a kind of infinity room, where she encounters and walks around herself in what we called The Do-Si-Do. The camera travels through the mirror plane along with her as she walks through a geographically mirrored landscape. Some people suggested mirroring the whole room and just painting the cameraman out, but that would only work for a few parts, and wouldn’t sustain, and I didn’t want to do the whole thing as CGI, either.

Sounds like previsualization would be instrumental in working all this out. We broke the mirror sequence into separate challenges while talking with Neil and [previs house] The Third Floor, reconciling the concepts with logistics so we wouldn’t waste effort previsualizing something unattainable in the real world. We only had a short time to shoot it, so [VFX house] Method worked out every detail, with a comprehensive technical visualization. They surveyed everything for the eight cameras needed to get all these reflection elements. Each went in a specific spot, some mounted in unusual rigs, plus we had a weird lighting setup and green screen. They worked in a way that was backwards from the norm, taking tech info from Maya and plugging that into the survey tool, so it would show and mark each of those eight camera positions for close to thirty shots, including tilt and lens information for [cinematographer] Alwin Küchler.

And I assume you had to decide the reality of the sequence, like whether to ground the elements with shadows. Since we didn’t want to stylize the thing, I couldn’t cheat those aspects we most wanted to get in-camera. Plus I didn’t want to cop out and do face replacements. So all those reflection elements from the eight cameras had to be replicated within the render on cards to make [the image] look like it goes to infinity. There were some digital pickups for the most distant aspects, but the first few mirrors – including the entire “hero” ones – are all photography.
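The geometry behind replicating those reflection elements comes down to mirroring positions across the mirror plane, over and over, to build the infinity series of images. A minimal sketch of that math (the function names and two-mirror setup are illustrative, not anything from the production pipeline):

```python
# Illustrative sketch, not production code: a point p reflected across a
# mirror plane (through point o, with unit normal n) lands at
#   p' = p - 2 * dot(p - o, n) * n.
# Bouncing alternately between two facing mirrors generates the chain of
# virtual positions that makes a room "go to infinity."

def reflect_point(p, o, n):
    """Mirror point p across the plane through o with unit normal n."""
    d = sum((pi - oi) * ni for pi, oi, ni in zip(p, o, n))
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, n))

def infinity_positions(p, mirror_a, mirror_b, bounces):
    """Alternate reflections between two facing mirrors, each an (o, n) pair.

    Returns the successive virtual positions of p, nearest image first.
    """
    positions = []
    for i in range(bounces):
        o, n = mirror_a if i % 2 == 0 else mirror_b
        p = reflect_point(p, o, n)
        positions.append(p)
    return positions
```

For example, a subject standing one unit in front of a mirror at x = 0, with a facing mirror at x = 4, produces virtual images at x = -1, 9, -9, 17, and so on, receding in both directions; in practice each of those images was a photographed element placed on a card at the corresponding depth in the render.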

How did that work? We came up with a weird camera rig with two cameras in opposition to one another when she does this walk around herself, with the cameras rotating while she circles; her eye-line had to be directed to the point in space where her own face is reflected, not too far away or you’d have a wrong-eyed stare. So I rigged another gag that traveled along where her other head would be that gave her a solid eye-line throughout. We had elaborate video splitting while shooting that let us see if this was all coming together as we went along. It was seriously bitchin’ that Method pulled this off! They did three-quarters of the movie, including the city and the train, while Scanline did the pit, Soho the scoreboards and Wormstyle a lot of paintouts.

With so much visual effects work becoming invisible, does that raise the bar for the shots that are extremely ambitious, in effect calling out their presence as effects? It does make the big effects shots harder to put across, but sometimes the design of the shot is so good that the audience will just appreciate it without becoming distracted. Rob Legato flew the camera right through the floor in What Lies Beneath, and with Ken Ralston on Contact, there’s a shot running up stairs with a girl but winding up in the mirror opposite her. These were ambitious, impressive shots that remind you of the potential for visual impact.

It sounds like the “We’ll deal with it in post” mentality is something you work to avoid. Some want to do their lighting in the DI, and that scares me. It’s probably more comfortable to make a call in the DI when they’re sitting on a couch with a cappuccino in hand. But as a vendor, I’d prefer getting an answer up front, because a choice in the DI might sacrifice the subtleties of my design. There’s no reason to give up the option of tweaking in the DI, but why not be happy with it now? You have to pick out the costumes before shooting, but if all of a sudden you had digital clothing, would you shoot somebody in neutral colors now and then figure out the costume afterward?

Do you see the industry moving toward setting up standards, at least in the short-term? I’m starting to see some standardization in colorspace, and how we deal with the LUTs. Most VFX houses use Nuke, so I tell them, “Here’s my color pipeline, apply these CDLs to create QuickTimes that match our dailies,” and that makes it so we don’t have any popping going on. We deliver it back Raw, and things are okay from there. You know, this is actually my first digital show. Even with all its effects, Green Lantern was still shot on film.
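The CDL workflow Berney describes is the ASC Color Decision List: a per-channel slope/offset/power transform plus a saturation value, passed to each vendor so their QuickTimes match dailies. A minimal sketch of what applying one CDL means (a hand-rolled illustration of the published ASC formula, not Nuke's implementation):

```python
# Sketch of the ASC CDL transform: per channel,
#   out = clamp(in * slope + offset) ** power,
# followed by a saturation blend around the Rec. 709 luma.
# Values here are assumed to be floats in a 0..1 working space.

def apply_cdl(rgb, slope, offset, power, saturation):
    """Apply an ASC CDL (slope/offset/power + saturation) to one RGB triple."""
    graded = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = v * s + o
        v = max(v, 0.0)  # clamp negatives before the power function
        graded.append(v ** p)

    # Saturation step: push each channel toward or away from the luma
    luma = 0.2126 * graded[0] + 0.7152 * graded[1] + 0.0722 * graded[2]
    return tuple(luma + saturation * (c - luma) for c in graded)
```

With identity values (slope 1, offset 0, power 1, saturation 1) the image passes through untouched, which is exactly why handing every vendor the same small set of numbers keeps shots from popping against dailies.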

Was everybody on board with that call? We shot anamorphic on Legend to get us a gritty feeling, and Neil wanted that same feel here. I thought this one needed the film look, but Alwin was able to get those PVintage lenses, and that helped a lot. Also, when Tris entered her fear landscape, we went anamorphic. The aspect ratio is the same, but the flares are different, and it’s warpy on the edges with a weird, unclean depth of field, which again conveyed a film quality. There are newer anamorphics for Alexa, but they’re just so clean!

Between 2K, 4K and HFR, it seems like every show needs a custom pipeline. We’re 2K on this; everybody gets a 3K scan, but then it comes down to 2K for pipeline and rendering reasons. There are still arguments about how worthwhile 4K is, but when displays standardize at that level, it’ll probably all change and get widespread acceptance. Then we’ll have the 24 frames per second versus 48 or 96 arguments to go over. [Laughs.]