America’s leading tech conferences – HPA, NAB, and Cine Gear – served up an enticing mix of new technology, while still keeping the camera department top of mind.
by David Geffner / Photos Courtesy of HPA & NAB / Cine Gear Expo LA Photos by Troy Harvey
As I’ve noted in this space for the last few years, A.I. (artificial intelligence) continues to grab headlines as the biggest technological disruptor this industry has encountered in more than a century. Consequently, there was plenty of A.I.-driven content and products at the three major U.S. tech conferences in 2025: the HPA Tech Retreat, NAB Show Las Vegas, and the even more old-school-minded Cine Gear Expo LA. But the glut of A.I. news doesn’t necessarily mean vendors have abandoned forward momentum for the technology that has driven this industry since its inception – the camera.
In fact, meeting privately with reps from FUJIFILM North America at NAB 2025 about the company’s first-ever cinema camera, the GFX ETERNA, I found a room full of professionals fully dedicated to pushing camera technology to new heights. Certainly, the long runway Fujifilm has opted to give the ETERNA has helped sustain the hype, and the camera’s specs alone are promising. The ETERNA has a massive native sensor that supports DCI 8K and 6.3K in Super 35 mm; 12-bit RAW and 4:2:2 10-bit uncompressed HDMI recording; 20 different types of film simulations; a host of 3D LUTs for F-Log2/F-Log2 C; and even a 16 × 9, 2000-nit touchscreen monitor. But even more promising (particularly for Local 600 camera teams) is that it’s coming from a company with a rich history of creating cinema film stocks.
“The reason it’s called the ETERNA,” FUJIFILM N.A. Field Marketing Manager Michael Bulbenko told me, “is precisely because of the association that name has with Fujifilm’s history in producing film negatives. What makes the ETERNA unique among contemporary digital cameras is that it’s based on the GF sensor format we’ve had for quite a few years. So, while the ARRI 65 sensor, for example, is wider, the ETERNA’s sensor [43.8 × 32.9 mm] is taller – which puts us in a place no other manufacturer has gone.” That includes 4:3 open-gate capture, dual base ISOs of 800/3200, and built-in Frame.io integration. “I think what has surprised everyone the most [at NAB],” Bulbenko added, “is a top battery slot that charges from the V-Mount 25 volts. With an on-board battery always charging, users can hot-swap this camera; they’ll never need to shut it down while shooting.”
The ETERNA comes with a native GF lens mount, which accommodates a range of Fujinon lenses, including a new Fujinon 32-90 mm power zoom that won’t fit any other camera. (The system comes with a PL mount adapter.) Color processing on the camera, while not finalized at press time, sounds impressive, with three different F-Log curves in Rec.2020, including Fujifilm’s “Cinema F-Log Curve,” which offers a color gamut covering roughly 90 percent of ACES. As Victor Ha, Vice President, Electronic Imaging Division & Optical Devices Division, offered, “Color and image processing have always been core to Fujifilm’s mission, so this camera was designed with filmmakers in mind. Of course, the lightweight design, the large sensor, and a proprietary Fujinon lens that offers iris and zoom controls are all features that the live-sports and live-event world will love as well.” As to why the ETERNA and why now, Bulbenko said, “We’ve had these GFX cameras in the market for a few years, and everyone loves their beautiful cinema-quality video. Matty Libatique just used them on an upcoming Spike Lee movie [Highest 2 Lowest]. We’ve had Local 600 Directors of Photography begging us to put this camera in a box. So we did.”

Another company forever expanding the notion of what can go “in a box” is RED Digital Cinema, whose landmark RED One camera (4K at 60 fps) was announced by RED founder Jim Jannard at the 2006 NAB Show. Fast-forward nearly two decades, and RED has continued to be a pioneer in high-resolution capture, introducing models like the V-Raptor, which features a VistaVision 8K sensor that can crop to smaller formats, such as Super 35. When RED was acquired by venerable still-camera/lens maker Nikon in 2024 (for a reported $87 million), industry experts were mildly perplexed by the marriage, even though Nikon had shown intentions to enter the digital cinematography market. So, what then was RED touting at NAB 2025? That the V-Raptor [X] Z Mount and KOMODO-X Z Mount cinema cameras are now compatible with Nikon’s Z mount lens series.
“What’s so exciting about the V-Raptor X and KOMODO-X having integrated the Z mount,” explained Jeff Goodman, Vice President of Product Management, RED Digital Cinema, “is that RED has never, historically, had a first-party mount or optics internal to the company. Now, under the same umbrella, we have this full chain – from the codec on down – that can be optimized to Nikon’s products. One cool thing about this integration is that the Z mount has the shallowest flange depth on the market at this level of digital capture, which means there is a wide range of adaptability in bringing in lenses from another ecosystem.
“We’ve also been working very closely with Nikon’s engineering team,” Goodman continued, “to overhaul their autofocus system, which on a still camera would not require a function like ‘smooth iris,’ as we have designed. In addition to the improved autofocus performance, RED users now have the option to use all the NIKKOR F lenses, from the AI Nikkor series onward, including popular vintage lenses, by pairing them with the Mount Adapter FTZ II. And both products support the power zoom control of the new NIKKOR Z 28-135 mm f/4 PZ. Nikon brings so many years of expertise in lens quality, and RED brings years of experience in how best to adapt their glass to digital cinema.”
Not to be outdone, Sony was also pushing the envelope of new camera technology with products like the company’s first iteration of a camera tracking system for virtual production. Speaking at NAB 2025 with Samuel Fares, senior manager of Sony’s Digital Media Production Center (DMPC) and manager of marketing, virtual production, I got to see the proprietary technology that ties together Sony’s virtual production ecosystem. “We’re the only company that makes both the camera and the LED wall,” Fares explained. “So, because we make a 1.5-millimeter-pitch Crystal LED [VERONA] wall and the Ocellus tracking system, which is marker-free and uses simultaneous localization and mapping to capture location and data – indoors or out – we can get very close to The Volume.”
Fares’ NAB booth demo had a corner “with a 90-degree angle that would normally mean seeing color shift on the output,” he continued. “But because we make the camera and LED wall, we can compensate for that off-axis color shift using our software within Unreal Engine. That, in turn, compensates for the color on the LED wall so that the camera is now capturing correctly, without a color shift.”
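For readers who want a sense of what that kind of correction involves, here is a purely illustrative sketch in Python – not Sony’s actual implementation, and with invented calibration numbers – showing how a measured off-axis color shift can be interpolated from a small calibration table and inverted, so the content sent to the wall pre-compensates for what the camera will see at a given viewing angle.

# Illustrative only: a generic angle-dependent color compensation for an LED panel.
# This is NOT Sony's software; the calibration values below are invented.
import numpy as np

# Hypothetical calibration: measured RGB gain error of a panel at a few off-axis
# viewing angles (degrees), relative to its on-axis color.
CAL_ANGLES = np.array([0.0, 30.0, 60.0, 80.0])
CAL_GAINS = np.array([
    [1.00, 1.00, 1.00],   # on-axis: no shift
    [1.01, 1.00, 0.98],   # mild warm shift at 30 degrees
    [1.04, 1.00, 0.93],   # stronger shift at 60 degrees
    [1.08, 0.99, 0.86],   # severe shift near grazing angles
])

def compensation_gain(view_angle_deg):
    """Interpolate the measured shift at this angle and return the inverse gain."""
    shift = np.array([np.interp(view_angle_deg, CAL_ANGLES, CAL_GAINS[:, c])
                      for c in range(3)])
    return 1.0 / shift  # pre-distort wall content so the camera records neutral color

# Example: a pixel near the 90-degree corner, seen by the camera at 65 degrees off-axis.
pixel_rgb = np.array([0.5, 0.5, 0.5])
print(pixel_rgb * compensation_gain(65.0))

In practice, the correction would be driven per pixel by the tracked camera position, which is exactly why making both the wall and the camera matters to Sony’s pitch.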
The Ocellus system features five sensors, each with two IRs. Also inside is an IMU, which allows the system to measure camera angular movement – pitch, roll, yaw, etc. “All of that goes through a USB-C connection,” Fares added, “that feeds back to the main processing unit. So with no markers, reflecting points, or infrared cameras, we’re creating a 3D version of the environment being captured and then using those anchor points to create a 3D point cloud. All of the lens data is coming out via SDI and giving us genlock, while the RJ45 Ethernet cable sends it back to Unreal Engine. Another cable feeds power to the computer processing unit without pulling power away from the camera. For cameras that don’t have metadata, we’ll include encoders that would go on the lens, so you would create your lens profile based on whatever metadata you have.”

Of the NAB panels I attended, including two on new technology in live sports, the ICG-sponsored “Unscripted Meets Sustainability” was the most enlightening, given that it presented content rarely explored in a public setting. Hosted by Michael Chambliss, ICG’s Assistant Western Region Director, the panel offered fresh, insightful thoughts from panelists Cyle Zezo, founder, Reality of Change; Tracey Baird, executive producer/showrunner/producer (OMG Fashun, True Story with Ed & Randall, Flight of the Conchords); and Emmy-winning Local 600 Lighting Designer Oscar Dominguez (The Voice, Shark Tank, Big Brother).
Chambliss kicked off the panel by noting how “radically different” sustainability is in the unscripted world versus scripted television and features. “Producers can actually build sustainability into an unscripted show’s challenges and even into its theme,” Chambliss announced. “This creates the possibility of a true culture shift in this type of production.”
Zezo, who in 2024 partnered with some 30 unscripted production companies to develop a Sustainability Roadmap, said that he founded Reality of Change two years ago (after he left his position as an executive at the CW Network). “In talking to other executives and producers,” he shared, “I found they all were keen on sustainability but just didn’t have the knowledge or means to make it happen. Most of the talk around sustainability was geared toward scripted content, which is great. But I knew that unscripted audiences share a unique overlap with the shows they love – whether it’s food and cooking, home renovation, or fashion, unscripted production deals directly with how real people live, so the potential to make an impact is enormous.
“Of course, you can’t get super preachy in the unscripted world,” Zezo added. “It has to be organic and part of the entertainment package, not right on the nose. We can reinforce sustainable themes, given that many unscripted shows come back to the same challenges or have real people as contestants. Using induction ovens in a cooking show or having a plant-based chef for a challenge, for example. There are many things that can be done that don’t explicitly mention climate change or sustainability.”
Zezo’s efforts resulted in a pledge from the partner production companies to promote sustainability in three areas. “The first was onscreen,” he explained, “another was more sustainable production practices, and the third was industry engagement. We think it’s important for other producers to see what these companies are doing and hopefully be inspired to mirror that, as well as to promote messaging throughout a show’s crew that sustainable practices are a priority. Unscripted producers and crews mustn’t feel like an outside entity is coming in and imposing change. It has to come from within.”

Baird picked up that gauntlet, noting how she came up behind the scenes before moving over to the creative side and was always pushing for more sustainable practices – the amount of paper printed for scripts, food left over from catering at the end of a shoot day, lighting and electrical usage, etc. “The scripted shows I worked on didn’t allow for pushing sustainability in front of the camera, but that changed when I moved into unscripted,” Baird recounted. “My OMG moment was with OMG Fashun, where I realized we could trick audiences into thinking about sustainability without hitting them over the head.
“The fashion industry is one of the biggest offenders of waste, with half of its products ending up in landfills,” Baird continued. “Julia Fox [the show’s host] wanted to show audiences that fashion doesn’t just have to be what you see on a runway in Paris or Milan – that it can be truly sustainable. We took that opportunity to make the show sustainable from head to toe because this was important to Julia. So, every challenge was soaked in sustainability, and all of the designers we brought in as contestants – we call them disruptors – used found materials in the real world. Dumpsters, Goodwill stores, plastic bags, you name it. Every challenge was about reusing fabrics and materials to make their designs. There was no waste involved.”
Lighting sets in a more efficient (and less wasteful) manner is something Dominguez has witnessed over his long career. The lighting professional talked about the days “when we ran miles of cable and used ridiculous amounts of power to light a set, now shrinking down thanks to LED and multi-spectral lighting, which has hugely changed the unscripted world. No more need for manually changing a light with a plastic gel,” Dominguez explained. “New technology has naturally made my lighting footprint smaller as cameras are getting faster and their contrast range is broader, so that means when we go into a natural environment, say a hometown package on a reality competition series, we don’t need massive amounts of diesel-powered trucks and generators. The circus gets much smaller.”
Dominguez also singled out previsualization as a dynamic tool for building sustainability into unscripted productions. “The challenge is to figure out – in advance – how much gear you really need. What is the right amount? I worked on a recent show where the director had ten little challenges – a teeter-totter, a foam pit, and so on. Typically, once his camera plot got on site, it would take us 45 minutes to an hour to figure out how to light it. Ten games, ten hours spent rigging and moving around lights per day. So, I said to the director, ‘Come on over and we’ll figure out all your camera placements in previs,’ which took about three hours. We ended up saving Production one full day. That may not sound like a lot of time, but if you multiply those seven hours by many, many unscripted productions per year, it’s a significant savings. All accomplished by thinking ahead and using new tech.”

New camera technology – and the systems supporting it – also played a prominent role during my visit to Cine Gear LA this past June. Speaking with Matt DeJohn, VFX/editorial workflow manager at Blackmagic Design, about the company’s efforts with immersive camera design offered a glimpse into the future of how we may all experience visual entertainment. “We’re excited about the entire URSA line we’ve been rolling out,” DeJohn told me, “which includes our URSA Cine 12K LF, our URSA Cine 17K 65, and our URSA Cine Immersive, which is a camera we developed in partnership with Apple to produce content specifically for the Apple Vision Pro headset. That meant that when we set about designing this system, it had to be to Apple’s exacting level of specifications, which included 8K resolution per eye, 90 frames per second, and a full Blackmagic RAW file to provide the same flexibility you’d get with a standard cinema camera.”
To hit those specs, the company turned to its flagship camera line.
“We took the URSA body and added another sensor,” DeJohn continued. “That meant we have two 12K LF sensors on this system and our own custom dual lenses on the front that provide a stereoscopic 180-degree field of view. That wide field of view gives a broad canvas that allows the user to look around; in terms of how it’s projected in the headset, it’s a normal human field of view. The goal is to transport the user exactly how they would view that environment if they were really there.”
DeJohn, who has worked for a virtual production company and as a VFX supervisor, noted that “creating immersive content, up until now, has been kind of a science project. By marrying this new immersive camera system with the post attributes of Resolve, we want to simplify things. So, we’re shooting in lens space, editing in that same circular fisheye lens space, and then delivering that to the headset – as opposed to having to map the image as a VFX process before it gets to the headset.” DeJohn said that showing the URSA immersive camera at Cine Gear was a great fit, “as Blackmagic has been showing traditional camera technology here for years, and that same community has been excited to be able to see the camera in person and to see the footage it’s generated.”
ARRI, perhaps more than any other manufacturer, can lay claim to pushing digital cinematography into the mainstream when it debuted the original ALEXA back in 2010. Now commonly called the ALEXA Classic, that groundbreaking camera featured a Super 35-mm CMOS sensor capable of 2.8K resolution and uncompressed RAW recording. The original ALEXA found quick and widespread adoption, mainly because ARRI was careful to carry over many of the attributes of the film cameras it had built for decades. At Cine Gear LA 2025, I met up with ARRI Product Specialist Art Adams (a former ICG operator with 33 years of Guild service) to talk about the company’s most recent work in cinema lenses.
“If Signature Primes are the new Master Primes,” Adams explained, “with the former designed specifically for digital cameras and the latter for film, then our Ensōs, which were introduced six months ago, are the new Ultra Primes. The Ensōs are based on the Signature Primes and have some similarities but also some differences. By loosening up the specs, we can get some other opportunities. For example, the bokeh has more astigmatism, creating more dreamy, smeary edges at the wider focal lengths. And the close-focus abilities are insane – we can get down to 1:4 on 12 of the 14 lenses, which is about two to three inches. You can get that close on everything from the 18 millimeter to the 250 millimeter in the Ensō line.”
Adams says the success of the Signature Primes begat the Ensōs for equally diverse applications – at a more cost-effective price point. “The Signatures are high-end lenses that are used in features and TV and have taken off in fashion because they’re so good with skin and faces. But they are pricey,” Adams shared. “Because many people own their own equipment, we wanted [with the Ensōs] to reach cinematographers working in commercials, corporate, and documentaries, for whom the Signatures may be out of reach. The close focus is also great for tabletop cinematography.”
As for intentionally trying to build aberrations into the line, thereby enabling narrative users to have a vintage quality in contemporary glass, Adams added that “both the Signatures and the Ensōs have vintage options in the rear optics. Because these lenses are so finely tuned, and a lot is going on at the back element, if you add another element in the back, you can mess them up nicely and don’t have to go into the lens to alter the characteristics. The analogy I use is to a sports car – if you go into the engine, you’re going to change the performance; if you just put speed bumps on the track, then it’s a different approach.”
Speaking to the recent popularity of ARRI’s new ALEXA 35, Adams said that although filmmakers had been asking ARRI for a Super 35 camera that was streaming-service compliant, “it meant revising the sensor, and that’s not something we do lightly. When you make the photosites smaller and cram more of them into a smaller space, that affects the noise floor, which impacts the color – and we are all about color. It took us a while to get there, as we also rewrote our color science. That took a lot of horsepower, but in many cases [the ALEXA 35] renders the most perceptually accurate color I’ve ever seen.” As to ARRI’s longtime presence at Cine Gear, Adams shared that “Cine Gear is the show where this community, people passionate about telling stories in film and television, comes together. I see people here that I don’t see the rest of the year, so it’s a very special show.”

I caught up with another long-time ICG-member-turned-product-specialist, Matthew Irving, at the Canon booth on Universal’s town center backlot (where Back to the Future was shot). Irving, a Local 600 Director of Photography for 21 years who still keeps his membership current, was showing off Canon’s two main cinema cameras – C400 and C80 – which offer different form factors but share the same 6K sensor.
“Both of these cameras have built-in Wi-Fi and Frame.io compatibility, so users can do camera-to-cloud without needing a third-party accessory,” Irving described. “You dial up the network in the camera, which helps generate a six-digit code that you enter into Frame.io, and just like that, your editor three thousand miles away can log into Frame.io and start editing a 1920-by-1080 proxy while you’re shooting 6K full-frame RAW, 12-bit to your CFexpress card on set. I’ve done demos of this on my phone, and it could not be simpler. As soon as you hit Stop on that clip, it’s going to upload it to the cloud. And this is not just a tool for editing. Your producer or client off-set can comment on the clips using the Adobe interface. It’s great for communication across many areas of the pipeline.”
Irving also pointed out how the C400 and C80 sensors allow for triple base ISO, “which opens the door to increased sensitivity with lower noise levels. We have 800-, 3200-, and 12,800-base ISO if you’re working in our Canon Log or RAW formats,” he explained. “The ISO resets the full dynamic range of the sensor around those bases, which means that even if you’re shooting at 12,800, you still have 16 stops of dynamic range in Canon Log2. The best image would be achieved working one stop below that base ISO. So, if you dial up 12,800 as a base and stop down to 6400, it’s going to clean that image up and look spectacular, even in a very low-light environment. Both cameras are native 6K full-frame RAW. But you can also do 4.34K Super 35 RAW or 4K in full frame or Super 35 using our XF-AVC 10-bit 4:2:2.”
Like many other vendors I spoke with, Irving described coming to Cine Gear as “being like old home week. We had a Cine Gear party last night to reintroduce Canon Burbank to the community, as that facility was slow to get back on its feet after the pandemic,” he shared. “Canon Burbank is a welcoming place for DPs, ACs, operators, and everyone in Local 600 to come and test out gear and share ideas. I do a lot of outreach to ICG’s Young Workers Group, and we’ve hosted several ICG events inside that facility. So, being here at Cine Gear and seeing the L.A. film community come together means the world to us.”


Speaking of global reach, there may be no other tech conference on the planet more in tune with A.I. than the annual HPA Tech Retreat in Rancho Mirage, CA. My Web Exclusive on the 2024 HPA Tech Retreat [Disruption Central] served up a super-deep dive into all things A.I., so my round-up this time around offers a mile-high look at HPA’s overall content, which did (thankfully) feature panels and presentations where A.I. was not the focal point. Those included a presentation from longtime VFX professionals David Stump, ASC, BVK, and Kevin Tod Haug about a “creatively lossless VFX paradigm” they used for the 2023 Italian feature Comandante. Haug described a film that had “34 pages of a non-existent submarine in the middle of the Atlantic Ocean. And given that Italian-language movies have an audience of a defined size and, consequently, a limited market, the film’s director [Edoardo De Angelis] and the producers didn’t know how to solve it. Luckily, they were just inexperienced enough to let us go kind of crazy with the various VFX optimization tools we wanted to try out, even though they knew it had never been done before.”
Stump added that “we made a huge effort to have USD [Universal Scene Description] accepted as the workflow for moving assets throughout the project’s pipeline.” The essence of USD, as Haug described it, was that multiple departments could be working on multiple assets, “a CAD program, a color program, and an animation program, all working on the same scene simultaneously,” he explained. “The previs we used was a submarine model we got from the art department, based on a model that’s in a museum. We did mo-cap for the actors, and the ocean was in Unity, which was the platform we used for previs. The real news here is that none of these assets had to be converted to anything else, as they were all output in USD. That meant there was no recreating assets over and over again during the workflow, as the assets we used in previs evolved into the assets we used for the finished comps.”
Stump, an industry expert on metadata, said Comandante “really gave me the chance to advance metadata as a production technology as we drove everything through metadata. We used a set of Cooke anamorphics – smart lenses that provided metadata, which was synced with camera metadata, which was synced with motion metadata from all the other devices – cranes, dollies, moving rigs. That metadata became the basis for all of our tracking information for matching VFX in post-production. Effectively using USD and metadata – from concept to delivery – created a workflow without redundancy, which benefits the entire production.”
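For those curious what that USD layering looks like in practice, below is a minimal, hypothetical sketch using the open-source usd-core Python bindings. The file names and prim paths are invented for illustration and are not drawn from the Comandante pipeline; the point is simply that separate departments author separate layers over the same scene, with no conversion step between them.

# Hypothetical sketch of USD layering; assumes the open-source usd-core package.
# File names and prim paths are invented -- this is not the Comandante pipeline.
from pxr import Usd, UsdGeom, Sdf

# The art department authors the model in its own layer.
model_layer = Sdf.Layer.CreateNew("submarine_model.usda")
model_stage = Usd.Stage.Open(model_layer)
UsdGeom.Xform.Define(model_stage, "/Submarine")
UsdGeom.Mesh.Define(model_stage, "/Submarine/Hull")
model_layer.Save()

# Previs/animation works in a separate layer that sublayers the same scene description.
anim_layer = Sdf.Layer.CreateNew("submarine_anim.usda")
anim_layer.subLayerPaths.append("submarine_model.usda")
anim_stage = Usd.Stage.Open(anim_layer)
move = UsdGeom.Xformable(anim_stage.GetPrimAtPath("/Submarine")).AddTranslateOp()
move.Set((0.0, 0.0, 10.0), 1)   # submarine position at frame 1
move.Set((0.0, 0.0, 50.0), 48)  # position at frame 48
anim_layer.Save()

# Downstream departments open the same composed scene -- no asset conversion required.
shot = Usd.Stage.Open("submarine_anim.usda")
print(shot.GetPrimAtPath("/Submarine/Hull").IsValid())  # True: the art asset flows through untouched

In a real pipeline the same pattern extends to layout, lookdev, and FX layers, which is what lets the previs assets “evolve into” the finished comp assets Haug describes.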

HPA Board Member and Head of the HPA Tech Retreat Innovation Zone James Blevins introduced a short film called “Ray Tracing For The Win,” which featured the work of Richard Crudo, ASC – an ICG Director of Photography of 42 years – and longtime VFX professionals Christopher Nichols, director of special projects, Chaos Innovation Lab, and Vladimir Koylazov, co-founder, Chaos. “Chris called saying they wanted to use ray tracing in virtual production,” Blevins announced, “and I listed five speed bumps to deter them. I said, ‘It’s a lot more than sending an image to the wall.’ Three weeks later, Vlad had solved all of those issues, and we found ourselves at Fuse Technical Group doing a demo that was the first fully ray-traced virtual production session. Chaos then pulled together a team that included the six-time president of the ASC, Richard Crudo, to produce a short film that ended up looking way more expensive than what we paid.”
Nichols described ray tracing as the “natural evolution of how we create, where light behaves naturally and accurately, as it does in the real world. For virtual production, this means fewer compromises, no shortcuts, and no endless tweaking of fake solutions. What you’ll see on an LED wall is what you’ll see in post. Ray tracing makes everything simpler and more accurate, with less effort and more focus on creativity.”
In brief, ray tracing is a rendering technique that simulates real-world lighting by tracing the paths of individual light rays between the camera and the light sources in a scene. Blevins noted that “when Chris says ‘simple,’ he means the fully 3D, fully ray-traced files we produced on set are not delivered to another team just to make them work in a game engine.”
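To make that definition concrete, here is a toy ray tracer in Python – a bare-bones illustration, not Chaos’ V-Ray or anything used on the demo described here – that casts one ray per pixel from a virtual camera, tests it against a single sphere, and shades each hit by how directly the surface faces a light source.

# A toy ray tracer: one sphere, one light, ASCII output. Purely illustrative --
# just the basic idea of casting a ray per pixel and lighting whatever it hits.
import math

WIDTH, HEIGHT = 60, 30
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT_POS = (2.0, 2.0, 0.0)

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def norm(v):
    length = math.sqrt(dot(v, v))
    return (v[0] / length, v[1] / length, v[2] / length)

def hit_sphere(origin, direction):
    """Distance along the ray to the sphere, or None if the ray misses it."""
    oc = sub(origin, SPHERE_CENTER)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for j in range(HEIGHT):
    row = ""
    for i in range(WIDTH):
        # Cast a primary ray from the camera (at the origin) through this pixel.
        x = (i / WIDTH) * 2.0 - 1.0
        y = 1.0 - (j / HEIGHT) * 2.0
        direction = norm((x, y, 1.0))
        t = hit_sphere((0.0, 0.0, 0.0), direction)
        if t is None:
            row += " "  # ray escapes the scene
            continue
        # Shade the hit point by how directly it faces the light (Lambert's law).
        hit = (direction[0] * t, direction[1] * t, direction[2] * t)
        normal = norm(sub(hit, SPHERE_CENTER))
        to_light = norm(sub(LIGHT_POS, hit))
        brightness = max(0.0, dot(normal, to_light))
        row += ".:-=+*#%@"[min(8, int(brightness * 9))]
    print(row)

A production renderer adds bounced light, materials, and motion blur on top of this loop, but the core idea – following rays of light through the scene rather than approximating them – is what lets the image on the LED wall match what post will ultimately produce.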
Nichols said that “game engines are remarkable tools, but they’re designed for video games, and virtual production has different requirements. When we designed Project Arena, we thought, ‘What if virtual production didn’t have a second pipeline for visual effects or endless asset optimization? What if we had a process that used the same 3D assets from concept to post? No convergence, no trade-offs, everything stays consistent.’”
Crudo described the experience as “eye-opening,” noting that his camera team was able to produce thirty “good” shots per day. “It allowed me, for the first time, to see what was happening in camera at the moment of exposure between a foreground element and background plate,” Crudo shared. “An argument could be made that they tried to achieve the same result with rear projection, but in the film days, you still didn’t know what the balance would be, in terms of foreground and background lighting, contrast, and color, at the moment of exposure. In this case, I could balance the color and lighting of the foreground to what we wanted to have on the volume in a live view on my calibrated monitor and sign off on it. It eliminates all the guesswork on the DP’s part.”
Changes, Crudo added, are made in the moment. “I went to Vlad one time about a piece that was a little hot in the background – it was almost a Photoshop-like adjustment – and he said, ‘How dark do you want it?’ A little slide and it was fixed. Ray tracing reproduces light the same way your eye sees it in nature – the same textures, the same sense of wrap on faces, and falloff in terms of exposure. In many ways, the process returns control to the creatives on the set. It’s liberating and a true revelation.”

Ultimately, maintaining human control was the main takeaway from the many A.I. presentations at the 2025 HPA Tech Retreat. Tuesday’s Super Session, titled “The Evolving Human Role in A.I.-Driven Film and TV Production and Post-Production Workflows,” signaled the ground that would be covered. Other presentations included A.I. and VFX, generative A.I. in post-production, and even an overview of A.I.’s implications for intellectual property (IP) protections in the media and entertainment industry. Former Warner Bros. executive (and outgoing President of SMPTE) Renard T. Jenkins set the table with his 30-minute presentation, “Embracing Evolution: The Evolving Human Role in an A.I. World.” Referencing a 2024 talk about A.I. he gave at the TV Academy, Jenkins noted that “a year ago, I described A.I. – at least in terms of how the general public perceives it – as a baby just learning to crawl and walk. In the last year, the baby has become a teenager, in that A.I. may think that it knows more than we do, that we can do nothing right in A.I.’s eyes, and that we may or may not be helpful to A.I.”
Jenkins, who described himself as a “passionate lover of traditional filmmaking who was born with a camera in his hands,” pointed to the heavy media attention given to Chinese A.I. start-up DeepSeek as an example of how “our industry kind of lost its mind” when confronted with the technology’s potential. “The feeling was that we’re all going to lose our jobs, everything is going to come crashing down, and A.I. is going to take over the world,” Jenkins announced. “But that will only happen if we allow it to happen. Speaking as a survivor of having raised two teenage daughters, a teenager needs guidance; a teenager still needs the input of a parent. That means that we – the people who make up this industry – must ensure that A.I. models are ethically sourced, responsibly built, and that they actually have a purpose.”
Toronto-based filmmaker Walter Woodman built on Jenkins’ words with a presentation titled “88% Human, 12% A.I.” to illustrate the imperative of maintaining human control over A.I.-created stories. Woodman’s short film Air Head debuted on YouTube in the spring of 2024, with a heavy push from OpenAI touting the project as the first to use its Sora application. While the film went viral as an all-A.I. creation, it was really a mix of traditional filmmaking and post-production editing, including manually rotoscoping the backgrounds and doing clean-up work on the main character, whose head is a yellow balloon.
“When people talk about A.I., they always think faster and cheaper,” Woodman explained. “And that’s a horrible way to look at filmmaking, because ‘faster and cheaper’ is a race to the bottom. I see my job as using A.I. to create things that may have previously been impossible [in the physical world].” Because Air Head received so much backlash, Woodman presented a “response to the haters” with a short A.I. film called Deflated that cleverly revisits the hero of Air Head, now fallen on hard times after his previous viral fame. “What parts of this film were A.I. and what parts were traditional filmmaking is why I’m here at HPA today,” Woodman continued. “This audience needs to know this is not an either-or proposition. Much like CGI before it, the future will be a combination of tools that are all, hopefully, beholden to the humans who drive the storytelling.”

Longtime media executive Barbara Lange, who has held leadership positions with SMPTE and HPA, neatly tied together the existential fears of new technology with the real-world concerns of climate change in a presentation titled “A.I. for Good: The Role of Emerging Tech in Sustainability.” Lange noted that A.I. is often labeled an “environmental villain, given its energy-intensive computing requirements. Training large models can be the equivalent of powering small towns and even countries,” she described. “But like every transformative technology, A.I. is here to stay, and we must learn how to harness it. Focusing only on energy consumption misses the bigger picture, as A.I. is already driving efficiency, reducing emissions, and enabling smarter, more sustainable decisions in our own industry.”
Lange proceeded to give several examples. The first was from a student in interactive media and business at NYU Shanghai, tracking how to make film productions more sustainable in two major areas: fuel and electricity. “Most productions are tracking these two practices by inputting data manually after the production ends,” Lange shared, “so there’s no chance to make improvements. This student attached IoT [Internet of Things] sensors to vehicles, lighting, heating, and cooling systems, anything she could measure, and was able to send real-time data to an A.I.-driven visualization tool that provided the production the ability to make changes while they were shooting.
“Twelve Labs,” Lange continued, “is using A.I.-powered video tagging to organize thousands of hours of footage. With the huge troves of data being captured on set, much can go unused or be lost in the archives. Rather than go back and reinvent the wheel, an A.I.-powered video search allows production teams to instantly find content without reshooting and all the added emissions that entails. A.I. is also being used to automate real-time captioning, making the live broadcasting world more accessible and energy efficient by eliminating manual transcription. We have the power, right now, to shape a more sustainable future. We can create an industry that leads by example, proving it’s possible to tell great stories, power innovation, and protect our planet, all at the same time.”
Canon, Matthew Irving, Product Specialist
https://www.instagram.com/reel/DNq20dVs_qh/?igsh=c2JwYWh0ZXE4cHBr
ARRI, Art Adams, Product Specialist
https://www.instagram.com/reel/DNq3Xcpu7-x/?igsh=MXhmMWhsc2h3ZHI1dQ==
Blackmagic Design, Matt DeJohn, VFX/editorial workflow manager
https://www.instagram.com/reel/DNq34jx0yb3/?igsh=bDI0MTV5dm13NWYz
Flanders Scientific, Bram Desmet, CEO
https://www.instagram.com/reel/DNq4OzKg5oR/?igsh=MXA5cHRtcW5ubWt1bw==
Fujifilm, John Blackwood, Director of Product Marketing
https://www.instagram.com/reel/DNq5Pwb11Ko/?igsh=dTg0ZXoxZWQwa2o1
