Part 2 – Colorist Tashi Trieu on Grading & Finishing ‘Avatar: The Way of Water’

February 23, 2023

In Part 2 we dive into Tashi's color management pipeline, Resolve project setup, and the kind of color work he did on this VFX-heavy film.


Series

DaVinci Resolve, High Frame Rate (HFR), & Deliverables

In Part 1 of this series, DI Colorist Tashi Trieu took us inside the technical setup and workflow he used to finish Avatar: The Way of Water at Park Road Post in Wellington, New Zealand.  Now, we’ll dive into the color management pipeline, his Resolve project setup, and the kind of color work he did on this VFX-heavy film.

We’ll also examine the film’s specialized 48fps High Frame Rate (HFR) implementation, the details of delivering 11 discrete theatrical versions, and Tashi’s thoughtful take on archiving for future generations.


Color Management Pipeline        

Can you describe the color spaces and color management setup you were working with?

All the EXRs were delivered as Scene-Linear/SGamut3.Cine. My color grading working space in Resolve was Slog3/SGamut3.Cine.

All of the EXRs were inverted from Linear to Slog3 using a DCTL applied in the Input LUT stage, prior to any scaling or additional image processing. All of the grading took place in the Clip node graphs, with various versions of the show LUT and light-level-specific trims applied in the Post-Clip group graph.

Our delivery target was P3D65 for all the theatrical versions.
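For readers who want to see the math behind that Linear-to-Slog3 inversion, here is a minimal Python sketch of Sony's published S-Log3 encoding. It is purely illustrative: in the actual pipeline this conversion lived in a DCTL applied at Resolve's Input LUT stage, and that DCTL isn't reproduced here.

```python
# Minimal sketch: encode scene-linear reflectance into S-Log3 code values,
# using Sony's published S-Log3 formula. In the production pipeline this math
# lived in a DCTL at Resolve's Input LUT stage; this Python version is only
# for illustration.
import math

def linear_to_slog3(x: float) -> float:
    """Scene-linear reflectance (0.18 = mid grey) -> S-Log3 code value (0..1)."""
    if x >= 0.01125000:
        return (420.0 + math.log10((x + 0.01) / (0.18 + 0.01)) * 261.5) / 1023.0
    else:
        return (x * (171.2102946929 - 95.0) / 0.01125000 + 95.0) / 1023.0

# Sanity checks on well-known anchor points:
print(linear_to_slog3(0.18))  # ~0.41 (18% grey)
print(linear_to_slog3(0.90))  # ~0.58 (90% white)
```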

‘Avatar: The Way of Water’ Colorist Tashi Trieu

How complex was your node tree?  Did you have a pre-built one?

I don’t favor complex node trees. I get why a lot of people gravitate towards them, but I find them more restrictive and less supportive of improvisation.

I started every shot with five nodes in serial:

  • Node 1: My base grade and primaries.
  • Node 2: Windows and secondaries, branching off in parallel nodes at that point if multiple windows or mattes were needed.
  • Node 3: I typically reserved node 3 for any additional grading with Jim.
  • Nodes 4 & 5: Version-specific light-level trims. Keeping those adjustments in the last two nodes gave me a visual shorthand to distinguish version-specific trims from the main creative grade for a shot.

My node structure wasn’t rigid, and certainly not the cleanest approach, but it was simple and made it quick to propagate grades when I needed to ripple a trim across a scene.

There are a lot of times I wish there were a better way to work in layers in Resolve. I would like additional Clip node graph pages that could be optionally enabled or disabled. That would make it really easy to do multiple versions without maintaining multiple timelines and stressing about editorial parity.

Did you use Groups, Versions and/or Shared Nodes for organizational purposes, and how?

I used Groups to apply the “show LUT”, aka the display reference transform or output LUT. Because we had a mix of scene-referred footage (live action, VFX, most of the movie) and display-referred graphics (titles, subtitles, etc.), I couldn’t use a LUT at the Timeline or project Output level.

I don’t care for Versions much. They break and fall apart during ColorTrace and Remote Grades, so you have to manually assign the correct version of a grade. If I’m auditioning a few different grades before settling on one, stills or memories are a good way to do that.

Shared Nodes aren’t supported on stereo clips for whatever reason. Unfortunately stereo 3D isn’t as popular as it once was, so a lot of those features haven’t gotten much love in recent years.

What was the nature of the “show LUT”, and did you have different versions for the various deliverables?

The show LUT was a simple S-Curve and gamut matrix that WetaFX wrote. I used that and built transforms to target our various light levels and formats with basic trims that got me most of the way to each of the deliverables. From there, I’d do additional grades on any shots that needed something extra.
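As a rough illustration of what an “S-curve and gamut matrix” show LUT looks like structurally, here is a hedged Python sketch. The matrix and curve constants below are placeholders chosen for the example, not WetaFX's actual transform.

```python
# Illustrative structure of a "show LUT": a 3x3 gamut matrix plus an S-curve,
# applied under the grade to map the working space toward the display.
# The matrix and curve constants are placeholders, NOT the actual WetaFX math.
import math

# Placeholder 3x3 matrix (identity) standing in for the
# SGamut3.Cine -> P3D65 gamut conversion.
GAMUT_MATRIX = [
    [1.00, 0.00, 0.00],
    [0.00, 1.00, 0.00],
    [0.00, 0.00, 1.00],
]

def apply_matrix(rgb, m):
    r, g, b = rgb
    return (
        m[0][0]*r + m[0][1]*g + m[0][2]*b,
        m[1][0]*r + m[1][1]*g + m[1][2]*b,
        m[2][0]*r + m[2][1]*g + m[2][2]*b,
    )

def s_curve(x, pivot=0.41, slope=1.6):
    """A generic sigmoid around a mid-grey pivot, standing in for the
    show LUT's tone curve. Output is normalized display light, 0..1."""
    return 1.0 / (1.0 + math.exp(-slope * 8.0 * (x - pivot)))

def show_lut(rgb_log):
    """Log-encoded working-space RGB -> display-referred RGB (placeholder)."""
    return tuple(s_curve(c) for c in apply_matrix(rgb_log, GAMUT_MATRIX))

print(show_lut((0.41, 0.41, 0.41)))  # mid grey lands near the middle of the curve
```

Light-level-specific trims, like the ones Tashi describes, would then sit on top of a structure like this for each deliverable.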


Grading Specifics

What were your main tasks during the grading process, and how often did you have to diverge from the original look Weta had established for a scene?

It was a mixed bag. There were some shots and scenes so perfectly dialed in by Weta that all they needed was basic continuity work. There were other scenes where Jim got them to a fairly neutral point so we could push them further in the DI. It’s difficult to finalize looks in VFX when shots from a sequence may be finished and rendered months apart from each other.

The DI is still an instrumental part of polishing and finalizing the look of a shot or sequence, whether it’s live-action or entirely CGI.

What color tools in Resolve did you use the most, and in what color space?

Mostly Lift/Gamma/Gain and Saturation. All of the grading operations took place in Slog3/SGamut3.Cine.
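For context, here is one common textbook formulation of Lift/Gamma/Gain operating on log-encoded values, just to illustrate what “grading in Slog3” means in practice. Resolve's internal math isn't published and differs in detail, so treat this strictly as an approximation.

```python
# One common textbook formulation of Lift/Gamma/Gain, shown operating on
# log-encoded (S-Log3) code values rather than linear light. This is an
# approximation for illustration; Resolve's exact internal math differs.
def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """x is a normalized (0..1) log-encoded code value."""
    x = x * gain + lift * (1.0 - x)   # gain scales values; lift raises the toe, pivoting at white
    x = max(0.0, min(1.0, x))         # clamp before the power function
    return x ** (1.0 / gamma)         # gamma bends the midtones

# Lifting the shadows of a dark S-Log3 value nudges it up without touching white:
print(lift_gamma_gain(0.10, lift=0.02))  # ~0.118
print(lift_gamma_gain(1.00, lift=0.02))  # 1.0 (white unchanged by lift in this form)
```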

How did you grade using the VFX-supplied mattes?

Virtually every shot in the film is a composite, so I regularly had multiple DI mattes available for fast grading. In live-action shots, I was provided with foreground mattes when characters or elements were shot against a blue screen and character mattes for any CGI characters in the shot. For all-CGI shots, I’d typically get foreground, background, character, and occasionally FX mattes for muzzle flashes or other elements in the comp.

Character mattes are great for when you want to subtly accent a character or add a little bit of fill from one side or the other. Rather than time-consuming, articulated roto and tracking, I could quickly apply a character matte and a soft graduated window and bring up the side of someone’s face or body.

Much of the film takes place underwater.  What were the specific challenges and techniques you used for this footage?  Was grading the underwater CGI similar to grading real underwater footage, or did VFX supply depth mattes allowing you to vary the distance effect?

I did a lot in the DI to enhance the sense of volume. It was important to Jim that the water feel like an accurate medium. The characters look like they’re floating in space if it’s too clear. Even in the most transparent water, you have scattering and spectral absorption of longer wavelengths. So the deeper you are, the bluer it gets, or the further away from an object at a given depth you are, the more pale and blue it will be. I didn’t really need depth mattes for that. I could usually get away with Lift and Gamma to add that murky volume to the water.

There’s a big difference between the look of night and day on Pandora. Bioluminescence is present in most of the flora and fauna of the moon, including the Na’vi.  Did you enhance that effect in places, and how did you target it for adjustment?

The Character mattes were great for helping pop the bioluminescent dots on the Na’vi characters. The “biolume”, whether on characters or in the flora of Pandora, is a signal to the audience that it’s nighttime. But even at night, the world is bright and vivid and there’s a lot to see. So, combining character mattes and luma keys, I would sometimes pop the Na’vi bio-dots to help sell that it’s night. The contrast and presence of the dots is a great visual cue.

Late in the film, some characters are trapped in the dark, and the lights go out. I keyed their dots there to really sell the darkness.

Bioluminescent markings feature prominently on the Na’vi characters, flora and fauna of Pandora, and are an important nighttime visual cue for the audience. Image: 20th Century Studios

Did you use many windows?

Yes, I did plenty of windowing. Because I had mattes for every shot, I could combine a matte and a pretty loose elliptical window to accomplish just about anything I wanted to do, whether it was character continuity grading, or refining their integration with the scene.

Were there any challenges specific to grading shots that featured live-action characters?  Did you receive mattes for them as well?

Mattes were instrumental in live-action scenes where subtle, natural differences in photography stood out against otherwise consistent CGI renders. It was important to keep Spider looking a healthy tan color. Amidst all the greenery he could often look too magenta. Rather than windowing, rotoing, and keying, I could usually rely on the mattes. I was very spoiled. I recommend everybody work with world-class visual effects artists!

Live-action character Spider (Jack Champion) integrated into a CGI scene. Image: 20th Century Studios

Did you do any texture or grain management across the film?

No, the film is very clean. WetaFX applied a very fine amount of grain in their comps to help dither and sell the detail in shots. But beyond that, we weren’t going for a “film” look or anything that necessitated grain or texture.

Were there any other color challenges or tasks on the film that you found particularly interesting?

Placement of the forced-narrative Na’vi subtitles and locator cards is actually pretty fun. There are quite a few in the movie, and placing subtitles in 3D space can be challenging. You can’t simply put them bottom-center all the time. When you have deep shots with a large interocular distance – in other words a large stereo effect – you don’t want to push it further by bringing the subtitles too far into theater-space. It becomes rather challenging for the audience to read the subtitle and the action.

There was a lot of discussion and creative input into the placement of the subtitles. We sometimes placed them in-between characters depth-wise and often in the eye line of the speaking character, so the audience can take in the text as naturally as possible. It’s fun because subtitles in a 2D movie aren’t often given too much thought outside of typography.

What was your approach to grading and managing multiple versions (SDR, HDR, etc.)?

It’s essential that each respective version is an authentic expression of Jim’s creative intent. We don’t want the theatrical EDR version or HDR home video versions to be significant creative departures, but we want to employ the added dynamic range where it counts.

In Dolby Cinema the additional dynamic range, both in the shadows and highlights, really brings the film to life in a way standard DLP projection doesn’t approach. Night scenes are spectacular and bright sunsets and explosions have a more visceral feeling. In Dolby you really feel those huge explosions and might squint a little. I think that’s really cool and adds an extra nuance to the experience.

Were you grading the stereo versions with glasses, or did you view one eye at a time?

We always grade stereo with the glasses on. This movie was natively stereo from its inception, and the attention to stereo quality throughout the process, from acquisition to visual effects and DI, really makes it best-in-class. I think it’s the best 3D that’s ever been made.

Was it challenging working in stereo with the glasses on for long periods?

I thought it might eventually be, but amazingly no. I’ve had headaches from long hours on 3D movies in the past, but we have two big things going for us here. One, the projection technology is way better than it was ten years ago, even in standard dynamic range presentations. Having that contrast, being able to see the image better is excellent.

I think darker presentations stress your eyes more, they’re fully dilated, and perhaps that adds some level to your physical experience of it. But the big thing was, the 3D is just really, very well made. Geoff Burdick was the stereo supervisor on set and through the entire visual effects process. Rather than judging stereo on a monitor, like one might be tempted to in production, he and his team supervised real-time camera feeds on a stereo 3D projector near the set. They could quickly catch any live-action stereo problem as it developed and relay that feedback to the camera team.

That effort and meticulous supervision through visual effect production paid off big time and is why the 3D is both visually stunning and effortless to watch. It’s crucial that 3D is a value-add and not an equal-value trade-off for something else, otherwise, what’s the point?

Did your eyes react differently to the laser projection vs. DLP, & did you find yourself grading differently for laser vs. non-laser projectors (i.e. shadow details, color saturation, etc.)?

Dolby Vision 2D at 31.5fL is more than double the standard 14fL 2D DCinema target. But you acclimate pretty quickly. Nobody complains about a theater being too bright.

As for Dolby 3D 14fL, it’s the same as the 14fL 2D DCinema target brightness, but with deeper shadows. We never wanted to crush or clip shadows. There is an airy feeling to everything, even dark scenes. It’s very easy to produce clinical CGI, but that wasn’t the goal.

This movie needed to feel alive and like it was really photographed. After seeing it, a friend asked me how they made the eyes so big. I told him, “it’s all CGI!” and he said, “ALL OF IT!?” I had the same reaction when I watched Avatar in 2009. “How did they shoot all that stuff in the forest – oh right, it’s all CGI, duh.” Yeah, that’s how immersed people can get when the story is good, and the visuals are on par with the world and the story’s setting.

Photorealism was the goal.

Even after looking at every shot hundreds or thousands of times over the past year of working on the film, I’m still amazed by how incredible it looks. It’s a testament to the artists involved in live-action production and visual effects. Everything from performance capture to production design to water and hair simulation is absolutely amazing.

How did you maintain the feel and impact of the film across the different light levels?

Lower light levels got a global trim, and then shot-specific grading as necessary. Low light levels are inherently a compromise. We need to sell that there’s contrast and dynamic range when there really isn’t that much provided by the projector. Darker scenes translate really well to dimmer projectors since dark scenes rarely utilize the full dynamic range of something like a Dolby projector. The specular highlights take a hit but are often in the minority.

Daylight shots with big bright skies were difficult. Something has to clip or roll off; otherwise, there’ll be no contrast. It still has to feel like crisp daylight, and seeing the characters and the story is important. Our creative decision was in how we made that compromise, what we allowed to roll off into white, and what was an important detail to retain.

My hope is that everyone gets to see the film in the biggest, brightest theaters, but the reality is that not everyone lives within a reasonable driving distance of a Dolby Cinema, IMAX, or laser-projection theater. We still want them to have an incredible time at the movies, and I’m very proud of the work we did to bring them that experience.

Was metamerism a concern when making creative decisions due to the narrow spectral bands of the laser projectors?

It’s definitely a question I have about the Dolby projectors in 3D mode. You end up with half the wavelengths in one eye and half in the other. I think it’s certainly possible some audience members may experience it slightly differently than others, more so than with a broad-spectrum xenon source. I wonder if eye dominance plays a role in color perception in 3D too. It would be interesting to see some studies on that.

Did you compensate for the color balance differences between glasses for the various stereo projection technologies?

No, that should be accounted for during projector calibration at the theater. Anyone calibrating a projector should be metering through the glasses to compensate for light loss and white-point shift. The same grade that goes to a RealD screen with circular-polarized glasses also goes to an XpanD screen with active-shutter LCD glasses, and to IMAX Xenon with linear-polarized glasses.

What was your process for creating and delivering multiple versions?

During the DI, we produced eleven discrete HFR picture masters, which were rendered and delivered to distribution. From there, various sub-derivatives (like 24fps versions) were made and packaged to produce a huge number of unique DCPs for various theater formats and localizations.

Our hero versions were:

  • Dolby 3D 14fL 1.85:1
  • DCinema 3D 3.5fL 1.85:1
  • DCinema 2D 14fL 2.39:1

Between those three hero grades, we cover the creative gamut in dynamic range and aspect ratios. From there, the Dolby 2D 31.5fL, IMAX 9fL, and DCinema 6fL versions would be graded and derived.

Resolve’s ColorTrace function was critical.

Once I finished the 2D 14fL 2.39:1 version and a 3D 3.5fL 1.85:1 version, it was really easy to produce the dependent permutations like 3D 3.5fL 2.39:1 by ColorTracing only color or only sizing between the different formats, represented as unique timelines within a single project file per reel.

I used Output Sizing presets to quickly set the proper scaling and vertical offsets between versions.

The film was shot and composed for both 1.85:1 and 2.39:1 simultaneously, but rather than a centered crop, the 2.39:1 frame is offset vertically towards the top of the canvas to produce an almost common-top framing. This 1/3rd-2/3rd split makes for better compositions with less compromise.
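To put rough numbers on that split: assuming a 3996×2160 canvas for the 1.85:1 frame (an illustrative figure, not a confirmed spec), the sketch below shows where a near-common-top 2.39:1 extraction would sit if the leftover vertical space were divided roughly one third above and two thirds below.

```python
# Rough numbers for the 2.39:1 extraction inside the 1.85:1 frame.
# The 3996x2160 container is an assumed 4K flat canvas for illustration,
# and the "1/3 above, 2/3 below" reading of the offset comes from the
# interview text, not a published spec.
FLAT_W, FLAT_H = 3996, 2160            # assumed 1.85:1 container

scope_h = round(FLAT_W / 2.39)         # 2.39:1 crop at full width -> ~1672 px tall
leftover = FLAT_H - scope_h            # vertical room to distribute -> ~488 px
top_margin = round(leftover * 1 / 3)   # ~163 px above the scope frame
bottom_margin = leftover - top_margin  # ~325 px below it

print(scope_h, top_margin, bottom_margin)
# A centered crop would put ~244 px on each side; biasing the crop toward
# the top keeps headroom closer to a common-top framing.
```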

The framing chart gets us part of the way there. For both 1.85:1 and 2.39:1 versions, we did a reframing pass to really dial in specific adjustments to optimize the composition both narratively and aesthetically. Jim gave precise direction throughout that would often involve multiple keyframes within shots for reframing and sizing, often accentuating camera moves.

The framing chart for Avatar: The Way of Water. Note the vertical placement of the 2.39:1 crop inside the 1.85:1 frame. Image: 20th Century Studios

I used Groups in Resolve to apply my Output LUTs for the various light levels and deliverables. Graphics had their own groups for each light level.

Eleven DCDMs (Digital Cinema Distribution Masters) in stereo, in 4K, and in high-frame-rate, is a lot to deliver, even with commercial bandwidth.

Ian Bidgood at Park Road Post had a great idea: rather than deliver traditional, uncompressed 16-bit TIFF DCDMs, we’d render losslessly-compressed JPEG2000 sequences from Resolve instead. This saved us up to 50% on file size and accelerated our uploads from New Zealand to the US. It took a bit of testing to vet the process and get all the distribution partners on board, but in the end, it worked flawlessly.
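Some back-of-the-envelope math shows why that savings mattered. The frame size, run time, and compression ratio below are approximations chosen for illustration, not Park Road's actual figures.

```python
# Back-of-the-envelope size of one uncompressed 16-bit DCDM vs. losslessly
# compressed JPEG2000. All inputs are approximations for illustration only.
width, height = 3996, 2160                 # assumed 4K flat frame
bytes_per_frame = width * height * 3 * 2   # three channels, 16 bits each
runtime_s = 192 * 60                       # roughly a 3h12m feature
fps, eyes = 48, 2                          # HFR stereo: 96 images per second

frames = runtime_s * fps * eyes
uncompressed_tb = frames * bytes_per_frame / 1e12
compressed_tb = uncompressed_tb * 0.5      # "up to 50%" lossless J2K savings

print(f"one uncompressed stereo HFR DCDM: ~{uncompressed_tb:.0f} TB")   # ~57 TB
print(f"losslessly compressed:            ~{compressed_tb:.0f} TB")      # ~29 TB
print(f"eleven masters, compressed:       ~{11 * compressed_tb:.0f} TB") # ~315 TB
```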

Were all opticals, end titles, etc. integrated in Resolve prior to the J2K export, or were localized versions created later?

We integrated those mostly in Resolve. Many territories received the original English Na’vi subtitles and had additional on-screen localization applied during digital cinema mastering. I believe some territories without big English-speaking populations received versions produced entirely by our digital cinema mastering partners, sans our English subtitles.

Another interesting thing is that, for 3D movies, all that localized text needs to be mapped and placed in 3D space. It’s an insane amount of work and we had a team of five or six people from Deluxe come down to Park Road for that.

The manipulation of the HFR effect in the film, specifically the creative choices made about blending back and forth between 48fps and 24fps looks inside a single sequence, feels like the emergence of a new cinematic language.  What are your thoughts on this as a storytelling technique?

I think it’s particularly effective when used to contrast normal reality with something heightened, like entering the underwater world of Pandora. The “First Swim” sequence is a really great example of this. The characters are in a new place, experiencing something new for the first time, and it’s a wondrous sequence of exploration and awesomeness.

Just like most cinematic language, it needs to be able to scale, and currently we have to maintain support for 24fps exhibition as well, so that’s a bit restrictive.

I grew up watching 24fps movies and TV shows, or more accurately VHS tapes of movies with 3:2 pulldowns. To me there was always a division between “real” movies and broadcast news or home video.

I don’t think future generations need to adhere to those same definitions.

Many people believe that “cinema” has to be 24fps because that’s how it’s always been. But it’s been that way because it was the *bare minimum* to achieve persistence-of-vision as determined by tests a hundred years ago. They didn’t have modern action scenes, massive theaters, or high dynamic range projection and television. The ASC Manual has prescriptions for maximum pan-speeds, and they’re all based on 14fL film projection and a generic assumption about screen size.

We’re currently limited to frame rates that fall within SMPTE video standards, and by the need to derive 24fps deliverables for broad compatibility with older projectors and display devices. That leaves us with 48fps and 120fps as the possibilities for HFR.

I think we may eventually get to a point where all displays and projectors support variable refresh rates, like NVIDIA G-Sync or AMD FreeSync in gaming. That would provide some really interesting creative opportunities.

Is the HFR workflow realistic, accessible & desirable for other types of films & filmmakers?

It really depends on the level of complexity and nuance they can afford. Anybody can shoot high-frame-rate video on their phone. But there’s more to it than picking a frame rate. How do you vary the shutter angle between shots and scenes, or even within a shot, to make it pleasing to watch and still feel “cinematic,” whatever that means to you as a filmmaker?

If you’re working in CGI, are you choosing a global shutter angle for a shot? Or are you rendering different layers of the shot at different shutter angles for creative or technical reasons?

Regarding the HFR color grading workflow, we did it with Resolve Studio out of the box, with nothing proprietary. As long as your system can handle the storage and processing requirements, it’s pretty seamless.

We had shots that were native 24fps, 24fps double-printed within 48fps containers, and native 48fps shots, all coexisting within the same timeline. It’s more work organizationally, but it’s not beyond the ability of anyone willing to put in the work.
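Conceptually, “double-printing” just means repeating each 24fps frame twice so it occupies two slots in the 48fps container, and a 24fps derivative can in principle be recovered by taking every other frame. The sketch below illustrates the arithmetic of the idea only; it is not a description of Resolve's internals or of the actual sub-derivative process.

```python
# Conceptual sketch of frame handling in a 48fps container. This is just the
# arithmetic of the idea, not Resolve's internal mechanism.

def double_print(frames_24):
    """Repeat each 24fps frame twice so it fills a 48fps timeline."""
    return [f for frame in frames_24 for f in (frame, frame)]

def derive_24(frames_48):
    """Pull a 24fps sequence out of a 48fps one by taking every other frame."""
    return frames_48[::2]

native_24 = ["A", "B", "C"]
in_48_container = double_print(native_24)   # ['A', 'A', 'B', 'B', 'C', 'C']
print(in_48_container)
print(derive_24(in_48_container))           # back to ['A', 'B', 'C']
```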

How are you archiving the film for future remastering?

I have quite a bit of experience with archival and near-term remastering of recent digital films, particularly since I finished the remastering of the original Avatar for its 2022 theatrical re-release.

It’s important for the studio, as well as myself or any other artist who’s going to be involved down the road, to have detailed documentation and thorough archives. In the near term, Resolve projects linking to the original OpenEXR sources work great.

I intentionally restricted myself to the built-in Resolve tools, with no 3rd-party plugins, for maximum near-term compatibility.

If we need to produce a new format in a new colorspace a few years from now, it’ll be as easy as unarchiving and upgrading to the latest version of Resolve.

Long-term is a different story. It’s hard to predict how software and workflows will change in even ten, twenty, or thirty years, let alone generations from now. Today we’re remastering classic films from a hundred years ago. Hopefully this film and others made today will share the same longevity and future generations will enjoy them too. It’s important to create long-term formats that are software agnostic too.

For this reason, I like to archive both completely ungraded, textless, scene-referred renders of the film, either as TIFF or EXR, as well as graded, textless, scene-referred renders. Because every grading choice is made underneath the “show LUT” or the output LUT or the display reference transform, whatever you want to call it, those grades live in a scene-referred working space. In this case, that’s Slog3/SGamut3.Cine.

The scene-referred graded archival master is the most important archival element you could produce, and it’s something a lot of studios are moving away from in favor of original media and project files.

You need both, but they serve different purposes. You can’t realistically expect software developers to maintain backward compatibility for decades. Some grading software that’s still in use today may not be in use a year from now. Finding people who can operate it in the future will be challenging. Fortunately, 20th Century Studios and Disney understand this and trust us to archive what we need for any possible use down the road.

If, in the distant future, someone somehow invents an immersive holographic display technology with a full human-vision gamut, we’ll have left them a useful care package. Hopefully, they’ll have all the building blocks they need to reproduce our creative intent on new technologies, even after we’re all long gone.

What were your takeaways from the project?

There really is no such thing as being too prepared. Every part of our process was thoroughly planned and rehearsed.

We remastered Avatar (2009) in 4K and re-released it theatrically in September 2022, all while working on Avatar: The Way of Water. That allowed us to test every step of the process, from online editorial and color, to renders and mastering QC, to upload and delivery to our digital cinema partners, to distribution.

We knew it wasn’t going to be easy, but with enough preparation nothing would be too hard.

What excites you about the future of the Avatar series, color grading and filmmaking in general?

Avatar is an incredibly rich and rewarding franchise to be creatively involved with. I think a lot of people were skeptical that a second Avatar could amaze them the way the first did. But once again, Jim showed the world something they had never seen before, fulfilling both a technological promise thirteen years in the making and a creative one, with a new underwater world, new characters, and things we hadn’t seen before. I trust the next movie, and the movie after that, will similarly blow us all out of the water.

From an industry perspective, I think there’s a potential for a shift in how colorists and filmmakers work together. For years, digital colorists, and lab timers before them, have worked in big facilities, often under contracts. Geolocation can challenge the partnerships between filmmakers and colorists. I think we have an opportunity in the years moving forward to redefine that relationship.

I’d like to see facilities become technical service providers and hosts to guest colorists.

Right now, if your director lives and works in a place your post facility doesn’t, you might not get the job. But if you as a colorist can be mobile and go anywhere the production wants and four-wall a host facility, then you can do anything. It’s worked for sound mixers for years and I think it can work for colorists too.

Conclusion

I offer innumerable thanks to Tashi Trieu for giving us such a detailed look at the DI process on Avatar: The Way of Water. If you haven’t seen the film yet, go see it in Dolby Cinema 3D, the way it’s truly meant to be seen.  The film is an amazing experience, and a remarkable technical achievement.

You can find out more about Tashi and his work at www.tashitrieu.com.

– Peder
