Update 1: Originally published April 2016, this article was updated in October 2016 to include links to the new AcesCentral.com website. In support of AcesCentral.com we’ve also made this Insights series free to read.
Update 2, late October 2016: Added mention of the ACEScct option included in the 1.03 specification.
Color Management With The Academy Color Encoding System
For the past few years, ACES, or the Academy Color Encoding System, has been gaining adoption on features and higher-end television shows.
The Academy (yes, the Oscars), along with many partners and contributors, has put considerable thought and resources into the color science, marketing, and overall development of the ACES color management system.
AMPAS has been pushing the message of ACES and the advantages of a sophisticated color management system at events all over the world since the launch of ACES 1.0 last year.
AMPAS has even sponsored the NAB Colorist Mixer the past couple of years (thank you!).
Like many of you, I’ve been aware of ACES and at least conceptually aware of how it worked, but assumed ACES was something that really just had benefits for features or other ‘high-end’ projects.
So, prior to the 1.0 release of ACES, I never really dug deep and learned how it worked.
Recently, I was asked to grade a feature ‘doc’ that was shot on quite a few cameras – Arri Alexa, Blackmagic 4k and Canon C300.
Additionally, there were going to be about a dozen shots that would need to have some VFX/CGI work done on them – hence why I put doc in quotes!
Normally, I’d grade this project like any other – Rec 709, manually matching shots and tweaking the VFX/CGI as needed when I got those shots back into my grading timeline.
However, the filmmaker I was hired by was pretty sophisticated in her approach and had a very strong compositing background (she was doing the VFX work herself).
She even mentioned ACES in some of the initial discussions we had about the project. As those discussions progressed, it was decided to use an ACES workflow on the film.
I was upfront with my client that an ACES workflow was new to me and that I needed to do some research on ACES prior to the project to really get comfortable with all that the workflow implied.
In this Insight, I want to share some of that research that helped me demystify ACES. Hopefully, I can help you in getting to know the essentials, vocabulary, and what problems ACES attempts to fix.
To be clear, this article isn’t going to break down the mathematical transforms of the different parts of the ACES system, or how to build your own IDT or ODT – that info is out there and well documented by the Academy and the ACES team.
In future Insights I’ll dive a bit more into my experiences about how ACES helped a VFX pipeline for one of my projects (we’ve already done the tests).
ACES – Unifying, Broad & Precise
When I think of ACES, I think of it as mainly a unifying tool.
As I’ll explain more below, one of the main benefits of an ACES workflow from a colorist’s point of view is that it takes the camera-referred data – i.e. the color science and sometimes secret sauce that each camera system bakes into its signal – and reverse engineers it (through an IDT) back into the pure linear light information of the actual scene in front of the camera. Theoretically, without any camera bias.
This is why ACES is often discussed as being scene referred or with the more technical phrase of scene linear.
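To make “scene linear” concrete, here’s a minimal Python sketch. The gamma value below is just an illustrative display encoding I chose for the example, not part of the ACES spec: in scene-linear data, doubling the light in the scene doubles the code value, while a display-referred gamma encoding does not preserve that ratio.

```python
# Scene-linear: code values are directly proportional to light in the scene.
def scene_linear(relative_light):
    return relative_light  # doubling the light doubles the value

# Display-referred: values are shaped for a display (illustrative gamma 2.4).
def display_encoded(relative_light, gamma=2.4):
    return relative_light ** (1.0 / gamma)

# One stop more light exactly doubles a scene-linear value...
print(scene_linear(1.0) / scene_linear(0.5))            # 2.0

# ...but the same stop is compressed by a display encoding.
print(display_encoded(1.0) / display_encoded(0.5))      # ~1.33, not 2.0
```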
Furthermore, the ACES color space is so large it actually encompasses the entire visual locus (everything humans can see).
The net result: in an ACES workflow you’ll never run into the limitations of smaller color spaces, and because ACES encompasses the entire locus, it’s future proof.
In other words, whatever color spaces are used for presentation and distribution in the future, new transforms (using the RRT/ODT combo) can be written in what’s called CTL, or Color Transform Language, to parse the ACES data into whatever space is appropriate.
When it comes to precision, ACES uses OpenEXR 16-bit half-float processing, which results in 33+ stops of scene referred exposure!
Keep in mind that even though EXR is used, often this processing is just internal to the app you’re using and no EXR files are created for you to manage – except in renders and handoffs (but rarely in acquisition).
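If you want a feel for where those stop counts come from, here’s a quick back-of-the-envelope sketch using the published limits of the IEEE 754 half-float format. The exact figure quoted for ACES depends on how much precision you demand at the dark end (denormal half-floats extend the range but with less precision), which is my reading rather than anything from this article:

```python
import math

# IEEE 754 binary16 (half-float) limits:
HALF_MAX = 65504.0              # largest finite half-float value
HALF_MIN_NORMAL = 2.0 ** -14    # smallest normal (full-precision) value
HALF_MIN_DENORMAL = 2.0 ** -24  # smallest denormal value

# Each "stop" is a doubling of light, so stops = log2 of the ratio.
stops_full_precision = math.log2(HALF_MAX / HALF_MIN_NORMAL)
stops_with_denormals = math.log2(HALF_MAX / HALF_MIN_DENORMAL)

print(round(stops_full_precision))   # ~30 stops at full precision
print(round(stops_with_denormals))   # ~40 stops if denormals are counted
```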
All of this taken together means a few things for ACES pipelines:
- Camera System Unification – Because of the scene referred/linear conversion, the biases of different camera systems are removed at the start of the grade.
- No Guessing For VFX/CGI Workflows – one reason that ACES has been embraced by VFX/CGI heavy films is because they’re compositing and working in linear anyway! And that linear data can then be rendered back to whatever is appropriate for the project, or kept linear and given back to a colorist to simply reapply their existing grade with no look shifting.
- Evergreen Digital Master – One thing that AMPAS pushes about ACES is that it allows for a true evergreen digital master because of the ultrawide/high dynamic range nature of the system. To me, that makes a ton of sense.
The scene referred / scene linear approach of ACES seems great, right?
Well, it is – except the human eye doesn’t respond to light in a linear fashion, and ultimately projects are viewed on TVs, projectors, etc. that assume much more limited color spaces. Because of this, ACES also incorporates display referred color management as part of its approach.
As I’ll explain below, ACES data is parsed through a couple different transforms for different color spaces and display devices.
The ‘Parts’ Of ACES
Even though ACES and its various transforms are mathematically complex, you can understand ACES by understanding what each part, or transform, does. Here’s the terminology for each of these transforms:
- IDT (Input Device Transform, now known as the ACES Input Transform) – The IDT takes the original camera data and converts it into the scene linear ACES color space. Because camera manufacturers know their cameras so well, they’re almost always responsible for developing the IDT (remember, this is the reverse engineering step to get back to scene referred linear data), and the IDT is written using the CTL programming language. In ACES, cameras have IDTs (and possibly multiple IDTs for different lighting – tungsten, daylight, etc.).
- LMT (Look Modification Transform, now known as the ACES Look Transform) – The first part of the ACES Viewing Transform (the Viewing Transform is a combination of LMT, RRT, & ODT). It provides a way to apply a look to a shot. While an LMT is not the same thing as an output LUT, it’s important to note that the LMT is applied after the color grading of ACES data. Not every tool allows for direct access to the LMT; output LUTs can be and often are employed for LMT-type transforms of ACES data.
- RRT (Reference Rendering Transform) – Think of the RRT as the render engine component of ACES. The RRT converts scene referred linear data to an ultrawide display referred data set. The RRT works in combo with the ODT. While the Academy publishes the standard RRT, and some systems like Baselight & Scratch have the ability to use customized RRTs (written with CTL), many color correction systems do not provide direct access to the RRT.
- ODT (Output Device Transform, now known as the ACES Output Transform) – The final step in the ACES processing pipeline is the ODT. This takes the ultrawide, high dynamic range data from the RRT and parses it for different devices and color spaces like P3, Rec 709, Rec 2020, etc. ODTs, like IDTs and RRTs, are written with CTL.
While these are the components of ACES that you’ll most often encounter, there are a couple other parts of ACES:
- APD (Academy Printing Density) – AMPAS supplied reference density for calibrating film scanners.
- ADX (Academy Density Exchange) – An encoding for exchanging film scan data, similar in purpose to the Cineon system.
There are also three subsets of ACES called ACEScc, ACEScct and ACEScg.
All three of these use gamuts that are not as big as regular ACES (all three are slightly bigger than Rec. 2020).
- ACEScc has the advantage of making color grading tools feel much more like they do when working in a log space. Many grading tools, including DaVinci Resolve, support ACEScc.
- ACEScct is just like ACEScc, but adds a ‘toe’ to the encoding so that when using lift operations the response feels more similar to traditional log film scans. This pseudo-logarithmic behavior is described as being more ‘milky’, or ‘foggier’. ACEScct was added with the ACES 1.03 specification and is meant as an alternative to ACEScc based on the feedback of many colorists.
- ACEScg is designed for VFX/CGI artists so their tools behave more traditionally.
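To show what that log ‘feel’ means in practice, here’s a sketch of the ACEScc and ACEScct encoding curves, following my transcription of the formulas in the Academy specifications (S-2014-003 for ACEScc, S-2016-001 for ACEScct) – treat the constants as my transcription, not a reference implementation:

```python
import math

def lin_to_acescc(a):
    """ACEScc log encoding of a linear (AP1) value, per S-2014-003."""
    if a <= 0.0:
        return (-16.0 + 9.72) / 17.52  # log2(2**-16) == -16
    if a < 2.0 ** -15:
        return (math.log2(2.0 ** -16 + a * 0.5) + 9.72) / 17.52
    return (math.log2(a) + 9.72) / 17.52

def lin_to_acescct(a):
    """ACEScct: same log curve, but with a linear 'toe' below the
    breakpoint, per S-2016-001 -- this is what makes lift behave more
    like traditional log film scans."""
    X_BRK = 0.0078125
    A, B = 10.5402377416545, 0.0729055341958355
    if a <= X_BRK:
        return A * a + B  # the linear toe
    return (math.log2(a) + 9.72) / 17.52

# 18% grey lands at roughly 0.41 in both encodings; they only differ
# in the deep shadows below the toe breakpoint.
print(round(lin_to_acescc(0.18), 4))
print(round(lin_to_acescct(0.18), 4))
```

Note how the two curves are identical for mid-tones and highlights; the difference colorists feel is entirely in how the shadows respond to lift.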
I’m still doing some testing, but on my upcoming project I think I’ll end up using ACEScc mainly for the more traditional ‘feel’ of the grading controls.
The ACES Pipeline
Now that we’ve explored the parts of the ACES pipeline, understanding how they go together is pretty easy.
Camera Data > IDT > Color Grading > LMT (optional) > RRT > ODT
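The chain above is really just function composition. Here’s a toy sketch of that idea – every function name and body below is a placeholder I made up to show the ordering, not a real ACES API:

```python
# Toy model of the ACES chain as function composition.
# All functions are identity-style stand-ins for the real transforms.

def idt(camera_rgb):
    """Camera-referred -> scene-linear ACES (normally vendor-supplied)."""
    return [c * 1.0 for c in camera_rgb]  # identity stand-in

def grade(aces_rgb, exposure=1.0):
    """Color grading happens on the scene-linear ACES data."""
    return [c * exposure for c in aces_rgb]

def lmt(aces_rgb):
    """Optional look transform, applied after grading."""
    return aces_rgb  # identity stand-in

def rrt_odt(aces_rgb):
    """RRT + ODT: render scene-linear data for a specific display.
    A real ODT applies a tone curve and gamut mapping; this stand-in
    just clamps to the display's 0-1 range."""
    return [min(max(c, 0.0), 1.0) for c in aces_rgb]

camera_rgb = [0.18, 0.18, 0.18]
display_rgb = rrt_odt(lmt(grade(idt(camera_rgb), exposure=2.0)))
print(display_rgb)  # [0.36, 0.36, 0.36]
```

Swapping the ODT stand-in for a different display is the whole point of the architecture: the grade upstream of the RRT/ODT never has to change.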
Remember how earlier I mentioned that ACES is a hybrid of scene referred and display referred color management?
In the graphic below (yeah I’m not a designer) you can see how the parts of ACES fit into the scene referred (top section) and display referred (bottom section) side of things:
This is obviously a simplified view of the workflow.
Last year at the Focus On Color Day (put on by Team Mixing Light every year at NAB – we’re on for 2016 too!) colorist Andrea Chlebak (Elysium, Chappie) joined us and described some special sauce her team applied to R3Ds before converting to ACES.
The point is, in any workflow including an ACES one, there are a couple places where customization can happen – of course the grading stage is one of them.
Is ACES Right For You?
Maybe. Maybe not.
The more you get to know ACES, the clearer the benefits of the pipeline become.
But with that said, there are several things to consider:
- The Pipeline – Do you work with other finishing artists? While the unifying aspects of ACES are hard to deny, if everyone is not tuned into the ACES workflow, things can get messy.
- IDTs – While major players like Arri, Sony, Canon and BMD have IDTs for their camera systems, there aren’t specific ones for every camera. Sure, there are generic Rec 709 or P3 IDTs, but in my mind, the generic nature of those IDTs in some ways defeats the precise scene linear approach of ACES. Of course, as new IDTs are developed by camera manufacturers the options expand. Are you working on a project with consumer cameras that don’t have specific IDTs? ACES is probably not the right workflow for you.
- Feel – One of the biggest differences between an ACES workflow and ‘regular’ 709 grading is the feel of the controls. As I’ve mentioned in previous Insights on Resolve Color Management (RCM), this is a pretty big deal. With ACES, many colorists choose to use ACEScc for a log feel to the grading controls even though the color space (still huge!) is smaller than the standard ACES specification.
Like most things in postproduction, experimentation is necessary to figure out if any particular workflow is right for you.
Prior to agreeing to grade the project I mentioned earlier in ACES, part of my research was to actually grade shots from the film, exchange files with the VFX artist (who happens to be the filmmaker), and make sure everyone in the pipeline was on the same page.
Put simply, don’t jump into the deep end of the pool – get in at the shallow end and swim to the deep end.
In part 2 of this series, learn how to setup ACES in DaVinci Resolve, how the grading controls work and some additional considerations—including a video that walks you through the process.
In part 3 of this series, learn how to execute the ACES VFX workflow. You’ll see how to send files off to the compositor/VFX artist and get them back all while keeping your grading work intact. I have two videos documenting this workflow.
As always, if you have questions or thoughts, please use the comments below.