Getting To Know Dolby Vision Part 2: Delivering To non-HDR Televisions

January 4, 2018

In part 2 of our Dolby Vision series you learn more about suite setup. Then you dive into using the Dolby Vision trim controls (in DaVinci Resolve) for Standard Dynamic Range TVs. Plus, Dolby Vision 'mezzanine' file creation.

Day 4: 24 Insights in 24 Days 2018 New Year Marathon

Part 2: Dolby Vision Client Monitoring, Trim Controls & Rendering

It’s true, I’ve had Dolby Vision on the brain for the past few months – ever since the event my company hosted with Dolby in November 2017 showing off our Dolby Atmos and Dolby Vision capabilities, I’ve been actively seeking projects to grade in HDR so I can put the Dolby Vision workflow through its paces. My enthusiasm continued when in mid-December, Mixing Light contributor Joey D’Anna and I headed out to Burbank, CA to Dolby’s facility for a one-day intensive Dolby Vision training class.

The class exceeded our expectations! Besides getting a chance to see the Dolby Vision workflow in action AT Dolby (as well as seeing the Pulsar mastering monitor in action) and talking to their in-house colorist, over five or six intensive hours we learned a ton about the Dolby Vision delivery and encoding pipeline, Dolby Vision theater setup and more. It was a great experience.

Since I’m still excited about that trip, it’s the perfect time to dive right back in and continue our free series on Dolby Vision grading.

In Part 1, you looked at the essential setup, gear, and overall Dolby Vision workflow, but I stopped short of looking in depth at the real power of Dolby Vision – trimming for SDR and active metadata.

In this installment (Part 2), you learn about:

  • Dolby Vision client monitoring
  • The Dolby Vision trim controls in DaVinci Resolve (which are similar to other grading systems) and how they work
  • The trim workflow and making trims for multiple display targets
  • Setting up a render for Dolby Vision content

More On Client Monitoring

Before diving into making SDR trims using the Dolby Vision trim controls, let’s address the topic of HDR client monitoring.

After we published part 1 of this series, I got quite a few emails (remember, you can use the comments too!) about how exactly I had the consumer LG OLEDs connected in my suite. As I mentioned in part 1, normally I don’t have two client displays, but for the purpose of the demo we did with Dolby I had two – one showing the SDR-mapped output of the CMU (matching my SDR reference monitor) and one showing the PQ HDR signal. Day to day, I skip the SDR client monitor and use just one 65″ LG C7 as the client monitor so clients can preview HDR grades.

There are a few issues with trying to run the LG OLEDs for this purpose.

  1. Dimming – it’s well known that LG OLEDs start to dim aggressively after about a minute of static content on screen (common in the grading suite).  While this is annoying in SDR, it’s REALLY annoying in HDR.  You want to turn off the dimming behavior, which you can with the help of an LG service remote (please keep in mind this voids your warranty).
  2. Enabling the HDR Metadata Flag – with professional HDR displays that connect via SDI, all you have to do is enable a viewing mode with PQ and you’re set.  On a consumer display that connects via HDMI, the display will only go into an HDR mode (HDR10, HLG, Dolby Vision, etc.) if there is a metadata flag in the HDMI stream. Embedding this flag can be accomplished quite easily if your Resolve I/O device supports HDMI 2.0: all you have to do is enable HDR metadata over HDMI in Resolve and connect the display to your I/O.  If you’re not connected directly via HDMI, things get trickier and you’ll need to rely on 3rd party hardware to ‘trick’ the TV into HDR mode (more on this in a moment). The third option is to disable the metadata flag requirement in the service menu altogether (called module HDR), making the LG essentially act like a professional display, but I’ve had calibration issues with this option disabled.
  3. Calibrating – right now, most LG OLEDs do not support uploading 3D calibration LUTs. The 2018 LG OLEDs support direct upload of 8-bit calibration LUTs; for more bit depth (or on older OLEDs) you’ll need to rely either on the built-in Color Management controls or add an external in-line LUT box for SDR/HDR calibration LUTs.
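To make the ‘metadata flag’ in item 2 a bit more concrete: it’s essentially a small packet of static HDR metadata (defined by SMPTE ST 2086 and CTA-861) carried in the HDMI stream. Here’s a rough sketch of the kinds of fields involved – the names and values below are illustrative, not the actual AJA or LG API:

```python
# Illustrative sketch of the static HDR metadata a downstream box embeds
# in the HDMI stream (per SMPTE ST 2086 / CTA-861). Field names and
# values are assumptions for illustration, not a real device API.
hdr_infoframe = {
    "eotf": "SMPTE ST 2084 (PQ)",          # this is what flips the TV into HDR mode
    "colorimetry": "BT.2020",              # container primaries
    "mastering_primaries": "P3-D65",       # actual gamut used in the grade
    "max_mastering_luminance_nits": 1000,  # peak of the mastering display
    "min_mastering_luminance_nits": 0.0001,
    "max_cll_nits": 1000,                  # brightest pixel in the content
    "max_fall_nits": 400,                  # highest frame-average light level
}
```

When a TV sees a PQ EOTF in this packet it switches into its HDR picture mode – which is exactly what the AJA box is doing downstream of the LUT box in my signal chain.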

In my setup, I can’t connect directly to an HDMI output of a BMD I/O device, so I adapt SDI > HDMI.  Additionally, disabling the metadata flag option on my C7 led to really bad SDR/HDR calibrations, so I use a box to embed an HDR metadata flag into the HDMI stream.

I take 2x 3G loop outs of my HDR reference monitor into 2x 3G inputs of an FSI BoxIO.  This unit stores 1D/3D calibration LUTs in either HD (single channel) or UHD (dual channel, up to 2160p30).  I then take the dual 3G outputs of the BoxIO into two of the inputs of an AJA Hi-5 4k Plus.

The AJA box lets you convert the SDI signal to HDMI, but the really cool part is that by using the HDR tab of the AJA Mini Configuration Utility you can insert HDR metadata into the HDMI output, which tells the LG TV to go into HDR mode.

The HDR tab of the AJA Mini Configuration Utility for the Hi5 4k Plus lets you choose the relevant HDR signal options so that HDR metadata can be embedded in the HDMI output.  Notice the choice of BT.2020 colorimetry with a P3-D65 gamut, just like the color management settings in Resolve.

With HDR enabled on the AJA box, you can calibrate for ST 2084 PQ and a P3 gamut using a tool like CalMAN or LightSpace, as well as preview HDR grades for clients on a client-sized monitor.  Keep in mind I did say preview.

Currently, the best-in-breed client HDR displays from LG, which are the popular choice for overall performance, can only pump out about 700-750 nits. That falls short of the 1000 nits Dolby recommends for mastering (a level the popular Sony X300 can hit).  So you won’t be able to see the full brightness of the grade on the client monitor, but I’ve yet to find this to be a dramatic problem.

Conversely, you could use a consumer LCD display that can go much brighter so no roll off or clipping would occur, but I have not had any experience with those displays in my suite (I’m an OLED snob!).

Finally, one last thing to be aware of in a multi-display setup where consumer and professional displays are being used is APL and limiting behavior.

APL, or Average Picture Level, is important in an HDR discussion because displays will trigger ABL, or Automatic Brightness Limiting, when APL hits a certain level – and these thresholds differ from display to display.  For example, I’ve pushed APL on a Sony X300 and the ABL didn’t kick in, but the LG OLED dimmed considerably because its ABL engaged.  You’ll have to experiment to find a good balance point for APL.
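If you want to put a number on APL, it’s just the average luminance of a frame expressed against the display’s peak. A quick illustrative sketch (real displays don’t publish their ABL thresholds, so treat any specific percentage as an assumption):

```python
# Hedged sketch of APL (Average Picture Level): the mean luminance of a
# frame, expressed as a percentage of the display's peak brightness.
import numpy as np

def apl_percent(frame_nits, peak_nits):
    """Average picture level of a frame as a percentage of display peak."""
    return 100.0 * float(np.mean(frame_nits)) / peak_nits

# A flat 250-nit test frame on a 1000-nit display sits at 25% APL.
frame = np.full((2160, 3840), 250.0)
print(apl_percent(frame, 1000.0))  # -> 25.0
```

A display’s ABL kicks in above some panel-specific APL threshold, which is why the same material can dim on one display and not another.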

We’ll talk strategic and creative grading techniques in a later part of this series, but I tend not to push APL too hard, instead using specular highlights to add in some HDR magic and avoid such problems.

Step 1: Making Your Initial Image Analysis

The first step in a Dolby Vision HDR workflow is to color correct and create an HDR color grade on a shot.  But what do you do next? There are two schools of thought:

  • The first school of thought: continue color correcting in HDR for the entire project. Then, in a second pass, deal with the SDR conversion and make the necessary trims.
  • The second school of thought: color correct a shot in HDR, then immediately create the SDR version of that shot.

I see benefits in both approaches, but my personal approach is to grade the entire project in HDR and then come back and work on the SDR side of things.  I find this lets me stay focused on the HDR grade and not visually distract myself – I keep the SDR monitor turned off while making the HDR grade.

With either approach, after your HDR grading you’ll need to analyze your images and create metadata. With Dolby Vision you can do this on one frame, one shot, or the entire timeline.

What do we mean by analyzing?

Dolby uses proprietary algorithms to create SDR images from your HDR color corrected images. It’s partially based on the target display (remember that 100 nit 2.4 gamma target when we set up the Dolby Vision project?). When you tell the software to analyze a shot, the color grading system analyzes the tonality of the shot and embeds tone mapping metadata into the SDI output. Then the Content Mapping Unit (CMU) displays the image on its SDI output, tone mapped to your target display.  This analysis takes a few seconds per shot and, when done, generates what Dolby calls Level 1 (L1) metadata.  You can think of Level 1 metadata as the automatic mapping performed by the Dolby Vision algorithm.
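Dolby’s actual analysis algorithm is proprietary, but conceptually L1 metadata boils down to statistics about a shot’s tonal range in the PQ domain. Here’s a hedged sketch – `analyze_shot` is purely illustrative, while `pq_to_nits` is the genuine ST 2084 EOTF math:

```python
# Conceptual sketch of L1-style analysis -- NOT Dolby's proprietary
# algorithm. L1 metadata roughly describes a shot's min/mid/max luminance.
import numpy as np

def pq_to_nits(e):
    """ST 2084 (PQ) EOTF: normalized code value (0-1) -> luminance in nits."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    ep = np.power(np.clip(e, 0.0, 1.0), 1 / m2)
    return 10000 * np.power(np.maximum(ep - c1, 0) / (c2 - c3 * ep), 1 / m1)

def analyze_shot(frames):
    """Illustrative stand-in for shot analysis: returns (min, mean, max)
    luminance across PQ-encoded pixel values for a shot."""
    nits = pq_to_nits(np.asarray(frames))
    return float(nits.min()), float(nits.mean()), float(nits.max())
```

For intuition: PQ code value 1.0 maps to 10,000 nits, and a code value of roughly 0.51 maps to about 100 nits – the SDR trim target.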

After the analysis has taken place, the ‘mapped’ SDR shot will appear on your SDR monitor connected to (or routed from) the CMU output.

Prior to initial analysis, you can see that the SDR image (right) looks markedly different from the HDR grade (left). This is normal! Once footage has been analyzed and mapped to a monitor target by the CMU it will look much closer to the HDR grade. If the automatic analysis is not quite right, you can ‘trim’ the SDR signal to get it just right.

Here’s how to perform the analysis:

  1. In DaVinci Resolve, choose Color > Dolby Vision
  2. In this submenu you have several options for shot analysis, which are mostly self-explanatory – Analyze All Shots, Analyze Selected Shot(s), Analyze Selected and Blend, Analyze Selected Frame
  3. Analyze All Shots and Analyze Selected Shot(s) are my go-to options.  Analyze Selected and Blend was designed for averaging similar selected shots and applying that analysis to the constituent clips; I’ve not had great luck with this option, and Dolby has recommended that each shot be analyzed on its own or that shot analysis data be copied to similar clips – but your mileage may vary.  Analyze Selected Frame can be used to quickly visualize an SDR mapping without waiting for an entire shot to be analyzed.
The Dolby Vision submenu lets you choose how to analyze a shot. Analysis modes are also available from the Resolve Advanced panel.

The analysis takes a few seconds per shot.  I can typically analyze an entire 90 min film (1200 shots) in about 30 minutes, but your mileage may vary depending on your grading system and CMU capabilities.

Step 2: Making Your SDR Trims

After generating L1 metadata (the basic target analysis), for me the power of the Dolby Vision system lies in the ability to tweak that analysis to get the derived SDR version of your grade just right.  Unlike HDR10, which applies a global mapping approach, Dolby Vision lets you make trims on a shot-by-shot basis.

Shot by shot trims are what Dolby refers to as Level 2 (L2) metadata. Unlike the automatic L1 metadata, L2 metadata is a creative venture. It is you, the colorist, in combination with feedback from your clients, that adjusts the derived SDR version of the grade so that it best matches the creative intent of the HDR Grade.

Here’s the interesting thing: it might be that the L1 metadata – the automatic analysis – does a good enough job that no further work or thought is required for the SDR version of the shot!  Many of the colorists I’ve talked to doing Dolby Vision work say this happens on about 40%-45% of shots, which just shows you how good the automatic mapping is.

With that said, having the control to get the SDR version just right is a major advantage of the Dolby Vision system.

Once the initial analysis has taken place, the Dolby Vision trim panel becomes active (in Resolve 14 it’s to the right of the Motion Effects panel).

The Dolby Vision Trim controls offer simple yet effective control over massaging a derived SDR signal for a given target.

The trim panel allows you to adjust the derived SDR signal, post-CMU mapping, to best match the creative intent of the original HDR grade.  I know, these controls seem quite simple, but because the Dolby algorithms are so good, a little goes a long way with the trim controls.  What’s more, Dolby will continue to improve and add to these controls in their SDK, which BMD and other companies can then implement.

There is one VERY BIG CAVEAT about analyzing shots:

Only shots on video track 1 will be analyzed!  If you have a multi-track project, you’ll need to move clips down onto track 1 prior to invoking analysis on those shots.  After the analysis is done you can return them to whatever track you need them on – but a safer approach is to limit your projects to one video track. According to Dolby, the need to move clips down onto track 1 is a Blackmagic issue and not a Dolby Vision one.

While I haven’t encountered a situation where this is a showstopper, I can see it being a bit of a pain in complicated timelines – hopefully Blackmagic will address it in future updates.

About The Trim Controls

Although the Dolby Vision trim controls are pretty simple, they’re worth explaining in a bit more detail.

First up, let’s look at the three sliders labeled Crush, Mid & Clip.  In many tools that support Dolby Vision trim controls these sliders are not accessible, but in recent versions of Resolve they are. Essentially, these controls display the initial L1 metadata generated by the analysis. As their names imply, they describe how the CMU maps the three different parts of the tonal range, based on the average over the entire clip.

According to Dolby, you really shouldn’t mess with these controls – they’re the baseline for the automatic tone mapping, and altering them may produce unexpected results when the grade is played on a display whose capabilities fall between the 100-nit trim target and the 1000- or 4000-nit mastering target.

The controls you’re adjusting are found on the left-hand side of the trim palette:

Lift, Gamma, Gain – These controls, albeit in slider form, should be familiar to any colorist.  They’re the primary controls you’ll use for tweaking the initial SDR mapping. Just keep in mind they ‘feel’ slightly different than Resolve’s built-in Lift, Gamma, Gain controls.

Saturation – This one should also be familiar.  It controls the overall saturation of the SDR signal.  Curiously, there is no overall Hue control currently in the Dolby trim controls.

Chroma Weight Offset – in the process of mapping the HDR, wide gamut signal to SDR there is a balancing act between saturation and brightness.  The Chroma Weight Offset slider allows you to adjust the balance between brightness and saturation of signal outside the capabilities of the mapped target.  I use this control all the time to strike the right creative balance in SDR shots.

Tone Detail Weight – Tone Detail Weight allows you to control the level of detail in highlights that might be blunted in the HDR > SDR mapping.  However, for a 100-nit target the control doesn’t work; its functionality is only available in higher-nit target trims.
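For intuition, here’s what classic lift/gamma/gain math looks like applied to a normalized 0-1 signal. To be clear, this is the textbook formula, not Dolby’s proprietary trim implementation – which is part of why these controls ‘feel’ a bit different from Resolve’s built-in versions:

```python
# Textbook lift/gamma/gain on a normalized (0-1) signal -- a conceptual
# stand-in only; Dolby's actual trim math is proprietary.
import numpy as np

def lift_gamma_gain(x, lift=0.0, gamma=1.0, gain=1.0):
    """Lift raises the blacks, gain scales toward white,
    and gamma bends the midtones."""
    x = np.clip(gain * (x + lift * (1.0 - x)), 0.0, 1.0)
    return np.power(x, 1.0 / gamma)
```

With all three at their defaults the signal passes through unchanged; a small positive lift raises a black pixel off zero while leaving full white alone, which matches how the trim sliders behave in practice.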

The Dolby Vision trim controls are available on the center panel of the DaVinci Resolve Advanced panel allowing for quick access and streamlined trim workflow.

Each control can be adjusted directly in the Dolby Vision palette, but if you’re using a DaVinci Resolve Advanced Panel the Dolby Vision trim controls are also available in the center panel making trims quick and easy.

In general, I find the trim process to be pretty straightforward as the initial tone mapping is usually quite good. But to reiterate, the goal should be to try to get the SDR signal to match the HDR one as closely as possible.

It’s also important to understand that the trim controls are a work in progress – as Dolby updates their SDK and exposes additional functionality, it’s up to companies like Blackmagic to implement the new controls.

Finally, when you make trims, understand that you are only affecting the SDR version of the shot that has been mapped via the CMU, and that your trims exist as an abstracted correction in Resolve – meaning they aren’t contained within a node, but live only in the Dolby Vision palette.

About the Trim Workflow

When it comes to making SDR trims in the Dolby Vision workflow, you’ll quickly find that while the process is straightforward and simple, there is a ton of repetitive work.  What I mean is that in a typical scene you’ll often make the same trims over and over.

It could be that you have a lot of shots from the same camera angle, or it could be that a scene universally needs an adjustment.  Whatever the case, Resolve offers a few simple ways to copy and paste trims from shot to shot.


The Dolby Vision palette also offers several options for copy and pasting trim & analysis data, which allow you to make quick work out of a scene with similar shots that require the same trims.

In the palette menu (the three horizontal dots in the upper right of the palette), you’ll find options for copying and pasting trims.  You’ll also notice in this menu that you can copy and paste analysis data – these options are sometimes used when a shot doesn’t analyze properly (a rare occurrence) and you’d like to grab the analysis data from a similar shot.

Just keep in mind the Dolby Vision palette copy/paste functionality lives outside the normal node copy/paste workflow.

(optional) Step 3: Trims For Multiple Targets

Dolby dictates that you must always target a 100nit Rec709 device.  By doing so, a Dolby Vision capable TV can map the HDR signal from 100nits through the mastering level based on the capabilities of the display.
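Conceptually, you can think of the display-side mapping as the TV interpolating between your 100-nit trim and the HDR master based on its own peak brightness. Dolby’s real algorithm is proprietary; this toy sketch just interpolates a trim value in log-luminance to illustrate the idea:

```python
# Toy sketch of display-side mapping: interpolate a trim parameter for a
# display whose peak sits between the 100-nit trim target and the
# mastering peak. Dolby's actual mapping is proprietary; interpolating
# in log-luminance is just one plausible, illustrative scheme.
import math

def interp_trim(value_100, value_master, display_peak,
                trim_peak=100.0, master_peak=1000.0):
    """Blend a 100-nit trim value toward the master value as the
    display's peak brightness approaches the mastering level."""
    t = ((math.log10(display_peak) - math.log10(trim_peak)) /
         (math.log10(master_peak) - math.log10(trim_peak)))
    t = min(max(t, 0.0), 1.0)  # clamp outside the trim/master range
    return (1.0 - t) * value_100 + t * value_master
```

On a 100-nit set the display would use your trim outright; on a 1000-nit set it would use the master; a 600-nit OLED would land somewhere in between.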

However, there is nothing stopping you from being a little more OCD than that.  You can create multiple trims that can co-exist at the same time.  So let’s say that you know for a particular project that a 600nit P3 LG OLED is going to be a big subset of the viewing audience.

Directly from the Dolby Vision palette, you can choose additional targets for trims. Dolby Vision supports multiple levels of trim, but keep in mind the 100-nit Rec 709 trim is required.

In the Dolby Vision palette, you can click on the target pull down and choose an additional target. To be clear, any new trims you do on the new target don’t replace the ones you’ve done on the required 100nit Rec 709 target – they’re in addition to your earlier trims.

While many colorists probably won’t bother with an additional trim due to budget and or time, it’s reassuring to know that additional trims are possible. I’ve found that each targeted pass for trims takes about 10% of the original time to grade the project.

Step 4: Rendering

Now that you’ve analyzed shots (or an entire timeline) and you’ve gone through the trim process (and made an award-winning HDR grade), it’s time to render!  Yes, I know I’ve skipped over the whole grading thing – we’ll cover that in a later part of this series!

One thing that you have to get your head around is that the initial render from DaVinci Resolve is the start of the process in creating a final Dolby Vision deliverable that will stream on OTT services like Amazon and Netflix.

In part 3, we’ll explore the additional parts of the deliverable process but from Resolve, you’re going to create two elements:

  • 16-bit TIFF Image Sequence or OpenEXR Image Sequence
  • Dolby Vision XML

On our trip to Dolby in December, Joey and I learned that ProRes 4444 and ProRes 4444 XQ may soon become rendering options to feed the rest of the encoding pipeline, as Apple has embraced Dolby Vision. But for now, you’re best sticking to TIFF and EXR image sequences.

When it comes to naming styles used for your Dolby Vision renders, this is a good template:


Rendering the files necessary for the rest of the Dolby Vision pipeline is easy. Choose either 16-bit TIFF or OpenEXR. ProRes 4444 & 4444 XQ options might be recommended by Dolby at a future date.

Keep in mind that 16-bit TIFF and EXR sequences are BIG, especially at high resolutions.  You’ll need a lot of storage space; however, there is really no need to play back these image sequences in real time.
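If you want to ballpark the storage hit, the math is simple – an uncompressed 16-bit RGB UHD frame is about 50 MB, and a feature-length sequence adds up fast:

```python
# Back-of-the-envelope storage math for a 16-bit RGB TIFF sequence.
# Assumes uncompressed frames; real TIFFs vary with compression/headers.
width, height = 3840, 2160                 # UHD frame
bytes_per_frame = width * height * 3 * 2   # 3 channels x 2 bytes (16-bit)
frames = 24 * 60 * 90                      # 24 fps x 90-minute feature
total_tb = bytes_per_frame * frames / 1e12
print(f"{bytes_per_frame / 1e6:.1f} MB/frame, ~{total_tb:.1f} TB total")
```

That works out to roughly 6.4 TB for a single 90-minute feature, before you’ve rendered any alternate versions – which is why fast playback of these sequences isn’t the goal.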

After rendering, the last step is to choose File > Export AAF, XML….

With the Dolby Vision components installed, you’ll have the option of exporting a Dolby Vision XML, which along with the TIFF/EXR renders will be fed into the Dolby Mezzinator software or a 3rd party tool.

Because the Dolby Vision toolset is enabled (see part 1 in this series for more information on enabling the tools), you get a new type of XML that can be exported.  In the Save As Type pulldown, choose the option for Dolby Vision XML.  This XML is specifically formatted for Dolby Vision.

Coming Up In Part 3: Dolby Professional Tools, Mezzanine File Creation & Deliverables

You’ve done your HDR grade, made SDR trims with the Dolby Vision toolset and rendered out a TIFF sequence and Dolby Vision XML.  That’s it, right?

Well, not exactly…

The render and XML are the starting point for creating a Dolby Vision mezzanine file, extracting additional deliverables like an SDR Rec 709 file, and more.  In part 3, we’ll explore what Dolby calls its Professional Tools – command line programs for encoding, editing metadata and more. We’ll also touch on commercially available tools that have Dolby Vision support, like Colorfront’s Transkoder.

As always if you have a question or something more to add to the conversation, please use the comments below.  






  • Willian Aleman

    Robbie, Thanks a lot for taking the time to make this detailed and informative insight series about Dolby Vision HDR workflow.

    Dolby Vision HDR acquisition at the present time is more of a high-end niche, out of the reach of the majority of us because of hardware costs and the annual license required to deliver the Dolby Vision HDR format.

    Due to this, Dolby Vision is not only unaffordable for most independent content creators and small post-production houses, but also for distributors and HDTV and reference monitor manufacturers.

    Having said that, it would be beneficial if, in addition to the HDR Dolby Vision series, Mixing Light would give insight into the workflow for color grading, mastering and delivering HDR outside the Dolby Vision system.

    I would be interested in hearing about HLG and PQ HDR workflows in DaVinci Resolve. What extra work is needed outside DaVinci Resolve to encode the appropriate metadata flag so that HDR TVs, Netflix, Amazon, YouTube, Vimeo and the new mobile devices play back HDR programs correctly? It’s my understanding that delivering HDR from DaVinci Resolve outside Dolby Vision requires external software for flagging the final metadata required to communicate with the HDR world outside the DaVinci ecosystem.

    I cannot wait for the next insight in this series.


    What BMD devices are you using to convert the SDI to HDMI 2.0?

    Since the FSI BoxIO in dual-channel mode only works at a 17x17x17 grid size instead of the 33x33x33 of single-channel mode, I would like to know your experience in terms of banding and blocking artifacts in HDR display and/or ACES workflows.

    Last time I exchanged email with an AJA representative about the compatibility of DaVinci Resolve and AJA devices, he wrote back saying that at the moment (about two months ago) it wasn’t possible.
    I would like to know if the AJA Hi-5 4k Plus you are currently using is compatible with DaVinci Resolve?

    Recently, I have been using the Atomos SUMO 19 HDR Monitor/Recorder on-set. My use of the monitor is mainly as a live transcoder/proxy through the SDI out of cameras filming in raw format, for dailies and a faster turnaround to the editor or colorist on-set, near set, or at the post house. Of course, conforming to different file names between the camera source footage and the external recorder is an unsolved metadata issue, not only for Atomos but for Convergent Design’s external monitor recorders too. However, conforming and syncing via timecode between the camera and recorder footage makes it an easier task to do on-set or in post. In addition, switching from SDR to HDR monitoring on-set is a benefit for projects that are going to be released in both SDR and HDR.

    Since the SUMO 19 is a 1200-nit panel and supports HLG/PQ in/out, would you consider using the SUMO 19 as a client display instead of the 65″ LG C7’s 750 nits? This, of course, takes into consideration the smaller screen size (19″) of the monitor versus the larger LG – which translates into a smaller display with more nits versus a larger one with fewer nits.

    Thanks once again for the in-depth insight on HDR.

  • Robbie Carman

    Hey Willian,

    Thanks for the kind words, I’m glad you’re enjoying the series so far!

    There is a lot to your comments so let me start at the beginning.

    In part 3 I think I need to better clarify some of the licensing and costs but let me try to do a little of that here.

    1. There are no fees associated with Dolby Vision on-set or from a shooting perspective. Any camera capable of recording RAW, 10/12 Bit Log is a perfect candidate for any flavor of HDR including Dolby Vision. The same source can be used for HLG, HDR10, Dolby Vision. All modern cinema cameras are capable of shooting the dynamic range necessary for HDR workflows.

    2. Dolby DOES NOT ever charge licensing fees to content creators. If you go shoot a film and want to finish in Dolby Vision, Dolby doesn’t charge a fee for that. You’re correct that licensing fees are paid by TV manufacturers to license the Dolby Vision chip in a TV and that OTT services like Netflix pay for encoders that support Dolby Vision, and that post houses do have equipment costs and maintenance fees paid to Dolby – But I want to be very clear that a production does not pay specifically for Dolby Vision. If the production can afford it/desires it there is nothing stopping a production from finishing in Dolby Vision.

    To that point, I’m now suggesting to many clients to do a DV finishing workflow because thru that process we can get HDR10, SDR and DV deliverables easily.

    3. One thing to keep in mind: there is nothing ‘special’ about Dolby Vision from the HDR aspect in the grading suite. Dolby invented PQ, which systems like HDR10, Dolby Vision and others use. So when grading HDR there is really no difference between Dolby Vision and HDR10 – Resolve is set up the same way, the same monitoring is used, etc.

    The Dolby system comes into play with SDR and the ability to apply active metadata on a shot-by-shot basis. That part, as you’ve read, requires Dolby hardware (the CMU) and grading system support. But the HDR side is identical to an HDR10 workflow, as they’re both PQ Rec 2020 (P3-D65 gamut).

    4. Yes! Once I finish this series on Dolby Vision I’ll jump over to the HDR10 workflow. You’re correct that right now a 3rd party tool is required to insert additional metadata into the final HEVC deliverable – this can be done with sophisticated (read: costly) programs like Transkoder and others, but there are also many free options available like Hybrid, a front end to FFmpeg that I’ve been using. I’ll def follow up on this.

    Ok on to your questions:

    1. I’m not using any BMD device to convert SDI to HDMI 2.0. In my signal path I’m taking SDI off my router/monitor out to the BoxIO and then on to the AJA Hi-5 4k Plus, which converts to HDMI and inserts HDR flags. The Ultra Studio Extreme and the Decklink 4k Studio cards have HDMI 2.0 on them directly, so you could simply connect HDMI to an HDR consumer display and enable HDR metadata in Resolve to serve the same purpose.

    2. I’ve been very happy with the BoxIO, even in dual-channel mode at 17x17x17, and have not noticed any artifacts and/or banding. I’m leveraging both its 1D (grayscale) and 3D LUT capabilities – the 1D, in my opinion, is especially important on the LGs as they’re not very linear.

    3. The AJA box has nothing to do with Resolve per se. I’m not connected to it through Resolve at all. It’s simply a downstream converter in my SDI signal path. You’re correct that AJA interfaces are not, and probably never will be, supported directly in Resolve.

    4. I have no hands-on experience with the SUMO, but remember peak white is not the only measure of a good HDR device. One reason everyone is so hot on LGs is their black-level performance and how well they can be calibrated. With that said, I do agree that there is an advantage to having a client display that equals your mastering monitor in terms of peak white output.

    Hope this helps!

  • Willian Aleman

    Hi Robbie,

    Thanks a lot, for the clarification and answering my questions. I can’t wait for part three.

    1 & 2. When I refer to Dolby Vision content creator cost, I’m not referring to on-set licensing or the direct cost to the production. Rather, I mean the content creators and the producers of indie feature films.

    For example, traditionally it has been the norm for production houses, those that can, to charge an extra fee to master in DCI-P3 because of the required license. However, it would be interesting to know whether or not the post house is charging a “hidden” price to the production to master Dolby Vision HDR. Otherwise, how can the owner of the post house justify the investment in the system?

    3. I’m glad to hear that there is no difference in using Dolby Vision in Davinci Resolve and other HDRs.

    4. The insight about HDR10 workflow with Hybrid for the inclusion of the metadata is going to be beneficial.

  • Robbie Carman

    Ahh I see what you mean.

    Yes, many facilities will probably charge more for Dolby Vision and HDR work to help cover their investment. My HDR grading rate is about $85/hr more than my regular HD/SDR grading rate. But I think I take a slightly different approach than a lot of facilities – I want to empower content creators to use the new technology – so I play the long game/volume game instead of the quick-recoup game… but that’s just me.

  • Willian Aleman

    Thanks a lot for the clarification. It’s greatly appreciated.

  • Hi Robbie,

    Great series so far!

    How exactly do graphics and titles work with HDR and Dolby Vision? I’m used to the traditional grade without graphics with an XML round trip. When you run the files through the Mezzinator, is it just the graded footage or does it have all the titles and graphics on it too?

  • Robbie Carman


    It’s a great question and one that is not totally standardized. Yes, you could absolutely bake in the titles and other graphics at the grade stage, prior to rendering out your TIFF, EXR or XQ file for the Mezzinator. Dolby is currently recommending that those titles be handled just like SDR, so 80-90 nits. Another way, if you’re using a mastering tool like Clipster, Transkoder, etc., is to render out textless and handle those elements there – those tools are also well suited to handling versioning requirements, i.e. different languages, etc.

  • Thank you so much for the response! Can’t wait for part 3!

  • Elliott Balsley

    How do you calibrate the LG in HDR mode? I’ve been told recently by SpectraCal that CalMAN cannot create 3D LUTs for HDR. I would imagine that when you make the calibration LUT, you would have to specify the max luminance; for example, on a 1000-nit display it should pass through values above 75% unchanged.

    Is there any way to control how the LG handles highlights above its max 750 nits? I guess it tries to roll off rather than clip, but that mapping could vary between manufacturers, which sort of defeats the purpose of DV. Maybe in an ideal world you could set that as another target through the CMU.

    If you make a timeline edit after the DV analysis, does the metadata follow the shots, or do you have to redo the entire trim pass?

    I’m looking forward to part 3!

  • Robbie Carman

    Hey Elliot –

    Sorry I missed this earlier in the week.

    Are you trying to use the LG as an HDR mastering monitor? Obviously, because of its intended purpose, there are features of the display – i.e. LG’s tone mapping – that make it more challenging to act as a true mastering monitor. But in my experience it works great as an HDR preview monitor, and that’s how I use them in my facility.

    With the latest version of CalMAN, they do have SDR/HDR LG workflows thanks to their new relationship with LG, which provides direct control and upload of LUTs for the 8-series and later displays. In SDR, they are generating a true 33-point 3D LUT and a high-precision 1D LUT.

    In HDR, as you sort of pointed out, the 3D LUT is not generated in the traditional sense, but rather from a 3×3 matrix that’s then extrapolated (through fancy algorithms) to a 33-point LUT. The reason this is done in HDR mode is that to get accurate results you’d need HUGE test pattern sets (9,000+ patches), and at high nit values the potential for permanent screen burn-in in the time it would take to present that many patches is very real.

    There are also many ongoing discussions about OLEDs – specifically WRGB OLEDs like the LGs use – because with such large HDR test pattern sets, thermal build-up becomes a significant issue.

    HDR calibration on these sets is very much an ongoing discussion between the panel makers, meter companies and software companies like SpectraCal and Light Illusion.

    So for the next question, the short answer is YES – but are you willing to totally void your warranty?

    You’re correct that the default behavior/tone mapping of the LG panels is to provide roll-off to match the peak brightness performance of the panel.

    There are tone mapping options in the service menus, including the ability to turn it off completely – meaning that if you push a 1000-nit signal to a 750-nit panel, anything above 750 nits will just clip. However, on the LG panels this clipping is particularly nasty, and turning these options off requires a complete redo of all SDR/HDR calibration.

    While disabling the tone mapping might seem like a good thing for a true reference monitor, I would discourage it for the LGs. Smart use of HDR scopes can still inform you of where clipping is happening.
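    To put concrete numbers on the clipping being discussed, here’s a minimal Python sketch of the SMPTE ST 2084 (PQ) inverse EOTF – the standard curve that maps absolute luminance in nits to a 0–1 signal level. It’s illustrative only (real pipelines quantize to 10/12-bit code values), but it shows why a 1000-nit highlight sits near 75% signal while a 750-nit panel tops out near 72% – with tone mapping disabled, everything between those two signal levels hard-clips.

    ```python
    # SMPTE ST 2084 (PQ) inverse EOTF: absolute luminance (nits) -> 0..1 signal level.
    # Constants come straight from the ST 2084 specification.
    M1 = 2610 / 16384
    M2 = 2523 / 4096 * 128
    C1 = 3424 / 4096
    C2 = 2413 / 4096 * 32
    C3 = 2392 / 4096 * 32

    def pq_encode(nits: float) -> float:
        """Encode an absolute luminance value (0-10000 nits) as a PQ signal level."""
        y = nits / 10000.0
        return ((C1 + C2 * y**M1) / (1 + C3 * y**M1)) ** M2

    # Signal levels for the nit values mentioned in this thread:
    for level in (100, 600, 750, 1000):
        print(f"{level:>5} nits -> {pq_encode(level):.3f} PQ signal")

    # 1000 nits lands at ~0.752 (the ~75% Elliott mentions); a 750-nit panel's
    # ceiling is ~0.721 - the narrow band in between is what clips or rolls off.
    ```

    Note how perceptually non-linear PQ is: 100 nits already uses roughly half the signal range, which is why the top stops of HDR highlights occupy a surprisingly small slice of code values.
    
    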

    You’re absolutely right that tone mapping differs between companies, but one thing you have wrong: to be a DV-certified display, there is a Dolby chip in the TV (performing a role similar to what the CMU does) that Dolby has certified for that line of sets – meaning Dolby knows what that panel can do.

    But that doesn’t really matter. If you’re using an LG panel as an HDR monitor in the grading environment you’re not really grading ‘Dolby Vision’.

    You’re using PQ. Indeed, the sets never go into the DV mode that enables their chip. Instead, the TVs display a generic ‘HDR’ banner indicating that the TV is working in PQ (assuming the right HDR metadata is reaching the set).

    As mentioned in this series, Dolby Vision really comes together in the final encoding stage, with DV trim metadata and final encoding of the HEVC file.

    Now, when it comes to making trims, you’re DEAD ON about making trims specifically for the typical HDR TV. Currently (and this will probably change soon) Dolby has P3-D65 and Rec 2020 600-nit trim targets for all supported grading systems.

    In addition to the mandatory 100-nit trim, I generally do a 600-nit trim as well and look at it on my LG in PQ/HDR mode. As you point out, this lets me make specific trims for that target, which currently makes up a lot of the TVs on the market.

    Dolby Vision supports multiple trims at the same time.

    Make sense?

    Finally, for your last question: it depends where you are in the finishing process. In Resolve, on the timeline, yes – trims stay with the clip. But if you’ve output a final render and Dolby XML and then make changes, no. Remember, to be properly analyzed, clips need to be on V1 – at least right now.

    Sorry for the very long-winded response. Let me know if any of this doesn’t make sense.

  • Elliott Balsley

    Yep, that all makes sense. So I suppose Dolby has to evaluate a sample of a particular model of TV before they stamp the DV badge on it, so they can tune the highlight roll-off.

    One other thing I’ll point out is that with the latest version of Transkoder, you can actually “grade” the DoVi metadata on the TV itself. It triggers the TV into DoVi mode (not generic HDR), sends the level2 metadata over HDMI, and you can grade it in real-time. Pretty neat.

  • Hello.
    I’m starting to study HDR color correction. I’m currently working/studying on a Win10 station running DaVinci Resolve 15 Studio, a DeckLink Mini Monitor 4K, and three Dell monitors: two 1080p (GUI and scopes) and one 4K HDR (model: UP2718Q) connected to the DeckLink through the HDMI port using an Atevon 4K cable. I know it’s not the best setup for HDR, but it’s a home-office workstation, and I don’t think anyone can have a Dolby Pulsar at home…

    I did some tests, viewing on an LG B7 TV through YouTube, and the HDR I graded in Resolve worked well. But now I would like to take a step forward: I want to be able to simultaneously view the HDR and SDR grades on two monitors. I know the process of using the Right Eye/Left Eye stereo grading tools to make adjustment passes for Standard Dynamic Range (SDR)/High Dynamic Range (HDR) grades.

    My questions now are: what do I need to change in my current setup, and how do I split the signal – right-eye monitor SDR, left-eye monitor HDR, for example?

    Can I send two signals through my DeckLink Mini Monitor – one through HDMI to one monitor and another through SDI to a second monitor using an SDI-to-HDMI converter?

    Or do I need to buy another card – a DeckLink 8K Pro, for example – and an SDI-to-HDMI converter (remembering that it’s a home setup, so I need the best cost–benefit)…

    How do I specify which monitor is the HDR and which is the SDR?

    Thank you!

  • Balaji G

    I’m currently using an AJA Hi5-4K-Plus converter, and in Resolve I’m using HDR color management settings. My monitors are an EIZO CG3145 and an LG OLED C9. How do I trigger Dolby Vision on my LG OLED TV? Can you help me out?

  • Robbie Carman

    Hi Balaji

    First, you’ll need to make sure you’ve done the base-level Dolby analysis on your shots, and/or additional trims using the Dolby trim controls if you have a license. You don’t say how your monitors are hooked up, but I’m assuming they’re getting the same signal and you’re not set up for dual HDR/SDR monitoring?

    After generating analysis metadata, you can enable DV tunneling via the Hi5-4K-Plus. You just need to make sure you have the latest software and firmware for it – they released this a couple of months ago.

    With the correct metadata flags chosen for the Hi5, you should see the Dolby Vision badge pop up on your LG. Just keep in mind that DV tunneling is meant as a QC check and not a replacement for a proper HDR/SDR monitoring path.

  • Robbie Carman

    Marcelo –

    I’m so sorry – I 100% missed this. I’m sure you’ve figured this out by now, but the L/R eye workflow is now deprecated in Resolve; instead, there is an option in the monitor settings section of Project Settings to use dual SDI monitoring. However, this is the issue with your setup: while you can of course use the HDMI output of that DeckLink (with the proper options selected in Project Settings), you can’t do an SDR/HDR monitoring setup with an HDMI + SDI connection – they both need to be SDI. By default, SDI 1 will be your HDR output, with SDI 2 being the SDR.

