Using Fusion’s 3D Camera Tracker for Patching and Object Placement

October 6, 2020

Learn how to insert 3D objects or paint out defects in videos with a moving camera using DaVinci Resolve Fusion's 3D Camera Tracker.


Reconstructing a camera’s motion

In previous Insights, we’ve looked at Fusion’s Point Tracker and Fusion’s Planar Tracker, so now we’re going to delve into the 3D world and learn about the Camera Tracker.

In this Insight, we’ll look at:

  • Auto tracking the camera move
  • Solving the track
  • Refining the solve by deleting less accurate tracking points
  • Exporting from the CameraTracker to the 3D scene

Solving the track is just the start.

Once the 3D scene has been created with a virtual camera, we can add things (such as 3D text) into the scene by aligning and positioning them in front of the camera.
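
If you prefer to wire this up from Fusion's scripting console, here's a minimal sketch of merging a Text3D into the exported scene. It assumes the console's predefined `comp` variable and the default exported node name Merge3D1; the tool IDs and input names below are assumptions you should verify with GetInputList().

```python
# A minimal sketch: merge a Text3D into the scene the Camera Tracker
# exported. Assumes Fusion's Py3 console (where `comp` is predefined)
# and the default exported node names -- both are assumptions.

merge3d = comp.FindTool("Merge3D1")   # created by the CameraTracker export
text3d = comp.AddTool("Text3D")

text3d.SetInput("StyledText", "HELLO")
# Position the text in front of the solved camera; the exact values
# depend on your scene's scale. The input name is an assumption -- check
# text3d.GetInputList() if it doesn't take.
text3d.SetInput("Transform3DOp.Translate.Z", -5.0)

# Merge3D grows extra scene inputs as you connect them.
merge3d.ConnectInput("SceneInput3", text3d)
```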

We'll also look at using the camera as a projector to paint on a patch in the scene and have it appear to move with the scene, for quick and easy paint fixes.

What’s the point of 3D Tracking (and how do you get good results)?

The purpose of the Camera Tracker tool is to calculate (solve) the motion of a real-world camera by analyzing a piece of video. Once it's figured out how the camera was moving in your shot, it creates a 3D setup in Fusion consisting of a Camera3D and a Point Cloud (along with a Merge3D and a Renderer3D). The Camera3D has all the characteristics and motion of the real-world camera, and it also shows the video 'projected' onto an image plane as an optional reference or background.
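
To make that structure concrete, here's a hedged scripting sketch that builds the same node graph by hand. This isn't the exporter's actual code, just an illustration of how the pieces connect; it assumes Fusion's Py3 console (`comp` predefined), and the tool IDs and input names are assumptions.

```python
# Illustration only: roughly the node graph the CameraTracker's export
# produces, built by hand so the structure is explicit. Run in Fusion's
# Py3 console; tool IDs and input names are assumptions.

camera = comp.AddTool("Camera3D")       # carries the solved camera motion
cloud = comp.AddTool("PointCloud3D")    # the solved 3D feature points
merge = comp.AddTool("Merge3D")
renderer = comp.AddTool("Renderer3D")

merge.ConnectInput("SceneInput1", camera)
merge.ConnectInput("SceneInput2", cloud)
renderer.ConnectInput("SceneInput", merge)
```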

Like almost all tools in Fusion, the Camera Tracker has a Mask input. It’s a good idea to mask out any moving objects, such as people or vehicles, as these will make the tracking and subsequent solve more difficult. In my example, you can see the man’s arm move to raise the soda can. The points that are automatically tracked on his arm only confuse the solver, so it would be better to manually animate a rough mask to prevent the tracker from looking at this area.
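
If you'd rather hook that mask up via scripting, here's a rough sketch. "EffectMask" is Fusion's standard mask input name and "PolylineMask" is the Polygon tool's ID; whether the CameraTracker uses that exact mask input is an assumption, so confirm with GetInputList().

```python
# Sketch: keep the tracker off the moving arm with a loosely animated
# Polygon mask. "EffectMask" is Fusion's standard mask input name; for
# the CameraTracker specifically, confirm it with GetInputList().

tracker = comp.FindTool("CameraTracker1")   # default node name assumed
mask = comp.AddTool("PolylineMask")         # the Polygon mask tool

# Keyframe the polygon roughly around the arm in the viewer, then:
tracker.ConnectInput("EffectMask", mask)
```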

There are other challenges in this example shot in addition to the man moving his arm. There's a lot of detail in the very far distance, which isn't very helpful to the solver. Also, the rocks and gravel on the ground all look similar to each other, which confuses the tracking process. It also helps to provide as much detail as you can about the real camera's parameters, so the solver has more information to work with. All that said, the Fusion tracker and solver managed OK on this portion of the clip.
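
Here's a hedged sketch of feeding known camera metadata to the tracker via scripting. All three input names are assumptions modeled on Fusion's usual camera controls; print GetInputList() and match the names against the Camera tab.

```python
# Sketch: hand the solver the real camera's metadata when you know it.
# The input names are assumptions -- print tracker.GetInputList() and
# match them against the CameraTracker's Camera tab.

tracker = comp.FindTool("CameraTracker1")
tracker.SetInput("FocalLength", 24.0)   # lens focal length, mm
tracker.SetInput("ApertureW", 0.980)    # film gate width, inches
tracker.SetInput("ApertureH", 0.735)    # film gate height, inches
```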

For best results, the solve operation needs a good selection of tracking points that are accurate and persist across a good number of frames.

The temptation is to set the tracking parameters to have many thousands of points, hoping for better results. However, too many tracked points will actually just slow the solving process to a crawl and may yield less accurate results. For best results, the solver actually needs fewer but more accurate tracking points. This is why it’s important to adjust the filters and delete inaccurate points if you’re getting a solve error that’s too large.
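
As a sketch, the Solve-tab filters might be tightened via scripting like this. The input names are assumptions to check against GetInputList(); the filter and delete actions themselves are Inspector buttons, so that final step stays manual.

```python
# Sketch: tighten the Solve-tab filters so short or drifting tracks are
# dropped before you re-solve. Input names are assumptions -- compare
# tracker.GetInputList() with the Solve tab's controls, then use the
# tab's filter/delete buttons and re-run the solve.

tracker = comp.FindTool("CameraTracker1")
tracker.SetInput("MinimumTrackLength", 20)   # frames a point must persist
tracker.SetInput("MaximumTrackError", 0.25)  # per-track error ceiling, px
```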

What’s an acceptable ‘solve error’ size?

Fusion generates a solve error number, measured in pixels. Under 1 pixel of error is OK, and under 0.5 pixels is good. But just as you don't want too many tracking points (which tends to result in large solve errors), don't delete too many points either, or your results can be even worse! Yes, it's a balancing act that you'll learn to execute through experimentation.
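
Encoded as a tiny helper, that rule of thumb looks like this (the thresholds come straight from the guideline above; the function itself is purely illustrative):

```python
def rate_solve_error(error_px: float) -> str:
    """Rule of thumb from this Insight: under 0.5 px is good, under 1 px OK."""
    if error_px < 0.5:
        return "good"
    if error_px < 1.0:
        return "acceptable"
    return "too high: filter weak tracks and re-solve"

print(rate_solve_error(0.62))  # -> acceptable
```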

Catching The Texture

I have to give a shout-out to Eric Westphal for his demo to fellow Blackmagic trainers; his tips on projecting a texture and painting on it really helped me understand this process. The keys to Eric's techniques shown in this Insight are the UV Renderer and the Catcher tool. To do a 2D paint fix and place it in the scene as the camera moves around it, there are a few steps we need to execute (sketched in the script after this list):

  • First, set the Camera to project the video as a texture – In order to allow objects in the scene to receive this projected texture, they must have a Catcher connected to their input. The Catcher material will ‘catch’ the projected texture – this is particularly important if you need an object’s texture to also respect an alpha channel.
  • Change the Renderer3D to UV Render mode – This ‘unwraps’ the texture on objects, thus enabling us to paint on the flattened texture.
  • Finally, add the ReplaceMaterial3D node – This node allows us to put the painted image back into the texture and have it placed in the scene as an invisible paint fix (using an additional Renderer3D in its default OpenGL mode).
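
For reference, here's a hedged scripting sketch of that whole chain, wired by hand so the order of operations is explicit. It assumes Fusion's Py3 console (`comp` predefined) and default exported node names; every tool ID and input name below is an assumption to verify with the Select Tool dialog and GetInputList(), especially the Catcher's tool ID and the Camera3D projection inputs.

```python
# A hedged sketch of the projector-paint chain described above. Assumes
# Fusion's Py3 console (`comp` predefined) and default exported node
# names; all tool IDs and input names are assumptions.

camera = comp.FindTool("Camera3D1")     # the solved, exported camera
merge3d = comp.FindTool("Merge3D1")

# 1. Camera as projector: enable image projection and feed it the clip.
#    The projection input names vary by version -- check the Inspector.
camera.SetInput("ProjectionEnabled", 1)

# 2. A patch object that "catches" the projection via a Catcher material.
catcher = comp.AddTool("TextureCatcher")    # Catcher tool ID assumed
patch = comp.AddTool("ImagePlane3D")
patch.ConnectInput("MaterialInput", catcher)
merge3d.ConnectInput("SceneInput3", patch)

# 3. Unwrap with a Renderer3D switched to its UV Renderer mode in the
#    Inspector, then paint on the flattened texture.
uv_render = comp.AddTool("Renderer3D")
uv_render.ConnectInput("SceneInput", merge3d)
paint = comp.AddTool("Paint")
paint.ConnectInput("Input", uv_render)

# 4. Put the painted texture back on the patch with ReplaceMaterial3D and
#    render the scene normally with a second, default-mode Renderer3D.
replace_mat = comp.AddTool("ReplaceMaterial3D")
replace_mat.ConnectInput("SceneInput", merge3d)
replace_mat.ConnectInput("MaterialInput", paint)
final_render = comp.AddTool("Renderer3D")
final_render.ConnectInput("SceneInput", replace_mat)
```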

Special Thanks: Sherwin Lau kindly allowed me to use this footage in this Insight.

– Jamie

