Here are three small projects that I used to teach myself basic motion tracking and rotoscoping in Blender.

#1 Duplicating and Rotoscoping an actor

This was my first attempt at rotoscoping a character to pass in front of another piece of footage. Things I learned, in no particular order:

  • Try to have your mask land on a natural line; it makes the seam much less visible.
  • Use at least one tracking marker to simplify moving the mask between keyframes.
  • Separate the mask into multiple pieces, for example head, shoulder, and arm. This makes fixing small errors in the mask much easier.
  • Common advice is to do the major movements first and then refine. I was trying too hard to perfect the entire mask during an extreme movement, which left spline points wildly out of place between keys as they tweened to the next position. A better way is to let the tracker take care of the major movements, do some simple scaling, and only once that is done for the entire shot refine to the perfect shape.
  • I didn't end up using the per-spline feather much, but I like that it can be used to apply blur in a much more controlled way.

#2 Motion Tracking combined with Rotoscoping

I spent a heap of time trying to align a cube with the truck. I think the most important thing is getting a good camera solve. If you don’t have that, you’re going to be fighting it the whole way.
Some parts of the solve that were troublesome came down to the camera settings. The original footage was 1280×1080, but the pixel aspect ratio was 1.5, giving 1920×1080 output. I did not realize what that meant and spent quite a bit of time with a very poor camera solve, unsure how to fix it. Once I set the pixel aspect in Blender, the solve error went from close to 1 px down to 0.22 px. There is also a refinement add-on that seems to automatically weight the tracks for a better solve; that got the error down to 0.04 px.
The optical center setting gave me some grief as well. I let Blender try to refine that property, but the center it picked was not correct. I just set it back to the mathematical center, i.e. (640, 540), and only let Blender refine the other parameters.
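The arithmetic behind both settings is simple but easy to overlook, so here is a quick worked sketch using this footage's numbers (plain Python, not Blender API calls):

```python
# Pixel aspect ratio: the stored frame is 1280x1080, but each pixel is
# 1.5x wider than it is tall, so the *displayed* frame is 1920x1080.
storage_width, storage_height = 1280, 1080
pixel_aspect = 1.5

display_width = storage_width * pixel_aspect
print(display_width, storage_height)  # 1920.0 1080

# The mathematical optical center is half the storage resolution,
# since tracking works in stored-pixel coordinates.
center = (storage_width / 2, storage_height / 2)
print(center)  # (640.0, 540.0)
```

If Blender's pixel aspect is left at 1.0, the solver assumes square pixels and tries to explain the horizontal stretch with the wrong camera model, which is why the solve error was so poor.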
Under Clip Display in Motion Tracking there is a 3D Marker option, which overlays the camera solve points back onto the footage and lets you double-check the result. I noticed that all the points were off by a similar amount, so I looked at the camera settings.
Tracking the person was much harder, but for a mask there is no real point in getting it perfect; I mostly wanted to know what the process was like. I did find that tracking her shirt was fruitful despite it being very dark. There was enough detail in the wrinkles to give me a couple of extra tracks and complete the solve.
I have not put much effort into masking the person at all. I found out that you can use the trackers to help automate the process a little bit: each spline point can be set to follow any marker. I wish there were a way for it to automatically figure out a mask from a group of trackers, but it is still helpful to have something so I don't have to move 20 spline points each frame.
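What "follow a marker" amounts to is simple enough to sketch in plain Python. This is a conceptual illustration, not Blender code, and the coordinates below are made-up numbers in the normalized 0..1 space Blender uses for masks:

```python
# Sketch of "spline point follows a marker": each mask point is effectively
# stored as an offset from a tracker, so when the tracker moves between
# frames, the whole group of points translates with it.

def follow_track(points, track_prev, track_curr):
    """Translate mask points by the tracker's frame-to-frame motion."""
    dx = track_curr[0] - track_prev[0]
    dy = track_curr[1] - track_prev[1]
    return [(x + dx, y + dy) for x, y in points]

# Hypothetical mask around a shoulder, with a tracker that moved up-right.
mask = [(0.40, 0.50), (0.45, 0.55), (0.50, 0.50)]
moved = follow_track(mask, track_prev=(0.42, 0.52), track_curr=(0.44, 0.55))
print(moved)  # every point shifted by roughly (+0.02, +0.03)
```

This only handles translation; you still refine the shape by hand, which is why letting the tracker do the broad motion first saves so much per-frame fiddling.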
I learned again (it's a hard lesson) that one should not aim for perfection too soon with the mask. Start very broad, let the tracker help, refine a little bit, and play with the compositor nodes. This saves tons of time: you get an iteration done quickly and see where it needs more work.
It was a good exercise in using the compositor. I did not realize that you have to make a render in order for the render layers to work in the compositor. Seems obvious now…
I used the 3D View a lot: I aligned the view with the camera and set the footage as the background, which let me line things up pretty well. It's very nice that it lets you play the footage and see the 3D models, but it isn't a catch-all. You won't have any idea about the mask, the particle system, or the lighting. The cat is well modeled but has some crazy hair system, which I had to scrap in order to make it work. It also has a fairly goofy armature, in my opinion.

#3 Animation and shadow catching

This builds on the last video, using the same footage. Having a camera solve and a mask means I can start getting wild with the rest of it. The most important part of this one was getting the side of the truck to map correctly to the plane.
I took the main idea from Ian Hubert's lightsaber tutorial: use the Window texture coordinates on the plane's material, go into camera view, bake the texture to a new image, and apply that back to the plane. I went with some quick keyed animations for the sides just to get a feel for it.
Setting the interpolation type to "Bounce" really helps liven it up. I sculpted the hand from just a subdivided cube with Dyntopo on. Mostly I used the Add, Grab, and Smooth brushes.
To avoid showing the sides of the box, I used the Holdout shader on the side closest to the camera. Now that I think about it, maybe I could have used my shadow catcher as a mask? I have to do a little more research on that. The sides of the truck have an invisible material that picks up shadows; I found some tutorials online that detail the nodes needed.
For future reference, the node chain is:
  • Diffuse → Shader to RGB → Color Ramp (almost all white) → Fac of Mix Shader
  • Diffuse (black color) → input 1 of Mix Shader
  • Transparent → input 2 of Mix Shader
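The same chain can be built in Blender's Python console. This only runs inside Blender (it needs bpy), and the node identifiers and socket names are my best recollection of the current API rather than something taken from the tutorials, so treat it as a sketch:

```python
# Eevee shadow catcher: a sketch of the node chain above, built via bpy.
# Must be run inside Blender; node/socket names are assumptions.
import bpy

mat = bpy.data.materials.new("ShadowCatcher")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

diffuse = nodes.new('ShaderNodeBsdfDiffuse')      # picks up scene lighting
to_rgb = nodes.new('ShaderNodeShaderToRGB')       # Eevee-only conversion
ramp = nodes.new('ShaderNodeValToRGB')            # the "almost all white" color ramp
shadow = nodes.new('ShaderNodeBsdfDiffuse')       # black diffuse: the shadow color
shadow.inputs['Color'].default_value = (0, 0, 0, 1)
transparent = nodes.new('ShaderNodeBsdfTransparent')
mix = nodes.new('ShaderNodeMixShader')
output = nodes.new('ShaderNodeOutputMaterial')

# Pull the white stop toward black so only real shadows survive the ramp.
ramp.color_ramp.elements[1].position = 0.1

links.new(diffuse.outputs['BSDF'], to_rgb.inputs['Shader'])
links.new(to_rgb.outputs['Color'], ramp.inputs['Fac'])
links.new(ramp.outputs['Color'], mix.inputs['Fac'])     # Fac of Mix Shader
links.new(shadow.outputs['BSDF'], mix.inputs[1])        # input 1: shadow
links.new(transparent.outputs['BSDF'], mix.inputs[2])   # input 2: transparent
links.new(mix.outputs['Shader'], output.inputs['Surface'])
```

Where the ramp outputs white (lit areas), the Mix Shader shows the transparent input; where it outputs black (shadowed areas), it shows the black diffuse, so only the shadows render.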

Ideas for improvement:

  • Materials on the objects
  • More shadow catching planes
  • Better lighting/adjust sun position
  • Render in Cycles
  • Model a mechanism for extending the hand
  • Refine the mask
  • Hinge mechanisms for the doors/better doors

Credits:

The cat was created by JonasDichelle:
https://www.blendswap.com/blend/18519
Footage from Hollywood Camera Work:
https://www.hollywoodcamerawork.com/tracking-plates.html
Projecting image from video onto plane:
https://www.youtube.com/watch?v=bVUeRIY1E-M
Shadow catcher resources:
https://artisticrender.com/how-to-create-a-shadow-catcher-with-eevee-in-blender/
https://www.youtube.com/watch?v=NFcSuMxm4GE