Motion Tracking
Markos Leemet
The plan is to record at least a 4-minute video of me walking around with my camera or doing something, and then make it more awesome with CGI and 3D visual effects.
Learning Milestones (1-5)
Milestone 1 (21.09)
- Learn more about trackers!
- Create some models.
Before starting this project I already knew that Blender had a motion tracking feature, but I had never gotten any results at all: trackers were flying everywhere and not sticking to their places. Until now, that is, thanks to a lucky accident after giving it one more try. Attach:MT_test0.mp4
It is necessary to have at least 8 motion trackers visible in every frame of the video. That is enough for Blender to make a camera solve from the footage; more trackers make a more stable solve.
The video should be stable and not too shaky, because fast movement causes motion blur. It is harder to determine the correct position of a tracker on a blurry image, and unfortunately Blender may latch onto a new spot that is different from the starting point.
Motion blur can be reduced by using a faster shutter speed. That means less light per frame, so the room should be well lit. A higher frame rate can also help, but it costs more gigabytes and makes tracking slower for every second of footage.
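Just to illustrate why shutter speed matters more than frame rate here, a tiny back-of-the-envelope calculation (the pan speed number is made up, only the relationship matters):

```python
# Rough motion blur estimate for a panning shot.
# Blur length depends on how long the shutter stays open, not on the frame rate itself.
pan_speed_px_per_s = 1600                  # assumed: how fast features sweep across the frame

for shutter in (1/50, 1/200, 1/800):       # shutter time in seconds
    blur_px = pan_speed_px_per_s * shutter
    print(f"shutter 1/{round(1/shutter)} s -> ~{blur_px:.0f} px of blur")

# 1/50 s  -> ~32 px (trackers smear badly)
# 1/200 s -> ~8 px
# 1/800 s -> ~2 px (but needs a much brighter room)
```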
The video should not contain zooming. Blender's solver cannot handle zooming or a changing field of view, so I have to walk up to the object I want to zoom in on, or zoom digitally on the rendered video afterwards.
I have not had any luck or results with portrait-mode videos, which makes me believe that the videos should always be landscape.
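For reference, the detect/track/solve steps I do by hand in the Movie Clip Editor can also be scripted. This is only a rough sketch of the operators I believe are involved (the clip name and the settings are placeholders, and the operators need a Movie Clip Editor area as context), not a recipe I actually ran:

```python
import bpy

# Assumes a clip named "walk.mp4" is already loaded in the Movie Clip Editor.
clip = bpy.data.movieclips["walk.mp4"]          # placeholder name

# Detect trackable features and track them through the whole clip.
bpy.ops.clip.detect_features(threshold=0.3, min_distance=120)
bpy.ops.clip.track_markers(backwards=False, sequence=True)

# At least 8 tracks must be visible on every frame for the camera solve to work.
clip.tracking.settings.use_keyframe_selection = True   # let Blender pick the solve keyframes
bpy.ops.clip.solve_camera()

# The average solve error (in pixels) is a quick quality check; lower is better.
print("solve error:", clip.tracking.reconstruction.average_error)
```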
Milestone 2 (12.10)
- Particle effects in blender
I followed a good tutorial on YouTube on how to make basic particles in Blender. The way I originally thought it would work was to simply add the particle system to an object, but the particles didn't show up. I later discovered that my rendering engine was set to Cycles, which is my preferred rendering method, but this mode requires a little extra work. I had to set up a region where the particles will be rendered, a simple box in my case. The box has a special material which allows points to be rendered inside that box. It can also render vertices.
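Since I set all of this up by clicking around, here is roughly the same setup as a bpy sketch, assuming the "special material" is the Point Density texture rendered as an emissive volume inside the box (object names and particle counts are placeholders):

```python
import bpy

emitter = bpy.data.objects["Icosphere"]     # placeholder: the particle emitter
box = bpy.data.objects["RenderBox"]         # placeholder: the box the points render inside

# Add a particle system to the emitter.
emitter.modifiers.new("Particles", type='PARTICLE_SYSTEM')
psys = emitter.particle_systems[-1]
psys.settings.count = 5000
psys.settings.lifetime = 100

# Cycles doesn't draw the particles by itself; a volume material on the box
# samples them with a Point Density texture and emits light where they are.
mat = bpy.data.materials.new("ParticleVolume")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

pd = nodes.new("ShaderNodeTexPointDensity")
pd.point_source = 'PARTICLE_SYSTEM'         # can also sample an object's vertices
pd.object = emitter
pd.particle_system = psys

emit = nodes.new("ShaderNodeEmission")
out = nodes.new("ShaderNodeOutputMaterial")
links.new(pd.outputs["Density"], emit.inputs["Strength"])
links.new(pd.outputs["Color"], emit.inputs["Color"])
links.new(emit.outputs["Emission"], out.inputs["Volume"])

box.data.materials.append(mat)
```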
Here is a screenshot of the particle emitter (the sphere on the left). I used an icosphere because its only job was to reflect and diffuse, and an icosphere works perfectly for that because all of its faces have the same surface area. In the bottom-right of the picture there is a compositing node setup, mainly just for making the background a bit darker, adding contrast and a slight color correction.
Here is a picture without compositing.
And the final render. Rendering took ~2 hours. After rendering the MP4 file I converted it into a low-res GIF for easy access.
I wanted to "matchmove" a video somewhere to put the "exploding pFX ball" but the weather was rainy all the time and It was way too hard to track a footage.
Milestone 3 (26.10)
- Make an object that appears to be bigger on the inside
- Put it in a video footage
- Maybe some particle effects for cool animation
I just discovered programs that are meant for motion tracking: PFTrack, SynthEyes, mocha and boujou.
I also found libmv and I was happy to see that it is free and open source. I later discovered that this is an official part of Blender. I thought it was cool to discover the original program before Blender conquered it.
I thought I should give Boujou a try because I saw a video on YouTube on how to use it, from a channel called "Corridor".
So I imported a JPG sequence of my video. Then there was a button called "track features"; I guess it is similar to Blender's feature detection, except it automatically starts tracking too. Once that was done I had to do a camera solve. It turns out you can't export camera move data while using the demo version of the program. But I got it exported anyway (I found a guy on a Discord server who was kind enough to do it for me). He gave me a .txt file and told me that .fbx doesn't work well and that I had to use the Boujou-to-Blender import script. Here is his screenshot of the solved camera. I guess I am going to ask him to track videos from now on.
I finished importing the camera move data into Blender and also made an empty that looks like a cube. I added the camera and the point cloud as its child objects and then rotated the cube so that the points would lie on the ground. That makes it easier for me to add objects to the scene later without adjusting rotations and positions.
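A minimal bpy sketch of that parenting trick, assuming the imported camera and point-cloud empties already exist (the names and the final rotation are placeholders):

```python
import bpy
from math import radians

scene = bpy.context.scene

# An empty that acts as the "ground alignment" parent for the whole solve.
root = bpy.data.objects.new("SolveRoot", None)
root.empty_display_type = 'CUBE'
scene.collection.objects.link(root)

# Parent the solved camera and every point-cloud empty to the root,
# then rotate the root so the tracked points land on the ground plane.
solved = [bpy.data.objects["BoujouCamera"]]                          # placeholder name
solved += [o for o in scene.objects if o.name.startswith("Point")]  # placeholder prefix
for obj in solved:
    obj.parent = root

root.rotation_euler.x = radians(-90)   # whatever angle puts the points on the floor
```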
Now I started to work on the scene. First, I always put a simple cube into the scene to see if the camera solve is stable. Attach:MT_3_test.mp4
The camera solve seems to be pretty solid, so I started to work on the TARDIS from Doctor Who. I found a nice model online, but it didn't have an interior, as expected.
https://img1.cgtrader.com/items/82992/d735724c0d/large/doctor-who-tardis-3d-model-stl.jpg
The only thing left to do was to think about how the "bigger on the inside" effect could work. At first I had an idea to use backface culling to render the inside but not the outside. That didn't work well, but then I found that Blender has a special material that makes everything behind it invisible. I used a plane with that material on it to hide the interior, leaving only the doorway open.
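I believe the material in question is the Holdout shader (it punches a see-through hole in the render so the footage shows instead of whatever is behind the plane). A minimal sketch, assuming that's the right node and with a placeholder object name:

```python
import bpy

# A "hide everything behind me" material: Holdout renders with zero alpha,
# so with a transparent film the background footage shows through the plane.
mat = bpy.data.materials.new("HideInterior")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
nodes.clear()

holdout = nodes.new("ShaderNodeHoldout")
out = nodes.new("ShaderNodeOutputMaterial")
links.new(holdout.outputs["Holdout"], out.inputs["Surface"])

bpy.context.scene.render.film_transparent = True   # so the hole composites over the footage

# Assign it to the masking plane that surrounds the doorway (placeholder name).
bpy.data.objects["DoorMaskPlane"].data.materials.append(mat)
```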
After I made some cool materials and did some final color adjustments, it was time to render the scene. Rendering took me roughly 1.2 days.
Milestone 4 (09.11)
- Track face
- Add CGI elements
So I started by adding some tracking markers to my face with a permanent marker. The points are supposed to follow topological lines, because later I want to connect the trackers with edges and faces and get a somewhat realistic result. It also helps to see deformations better than "box modeling" would. After that was done, I recorded the video, loaded it into Blender and started tracking the points.
After manually finding the points and tracking them, I solved the tracking points. This time, instead of solving the camera movement, I solved the "empty" movement while the camera stays in one spot. Then I made a mesh and started to place its vertices onto the empties with eyeball accuracy. Then I hooked each vertex to its corresponding empty, using a hook modifier.
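The hooking itself is very repetitive, so in recent Blender versions it could also be scripted roughly like this (I did it by hand; the vertex indices and empty names below are placeholders):

```python
import bpy

face_mesh = bpy.data.objects["FaceMesh"]     # placeholder: the low-poly face mesh

# One hook modifier per (vertex, tracking empty) pair:
# the empty drags its vertex around as the track moves.
pairs = {
    0: "Track.Brow.L",          # vertex index -> empty name (placeholders)
    1: "Track.Brow.R",
    2: "Track.MouthCorner.L",
}

for vert_index, empty_name in pairs.items():
    hook = face_mesh.modifiers.new(f"Hook_{empty_name}", type='HOOK')
    hook.object = bpy.data.objects[empty_name]
    hook.vertex_indices_set([vert_index])    # which vertex this hook controls
```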
I also manually connected the faces, and continued with the other half of the face, because I can't just use a mirror modifier: each tracker has to manipulate its own vertex. After that was done I added some fake depth and fake vertices to the face geometry.
Here is a video of the trackers controlling the vertices. Attach:MT_4_vid1.mp4 Another video where I tried to project a UV texture of PewDiePie (left) and used my video as a background (middle), and then I made a quick little extra to try to make a game character change emotions. Attach:MT_4_vid2.mp4
My technique for morphing the emotions was to find appropriate emotion shapes in the shape key menu. The model had a lot of them, but not enough. I then added a driver (a Blender feature) to the shape keys, added a reference point (an empty object with coordinates) and took the distance between the tracker and the reference point. That distance was then used as the shape key value. There are certainly better and cleaner ways to do it, but this is how I did it.
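In bpy terms the distance-driven shape key looks roughly like this (object, shape key and empty names are placeholders, and the scaling in the expression has to be tuned by eye):

```python
import bpy

face = bpy.data.objects["GameCharacter"]     # placeholder: the model with shape keys
tracker = bpy.data.objects["Track.Jaw"]      # placeholder: empty following the face track
ref = bpy.data.objects["RefPoint"]           # placeholder: fixed reference empty

# Add a driver to the "MouthOpen" shape key (placeholder name).
key_block = face.data.shape_keys.key_blocks["MouthOpen"]
drv = key_block.driver_add("value").driver
drv.type = 'SCRIPTED'

# A LOC_DIFF variable gives the distance between two objects.
var = drv.variables.new()
var.name = "dist"
var.type = 'LOC_DIFF'
var.targets[0].id = tracker
var.targets[1].id = ref

# Map the distance into the 0..1 shape key range (offset and scale tuned by eye).
drv.expression = "max(0.0, min(1.0, (dist - 0.05) * 10.0))"
```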
Milestone 5 (23.11)
- Full body tracking
For full body tracking I don't actually need a lot of points to track, because I am going to control a humanoid character with an IK rig. Since IK rigs are meant to make animation easier with a few controllers (legs, torso, arms and head), I decided to use 9 trackers and drive the controllers with them.
Once I finished tracking the points:
- feet for IK
- knees for IK pole
- torso for root transformation
- upper torso for rotation
- hands for IK
- head for head rotation
I decided not to model a human myself and found this guy: https://opengameart.org/content/low-poly-human-male
It was just a mesh, meaning I had to add the bones and do the weight painting myself. Once I added the bones, I added bone constraints like "IK", "Copy Location" and "Locked Track" to some logically chosen bones and connected them to their controllers (empty objects), so that one controller can drive multiple bones at once.
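A sketch of what one arm and the torso/head could look like in bpy (the bone and empty names are placeholders for whatever the rig actually uses):

```python
import bpy

rig = bpy.data.objects["HumanRig"]           # placeholder: the armature object
pose = rig.pose

# IK on the forearm: the tracked hand empty is the target.
ik = pose.bones["forearm.L"].constraints.new('IK')
ik.target = bpy.data.objects["Track.Hand.L"]     # placeholder empty
ik.chain_count = 2                               # forearm + upper arm

# The root bone simply copies the torso tracker's location.
copy = pose.bones["root"].constraints.new('COPY_LOCATION')
copy.target = bpy.data.objects["Track.Torso"]

# The head bone keeps aiming at the head tracker.
track = pose.bones["head"].constraints.new('LOCKED_TRACK')
track.target = bpy.data.objects["Track.Head"]
track.track_axis = 'TRACK_Y'
track.lock_axis = 'LOCK_Z'
```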
Basically, just by changing the 9 controller positions, I can now make a pose like this.
Now I simply had to connect the tracked points to their rig controllers and see the result.
I am missing elbow trackers. With those the result could have come out better, but I didn't think they were necessary because I expected the arms to bend naturally.
The Project (6-Demo)
Milestone 6
- Robot model + animations
- 3D printer model
- GIF on a paper video
A 3D model of some kind of robot which hides its arms until it has to deploy its weapons.
As for the 3D printer model, here is an untextured render of it.
These 2 models are going to be used in the final project with some slight improvements.
For the GIF, I thought I should use this iconic 8-bit dancing guy: ?itemid=4877113
I converted it into an MP4 because Blender probably does not read GIFs.
I figured the best way to do this is probably with a lot of tracking markers on the paper itself, so that when I bend the paper, the image "bends" as well, but in screen space.
So the next step was obvious: I had to track them. This worked flawlessly for me after I clicked "track". The markers were basically clean black-and-white features, so there were no errors, though I was expecting some trackers to fly away or something.
Also, the trackers above have these "rings" attached to them. These are called masks, and there is a specific reason why I added them: the "rings" create a black-and-white mask in the compositing menu, which I can use to remove the black dots. It worked like magic the first time I tested it.
Basically, how it works is that I use the mask to reveal a "Photoshop layer" of blurred video footage. Since it's blurred, we can't see the dots.
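Roughly, the compositing setup as a bpy sketch (assuming the footage comes in through a Movie Clip node and the rings were saved as a mask datablock; all names and the blur size are placeholders):

```python
import bpy

scene = bpy.context.scene
scene.use_nodes = True
nodes, links = scene.node_tree.nodes, scene.node_tree.links
nodes.clear()

clip = nodes.new("CompositorNodeMovieClip")        # the paper footage
clip.clip = bpy.data.movieclips["paper.mp4"]       # placeholder clip name

blur = nodes.new("CompositorNodeBlur")             # blurred copy of the footage
blur.size_x = blur.size_y = 20

mask = nodes.new("CompositorNodeMask")             # the "rings" around the dots
mask.mask = bpy.data.masks["DotMask"]              # placeholder mask name

mix = nodes.new("CompositorNodeMixRGB")            # show the blurred copy only inside the mask
comp = nodes.new("CompositorNodeComposite")

links.new(clip.outputs["Image"], blur.inputs["Image"])
links.new(clip.outputs["Image"], mix.inputs[1])    # original footage
links.new(blur.outputs["Image"], mix.inputs[2])    # blurred footage
links.new(mask.outputs["Mask"], mix.inputs["Fac"]) # the mask decides which one shows
links.new(mix.outputs["Image"], comp.inputs["Image"])
```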
Similarly to the face tracking (capture) milestone before, I connected the vertices with the trackers.
So now I just had to put the video in the background and the image in the foreground, do some clever blending with these two layers, and it's done. Unfortunately, the result is currently a 1080p 60 fps video, meaning I can't upload such a big file.