Back in June, I wrote about “Adding Life to a CG Camera Move”. This time around I wanted to show another way to add realistic camera movement to a CG camera: importing tracked footage shot with an actual camera. You can film a move on a physical camera that mirrors the move you were hoping to achieve in CG, then camera track that footage to ingest it into your 3D package of choice. Once it’s in CG, you can tweak the tracked camera to fit your needs while keeping all of the lovely subtle nuances of a hand-filmed camera move.
First I recorded a camera move that would work for me. I tried to match my CG scene’s conditions (ISO, aperture, and lens).
Here is the footage at half resolution on YouTube:
Now we’ll bring this footage into Nuke to track, using a Read node. You could also track your footage in something like After Effects or whatever program you choose. Make sure your project settings match your footage settings.
Add a CameraTracker node below it. Input the data for your camera, like the lens (focal length) and camera settings (film back). For me, it was a DSLR camera. Use the video selection, not the image-still one, since the sensor parameters are different. Now you can click “Track” whenever you are ready to track the footage.
In the Settings tab, you can up the “Number of Features” to a higher amount, say 300 or 450, to sample more. Check “Refine Feature Locations” and “Preview Features”. You can also decrease the “Keyframe Spacing” to get a more accurate track.
Click “Solve” to solve the track. Now you will want to dial in the accuracy of the track. You can delete tracks that are no good (red markers), and tweak the Max Track Error and Max Error under the “AutoTracks” tab. The red tracks are the rejected tracks, so you could also click “Delete Rejected”. You will want to save a previous version of your file before doing this.
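Conceptually, what the error thresholds and “Delete Rejected” are doing is simple: any track whose solve error exceeds a cutoff gets thrown out. Here’s a plain-Python sketch of that idea (not the Nuke API; the track data and threshold are made up for illustration):

```python
# Hypothetical solved tracks, each with a reprojection error in pixels.
tracks = [
    {"id": 1, "error": 0.4},   # good track
    {"id": 2, "error": 2.7},   # bad track (would show as a red marker)
    {"id": 3, "error": 0.9},
]

MAX_TRACK_ERROR = 1.5  # made-up threshold, in pixels

# Keep tracks under the threshold; everything else is "rejected".
kept = [t for t in tracks if t["error"] <= MAX_TRACK_ERROR]
rejected = [t for t in tracks if t["error"] > MAX_TRACK_ERROR]

print([t["id"] for t in kept])      # the tracks that survive
print([t["id"] for t in rejected])  # what "Delete Rejected" would remove
```

Lowering the threshold keeps the solve cleaner but leaves you with fewer tracks to solve from, which is why it’s worth saving a version before you start culling.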
Now hover over the main viewer and press “Tab” to switch to the 3D view. You will see your tracked data as a point cloud of sorts. What you will notice at first is that your camera is angled and not oriented correctly in 3D space. You have to tell Nuke where the camera is and what it’s pointing at.
Press Tab again and you’re back in the 2D view. You will select some of the tracked points to orient your scene, starting with the ground plane. Once you’ve selected appropriate tracks, right-click and go down to Ground Plane–Set to Selected. If you go back to 3D, you will see your camera rise up and level with the ground. You can likewise set X, set Y, and so on. To establish scale, select two points in your camera-tracked data, right-click, and choose Scene–Add Scale Distance. In the Scene tab, click on Distance and enter the real-world distance between the points. This will scale your scene to that measurement.
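The math behind Add Scale Distance is worth understanding: the solver’s units are arbitrary, so measuring two solved points against a known real-world distance gives one uniform scale factor for the whole scene. A sketch, with made-up point positions and a made-up 1 m measurement:

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

p1 = (0.0, 0.0, 0.0)   # first selected track point (solver units)
p2 = (0.0, 0.0, 2.0)   # second selected track point (solver units)
real_world = 1.0       # distance you measured between them on set, in metres

scale = real_world / distance(p1, p2)   # 0.5 in this example

# Applying the factor rescales every solved point (and the camera path with it).
point_cloud = [(2.0, 4.0, 6.0), (1.0, 0.0, -2.0)]
scaled = [tuple(c * scale for c in p) for p in point_cloud]
print(scaled)  # [(1.0, 2.0, 3.0), (0.5, 0.0, -1.0)]
```

Pick two points you actually measured on set (tape-measure marks, a door frame) so the scale you enter means something.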
Once you’ve gotten all that sorted out, you can export your tracked camera as a camera only or as a scene. In our case we’ll do a scene.
This will create a Scene node and a Camera node in Nuke. You can create a piece of test geometry, say a sphere (you’ll need to scale it down), and test the track by hooking it up to a ScanlineRender node. The ScanlineRender node then connects to your camera, and its bg input can temporarily be your original footage. Scrub through while viewing the ScanlineRender node to see how the sphere tracks with your camera. It should stick to the camera movement correctly.
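“Sticking” just means that a static 3D point, viewed through the tracked camera on every frame, keeps landing on the same image feature. A stripped-down pinhole projection shows the idea (plain Python, camera looking down -Z with no rotation, which is a deliberate simplification; all numbers are illustrative):

```python
def project(point, cam_pos, focal):
    """Project a world-space point into 2D for a camera at cam_pos,
    looking down -Z with no rotation (simplified pinhole model)."""
    x, y, z = (p - c for p, c in zip(point, cam_pos))
    # The point must be in front of the camera (negative Z here).
    return (focal * x / -z, focal * y / -z)

sphere = (0.0, 1.0, -10.0)   # test object, fixed in world space
focal = 50.0                 # matching the solved lens

# Two frames of a (made-up) tracked camera sliding to the right:
frame_1 = project(sphere, cam_pos=(0.0, 0.0, 0.0), focal=focal)
frame_2 = project(sphere, cam_pos=(0.5, 0.0, 0.0), focal=focal)

print(frame_1)  # (0.0, 5.0)
print(frame_2)  # the sphere drifts left in frame as the camera moves right
```

If the solve is bad, the rendered sphere swims against the plate instead of moving with it, which is exactly what the ScanlineRender scrub-through is checking for.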
Now hook up whatever you want to export out of Nuke to the Scene node: the test sphere if you want it, any Axis nodes (for spatial info from the camera track), and the camera. Don’t hook up the CameraTrackerPointCloud unless you really need it, since it will give you a ton of locator points in Maya. Below the Scene node, add a WriteGeo node. Here you will select the folder where you want to save your export, and add “.fbx” to the file name. Nuke will automatically give you a drop-down of file options you can include or exclude. Click “Execute”.
In Maya, import the FBX file into whatever 3D scene you want to use the camera in. Group the nodes if you like; you can scale and reorient them all as needed. Now you are free to use whatever of the camera data you’d like.
Here’s the tracked camera in Maya, roughly placed in my scene.
It’s nothing special yet, but you get the idea. You can retime the footage, alter the keyframes, or whatever. You could shoot a simple handheld move just to add organic jitter to a CG camera. There are all sorts of ways to use this data.
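One way to borrow just the jitter: smooth the tracked curve with a moving average, and whatever is left over is the organic handheld shake, which you can layer onto any CG camera channel. A sketch with made-up keyframe values:

```python
def moving_average(values, radius=1):
    """Simple box-filter smooth; clamps the window at the curve's ends."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

tracked_tx = [0.00, 0.12, 0.05, 0.18, 0.10]   # hypothetical translate-X keys

# Smooth curve minus original curve = the high-frequency handheld jitter.
smooth = moving_average(tracked_tx)
jitter = [t - s for t, s in zip(tracked_tx, smooth)]

# Layer that jitter onto an otherwise perfectly steady CG dolly move.
cg_dolly = [0.0, 1.0, 2.0, 3.0, 4.0]
jittery_dolly = [c + j for c, j in zip(cg_dolly, jitter)]
print(jittery_dolly)
```

A wider smoothing radius pulls more of the move into the “smooth” part and leaves only the fine shake in the jitter, so the radius is effectively a frequency cutoff you can tune by eye.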