CAMTracker(UE) v3.5 release instructions (2)

Virtual Production Workflow 

1.   First, determine which tracking system you will use and the hardware for your video output.

For the tracking system, a more professional tracking system is recommended so that the stability of the final picture can meet the requirements of film production. The Vive tracking system is fine for testing, but we recommend a more professional system if you have higher requirements for picture stability.

For getting the picture into UE: for routine testing, you can use Offworld's NDI/Spout plug-in to send the keyed picture to UE. If possible, use an AJA or Deltacast video output card, which supports Genlock-locked frame output and can output LTC timecode in sync with the camera timecode.

2.   Then set up the video input of the tracking camera. CAMTracker provides one tracking-camera input (CAM0) and four fixed-camera inputs (CAM1-4). Click the red CAM0 button in the main window of the Video Monitor interface to open the video input settings, and select the brand and device model of your capture card. For the camera input format, 10-bit 4:2:2 is best for fine keying. When deinterlacing, take care to select the correct Field Precedence for frame synchronization. If there are multiple camera inputs, it is best to enable Sync Input. For the specific functions, see the TD documentation: https://docs.derivative.ca/Video_Device_In_TOP

3.   For the color space transform, refer to the output settings of your specific camera. Usually the output is set to the Linear color space. For a description of each option in this dialog, see the TD documentation: https://docs.derivative.ca/OpenColorIO_TOP

4.   Next is the Lens Correction section. First print the corresponding checkerboard picture (the picture is obtained from me) and glue it to a rigid board, making sure it does not deform. Then run the camera calibration software and click VIDEOIN to set the camera input; make sure the camera input is closed in CAMTracker first, otherwise the device will be occupied. Click Textport to open the command-line interface so you can watch the processing information, then hold the board in different poses and click Capture Frame to take a picture. Each capture prints the number of recognized marker points in the Textport and shows them in the live picture; make sure at least 6 marker points are recognized. If recognition fails, click Clear Sets to clear the captures. When the number of Captured Sets reaches 10, click Calibrate Cam to compute the calibration. The results are displayed in the Textport; enter them into the K1, K2, P1, P2 and K3 options of the LENS window in turn. Make sure the reprojection error is less than 0.2. For the other options, see the TD documentation: https://docs.derivative.ca/Lens_Distort_TOP
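For reference, the standard checkerboard calibration that tools like this wrap is available directly in OpenCV. Below is a minimal sketch, assuming a 9x6-inner-corner board and a folder of captured frames (both placeholders); it prints the same K1/K2/P1/P2/K3 values and the reprojection error described above.

```python
# Minimal OpenCV checkerboard calibration sketch (illustrative only;
# the CAMTracker calibration tool wraps an equivalent process).
import glob
import cv2
import numpy as np

CORNERS = (9, 6)  # inner corners of the printed checkerboard (assumption)

# 3D coordinates of the board corners in the board's own plane (z = 0)
objp = np.zeros((CORNERS[0] * CORNERS[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:CORNERS[0], 0:CORNERS[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("captures/*.png"):  # your Capture Frame images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CORNERS)
    if found:  # skip captures where too few markers are recognized
        obj_points.append(objp)
        img_points.append(corners)

# rms is the reprojection error -- aim for less than 0.2 as in step 4
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
k1, k2, p1, p2, k3 = dist.ravel()[:5]  # values to enter as K1-K2-P1-P2-K3
print(f"reprojection error: {rms:.3f}")
print(f"K1={k1:.5f} K2={k2:.5f} P1={p1:.5f} P2={p2:.5f} K3={k3:.5f}")
```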

5.   Keying. For keying, the first thing is to set up the lighting well and avoid color spill as much as possible; getting this right on set pays off many times over later. For the camera settings for keying, refer to the related tutorials on YouTube.

In CAMTracker, we usually only need to enable Pick Mode, click and drag in the main monitoring window to select the background color to be keyed, and adjust the corresponding curvature and noise-reduction options according to the actual result. Use the HSL adjustment options as much as possible to remove spill. While adjusting the keying, switch between different backgrounds to check the final result and make sure the foreground subject does not show through. For the meaning of each function, refer to the previous article.
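To make the pick/spill terminology concrete, here is a minimal green-screen key in OpenCV. This is not CAMTracker's actual keyer, only an illustration of the same idea: pick a background hue range, soften the matte, and suppress green spill. File names and the hue range are placeholders.

```python
# Minimal chroma-key sketch (illustrative; CAMTracker's keyer is far more
# sophisticated, but the idea of picking a background hue range and
# suppressing green spill is the same).
import cv2
import numpy as np

frame = cv2.imread("greenscreen_frame.png")  # placeholder input
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Hue/saturation/value range picked from the background (like Pick Mode)
lower, upper = np.array([45, 60, 60]), np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower, upper)           # 255 where background
mask = cv2.GaussianBlur(mask, (5, 5), 0)        # soften the matte edge

alpha = 1.0 - mask.astype(np.float32) / 255.0   # 1 = keep foreground

# Crude spill suppression: clamp green to the max of red and blue
b, g, r = cv2.split(frame)
g = np.minimum(g, np.maximum(b, r))
despilled = cv2.merge([b, g, r]).astype(np.float32)

keyed = (despilled * alpha[..., None]).astype(np.uint8)
cv2.imwrite("keyed.png", keyed)
```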

6.   If you have color grading requirements, you can use the ColorGrading window to make adjustments. The sliders in the lower left corner can be restored to their default value of 0.5 by right-clicking. The waveform display in the lower right corner can be switched between different display styles by clicking the buttons above it.

7.   Then switch the main monitor window to the 3D view. Click the blue VIVE button to toggle the tracking system you are currently using. For the VIVE system, the one-key reset-origin function can quickly calibrate the coordinate system. The T265 tracking camera can use the record-origin function and CAM ROT for easier installation and use. When using the VIVE tracking system with the default Wireless option, you can use the Kalman filter slider on the right. A wired connection is recommended, although the filtering options can be used there as well.
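A filter slider like this presumably trades latency for smoothness. A minimal scalar Kalman filter over one position axis looks like the sketch below; the q and r noise values are placeholder tuning constants, not CAMTracker's actual internals.

```python
# Minimal scalar (1D) Kalman filter of the kind used to smooth jittery
# tracker positions; q and r are placeholder tuning values -- a smoothing
# slider typically maps to the ratio between them.
class ScalarKalman:
    def __init__(self, q=1e-4, r=1e-2):
        self.q = q      # process noise: how fast the true value may move
        self.r = r      # measurement noise: how jittery the tracker is
        self.x = 0.0    # current estimate
        self.p = 1.0    # estimate variance

    def update(self, measurement: float) -> float:
        self.p += self.q                      # predict
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (measurement - self.x)  # correct toward measurement
        self.p *= (1.0 - k)
        return self.x

# One filter per axis; feed it the raw tracker samples each frame.
fx = ScalarKalman()
for raw in [1.00, 1.02, 0.97, 1.01, 1.03]:  # fake jittery X samples
    print(round(fx.update(raw), 4))
```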

In the XYZ OFFSET options, enter the measured offsets between the tracker and the camera's entrance pupil. When you use the Vive tracking system, you can click Align View to switch to the alignment view and use a second tracker as a reference for alignment. When doing this alignment, you must first click the corresponding camera button and set the camera's FOV or focal length to match the actual image.
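Conceptually, the XYZ OFFSET is a lever-arm correction: the offset you measure is fixed in the tracker's local frame, so it has to be rotated by the tracker's current orientation before being added to the tracker position. A numpy/scipy sketch of that idea follows; the offset values and Euler order are placeholders.

```python
# Lever-arm sketch: the tracker-to-entrance-pupil offset is fixed in the
# tracker's local frame, so rotate it by the tracker orientation each frame.
import numpy as np
from scipy.spatial.transform import Rotation as R

offset_local = np.array([0.02, -0.10, 0.05])  # measured offset, metres (placeholder)

def entrance_pupil_position(tracker_pos, tracker_euler_deg):
    """tracker_pos: world XYZ; tracker_euler_deg: tracker rotation (XYZ order assumed)."""
    rot = R.from_euler("xyz", tracker_euler_deg, degrees=True)
    return np.asarray(tracker_pos) + rot.apply(offset_local)

print(entrance_pupil_position([1.0, 1.5, 2.0], [0.0, 45.0, 0.0]))
```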

8.   Then choose 3Screen or 4Screen according to the actual green-screen configuration. Open the corresponding settings window and adjust the size so that the MASK covers the area to be output. If you are doing partition keying, also make sure the floor is covered.

9.   Next, set up the video output and tracking data output. If you use NDI or Spout, just click the corresponding button to activate it, but you will not be able to use the Genlock function. To enable Genlock, you need an AJA or Deltacast video output card; select the reference source under Reference Source. The First Field option is already bound to the field-parity setting of the capture card input. For the related options, see the TD documentation: https://docs.derivative.ca/Video_Device_Out_TOP

For tracking data output, choose according to your specific needs. If you want to use the screen tracking function (VIVE), you can only use OSC output. If you only use camera tracking data, the FreeD protocol is easier. For timecode output, click the TC button in the upper right corner; click the Sync button to synchronize with the camera input timecode. Be sure to set the timecode format correctly, since it determines the frame rate of the data being sent.
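If you ever need to verify the FreeD stream outside of UE, the commonly documented "type D1" packet is easy to decode by hand. The sketch below follows the published D1 layout (29 bytes: 0xD1, camera ID, pan/tilt/roll, X/Y/Z, zoom, focus, spare, checksum) with the usual unit conventions; the port number is a placeholder, and CAMTracker's exact output should be checked against this.

```python
# Minimal receiver for FreeD "type D1" packets, per the commonly published
# spec. Port 40000 is a placeholder -- use the port set in CAMTracker.
import socket

def s24(b: bytes) -> int:
    """Decode a 24-bit big-endian signed integer."""
    v = int.from_bytes(b, "big")
    return v - (1 << 24) if v & (1 << 23) else v

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 40000))
while True:
    pkt, _ = sock.recvfrom(64)
    if len(pkt) != 29 or pkt[0] != 0xD1:
        continue  # not a D1 pose packet
    pan   = s24(pkt[2:5])   / 32768.0   # degrees (1/32768 deg units)
    tilt  = s24(pkt[5:8])   / 32768.0
    roll  = s24(pkt[8:11])  / 32768.0
    x     = s24(pkt[11:14]) / 64.0      # millimetres (1/64 mm units)
    y     = s24(pkt[14:17]) / 64.0
    z     = s24(pkt[17:20]) / 64.0
    zoom  = int.from_bytes(pkt[20:23], "big")   # raw encoder value
    focus = int.from_bytes(pkt[23:26], "big")
    print(pan, tilt, roll, x, y, z, zoom, focus)
```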

10.   Then set up the video and tracking data input in UE. If you input directly to UE through a capture card, refer to the official UE instructions: https://docs.unrealengine.com/5.0/en-US/professional-video-io-in-unreal-engine/

For NDI/Spout input, you need the plug-in from the offworld.live website. The plug-in is free after registration, and you need to log in to use it. Installation is simple: download the compressed package, unzip it into the UE plug-in directory, and enable it in the editor. After restarting UE, select the corresponding virtual camera and NDI receiver manager under the Offworld Live category in the Place Actors window and place them in the project.

Then select the OWLNDIReceiver Manager, add an NDI receiver, and set Render Target to the RenderTarget provided in the trial version files.

If you use Composure, refer to the official UE documentation. Here we mainly introduce the billboard method: add a plane to the project and assign the MAT material to it. This MAT material comes with the billboard blueprint and screen-material functions. Make sure to rotate the plane into the state shown below. The scale of this plane should be consistent with the Screen Width/Height settings in CAMTracker. Of course, if you want to cut off some unwanted areas, you can set it to a smaller size or to a vertical aspect ratio; the settings in CAMTracker are only a visual reference for the actual output.

11.   Then for tracking data access: if you use OSC, you need to enable the corresponding OSC plugin.

Then open the OSC blueprint provided in the shared files and copy the relevant variables and blueprint nodes into the level blueprint. If the reference to the camera is invalid, reset the reference.

In addition to the regular tracking data, this blueprint also supports FOV data and the two screen-tracking axes.
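If you want to see exactly what CAMTracker sends over OSC before wiring the blueprint, a small python-osc listener will print every incoming address and its arguments. The port below is a placeholder (match it to CAMTracker's OSC output port), and no address patterns are assumed here.

```python
# Quick python-osc listener to inspect CAMTracker's OSC output before
# wiring the UE blueprint (pip install python-osc). The port is a
# placeholder -- match it to the OSC output port set in CAMTracker.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def print_any(address, *args):
    print(address, args)  # e.g. camera pose, FOV, screen-tracking axes

dispatcher = Dispatcher()
dispatcher.set_default_handler(print_any)  # catch every address pattern

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()
```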

If you use the LiveLink FreeD protocol, you need to activate the LiveLink FreeD plugin first. Once FreeD output is activated in CAMTracker, a LiveLink FreeD source can be added by clicking the +Source button in the UE > Window > Virtual Production > Live Link window.

After adding it, click FreeD in the subject list to set the related options, for example whether the evaluation mode is set to timecode and whether to enable the focus and focal-length options. You can also click the camera identified under the list and switch the view option on the right to frame data to inspect the incoming tracking data.

Then select the virtual camera in the project, click Add > Live Link Controller in the Details panel, and set the subject to the corresponding camera.

Then, in the main level blueprint, enable the settings from the earlier OSC blueprint to automatically switch the viewport to the virtual camera.

At this point, the tracking data setup is complete. If you do not receive focal length data, you need to manually set the virtual camera's focal length to match the actual camera.
 
If you use a capture card for input, refer to the official UE instructions for timecode and Genlock settings.
12.   Finally, adjust the tracking delay (in frames) in CAMTracker while shaking the camera until the picture and the camera tracking are synchronized. If you need to push the video to OBS, you can add Offworld's OWLNDISender Manager to the project and set it up to send the video through NDI.
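Under the hood, a tracking delay of N frames is conceptually just a FIFO on the tracking samples: each sample is held back N frames so it lines up with the slower video path. A sketch of the idea, with N standing in for whatever value you dial in while shaking the camera:

```python
# A tracking delay of N frames is conceptually just a FIFO: hold each
# tracking sample back N frames so it lines up with the (slower) video path.
from collections import deque

class TrackingDelay:
    def __init__(self, frames: int):
        self.buf = deque(maxlen=frames + 1)

    def push(self, sample):
        self.buf.append(sample)
        return self.buf[0]  # sample from `frames` frames ago (once full)

delay = TrackingDelay(frames=3)
for i in range(8):                 # fake per-frame tracking samples
    print(i, "->", delay.push(i))  # output lags the input by 3 frames
```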
  
