CAMTracker(UE) v3.5 release instructions (1)

Let’s first explain the purpose of CAMTracker in virtual production. CAMTracker is companion software for virtual production, mainly used for camera image keying, tracking-system input, and coordinate-system conversion.

CAMTracker can output keyed images to UE through NDI, Spout, or a professional video output card. It supports Genlock-locked frame output through AJA or Deltacast video output cards, and can output LTC timecode synchronized with the camera’s timecode.

Tracking data can be output to UE through the OSC or FreeD protocol. Over OSC, the six degrees of freedom of the tracking camera and its FOV can be transmitted, along with the position of the tracked screen (X and Y axes only; the Z-axis height is excluded). On the UE side, a Blueprint is needed to receive the data. If you only transmit camera tracking data and not screen tracking data, you can connect directly via the Live Link FreeD protocol, which is simpler and more direct. NDI or capture-card video input is supported as the monitoring picture.
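As an illustration of what the FreeD side carries, here is a minimal sketch of a FreeD D1 packet parser in Python, based on the commonly documented packet layout (29 bytes; angles in 1/32768 degree, positions in 1/64 mm). The field scaling and checksum rule are assumptions to verify against your own tracking device; this is not CAMTracker’s code:

```python
def _s24(b: bytes) -> int:
    """Decode a 3-byte big-endian signed (two's-complement) integer."""
    v = (b[0] << 16) | (b[1] << 8) | b[2]
    return v - (1 << 24) if v & 0x800000 else v

def parse_freed_d1(packet: bytes) -> dict:
    """Parse a 29-byte FreeD D1 (camera position/orientation) packet.

    Scaling follows the commonly documented FreeD convention:
    angles are signed 24-bit values in 1/32768 degree,
    positions are signed 24-bit values in 1/64 mm.
    """
    if len(packet) != 29 or packet[0] != 0xD1:
        raise ValueError("not a FreeD D1 packet")
    # Checksum: 0x40 minus the sum of the first 28 bytes, modulo 256.
    if (0x40 - sum(packet[:28])) & 0xFF != packet[28]:
        raise ValueError("checksum mismatch")
    return {
        "camera_id": packet[1],
        "pan_deg":  _s24(packet[2:5])  / 32768.0,
        "tilt_deg": _s24(packet[5:8])  / 32768.0,
        "roll_deg": _s24(packet[8:11]) / 32768.0,
        "x_mm": _s24(packet[11:14]) / 64.0,
        "y_mm": _s24(packet[14:17]) / 64.0,
        "z_mm": _s24(packet[17:20]) / 64.0,
        "zoom":  _s24(packet[20:23]),
        "focus": _s24(packet[23:26]),
    }
```

With Live Link FreeD in UE, none of this is needed by hand; the sketch only shows what data is in each packet.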

Interface introduction

1.   The main interface of the software can switch between video monitoring and the 3D view through the Video Monitor/3D View button. The video monitor is mainly used for the keying process, while the 3D view is used to monitor the green screen and camera status and to adjust the tracking offset.

2.    The Save Project button saves the project file. The Resolution button sets the final resolution of the output video.

3.   The Align View button is mainly for Vive tracker users. If you have two trackers, one can be used as the tracking camera and the other can be used to align the real and virtual cameras, that is, to adjust the XYZ offset values (the offset of the tracker from the camera’s entrance pupil). Of course, the FOV of the camera needs to be set before doing this. In addition, you can switch between different tracker types via the tracker-switch slider: 2.0, 3.0, Tundra Tracker, and tracking-camera styles.

4.   Lens/OCIO: lens correction and color-space conversion. Lens correction is needed because a camera lens inevitably distorts the original image; to match the live footage with the engine picture, the distortion must be corrected. Zhang Zhengyou’s camera calibration method is the standard approach, and I also recently completed the UI of the related camera-calibration software. To calibrate, first print a checkerboard and mount the printout on a rigid board, then use the calibration software to clearly capture the checkerboard from at least 10 different angles and let the computer run the calibration calculation. The resulting data can then be filled in here, field by field.

A description of the options related to this node is here: https://docs.derivative.ca/Lens_Distort_TOP
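Once calibration is done, the camera matrix gives the focal lengths fx/fy in pixels, and the horizontal FOV that the virtual camera needs can be derived from fx and the image width. A small sketch of that relationship (standard pinhole model, not CAMTracker-specific):

```python
import math

def horizontal_fov_deg(fx: float, image_width: int) -> float:
    """Horizontal field of view from the calibrated focal length fx
    (in pixels) and the width of the image used during calibration."""
    return math.degrees(2.0 * math.atan(image_width / (2.0 * fx)))

# Example: a 1920-px-wide image with fx = 960 px gives a 90-degree
# horizontal FOV, since atan(1920 / (2 * 960)) = atan(1) = 45 degrees.
```

A longer focal length (larger fx) yields a narrower FOV, as expected.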

5.   The OCIO function has already been added to UE4. Here it is mainly used to convert the color space of the camera image. This is a professional cinematography technique; simply put, it converts other color spaces into the linear color space used by computer 3D engines. Different camera brands use different configuration files. Of course, you can also use a corresponding LUT for color correction. For the specific functions, refer to the TD documentation: https://docs.derivative.ca/OpenColorIO_TOP
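As a rough illustration of what “converting to linear” means, here is the standard sRGB transfer function in Python. This shows only the simplest case; camera log formats (S-Log, C-Log, V-Log, and so on) each use their own curve, and in practice OCIO selects the correct transform from the configuration file:

```python
import numpy as np

def srgb_to_linear(c: np.ndarray) -> np.ndarray:
    """Convert sRGB-encoded values (0..1) to linear light using the
    standard sRGB transfer function: a small linear toe below 0.04045
    and a 2.4-exponent power curve above it."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
```

For example, sRGB mid-gray 0.5 maps to roughly 0.214 in linear light, which is why an un-converted camera image looks washed out or too dark inside a linear-workflow engine.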

6.    ColorGrading function. This is mainly for professional cameras that need color correction applied to the image after keying. For details of each control, refer to professional color-grading references.

7.    Next is the green-screen configuration mode button, which switches between billboard, 3-screen, and 4-screen modes. It lets you configure the software’s 3D space to match the size of the green screen in real space, ensuring that only the picture inside the green-screen volume is output to UE. The 3-screen and 4-screen configuration modes here have nothing to do with the settings in UE: the billboard is still used in UE, but the billboard image is limited to the corresponding green-screen space, and everything outside the green screen is masked out. This way, even if the camera captures areas outside the green screen, they will not be displayed in UE. This step is done after the camera image has been matched with the software image.

8.    The GenLock item on the right shows the status of the current frame-locked output. You can switch between frame rates through the FPS button; this is both the software’s running frame rate and the timecode frame-rate format, and it also changes the frame rate of the recording options. The Sync button synchronizes the camera’s input timecode to the LTC output; without synchronization, the software automatically generates its own LTC timecode by default. Click this timecode to open the LTC output settings interface.
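For reference, non-drop-frame LTC timecode is just a base conversion of the absolute frame count at the chosen FPS. A minimal sketch (drop-frame 29.97 timecode needs extra frame-skipping logic and is not shown):

```python
def frames_to_timecode(frame: int, fps: int) -> str:
    """Format an absolute frame count as a non-drop-frame
    HH:MM:SS:FF timecode at an integer frame rate."""
    ff = frame % fps                      # frames within the second
    total_seconds = frame // fps
    ss = total_seconds % 60
    mm = (total_seconds // 60) % 60
    hh = (total_seconds // 3600) % 24     # wraps at 24 hours
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"
```

This also shows why the FPS setting and the timecode format must agree: the same frame count formats to different timecodes at 24, 25, and 30 fps.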

9.    The left pane of the software switches the input source shown in the main monitoring pane, which is convenient for keying and viewing picture details. In the main monitoring window, you can scroll the mouse wheel to zoom in or out, or press the H key to restore the default size.

There are three types of input sources: Movie File (a local video file; you can drag it directly into the window area in the upper left corner to open it for demonstration), Cam0-4 cameras (click the corresponding button in the upper-left area of the main video monitoring window to set the capture-card input), and the demo video (a built-in green-screen video).

The middle part of the left pane is for taking snapshots to assist keying. Click a captured picture to output it to the main monitoring pane for viewing.

The bottom part shows the monitoring screens of the 4 fixed cameras; click these buttons to switch the corresponding camera picture to the main monitoring pane.

10.    The middle part of the UI is the main pane. You can click the 3D View/Video Monitor button to switch between the monitor screen and the 3D view. The top-left part of the main pane holds the camera input settings and the OCIO color-space conversion/lens correction settings. The middle holds the chroma-key mode switch and general screen adjustment options. The chroma-key modes are S Key and M Key.

S Key is a single-key mode. If color spill is not severe, you can get good results with this mode. In this mode, the first noise reduction (1st Denoise) does not work; you can adjust the second (Secondary) noise reduction to remove background noise. M Key is a blending mode. If the S Key result is not ideal, you can use this mode to prevent the keying from affecting the main subject: first determine the value of the secondary noise reduction in S Key mode, then switch to M Key and adjust the first noise reduction. The options on the right switch the background shown behind the keyed image so you can check the keying result.

The bottom half of the main pane contains the keying options. First are the three HSV adjustment areas; drag the corresponding adjustment points to get smoother results. The Layer button switches between the two keying layers, which is mainly used for partitioned keying: for example, when the colors of the green-screen floor and background differ, you can configure the green-screen size in the 3D MASK view and then key the two areas separately. When doing this, keep the corresponding parameters consistent to prevent the two areas from looking too different. Usually we only need to adjust the HSV areas, and in most cases Layer 1 alone is sufficient. The HSL Adj button adjusts the de-spill color strength of the current layer. When you turn on the Pick Mode button below, click the green background in the main pane and drag to key; these color values will be set automatically. What usually needs adjusting is the Color Softness option, typically around 0.15-0.35 depending on the result; you can also experiment with the other options. The Curvature option sets the smoothness of the keying; this value should not be set too high. The SmoothEdge option adjusts edge smoothness; if the edge of the main subject is jagged, use this option to fix it. For CurveType, the default Triangle gives a linear transition, which retains more detail; the Sine curve makes the edge transition more obvious. The two Trim options that follow should usually be left as inactive as possible; they are equivalent to high and low cuts.
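To illustrate the general principle behind hue-based keying with a soft, triangle-style falloff, here is a conceptual sketch. This is not CAMTracker’s actual keyer; the key hue, tolerance, and softness parameters are illustrative only:

```python
import numpy as np

def hue_key_alpha(hue, key_hue=120.0, tolerance=20.0, softness=30.0):
    """Alpha matte from hue alone: pixels within `tolerance` degrees
    of the key hue become fully transparent (alpha 0), with a linear
    falloff over `softness` degrees -- a triangle-style transition,
    similar in spirit to the Triangle curve type."""
    hue = np.asarray(hue, dtype=np.float64)
    # Circular hue distance in degrees (0..180).
    d = np.abs((hue - key_hue + 180.0) % 360.0 - 180.0)
    alpha = (d - tolerance) / softness
    return np.clip(alpha, 0.0, 1.0)
```

A real keyer also weighs saturation and value (the other two HSV areas), handles de-spill, and denoises the matte, but the tolerance/softness pair corresponds to the kind of threshold-plus-falloff adjustment described above.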

The options inside the purple border at the bottom left are general de-spill options. The first two are only used to de-spill edges in special cases and are usually left alone. The third item adjusts the edge hue value and is usually left at 0.

On the right are the enhanced contrast adjustment and the fill mode. The first two, Contrast and Edge Transparency, act on the de-spill options on the left. The last two items are fill-mode options, used to prevent the subject from being affected by the keying.

At the bottom is the operation history.

11.    On the right are the keyed-output monitoring and recording options. The first line is the recording option for CAM0; you can switch between recording OrigImg (the original green-screen image) and FinalKey (the keyed image). On the right is the green record button.

Below that is the background overlay option applied to the main monitor screen after keying. The default is black. The options here affect the final output.

The next two panes are the two modes of the main monitor screen: the final output and the MASK view. Click the button in the upper-left corner to open the corresponding recording options. What the first item, RecOpt1, records depends on the CAM0 recording mode selected above; recorded video files carry LTC timecode by default. The second item, RecOpt2, records the MASK-view content, also with LTC timecode. By default, the second item follows the settings of the first. When you click the green circular record button, both images are recorded at the same time. For naming, you can click the gray button shown in the figure below to set it manually, or click the green button to keep the default naming rules. You can also cancel the simultaneous recording of the MASK content: just click the gray button in the red box to cancel the association with the first recording option.


The lower pane monitors the output images of the four fixed cameras. The buttons on the right are similar to the recording options of CAM0. If you want the corresponding camera images to be recorded at the same time when you click the record button, switch the X button to ✔. The CAM button with the green border activates the NDI output.

3D View

1.    The first five red buttons at the top are: NDI output settings (all NDI outputs share the same settings), Spout output settings (CAM0 only), video device output settings (supporting Genlock-locked frame output through AJA or Deltacast video output cards), OSC output settings, and FreeD protocol output settings. The five CAM0-4 buttons open the position parameters and viewing-angle settings of the corresponding camera. To determine a fixed camera’s position, a tracking device can also be mounted on it (the tracking data always corresponds to the position of CAM0); after calibrating the position, copy CAM0’s position and orientation data and paste it into the corresponding CAM’s position settings page. When you switch CAMs, the 3D view displays the corresponding camera. The last two options show the battery levels of the trackers bound to the tracking camera and the billboard, so they can be checked at any time.

2.    The lower part holds the general tracking-data settings. The width and height of the screen here correspond to the size of the billboard and should be consistent with the settings in UE. Of course, if you want to cut off unwanted areas, you can also do so by adjusting this size, for example, to switch to portrait mode. The ScreenTrack screen-tracking function is mainly for the Vive tracking system, where it is always on. When you switch to another tracking device, screen tracking uses the position of the first recognized tracker; when you switch back to the Vive tracking system, it uses the position of the second recognized tracker. This position data only includes the X and Z axes (note that TD uses a right-handed coordinate system with the Y axis pointing up).
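Since TD is right-handed with Y up while UE is left-handed with Z up (and measures in centimeters), position data has to be remapped when you wire up your own OSC Blueprint. One common mapping is sketched below; the exact axis assignment depends on how your scene is oriented and should be verified against your setup:

```python
def td_to_ue_position(x: float, y: float, z: float) -> tuple:
    """Map a TouchDesigner position (right-handed, Y up, meters) to an
    Unreal position (left-handed, Z up, centimeters).
    NOTE: this particular axis assignment (UE forward = -TD z) is one
    common convention, assumed here for illustration; verify it
    against your own scene orientation.
    """
    return (-z * 100.0,  # UE X (forward)
             x * 100.0,  # UE Y (right)
             y * 100.0)  # UE Z (up)
```

Getting this mapping wrong typically shows up as the virtual camera moving along the wrong axis or mirrored, which is exactly the symptom the UE CAM rotation-offset dialog described below is meant to correct.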

The UE CAM settings dialog is used to offset the rotation axes when using other tracking protocols. This is usually only needed when the camera angle is abnormal; it only affects the UE camera, and the camera inside CAMTracker is unaffected. CamFlip flips the camera position, switching its position on the Z axis. CAM ROT rotates the camera; this function is designed for the Intel T265 tracking camera, so it can be installed facing any of four directions (up, down, front, rear) to avoid strong-light interference and improve positioning accuracy.

Below are the offset adjustment options for the X, Y, and Z axes. You can set them by measuring the position of the tracker relative to the entrance pupil. The blue VIVE button switches between tracking systems. When you use the Intel T265 tracking camera, make sure it is connected before switching to it, do not plug or unplug it while in use, and switch back to the default VIVE tracking system when you are done.
The DefaultVal(V) button on the right is a one-click origin reset. Because the Vive tracker uses the SteamVR platform, its origin-calibration procedure is quite cumbersome, so I developed this one-click calibration, which saves a lot of work. Specifically, place the tracker 2.0 in the middle of the green-screen floor with its indicator light facing the camera, with its direction matching the direction of the green screen, then click the DefaultVal(V) button to the right of the VIVE button so it switches to RestOrigin(V). This position is then used as the origin of the coordinate system, and the reset also covers the three rotation axes. The next time you start up, you only need to remember the corresponding base-station power-on sequence; under normal circumstances, the first recognized base station is used as the reference. If room calibration has been done, of course, this does not matter.
For the T265 tracking camera, the DefaultVal button records the origin, so you don’t have to run cables back and forth for origin calibration. You only need to calibrate the origin once: move to a fixed position and click the DefaultVal button so it switches to RecordOrigin. Next time, simply power on at this position and angle; here you only need to switch the tracking-device type. In addition, you can use the CAM ROT function to install the T265 in any of the four directions to avoid strong-light reflections and tracking-data jitter.
3.    The Tracker Switch option selects the tracker model displayed in the 3D view: 2.0, 3.0, Tundra Tracker, or Camera. The Tracking Delay option adjusts the delay, in frames, of the tracking-data output.

To be continued...
