CAMTracker (D2) v2.5 release instructions (2)

The role of the CAMTracker (D2) software in the overall workflow is to process the camera images and tracking data and send them to D2 for real-time 3D composition. Specifically, the image processing covers color space transform, camera image correction, real-time keying, mask processing, and projection mapping; the tracking-data processing covers data filtering, coordinate system conversion, and transmission protocol conversion.

CAMTracker (D2) virtual production workflow:


1.   The first step is to set up the camera feeds. CAMTracker provides one tracking camera input (CAM0) and four fixed camera inputs (CAM1-4). Click the red CAM0 button in the main window of the Video Monitor interface to open the video input settings, and select the brand and device model of your capture card there. For the camera input format, 10-bit 4:2:2 is the best choice, since it makes fine keying easier. If there are multiple camera inputs, it is best to enable Sync Input so they stay synchronized. For the specific functions involved, refer to the TD documentation: https://docs.derivative.ca/Video_Device_In_TOP. A minimal scripted setup is sketched below.
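If you prefer to script this instead of clicking through the UI, TouchDesigner's Python API can set the same settings on a Video Device In TOP. This is only a sketch: the operator path is hypothetical, and the parameter names used here are assumptions based on typical TD naming, so confirm them against the operator's parameter page in your build.

```python
# Hedged sketch: configure a Video Device In TOP from a TouchDesigner script.
# The path and parameter names below are assumptions -- verify them on the
# operator itself (middle-click a parameter to see its internal name).
cam0 = op('/project1/videodevin_cam0')       # hypothetical path to the CAM0 input

cam0.par.library.val = 'Blackmagic Design'   # capture-card brand/library
cam0.par.device.val = 0                      # device index within that library
cam0.par.signalformat.val = '1080p5994'      # example signal format entry
cam0.par.syncinput.val = True                # keep multiple inputs in sync
```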

For the color space transform, set it according to the output of your specific camera.
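As a reference point, this is roughly what the math looks like when decoding a full-range BT.709 Y'CbCr signal (as delivered by many 10-bit 4:2:2 sources) to R'G'B'. This is generic reference math, not CAMTracker's internal implementation; whether your camera outputs BT.709 or BT.601, full or limited range, determines the actual coefficients.

```python
import numpy as np

def bt709_ycbcr_to_rgb(y, cb, cr):
    """Convert full-range BT.709 Y'CbCr (floats in 0..1) to R'G'B'.

    Generic reference math only -- check which matrix and range your
    camera actually outputs before relying on these coefficients.
    """
    cb = cb - 0.5                       # center the chroma channels
    cr = cr - 0.5
    r = y + 1.5748 * cr
    g = y - 0.1873 * cb - 0.4681 * cr
    b = y + 1.8556 * cb
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```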

2.    Next is the Lens Correction section. First print the corresponding checkerboard picture (the picture is obtained from me) and glue it to a piece of cardboard, making sure it does not deform. Then run the camera calibration software and click VIDEOIN to set the camera feed; make sure the camera feed in CAMTracker is closed first, otherwise the device will be occupied. Click Textport to open the command-line interface so you can watch the processing information at any time, then hold the cardboard in different poses and click Capture Frame to take a picture. Each time you take a picture, the number of recognized marker points is reported in the Textport and also shown in the live image; make sure at least 6 marker points are recognized. If recognition fails, click Clear Sets to clear and try again. When the number of CapturedSets reaches 10, click Calibrate Cam to compute the camera calibration. The results are displayed in the Textport; enter them into the LENS window's K1, K2, P1, P2, and K3 options in turn. Note that the reprojection error should be less than 0.2. For the other related options, refer to the TD documentation: https://docs.derivative.ca/Lens_Distort_TOP. The sketch below shows the equivalent computation.
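For reference, the underlying computation is standard checkerboard calibration; OpenCV's `calibrateCamera` returns the same K1, K2, P1, P2, K3 distortion coefficients and the reprojection error mentioned above. A minimal sketch (the board size and file paths are placeholders for your own captures):

```python
import glob
import cv2
import numpy as np

# Inner-corner count of the printed checkerboard (placeholder -- match your print).
BOARD = (9, 6)

# Object points: the board's corner grid at Z=0, in board-square units.
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob('captures/*.png'):     # frames captured in different poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# rms is the reprojection error -- it should come out below 0.2.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
k1, k2, p1, p2, k3 = dist.ravel()[:5]        # the values entered in the LENS window
print(f'reprojection error: {rms:.3f}', k1, k2, p1, p2, k3)
```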

3.   Keying operation. For keying, the first thing is to set up the lighting well and try to avoid color spill; getting this right at the early stage has a multiplier effect later. For the camera settings used for keying, refer to the related tutorials on YouTube.

The keying mode is divided into a normal mode and an easy mode.

When easy mode is activated, all options with a green border are adjustable parameters. For a relatively clean image, easy mode is usually enough to get a good keying result. In normal mode, the color picker is usually used to set the HSV values of the key color; the main parameters to adjust are the two denoising options and the Curvature value, and the other parameters should be left alone unless necessary. Of course, this depends on the actual result in D2. For the specific keying details, refer to the previous tutorial; the sketch below illustrates the basic HSV approach.
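To illustrate what normal mode is doing conceptually, here is a generic HSV chroma key with a simple denoise pass, written with OpenCV. This is an illustration of the technique, not CAMTracker's actual keyer; the HSV bounds are placeholders standing in for the key-color values set with the color picker.

```python
import cv2
import numpy as np

def hsv_key(frame_bgr, lo=(45, 80, 80), hi=(75, 255, 255)):
    """Generic HSV chroma key: pixels inside the key range become transparent.

    lo/hi are placeholder HSV bounds around the key color.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))   # 255 where green

    # Simple denoise, analogous in spirit to the two denoising options:
    # erode away speckles in the matte, then dilate to restore coverage.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    alpha = 255 - mask                                    # subject stays opaque
    return np.dstack([frame_bgr, alpha])                  # BGRA output
```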

4.    Set up the tracking system. The default tracking system is the VIVE Tracker. If you only purchased two base stations and one tracker, you need to set up the SteamVR software first to make sure it works normally without a headset. When setting up SteamVR, remember to close CAMTracker, as it also calls the SteamVR driver. For the specific procedure for using SteamVR without a headset, refer to the UE documentation: https://docs.unrealengine.com/5.0/en-US/livelinkxr-in-unreal-engine/

After setting up SteamVR without a headset, first make sure the base stations recognize the tracker. It is recommended to connect the tracker directly to the computer with a USB cable to avoid jitter caused by signal interference. If the position jitter is still strong, check the reflections in the venue and the mounting angle of the base stations. Then click the Wireless/Wired button below to switch modes, and use the three sliders with a blue border on the right to set the Kalman filter's smooth-tracking options; when adjusting these three options, refer to the waveform on the right. If you have a headset, you can do a room calibration to set the origin of the coordinate system. If not, you can instead place the first recognized tracker in the middle of the green screen floor. Pay attention to the orientation of the coordinate system, as shown in the figure below: make sure the charging port faces the direction of the camera. Then toggle the DefaultVal button on the right to reset the origin coordinates; all trackers recognized after the reset are based on this origin coordinate system. The sketch below shows the idea behind the smoothing.
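For intuition about what those sliders trade off, here is a minimal one-dimensional Kalman filter: process noise and measurement noise play against each other to balance smoothness against latency. This is a generic sketch, not CAMTracker's filter, and the parameter names are illustrative.

```python
class Kalman1D:
    """Minimal constant-position Kalman filter for one tracker axis.

    q: process noise -- how much real motion we expect per frame.
    r: measurement noise -- how noisy the raw tracker samples are.
    Larger r (or smaller q) means smoother output but more lag.
    """
    def __init__(self, q=1e-4, r=1e-2):
        self.q, self.r = q, r
        self.x = 0.0      # filtered position estimate
        self.p = 1.0      # estimate variance

    def update(self, z):
        self.p += self.q                    # predict: uncertainty grows
        k = self.p / (self.p + self.r)      # Kalman gain
        self.x += k * (z - self.x)          # correct toward the measurement
        self.p *= (1.0 - k)                 # uncertainty shrinks
        return self.x

# One filter per axis; feed each raw tracker sample through update().
fx, fy, fz = Kalman1D(), Kalman1D(), Kalman1D()
```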

5.    Align the view. Before doing the alignment, click CAM0 Settings in the 3D Preview view and set the camera's FOV to match the FOV of the actual camera; the formula below can help.
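If the lens FOV is not listed in the camera's specs, it can be derived from the sensor size and focal length. A quick sketch (the sensor width and focal length here are example values; also confirm whether the software expects horizontal or vertical FOV):

```python
import math

def horizontal_fov(sensor_width_mm, focal_length_mm):
    """Horizontal FOV in degrees from sensor width and lens focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example: Super 35 sensor (~24.9 mm wide) with a 24 mm lens -> about 54.9 degrees.
print(horizontal_fov(24.9, 24.0))
```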

This alignment view is mainly aimed at VIVE tracker users. In this view, the actual camera image and the tracker's rendered image are overlaid to make it easier to align the virtual and real images. If you have two trackers, you can bind one to the tracking camera (the first recognized tracker is red) and measure the relative offset as shown above; the other (the second recognized tracker is white) is placed in the frame for accurate frame alignment. Depending on the type of tracker you use, you can also switch between different tracker types through the tracker switch slider, including 2.0, 3.0, and Tundra Tracker; the camera option is only for the T265 and other tracking cameras.

Of course, if you are using another tracking device, you can still use the first recognized tracker for view alignment, because no matter which tracking device you switch to, the VIVE always remains enabled; you just need to align the coordinate systems of the two tracking devices. The Screen Tracking function also uses the tracker used to calibrate the coordinate system as its reference: that is, the second recognized tracker's position is used in VIVE mode, and the first tracker's position is used when the tracking device type is switched to another option.

If there is no tracker available as a calibration reference, alignment still works as long as the camera is calibrated and the FOV and XYZ offset values are set accurately. Also make sure that the Y-axis mounting angle of the tracker on the tracking camera is consistent with the camera angle; the sketch below shows how the offset is applied.
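To make the XYZ offset concrete: the tracker reports its own pose, and the camera position is obtained by rotating the measured tracker-to-camera offset into world space. A minimal sketch with numpy (the offset values are placeholders for your measured ones):

```python
import numpy as np

def camera_from_tracker(tracker_pos, tracker_rot, offset_local):
    """Camera world position from the tracker pose and the measured offset.

    tracker_pos:  (3,) tracker position in world space.
    tracker_rot:  (3, 3) tracker rotation matrix (world from tracker).
    offset_local: (3,) tracker-to-camera offset measured along the tracker's
                  own axes -- the XYZ offset values entered in CAM0 Settings.
    """
    return tracker_pos + tracker_rot @ offset_local

# Placeholder offset: camera sits 2 cm forward and 5 cm below the tracker.
offset = np.array([0.0, -0.05, 0.02])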

6.    Green screen configuration. Next, switch between billboard, 3screen, and 4screen according to the actual configuration of the live green screen, then set the size and position of the actual green screen. Make sure that footage outside the green screen does not get masked into the composite. If the keyed image is inconvenient to view in the 3D view, you can overlay different backgrounds via the option in the upper right corner of the window.

7.    Then add a billboard in D2 and put a plane of the same size as in CAMTracker as its child; the size of this plane can be set according to actual needs. If you want to control the FOV of the D2 camera, you need to drive it with a DMX signal. In DMX Settings you can set the Art-Net protocol and the universe number. The default universe number is 0 and the address code starts at 1. The channel table is:

Channels 1-12: the six DOF channels of the camera, 16-bit each
Channels 13-14: FOV, 16-bit (mapped to the range 1-100 degrees)
Channels 15-18: screen tracking X-axis and Z-axis, 16-bit each

When using the zoom data, remember to activate the light blue button in the image above to link the relevant data. When not using FOV data, remember to click the small button on the far left to return to the data setting mode. (For the screen tracking data, it is better to send it via the PSN protocol.) If you are only sending camera data without FOV, you can use the PSN protocol directly: the PSN ID used by the tracking camera is 0, and the ID used by the tracking screen is 1. The sketch below shows how a value is packed into two 16-bit DMX channels.
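For reference, a 16-bit DMX value occupies two channels (coarse byte, then fine byte). Here is a minimal sketch of packing the FOV into channels 13-14 under the 1-100 degree mapping above; this is generic math, not D2's or CAMTracker's code.

```python
def fov_to_dmx16(fov_deg, lo=1.0, hi=100.0):
    """Map an FOV in degrees onto a 16-bit value split across two channels."""
    t = (min(max(fov_deg, lo), hi) - lo) / (hi - lo)    # clamp and normalize
    value = round(t * 65535)
    return value >> 8, value & 0xFF                     # coarse byte, fine byte

universe = [0] * 512                                    # one Art-Net universe
universe[12], universe[13] = fov_to_dmx16(35.0)         # channels 13-14 (1-based)
```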

After adding the camera in D2, set the corresponding FOV angle in Settings, and for the last item select the PSN Net Camera Tracker1 added previously.

8.   Finally, the delay of the tracking data needs to be adjusted to synchronize it with the video image; the unit is frames. The specific operation is to pan the camera left and right while adjusting the delay frame count until the virtual and real images move together.
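Conceptually, delaying the tracking data by N frames is just a ring buffer: each new sample goes in, and the sample from N frames ago comes out. A minimal sketch:

```python
from collections import deque

class FrameDelay:
    """Delay tracking samples by a fixed number of video frames."""
    def __init__(self, frames):
        # Pre-fill so the first outputs are defined while the buffer warms up.
        self.buf = deque([None] * frames, maxlen=frames + 1)

    def push(self, sample):
        self.buf.append(sample)
        return self.buf[0]      # the sample from `frames` frames ago

delay = FrameDelay(frames=3)    # tune while panning the camera left and right
```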

 
