1 Introduction

This is the user guide for the VCAnx video analytics plug-in for the Nx Witness server.

This guide will describe the features and options within the VCAnx plug-in.

2 Architecture

2.1 VCAnx plug-in

The VCAnx plug-in consists of two components: the VCAnx server, which performs the video analytics processing, and the plug-in itself, which integrates the analytics with the Nx Witness server.

Note: The VCAnx server component can be installed on the same hardware platform as Nx Witness, but due to its resource requirements this can result in high system load and reduced channel capacity.

2.2 VCAnx configuration tool

The VCAnx configuration tool is a stand-alone application, designed to provide a feature-rich experience when configuring a channel's video analytics features.

diagram

3 Prerequisites

For the purposes of this document, it is assumed that the VCAnx server component will be installed on a dedicated hardware platform.

3.1 Hardware

The hardware specifications for a given system will depend on the intended number of video channels to be processed, as well as which trackers and algorithms will be run on those channels. Some initial guidelines are provided below:

3.1.1 x86

3.1.2 NVIDIA Jetson

VCAnx also supports GPU acceleration on the NVIDIA Jetson Orin range of embedded devices. For optimal performance, JetPack 5.0.1 should be installed.

In the absence of a correctly installed and configured GPU, VCAserver will default to running the deep learning features on the CPU, and algorithms with a strict GPU requirement will not be available.

3.2 Software

As the combinations of operating system, drivers and hardware are so variable, software requirements are based on the configurations used internally for testing.

3.2.1 Environment

To ensure a host system is ready to run VCAnx, it is advised that the following checks are made:

  1. Check the NVIDIA graphics card is detected by the driver using the NVIDIA tool nvidia-smi. At the command prompt, type nvidia-smi.
  2. Check the NVIDIA CUDA Toolkit is installed and configured in the OS environment using the NVIDIA tool nvcc. At the command prompt, type nvcc -V.
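Both checks can be scripted. The sketch below is a minimal helper (not part of the VCAnx installer) that reports whether each tool is present on the host:

```shell
# Check that the NVIDIA driver and CUDA Toolkit tools are on the PATH.
# If either is missing, deep learning features fall back to the CPU and
# GPU-only algorithms will be unavailable.
for tool in nvidia-smi nvcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: NOT found"
  fi
done
```

If a tool is reported as NOT found, install or repair the NVIDIA driver (for nvidia-smi) or the CUDA Toolkit (for nvcc) before installing VCAnx.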

4 Installing VCAnx

The latest version of the VCAnx plug-in for Nx Witness can be downloaded through the support portal on the VCA technology website or obtained from your local software distributor.

4.1 Windows

Copy the installation files to the target system and install the VCAnx plug-in.

vcanx_installer_**VERSION_NUMBER**

install VCAnx

Where the same hardware is being used for Nx and VCAnx, select both options to install.

4.2 Linux

The VCAnx server and VCAnx plug-in for Linux come as a single archive file containing an .sh script, which handles the installation of the components. Once the archive has been downloaded, navigate to the folder and unpack the installation script from the archive.

./vcanx-installer-**VERSION_NUMBER**-linux64-vca_core-**VERSION_NUMBER**.sh

The .sh script supports three different install options.

--server-only

sudo ./vcanx-installer-**VERSION_NUMBER**-linux64-vca_core-**VERSION_NUMBER**.sh --server-only

--plugin-only

sudo ./vcanx-installer-**VERSION_NUMBER**-linux64-vca_core-**VERSION_NUMBER**.sh --plugin-only

--both

sudo ./vcanx-installer-**VERSION_NUMBER**-linux64-vca_core-**VERSION_NUMBER**.sh --both

Where the same hardware is being used for Nx and VCAnx, use the --both option to install both components.
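The Linux steps above can be wrapped in a small dry-run helper that prints the commands for a given version string. This is a hypothetical convenience script, not part of the product; the installer filename pattern is taken from this guide, and VERSION_NUMBER stands for whichever release you downloaded:

```shell
# Print (not execute) the VCAnx Linux install commands for a version string.
VERSION="${1:-VERSION_NUMBER}"
INSTALLER="vcanx-installer-${VERSION}-linux64-vca_core-${VERSION}.sh"
echo "chmod +x ./${INSTALLER}"
echo "sudo ./${INSTALLER} --both   # or --server-only / --plugin-only"
```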

5 VCAnx configuration tool

Although the VCA analytics can be configured through the Nx Witness Client, some of the features cannot be fully realised due to integration limitations. To resolve this, a separate tool is available that provides a complete experience and aids in the configuration process.

5.1 Installing the VCAnx configuration tool

5.1.1 Linux

The VCAnx configuration tool is provided as a snap package and can be installed through the following command.

sudo snap install ./vcanx_vca-config_**VERSION_NUMBER**_amd64.snap --dangerous

vcanx-config

5.1.2 Windows

The VCAnx configuration tool is provided as an EXE installer and can be installed by copying it to the target system and running the file.

vcanx-config-setup **VERSION_NUMBER**-x64

vcanx-setup

5.2 Logging in

Once installed, you can access the tool from the applications on the desktop. The configuration tool displays an interface similar to the Nx Witness Client; connect to the Nx Witness server you want to configure and log in.

vcanx-config-login

Once you are logged into the Nx Witness server, a list of cameras is shown in the left sidebar. This list shows all the cameras that have the VCAnx plug-in enabled; the plug-in can be enabled or disabled through the Nx Witness Client.

vcanx-config-options

5.2.1 Enable

This option shows the status of the plug-in for the selected camera. From here you can change the license type and deep learning features.

Note: The calibration and classification features are not displayed when using the Deep Learning filter, DL Object Tracker or DL People Tracker. To show the available options, select the tracker and click Apply.

Note: When first selected, the DL trackers will run a model generation process. This optimises the DL models to run on the available GPU hardware. Irrespective of which tracker is selected, the DL People Tracker model, DL Object Tracker model and the DL Filter model will all be optimised in one go. This process can take up to 10 minutes per model and may take longer depending on the GPU configuration. The process will not need to be run again unless the GPU hardware is changed. Whilst optimisation is performed, a message will be displayed in the live view, and no objects will be tracked during this time.

6 Configure Nx Witness

After the VCAnx server has been installed, the Nx Witness server needs to be configured.

VCAnx server list menu

Note: You can define up to 4 VCAnx servers.

VCAnx server list

7 Licenses

In order to use the analytics features, a license is required. Licenses can be managed through the configuration tool. Please refer to your software distributor to obtain licenses.

To view and edit the licenses, select License from the menu.

license menu option

Licenses

The page provides a list of all the licenses available from the configured servers; it also displays which licenses are currently allocated.

7.1 Active Licenses

7.2 New Licenses

Tabs are created for each server; select the server tab to view the HWGUID and add new licenses.

7.3 How to add a license(s)

7.4 How to Remove License(s)

8 Enabling the plug-in for the camera

After the Nx Witness server has been configured to use the VCAnx servers, enable the plug-in for each of the cameras where you want to apply video analytics.

enable plugin against camera

9 Deep Learning

Contains the object classifications and threshold for the deep learning filter.

deep learning

10 Rules

This displays all the rules that have been configured for the selected camera and allows rules to be created, modified or deleted as required. A snapshot of the camera is displayed on the screen to allow rules to be defined.

rules

10.1 Types of Rules Available

10.2 How to Add a Rule

10.3 How to Modify a Rule

10.4 How to Delete a Rule

10.5 Intrusion

The intrusion rule triggers an event when an object is first detected in a zone.

Note: The intrusion rule will trigger in the same circumstances as the Enter and Appear rules; the choice of which rule is most appropriate will depend on the scenario.

vcanx-intrusion

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.6 Enter

The enter rule triggers an event when an object crosses from outside a zone to inside a zone.

Note: The enter rule detects already-tracked objects crossing the zone border from outside to inside.

enter

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.7 Exit

The exit rule triggers an event when an object crosses from inside a zone to outside a zone.

Note: The exit rule detects already-tracked objects crossing the zone border from inside to outside.

exit

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.8 Loitering

The loitering rule triggers an event when an object is present in a particular zone for a predefined period of time.

loitering

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.9 Appear

The appear rule triggers an event when an object starts to be tracked from within a zone.

appear

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.10 Disappear

The disappear rule triggers an event when an object stops being tracked within a zone.

disappear

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.11 Abandoned

The abandoned rule triggers an event when an object is left in a zone for the specified time.

Note: The abandoned object threshold time is a global setting and can be found in the advanced section.

abandoned

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

10.12 Removed

The removed rule triggers an event when the area within a zone has changed for the specified time.

Note: The removed rule uses the same threshold as the abandoned rule; the abandoned threshold time is a global setting and can be found in the advanced section.

removed

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

10.13 Stopped

The stopped rule triggers an event when an object has stopped in a particular zone for a pre-defined period of time.

Note: The stopped rule does not detect abandoned objects. It only detects objects which have moved at some point and then become stationary.

stopped

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.14 Line Crossing

The line crossing rule triggers an event when an object is first detected crossing a particular line.

Note: The line crossing rule will trigger in the same circumstances as the direction violation rule; the choice of which rule is most appropriate will depend on the scenario.

line crossing

The rule will create a line and overlay it on the snapshot image.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.15 Counting Line

The counting line rule triggers an event when motion is detected crossing the line in the direction indicated and within the width defined.

Note: The counting line differs from the direction violation rule in that it does not use the VCA object tracker; instead it detects motion past the line.

counting line

The rule will create a line and overlay it on the snapshot image.

Note: The direction that will be used is shown on the screen as you select the options.

Note: The object filter feature is not available when using the counting line rule.

10.16 Tailgating

The tailgating rule triggers an event when objects cross a line in quick succession of each other, within the defined time.

tailgating

The rule will create a line and overlay it on the snapshot image.

10.17 Direction Violation

The direction violation rule triggers an event when an object crosses the detection line in a particular direction and within the acceptance parameters.

direction

The rule will create a line and overlay it on the snapshot image.

Note: You can also adjust these settings using the on-screen controls. Click and hold inside the dotted circles and drag to your desired angle.

Note: The object filter option is only available when the standard Object Tracker or the Deep Learning Object Tracker is selected. Filter options will change depending on which tracker is being used.

10.18 Logical Rule

Logical rules extend the standard rules by allowing various inputs to be combined using logical expressions; this helps to reduce false events.

Note: Only certain rules can be used in a logical rule.

logical rules

10.19 None Detection

The none detect zone can be used to exclude areas of the scene from analysis. This can reduce false triggers caused by moving foliage or busy scenes.

none detect

The rule creates a zone and overlays it on the snapshot image; the zone can be reshaped as required. Selecting a minor node splits the segment, creating a more complex shape; to remove a segment, right-click a major node and select delete.

11 Calibration

Camera calibration is required for object identification and classification to occur. If the height, tilt and vertical field of view are known, these can be entered as parameters in the appropriate fields. If, however, these parameters are not known, you can use the auto-calibration tool to suggest suitable values.
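As an illustration of how these three parameters relate, standard pinhole-camera geometry (not the plug-in's internal model) gives the strip of ground a camera can see. For a camera at height h, tilted down by angle t from the horizontal, with vertical field of view v:

```latex
% Ground coverage for a camera at height h, tilt t (down from horizontal),
% vertical field of view v; valid while t > v/2, otherwise the view
% reaches the horizon:
d_{\text{near}} = \frac{h}{\tan\!\left(t + \tfrac{v}{2}\right)}, \qquad
d_{\text{far}}  = \frac{h}{\tan\!\left(t - \tfrac{v}{2}\right)}
```

For example, a camera 4 m up, tilted down 30° with a 40° vertical field of view, sees ground from roughly 3.4 m to 22.7 m away; this is why small errors in the entered height or tilt can noticeably shift where objects are sized and classified.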

11.1 Measured Values

11.2 Estimated Values

This section is populated by the calibration tool based on the guide points you define in the five or more snapshots.

11.3 How to Calibrate a Camera

Note: The more images you use, the more accurate the calibration tool will become.

Click Apply to save your changes.

12 Classification

When the calibration features have been defined, objects that are detected are assessed and assigned to one of the classifiers listed in the classification section. It has been preprogrammed with the most commonly used classifiers, but these can be adjusted if the scenario requires.

Adjust the settings of the classifiers by either overwriting the current setting or using the up/down arrows to change the setting.

Note: The calibration process must be completed before objects can be classified.

Classification

Note: When modifying classifiers, avoid overlapping parameters with other classifiers as this will cause the analytics engine to incorrectly identify objects.

13 Tamper

The Tamper feature is intended to detect camera tampering events such as bagging, defocusing and moving the camera. This is achieved by detecting large persistent changes in the image.

Tamper

Note: This option will reduce sensitivity to genuine alarms and should be used with caution. Remember to click Apply for changes to take effect.

14 Advanced

The advanced section contains settings relating to how the analytics engine tracks objects.

Note: In most installations the default configuration will apply.

Advanced

14.1 Analytics Processing Stream

14.2 Object Tracker

Note: Changing the detection point used by the system can affect the point at which objects trigger an event.

14.3 Scene Change

14.4 Display Information