SWARM Documentation

SWARM Analytics Technical Documentation

This documentation is outdated!

Use this technical documentation to learn about, apply, and share SWARM technology.

Who should read this?

This documentation is a technical guideline for partners and customers who are interested in integrating our software into a new or existing data analysis and IoT environment.

Head over to https://docs.bernard-gruppe.com/ for the current documentation.

Update 1

Release Date: 13.12.2023

Number plate area code (Early Availability Feature)

In addition to the number plate raw text and country code, we now support the number plate area code. Some number plates include an area code associated with a geographical area, such as "W" for Vienna (Austria) or "M" for Munich (Germany).

The following 13 countries are supported:

  • Austria

  • Bulgaria

  • Switzerland

  • Czech Republic

  • Germany

  • Greece

  • Croatia

  • Ireland

  • Norway

  • Poland

  • Romania

  • Slovakia

  • Slovenia

How does this work?

  • For supported countries, we detect spaces between letters and parse the raw license plate text according to a predefined format (see the sketch after this list).

  • In the case of countries that are not supported (e.g. Italy), the generated event won't contain an area code.
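As a rough illustration of this space-based parsing (the per-country format rules are SWARM-internal; the plate value and logic below are assumptions for a supported country):

# Hypothetical sketch: treat the first space-separated token of the raw plate
# text as the area code for a supported country (e.g. Austria or Germany).
raw_plate="W 12345 A"   # example Austrian plate; "W" stands for Vienna
country="AT"

case "$country" in
  AT|DE) area_code="${raw_plate%% *}" ;;   # assumed format: area code comes first
  *)     area_code="" ;;                   # unsupported country: no area code in the event
esac

echo "area code: ${area_code:-<none>}"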

How to configure?

All use cases based on ANPR are supported; no additional configuration is required.

How can I access the data?

The Data Analytics widgets "Journey Distributions" and "License Plates" allow segmenting by "License Plate Area"

Minor improvements and bug fixes

  • Improved classification accuracy for the classes person and bicycle (Traffic & Parking Standard/Accuracy+)

  • Fixed Data Analytics, some queries led to inconsistent data

  • Fixed rule engine for Region of Interest occupancy, triggering a hardware output

  • Fixed P100/OP100 slowdown introduced with 2023.3

  • Reduced downtime during device updates

  • Fixed debug video output pixelation of counting line overlays

  • Fixed license notification for not enough SPS licenses


Device Update size (2023.3 -> 2023.3 Update 1):

  • P101/OP101 & NVIDIA Jetson devices: 370MB

  • P100/OP100: 350MB

The device update will be automatically applied by 31.01.24!

No breaking API/Event changes (only additions), this includes:

  • The Control Center API and Data Analytics API

  • The Event Format

P101, P401 or OP101

Quick Start Guide

The SWARM Perception Box is a managed black box, meaning you do not need to manage hardware, OS, drivers, or security patches. You also cannot access the system.

To get set up, follow these steps:

1. Mount and Go Online

Your first step includes:

  • Mount the (O)P101/P401 to the desired location.

  • Ensure your (O)P101/P401 is online in your SWARM Control Center.

2. Device Configuration

After the Perception Box is successfully connected to the SWARM Control Center, you must configure the connected camera and the scenario you want to cover.

3. Data Analytics

If you are using a custom broker, the event schema has been adjusted to contain the area code.

For the already configured scenarios, you can gather your data either via MQTT or by creating Dashboards in our Data Analytics. Those dashboards offer out-of-the-box visualizations as well as REST APIs to gather more insights.


SWARM Perception Platform Overview

Technical Architecture

The SWARM Perception Platform consists of the following major components:

Data integration options

  • Data Analytics API (high level)

    • Events are sent to and stored in an Azure Cloud environment managed by Swarm

    • Events are processed and stored by Swarm

    • The API enables easy integration with third-party systems

  • MQTT (low level, see the sketch after this list)

    • Requires operating an MQTT server

    • Requires storing and processing the raw events

    • Enables on-the-edge processing for time-critical and/or offline use cases
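A minimal sketch of the low-level MQTT option, assuming your own broker at broker.example.com and a placeholder topic (the actual topic is defined by your device configuration in the SWARM Control Center):

# Subscribe to all events published by the Perception Agent and print topic + payload.
mosquitto_sub -h broker.example.com -p 1883 -t 'swarm/#' -v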

Principles

The architecture is based on the following principles:

  • Centralized data analytics, device configuration, and update management.

    • Hosted in the Microsoft Azure Cloud, maintained by Swarm.

  • Decentralized camera stream processing at the edge

    • Generated data (Events) and management traffic are decoupled.

    • Support for heterogeneous network infrastructure

  • Scale from one to thousands of SWARM Perception agents

SWARM Control Center (SCC): Manage and configure Perception Boxes and analyze your data with Data Analytics. For both, we provide APIs for integration.

SWARM Perception Agent: The SWARM software running on our products P101/OP101/VPX. The video from an RTSP camera/USB camera is processed with the help of deep learning. Events are generated and sent via MQTT to either Data Analytics or a custom MQTT broker. Single- or multi-camera processing is supported. The engine is configured solely through the SCC.

P401 - Perception Box

Quick Start Guide

P401 diagrams

Equipment Required
  • SWARM Perception Box P401

  • 3-pole cable

  • Screw-Kit for mounting

Optionally included in SWARM delivery or internally sourced:

  • 1-8x RTSP camera (powered via PoE included in P401)

  • 1-8x Ethernet cables (cat5e or higher)

Step 1: Mount the P401

Your P401 and camera(s) should be mounted based on the criteria of your specific use case: Traffic Insights, Parking Insights, Advanced Traffic Insights, or People Entry/Exit Counting.

Step 2: Connect P401 to power

To power the P401, connect the black wire as ground and the red and yellow ones to the + pole on your power source (24VDC/4A).

  • red = 24V+

  • black = GND (ground wire)

  • yellow = IGN (Ignition Wire) - attach it to the + pole together with the red wire

Step 3: Connect P401 and camera to your network
  1. Connect the P401 GbE port to the network via ethernet cable (DHCP).

  2. Connect the RTSP camera(s) to the P401 PoE ports or the internal network.

Step 4: Check if the P401 is online

Once the P401 is connected to power and network with internet connectivity, the Perception Box should be online in your SWARM Control Center.

This may take a few minutes for the first installation.

Step 5: Incoming camera connection
Step 6: Outgoing MQTT connection

Advanced Support

Find the Products Datasheet here.

Ensure the network requirements are met.

Please input the required camera configuration in the 'Camera Connection' section in your SWARM Control Center.

Please choose the required MQTT connection in your SWARM Control Center. You can choose between Data Analytics and a custom MQTT connection.

Ensure the network requirements are followed.

To understand the status of a camera within the SWARM Control Center, refer to the Camera & Device Monitoring documentation.

If you have followed all the instructions above and the P401 is not online in the SWARM Control Center, or if you need to set a static IP configuration for the ethernet interface, please contact us via the SWARM Support Center.

Once Step 6 is complete, return to the Getting Set-Up page to continue with Device Configuration.


OP101AC - Outdoor Perception Box

Quick Start Guide

Once you have received the OP101AC, it is recommended that the instructions below are followed before mounting.

Equipment Required

SWARM delivery includes:

  • SWARM Outdoor Perception Box 101AC

  • 2x Ethernet Power Cable Seal(s)

  • Pole Mounting Kit

Optionally included in SWARM delivery or internally sourced:

  • 1x RTSP camera (powered via PoE or power cable)

  • Camera Mounting Kit

  • 1-2x Ethernet Cables (cat5e or higher | without kink protection)

Not provided by SWARM:

  • 1x IoT Ready Sim Card (size: standard mini | no PIN protection | no APN configuration | data plan size: 2GB)

  • 230VAC Power Connection

Step 1: Open OP101AC
  1. Using the flathead screwdriver, unscrew the pins located at each corner of the OP101AC.

  2. Open the box by raising the lid. The lid will completely detach from the body of the box.

Step 2: Insert SIM card into LTE stick
  1. Remove the LTE stick from the case.

  2. Open the LTE stick by sliding and removing the lid.

  3. Place the SIM card into the LTE stick as instructed inside the device.

  4. Place the lid on the LTE stick and slide it shut.

  5. Check the internet connectivity of the LTE stick by inserting it into a PC/laptop that is disconnected from Wi-Fi or other networks. (Make sure the SIM card has no password/PIN protection and that the right APN settings are applied.)

  6. If the LTE stick supplies the PC/laptop with internet, remove the LTE stick, place it back in the OP101AC, and ensure it is connected.

Step 3: Mount OP101AC

Using the mounting equipment provided, your OP101AC and camera should be mounted based on the criteria of your specific use case: Traffic Insights, Parking Insights, Advanced Traffic Insights, or People Entry/Exit Counting.

Step 4: Configure camera's static IP

If the camera was delivered by SWARM, you can move on to the next stage (Step 5: Connect RTSP camera) as the camera is preconfigured.

  1. Configure the static IP on your RTSP camera (we recommend using IP addresses from 192.168.3.164 upwards; the subnet mask is preset to 255.255.255.0).

  2. Configure the correct resolution for your use case as specified in the SWARM documentation: Traffic Insights, Parking Insights, Advanced Traffic Insights, or People Entry/Exit Counting.

  3. Activate ONVIF if it is supported by the camera model.

For the ONVIF protocol, a user with administrative rights has to be configured, using the same credentials as the camera's web interface.

Step 5: Connect RTSP camera
  1. Connect the camera to the ethernet cable port using a suitable PoE cable

  2. Make sure you use the provided cable seal on the OP101AC's side and a weatherproof seal on the camera's side

Repeat if you have a second RTSP camera.

Step 6: Power OP101AC
  1. Ensure the 230V power connection is off.

  2. Remove the cap of the power adapter from the OP101AC.

  3. Take the 3 wires (N, L, PE) from the 230V power connection and insert them into the hole of the power adapter cap which you have just removed.

  4. Insert each wire into the matching wire in the OP101AC power adapter (see below for wire color codes).

  5. To secure, screw the wires in place using the flathead screwdriver.

  6. Raise the seal over the power adapter and screw each cap of the adapter to secure it.

  7. Turn the 230V power connection on.

Wire Color Codes

  • Protective Earth (PE) = Green / Green and Yellow

  • Neutral (N) = Black / Blue

  • Single Phase: Line (L) = Red / Brown

Step 7: Online in SCC

Before you close the OP101AC, check if the OP101AC is online in the SWARM Control Center.

Step 8: Close OP101AC
  1. Place the lid back on the case.

  2. Using the flathead screwdriver, rotate all the pins to secure the OP101AC.

  3. To ensure the lid is firmly closed, gently attempt to raise the lid from the box.

Advanced Support

Install VPX Agent on X86/NVIDIA Server

Install NVIDIA drivers and the Docker engine

After you have followed the installation guidelines, you should get output similar to the following:

Install Azure IoTEdge

Skip the container engine installation (you did that already)

Configure Azure IotEdge

You will receive a ZIP file from Swarm with configuration files. (Replace $ID with the device ID you received from SWARM)

At this point check the IotEdge logs for any errors

You will now see your deployment in the SWARM Control Center as "Unnamed Device" with the registration ID:

At this stage, the IoT Edge runtime will pull the docker images and once finished the device can be configured in the Control Center.

Next steps: Configure your use case.

Find the Product Datasheet here.

Make sure to use a SIM card with sufficient data volume. For normal use, approx. 1 GB per month per device is needed.

If your specific use case is not specified, please select 1080p (1920 x 1080).

Enabling ONVIF during initial setup not only saves time for possible support cases in the future, but you can also benefit from applying Swarm's recommendations on camera parameters with a single click in your Control Center.

If you have followed all the instructions above and the OP101AC is not online in the SWARM Control Center, please check out our troubleshooting guidelines.

Please see our benchmarks for the number of cameras that can be used.

Once your OP101AC is mounted and online, return to the Getting Set-Up page to continue with Device Configuration.

Check our system requirements first!

docker run --rm --gpus all nvidia/cuda:11.6.2-base-ubuntu20.04 nvidia-smi

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.86       Driver Version: 470.86       CUDA Version: 11.6     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
ID=<DEVICE-ID>
sudo mkdir /opt/swarm/
sudo mkdir /opt/swarm/config
sudo mkdir -p /etc/iotedge/
sudo cp $ID/VPX-$ID/config.yaml /etc/iotedge/config.yaml
sudo chown iotedge:iotedge /etc/iotedge/config.yaml
sudo cp $ID/VPX-$ID/config.toml /etc/aziot/config.toml
sudo mkdir -p /var/lib/iotedge/hsm/
sudo cp $ID/all/azure-production-certificates/* /var/lib/iotedge/hsm/
sudo chown aziotcs:aziotcs /var/lib/iotedge/hsm/iot-edge-device-SwarmEdgeDeviceCA-full-chain.cert.pem
sudo chown aziotcs:aziotcs /var/lib/iotedge/hsm/swarm-iot.root.ca.cert.pem
sudo chown aziotks:aziotks /var/lib/iotedge/hsm/iot-edge-device-SwarmEdgeDeviceCA.key.pem
sudo iotedge config apply -c '/etc/aziot/config.toml'
iotedge system logs -- -f
docker ps -a
CONTAINER ID   IMAGE                                                    COMMAND                   CREATED        STATUS        PORTS                                                                                                                         NAMES
118c96889ec7   swarm.azurecr.io/curiosity-x64-tensorrt:6.6.33           "/usr/local/bin/nvid…"    6 days ago     Up 6 days     0.0.0.0:8090->8090/tcp, :::8090->8090/tcp                                                                                     curiosity
95d63beb336e   swarm.azurecr.io/azure-module-x64:1.7.1                  "java -jar app.jar"       6 days ago     Up 6 days                                                                                                                                   azure-module
3003fd8db258   mcr.microsoft.com/azureiotedge-hub:1.1.15                "/bin/sh -c 'echo \"$…"   3 months ago   Up 4 weeks    0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:5671->5671/tcp, :::5671->5671/tcp, 0.0.0.0:8883->8883/tcp, :::8883->8883/tcp   edgeHub
6f469d38d91d   mcr.microsoft.com/azureiotedge-metrics-collector:1.0.9   "/bin/sh -c 'echo \"$…"   3 months ago   Up 3 months                                                                                                                                 azure-monitor
88d10c824af0   mcr.microsoft.com/azureiotedge-agent:1.1.15              "/bin/sh -c 'exec /a…"    3 months ago   Up 3 months                                                                                                                                 edgeAgent
Installation references:

  • System requirements

  • Install the Docker Engine

  • Install the NVIDIA drivers and CUDA

  • Install the NVIDIA container toolkit

  • Install IotEdge 1.4 (only the package aziot-edge is required)

Virtual Perception Box

The Virtual Perception Box (VPX) is a software-only product that is provided as Docker containers (SPS license).

  • You are fully responsible for the setup and maintenance of the hardware and software (OS, driver, networking, system applications, …)

  • Production-ready checklist for devices with VPX (Swarm Analytics recommendation)

    • Integrate remote management (e.g. VPN/SSH)

    • Integrate update management (e.g. Ansible) to update system packages like IotEdge, Jetpack, Docker

    • Device monitoring (see the sketch after this list)

    • Check security (firewall, security patches applied, strong passwords/certificates)
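A minimal sketch of what a recurring health check on a VPX host could look like (generic commands run on the host, not a SWARM-provided tool):

# Verify the IoT Edge configuration and connectivity, confirm the SWARM
# containers are running, and keep an eye on free disk space.
sudo iotedge check
sudo docker ps
df -h /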

Install VPX Agent on NVIDIA Jetson (Jetpack 4.6)

Install JetPack 4.6.0

Make sure to match the exact JetPack version. Don't use newer or older versions.

Free up hard disk space on your Jetson device (optional)

Some Jetson devices don't have enough hard disk space for the VPX agent. You can run the following script, which removes non-essential applications (at your own risk).

Alternative: Use an SSD with >32GB for storage.

Install VPX Agent

With our installer script, installing the VPX agent is easy. Make sure to get the serial(s) from us in advance.

After the installation script is complete, the IoT Edge runtime will pull four docker containers as outlined below.

Make sure that the container curiosity-arm64-tensorrt is used.

Downloading curiosity might take a while

You will see in the SWARM Control Center an "Unnamed Device" with the corresponding registration ID:

Set-up Traffic Counting

How to succeed in traffic counting including the classification of vehicles according to our Classes/Subclasses on dedicated urban & highway streets

Do you want to know the traffic situation of an urban street or highway? SWARM software provides the solution: the number of vehicles passing on the street, split per vehicle type (classes) and direction.

What data can be generated?

For this use case, SWARM software provides you with all the data needed for traffic counting: counts of vehicles including their classification. The counts are split between two directions (IN/OUT). Furthermore, several counts can be made in one camera view, e.g. counting each lane separately. On top, you have the opportunity to add a second counting line, calibrate the distance in between, and estimate the speed of the vehicles passing both lines.

What needs to be considered for a successful analysis?

Environment specification

Hardware Specifications

Traffic Insights

Use Cases for Traffic Scenarios

There are two major use cases for understanding the traffic situation across your city, on urban streets as well as highways: traffic counting (optionally with speed estimates) and intersection insights.

Flash the system image JetPack 4.6.0 (L4T 32.6.1) onto your Jetson device. Follow the documentation from NVIDIA. Depending on your hardware capability, you have the option to use an SD card or internal storage.

Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Possible Camera for this use case

The configuration of the solution can be managed centrally in the SWARM Control Center. Below, you can see how the standard traffic counting needs to be configured for optimal results.

In order to start your configuration, take care that you have completed your camera and data configuration.

Configuration settings

Configuration
Settings

How to place the configuration type?

To receive the best accuracy of the counting including the classification, the Counting Line should be placed approximately in the middle of the video frame so that vehicles from both directions are visible long enough for good detection and classification.

You can choose the direction IN/OUT as you want in order to retrieve the data as needed. On top, you have the option to give a custom direction name for the IN and OUT directions.

Visualize data

Scenario

In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.

Example

Here is an example of a Traffic counting widget. You have different options to choose the data you want for a certain time period as well as the output format (e.g. bar chart, table, ...).

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.

You can visualize data via Data Analytics in different widgets.

If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the request to use for retrieving the data of each widget.
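As a rough sketch of what such a request could look like (the base URL, widget ID, and authentication shown here are placeholders - use the exact request shown for your widget in Data Analytics):

# Pull the data behind a widget over the REST API (all values below are assumptions).
curl -s -H "Authorization: Bearer $TOKEN" "https://data-analytics.example.com/widgets/<WIDGET-ID>/data"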

sudo apt update && sudo apt install curl -y
curl http://get-vpx.swarm-analytics.com/clean_jetson.sh > clean_jetson.sh  && chmod +x clean_jetson.sh 
./clean_jetson.sh
sudo apt update && sudo apt install curl -y
curl http://get-vpx.swarm-analytics.com/install.sh > install.sh  && chmod +x install.sh 
./install.sh
swarm@VPX:~$ sudo docker ps -a
CONTAINER ID        IMAGE                                             COMMAND                   CREATED             STATUS              PORTS                                                                  NAMES
57eec104e917        swarm.azurecr.io/curiosity-arm64-tensorrt:5.3.0   "./curiosity"             2 weeks ago         Up 5 minutes                                                                               curiosity
82b106f9d3d7        swarm.azurecr.io/azure-module-arm64:1.1.0         "java -jar app.jar"       6 weeks ago         Up 10 days                                                                                 azure-module
27ffd61ab021        mcr.microsoft.com/azureiotedge-hub:1.0.10.3       "/bin/sh -c 'echo \"$…"   6 weeks ago         Up 10 days          0.0.0.0:443->443/tcp, 0.0.0.0:5671->5671/tcp, 0.0.0.0:8883->8883/tcp   edgeHub
5e96e96eb440        mcr.microsoft.com/azureiotedge-agent:1.0.10.3     "/bin/sh -c 'exec /a…"    6 weeks ago         Up 10 days                                                                                 edgeAgent

  • Object velocity: < 130 km/h

  • Day/Night/Lighting: Daytime / Well illuminated / Night vision

  • Indoor/Outdoor: Outdoor

  • Expected Accuracy (Counting + Classification), when all environmental, hardware, and camera requirements are met: Counting >95% (vehicles, bicycles); Classification of main classes: >95%; Classification of subclasses: >85%

  • Supported Products: VPX, P401, P101/OP101, P100/OP100

  • Frames Per Second: 25


Set-up Intersection Insights

How to succeed in getting the information for crossings or roundabouts as well as vehicle movements from Origin (Entry) to Destination (Exit).

Do you want to know the flow of your intersections? The SWARM Perception Platform provides the solution: the number of vehicles starting in an origin zone and ending up in a destination zone, per vehicle type (classes).

What data can be generated?

For this use case, SWARM software provides you with all the data needed for Origin-Destination zones: the number of vehicles travelling from one zone to another, including classification according to the BAST/TLS standards.

What needs to be considered for a successful analysis?

Environment specification

  • Object velocity: < 50 km/h

  • Day/Night/Lighting: Daytime / Well illuminated / Night vision

  • Indoor/Outdoor: Outdoor

  • Expected Accuracy (From Origin to Destination), when all environmental, hardware, and camera requirements are met: Origin-Destination counts + right main class: >85% (for vehicles); Classification of main classes: >95%; Classification of subclasses: >85%

Hardware Specifications

  • Supported Products: VPX, P401, P101/OP101, P100/OP100

  • Frames Per Second (FPS): 25

Version 2024.1

Release Date: 28.02.2024

Control Center

  • Device Updates

    • Select individual devices for an update, e.g. allowing a staged rollout site by site

    • Schedule the update, e.g. at 5 a.m., to reduce the production impact to a minimum

  • Device Camera View

    • Auto-refresh the camera preview for up to 30 seconds for convenient calibration

    • The stream automatically expands when only one stream is configured

    • Event triggers and the focus zone are now visible in every camera preview

    • Show or hide the event triggers/focus area

    • The focus area no longer hides event triggers

  • The MQTT QoS level (0 - at most once, 1 - at least once, 2 - exactly once) for a customer broker is configurable (see the example after this list). Higher levels of QoS are more reliable, but involve higher latency and higher bandwidth requirements.

  • Side Menu: Devices is now the first item in the side menu. The last selected side menu item is remembered locally in your browser cache.

  • API

    • Potentially breaking change: Authentication requests are throttled, limit: 100/hour. Note: The number of API calls for the Control Center is not throttled.

    • Potentially breaking change: Serial number format changed to string
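To illustrate the three QoS levels, here is how they map onto a standard MQTT client such as mosquitto_pub (broker host and topic are placeholders):

# QoS 0: at most once, QoS 1: at least once, QoS 2: exactly once.
mosquitto_pub -h broker.example.com -t 'swarm/test' -m 'event' -q 0
mosquitto_pub -h broker.example.com -t 'swarm/test' -m 'event' -q 1
mosquitto_pub -h broker.example.com -t 'swarm/test' -m 'event' -q 2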

Device

The device improvements are available once you deploy the update for your organisations/tenants.

Models

  • The detection accuracy for Traffic&Parking (Standard and Accuracy+) has been improved, in particular the classes: car with trailer, trucks (single unit, articulated, with trailer) and bicycle.

Tracker

We improved the data quality by enhancing the tracking system for:

  • Non-linear movement (e.g. vehicles in roundabouts)

  • Short occlusions (e.g. vehicles occluded by poles or signs)

  • Track hijacking (one or more objects share the same track) when used in combination with the focus area

  • Calibration visualisation

    • Tracks are coloured in blue when they are too small to classify.

    • Long tracks will no longer be cut off

This is not a marketing video, it shows the engine overlay we ship with 2024.1. No hand tuning has been performed.

Device Update size (2023.3 U1 -> 2024.1):

  • P101/OP101 & NVIDIA Jetson devices: 150MB

  • P100/OP100: 150MB

P101 - Perception Box

Quick Start Guide

P101 diagram

Equipment Required
  • SWARM Perception Box P101

  • Power Adapter (Barrel Jack)

Optionally included in SWARM delivery or internally sourced:

  • 1x RTSP camera

  • 1x Switch/Router with DHCP service or external power for the camera

  • 1-2x Ethernet cables (cat5e or higher)

Step 1: Mount the P101

Your P101 and camera should be mounted based on the criteria of your specific use case: Traffic Insights, Parking Insights, Advanced Traffic Insights, or People Entry/Exit Counting.

Step 2: Connect P101 to power

Connect the P101 to power using the barrel jack power adapter.

Step 3: Connect P101 and camera to your network
  1. Connect the P101 to the Switch/network via ethernet cable.

  2. Connect the RTSP camera to the same Switch/network via ethernet cable.

Step 4: Check if the P101 is online

Once the P101 is connected to power and network with internet connectivity, the Perception Box should be online in your SWARM Control Center.

This may take a few minutes for the first installation.

Step 5: Incoming camera connection
Step 6: Outgoing MQTT connection

Advanced Support

Set-up Barrierless Parking

How to succeed in setting up a Barrierless Parking Scenario to gather data about utilization

Do you have a parking space where you simply want to know the utilization by making an Entry/Exit count? SWARM provides a solution for doing that quite easily. See for yourself:

What data can be generated?

For this use case, SWARM software provides you with all the relevant data for your Entry/Exit parking space. The solution gathers the number of vehicles in your parking space as well as the number of vehicles entering and exiting it for customizable time frames.

The vehicles are classified into any of the classes the SWARM software can detect. Nevertheless, consider that the following configuration is optimized to detect vehicles, not people and bicycles.

What needs to be considered for a successful analysis?

Environment requirements

Hardware Specifications

System requirements

System requirements for our virtual software offering

The following hardware and software requirements must be met for the VPX agent.

NVIDIA Jetson family

  • Hardware

    • Orin Nano 4GB/8GB, Orin NX 8GB/16GB, Orin AGX 32GB/64GB

    • Memory: at least 4 GB available

    • Storage: at least 6 GB free

  • Software

    • JetPack 4.6 or JetPack 5.1.2 (not all Jetson devices support the newer JetPack)

    • IotEdge 1.4

NVIDIA GPUs

  • Hardware

    • CPU: X86-64bit

    • Memory: At least 4 GB available

    • Storage: At least 15 GB free

    • NVIDIA Workstation GPUs

      • RTX series (e.g. RTX A2000)

      • Quadro RTX series (e.g. Quadro RTX 4000)

  • Software

    • Ubuntu 20.04 LTS

    • NVIDIA Driver Version 470 (or newer)

    • Docker 19.0.1+

    • IotEdge 1.4

Install VPX Agent on NVIDIA Jetson (Jetpack 5.1.2)

Install JetPack 5.1.2

Make sure the NVIDIA docker engine is installed (apt package nvidia-docker2)

Installation size

Install VPX Agent

With our installer script, installing the VPX agent is easy. Make sure to get the serial(s) from us in advance.

After the installation script is complete, the IoT Edge runtime will pull four docker containers as outlined below.

Make sure that the container curiosity-arm64-jetpack5 is used.

Downloading curiosity might take a while due to the size (~5GB)

You will see in the SWARM Control Center an "Unnamed Device" with the corresponding registration ID:

Version 2023.3

Release Date: 15.11.2023

Adaptive Traffic Control

Adaptive traffic control enables you to interface with hardware devices like traffic controllers using dry contacts. Use cases and benefits:

  • 'Smart Prio' System: Prioritise certain traffic classes and ensure fluid traffic behavior in real-time (e.g. pedestrians, bicyclists, e-scooters, heavy traffic).

  • Simplify infrastructure maintenance: Replace multiple induction loops with a single Swarm Perception Box. The installation does not require excavation work, and reduces the maintenance effort/costs.

Device Health - The health of your device at a glance

The following metrics are supported:

  • Device Uptime, Status, Restarts, Available Disk Space

  • Device Temperature (support for P101/OP101/Jetson Nano)

  • LTE Modem Traffic, Signal Strength and Reconnects (support for OP100/OP101)

  • Camera status and Camera processing speed (FPS)

  • Generated and Pending events

Model Improvements

Traffic & Parking (Standard and Accuracy+)

We improved the classification accuracy for both variants Standard and Accuracy+, especially for the classes: articulated truck, truck with trailer and car with trailer.

People Head

We fixed the class output: the event class is now person (previously head). Affected by this change are the devices P101/OP101/VPX. Not affected are the devices P100/OP100.

We improved the accuracy due to higher resolution processing.

People Full body

We improved the accuracy due to higher resolution processing.

Parking Fisheye

The model has been deprecated and will not be updated in the future. The model will continue to work as long as there are devices using it. Please consider switching to the Traffic & Parking model.

Device Metadata

Organize devices and generate events containing pre-defined device metadata. You can define up to five key-value pairs per device. The keys and values can be freely defined; we support autocompletion for keys to avoid typos.
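As an illustration of how such metadata might be used downstream (the broker, topic, field name, and key/value below are placeholders, not the actual event schema):

# Keep only events whose device metadata contains site=garage-north.
mosquitto_sub -h broker.example.com -t 'swarm/#' | jq 'select(.metadata.site == "garage-north")'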

Track Calibration History

Track calibration overlays the last 100 object tracks on a current camera frame. This enables you to position event triggers (e.g. counting lines) for optimal results. We extended the functionality with a history over the last 24 hours.

With track calibration history enabled you will be able to access the track calibration for every hour of the past 24 hours.

The track calibration images will be stored on the edge device and are only accessible through the Control Center. Make sure that viewing, storing, and processing of these images for up to 24 hours is compliant with your applicable data privacy regulations.

Device Update

You decide when your devices get an update. Please update soon in order to use the latest features (e.g. Adaptive Traffic Control, Track Calibration History) and to benefit from quality improvements (e.g. Model updates), bug fixes and security updates.

We will automatically update by 13.12.23.

Device Update size (2023.2 -> 2023.3):

  • P101/OP101 & NVIDIA Jetson devices: 460MB

  • P100/OP100: 470MB

Other improvements

  • Control Center - User Management - Invite users and manage permission of existing users

  • Control Center - MQTT client id can be defined for custom MQTT brokers

  • Data Analytics - Speed-up of Origin-Destination widgets

  • Data Analytics - Fix widget visualisation of the chord diagram (OD) and line charts

  • Fixed: under some conditions, an ROI in combination with a rule did not trigger events

No breaking API/Event changes (only additions), this includes:

  • The Control Center API and Data Analytics API

  • The Event Format

OP101DC - Outdoor Perception Box

Quick Start Guide

Once you have received the OP101DC, it is recommended that the instructions below are followed before mounting.

Equipment Required

SWARM delivery includes:

  • SWARM Outdoor Perception Box 101DC

  • 1x Ethernet Power Cable Seal(s)

  • Pole Mounting Kit

Optionally included in SWARM delivery or internally sourced:

  • 1x RTSP camera (powered via PoE or power cable)

  • Camera Mounting Kit

  • 1x Ethernet Cables (cat5e or higher | without kink protection)

Not provided by SWARM:

  • 1x IoT Ready Sim Card (size: standard mini | no PIN protection | no APN configuration | data plan size: 2GB)

  • 9-24VDC Power Connection

Step 1: Open OP101DC
  1. Using the flathead screwdriver, unscrew the pins located at each corner of the OP101DC.

  2. Open the box by raising the lid. The lid will completely detach from the body of the box.

Step 2: Insert SIM card into LTE stick
  1. Remove the LTE stick from the case.

  2. Open the LTE stick by sliding and removing the lid.

  3. Place the SIM card into the LTE stick as instructed inside the device.

  4. Place the lid on the LTE stick and slide it shut.

  5. Check the internet connectivity of the LTE stick by inserting it into a PC/laptop that is disconnected from Wi-Fi or other networks. (Make sure the SIM card has no password/PIN protection and that the right APN settings are applied.)

  6. If the LTE stick supplies the PC/laptop with internet, remove the LTE stick, place it back in the OP101DC, and ensure it is connected.

Step 3: Mount OP101DC

Using the mounting equipment provided, your OP101DC and camera should be mounted based on the criteria of your specific use case: Traffic Insights, Parking Insights, Advanced Traffic Insights, or People Entry/Exit Counting.

Step 4: Configure camera's static IP

If the camera was delivered by SWARM, you can move on to the next stage (Step 5: Connect RTSP camera) as the camera is preconfigured.

  1. Configure the static IP on your RTSP camera (we recommend using the IP 192.168.3.164; the subnet mask is preset to 255.255.255.0).

  2. Configure the correct resolution for your use case as specified in the SWARM documentation: Traffic Insights, Parking Insights, Advanced Traffic Insights, or People Entry/Exit Counting.

  3. Activate ONVIF if it is supported by the camera model.

For the ONVIF protocol, a user with administrative rights has to be configured, using the same credentials as the camera's web interface.

Step 5: Connect RTSP camera
  1. Connect the camera to the ethernet cable port using a suitable PoE cable

  2. Make sure you use the provided cable seal on the OP101DC's side and a weatherproof seal on the camera's side

Repeat if you have a second RTSP camera.

Step 6: Power OP101DC
  1. Ensure the power connection is off.

  2. Remove the cap of the power adapter from the OP101DC.

  3. Take the 2 wires (+/-) from the power connection and insert them into the hole of the power adapter cap which you have just removed.

  4. Insert each wire into the matching wire in the OP101DC power adapter.

  5. To secure, screw the wires in place using the flathead screwdriver.

  6. Raise the seal over the power adapter and screw each cap of the adapter to secure it.

  7. Turn the 9-24VDC power connection on.

Step 7: Online in SCC

Before you close the OP101DC, check if the OP101DC is online in the SWARM Control Center.

Step 8: Close OP101DC
  1. Place the lid back on the case.

  2. Using the flathead screwdriver, rotate all the pins to secure the OP101DC.

  3. To ensure the lid is firmly closed, gently attempt to raise the lid from the box.

Advanced Support

Set-up Traffic Counting with speed estimates

How to succeed in traffic counting including speed estimates of vehicles according to our Classes/Subclasses on dedicated urban & highway streets

Do you want to know the average speed of your traffic in given areas? SWARM software provides the solution: the number of vehicles passing on the street, split into speed segments (10 km/h), or the average speed of the given count over an aggregated time period.

What data can be generated?

What needs to be considered for a successful analysis?

Environment specification

Hardware Specifications

Recommended: Pixels Per Meter (PPM)

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

  • > 30 PPM for object classes car, truck

  • > 60 PPM for object classes person, bicycle, motorbike

Using the camera parameters defined below ensures that the minimum required PPM value is achieved. Tip: Use the Axis lens calculator or a generic lens calculator.

  • Camera video resolution: 1280×720 pixel

  • Camera video protocol/codec: RTSP/H264

  • Camera focal length: 2.8mm-12mm

  • Camera mounting - distance to object center:

    • Object classes car, truck: 5-30 meters (2.8mm focal length), 35-100 meters (12mm focal length)

    • Object classes person, bicycle, scooter: 3-12 meters (2.8mm focal length), 25-50 meters (12mm focal length)

  • Camera mounting height: up to 10 meters. Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks

  • Camera mounting - vertical angle to the object: <50°. Note: Setting the correct distance to the vehicle and camera mounting height should result in the correct vertical angle to the vehicle

  • Camera mounting - horizontal angle to the object: 0° - 90°. Note: An angle of about 15° provides better classification results due to more visible object details (e.g. wheels/axles)

  • Wide Dynamic Range: can be enabled
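As a rough sanity check, PPM can be estimated from the camera's horizontal resolution and the width of the scene covered at the object's distance (the numbers below are illustrative assumptions, not measured values):

# Rough PPM estimate: horizontal pixels divided by the scene width in meters.
pixels=1280          # horizontal camera resolution
scene_width_m=20     # assumed width of the area covered at the counting line
echo "$((pixels / scene_width_m)) PPM"   # -> 64 PPM, above the 60 PPM needed for person/bicycle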

Possible cameras:

  • HikVision DS-2CD2046G2-I - Bullet Camera, 2.8 mm fixed focal length

  • HikVision DS-2CD2645FWD-IZS - Bullet Camera, 2.8mm - 12mm motorised focal length

Configuration:

  • Model: Traffic & Parking (Standard)

  • Configuration option: Counting Line (optional speed)

  • ANPR: Disabled

  • Raw Tracks: Disabled


Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Recommended: Pixels Per Meter (PPM)

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

  • > 30 PPM for object classes car, truck

  • > 60 PPM for object classes person, bicycle, motorbike

Using the camera parameters defined below ensures that the minimum required PPM value is achieved. Tip: Use the Axis lens calculator or a generic lens calculator.

  • Camera video resolution: 1280×720 pixel

  • Camera video protocol/codec: RTSP/H264

  • Camera focal length: 2.8mm-12mm

  • Camera mounting - distance to object center:

    • Object classes car, truck: 5-30 meters (2.8mm focal length), 35-100 meters (12mm focal length)

    • Object classes person, bicycle, scooter: 3-12 meters (2.8mm focal length), 25-50 meters (12mm focal length)

  • Camera mounting height: up to 10 meters. Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks

  • Camera mounting - vertical angle to the object: <50°. Note: Setting the correct distance to the vehicle and camera mounting height should result in the correct vertical angle to the vehicle

  • Camera mounting - horizontal angle to the object: 0° - 360°

  • Camera Infrared Mode: can be enabled

  • Wide Dynamic Range: can be enabled

Possible Camera for this use case

  • HikVision DS-2CD2046G2-I - Bullet Camera, 2.8 mm fixed focal length

  • HikVision DS-2CD2645FWD-IZS - Bullet Camera, 2.8mm - 12mm motorised focal length

The configuration of the solution can be managed centrally in the SWARM Control Center. Below you can see how an Origin-Destination analysis needs to be configured for optimal results.

In order to start your configuration, take care that you have completed your camera and data configuration accordingly.

Configuration settings

  • Model: Traffic & Parking (Standard)

  • Configuration option: Origin Destination Zones

  • ANPR: Disabled

  • Raw tracks: Disabled

How to place the configuration type?

For Origin/Destination, at least two zones need to be configured. The zones can be placed as needed on the video frame. Consider that the first zone the vehicle passes will be considered the Origin zone and the last one the Destination zone.

If a vehicle passes Zone A, then Zone B, and afterwards Zone C, the OD event will be Origin: A and Destination: C.

On top, it is important that the zones are configured as large as possible so that there is enough space/time to detect the vehicles successfully.

Visualize data

Scenario

In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.

Example

You have different options to choose the data you want in the preferred output format (e.g. bar chart, table, ...). For Origin-Destination analysis there is a special chart - the Chord Chart - for visualizing the flow with the counts.

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.

If you would like to integrate the data in your IT environment, you can use our SWARM API. In data discovery, you will find a description of the request to use to retrieve the data of each widget.


Find the Products Datasheet here.

Ensure the network requirements are met.

Please input the required camera configuration in the 'Camera Connection' section in your SWARM Control Center.

Please choose the required MQTT connection in your SWARM Control Center. You can choose between Data Analytics and a custom MQTT connection.

Ensure the network requirements are followed.

To understand the status of a camera within the SWARM Control Center, refer to the Camera & Device Monitoring documentation.

If you have followed all the instructions above and the P101 is not online in the SWARM Control Center, or if you need to set a static IP configuration for the ethernet interface, please contact us via the SWARM Support Center.

Once Step 6 is complete, return to the Getting Set-Up page to continue with Device Configuration.

Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Possible Camera for this use case

The configuration of the solution can be managed centrally in the SWARM Control Center. Below, you can see how the Entry/Exit parking with license plate detection needs to be configured for optimal results.

In order to start your configuration, take care that you have completed your camera and data configuration.

Configuration settings

How to place the configuration type?

To receive the best accuracy in counting including the classification, the Counting Line (CL) should be placed approximately in the middle of the video frame so that vehicles from both directions are visible long enough for good detection and classification.

Consider that the IN/OUT direction of the counting line is important as it is relevant for the calculation of the utilization. (IN = Entry to parking, OUT = Exit of parking).

Visualize data

Scenario

In our Parking Scenario section, you can find more details about the possible Widgets to be created in the Parking Scenario Dashboards.

Example

You are able to visualize the data for any Entry/Exit you have configured with the Counting Lines. So you are able to see the number of vehicles with their classes/subclasses that entered or left your parking spot, either aggregated over several Entry/Exits or separately per Entry/Exit. We deliver the two standard widgets, Current & Historic Parking Utilization, out of the box when creating a Parking Scenario Dashboard.

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.

Supported devices: NVIDIA Jetson Nano 4 GB, TX2 NX, Xavier NX, AGX Xavier (JetPack 4.6) or the Orin family (JetPack 5.1.2); note that not all Jetson devices support the newer JetPack.

You can find the benchmark results in "How many cameras can my Perception Box compute?".

In case of uncertainty, please contact support.

Flash the system image JetPack 5.1.2 (L4T 35.4.1) onto your Jetson device. Follow the documentation from NVIDIA. Depending on your hardware capability, you have the option to use an SSD or internal storage.

Jetpack 5.1.2 requires at least 32GB of storage. In order to free up more storage for additional software, either use a bigger storage device or follow the NVIDIA reference.

The device type needs to be set in the SWARM Control Center. Currently, only our support team can do that; please create a ticket for this.

To get started, check out the configuration steps.

The device health metrics allow you to provide evidence for reliable and continuous data collection and to self-diagnose (e.g. stable network connectivity, power supply, camera connection, processing speed, ...).

We replaced the model Parking (Single-/Multispace) with the model Traffic & Parking (Accuracy+). The model is tuned for accuracy while the processing speed (measured in FPS) is reduced. It is ideal for use cases with less dynamic objects, such as Parking Insights (Single- and Multispace).

Once defined, metadata allows you to filter the list of devices by metadata values, and the generated events will include the pre-defined metadata for further processing by your application. For details, have a look at the event schema.

Find the Product Datasheet here.

Make sure to use a SIM card with sufficient data volume. For normal use, approx. 1 GB per month per device is needed.

If your specific use case is not listed, please select 1080p (1920 x 1080).

Enabling ONVIF during initial setup not only saves time for possible support cases in the future, but you can also benefit from applying Swarm's recommendations on camera parameters with a single click in your Control Center.

If you have followed all the instructions above and the OP101DC is not online in the SWARM Control Center, please check out our troubleshooting guidelines.

Please see our benchmarks for the number of cameras that can be used.

Once your OP101DC is mounted and online, return to the Getting Set-Up page to continue with Device Configuration.

For this use case, SWARM software provides you with all the data needed for traffic counting, as explained in the Traffic counting use case.

Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Possible Camera for this use case

The configuration of the solution can be managed centrally in the SWARM Control Center. Below, you can see how a standard traffic counting needs to be configured for optimal results.

In order to start your configuration, take care that you have completed your camera and data configuration.

Configuration settings

Configuration
Settings

How to configure the speed line?

When you have enabled the speed estimation, the Counting Line transforms into two lines with a distance calibration measurement. In order to get a good result on speed estimates, it is crucial that the calibrated distance between the two speed lines is accurate. The distance can be changed in the trigger settings on the left sidebar.
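To see why an accurate calibration distance matters, here is a rough illustration with assumed values (this is not the engine's internal algorithm, just the underlying relation speed = distance / time):

# A vehicle needs 0.8 s between two speed lines that are calibrated 10 m apart.
echo "scale=2; 10 / 0.8 * 3.6" | bc   # -> 45.00 km/h; a 1 m calibration error shifts the result by roughly 4.5 km/h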

You can choose the direction IN/OUT as you want in order to retrieve the data as needed. On top, you have the option to give a custom direction name for the IN and OUT directions.

Visualize data

Scenario

In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.

Example

Here is an example of a Traffic counting widget split by 10 km/h groups of speed estimates. You have different options to choose the data you want for a certain time period.

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.


You can visualize data via Data Analytics in different widgets.

If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the request to use for retrieving the data of each widget.


  • Object velocity: < 30 km/h

  • Day/Night/Lighting: Daytime or Well illuminated

  • Indoor/Outdoor: Indoor or Outdoor

  • Expected Accuracy (Counting only), when all environmental, hardware and camera requirements are met: >95%. Only vehicles are considered; for parking spaces, people, bicycles and motorbikes are not part of our test scenarios as they don't occupy parking spaces.

  • Supported Products: VPX, P401, P101/OP101, P100/OP100

  • Frames Per Second (FPS): 12

sudo apt update && sudo apt install curl -y
curl http://get-vpx.swarm-analytics.com/install-2.0.sh > install.sh  && chmod +x install.sh 
./install.sh
swarm@VPX:~$ sudo docker ps -a
CONTAINER ID        IMAGE                                             COMMAND                   CREATED             STATUS              PORTS                                                                  NAMES
57eec104e917        swarm.azurecr.io/curiosity-arm64-jetpack5:7.1.0   "./curiosity"             2 weeks ago         Up 5 minutes                                                                               curiosity
82b106f9d3d7        swarm.azurecr.io/azure-module-arm64:1.11.1         "java -jar app.jar"       6 weeks ago         Up 10 days                                                                                 azure-module
27ffd61ab021        mcr.microsoft.com/azureiotedge-hub:1.4.10       "/bin/sh -c 'echo \"$…"   6 weeks ago         Up 10 days          0.0.0.0:443->443/tcp, 0.0.0.0:5671->5671/tcp, 0.0.0.0:8883->8883/tcp   edgeHub
5e96e96eb440        mcr.microsoft.com/azureiotedge-agent:1.4.10     "/bin/sh -c 'exec /a…"    6 weeks ago         Up 10 days                                                                                 edgeAgent

  • Object velocity: < 130 km/h

  • Day/Night/Lighting: Daytime / Well illuminated / Night vision

  • Indoor/Outdoor: Outdoor

  • Supported Products: VPX, P401, P101/OP101, P100/OP100

  • Frames Per Second: 25

Showcase Adaptive Traffic Control with the Rule Engine: Left Lane >=3 objects; Right Lane: >= 1 object

Advanced Traffic Insights

Set-up guide and recommendations - ANPR

How ANPR works

Automatic number plate recognition (ANPR) works in four steps: detect vehicles, detect license plates, read the plates, and send events.

1. Detect vehicle

Detect vehicles as cars, trucks, and buses and follow them in the video stream.

2. Detect license plate

For each detected vehicle, detect license plates and map them to the vehicles.

3. Read license plate

For each detected license plate, apply an optical character recognition (OCR) to read the plate.

4. Send event

If the vehicle crosses a counting line, send an event with the text from the detected license plate.

The main challenge for ANPR setups is obtaining clearly readable license plates. This means a sharp and well-illuminated image without occlusions or blurry objects is required to obtain correct results. The following guide shows how to set up our ANPR system and helps to avoid the most common issues.

Typical Setup

The system is designed for two typical setups which are described here.

Single lane from the side

For this setup, the camera is mounted at around 2m height, as close as possible to the side of the lane to avoid a high horizontal angle. If possible, ensure vehicles stay in lane for the ANPR section, as switching lanes can lead to inaccurate results.

Two lanes from above

When positioning the camera above the cars (e.g. entry/exit of a garage) a maximum of two lanes can be covered.

To work properly, vehicles should drive straight through the scene to have the license plate visible in the entire scene. The camera should be at a height of 3m and facing vehicles directly from the front or back to avoid a high horizontal angle to the license plate.

Camera Setup

Camera setup can sometimes be tricky and often requires some experimentation with the camera position and parameters to get optimal results.

The following sections describe common camera issues and how to avoid them.

Resolution

License plates need to be visible with 250 pixel-per-meter (PPM). For a standard European plate, this gives us a minimum height of 30px and a minimum width of 100px to get good recognition results.

For camera setups with object distances within the specification, a FullHD (1080p) resolution is sufficient. In some cases, it might help to choose a higher resolution (4MP or 2K) for a sharper image.

It is recommended to check the size of license plate crops manually during the setup phase.

Viewing Angle

License plates need to be visible from a direct viewing angle. While small angles (<20° horizontal, <30° vertical) and tilting <5° can be handled, larger angles do not work at all. If viewing angles get bigger, the system is more likely to mix up characters or is not able to recognize characters close to the edges.

For camera positions from the side only a single lane is recommended, while with camera views from above, a maximum of two lanes works.

Lighting

Scene illumination has two major effects.

  1. With good illumination, a lower shutter speed can be chosen and images get less blurred, especially for fast-moving vehicles.

  2. Good lighting reduces the ISO value of the camera and images appear less grainy and sharper.

Some cameras offer additional illumination which can be useful. If the camera light is not sufficient, an external illumination of the scene is required.

Digital noise reduction (DNR) should be kept in a low range to further reduce graininess.

Shutter speed

A fast shutter speed (short exposure time) is important for moving objects to get a sharp image and avoid blurriness caused by motion.

While in general faster is better, the selected shutter speed depends on the available light in the scene.

Depending on vehicle speed, a shutter speed of 1/250 is a bare minimum for moving objects below 15 km/h. For faster vehicles, up to 40 km/h, a shutter speed of 1/500 is a good choice. For even faster objects, an even shorter exposure time is required, which only works with good illumination.

With the P101, we only support ANPR on vehicles passing at a maximum of 15 km/h.
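A rough back-of-the-envelope check of the motion blur shows why these values are chosen (illustrative arithmetic: the distance travelled during one exposure; around 1-2 cm keeps plate characters reasonably sharp):

# distance per exposure in cm = speed [km/h] * 100 / 3.6 / shutter denominator
echo "scale=2; 15 * 100 / 3.6 / 250" | bc   # 15 km/h at 1/250 s -> about 1.7 cm
echo "scale=2; 40 * 100 / 3.6 / 500" | bc   # 40 km/h at 1/500 s -> about 2.2 cm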

Encoding Quality

To stream the camera image, data is encoded. Different encodings can save data and reduce image quality. For the ANPR use case, high image quality is required. Select H.264 codec and a high bitrate of >6000kbps for FullHD (1080p) content and >8000 kbps for 4MP video material with 25 FPS.

Additional features such as BLC (backlight compensation) and WDR (wide dynamic range) are not recommended, as the postprocessing can reduce details. If they are necessary, the impact on the video quality should be checked.

A constant bitrate (CBR) usually leads to a better quality than variable bitrate (VBR).

Test setup in various conditions

When setting up the camera, it is recommended to take a few short test videos in different lighting conditions (morning, midday, evening, night) to check if license plates are clearly visible in all conditions.

If license plates are not clearly recognizable for a human, ANPR cannot work. Make sure to get good and clear camera images for best results.

Event Triggers

A suitable position of the event trigger (counting line) is essential for good ANPR results. If the line is positioned too far in the back, the ANPR system has no time to detect and recognize the plate before an event is sent. If the line is in a position where the license plate is only visible at a suboptimal angle, results will not be accurate.

For an optimal counting line position, a short debug video of the scene with 3-5 vehicles is required. In the analysis of the video, follow the vehicle through the optimal section with the best view of the plate (see Example 6). At the point where the view of the plate starts to get worse (see Example 7), position the line right behind the center of the vehicle.

This method ensures that the system can utilize the best video parts to detect and recognize the license plate and sends the event just before suboptimal views worsen the result.

Accuracy

Our ANPR system is thoroughly tested under various conditions. Our test setup covers around five different scenes, and accuracies are calculated on the basis of >800 European vehicles. Overall accuracy means the percentage of correctly identified vehicles including their license plates, compared to all passing vehicles with readable license plates.

Under the specified conditions, the system reaches >95% overall accuracy in slow parking environments and >90% in environments with fast vehicles.

For a detailed analysis of potential errors, see limitations described below.

Limitations

A base limitation that cannot be solved is the general readability of license plates. Plates with occlusions, covered with dust or snow or incorrectly mounted plates cannot be read. Environmental limitations such as strong rain or snow which blocks the clear view on plates can also lead to inaccurate results.

General

There are a few hard limitations where the system cannot provide good results.

1. Illumination (day-only)

Currently, the system requires good illumination. In practice this usually means daytime only; however, it also works for well-lit night scenes if the license plates are clearly recognizable.

2. Single-line plates only

License plates with two lines (such as motorcycle plates) are not supported and recognition will not work.

3. EU license plates only

The recognition system is limited to standard EU license plates. It can work with license plates from other countries (and some older non-standard EU plates), but there are no accuracy guarantees.

Error Types

There are four potential errors that can occur within the ANPR system.

  • No vehicle is detected (< 1% error rate)

  • No plate is detected (< 0.1% error rate)

  • Event is sent without a vehicle passing (< 0.1% error rate)

  • Wrong plate text is recognized (< 5-10% error rate, depending on the scenario)

Typical Errors

The OCR system identifies character by character. In most error cases a single character is misclassified or missed. Since license plate characters in some countries look very similar (sometimes even identical), most errors are caused by confusing characters with lookalikes.

  • B and 8 can be mixed-up

  • D and 0 can be mixed-up

  • 0 and O can be mixed-up

  • I and 1 can be mixed-up

  • 5 and S can be mixed-up

The best option to avoid these mixups is to get a clear front view of the plate. However, for systems that need to match incoming and outgoing license plates, it can make sense to match them with a fuzzy search that takes mixups and duplicated characters into account. For example, the system could still match plate texts like S123A0 and SI23AO when the second event occurs on the same or the following day.
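As a rough illustration of such a fuzzy search (not the SWARM implementation), the sketch below first collapses the lookalike characters listed above and then tolerates one remaining character difference; the character mapping and the threshold are illustrative assumptions.

```python
# Minimal sketch of a lookalike-tolerant plate match, not the SWARM implementation.
LOOKALIKES = str.maketrans({"8": "B", "0": "O", "D": "O", "1": "I", "5": "S"})

def normalize(plate: str) -> str:
    """Collapse lookalike characters so that e.g. 'S123A0' and 'SI23AO' compare equal."""
    return plate.upper().replace(" ", "").translate(LOOKALIKES)

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance to also tolerate a missed or duplicated character."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def plates_match(entry_plate: str, exit_plate: str, max_distance: int = 1) -> bool:
    return edit_distance(normalize(entry_plate), normalize(exit_plate)) <= max_distance

print(plates_match("S123A0", "SI23AO"))  # True
```

In a real integration you would additionally restrict the search window, e.g. to events from the same or the following day as described above.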

Further improvements

If all setup recommendations are implemented and the camera configuration cannot be improved any further, there are some external improvements that can be made. Best results are achieved when they are combined.

  1. Slow down vehicles in the ANPR section to have more time to detect and recognize the license plate.

  2. Reduce distance from vehicle to camera, for example by limiting the entry to a single or narrow lane, reducing variation and angle to the camera.

  3. If possible, use camera zoom to focus on the section with the best view on license plates. This can also help with a low resolution of plate crops.

  4. If the Swarm system performance is an issue (low FPS), it can help to black out unnecessary image parts containing vehicles (e.g. with a privacy zone). This focuses the system on the ANPR section only.

Set-up Barrierless Parking with ANPR

How to succeed in setting up an Entry/Exit parking system with ANPR

If you have a parking space where you want to know the utilization and the parking times of your customers, you can use the SWARM solution as follows.

What data can be generated?

For this use case, SWARM software provides all relevant data for your Entry/Exit parking space. The solution gathers the number of vehicles currently in your parking space as well as the number of vehicles entering and exiting it for customizable time frames.

Vehicles are classified into all classes the SWARM software can detect. Nevertheless, consider that the following configuration set-up is optimized to detect vehicles, not people and bicycles.

What needs to be considered for a successful analysis?

Below you find some general settings for the installation of this use case. Since the automatic number plate reading requires more detailed information, you will find additional set-up instructions on the following page:

Environment requirements

European countries

Licence plate types

Note: square two line license plates (e.g. motorbike) are not supported

Object velocity

< 40km/h with low-light conditions

Area of focus

Single lane when camera mounted on the side; Two lanes when mounted above the center of both lanes

Day/Night/Lighting

Daytime or well illuminated only (min 500 lux)

Indoor/Outdoor

Indoor & Outdoor

Expected Accuracy (Counting + License Plate)

(when all environmental, hardware and camera requirements met)

>90%. Only vehicles are considered. For parking spaces, people, bicycles and motorbikes are not part of our test scenarios as they don't occupy parking spaces.

Hardware specification

License plate recognition is not supported on the P100 SWARM Perception Box. For this use case, a P401 or P101 SWARM Perception Box, or a VPX deployment option with NVIDIA-based hardware, is needed.

Set-up Queue Length Detection

How to get insights on traffic congestions in terms of data generation, camera set up and Analytics options.

In addition to the traffic frequency at given locations, you may want to know the length of a queue when traffic is congested. In combination with the speed of the detected vehicles, you can get proper insights into the length and speed of the current queue.

What data can be generated?

For this use case, SWARM software provides the most relevant traffic insights: the counts of vehicles including the classification into the SWARM main classes. On top, you have the opportunity to add a second counting line, calibrate the distance in between and estimate the speed of the vehicles passing both lines. By combining this with different Regions of Interest (RoI) you can retrieve the needed insights into traffic congestion.

For traffic frequency, all SWARM main classes can be generated. Depending on the camera settings, we can detect present vehicles at distances of up to 70 m.

What needs to be considered for a successful analysis?

Possible cameras for this use case

Environment specification

Hardware Specifications

Camera Configuration

Configure the connection to your camera

SWARM offers Multi-Camera support, allowing you to process more than one camera per Perception Box.

Perception Box Management

To open the configuration page of a Perception Box, click on the row of the Box. There you can manage all cameras running on one device.

Although you are completely free in naming your Perception Boxes, you might want to have a logical naming scheme.

Depending on your subscription, you will have a pre-defined number of cameras you may use with your Perception Box. If you need to process more cameras, contact our Sales team.

Camera settings

Clicking on a camera will expand the corresponding settings. You can name the camera. On top, you have the option to deactivate the camera stream. If a stream is deactivated, it will not be taken into consideration by the SWARM software and the performance will not be impacted, but the configuration will be kept.

A GPS coordinate needs to be set for each camera. The GPS coordinate is mandatory and can be set by entering the coordinates or with the location picker directly on the map.

We are currently able to process camera streams over RTSP, as well as streams coming over USB. You can select the different options as Connection Type.

  • USB cameras must be available as V4L device at /dev/video0. The following specifications are supported:

    • RAW color format: UYVY, YUY2, YVYU

    • Resolution: 1080p

    • Other camera settings:

      • Shutter speed, brightness, FPS are camera/sensor dependent and have to be individually calibrated for optimal results

    • Make sure to use a USB 3.0 camera in order to benefit from the full frame rate.

The other fields for the Camera Connection can be found in the manual of the camera and/or can be configured on the camera itself.

There are some special characters that could lead to problems with the connection to the camera. If possible, avoid characters like "?", "%" and "+" in the password field.

Using Message Compression can save up to 50% of the bandwidth used for sending events to the MQTT broker. Be aware that the broker needs to be configured for compression as well.

Camera monitoring status

In the device configuration, you have seen the overall status of the cameras included in one Perception Box. On the camera level, you have the option to see the individual status to better identify the root cause of the issue (see mark 4 in the overview above).

Set-up Adaptive Traffic Control

Adaptive Traffic Control

Adaptive traffic control enables you to interface with hardware devices like traffic controllers using dry contacts. Use cases and benefits:

  • 'Smart Prio' System: Prioritise certain traffic classes and ensure fluid traffic behavior in real-time (e.g. pedestrians, bicyclists, e-scooters, heavy traffic).

  • Simplify infrastructure maintenance: Replace multiple induction loops with a single Swarm Perception Box. The installation does not require excavation work, and reduces the maintenance effort/costs.

What needs to be considered for a successful analysis?

Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Possible Camera for this use case

Configuration settings

Configuration

  • Requirements:

    • Supported event types: Region of Interest (ROI) in combination with rules

First, enable the IO device, then specify the used Quido device type and the endpoint (IP or hostname).

Rule Configuration

Define at least one ROI and create an associated rule. As long as the rule is valid, the associated Quido relay output is enabled (contact closed). One or more rules can be created for the same ROI.
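The following sketch illustrates this kind of rule evaluation with two example rules (left lane at least 3 objects, right lane at least 1 object, mirroring the Rule Engine showcase); the relay interface is a hypothetical placeholder for illustration and does not reflect the actual Quido protocol.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    roi_name: str
    min_objects: int
    relay_output: int

# Illustrative thresholds: left lane >= 3 objects, right lane >= 1 object.
RULES: List[Rule] = [
    Rule("left_lane", min_objects=3, relay_output=1),
    Rule("right_lane", min_objects=1, relay_output=2),
]

def evaluate(occupancy: Dict[str, int], set_relay: Callable[[int, bool], None]) -> None:
    """While a rule is valid, the associated relay output is enabled (contact closed)."""
    for rule in RULES:
        set_relay(rule.relay_output, occupancy.get(rule.roi_name, 0) >= rule.min_objects)

# Hypothetical relay interface for illustration only (not the Quido protocol):
evaluate({"left_lane": 2, "right_lane": 1},
         set_relay=lambda output, closed: print(f"relay {output}: {'closed' if closed else 'open'}"))
```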

Visualize data

Scenario

In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.

Environment specification

Hardware Specifications

Set-up Single Space/Multi Space Parking

Gather real time occupancy state about specific parking spaces - free or occupied

If you have a parking area where you simply want to know whether specific parking spaces are occupied or free, SWARM provides a perfect solution for doing that quite easily. See for yourself:

What data can be generated?

For this use case, SWARM software provides all relevant data for Single Space detection within your parking area: the occupancy state of each of your configured parking spaces.

The single space detection will give you information about the occupancy state of your parking space (free or occupied) as well as information about the object in your parking space, including its classification. Nevertheless, consider that the following configuration set-up is optimized to detect vehicles, not people and bicycles. In addition, the classification depends on the camera installation; with a more top-down view the classification will be less accurate.

Camera placement

Good camera placement and understanding of the following section are key for accurate detections for Single Space Parking.

  • put a car on one of the parking spaces

  • put a large vehicle (high van, small truck - the largest vehicle that you expect in your parking) on all parking spaces next to your car

  • if you still can see >70 % of the car, then this parking spot is valid.

General & easy recommendations for deciding where to place the camera:

  • Parking spots have to be fully visible (inside the field of view of the camera). We do not guarantee full accuracy for cropped single parking spaces.

  • Avoid objects (trees, poles, flags, walls, other vehicles) that occlude the parking spaces. Avoid camera positions, where cars (especially high cars like vans) occlude other cars.

  • Occlusions by other parked cars mainly happen if parking spaces are aligned along the camera's viewing direction.

Get a better overview for installations, with more details on camera distance to objects and mounting height, in the Standard examples section.

What needs to be considered for a successful analysis?

Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Configuration settings

How to place the configuration type?

In the Parking Event templates you will find the two options Single Space (RoI) and Multi Space (RoI). These event types are the ones you need to set up this use case. Use a Single Space (RoI) if you configure a parking space for a single car. If you have an area where you expect more than one car, choose the Multi Space (RoI). The difference between these two event types is the maximum capacity that you can set in the trigger settings.

Place the Region of interest (RoI) on the parking space you would like to configure. Consider that a vehicle is in the RoI if the center point of the object is in the ROI.

As the center point of the object defines whether the object is in an ROI or not, please take care to configure the ROI taking the perspective into consideration.

Visualize data

Scenario

In our Parking Scenario section, you can find more details about the possible Widgets to be created in the Parking Scenario Dashboards.

Example

You are able to visualize the data for any single or multi space parking area you have configured with the Parking RoI: the occupancy status as well as the number of vehicles in each RoI, or aggregated across one or several camera streams. You have the option to add the Current & Historic Parking Utilization or the Single Multi Space Occupancy widgets for your data in this use case.

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as csv file for further processing in Excel.

Environment requirements

Hardware Specifications

Scenario Configuration

Configure your scenario according to your covered use cases

Now, as you see your camera, you have the option to configure it. This is where the magic happens!

Event Triggers

Each event trigger will generate a unique ID in the background. In order for you to keep track of all your configured types, you are able to give it a custom name on the left side panel of the configuration screen. --> This name is then used for choosing the right data in Data Analytics.

Please find the abbreviation and explanation of each event type below.

Event Types

We provide templates for the three different areas in order to have everything set for your use case.

  • Parking events --> Templates for any use case for Parking monitoring

  • Traffic events --> Templates for use cases around Traffic Monitoring and Traffic Safety.

  • People events --> Templates for using the People Full Body or People Head model.

This helps you configure your scene more easily with the corresponding available settings. You can find the description of the available Event Triggers and the individual available Trigger Settings below.

Event Triggers Details

Counting Lines will trigger a count as soon as the center of an object crosses the line. While configuring a CL you should consider the perspective of the camera and keep in mind that the center of the object will trigger the count.

The CL also logs the direction in which the object crossed the line (IN or OUT). You may toggle IN and OUT at any time to change the direction according to your needs. On top, custom names for the IN and OUT directions can be configured. The custom direction names can then be used as segmentation in Data Analytics and are part of the event output.

Per default, a CL only counts objects once. In case each crossing should be counted, there is an option to enable events for repeated CL crossings. The only limitation is that repeated crossings are only counted if they are at least 5 seconds apart.

Available Trigger Settings: ANPR, Speed Estimation, Events for repeated CL crossing
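As an illustration of the behavior described above (not SWARM internals), the following sketch fires an event when the object's center changes sides relative to the line, derives IN/OUT from the crossing direction and, with repeated crossings enabled, ignores crossings less than 5 seconds apart.

```python
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]

def side(a: Point, b: Point, p: Point) -> float:
    """Signed area: >0 if p is left of line a->b, <0 if right, 0 if on the line."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

class CountingLine:
    def __init__(self, a: Point, b: Point, min_gap_s: float = 5.0):
        self.a, self.b, self.min_gap_s = a, b, min_gap_s
        self.last_event: Dict[int, float] = {}

    def update(self, obj_id: int, prev_center: Point, center: Point, t: float) -> Optional[str]:
        before, after = side(self.a, self.b, prev_center), side(self.a, self.b, center)
        if before < 0 <= after:
            direction = "IN"
        elif before > 0 >= after:
            direction = "OUT"
        else:
            return None  # center did not cross the line
        if t - self.last_event.get(obj_id, float("-inf")) < self.min_gap_s:
            return None  # repeated crossing within 5 s: ignored
        self.last_event[obj_id] = t
        return direction

line = CountingLine((0, 0), (10, 0))
print(line.update(1, (5, -1), (5, 1), t=0.0))  # IN
```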

You can enable the Speed Estimates feature as a specific trigger setting with a Counting Line in the left side bar. This adds one additional line, and you configure the distance between the two lines in your scenario. For best results, use a straight section without bends.

RoIs count objects in the specified region. This type also provides the Class and the Dwell Time, which tells you how long an object has been in the region.

Depending on the scenario type we can differentiate between 3 types of RoIs. For those 3 types we are offering predefined templates described below:

Zones are used for OD (Origin-Destination) analysis. Counts will be generated if an object moves through OD 1 and afterwards through OD 2. For OD, at least two zones need to be configured.

The first zone the object passes is the origin zone, and the last one it moves through is the destination zone.

A VD (Virtual Door) covers the need for 3D counting lines. The object needs either to move into the field and then vanish, or to appear within the field and move out. Objects appearing and disappearing within the field, as well as objects passing through the field, are not counted.

The Virtual Door is designed for scenes to obtain detailed entry/exits count for doors or entrances of all kinds.

Virtual Door Logic - how it works

The logic for the Virtual Door is intended to be very simple. Each head or body is continuously tracked as it moves through the camera's view. Where the track starts and ends is used to define if an entry or exit event has occurred.

  • Entry: When the track starts within the Virtual Door and ends outside the Virtual Door, an in event is triggered

  • Exit: When the track starts outside the Virtual Door and ends within the Virtual Door, an out event is triggered

  • Walk by: When the track starts outside the Virtual Door and ends outside the Virtual Door, no event is triggered

  • Stay inside: When the track starts inside the Virtual Door and ends inside the Virtual Door, no event is triggered

Note: There is no need to configure the in and out directions of the door (like (legacy) Crossing Lines) as this is automatically set.
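A minimal sketch of this start/end logic is shown below, using a standard ray-casting point-in-polygon test; the polygon and track representations are assumptions for illustration, not SWARM internals.

```python
from typing import List, Optional, Tuple

Point = Tuple[float, float]

def inside(polygon: List[Point], p: Point) -> bool:
    """Standard ray-casting point-in-polygon test."""
    result = False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > p[1]) != (y2 > p[1]) and p[0] < (x2 - x1) * (p[1] - y1) / (y2 - y1) + x1:
            result = not result
    return result

def classify_track(door: List[Point], track: List[Point]) -> Optional[str]:
    starts_in, ends_in = inside(door, track[0]), inside(door, track[-1])
    if starts_in and not ends_in:
        return "entry"  # track starts within the Virtual Door, ends outside
    if not starts_in and ends_in:
        return "exit"   # track starts outside, ends within the Virtual Door
    return None         # walk by or stay: no event

door = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(classify_track(door, [(2, 2), (6, 2)]))  # entry
```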

Global Settings

You can enable the ANPR feature with a Counting Line, which will add the license plate of vehicles as an additional parameter to the generated events. When enabling the ANPR feature, please consider your local data privacy laws and regulations, as number plates are sensitive information.

The Image Retention Time can be manually set. After this time, any number plate raw information as well as screen captures will be deleted.

In the Global Settings section, you have the option to add focus areas. A focus area will define the area of detection on the frame. So in case focus areas are defined, detections will only be taken into consideration in these corresponding areas. If a focus area is configured, the areas will be shown on the preview frame and in the table below. In the table you have the option to delete the focus area.

Attention: When a focus area is drawn, the live and track calibration will only show detections and tracks in these areas. So before drawing focus areas, check the track calibration to see where the tracks are on the frame, in order not to miss essential detections when defining the focus areas.

In the configuration, there are two trigger actions to choose from. Either a time or an occupancy change, depending on the use case.

In the Global Trigger settings you can adjust the RoI time interval.

The RoI time interval is used accordingly depending on the chosen trigger action:

  • Time --> The status of the region will be sent at the fixed configured time interval.

  • Occupancy --> You will receive an event if the occupancy state (vacant/occupied) changes. The RoI time interval is a pause time after an event was sent. This means that the occupancy change will not be checked for the configured time interval and you will receive max. one event per time frame. The state is always compared with the state sent in the last event.
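The following minimal sketch illustrates the occupancy trigger action described above (event on state change, then a pause for the configured RoI time interval); the event fields are illustrative and not the actual event schema.

```python
from typing import Callable, Optional

class OccupancyTrigger:
    """Send an event when the vacant/occupied state changes, then pause for the
    configured RoI time interval before checking for the next change."""

    def __init__(self, interval_s: float, send_event: Callable[[dict], None]):
        self.interval_s = interval_s
        self.send_event = send_event
        self.last_state: Optional[str] = None  # state sent with the last event
        self.last_sent_at = float("-inf")

    def update(self, object_count: int, t: float) -> None:
        state = "occupied" if object_count > 0 else "vacant"
        if t - self.last_sent_at < self.interval_s:
            return  # pause time after the last event: change is not checked
        if state != self.last_state:
            self.send_event({"state": state, "objects": object_count})
            self.last_state = state
            self.last_sent_at = t

trigger = OccupancyTrigger(interval_s=60, send_event=print)
trigger.update(object_count=1, t=0)    # {'state': 'occupied', 'objects': 1}
trigger.update(object_count=0, t=30)   # within pause interval: no event
trigger.update(object_count=0, t=90)   # {'state': 'vacant', 'objects': 0}
```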

In raw track mode, an event will be generated as soon as the object leaves the camera frame. With this event, the exact track of the object is retrieved. The track is given in X/Y coordinates of the camera frame.

Configuration of the Event Triggers

To create your own solution, select a model for your solution and then place your type (or select raw tracks mode).

When a type is active, left-click and hold the white circles to move the single corner points. You can create any tetragon (four-sided polygon). To move the entire type, left-click and hold anywhere on the type.

Models

The essence of our computer vision engine's ability to detect and classify lies in its models.

Which model should I use?

Traffic & Parking (Standard and Accuracy+)

Class Definition

Event will contain class and subclass according to this definition.

The following images are intended as examples only and are not supposed to provide information about exact camera distances or perspectives.

Standard examples

Here you can find some examples showing what can be detected in different theoretical installation cases.

The following definitions will be used:

  • Camera-angle: measured from a horizontal line.

  • Camera-height: mounting height of the camera

  • Distance from the camera to the point in the image center. This distance is already determined by the camera angle and the camera mounting height, but for a better estimation of your camera setup we add it as an additional parameter.

In the following, we give standard examples, with the following assumptions:

  • Car-width = 2 m

  • Parking length/car-length = 5 m

  • Height of the occluding vehicle = 2.5 m

Color-code:

  • green: good accuracy

  • yellow: will work in most cases, but parking spots might be occluded if the neighboring spaces are occupied with a large vehicle

  • orange-red: not recommended: might work in some cases, but in general this spot has high potential to be occluded by a vehicle parking on the spot next to it.

  • black: not recommended at all!

Single parallel parking line

Increasing the distance of camera to vehicle helps to monitor more vehicles.

Higher mounting will help in case there is traffic or other potential occlusion in front of the parallel parking line.

Two parallel parking lines

For two or more parallel parking lines, mounting height is crucial in order to reduce occlusions - The higher, the better.

A larger distance from the camera to the vehicle increases occlusions, so the right combination of distance and height needs to be found.

Version 2024.2

Release Date: 02.04.2024

Device Schedule (Running/Standby)

The device schedule allows you to define time slots when the device should be in operational mode (device state: running) or in power saving mode (device state: standby).

The power saving mode is mostly relevant for the battery powered system BMA Mobil since it extends the usage time for a single charge.

In our measurements, the BMA Mobil reduces the power consumption during standby by about 55%.

How to enable the schedule for a device?

  1. Open a device in the control center and select the tab schedule

  2. Switch the toggle to Enabled and select desired time slots when the device should be operational

  3. Save Schedule

What will happen during standby?

The device reduces the power consumption by pausing the video signal processing.

Functions that are not available during standby:

  • No events will be generated

  • No camera image is visible

  • No event triggers can be configured

Functions that are available during standby:

  • The device will be online in the Control Center, including available device health, logs, and reboot functionality.

  • Wake-up the device from standby (→Set the toggle to Disabled)

What is the expected battery lifetime?

In our measurements the BMA Mobil consumes 1,07A when active and 0,48A when in standby.

Examples

With a 100Ah battery degraded to 90Ah we get the following battery lifetimes:

  • Always active: 90Ah / 1,07A ~3,5 days

  • Always standby: 90Ah / 0,48A ~7,8 days

  • 13h active, 59h standby (Example from screenshot, total 3days): 13h * 1,07A + 59h *0,48A = 42Ah out of 90Ah (~50%)
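The battery arithmetic from the examples above can be reproduced with a few lines; the current values are the measurements stated above.

```python
# Battery lifetime arithmetic from the examples above.
ACTIVE_CURRENT_A = 1.07
STANDBY_CURRENT_A = 0.48
battery_ah = 90  # 100 Ah battery degraded to 90 Ah

def consumed_ah(active_hours: float, standby_hours: float) -> float:
    return active_hours * ACTIVE_CURRENT_A + standby_hours * STANDBY_CURRENT_A

print(battery_ah / ACTIVE_CURRENT_A / 24)   # always active: ~3.5 days
print(battery_ah / STANDBY_CURRENT_A / 24)  # always standby: ~7.8 days
print(consumed_ah(13, 59))                  # schedule example: ~42 Ah out of 90 Ah (~50%)
```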

Improved latency for BMA/BMC

We have been able to improve the latency for the BMA working together with the BMC to < 1s, measured from the moment a vehicle is physically present in a zone to the point when the BMC switches a dry contact.

Other Improvements

  • Fixed: On Curiosity startup the first Region of Interest event was always vacant (even if there were objects).

  • Improved: Curiosity performance improvements by avoiding re-classification of already classified objects

  • Improved: The Number Plate column for Data Analytics Parking widgets is optional


Device Update Size

2023.3 U1 → 2024.2: 155MB (BMA / P101 / Jetson Nano)

API

  • There are no API breaking changes

Set-up Journey Time & Traffic Flow

Detailed information on the solution for Journey time and area-wide traffic flow in terms of data generation, camera set up and Analytics options.

In addition to the traffic frequency at given locations, you may want statistics on how long vehicles take from one location to another and how the traffic flows across your city and municipality. With this solution, you can generate that data with a single-sensor solution from SWARM.

What data can be generated?

For this use case, SWARM software provides the most relevant traffic insights: the counts of vehicles including the classification into the SWARM main classes. On top, you have the opportunity to add a second counting line, calibrate the distance in between and estimate the speed of the vehicles passing both lines. By combining more sensors in different locations, the journey time as well as the statistical traffic flow distribution will be generated.

The journey time and traffic flow distribution can be generated for vehicles only (car, bus and truck).

What needs to be considered for a successful analysis?

Environment specification

Accuracy

In this technical documentation, accuracy refers to the penetration rate of a single sensor, which is the percentage of correctly identified license plates divided by the total number of vehicles counted during a ground truth count.

The current penetration rate for this use case is 60%, taking into account different day/nighttimes, weather conditions, and traffic situations. When calculating journey time between two sensors, approximately 36% of journeys are used as the baseline, which is calculated by multiplying the penetration rate of both sensors.

The accuracy is sufficient to generate data that can be used to make valid conclusions about vehicle traffic patterns and journey times.

Hardware Specifications


Recommended

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

> 60 PPM

Using the camera parameters defined below ensures that the minimum required PPM value is achieved.

Camera video resolution

1280×720 pixel

Camera video protocol/codec

RTSP/H264

USB 3.0/UYVY, YUY2, YVYU

Camera Focal Length

2.8mm

Camera mounting - distance to object center

5-20 meters

Camera mounting height

3-6 meters

Camera mounting - vertical angle to vehicle

<50°

Note: setting correct distance to vehicle and camera mounting height should result in the correct vertical angle to vehicle

Camera mounting - horizontal angle to vehicle

0° - 90°

Wide Dynamic Range

Must be enabled

Camera: HikVision DS-2CD2046G2-IU

Link: https://www.hikvision.com/en/products/IP-Products/Network-Cameras/Pro-Series-EasyIP-/ds-2cd2046g2-i-u-/

Comment: 2,8 mm Focal Length

Configuration

Model

Configuration option

CL (Counting Line)

Events for repeated CL crossings

Enabled

ANPR

Disabled

Raw tracks

Disabled


Recommended

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

> 30 PPM for object classes car, truck

> 60 PPM for object classes person, bicycle, motorbike

Using the camera parameters defined below ensures that the minimum required PPM value is achieved.

Camera video resolution

1280×720 pixel

Camera video protocol/codec

RTSP/H264

Camera Focal Length

2,8mm-12mm

Camera mounting - distance to object center

Object classes car, truck

5-30 meters (2,8mm focal length)

35-100 meters (12mm focal length)

Object classes person, bicycle, scooter

3-12 meters (2,8mm focal length)

25-50 meters (12mm focal length)

Camera mounting height

Up to 10 meters

Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks

Camera mounting - vertical angle to the object

<50°

Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle

Camera mounting - horizontal angle to the object

0° - 90°

Note: An angle of about 15° provides better classification results due to more visible object details (e.g. wheels/axles)

Wide Dynamic Range

Can be enabled

Camera: HikVision Bullet Camera DS-2CD2046G2-I

Comment: 2,8 mm fixed focal length

Camera: HikVision Bullet Camera DS-2CD2645FWD-IZS

Comment: 2,8mm - 12mm motorised focal length

Model

Configuration option

Counting Line (optional speed)

ANPR

Disabled

Raw Tracks

Disabled

Speed Estimation

Enabled


Thanks to the license plate recognition, the parking duration of your customers will be analyzed. On top of the license plate information, the license plate origin country as well as the license plate area code are available as meta information. The country codes are according to the ISO 3166 Alpha 2 standard. The country classification works with an excellent accuracy of 99%.

Especially for Automatic Number Plate Recognition (ANPR) the camera choice and positioning are essential.

The requirements for accurate number plate recognition can be aligned with respective norms for the accurate operation of (human-based) surveillance systems.

The standards give a recommended pixel-per-meter measure (“pixels on target”), to reliably perform that task (by a human). The relevant category for clear reading of license plates/identification of a person “without a reasonable doubt” is “identify”. A Bullet camera is recommended.

Recommended

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

> 250 PPM

To clearly read a license plate, at least 250 PPM are required. Using the camera parameters defined below ensures that the minimum required PPM value is achieved.

Resolution

min. 1920×1080 (H264)

Focal Length

min 3.6-8 mm motorized adjustable focal length recommended

Mounting

Distance and height of installation

Note: setting the correct distance to the license plate and the camera mounting height should result in the correct vertical angle to the license plate.

Horizontal angle to license plate

Exposure / Shutter speed

max. 1/250 for objects not moving faster than 40 km/h

Possible Camera for this use case

Manufacturer: Hikvision

Model: Bullet Camera DS-2CD2645FWD-IZS

Link: https://www.hikvision.com/en/products/IP-Products/Network-Cameras/Pro-Series-EasyIP-/DS-2CD2645FWD-IZS/

Note: Motorized varifocal lens

The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the Entry/Exit parking with license plate detection needs to be configured for optimal results.

In order to start your configuration, take care that you have configured your camera and data configuration.

Configuration settings

Configuration

Model

Configuration option

CL (Counting Line)

ANPR

Enabled

Raw tracks

Disabled

How to place the configuration type?

For receiving the utilization of your parking space including the park durations of your customers, a CL needs to be configured for each Entry/Exit. The CL should be placed approximately at the beginning of the last third of the frame, so that the object is visible over several frames and the license plate detection and classification are most accurate.

Consider that the IN/OUT direction of the crossing line is important, as it is relevant for the calculation of the park duration (IN = entry to parking, OUT = exit of parking).
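As a simple illustration of how entry and exit events can be paired to derive park durations, consider the sketch below; the event field names are assumptions for illustration and not the actual SWARM event schema.

```python
from datetime import datetime, timedelta
from typing import Dict, List

def parking_durations(events: List[dict]) -> Dict[str, timedelta]:
    """Pair IN (entry) and OUT (exit) events per plate and compute the park duration."""
    entries: Dict[str, datetime] = {}
    durations: Dict[str, timedelta] = {}
    for event in sorted(events, key=lambda e: e["timestamp"]):
        plate = event["plate"]
        if event["direction"] == "IN":
            entries[plate] = event["timestamp"]
        elif event["direction"] == "OUT" and plate in entries:
            durations[plate] = event["timestamp"] - entries.pop(plate)
    return durations

events = [
    {"plate": "W12345X", "direction": "IN", "timestamp": datetime(2024, 4, 2, 8, 0)},
    {"plate": "W12345X", "direction": "OUT", "timestamp": datetime(2024, 4, 2, 9, 30)},
]
print(parking_durations(events))  # {'W12345X': datetime.timedelta(seconds=5400)}
```

In practice, combining this with the lookalike-tolerant plate matching shown in the ANPR section makes the pairing more robust against single-character OCR errors.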

Visualize data

Scenario

In our Parking Scenario section, you can find more details about the possible Widgets to be created in the Parking Scenario Dashboards.

Example

You are able to visualize the data for any Entry/Exit you have configured with the Counting lines. So you are able to see the number of vehicles with their classes/subclasses and license plates of any Entry or Exit.

Furthermore, you will be able to gather a list of customers with the corresponding license plates that have parked longer than your preconfigured parking duration. For the purpose of provability, you can also see a picture of the incoming and outgoing vehicle. Please mind that you need to configure the parking time according to your data privacy restrictions.

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as csv file for further processing.

Examples:

‘standard’ car license plates (520 by 120/110 mm)

Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Set up parameters
Recommended

*The higher the distance of the objects to the camera, the higher the required focal length and the larger the dead zone. In order to achieve the needed PPM for the detection of objects (30 PPM), please consider the following illustration and table:

Possible cameras for this use case

Configuration settings

What should be configured?

In order to receive the counting data including speed as well as the RoI occupancy, a Counting Line and several RoIs need to be configured as event triggers. Depending on the specific use case and object distance, several triggers might need to be combined.

How to place the event triggers?

In order to receive information on how fast vehicles are driving and how many objects are currently present in a specific region, you need to configure counting lines with speed estimation and generic RoIs.

You can choose the direction IN/OUT as you want in order to retrieve the data as needed and give a custom name to that direction.

Visualize data

Scenario

In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards for Speed Events and combined trigger.

If you need your data for further local analysis, you have the option to export the data of any created widget as .csv file for further processing in Excel.

Mark
Description

The maximum number of cameras you can use depends on your hardware. While our SWARM Perception Box does have a fixed number of cameras, we did tests and benchmarks for recommended hardware.

RTSP cameras must be configured with H264 or H264+ video codec. For more details, head over to

You can use VLC Media Player to test the video stream of the camera beforehand. If you are unsure which parts of the streaming URL you should use, select Custom Connection String and copy and paste the working string from VLC Media Player.

As soon as you have the Camera Connection configured, you will see one frame of the camera as a preview. You can now start with the Scenario Configuration from here.

The Swarm Perception Box sends the results from the real-time analysis to an MQTT broker. The default configuration will send data to Azure Cloud and to Data Analytics for retrieving the data. If you want to configure a custom MQTT broker, see more info in the Advanced set-up section of the documentation.

As soon as you see a frame of your camera, you have the option to configure your Scenarios. This is where the magic happens! --> See next page!

The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the standard traffic counting needs to be configured for optimal results.

In order to start your configuration, take care that you have configured your camera and data configuration.

Configuration
Settings

Supported hardware: Quido 4/4, Quido 8/8, Quido 2/16

All you need to get started: Define a Region of Interest (ROI), define a rule and select which relay output to trigger.

You can visualize data via Data Analytics in different widgets.

The main challenge in planning a camera installation is to avoid potential occlusions by other cars. We advise using the Axis lens calculator or a generic lens calculator and testing your parking setup for the following conditions:


The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how to configure a Single Space Parking use case to get the best results.

In order to start your configuration, take care that you have configured your camera and data configuration.

Configuration
Settings

If the distance from the camera to the object (parking space) is larger, the perspective has a higher impact and you need to adapt the ROI according to the perspective. To support the calibration in the best way, you can use the calibration mode, which can be activated at the top right of the configuration frame. There you will see the detection boxes and center points of the vehicles currently visible to the camera. Take care to configure the RoI so that the center point will be in the RoI.

You can visualize data via Data Analytics in different widgets.

If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.

As SWARM software is mostly used in dedicated use cases, you can find all information for a perfect set-up in our Use Cases for Traffic Insights, Parking Insights and Advanced Traffic Insights.

In the configuration, you can select the best model for your use case as well as configure any combination of different event triggers and additional features to mirror your own use case.

Single Space Parking RoI
Multi Space Parking RoI
Generic RoI

Learn more about the Virtual Door logic.

Please consider our Use Case specification to properly use this feature. The feature is especially available for the Barrierless Parking Use Case.

You can enable the Journey time feature in the Global Settings on the left side bar. This feature generates journey time and traffic flow data. This setting is needed for Advanced Traffic Insights. Find more technical details on the data that will be generated in the following section: Technical concept.

Raw Tracks should only be used in case you decide for the advanced set-up with a custom MQTT connection.

Have a look at our documentation for the use cases (Traffic Insights, Parking Insights, Advanced Traffic Insights); we recommend a model for each use case. If unsure, use the model Traffic & Parking (Standard).

Class
Subclass
Definition

Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Set up parameters
Recommended

Possible cameras for this use case

Camera
Link

The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the standard set-up for this use case is configured for optimal results.

In order to start your configuration, take care that you have configured your camera and data configuration.

Configuration settings

Configuration
Settings

What should be configured?

In order to retrieve the best accuracy, we strongly recommend configuring a focus area on the (maximum two) lanes which should be covered for the use case.

Think of focus areas as inverted privacy zones - the model only "sees" objects inside an area, the rest of the image is black.

In order to receive the counting data as well as the journey time data, a Counting Line needs to be configured as an event trigger.

How to place the event triggers?

For the best counting accuracy including the journey time information, the Counting Line should be placed at a point where the vehicle and the plate are visible for approx. 10 m of travel distance. In addition, take care to configure the Counting Line at a place where the track calibration still shows stable tracks.

You can choose the direction IN/OUT as you want in order to retrieve the data as needed. On top, you have the option to give a custom direction name for the IN and OUT direction.

Visualize data

Scenario

In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.

Examples

Here is an example for a Journey time widget. Journey time can be shown as average, median or up to two different percentiles.

Another example below visualizes the journey distribution. There is a slider to go through the different time periods of the chosen aggregation level. On top, the figures can easily be switched between absolute and relative values.

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as .csv file for further processing in Excel.

Tip: Use the Axis lens calculator or a generic lens calculator.

You can visualize data via Data Analytics in different widgets.

If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.

In case you are using your custom MQTT broker, you can also retrieve the raw data there. We provide a special option to have the license plate capture added to the event schema. This enables you to retrieve the capture within the pushed MQTT message. The picture is encoded in BASE64. In order to enable this option, please contact our support.
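As an illustration, the following sketch decodes such a BASE64-encoded capture from an event payload received via your custom MQTT broker; the JSON field names used here are assumptions and not the actual event schema.

```python
import base64
import json

def save_plate_capture(mqtt_payload: bytes) -> None:
    """Decode the BASE64-encoded plate capture from an event payload and store it as JPEG.
    Field names "capture" and "plateText" are illustrative assumptions."""
    event = json.loads(mqtt_payload)
    if "capture" in event:
        filename = f"{event.get('plateText', 'unknown')}.jpg"
        with open(filename, "wb") as f:
            f.write(base64.b64decode(event["capture"]))

# Example payload as it might arrive in the pushed MQTT message
payload = json.dumps({"plateText": "W12345X", "capture": base64.b64encode(b"...").decode()}).encode()
save_plate_capture(payload)
```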


The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the standard set-up for this use case is configured for optimal results.

In order to start your configuration, take care that you have configured your camera and data configuration accordingly.

Configuration
Settings

You can visualize data via Data Analytics in different widgets.

If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.

Make sure to place focus areas in a way that they cover enough space before an event trigger, so that the model is able to "see" the objects for a similar amount of time as if the focus area wasn't there. The model ignores all objects outside a focus area, so there is no detection, no classification, no tracking and no ANPR reading conducted.

You can visualize data via Data Analytics in different widgets.

If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.


Object Distance: 30 m, Focal Length: 2,8 mm, Dead Zone: 2,8 m

Object Distance: 50 m, Focal Length: 5 mm, Dead Zone: 5 m

Object Distance: 70 m, Focal Length: 7 mm, Dead Zone: >8 m

Object velocity

< 130 km/h

Day/Night/Lighting

Daytime/Well illuminated/Night vision

Indoor/Outdoor

Outdoor

Expected Accuracy (Counting+Classification)

(when all environmental, hardware, and camera requirements are met)

Counting >95% (vehicles, bicycles) Classification of main classes: >95% Classification of subclasses: >85%

Supported Products

VPX, P401, P101/OP101, P100/OP100

Frames Per Second

25

Object velocity

< 40 km/h

Day/Night/Lighting

Daytime/Well illuminated/Night vision

Indoor/Outdoor

Outdoor

Expected Accuracy

(when all environmental, hardware, and camera requirements are met)

Presence Detection >95% Classification of main classes: >95% Classification of subclasses: >85%

Supported Products

VPX, P401, P101

Frames Per Second

12

Object velocity

0 km/h

Day/Night/Lighting

Daytime

Nighttime (Only well illuminated or night vision mode)

Indoor/Outdoor

Indoor or Outdoor

Expected Accuracy

(when all environmental, hardware and camera requirements met)

>95% Classification is not considered

Supported Products

VPX, P401, P101/OP101, P100/OP100

Frames Per Second (FPS)

5

Object velocity

< 80 km/h

Day/Night/Lighting

Daytime/Well illuminated/Night vision

Indoor/Outdoor

Outdoor

Supported Products

VPX, P401, P101/OP101

Frames Per Second

25


HikVision

DS-2CD2646G2-IZS

Model

Configuration option

Counting Line & RoIs

ANPR

Disabled

Raw Tracks

Disabled

Mark 1 - Device information: By clicking on the pen, you may change the name of the Perception Box. There are nearly no limitations to doing so; you may use any special characters and as many characters as you want. On top, you find the Device ID and the serial number of the device with a copy option. The Device ID is necessary for any support case you are opening. The serial number is the one of the Perception Box, which you can find on the label of the box.

Mark 2: Here you can see the individual naming of each camera on one device, which can be changed in the next steps where you are configuring the camera settings. By clicking on the row of the camera, the camera settings will open.

Mark 3 - Add Camera: In the configuration step of your Perception Box you might need to add new cameras, which can be achieved by clicking on this button.

Mark 4 - Retrieve Logs & Reboot Device

Mark 5 - Camera Status: The Camera Status represents basic monitoring of the SWARM software and gives an indication whether the software, the camera input and the MQTT connection are up and running on camera level.

Recommended

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

> 30 PPM for object classes cars, trucks

> 60 PPM for object classes person, bicycle, motorbike

Using the camera parameters defined below ensures that the minimum required PPM value is achieved.

Camera video resolution

1280×720 pixel

Camera video protocol/codec

RTSP/H264

Camera Focal Length

2,8mm-12mm

Camera mounting - distance to object

Object classes cars, trucks

5-30 meters (2,8mm focal length)

35-100 meters (12mm focal length)

Object classes person, bicycle, scooter

3-12 meters (2,8mm focal length)

25-50 meters (12mm focal length)

Camera mounting height

Up to 10 meters

Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks

Camera mounting - vertical angle to the object

<50°

Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle

Camera mounting - horizontal angle to the object

0°-45°

Note: An angle of about 15° provides better classification results due to more visible object details (e.g. wheels/axles)

Wide Dynamic Range

Can be enabled

Camera

Note

HikVision

Bullet Camera

2,8mm - 12mm motorised focal length

Model

Configuration option

Region of Interest + Rule

ANPR

Disabled

Raw Tracks

Disabled

Recommended

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

> 60 PPM

Using the camera parameters defined below ensures that the minimum required PPM value is achieved.

Camera video resolution

1280×720 pixel

Camera video protocol/codec

RTSP/H264

Camera Focal Length

2.8mm - 4mm

Camera mounting - distance to object center

5-30 m (cars in the center of the image)

For 5 meters distance we guarantee a high accuracy for 3 parking spaces, aligned orthogonal to the camera.

The higher the distance to the camera, the more parking spaces can be monitored.

Camera mounting height

Indoor: 2,5 - 5m Outdoor: 2,5 - 10m Higher is better. Vehicles can potentially occlude the parked cars, hence we recommend higher mounting points.

Wide Dynamic Range

Must be enabled

Night-mode

ENABLED

Model

Configuration option

Single Space (Roi) or Multi Space (Roi)

Raw tracks

Disabled

Single Space (RoI): Event Trigger = Time, Type = Parking, Default number of objects = 1

Multi Space (RoI): Event Trigger = Time, Type = Parking, Default number of objects = 5

Generic RoI: Event Trigger = Time or Occupancy, Type = People & Traffic Events, Default number of objects = 1


car

Cars include small to medium sized cars up to SUVs, Pickups and Minivans (for example VW Caddy).

The class does not include cars pulling a trailer.

car

van

Vans are vehicles for transporting a larger number of people (between 6 and 9) or used for delivery.

car

car with trailer

Cars and vans that are pulling a trailer of any kind are defined as car with trailer.

For a correct classification, the full car and at least one of the trailer axles have to be visible.

truck

single unit truck

Single unit trucks are defined as large vehicles with two or more axles where the towing vehicle cannot be separated from the semi-trailer and is designed as a single unit.

truck

articulated truck

Articulated trucks are large vehicles with more than two axles where the towing vehicle can be separated from the semi-trailer. A towing vehicle without a semi-trailer is not included and is classified as a single unit truck.

truck

truck with trailer

Single unit trucks or articulated trucks pulling an additional trailer are defined as truck with trailer.

bus

-

A bus is defined as a vehicle transporting a large number of people.

motorbike

-

The class motorbike is defined as a person riding a motorized single-lane vehicle. Motorbikes with a sidecar are included, whereas e-bikes are not part of this class.

Motorbikes without a rider are not considered.

bicycle

-

The class bicycle is defined as a person actively riding a bicycle. People walking and pushing a bicycle are not included in this class and are considered as person.

Bicycles without a rider are not considered.

person

-

The class person includes pedestrians that are walking; people riding Segways, skateboards, etc. are also defined as pedestrians.

People pushing bicycles or strollers are included in this class.

scooter

The class scooter includes a person riding on a so-called kick scooter, which can either be motorized or human-powered. A scooter usually consists of two wheels and a handlebar.

tram

The class tram is a public transportation vehicle operating on tracks along streets or dedicated tramways. Trams are typically electrically powered, drawing electricity from overhead wires.

other

-

Vehicles not matching the classes above are considered in the class other.

Parking Insights

Use Cases for Parking Scenarios

With the SWARM Perception Platform, you are able to find a solution for each parking environment thanks to the following use cases.

Set-up guide - Installation

Journey time set up

This guide focuses on specific details to be considered for journey times and area-wide traffic flow on public roads, focusing on camera placement, camera settings, and event trigger configuration.

Please be aware that the camera settings need to be adjusted according to the installation location, as lighting conditions might differ.

Camera placement

Perfect camera placement is critical in order to get a clear image and readable number plates. While some parameters such as distance from the camera to the number plate can be fine-tuned by zooming after installation, mounting height and angle between the camera and travel direction of vehicles can only be adjusted by physical and cost-intensive re-arrangement. The camera position has to be chosen in a way that passing vehicles are fully visible and can be captured throughout several frames of the video stream while making sure the number plates are large enough for the ANPR system to identify every single character.

We recommend mounting heights between 3 and 8 meters; the suitable minimum capture distance therefore ranges from 5 to 14 meters. Besides the vertical angle constraint, number plates should be visible with at least 250 pixels-per-meter (PPM); this constraint determines the minimum focal length (zoom) the camera has to be set to.

Mounting height [m] | Minimum capture distance [m] | Maximum capture distance [m] | Range of focal length [mm]
3 | 5 | 19 | 4-12
4 | 7 | 18 | 5.4-12
5 | 9 | 18 | 6.6-12
6 | 10 | 18 | 10-12
7 | 12 | 18 | 11-12
8 | 14 | 17 | 12

Why between 3 and 8 meters of camera mounting height?

The lower bound of 3 meters is determined by rather practical reasons and not technical limitations. Cameras mounted lower than 3 meters are often prone to vandalism. Also, headlights from passing vehicles can lead to reflections on the camera. The upper bound of 8 meters is determined by the resulting minimum capture distance of at least 14 meters for the needed camera resolution of 1920x1080p. License plates need to be visible with 250 pixel-per-meter (PPM).
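The geometry behind these values can be sketched as follows, assuming the ~30° vertical-angle limit from the viewing-angle recommendations; the minimum capture distance then follows directly from the mounting height.

```python
import math

# With a maximum vertical angle of ~30 degrees (see viewing-angle section),
# the minimum capture distance is the ground distance: height / tan(max angle).
MAX_VERTICAL_ANGLE_DEG = 30.0

def min_capture_distance_m(mounting_height_m: float) -> float:
    return mounting_height_m / math.tan(math.radians(MAX_VERTICAL_ANGLE_DEG))

for height in (3, 4, 5, 6, 7, 8):
    print(height, round(min_capture_distance_m(height)))
# 3 -> 5, 4 -> 7, 5 -> 9, 6 -> 10, 7 -> 12, 8 -> 14 (matches the table above)
```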

As the Swarm Perception Box and cameras are mainly mounted on existing infrastructure such as traffic light poles, there are two general options to mount the cameras: side mounting or overhead mounting.

Overhead mounting

When positioning the camera above the vehicles, two lanes can be covered with one sensor.

Consider mounting height (1) and capture distance (2) which determine the vertical angle (3) between the camera and the travel direction of the vehicle. The distance between the center of the lane (4) and the camera determines the horizontal angle (5) between the camera and the travel direction of the vehicle.

Side mounting

When mounting the camera to the side of the road, two lanes can be covered, assuming the horizontal angle between the camera and the travel direction of the vehicles is not exceeding 20°.

Position the camera as close as possible to the side of the road to avoid a horizontal angle larger than 20°. Larger angles can lead to lower accuracy because parts of the number plate can become unreadable. While traveling directions (1) and (2) are the same for both vehicles, horizontal angle (3) is much larger than (4).

Night mode settings

While capturing sharp images during the day with good lighting conditions is relatively easy, low-light and dark conditions make it a lot more difficult for cameras to deliver readable number plates from moving vehicles. The following section of this guide therefore provides an overview of how to fine-tune your camera to deliver readable number plates in such conditions.

However, the setting of the following parameters strongly depends on the specific camera mounting position and its environment. A light source such as a streetlamp or a vehicle passing on a different lane can send light to the camera sensors and influence the resulting image to a great extent. For this reason, this guide can only provide a general overview of relevant settings and their effect on image quality.

Day/Night Switch

We recommend using the cameras' automatic day/night switch mode. As you can see in the examples below, it is crucial that the camera changes to night mode reliably.

Camera & Device Monitoring

This page describes the different statuses devices and camera streams can have and what to expect.

Basic monitoring

In the SWARM Control Center, you will find a basic monitoring status on the camera as well as the device level. This status will show if your cameras are up and running or if there is any need for action to get them up and running again.

In the camera overview of your devices and dashboards, you will find the camera monitoring, which tells you if your camera is working as expected. In the device configuration, you will find the device monitoring, which shows the worst state of all cameras running on the device.

Device Connection Status

| Connection Status | Description |
| --- | --- |
| Online | The device is up and running (powered, connected to the internet). |
| Offline | The device is offline (no power, no internet, etc.). There are several easy steps to check before you contact our support team. |

Device Status

The device monitoring depends on the worst status of the stream monitoring, giving you an overview in your device list of devices where a camera is not working as expected.

| Device Status | Description |
| --- | --- |
| Operational | Everything is fine. All the cameras configured on your device are running as expected. |
| Not configured | At least one of the cameras on the device is not configured. Check the camera monitoring status for more details. |
| Not operational | At least one of the cameras on the device has an issue and is not sending data as expected. |
| Warning | At least one of the cameras on the device has a Warning status. |
| Offline | The device is offline. Check if the hardware is connected to the power supply and has a running network connection. |
| Pending | When you have just changed the configuration of one of the cameras on the device, the status stays on Pending for max. 5 minutes until the correct status is determined. |
| Disabled | One or more camera streams are disabled. |

Stream Monitoring Status

The monitoring takes into consideration the system, the camera input, and the MQTT connection.

| Stream Status | Description |
| --- | --- |
| Operational | Everything is fine. Your camera is running as expected: the software is running smoothly, the camera connection is available and the MQTT broker is connected. |
| Not configured | The camera is not configured. |
| Warning | Data is still generated and delivered, but there are issues that could impact data accuracy. Issue types: video frames cannot be retrieved correctly (at least 10% of the frames the camera delivers are broken); performance issues (the processing speed in frames per second drops below the limit of the configured event types). |
| Not operational | Something unexpected happened and the software is not running, so no data is generated. Issue types: the Docker container is not running correctly (software is not running, please contact support); data cannot be sent to the MQTT endpoint (more than 10 MQTT events have not been delivered to the MQTT broker for at least 10 seconds, please check if your MQTT broker is up and running); camera not connected (the camera connection can't be established, please check if the camera is up and running and if the camera details, for example user & password, are configured correctly). |
| Offline | The Perception Box or your hardware is offline. Check if the hardware is connected to the power supply and has a running network connection. |
| Pending | When you have just changed the configuration, the status stays on Pending for approx. 5 minutes until the correct status is determined. |
| Disabled | The respective stream is disabled and can only be enabled again if there are enough licenses available. This state can also be used to save the current configuration while you don't need the stream to run. |

Calibration support

In order to configure the stream properly for the best data accuracy, there are two options which will support you in the configuration process.

Live Calibration

For easy calibration, you can use our Live calibration in the top right corner drop-down of the preview frame. As you can see in the screenshot below, this mode shows which objects the software is able to detect in the currently previewed frame.

The detected objects are surrounded by a so-called bounding box. Every bounding box also displays the center of the object. In order to distinguish the objects, the calibration mode uses differentiated colors for the main classes. Any event that gets delivered via MQTT is triggered by the center of the object (the dot in the center of the bounding box).

Track calibration

The track calibration feature overlays a relevant number of object tracks on the screen. With the overlay of the tracks, it is clearly visible where in the frame the objects are detected best. Based on this input, it is much easier to configure your use cases properly and achieve good results on the first configuration attempt.

With track calibration history enabled you will be able to access the track calibration for every hour of the past 24 hours.

The track calibration images will be stored on the edge device and are only accessible through the Control Center. Make sure that viewing, storing, and processing of these images for up to 24 hours is compliant with your applicable data privacy regulations.

The colors of the tracks are split by object class so that cars, trucks, buses, people and bicycles can be distinguished.

The colors of the tracks and bounding boxes are differentiated per main class. Find the legend for the colors on the question mark in the preview frame as shown in the Screenshot below.

Traffic Scenario

Analyze your traffic at your counting areas or intersections across cities or urban areas

Widget type options

In the Traffic Scenario, no widgets are created automatically after the scenario is created, but you can create the widgets as you need them.

For traffic, the widget type options Traffic Counting and Origin-Destination Analysis are available.

Device Health

The health of your device at a glance

Basic Metrics

  • Device Uptime

    • See how long this device has been up and running.

  • Device Status and Device Restarts

  • Device Free Disk Space

    • If the disk space of your device is running full, you can see an early indication here.

Advanced Metrics

  • Device Temperature

    • Supported for: P101/OP101/Jetson Nano

    • If the device is running at a high temperature (depending on the specifications defined by the manufacturer), we will state a warning here. High temperature could impact performance (throttled processing).

  • Modem Traffic, Signal Strength and Reconnects

    • Supported for: OP100/OP101

Camera Metrics

  • Camera status

  • Camera processing speed

    • If the FPS are dropping, there might be a problem with the camera, or the device might be getting too hot.

  • Generated and Pending Events

Data Integration

How to Integrate your generated data in external applications

If you want to integrate the data into any other platform or billboard, we offer two options for retrieving it:

  • Get only the data you need according to queries you create in our Data Analytics, then use our API.

  • Get any generated event directly from the box, without processing it through the cloud, by using the MQTT option. Be aware that with this option you receive the data of each event and cannot use Data Analytics in our SWARM Perception Platform.

Generic Scenario

Analyze any scenario that can be configured with our available event triggers

In case you need a dashboard for another use case that is not covered with Parking or Traffic Scenario, the Generic Scenario will give you this option.

The measures extract certain key metrics from the SWARM generated raw data. In general, there are metrics around the following areas of use:

  • Counts: It's always a sort of counting for either Counting Line (CL), Virtual Door (VD) or Origin/Destination Zones (OD).

  • Region of Interest (ROI): Calculates the number of objects reported within a certain region.

Parking Scenario

Digitize your parking area for smoother and easier operation

Define and change your parking parameters

In the Parking Scenario, you have the option to configure additional parameters to define your parking area. You can configure the maximum capacity and the maximum parking time. On top, you have the option to set the current utilization in order to calibrate the parking area once in a while.

The parameters can be set and changed in the Configuration tab of the dashboard.

Consider that changing the current utilization will overwrite the current value.

Widget type options

In a Parking Scenario, the two standard widgets Current & Historic Parking Utilization will be automatically created for you.

For every widget, there is a predefined filter which excludes the classes bicycle, motorbike and person in order to only consider vehicles needing a parking spot.

If you want to know the current utilization of your parking area, the Current Parking Utilization widget will tell you with one click.

Just select the widget type Current Parking Utilization, name the widget and choose if it should be calculated via Single-/Multispace detection or Entry/Exit counting. This choice will depend on the use case you have installed and configured.

To see utilization trends of your parking area, you can use the Historic Parking Utilization widget, which shows the utilization with the option to aggregate the data over given time periods. The utilization can be calculated based on Single-/Multispace detection or Entry/Exit counting.

You can choose to display the average, minimum and maximum utilization of the chosen aggregation period. On top, you can choose between absolute or percentage figures.

The standard defined output is a line chart that you can change according to your needs.

If you want to find out how frequently your parking users enter or exit through the different entries and exits of your parking area, you can display an Entry Exit Frequency widget.

You can choose the different Entries or Exits you want to consider, and aggregate and segment the data as needed.

In the example below, you see how often vehicles are using the one location for entry and exit (CL direction) per day.

The parking time is shown as soon as the vehicle has entered and exited your parking area. If the license plate is not detected at either entry or exit, no parking time is calculated (this avoids falsifying the statistics).

In this widget, the parking time is shown per license plate. As license plates are sensitive data, the parking time per license plate can only be displayed within the retention time you have configured for this sensitive data. For statistical information on parking time, please use the Historic Parking Utilization widget.

The output for this widget is a table with the standard columns License Plate and Parking Time. If you want more information, you can add data segmentation in order to show, for example, where the vehicle with the given license plate entered and exited, or to see a capture of the vehicle with the license plate at entry and exit.

Please consider that ANPR can't be configured on the older SWARM Perception Box P100, so the parking time widget will not retrieve any data if you use a P100.

The Historic Parking Time widgets will show you the minimum, maximum or average parking time of your parking users by saving the parking time based on the License plates.

If the license plate is not detected at either entry or exit, no parking time is calculated (this avoids falsifying the statistics).

Compared to the parking time widget, the data is available on a historic basis according to the data retention plan you have chosen for your SWARM Control Center. Any time a parking time is captured, it is added to the average, minimum and maximum calculation; this makes it possible to have parking time information and trends without storing sensitive data for a longer time.

The standard defined output is a line chart that you can change according to your needs. On top, the data aggregation period can be changed.

Please consider that ANPR can't be configured on the older SWARM Perception Box P100, so the parking time widget will not retrieve any data if you use a P100.

Do you have parking users exceeding the maximum parking time of your parking area quite often?

You can start to automate the enforcement process by using the SWARM solution, which will automatically tell you the license plates which exceeded the parking time. In order to have evidence, the SWARM software takes a picture of the vehicle with the license plate and the timestamps when the vehicle entered and exited the parking area.

You simply need to choose the Parking Time Violation widget and everything else will be done in the background for you based on the maximum parking time parameter you have set in the Dashboard Configuration tab.

You can preview the evidence picture by clicking on show. In order to download the information required for the enforcement process, you can export the table in csv format as well as export the Evidence pictures as a zip folder.

Please consider that ANPR can't be configured on the older SWARM Perception Box P100, so the parking time widget will not retrieve any data if you use a P100.

In order to display the occupancy of your configured Single- or Multispace parking, you can use the widget type Single / Multi Space Parking Occupancy.

You will see the occupancy level of each of your configured parking spaces in a grid. If you only want to display certain parking spaces, you can select these dedicated parking spaces (RoI).


Rule Engine

Here you can find details on how to use the Rule Engine for your customized Scenario Configuration

With the Rule Engine, you can customize your event triggers. Reducing Big Data to relevant data is possible with just a few clicks: From simple adjustments to only get counts for one direction of the Counting Line to more complex rules to monitor a Region of Interest status when a vehicle crosses a Counting Line.

Creating a rule on a single event trigger

For rule creation, an event trigger has to be chosen to attach it to. Depending on the type of the event trigger, options are available to set flexible filter conditions.

For conditions combined via AND, all conditions need to be fulfilled. In the example above, events are sent only if a bicycle or person crosses the Counting Line in the IN direction.

Creating a combined rule on an RoI & CL

You can create combined conditions for RoI and CL. When they are chosen as an event trigger, the option to add another condition appears below. This subcondition needs to be based on a second RoI or CL. They will then be combined by an AND connection.

Combined rules trigger an event only if an object crosses the CL and the condition of the additional CL or RoI is met.

In the example below, the rule sends an event if a car, bus, truck or motorbike crosses the speed line at more than 50 km/h and, at the same time, a person has been in the RoI for longer than 5 seconds.

Save rules as templates, edit or delete them

Save rule as template

Any created rule can be tagged as a template. This provides the option to use the same logic on any camera stream within the same Control Center.

Edit or delete a rule

If you are deleting a rule that is tagged as a template, the template will be removed. In case a rule is created on a trigger (e.g.: CL) and the trigger gets deleted, the rule will disappear as well.

Dashboard overview & Widget creation

How to manage Widgets in Data Analytics Dashboards

The dashboards are created for your specific scenario (Traffic, Parking or Generic). In order to show valuable data with a low effort, the widget options vary per scenario.

The Dashboard overview can be customized with your widgets according to your needs. On the top left, you can select the time frame filter, which will be applied to any widget in this dashboard. The time filter is persisted individually for each dashboard on your browser. So as soon as you open the dashboard again, your last time filter will be applied.

You can move widgets across your dashboard by simply dragging and dropping them. The size of the widget can be adjusted by using the left bottom corner.

In addition, there is a full-screen option in the top right corner to display the widget dashboard in full size on your screen.

Widget creation

In order to create a new data widget, you can click on New Widget. In the widget creation process, the selection options vary per widget type. The widget type options depend on the scenario you have chosen for your dashboard (Parking, Traffic or Generic).

Below, you can find the description of different selection options at the widget creation process. This will give you an overview of the result of the selection options. (* mandatory)

  • Widget Name* - You can name your widget as you want. The name will be displayed for each widget on your Dashboard Overview.

  • Data aggregation - You can choose the time frame over which your data should be aggregated: per hour, day, week, month or year. E.g. if you choose to aggregate your data per day for a traffic counting use case, all counts of the day will be summed.

  • Data segmentation (split by) - You can split your data by given parameters of the created events. E.g. if you want to see the counts per day split by class and subclass, choose the data segmentation fields class and subclass.

  • Filter data - In order to narrow down your data you can filter on the given parameters by using one of the following operators: contains, does not contain, equals, does not equal, is set, is not set

  • Define Output* - For displaying your data in the right output, we have different options based on the widget types.

    Available output options: Table, Number, Bar Chart, Line Chart, Pie Chart, Chord Diagram

Technical concept

This page describes how the detection and matching of journeys works from a technical perspective.

In order to detect journeys of vehicles, there is a need to detect the same vehicle at several predefined locations. This means there needs to be a dedicated identifier in order to tell if these are the same vehicles.

For vehicles, the obvious unique identifier is the license plate (LP). So, the LP is taken as the unique identifier for matching vehicles across several locations. As LPs are considered personal data, a salted hashing function is applied to pseudonymize the personal data.

How does it work?

Based on the SWARM standard use case for traffic counting, the object (vehicle) will be detected and classified. If the journey time feature is enabled, the algorithm will run an LP detection and an LP reading for each detected vehicle. The raw string of the LP will then be pseudonymized with a so-called hashing mechanism, and the pseudonymized random text will be sent within the standard Counting Line (CL) event over the encrypted network.

In the upcoming section, more details of the single steps are described:

Detect vehicle

In each frame of the video stream, vehicles are detected and classified as cars, trucks, and buses. Alongside this, the vehicle is tracked across the frames of the video stream.

Detect license plate

For each classified vehicle, the license plate is detected and mapped to the object.

Read license plate

For each detected license plate, an optical character recognition (OCR) is applied to read the plate. The output of this part is a text which includes the raw string of the license plate.

Pseudonymize the license plate (Hashing)

In order to hash the LP, a salt shaker generates random salts in the backend (Cloud) and distributes the salts to the edge devices. A salt is random data that is used as an input to hash data, for example passwords or, in our case, LPs. The salt is not saved in the backend; the only place where the salts are temporarily stored is on the edge device (Perception Box).

In order to increase safety against potential attacks, the salt has a validity window of 12 hours. After the validity window, a new randomly generated salt is used. The graphic below illustrates an example of the hashing function used for LPs.

Salts 1-4 are generated by the salt shaker and distributed to each edge device. In order to always detect all journeys, each LP is hashed with two salts. Two salts are needed, as a journey could potentially have a longer travel time than the salt validity time. In the upcoming section, match event on possible journeys, it is shown why two salts per LP are needed.
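A minimal sketch of this hashing idea in Python, assuming SHA-256 as the hash function (the actual hash algorithm used by the SWARM software is not specified here):

import hashlib

def hash_plate(raw_plate: str, salt: bytes) -> str:
    # Pseudonymize a license plate with a random salt.
    # SHA-256 is an assumption for illustration purposes only.
    return hashlib.sha256(salt + raw_plate.encode("utf-8")).hexdigest()

# Each plate is hashed with two salts (e.g. the currently valid one and the
# adjacent one), so that journeys spanning a salt rollover can still be matched.
salts = [b"salt_2", b"salt_3"]
event_hashes = [hash_plate("W12345X", s) for s in salts]
print(event_hashes)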

Trigger and send event

If the vehicle crosses a counting line (CL), a CL event with the hashes (h) from the detected LP is sent via MQTT to the Cloud (Microsoft Azure) and saved in a structured database (DB).

Match event on possible journeys

On the cloud, the DB is regularly checked for possible matches within the hashes. As shown above, two hashes are created per detected vehicle. If one of the two hashes is the same for two different detections it will be saved as a journey with the journey time information, class, edge device names & GPS coordinates of the edge device.

In case the same hash is found in several locations, a multi-hop journey will be saved based on the sorting of the timestamps. (e.g.: Journey from location A to B to C)
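A simplified sketch of the matching idea (not the actual cloud implementation): group the received CL events by hash and sort the sightings by timestamp to derive the journey hops:

from collections import defaultdict

# Simplified illustration: each CL event carries the two plate hashes,
# a timestamp and the location (edge device name).
events = [
    {"hashes": {"h1", "h2"}, "timestamp": "2023-06-01T10:00:00Z", "location": "A"},
    {"hashes": {"h2", "h3"}, "timestamp": "2023-06-01T10:12:00Z", "location": "B"},
    {"hashes": {"h3", "h4"}, "timestamp": "2023-06-01T10:25:00Z", "location": "C"},
]

sightings = defaultdict(list)
for event in events:
    for h in event["hashes"]:
        sightings[h].append(event)

# Any hash seen at more than one location forms part of a journey; sorting by
# timestamp yields the hop order, and chaining shared hashes gives multi-hop
# journeys such as A -> B -> C.
for h, seen in sightings.items():
    if len(seen) > 1:
        hops = sorted(seen, key=lambda e: e["timestamp"])
        print(h, " -> ".join(e["location"] for e in hops))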

Anonymize data

After 12 hours, which is the validity time of the salt used for pseudonymizing the license plate, the pseudonymized LP is deleted. This makes the pseudonymized data anonymized. In summary, 12 hours after the detection of the vehicle and its LP, all data are anonymized.

Use Case Examples for Rule Engine

On this page you can find examples of rules for real-world use cases.

Wrong-way driver

Detect vehicles (motorized traffic) passing a street in the wrong direction (e.g. one-way streets or highway entrances).

U-turns

Create a new rule, name it and choose Origin destination as the trigger for the rule. For U-turns, a predefined template can be used. You still have the opportunity to adapt it according to your needs. Connect the existing origin and destination zones in your scenario; if an object goes from one zone back to the same zone, it can be assumed that this was a U-turn.

Unexpected class using dedicated areas

In traffic, there are several situations where a given class of street users should not use dedicated areas, e.g.:

  • people in the center of an intersection

  • Vehicles in fire-service zones

In order to check when and how often it happens, you can create a rule based on a predefined RoI in these dedicated areas. Create a new rule, name it and choose the RoI as triggers for the rule. You can find a template as an example for "person on street".

In the subcondition you can choose "Object" as a parameter and choose the minimum number of objects that need to fulfill the conditions. You can define which classes are expected or not. On top, a dwell time condition can be added in order to only take objects into account which are in the area longer than a given time (e.g. jaywalking, or illegal parking in fire-service zones).

Risk situations at pedestrian crossings

How often have you been cut off at a pedestrian crossing while crossing or waiting to cross? This happens on a daily basis, and quite often it comes very close to a severe incident. In order to know if and how often this happens, we provide you with a solution based on our rule engine. This gives you the basis to decide where to take dedicated actions. The solution is a combined rule with a CL that detects the vehicles and an RoI that focuses on pedestrians and bicycles. Configure a CL or speed line in front of the pedestrian crossing. On top, an RoI can be configured at the pedestrian crossing and/or the waiting area next to it.

With that configuration, one or several rules can be created. In this example, one rule for this high-risk situation is defined: detect when at least one person is on the pedestrian crossing and a vehicle crosses the speed line at more than 10 km/h.

Here is a short video showing how such a rule is applied.

Adapted Traffic Control

Define at least one ROI and create an associated rule. As long as the rule is valid, the associated Quido relay output is enabled (contact closed). One or more rules can be created for the same ROI.

Camera settings

Option to change camera parameters to optimize video stream settings for the SWARM solution.

The connection for changing camera settings from the SWARM Perception Platform is established via the open ONVIF standard.

Make sure to enable ONVIF on your camera and create an admin user with the same user credentials as for the camera itself. In case the camera is delivered by SWARM, ONVIF is enabled by default.

The camera settings section is split into two tabs. One tab is for checking if the Basic settings needed for the Swarm Analytics processing are correctly set. In the Advanced settings, camera parameters can be manually adjusted and optimized.

Basic settings

In the basic settings tab, the current main configuration of the camera is shown and compared with the recommended settings for your configuration. The icons per setting indicate if the applied settings match Swarm's recommendations.

There is an option to automatically apply the recommended settings in order to have the camera configured for achieving the best results.

Advanced settings

As each installation is different, especially in terms of illumination and distance as well as further external factors, you can configure the camera settings individually for receiving the best image quality for data analysis with the SWARM solution.

Change and apply settings. When settings are applied, the preview frame is refreshed and you will see how the changes impact the image quality. In case you are not happy with the changes you just made, click on revert settings. The settings will then be reverted to the settings which have been applied at the time the camera settings page was opened.

The following configuration options are available:

Example how to enable ONVIF on Hikvision cameras

You can find the ONVIF setting in the following section of the camera settings on the Hikvision UI: Network --> Advanced Settings --> Integration protocol

  • Enable Open Network Video Interface

  • Make sure to select "Digest&ws-username token"

  • Add user

    • User Name: <same as for camera access>

    • Password: <same as for camera access>

    • Level: Administrator

  • Save

Time Synchronization needs to be correct for ONVIF calls to work

System --> System settings --> Time

  • Enable NTP for time synchronization

People Entry/Exit counting

How to succeed in setting up counting for people entering and exiting a dedicated area

SWARM software provides the solution to get the number of people either entering or leaving your configured area via a Virtual Door in different sceneries.

What data can be generated?

For this use case, SWARM software provides you with counts of people split by direction (IN/OUT). On top, several counts can be made with one camera, e.g. counting each door separately.

What needs to be considered for a successful analysis?

Environment specification

Hardware Specifications

SWARM Control Center camera and data configuration

| Hardware specification | Value |
| --- | --- |
| Pixels Per Meter (PPM) | > 60 PPM. PPM defines the amount of potential image detail that a camera offers at a given distance. Using the camera parameters defined below ensures that the minimum required PPM value is achieved. Tip: use the Axis lens calculator or a generic lens calculator. |
| Camera video resolution | 1280×720 pixel |
| Camera video protocol/codec | RTSP/H264 |
| Camera focal length | min. 2.8 mm varifocal lens* |
| Camera mounting - distance to object center | 5–70 meters* |
| Camera mounting height | 3–8 meters |
| Camera mounting - vertical angle to the object | < 50°. Note: setting the correct distance to the object and camera mounting height should result in the correct vertical angle. |
| Camera mounting - horizontal angle to the object | 0°–90° |

| Hardware specification | Value |
| --- | --- |
| Pixels Per Meter (PPM) | 250 PPM (vehicle). PPM defines the amount of potential image detail that a camera offers at a given distance. Using the camera parameters defined below ensures that the minimum required PPM value is achieved. Tip: use the Axis lens calculator or a generic lens calculator. |
| Camera video resolution | 1920×1080 pixel |
| Camera video protocol/codec | RTSP/H264; USB 3.0/UYVY, YUY2, YVYU |
| Camera focal length | min. 3.6–12 mm motorized adjustable focal length |
| Camera mounting - distance to object center | 5–20 meters. Please consider that the zoom needs to be adjusted according to the capture distance; more details are in the installation set-up guide. |
| Camera mounting height | 3–8 meters. Please follow the installation set-up guide in detail. |
| Camera mounting - vertical angle to the object | < 40°. Note: setting the correct distance to the vehicle and camera mounting height should result in the correct vertical angle. |
| Camera mounting - horizontal angle to the object | 0°–20° |

Dahua IPC_HFW5442EP-ZE-B (datasheet: https://www.dahuasecurity.com/asset/upload/uploads/soft/20190506/DH-IPC-HFW5442E-ZE.pdf)

HikVision DS-2CD2646G2-IZS or DS-2CD2645FWD-IZS (https://www.hikvision.com/en/products/IP-Products/Network-Cameras/Pro-Series-EasyIP-/ds-2cd2646g2-izs/)

Configuration options:

  • Model

  • Counting Line

  • Journey Time: Choose Journey time mode on Global settings

  • Raw Tracks: Disabled

SWARM Control Center camera and data configuration

On top, you have the option to retrieve and display the SWARM software logs to get a more detailed overview in case the box is not running as expected. There you can see if the box is able to connect to the camera. If the connection to the camera is not successful, please check the camera & network settings on your side. As every piece of hardware needs a reboot from time to time, we included the "Reboot device" function here. If you still experience issues, please contact our support team.

See the definition of the statuses on the Camera and Device Monitoring page.


Smaller vans such as the VW Multivan are included, as well as vehicles similar to the Fiat Ducato.

This includes autobuses, coaches, double-decker, motor buses, motor coaches, omnibuses, passenger vehicles and school buses.

This class includes tractors (with or without trailer), ATVs and quads, forklifts, road rollers, excavators and snow plows.

Journey time is defined as the time passed between sightings of the same vehicle across two or more camera streams. The identification of vehicles is based on number plates. Please refer to our detailed guide about ANPR for a general overview.

Find out how to configure automatic email alerts for status changes in our Monitoring Alerts section.

If your device appears offline and this is not intended, please follow our Troubleshooting Guidelines!

You need to configure the camera and data connection as well as your specific scenario configuration according to your use case.

We suggest using this calibration view especially for calibrating configurations with Regions of Interest.

We suggest using this calibration support for any traffic monitoring use case as well as for the intersection monitoring use case configuration.

If you want information on your traffic based on the counts and classification of any object passing your counting area, the Traffic Counting widget is exactly what you need.

First, you need to choose the Counting Lines you want to have the count from.

You can display the count of the traffic aggregated over the chosen time period and split by direction. Another option in this widget is to display the modal split of your traffic, which shows the distribution of the different object classes. If you have configured Speed estimation for your counting line, you will be able to retrieve the average speed per counting aggregation or even split the counts into different speed ranges (10 km/h ranges).

The "Include average speed in data" toggle will only give you results in case you have configured speed estimation on the chosen Counting Line.


You simply need to choose the widget type and your output format, and you will see the counts from an Origin zone to a Destination zone. You can display these in a dedicated output format called a Chord diagram.

If you want to get the average speed of your traffic over a given time period, Speed Estimation is the widget type to choose.

You simply need to choose the Counting Line where you have configured your speed estimates, choose a level of aggregation and you will get a line chart or table with the average speed over your chosen time period.

If a rule has been created on the chosen cameras, there is the option to display how often the defined rule was triggered across a given time interval. Simply choose the widget type Rule trigger and the rule whose occurrence frequency you would like to see. The data aggregation can be chosen according to your individual needs.

The device health metrics allow you to provide evidence for reliable and continuous data collection and to self-diagnose (e.g. stable network connectivity/power supply/camera connection/processing speed,... )

Gives an overview of the status and potential restarts of the device

Gives an overview of the status per camera stream

In case any Device Health Metric is not showing the expected values, please follow our Troubleshooting Guidelines.

In the Generic Scenario, you can create widgets based on the data generated with any event type.

You will be able to choose between the widget types described in the table below. In the widget creation process, you have the same selection options as in any other scenario.

Widget type
Description

By using the ANPR feature, which can be enabled in the Scenario Configuration, the parking time of your parking users will be calculated.

Name your rule - This name is used to create widgets in Data Analytics, and will be part of the event you receive via MQTT.

Choose the event trigger the rule should be based on. Any of your already configured event triggers can be chosen. In case Origin/Destination is selected, all configured zones are used automatically.

You have the option to choose from predefined templates or your individual rules, which you have tagged as templates yourself. See later in this section how to tag a rule as a template.

Set your subconditions. With subconditions, you can filter down to only gather the relevant data for this rule. The parameter options for the subconditions depend on the chosen event trigger.

After creating a rule, the Scenario Configuration of the camera needs to be saved so that the rule is applied accordingly.

In the actions section, you can click on the tag symbol in order to save the rule as a template. If the rule is tagged as such, the symbol will be highlighted.

Rules can be edited by clicking on the edit symbol; this opens the edit mode of the rule. By clicking on the bin symbol, you can delete a rule. A confirmation of the deletion is required to finalize the action.

As a first step, the scenario needs to be configured on camera level. Follow the setup guideline for a standard traffic counting use case. Create a new rule, name it and choose the configured counting line (CL). For wrong-way drivers, a predefined template can be used. You still have the opportunity to adapt it according to your needs. For the wrong-way driver, you can create a rule that the direction needs to equal "out", which in your configured scene is the wrong direction.

At an intersection, only detect objects which are performing a U-turn. As a first step, the scenario needs to be configured on camera level. Follow the setup guideline for a standard intersection monitoring use case.

Please contact Support if you would like to try out this feature or if you have any further questions.


Find detailed information about camera requirements/settings as well as camera positioning in the table below.

Possible Camera for this use case

The configuration of the solution can be managed centrally in the SWARM Control Center. Below, you can see how a standard people counting use case needs to be configured for optimal results.

In order to start your configuration, make sure that you have configured your camera and data connection.

Configuration settings

How to place the configuration type?

For the best accuracy of the Virtual Door, the Virtual Door should be placed approximately in the middle of the video frame and not too close to the camera, so that people are detected before their center point is already inside the Virtual Door.

The direction IN/OUT cannot be chosen. If a person is detected outside the VD and disappears inside the VD, it counts as direction IN.

Visualize data

Scenario

In our Generic Scenario section, you can find more details about the possible metrics to use for creating your Generic Scenario Dashboards.

Example

As people counting is based on a Virtual Door, you need to choose the metrics "VD count" or "VD IN/OUT Difference". You then have different options to choose the data you want for a certain time period as well as the output format (e.g. bar chart, number, table, ...).

Retrieve your data

If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.


For analyzing an intersection in order to see how your traffic is moving across the intersection, you can use the Origin-Destination Analysis, which is based on the intersection monitoring use case configuration.

You can visualize data via Data Analytics in different widgets. To perform people entry/exit counting, we offer the Generic Scenario, which offers a bundle of metrics to analyze your raw data.

If you would like to integrate the data into your IT environment, you can use the API. In Data Analytics, you will find a description of the request to use for retrieving the data of each widget.


Counting Line Count

The number of objects that crossed a Counting line (CL). This can be split by direction or classification.

Counting Line IN/OUT difference

The difference of objects which crossed the CL in IN direction and OUT direction.

Counting Line IN/OUT difference = Counting Line IN - Counting Line OUT

Origin Destination Count

The number of objects that flowed from origin zone to destination zone in a scene.

Region of Interest Average Person / Cars / Trucks / Buses

Average number of objects (Person, Cars, Trucks or Buses) reported within the configured regions. For this widget type, you have the option to choose multiple classes.

Region of Interest Min Person / Cars / Trucks / Buses

Minimum number of objects (Person, Cars, Trucks or Buses) reported within the configured regions. For this widget type, you have the option to choose multiple classes.

Region of Interest Max Person / Cars / Trucks / Buses

Maximum number of objects reported within the configured regions. For this widget type, you have the option to choose multiple classes.

Virtual Door count

The number of objects that passed through a Virtual Door. Please remember that only objects are counted which either appear outside the VD and disappear inside the VD, or the other way around.

Virtual Door IN/OUT difference

The difference of objects which were counted as IN direction and OUT direction at Virtual doors.

Virtual Door IN/OUT difference = Virtual Door IN - Virtual Door OUT

| Configuration | Description | Value |
| --- | --- | --- |
| Brightness | Defines how dark or bright the camera image is | From 0 (dark) to 10 (bright) |
| Contrast | Difference between bright and dark areas of the camera image | From 0 (low contrast) to 10 (very high contrast) |
| Saturation | Describes the depth or intensity of colors in the camera image | From 0 (low color intensity) to 10 (high color intensity) |
| Sharpness | Defines how clearly details are rendered in the camera image | From 0 (low details) to 10 (high details) |
| Shutter speed | Speed at which the shutter of the camera closes (illumination time) | Generally, a fast shutter can prevent blurry images; however, low-light conditions sometimes require a higher value. Values are in seconds, for example 1/200 s = 0.005 s |
| Day/Night mode | Choose between day, night or auto mode, which applies the IR-cut filter depending on camera sensor inputs | Day, Night, Auto |
| WDR (Wide dynamic range) | For high-contrast illumination scenarios, WDR helps to get details even in dark and bright areas | When WDR is activated, the intensity level of WDR can be adjusted |
| Zoom | Motorized optical zoom of cameras | Two levels of zoom distance are available, indicated by the + and - buttons. Zoom is applied instantly to the camera and cannot be reverted automatically. |

| Environment specification | Value |
| --- | --- |
| Object velocity | < 10 km/h (walking speed) |
| Day/Night/Lighting | Daytime or well illuminated |
| Indoor/Outdoor | Indoor or Outdoor |
| Expected Accuracy (when all environmental, hardware, and camera requirements are met) | > 95% |
| Supported Products | VPX, P401, P101/OP101, P100/OP100 |
| Frames Per Second (FPS) | 12 |


Benchmarks

Benchmarked maximum number of cameras on given hardware

Here you can find an overview of how many cameras of each use case you can run on Perception Boxes as well as on reference hardware for VPX.

Perception Box

| Perception Box | Traffic counting & Intersection monitoring | Adaptive Traffic Control | Journey Time & distribution | Barrierless parking | Barrierless parking with ANPR | Single Space Parking | People |
| --- | --- | --- | --- | --- | --- | --- | --- |
| P101 | 1 | 2 | 1 | 2 | 1 | 4 | 2 |
| P401 | 3 | 6 | 3 | 6 | 3 | 12 | 6 |
| OP101AC | 1 | - | 1 | 2 | 1 | 2 | 2 |
| OP101DC | 1 | - | 1 | 1 | 1 | 1 | 1 |
| P100 | 1 | - | 0 | 2 | 0 | 3 | 2 |
| OP100 | 1 | - | 0 | 1 | 0 | 1 | 1 |

With the OP101DC (& OP100) you can only operate 1 camera as there is just one Ethernet port to connect.

VPX reference hardware benchmarks

Take into consideration that these performance figures can only be reached if the Swarm VPX agent is running exclusively on the hardware.

Test setup

Devices

Overview about the Device Configuration in the Swarm Control Center

In the Device Configuration tab of the SWARM Control Center, you can centrally manage all your Perception Boxes and configure the cameras in order to capture the data as needed for your use cases.

You can see the different parts of the device configuration described below.

Mark
Description
1
2

Sort, Search & Filter

Especially when hosting a large number of devices, you can benefit from our options to search for the specific device you want to manage. Furthermore, we offer the option to sort the list or filter for a specific monitoring status of the camera connections. When a filter is set, this is indicated at the top, including an option to quickly clear all filters.

3

Device Name / ID of your Perception Boxes or your Hardware. You can change the Device Name of the Boxes according to your preferences.

The Unique ID is used for communication between edge devices (Perception Box) and Azure Cloud.

4

This status indicates if the connection between the Perception Box and the Management Hub (Azure) is established. Possible values are Online, Offline or Unknown. If a device is offline unexpectedly, please check out our troubleshooting guide.

5

The Status represents basic monitoring of SWARM software and gives an indication if the software is up and running on device level.

6

Auto refresh Button: Whenever something has been changed in the configuration, or a status changes, this option helps you to automatically refresh the Device Configuration page.

Administration

Orchestration of Control Center Parameters

The administration section of your Control Center consists of the following 3 subsections.

SCC API

API to gather the specific data out of the Swarm Control Center

Make sure to add your tenant ID as a header in the authentication flow.

How to gather the URL for your specific API

To gather the first part of the URL for your specific API Documentation/Swagger UI you can either contact our support or grab it from the source code of your Control Center.

Example

Retrieve the device monitoring status

  1. Go to the Swagger UI

2. The API call above will give you the status of a device and returns the following:

{
  "boxStatus": {
    "connectionState": "CONNECTED",
    "runtimeState": "DISABLED"
  },
  "id": "676cac42-f3d6-416d-ac83-3f54f1c0bb43",
  "name": "7th NE parking garage entrance",
  "statusId": "676cac42-f3d6-416d-ac83-3f54f1c0bb43",
  "tags": [
    {
      "name": "Roxxon Energy Corporation"
    }
  ],
  "type": "P100"
}

3. The states are defined in the API documentation below

4. You can as well get the state of the individual streams. The API returns the following:

[
  {
    "id": "fd02a4c9-5e55-4100-a2fd-d76d16993bce",
    "name": "",
    "model": "traffic-detector-urban-standard-fast",
    "streamStatus": {
      "state": "NOT_OPERATIONAL",
      "errorReason": [
        "ENGINE"
      ]
    },
    "enabled": true
  }
]
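A minimal sketch of such a call from Python (illustrative only: the base URL, endpoint path, authentication and the exact tenant header name are placeholders; take the real values from your Swagger UI or request them from support):

import requests

BASE_URL = "https://<your-control-center>/api"   # placeholder base URL
HEADERS = {
    "Authorization": "Bearer <token>",           # placeholder authentication
    "X-Tenant-Id": "<your-tenant-id>",           # tenant ID header (exact name may differ)
}

def get_device_status(device_id: str) -> dict:
    # Fetch the device status document shown in the example above.
    response = requests.get(f"{BASE_URL}/devices/{device_id}", headers=HEADERS)
    response.raise_for_status()
    return response.json()

device = get_device_status("676cac42-f3d6-416d-ac83-3f54f1c0bb43")
print(device["boxStatus"]["connectionState"])    # e.g. "CONNECTED"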

Raw event data with Custom MQTT server

Getting started with your custom MQTT connection

As soon as you have configured your use case in the Swarm Control Center, the SWARM software generates events. These events are transferred as standard JSON.

Custom MQTT broker

For higher security, you can use MQTT over SSL. Simply add ssl:// prefix to the broker configuration.

Message compression

If message compression is configured, the events are compressed with zlib/inflate.
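A minimal sketch of consuming these events with a Python MQTT client (illustrative only: broker host, port, topic and credentials are placeholders, and paho-mqtt is just one possible client library):

import json
import zlib
import paho.mqtt.client as mqtt  # pip install paho-mqtt

BROKER_HOST = "mqtt.example.com"   # placeholder broker
BROKER_PORT = 8883                 # 1883 for plain TCP, 8883 typically for TLS
TOPIC = "swarm/events/#"           # placeholder topic

def on_message(client, userdata, message):
    payload = message.payload
    # If message compression is enabled, the JSON payload is zlib-compressed.
    try:
        payload = zlib.decompress(payload)
    except zlib.error:
        pass  # payload was not compressed
    event = json.loads(payload)
    print(event)

client = mqtt.Client()             # with paho-mqtt >= 2.0 pass mqtt.CallbackAPIVersion.VERSION2 here
client.tls_set()                   # only needed when the ssl:// broker option is used
client.on_message = on_message
client.connect(BROKER_HOST, BROKER_PORT)
client.subscribe(TOPIC)
client.loop_forever()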

Swarm Event Scheme

Find the event schemes of the different configuration types in the GitHub repository linked below.

Counting Line Event

A counting line event is triggered if an object crosses a virtual line (identified by the property lineId). The line has a user-defined name (property lineName). A timestamp (property timestamp) is set when the event occurred. The object can cross the line in two directions (property direction) and is either moving in or out. Additionally, the object that crosses the line is classified (properties class & subClass). The classes depend on the use case.

In case the ANPR feature is enabled, the license plate (property plateNumber) and the license plate country (property numberPlateOrigin) will be added to the event.

With ANPR, captures of the license plate are taken at entries and exits. The license plate capture can be attached to the MQTT message in JPG format, encoded with BASE64.

If speed estimation is enabled and configured, the speed estimate (property speedestimate) will give the speed estimate output in km/h.

Region of Interest Event

The Region of Interest Event depends on the type of the Region of Interest. A Region of Interest with RoI type Parking will generate a ParkingEvent, and the RoI type Generic will generate a RegionOfInterestEvent.

Parking Event

A parking event is triggered by a time interval every 10 seconds. The information of all the configured Parking RoI will be aggregated in one single event. In parkingSummary all the RoI will be listed with the configured capacity and the current count of vehicles in the RoI.

As a total summary, you will have the totalCapacity and the totalVehicles which gives a complete overview of all configured Parking RoI in this camera stream.

As an Early Availability feature, you can enable ANPR for Parking RoI. This will provide the license plate (property plateNumber) and the license plate country (property numberPlateOrigin) in a string format.

Region of Interest Event

A region of interest event is triggered either by a state change or by a time interval (property triggerType). The state (property state) can change from occupied to vacant or vice-versa. It is occupied in case the number of objects in the RoI is at least as high as the configured capacity.

Every event contains a user-defined name (property roiName) and a timestamp (property timestamp) when the event occurred. Detected objects and their associated class and dwell times are listed (property objects). The classes are dependent on the use case.

Rule Event

In case a rule is created on an event trigger, a rule event is sent. The rule event is triggered based on the chosen event trigger logic in combination with the defined conditions. A timestamp (property timestamp) is set when the event occurred. The rule event includes the generic information around the rule name, device and stream UUID. On top of the event information, the chosen standard event information is part of the message in the same format as for the standard messages of the chosen event triggers.

Raw track Event (Heatmap)

Raw track mode traces objects as they move through the field of view. A complete trace of the route that the object took is generated as soon as the object exits the field of view.

This trace includes the classification of the object (property class) and the path of the object throughout the field of view. The class is dependent on the particular use case.

The track is described as a series of path elements, which include a timestamp and the top-left coordinates along with width and height of the tracked object. There are a maximum of 10 path elements in every event.

Breakdown of Object related attributes:

Classes in the Swarm Event Scheme

Example Counting Line detecting a Van. Note that the class is "car" and the subclass is "van".

 "crossingLineEvent":{
      "class":"car",
      "subClass":"van",
      "direction":"in",
      "lineId":"test_id",
      "lineName":"office",
      "timestamp":"2019-12-29T10:31:14.373202Z"
   },

Data Analytics API (REST API)

Access Data Analytics widgets underlying data via API

The REST API makes generated event data available to third-party applications, retrieved from your Data Analytics widgets.

API Call

Once you configure a widget, find the item "API call" in the side menu.

Authentication

Integration example

In the GitHub repository below you can find example code that highlights how to integrate the data into your own application. It showcases how to handle the required authentication as well as how to perform queries.

Example Request

Bicycle Counting

You can see a Data Analytics widget for bicycle counting as an example below. The respective type of widget (Traffic Counting) is selected, data is aggregated per day, split by object class and direction, and we filter for bicycles only.

API Request

The API-Call option shows the respective GET request for this data, as you can see below.

https://example.com/cubejs-api/v1/load?query=
{
   "measures":[
      "CrossingEvents.count"
   ],
   "dimensions":[
      "CrossingEvents.classification",
      "CrossingEvents.direction"
   ],
   "segments":[],
   "filters":[
      {
         "member":"CrossingEvents.streamId",
         "operator":"equals",
         "values":[
            "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
         ]
      },
      {
         "member":"CrossingEvents.classification",
         "operator":"contains",
         "values":[
            "bicycle"
         ]
      },
      {
         "member":"CrossingEvents.lineId",
         "operator":"equals",
         "values":[
            "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
         ]
      }
   ],
   "timeDimensions":[
      {
         "dimension":"CrossingEvents.timestamp",
         "granularity":"day",
         "dateRange":"This week"
      }
   ],
   "order":{}
}

API Response (shortened)

{
  "queryType": "regularQuery",
  "results": [
    {
      "query": {...},
      "data": [
        {
          "CrossingEvents.classification": "bicycle",
          "CrossingEvents.direction": "in",
          "CrossingEvents.timestamp.day": "2021-11-02T00:00:00.000",
          "CrossingEvents.timestamp": "2021-11-02T00:00:00.000",
          "CrossingEvents.count": "235"
        },
        {
          "CrossingEvents.classification": "bicycle",
          "CrossingEvents.direction": "out",
          "CrossingEvents.timestamp.day": "2021-11-02T00:00:00.000",
          "CrossingEvents.timestamp": "2021-11-02T00:00:00.000",
          "CrossingEvents.count": "234"
        },
        {
          "CrossingEvents.classification": "bicycle",
          "CrossingEvents.direction": "in",
          "CrossingEvents.timestamp.day": "2021-11-03T00:00:00.000",
          "CrossingEvents.timestamp": "2021-11-03T00:00:00.000",
          "CrossingEvents.count": "203"
        },
        {
          "CrossingEvents.classification": "bicycle",
          "CrossingEvents.direction": "out",
          "CrossingEvents.timestamp.day": "2021-11-03T00:00:00.000",
          "CrossingEvents.timestamp": "2021-11-03T00:00:00.000",
          "CrossingEvents.count": "249"
        }
      ],
      "annotation": {...}
    }
  ],
  "pivotQuery": {...}
}
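A minimal sketch of issuing such a request from Python (illustrative only: the base URL comes from your widget's "API call" item, and the authorization header is a placeholder for the authentication mechanism of your Control Center):

import json
import requests

BASE_URL = "https://example.com/cubejs-api/v1/load"
HEADERS = {"Authorization": "<token>"}  # placeholder authentication

# The query corresponds to the widget configuration, as in the example above.
query = {
    "measures": ["CrossingEvents.count"],
    "dimensions": ["CrossingEvents.classification", "CrossingEvents.direction"],
    "filters": [
        {
            "member": "CrossingEvents.classification",
            "operator": "contains",
            "values": ["bicycle"],
        }
    ],
    "timeDimensions": [
        {
            "dimension": "CrossingEvents.timestamp",
            "granularity": "day",
            "dateRange": "This week",
        }
    ],
}

response = requests.get(BASE_URL, headers=HEADERS, params={"query": json.dumps(query)})
response.raise_for_status()
for row in response.json()["results"][0]["data"]:
    print(row["CrossingEvents.timestamp.day"], row["CrossingEvents.direction"], row["CrossingEvents.count"])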

Extended Documentation

Troubleshooting Guidelines

This page provides a collection of common issues that might occur as well as steps to resolve them.

  1. Check if the device is powered

    • Is the device powered via the DC barrel connector (PoE is not supported)?

    • Is the power supply fulfilling the recommended specs (12V, >2A)?

    • Is the LED next to the ethernet port on? Please take a picture.

  2. Check internet connectivity

    • Does the P101 respond to ping in the local network?

  • Check if the device is powered

    • Is the device powered with DC via an external power supply?

    • Are the red and the yellow wires connected to the +-pole?

    • Is the power supply fulfilling the recommended specs (24VDC/4A)?

    • Is the LED next to the connection ports on? Please take a picture.

  • Check internet connectivity

    • Does the P401 respond to ping in the local network?

  1. Check if the device is powered

  2. Check internet connectivity

    • Check if your SIM card has enough data left (e.g. online portal)

    • Check if the SIM card works with your LTE stick

      • Plug the LTE stick with the SIM card into your PC or Notebook

        • Deactivate WLAN, unplug ethernet cable

        • In case you are using a Huawei stick provided by SWARM, the stick's LED has to be solid blue or green. If the LED blinks, there is no internet connection.

        • Check if the PC/Notebook is connected to the internet by opening a website

          • If your PC/Notebook can connect to the internet by using the LTE stick it should as well work with your OP101

  3. Check OP101 Hardware

    • Open OP101

      • Can you spot any damage (e.g. loose cables)? Please take a picture

      • Check if the LTE stick as well as the USB connector are properly plugged in

      • Check if all ethernet cables connecting P101, PoE switch and cameras are properly plugged in

Monitoring Alerts

E-Mail Alerts to monitor the status of your devices and streams

Automatic E-Mail alerts can be created to get immediate notifications on potential issues with your Swarm Perception Boxes. In the section "Monitoring Alerts" in your Swarm Control Center, custom alerts can be created and managed. Choose from several predefined alerting conditions, choose the relevant devices, define E-Mail recipients, and get instant E-Mail notifications if a device changes the status from "Operational" to "Not operational" or "Warning".

Only admins can set and maintain monitoring alerts. For standard users and viewers, the section Monitoring Alerts is not visible in the Control Center at all.

Create & Edit Alerts

The creation of the alert is split into three steps.

1. Alert conditions

Alert conditions are based on the connection status and the stream monitoring status. The table below explains the three predefined alert conditions.

In case one chosen condition is true, an alert will be sent. You have the option to multi-select the available conditions.

On top, you can choose to get a resolution notification as soon as the error condition is resolved.

2. Choose the devices the alert should be applied to

In the multi-select table, choose the devices the alert should be applied to. The select-all option in the top left corner selects all devices on the current page of the table. To search for the right devices, use the search field in the top right corner.

If the selection table contains more than one page of devices, be aware that the select-all option only selects the devices on the active page.

3. Define the E-Mail recipients

In the last step of the Alert creation process, the recipients need to be defined. By clicking on add, an E-Mail address can be added. There is no limitation on the number of recipients.

There is no need for recipients to have any dedicated access to the Control Center. Any E-Mail address can be chosen. So feel free to choose the group E-Mail addresses of your teams.

Edit or delete Alerts

In the overview table where all created Alerts are displayed, they can be edited or deleted. In the last column, you can find the action buttons to perform this.

The editing workflow looks the same as the creation process.

Browser Compatibility SCC

In order to optimize the usage of our SWARM Control Center, you need to use one of the recommended and supported browsers below.

We recommend using the most up-to-date browser that's compatible with your operating system. The following browsers are supported:

Why should you stop using older browser versions?

Newer browser versions are generally the safest, as they lower the chance of intrusion and other security risks. Older browsers and operating systems are not recommended since they do not always support the operation and security of our Control Center. We do not support older, out-of-date versions or browsers not mentioned below.

We test new browser versions to ensure they work properly with our websites, although not usually right away after they are released.

Barrierless Parking and ANPR

Use case for barrierless parking, including utilization of the parking area by recognizing the license plate of the parking customer

Introduction

Would you like to get more insights into your parking spaces and the customers using them? SWARM software provides the solution to gather the data needed for parking monitoring.

In order to efficiently manage your parking infrastructure, gathering accurate and reliable data is key. Our parking monitoring solution builds on Artificial Intelligence based software that is designed to detect, count and classify objects entering indoor or outdoor parking spots. The generated data can be used as an information basis, helping to predictively guide customers in parking garages and outdoor facilities or to manage parking violations. Basically speaking, we can help you answer questions such as:

  • How is the current utilization of my parking spot?

  • What is the historic parking utilization at an average level?

  • How long are my customers parking in the garage?

  • Is there any possibility to see and prove parking violations (e.g. long-term parking)?

  • …

Background

Technology-wise, our parking monitoring system consists of the following parts: object detection, object tracking, counting objects crossing a virtual line in the field of interest, object classification, and ANPR. The following sections briefly describe these pretrained technologies used for parking monitoring.

Object Detection

The main task here is to distinguish objects from the background of the video stream. This is accomplished by training our algorithm to recognize a car as an object of interest, in contrast to a tree, for example. This computer vision technology deals with the localization of the object: each detected item is framed in the analyzed video frame and labeled with one of the predefined classes.

Object Classification

The recognized objects are furthermore classified to differentiate the types of vehicles present in traffic. Based on weight class, axles and other visual features, the software assigns each recognized object to one of the predefined, trained classes. For each item, our machine learning model outputs one of the object classes detected by SWARM.

Object Tracking

Where was the object initially detected, and where did it leave the retrieved camera image? We equip you with the information to answer this question. Our software re-detects the same object frame after frame and in this way tracks it through the stream. The gathered data enables you to visualize the exact path of an object, e.g. for generating heat maps, analyzing frequented areas in the scene and/or planning strategic infrastructure needs.

Crossing Virtual Lines

Another technology available in our traffic counting is used to monitor the streamed scene. By manually drawing a virtual line in our Swarm Control Center (SCC), we offer the opportunity to quantify the objects of interest crossing your counting line (CL). When objects are successfully detected and tracked until they reach a CL, our software triggers an event and sets the counter for this line accordingly.
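
The counting logic itself runs inside the SWARM software, but the geometric idea can be illustrated in a few lines: a crossing is registered once the tracked centre-point of an object changes sides of the counting line between two consecutive frames. A minimal Python sketch, as an illustration only and not SWARM's actual implementation:

# Illustration of the counting line (CL) idea, not SWARM's actual implementation:
# a crossing is registered when a tracked centre-point changes sides of the line
# between two consecutive frames (the line is treated as infinite for simplicity).

def side(line_a, line_b, point):
    """Sign of the cross product: tells on which side of the line a->b the point lies."""
    (ax, ay), (bx, by), (px, py) = line_a, line_b, point
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def crossed(line_a, line_b, prev_point, curr_point):
    """True if the track moved from one side of the counting line to the other."""
    return side(line_a, line_b, prev_point) * side(line_a, line_b, curr_point) < 0

# Example: a centre-point moving across a horizontal counting line.
print(crossed((0, 5), (10, 5), (4, 2), (4, 8)))   # True -> one count for this direction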

ANPR (Automatic Number Plate Recognition)

OCR (Optical Character Recognition)

Technology Specifics

ANPR Events

Before sending an event including the license plate information of a vehicle (entering a parking zone), our system performs the following steps:

Test setup

Our performance laboratory (“Performance Lab”) is set up like a real-world installation. For each scene, we send a test video from an RTSP server to all of our supported devices using an Ethernet connection. The models and software versions to be tested run on the devices, sending messages to an MQTT broker. Retrieved messages are compared with ground-truth counts, delivering accuracy measurements as well as ensuring overall system stability.

Scenes

Our ANPR parking test scenario includes the following scenes:

Performance

Our ANPR solution achieves an accuracy of over 90%.

Application Limitations

  • Crossing lines are not positioned correctly

  • Vehicles + license plates are occluded by another vehicle

  • The camera image is too dark, too bright or too blurry in order to correctly detect an object. Please see our requirements for the Parking Monitoring on the page linked below.

License Management

Overview about your licensed streams

The license management section provides an overview of your software licenses. This means in detail:

  • The number of licenses currently in use

    • Number of Camera streams activated. Disabled streams don't count.

  • The total number of licenses that were purchased

    • In general, all SPS (Swarm Perception Subscriptions) can be used with any hardware that belongs to you.

  • The current status of each license

    • ACTIVE = The license is currently valid, and the expiration date lies in the future

    • EXPIRED = The license is no longer valid and has therefore expired. Either the license was already renewed or you decided to let it run out.

    • INACTIVE = The license period starts on a future date.

  • The start and end date of each license validity

  • The order and invoice number as well as the number of streams that are included

Adding or activating additional streams is only possible if sufficient SPS licenses are available.

User Management

Manage users having access to your Control Center

The user management section provides an overview of all users that have access to your control center as well as the possibility to add, remove, or edit users and user roles.

Add a new user

To add a new user, simply click on "New User" and fill out all required fields. The new user needs to set a personal password by verifying the email address via the workflow "Forgot your password" on the Login Page.

Edit existing users

You can only change the role of existing users. If you have to change users' names or email addresses, you need to delete the user and subsequently create a new user.

User roles

  • Viewer: This is read-only permission for data analytics. It allows access to existing scenes and dashboards.

  • User: Can access device configuration and data analytics in a read/write fashion. Is allowed to reconfigure devices, create new scenes, dashboards, etc.

How do we measure Performance?

Overview about how we measure the performance of our released models

We calculate accuracy by comparing counts obtained by our traffic counting solution against a manually obtained ground truth (GT). Delivering correct and realistic accuracy measures is most important to us, and we therefore put real effort into obtaining our GT data.

We also make sure that scenes from any performance measurement never find their way into our training dataset, thereby avoiding overtraining and unrealistically high performance measurements that cannot be reached in real-world use cases.

Crossing line & Virtual door

The following example describes counting accuracy calculation for crossing lines.

Example

Given the following results table:

  • Scene 1 has 2 errors (1 missed, 1 overcount)

  • Scene 2 has 1 error (1 missed)

  • In total, there are 3 errors and 16 Ground truth counts (5 + 3 + 3 + 5)

This gives us an accuracy of (16 − 3) / 16 = 81.25%

Origin/Destination

The following example describes the counting accuracy calculation for origin/destination.

Scene 1 has 2 errors (1 missed, 1 overcount)

Scene 2 has 1 error (1 missed)

In total, there are 3 errors and 11 GT counts (5 + 3 + 3)

This gives us an accuracy of (11 − 3) / 11 = 72.72%
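
Both examples use the same formula, accuracy = (GT - errors) / GT. A short Python sketch that reproduces the two numbers:

# Accuracy = (ground-truth counts - errors) / ground-truth counts
def counting_accuracy(ground_truth_counts, errors):
    gt = sum(ground_truth_counts)
    return (gt - errors) / gt * 100

print(counting_accuracy([5, 3, 3, 5], 3))   # crossing line / virtual door example: 81.25
print(counting_accuracy([5, 3, 3], 3))      # origin/destination example: 72.72...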

ANPR

For automated number plate recognition (ANPR), the accuracy logic is the same as for crossing lines, with two additional restrictions:

  • vehicle class is not taken into account

  • the number plate sent in the event is compared and has to fully match the ground truth

For this example, we receive an accuracy of 4 / 6 × 100% ≈ 66.7%
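
In code, the ANPR check boils down to an exact string comparison per plate, with the vehicle class ignored. A minimal Python sketch using the plates from the example table:

# A read only counts as correct if the recognized plate fully matches the ground truth.
ground_truth = ["85BZHP", "BNW525", "DY741WJ", "GU278MB", "FW749XA", "ERZM551"]
recognized   = ["85BZHP", "BNW555", "DY741WJ", "GU278MB", "FW749XA", "ERZM55"]

correct = sum(gt == rec for gt, rec in zip(ground_truth, recognized))
print(correct / len(ground_truth) * 100)   # 66.66... -> reported as roughly 66.7%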

Devices used for Measuring

For our performance measures, we use different types of hardware to guarantee a stable version of our software. When we receive different results across devices in our Performance Lab, we publish the minimum percentage as the accuracy to be achieved.

In the table below, you can see the 4 different devices we test on, as well as an example of the results achieved. In this case, we would publish an accuracy of 90% as the target.

Recommended

Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.

> 80 PPM

Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
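
For a quick plausibility check, PPM can be estimated from resolution, focal length and distance with the pinhole camera model. A rough Python sketch; the sensor width used here is an assumption, take the real value from your camera's datasheet:

# Rough PPM estimate using the pinhole camera model:
#   field-of-view width at the object = sensor_width / focal_length * distance
#   PPM                               = horizontal_resolution / field-of-view width
def pixels_per_meter(resolution_px, focal_length_mm, sensor_width_mm, distance_m):
    fov_width_m = sensor_width_mm / focal_length_mm * distance_m
    return resolution_px / fov_width_m

# Example: 1280 px width, 2.8 mm lens, ~5.4 mm wide sensor (assumed), object at 5 m.
print(round(pixels_per_meter(1280, 2.8, 5.4, 5)))   # ~133 PPM, above the 80 PPM minimum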

  • Camera video resolution: 1280×720 pixels

  • Camera video protocol/codec: RTSP/H264 or USB 3.0/UYVY, YUY2, YVYU

  • Camera focal length: 2.8 mm

  • Camera mounting - distance to object center: 2-8 meters

  • Camera mounting height: 2-4 meters

  • Camera mounting - vertical angle to the object: <45° (Note: setting the correct distance to the vehicle and camera mounting height should result in the correct vertical angle to the vehicle)

  • Camera mounting - horizontal angle to the object: 0° - 360°

  • Camera FPS: > 25 FPS

  • Wide Dynamic Range: should be enabled

Camera

  • HikVision DS-2CD2046G2-IU (2.8 mm focal length)
    Link: https://www.hikvision.com/en/products/IP-Products/Network-Cameras/Pro-Series-EasyIP-/ds-2cd2046g2-i-u-/

Configuration

  • Model: People Full Body (if distance > 5 m to Virtual Door) or People Head (if distance < 5 m to Virtual Door)

  • Configuration option: VD (Virtual Door)

  • ANPR: Disabled

  • Heatmap: Disabled

Use the SWARM Control Center for camera and data configuration.

Hardware

  • 32GB SD card.

  • 10W power mode, with DC barrel jack power.

Power Mode: MAX N

  • Traffic: 1

  • Journey Time & distribution: 1

  • Barrierless Parking: 2

  • Barrierless Parking + ANPR: 1

  • Single / Multi Space: 4

  • People, Head: 2

Hardware

  • Developer KIT

  • Active cooler

  • Power supply: Included Developer KIT Power supply

The measured number of streams depends on the power mode used.

Power Mode: 20W 6 core

  • Traffic: 3

  • Journey Time & distribution: 3

  • Barrierless Parking: 6

  • Barrierless Parking + ANPR: 3

  • Single / Multi Space: 12

  • People, Head: 6

We are just about to benchmark it.

Hardware

The measured number of streams depends on the power mode used.

Power Mode: MAX N

  • Traffic: 5

  • Journey Time & distribution: 3

  • Barrierless Parking: 8

  • Barrierless Parking + ANPR: 5

  • Single / Multi Space: 10

  • People, Head: 8

Hardware

  • Jetson Orin Nano 4 GB

  • 32 GB SD card

  • Forecr DSBOX-ORN

Power Mode: 10 W

  • Traffic: 4

  • Journey Time & distribution: 2

  • Barrierless Parking: 5

  • Barrierless Parking + ANPR: 3

  • Single / Multi Space: 6

  • People, Head: 4

In order to replicate the above results, we describe our test setup in the following. To emulate RTSP cameras, we are using an RTSP server. All tests have been conducted at room temperature.

Toggle to change between and

See the definition of the status in the page.

Every setting and piece of information available in the Control Center can also be gathered via an API. As an example, the Swagger UI documentation for our demo instance can be found here:

Generally, we stick to the OAuth 2.0 client credentials flow on the Microsoft identity platform, documented here:

In case you don't want to use Data Analytics and retrieve data via the SWARM API, we provide the option to configure a custom MQTT broker.

The Swarm Perception Box will send events in the form of a JSON to an MQTT broker you configure. QoS level 1 is used to deliver events. In case events cannot be delivered (e.g. no connectivity), we cache up to 24,000 messages. The stream UUID is set automatically as MQTT client ID.
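
A minimal sketch of consuming these events with the Python paho-mqtt package (1.x API shown); broker host, port, credentials and topic are placeholders for whatever you configured as custom MQTT broker:

import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style constructor

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)          # every event is a JSON document
    print(msg.topic, event.get("version"))

client = mqtt.Client()
client.username_pw_set("<username>", "<password>")   # only if your broker requires auth
client.on_message = on_message
client.connect("<broker-host>", 1883)
client.subscribe("<your/topic>", qos=1)               # events are delivered with QoS level 1
client.loop_forever()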

There are several ways to validate a JSON against a schema; a good overview is provided by json-schema.org. As a starting point, we recommend jsonschemalint.com, an online tool to manually validate a JSON against our schema.
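
Validation can also be automated, for example with the Python jsonschema package. A minimal sketch; the schema file name is a placeholder for wherever you stored the downloaded schema:

import json
from jsonschema import validate, ValidationError

with open("swarm-event-schema.json") as f:    # placeholder file name
    schema = json.load(f)

def is_valid(event: dict) -> bool:
    """Return True if the event matches the schema, otherwise print the reason."""
    try:
        validate(instance=event, schema=schema)
        return True
    except ValidationError as err:
        print("Invalid event:", err.message)
        return False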

The header of the JSON is defined by the version of the format being used (property version). The format is major.minor: a major version change denotes a breaking change, whereas a minor version change indicates backward compatibility. For unique identifiers, we rely on UUIDs. Timestamps are defined with ISO 8601.

Please contact our support to enable the addition of license plate captures via MQTT.

Every event contains the class of the detected object. We arranged those objects into classes and subclasses for a better overview. You can see the classes and subclasses, as well as examples, in the models section.

For every Data Analytics widget, the underlying data can be queried via a provided REST API. Integration into third-party applications is therefore fast and easy.

The provided dialog pop-up shows in detail what the API request for this particular widget's data looks like. Copy/paste the curl command into a terminal and execute it. You can also test the call directly within the dialog, including the response format, by clicking on "Try it out!", which does not require the use of a terminal.

The provided access token is temporary. For a permanent integration into third-party applications, please request a permanent access token via the Support Portal.

We strictly follow the OAuth flow documented by Microsoft. There are several client libraries that you can use.

The REST API is based on Cube.js. More information on the functionality of the API can be found in the external documentation.

Although we are always happy to help, please take your time and try to find the issue you are facing here before contacting us via our SWARM Support Center.

My device shows up as "OFFLINE" in the control center

A camera stream is "not running"
  • Check hardware

    • Check if the ethernet cables connecting (O)P101, P401 (PoE switch), and the camera are properly plugged in and not damaged.

    • OP101:

      • Check the LED state of the port used on the PoE switch

    • Are you using the correct connection type? RTSP or USB Cameras are supported

    • Are you using the correct camera host, port and path to access the camera stream?

    • Are you using the correct username and password?

A camera stream has status "Warning"
  • Check the reason of the warning status by hovering over the symbol

    • Video frames cannot be retrieved correctly - at least 10% of the frames the camera delivers are broken.

      • This can have several reasons; please contact us via our SWARM Support Center.

    • Performance issues - the performance (frames per second) is dropping below the limit of the configured event types.

  • Check if your device is configured according to our recommendations

  • Do not ignore warnings of this type. Expected accuracy values might not be reached.

A camera stream is "not configured"
  • Check if the camera and data connection is configured

  • Check if the configuration of your use case is correct

"No camera feed" in Swarm Control Center
  • Try refreshing the frame

  • Try refreshing your browser window

    • Are you using the correct connection type? RTSP or USB Cameras are supported

    • Are you using the correct camera host, port and path to access the camera stream?

    • Are you using the correct username and password?

No events are created from a camera stream

General

Data Analytics

Custom MQTT

  • Know what you are doing:)

    • Connect via SSH to the device

    • Install tcpdump sudo apt install tcpdump

    • Record network traffic, filter for LTE interface and IP address of the broker

    sudo tcpdump -i eth0 host 91.213.98.152 -w dump


The SWARM Portal is a web-based application and runs in the browser of all modern desktop and tablet devices. To log in to your SCC, you must have JavaScript enabled in your browser.

  • Microsoft Edge (the latest version)

  • Chrome (the latest version)

  • Firefox (the latest version)

  • Safari (the latest version, Mac only)

ANPR stands for automated number plate recognition. For detailed settings and camera requirements, we refer to our use case description. To identify the number plate of the parking customer, we are using optical character recognition to read the numbers and figures that identify the vehicle.

OCR stands for optical character recognition and means, basically, converting an image of letters into the letters themselves. In our ANPR solution, we scan the image of the retrieved, classified vehicle. From this picture, our OCR solution reads the combination on the license plate to identify the customer. Recognizing the plate at entry and exit enables tracking of the parking time of each single vehicle.

In order to understand how to interpret our accuracy numbers, we give some more technical details on the ANPR solution. The detailed way of our accuracy calculation and an explanation of our test setup is documented in our “How do we measure Performance?” section.

In general, there are several reasons why parking monitoring systems cannot be expected to reach 100% accuracy. Those reasons can be split into various categories (technological, environmental and software-related) that lead either to missed counts or over-counts. Given the technical and environmental prerequisites specified in our set-up documentation, we identified the following limitations in the provided software.

Admin: Can do everything a “User” is allowed to do. Additionally, an admin has access to the Administration section.

Tip: Use the Axis lens calculator or a generic lens calculator.

Make sure you configured your scenario in a way that event triggers can actually capture events.

Use live calibration and track calibration to confirm that events can be triggered, e.g. do moving objects actually cross a counting line, or are parking vehicles detected inside a region of interest?

Check if you created suitable dashboards and widgets for your use case; please refer to our detailed data analytics guide.

Refer to our detailed guide on custom MQTT solutions.

Transfer the dump file and analyse it with Wireshark.


Device offline

  • Description: Gets triggered if a device changes its status from 'Online' to 'Offline'.

  • Latest historic status: Connection: Online

  • New status: Connection: Offline

Device Error

  • Description: Gets triggered if the stream status of one or more streams on the device changes to 'Not Running', from either 'Running' or 'Warning' (because they cannot deliver messages, connect to the camera, ...).

  • Latest historic status: Stream status: Running or Warning

  • New status: Stream status: Not Running

Device Warning

  • Description: Gets triggered if the stream status of one or more streams on the device changes from 'Running' to 'Warning' (due to a degraded camera connection, ...).

  • Latest historic status: Stream status: Running

  • New status: Stream status: Warning

Device / Accuracy

  • P101: 91%

  • Nvidia Jetson AGX: 91%

  • Nvidia Jetson NX: 91%

  • Nvidia GTX 1080: 90%

Network Requirements

Needed Requirements for your SWARM Perception Box

  • IPv4 is required (IPv6 is not supported)

    • A private IP4 address is okay. A public routable IP4 address is not required.

    • Make sure the MTU size is at least 1500 bytes.

  • At least 1Mbit/s down/up

Firewall (your network)

The P101/OP101/VPX Agent need to connect to the SWARM Control Center, which is hosted in the Microsoft Azure Cloud. This requires the following outgoing ports to be open in your firewall. Incoming ports are not required to be open.

Port / Protocol / Direction

  • Port 80: IPv4 - TCP/UDP, outgoing

  • Port 123: IPv4 - UDP, outgoing

  • Port 443: IPv4 - TCP/UDP, outgoing

  • Port 1194: IPv4 - UDP, outgoing

  • Port 8883: IPv4 - TCP, outgoing

  • Port 5671: IPv4 - TCP, outgoing

Typically, the camera video stream is accessed through port 554 (TCP/UDP)

If you are using your own MQTT broker, make sure to allow the required ports.

Troubleshooting

Connect your PC to the network the Perception Box is connected to.

IPv4

Make sure IP4 is supported

ping4 google.com

DNS

Make sure the DNS is able to resolve *.azure-devices.net, *.azure-devices-provisioning.net.

swarm@:~$ dig +short global.azure-devices-provisioning.net

id-prod-global-endpoint.trafficmanager.net.
idsu-prod-mrs-001-su.francesouth.cloudapp.azure.com.
40.79.180.98

Ports

Make sure that all above listed outgoing ports are open.

swarm@:~$ curl portquiz.net:8883
Port 8883 test successful!
Your IP: 127.0.0.1

SSL/TLS

Make sure the TLS certificate is valid (and not inspected). Watch out for Verification: OK.

swarm@:~$ openssl s_client -connect global.azure-devices-provisioning.net:443

CONNECTED(00000005)
depth=2 C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root
verify return:1
depth=1 C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 02
verify return:1
depth=0 CN = *.azure-devices-provisioning.net
verify return:1
---
Certificate chain
 0 s:CN = *.azure-devices-provisioning.net
   i:C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 02
 1 s:C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 02
   i:C = IE, O = Baltimore, OU = CyberTrust, CN = Baltimore CyberTrust Root
---
Server certificate
-----BEGIN CERTIFICATE-----
MIIIWTCCBkGgAwIBAgITfwATMr0tZ+TbqzQUkQAAABMyvTANBgkqhkiG9w0BAQsF
ADBPMQswCQYDVQQGEwJVUzEeMBwGA1UEChMVTWljcm9zb2Z0IENvcnBvcmF0aW9u
<<SNIP>>
/5bEzS0RghacUpAj47GmEtrpMGnjW+NpzowkjsR4HE2T54ItSlafD/4Am1Fbx/oE
/o14IXIGOpM+TlGPEifj+7cgIA7GESAgi8J3CaI=
-----END CERTIFICATE-----
subject=CN = *.azure-devices-provisioning.net

issuer=C = US, O = Microsoft Corporation, CN = Microsoft RSA TLS CA 02

---
No client certificate CA names sent
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA256:RSA+SHA384:RSA+SHA1:ECDSA+SHA256:ECDSA+SHA384:ECDSA+SHA1:DSA+SHA1:RSA+SHA512:ECDSA+SHA512
Shared Requested Signature Algorithms: RSA+SHA256:RSA+SHA384:RSA+SHA1:ECDSA+SHA256:ECDSA+SHA384:ECDSA+SHA1:DSA+SHA1:RSA+SHA512:ECDSA+SHA512
Peer signing digest: SHA256
Peer signature type: RSA
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 4003 bytes and written 444 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES128-GCM-SHA256
    Session-ID: 36070000994141FEF9A6DA8FFE8AEBAE8609332DED4B5B69AC05BF44FE3667B8
    Session-ID-ctx:
    Master-Key: 1D2580A0EECFF340F4A7DA46BC6B88D25309C78EFF35B736A2882745E010778D6EB29B45A1C7F62ADDF1AB6D2937EA9D
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1626709603
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: yes
---

  1. Detect the vehicle

  2. Detect the license plate of the vehicle

  3. Track the vehicle and check if it crossed the entry/exit line

  4. Recognize the letters on the detected number plate (OCR) and send the information if the vehicle track crossed the line

recognized as BNW525

Scene description: Parking garage entry

Task: Count entering vehicles and recognize the number plate

Conditions: daylight/indoor

Camera setup:

  • 2688 × 1520 resolution

  • height: 3 m

  • distance: 10 m

  • focal length: 6 mm

Object velocity: 0-15 km/h

Objects: >150

Scene description: Parking garage exit

Task: Count exiting vehicles and recognize the number plate

Conditions: daylight/indoor

Camera setup:

  • 2688 × 1520 resolution

  • height: 3 m

  • distance: 10 m

  • focal length: 6 mm

Object velocity: 0-15 km/h

Objects: >150

Ground truth / Model result / Correct

  • 85BZHP / 85BZHP / YES

  • BNW525 / BNW555 / NO

  • DY741WJ / DY741WJ / YES

  • GU278MB / GU278MB / YES

  • FW749XA / FW749XA / YES

  • ERZM551 / ERZM55 / NO

Number Plate Area Code

Early Availability Feature for supported countries

In addition to the number plate raw text and country code, we now support the number plate area code. Some number plates include an area code associated with a geographical area, such as "W" for Vienna (Austria) or "M" for Munich (Germany).

The following 13 countries are supported

  • Austria

  • Bulgaria

  • Switzerland

  • Czech Republic

  • Germany

  • Greece

  • Croatia

  • Ireland

  • Norway

  • Poland

  • Romania

  • Slovakia

  • Slovenia

How does this work?

  • For supported countries, we detect spaces between letters and parse the raw license plate text according to a predefined format.

  • In the case of countries that are not supported (e.g. Italy), the generated event won't contain an area code.

How to configure?

All use cases based on ANPR are supported, no additional configuration is required.

Upgrade Jetpack from 4.4.1 to 4.6.0

Data Analytics

Overview about Data Analytics in the SCC

Data Analytics allows you to digitize your traffic and parking scenarios and visualize both live data and historical trends analyses. In addition, you have the possibility to organize your parking areas, intersections or counting areas into different groups and display them in a list and map view.

As we know that with our detection you can as well gather data for other use cases we provide the option to use Data Analytics as well for any generic scenario.

You can create Dashboards for any of your scenarios and display the data as you need them to regularly check the data according to your needs.

See more information on the following pages.

Creation and organization of dashboards

How to manage dashboards in Data Analytics

Within Data Analytics, you can create several dashboards (digital parking & traffic scenarios) and organize them in Dashboard groups in order to keep a structured control of any Analysis across your parking areas or cities.

Create, edit and delete Dashboard groups

Dashboard groups are available to bring structure to your collection of dashboards. You can create dashboard groups on the top bar by clicking on the + symbol. Name your dashboard group and simply click on Save.

The Default dashboard group can't be renamed or deleted.

By clicking on the dashboard group, you can simply navigate between the dashboards of each group. The groups are sorted alphabetically.

If you have chosen a group, you can edit the name by clicking on the pen next to the name.

By clicking on the x symbol at the Dashboard group navigation bar, you can delete the dashboard group. You will be asked to confirm the deletion.

Deleting a dashboard group will delete any dashboard linked to that group.

Create, edit and delete Dashboards

Dashboards can be created by clicking on New Dashboard. Each Dashboard has three tabs (Overview, Cameras and Configuration). At the creation of the Dashboard, you will be directed to the Configuration tab in order to set any specific information for your dashboard. After you have set your configuration, go to Cameras tab in order to add one or several cameras which should be taken into consideration for the dashboard.

As soon as you have allocated the cameras to the dashboard, you can start to create your dashboard by adding widgets in the Overview tab.

The overview tab will be your actual Dashboard where you can customize and analyze the data according to your needs.

See more information in the next section of the documentation.

You can add one or more cameras to your dashboard. This needs to be done in order to select from which cameras you want to analyze the data.

Select the cameras in the drop-down and click on add cameras.

The Scenario can't be changed in editing mode. So take care to choose the right scenario during dashboard creation

Link the Dashboard to the dashboard group of choice. The dashboard group you have been located in during creating the new dashboard will be the preselected group.

Paste the coordinates of the installation location to the dashboard configuration. The coordinates will help you to navigate across the dashboards on a map view. On top, the coordinates will automatically set the local time in your dashboard. If no coordinates are set the time will be displayed in UTC timezone.

A camera can be added to various dashboards. This will allow you to use cameras for multimodular purposes. E.g.: Having the traffic counted as well as the bicycle lane right next to it.

Upgrade IotEdge from 1.1 to 1.4

# Remove the old IoT Edge 1.1 packages
apt remove iotedge libiothsm-std
# Install the IoT Edge 1.4 packages
apt update
apt install aziot-identity-service=1.4.1-1 aziot-edge=1.4.3-1

# Hand over the existing device CA certificates and key to the new aziot services
chown aziotcs:aziotcs /var/lib/iotedge/hsm/iot-edge-device-SwarmEdgeDeviceCA-full-chain.cert.pem
chown aziotcs:aziotcs /var/lib/iotedge/hsm/swarm-iot.root.ca.cert.pem
chown aziotks:aziotks /var/lib/iotedge/hsm/iot-edge-device-SwarmEdgeDeviceCA.key.pem

# Import the old 1.1 configuration and apply the new config.toml
iotedge config import
iotedge config apply -c /etc/aziot/config.toml

FAQ

  • Can I delete /etc/iotedge/config.yaml after the upgrade?

    • No, please keep the file. Our containers still need access to the file. In a future version we will remove this dependency.

White paper for use cases

In this section, you can find our White papers for different use cases

How to use Azure IotHub as Custom Broker

Azure IotHub specialities

  • The IotHub device ID must correspond to the MQTT client ID

  • You can only connect with one client for a given IotHub device

  • The SAS token expires after a pre-defined time and needs to be refreshed. You need to update the token and update the MQTT password once in a while for every Stream in the Control Center.

What does this mean in terms of Swarm?

  • You can either:

    • create for every stream a corresponding IotHub device ID (recommended and used below) OR

    • create random IotHub device IDs and assign one to each stream by setting the MQTT client ID.

Steps

  1. Create an IotHub device, copy the stream ID from the Control Center

    1. az iot hub device-identity create --hub-name <hubname> --device-id "<stream-id>" --edge-enabled

  2. Generate a SAS token for the IotHub device.

    az iot hub generate-sas-token --hub-name <hubname> --duration 51840000 --device-id <stream-id>

  3. Monitor incoming events

    az iot hub monitor-events --hub-name <hubname> -d "<stream-id>"

    1. Make sure you receive messages at this point. Don't proceed unless this step works.

  4. Enter URL, username, password and topic as custom broker in the Control Center.

Upgrading Jetpack is documented by NVIDIA.

Your cameras will then be displayed, and you will have the same view on the cameras as you have in the device configuration. You can see the frame of the camera and directly jump to the scenario configuration, where you can change the event type configuration.

Name your Dashboard and give it a description in order to remember what the dashboard includes. Choose the Scenario according to the use case you want to cover with the dashboard (Traffic, Parking or Generic).

For the Parking Scenario, you can set further parameters. See more specific information on the dedicated Parking Scenario page.

For 2023.1 and later, IotEdge 1.4 is required. For new devices, our installer handles the installation process. Existing VPX devices must be upgraded by the partner. P10X/OP10X are upgraded by Swarm with the rollout of 2023.1.

Please read the official documentation from Microsoft first; roughly, these steps are required. Note: you don't need the package defender-iot-micro-agent-edge.

We set the stream ID as MQTT client ID (default behaviour). You can overwrite the MQTT client ID if needed.

Test with an MQTT client (e.g. mosquitto) to publish a message. We used this root.pem file.

mosquitto_pub -p 8883 -i <stream-id> -u '<hubname>.azure-devices.net/<stream-id>/?api-version=2021-04-12' -P '<SAS token>' -t 'devices/<stream-id>/messages/events' --cafile root.pem -d -V mqttv311 -m '{"swarm":"test"}'
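
The same test publish can be done from Python with paho-mqtt (1.x API shown), mirroring the mosquitto_pub call above; hub name, stream ID, SAS token and the root.pem CA file are placeholders:

import json
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style constructor

HUB = "<hubname>.azure-devices.net"
STREAM_ID = "<stream-id>"          # also the IotHub device ID and MQTT client ID
SAS_TOKEN = "<SAS token>"

client = mqtt.Client(client_id=STREAM_ID, protocol=mqtt.MQTTv311)
client.username_pw_set(f"{HUB}/{STREAM_ID}/?api-version=2021-04-12", SAS_TOKEN)
client.tls_set(ca_certs="root.pem")   # root CA for Azure IoT Hub (see the Microsoft documentation)
client.connect(HUB, 8883)
client.loop_start()

info = client.publish(f"devices/{STREAM_ID}/messages/events",
                      json.dumps({"swarm": "test"}), qos=1)
info.wait_for_publish()
client.loop_stop()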


How to access the debug output?

How to record debug videos for calibration and performance checks of the configured scenarios

What is the debug mode?

For more detailed calibration insights and performance check, you can use the advanced debug mode.

Debug mode lets you visually see the SWARM software in action. It is designed mainly for debugging on the Swarm side, but can also be used for adjusting and understanding SWARM.

Remember GDPR/DSGVO, whenever you work with the debug mode! Technically, it is possible to record the debug mode and breach data privacy!

Access Debug Stream

With the stream ID in the path, you can choose the dedicated stream.

FAQs

Frequently Asked Questions

In this section you will find all frequently asked questions. We hope to be able to support you with this. This area will be updated continuously.

1. Solutions

What are the requirements for using your product?
  • Camera(s): This is essential, of course. The number of IP cameras needed entirely depends on the use case. You can find more details regarding camera specifications in our technical documentation. We are happy to support you with this and cameras can be delivered directly via Swarm Analytics.

  • Electricity: Naturally, a power supply is needed. This can be provided by either a barrel jack power adapter (i.e. for our SWARM Perception Box P101), a 230V power connection (speaking of the SWARM Outdoor Perception Box OP101AC), or even via solar- or battery-powered systems, for which our OP101DC is particularly well suited.

  • Internet: An internet connection is mandatory to get the system up and running. Without an internet connection, one cannot access the system and any configuration will be impossible. The connection can be done by either cable (LAN) or mobile connection (LTE).

2. Product: Sensor

How and where are the parameters processed?

The generated video from the camera is processed exclusively on the Perception Boxes. No video data is transferred to a server/cloud or stored. The Perception Box is connected to a suitable camera on site. Events from the configured event types are transmitted to the Azure Cloud via MQTT and stored in a database there. This enables visualization and evaluation centrally and conveniently in the browser of your choice via SWARM Data Analytics for all Perception Boxes.

Alternatively, events can be transmitted via an MQTT server provided by the customer. In this case, further processing of the raw event data is the responsibility of the customer and enables even more use-cases and custom integrations.

Which vehicle classes are recognized and how are they defined?
How about my other custom class: Can you train on that?

We understand that custom classes can play a crucial role in meeting specific use-case requirements. Our dedication to delivering personalized solutions for our customers and partners is paramount, and we constantly strive to enhance our product and cater to the distinctive needs. Nevertheless, it’s important to acknowledge that the progress of mobility and traffic behaviors and developments significantly influence the types of classes we teach our models to recognize. Therefore, we keep a close eye on emerging mobility trends and adapt our solution accordingly, most recently by adding e-scooters as an additional class.

How accurate are you?

Accuracy varies for each use case and is, of course, also dependent on environmental conditions, no matter how precisely the Swarm algorithm works.

How can I improve my accuracy? Which environmental conditions have to be met in order to get great results?

Our technical documentation provides help for configuring the respective use case:

To what extent is detection limited at night and/or in bad weather (i.e. heavy rain, fog, etc.)?

At night, the accuracy of detection depends on the available lighting and camera settings. With sufficient lighting and camera settings according to our specifications, the loss of accuracy can be reduced to a minimum. Heavy rain, fog as well as other extreme weather situations can be partly compensated.

Basically, for all of the above scenarios, anything that can be clearly identified by the human eye can also be detected by the software. If you have example videos of your use-cases, we can test them through our software and measure the accuracy over a predefined period of time.

Do you have any recommendations for cameras?

The following sections in our technical documentation provide configuration details and recommendations for cameras depending on the respective use case:

3. Product: Control Center

3.1. Device Configuration

Is it important that the entire vehicle or object travels “inside” the zone or is it sufficient if the center-point of the vehicle does so?

The center-point of the vehicle (or other object) is tracked, therefore it is important that the center-point of the vehicle moves in the entry zone and leaves in the exit zone. Bear in mind that the camera view is a 2D representation of the 3D world, so the zones often need to be larger than you expect. The center-point of the vehicle may be above road level.

Is it okay if the vehicles stand in a zone for a while, i.e. can I create a zone “in front” of the traffic light so that the vehicles are still in the “entry” zone?

It is okay if vehicles (or other objects) stand still for a while in a zone. When they start moving again, they will continue to be tracked. If possible, it is best to avoid zones that cover parked vehicles. This can cause performance problems as vehicles are constantly tracked and can be confused with each other if another similar vehicle passes nearby.

Can zones overlap or be very close to each other?

Yes, zones can be very close to each other or overlap. However, an object (e.g. a car) must be detected as entering and exiting two different zones (i.e., without overlapping) for its track to be recorded.

Is it only possible to gather entry and exit data, or also the density of groups of people?

Entry and exit counts are very useful for identifying the total number of people in a specific area (e.g. outdoor pool, market, or stadium). If COVID-related spacing rules are to be followed, the distribution of people is important. Therefore, the Swarm Analytics device also directly outputs the distribution of objects in an area. However, partial roofing or large sunshades can make it difficult to detect the distribution of people.

4. Product: Support and Maintenance

What are the next steps after my order?

As soon as you receive your order, you can start with the installation process. For example, the installation and configuration of the SWARM Outdoor Perception Box is simple and can be completed in 30 to 60 minutes. The following requirements are necessary for an outdoor installation, additional to the box itself:

  • Suitable IP camera with PoE LAN cable

  • Continuous 230V power supply

  • Standard miniSIM card with more than 600MB data volume per month

  • Screwdriver for mounting the clamps

  • Laptop or tablet for camera alignment

Where can I find your terms and conditions; what are the regulations for RMA, guarantees, etc.?
What are the electrical and/or building requirements to mount the system?

5. Data Protection

How do you ensure GDPR compliance?
Are the camera images stored? What is stored and where?
What biometric data is collected?

We take data protection extremely seriously. The algorithms of our technology are intentionally developed in such a way that no biometric characteristics are collected and thus no persons can be identified. That is also why our technology does not use facial recognition at all.

Do I need a data protection approval or a data protection impact assessment?

No. There is no data collection permit or reporting system required. As long as the data is not linked to other data sources, there is no need for a data protection authorization or a data protection impact assessment (see DSFA-AV). The image generated by the camera exists for only about 50 milliseconds and is neither saved nor forwarded. The output of the software is textual data, which is then visualized in the dashboard of the user. Therefore, no personal data is collected.

Is it possible to "hack" the system?

The system only sends data, which means it is invisible to an active attack. It is also not possible to connect to the camera.

6. Offering and Pricing

If the SCC data subscription is terminated, can I keep the data of the subscription period for 3 years?

No. The data will be stored and made accessible for 3 years from their generation date, if an SCC data subscription is active. We strongly recommend downloading your data and transferring it to a different destination, before you end your SCC data subscription.

What’s the difference between the two data subscriptions?
What are the typical costs of a system?
Could we get your system for free for testing?

Once the debug mode is enabled, you can access the stream on all IPs that are configured on the SWARM Perception Box (or your own hardware) via a browser on port 8090, with the stream ID as path: http://IP:8090/STREAM-ID

You may also use any kind of video streaming application, like VLC, to access the stream.

How many camera streams can be handled with one box?

For a detailed answer to this question, please visit the related section in our technical documentation.

Which objects can be recognized? How does the detection work exactly?

Objects such as vehicles, cyclists or people are detected and classified by models pre-trained by us. On the technical side, the SWARM Computer Vision software first looks at individual frames of the video stream in real time. The image is analyzed by an artificial neural network and relevant objects (vehicles, pedestrians, etc.) are detected.

In a second step, the objects are classified more precisely, e.g. to distinguish cars from trucks or motorcycles from bicycles. Subsequently, the objects are combined over several frames to detect the movement of objects. These so-called ‘tracks’ are used to perform relevant events and counts with the SWARM Event Engine. This data is generated by so-called event triggers and encrypted. Only the anonymized data, without inference to the original camera image, leaves the device and can be used for evaluation and analysis. The main advantages of this Computer Vision approach are its flexibility and extensibility.

All information on the camera image that is visible to humans could thus also be used for automated analysis. For example, in order to detect new objects, no additional sensor is required, but only a software update - in terms of scalability and adaptation to possible further requirements, this is a major advantage over other approaches.

With the SWARM software, multiple motorized as well as non-motorized traffic classes can be surveyed - from cars to trucks with trailers to pedestrians. For motorized traffic in Germany, we have followed the BAST classification guidelines as far as visually possible, but we also offer other standards. More details regarding the object classes can be found in our technical documentation.

We appreciate your understanding that training new classes is a complex process that requires extensive research and testing to ensure that our models perform accurately and reliably with our known level of quality. If you have a relevant use-case in mind, feel free to get in touch with our Sales team to discuss the opportunities and scaling of the project.

Nevertheless, we can work with an accuracy between 95 and 99% for standardized applications, such as parking lots or traffic counting on designated highways and urban streets.

See the next questions for further information and support. For further information regarding our technology’s accuracy, we recommend this section in our technical documentation.

→ Traffic Insights

→ Advanced Traffic Insights

→ Parking Insights

→ Traffic Insights

→ Advanced Traffic Insights

→ Parking Insights

→ People Insights

Please find more detailed information regarding the first setup in our technical documentation.

You can find our General Terms and Conditions on our website. Our Subscription and Support Terms can also be found there. Please let us know via email if you need further documents and information.

You can find all electrical and building requirements to mount the different Perception Boxes in our product datasheet. Please also have a look at the quick start guide, where the setup is explained step by step. If you encounter issues in this process, have a look at our troubleshooting guidelines. Please feel free to contact our support if you need further assistance.

→ SWARM Perception Box P101

→ SWARM Perception Box P401

→ SWARM Outdoor Perception Boxes OP101AC and OP101DC

→ Troubleshooting Guidelines

→ SWARM Support Center

No videos are stored. In addition, the system only sends data, so that active attacks from the outside are prevented. While the AI collects information about the objects detected in the camera images, it does not collect biometric data or video footage. More information can be found in our GDPR guidelines.

Only the data of the configured events are stored. Events can be configured around traffic counts (motorized and non-motorized traffic), origin-destination analysis, and information about objects in a given zone. It is also possible to specify additional parameters that will be included in the event output. Examples are: speeds, number plates in parking lots for parking time analysis, and the maximum number of objects in a zone for a utilization analysis. The transmission of the event data from the Perception Box to the cloud takes place in JSON format to an MQTT broker. More information can be found in our GDPR guidelines.

The main difference lies in the data retention. While our standard model (SWARM Perception Subscription) offers a data retention period of 30 days, this can be extended to three years with the SWARM Data Subscription. Further details can be found on our website.

The final costs depend, of course, strongly on the scope and timeframe of the project. You can find our pricing model with all cost factors on our website, as well as a project example with sample costs.

Needless to say, we will provide you with support for all installations and tests, and we are also happy to send hardware for testing purposes. However, we ask for your understanding that we cannot provide this free of charge and that the expenses have to be covered. Please feel free to contact our Sales team for further details.


Get in touch

How to contact the SWARM team

Contact Support

Traffic Counting

Use case for counting traffic on dedicated urban & highway streets with the classification of vehicles according to our Classes/Subclasses

Introduction

Would you like to know the traffic situation of an urban street or highway? SWARM software provides the solution to get the number of vehicles passing on the street, split by object type (classes) and direction.

In order to efficiently organize and plan strategic infrastructure and traffic installations, gathering accurate and reliable data is key. Our traffic counting solution builds on Artificial Intelligence based software that is designed to detect, count and classify objects taking part in road traffic scenarios such as highways, urban and country roads, broad walks, intersections, and roundabouts. Generated traffic data can be used as an information basis helping decision-making processes in large Smart City projects as well as to answer basic questions about local traffic situations such as:

  • How many trucks are using an inner-city intersection every day?

  • Smart Insights about traffic load — Do I need to expand the road?

  • How many people are driving in the wrong direction?

  • Why/When and Where are people parking/using side strips on the highway?

  • What areas are more frequently used than others on the road?

  • …

Background

Technology-wise, our traffic counting system consists of the following parts: object detection, object tracking, counting objects crossing a virtual line in the field of interest, and object classification. The following sections briefly describe these pretrained technologies used for traffic counting.

Object Detection

The main task here is to distinguish objects from the background of the video stream. This is accomplished by training our algorithm to recognize a car as an object of interest, in contrast to a tree, for example. This computer vision technology deals with the localization of the object: each detected item is framed in the analyzed video frame and labeled with one of the predefined classes.

Object Classification

The recognized objects are furthermore classified to differentiate the types of vehicles present in traffic. Based on weight class, axles and other visual features, the software assigns each recognized object to one of the predefined, trained classes. For each item, our machine learning model outputs one of the object classes detected by SWARM.

Object Tracking

Where was the object initially detected, and where did it leave the retrieved camera image? We equip you with the information to answer this question. Our software re-detects the same object frame after frame and in this way tracks it through the stream. The gathered data enables you to visualize the exact path of an object, e.g. for generating heat maps, analyzing frequented areas in the scene and/or planning strategic infrastructure needs.

Crossing Virtual Lines

Another technology available in our traffic counting is used to monitor the streamed scene. By manually drawing a virtual line in our Swarm Control Center (SCC), we offer the opportunity to quantify the objects of interest crossing your counting line (CL). When objects are successfully detected and tracked until they reach a CL, our software triggers an event and sets the counter for this line accordingly.

Technology Specifics

In traffic counting we distinguish between the following use cases: highway, roundabout, urban traffic and country road. We measure the accuracy values individually for each scene. This ensures that every new version of our model not only improves accuracy in some use cases but delivers more stable and more accurate measurements across all possible scenarios.

Scenes

Highway

  • Scene description: Highway with four lanes

  • Task: Count cars and trucks in both directions

  • Conditions: daylight

  • Camera setup: 1280×720 resolution, 6 m height, 20 m distance

  • Object velocity: 60-130 km/h

  • Objects: >900

Roundabout

  • Scene description: Roundabout with four exits

  • Task: Count cars and trucks in all eight directions

  • Conditions: daylight

  • Camera setup: 1280×720 resolution, 4 m height, 30 m distance

  • Object velocity: 5-30 km/h

  • Objects: >100

Performance

Our Traffic Counting solution achieves an accuracy of over 93.59%*.

*SWARM main classes detected only (Person, Rider, Vehicle — PRV)
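For illustration only, and not the authoritative procedure (that is documented in the “How do we measure Performance?” section): a counting accuracy figure of this kind can be thought of as one minus the relative deviation between the automatic count and a manually established ground truth, as in the minimal sketch below.

```python
# Illustrative only: one simple way to relate automatic counts to a manually
# counted ground truth. The real evaluation procedure is documented in the
# "How do we measure Performance?" section and may differ from this sketch.
def counting_accuracy(counted: int, ground_truth: int) -> float:
    """Accuracy as 1 minus the relative counting error, floored at 0."""
    if ground_truth == 0:
        return 1.0 if counted == 0 else 0.0
    error = abs(counted - ground_truth) / ground_truth
    return max(0.0, 1.0 - error)

print(f"{counting_accuracy(912, 953):.2%}")  # -> 95.70%
```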

Application Limitations

  • Crossing line or object located behind a large obstacle

  • Object (PRV) at a great distance from the camera

  • Objects overlap strongly, so our detection model detects more than one object as a single one

  • Color and/or shape of objects are very similar to the background, so our detection model is not able to distinguish between object and background (e.g. grey cars, persons dressed in grey/white)

  • Different objects (classes) look very similar from certain perspectives, e.g. single-unit trucks are barely distinguishable from articulated trucks when only seen from the front or behind

Please log any issues, questions or potential bugs within our SWARM Support Center.

If this is your first time contacting us, we will need to create an account for you. In this case, kindly find our support email here: support@swarm-analytics.com

Please find any details around our subscription and support terms in the official document.

Our performance laboratory (“Performance Lab”) is set up like a real-world installation. For each scene, we send a test video from our Happytime RTSP Server to all of our supported devices using an ethernet connection. The following sections provide an overview of two scenes before we present the performance values gathered in our accuracy measurement tests.

SWARM Performance Lab structure

When measuring the performance of our traffic counting solution, a crucial point is the selection of the scene. We choose real-world scenarios from some of our installations as well as publicly available video material. We make sure that accuracy values obtained in our test laboratory reflect real-life use cases in the best possible way. All video material used to test performance fulfills the specification requirements, which can be found in our set-up documentation.

Highway Scene for Traffic Counting
Roundabout Scene for Traffic Counting
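The test videos are delivered to the devices as RTSP streams. If you want to sanity-check such a stream yourself before pointing a device at it, the following minimal sketch (an assumption for illustration, not part of the SWARM software) reads the stream with OpenCV and prints its resolution and frame rate; the URL is a placeholder.

```python
# Minimal sketch: check resolution and frame rate of an RTSP test stream
# with OpenCV before sending it to a device. The URL is a placeholder.
import cv2

RTSP_URL = "rtsp://192.168.0.10:554/test-scene"  # placeholder address

cap = cv2.VideoCapture(RTSP_URL)
if not cap.isOpened():
    raise RuntimeError(f"Cannot open stream: {RTSP_URL}")

width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
print(f"{width}x{height} @ {fps:.1f} fps")  # e.g. 1280x720 @ 25.0 fps
cap.release()
```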

In order to understand how to interpret our accuracy numbers, we have provided some more technical details on our traffic monitoring solution above. The detailed accuracy calculation and an explanation of our test setup are documented in our “How do we measure Performance?” section.

In general, there are several reasons why traffic counting systems cannot be expected to reach 100% accuracy. These reasons can be split into various categories (technological, environmental and software-side) that lead either to missed counts or to over-counts. Given the technical and environmental prerequisites specified in our set-up requirements, we could identify the following limitations in the provided software.

  • Camera perspective is not matching
