SWARM Analytics Technical Documentation
This documentation is outdated!
Head over to https://docs.bernard-gruppe.com/
Use this technical documentation to learn about, apply, and share SWARM technology.
This documentation is a technical guideline for partners and customers who are interested in integrating our software in a new or existing data analyzing and IoT environment.
Release Date: 13.12.2023
In addition to the number plate raw text and country code, we now support the number plate area code. Some number plates include an area code associated with a geographical area, such as "W" for Vienna (Austria) or "M" for Munich (Germany).
The following 13 countries are supported:
Austria
Bulgaria
Switzerland
Czech Republic
Germany
Greece
Croatia
Ireland
Norway
Poland
Romania
Slovakia
Slovenia
For supported countries, we detect spaces between letters and parse the raw license plate text according to a predefined format.
In the case of countries that are not supported (e.g. Italy), the generated event won't contain an area code.
All use cases based on ANPR are supported, no additional configuration is required.
The Data Analytics widgets "Journey Distributions" and "License Plates" allow segmenting by "License Plate Area"
If you are using a custom broker, the event schema has been adjusted to contain the area code.
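To illustrate how a consumer on a custom broker might handle the extended schema, here is a minimal sketch in Python. The field names are illustrative assumptions only; consult the actual event schema of your release for the real structure.

```python
# Hypothetical sketch of an ANPR counting-line event carrying the new area code.
# Field names ("numberPlate", "areaCode", ...) are assumptions for illustration.
import json

event = {
    "eventType": "crossedLine",
    "timestamp": "2023-12-13T08:15:30Z",
    "class": "car",
    "numberPlate": {
        "raw": "W 12345 A",      # raw plate text as read by OCR
        "countryCode": "AT",     # ISO 3166 alpha-2 country code
        "areaCode": "W",         # geographic area code, e.g. "W" for Vienna
    },
}

# Events from unsupported countries carry no area code, so guard for its absence.
area = event["numberPlate"].get("areaCode")
print(json.dumps(event, indent=2))
print("Area code:", area if area else "not available")
```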
Improved classification accuracy for the classes person and bicycle (Traffic & Parking Standard/Accuracy+)
Fixed Data Analytics: some queries led to inconsistent data
Fixed the rule engine for Region of Interest occupancy triggering a hardware output
Fixed P100/OP100 slowdown introduced with 2023.3
Reduced downtime during device updates
Fixed debug video output pixelation of counting line overlays
Fixed license notification for not enough SPS licenses
Technical Architecture
The SWARM Perception Platform consists of the following major components:
SWARM Control Center (SCC): Manage and configure Perception Boxes and analyze your data with Data Analytics. For both, we provide APIs for integration.
SWARM Perception Agent: The SWARM software running on our products P101/OP101/VPX. The video from an RTSP camera/USB camera is processed with the help of deep learning. Events are generated and sent via MQTT to either Data Analytics or a custom MQTT broker. Single or multi-camera processing is supported. The engine is configured solely through the SCC.
Data Analytics API (high level)
Events are sent to and stored in an Azure Cloud environment managed by Swarm
Events are processed and stored by Swarm
The API enables easy integration with third party systems
MQTT (low level)
Requires operating an MQTT server
Requires storing/processing raw events
Enables processing at the edge for time-critical and/or offline use cases
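As a starting point for the low-level MQTT integration, the following sketch subscribes to events from a Perception Agent on your own broker. It uses the paho-mqtt 1.x client API; the broker address and topic are assumptions, so use the values you configure for your custom broker in the SWARM Control Center.

```python
# Minimal MQTT subscriber sketch (paho-mqtt 1.x style callbacks).
# Broker host and topic below are assumptions, not fixed product values.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.local"   # assumption: your own MQTT broker
TOPIC = "swarm/events/#"               # assumption: topic configured for the agent

def on_connect(client, userdata, flags, rc):
    client.subscribe(TOPIC, qos=1)     # QoS 1: at-least-once delivery

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    print(f"{msg.topic}: {event}")     # hand the event to your own processing here

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```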
The architecture is based on the following principles:
Centralized data analytics, device configuration, and update management.
Hosted in the Microsoft Azure Cloud, maintained by Swarm.
Decentralized camera stream processing at the edge
Generated data (Events) and management traffic are decoupled.
Support for heterogeneous network infrastructure
Scale from one to thousands of SWARM Perception agents
Release Date: 02.04.2024
The device schedule allows you to define time slots when the device should be in operational mode (device state: running) or in power saving mode (device state: standby).
The power saving mode is mostly relevant for the battery powered system BMA Mobil since it extends the usage time for a single charge.
In our measurements, the BMA Mobil reduces the power consumption during standby by about 55%.
Open a device in the Control Center and select the Schedule tab
Switch the toggle to Enabled and select desired time slots when the device should be operational
Save Schedule
The device reduces the power consumption by pausing the video signal processing.
Functions that are not available during standby:
No events will be generated
No camera image is visible
No event triggers can be configured
Functions that are available during standby:
The device will be online in the Control Center, including available device health, logs, and reboot functionality.
Wake up the device from standby (→ set the toggle to Disabled)
In our measurements the BMA Mobil consumes 1,07A when active and 0,48A when in standby.
Examples
With a 100 Ah battery degraded to 90 Ah we get the following battery lifetimes:
Always active: 90 Ah / 1,07 A ≈ 3,5 days
Always standby: 90 Ah / 0,48 A ≈ 7,8 days
13 h active, 59 h standby (example from screenshot, total 3 days): 13 h × 1,07 A + 59 h × 0,48 A ≈ 42 Ah out of 90 Ah (~50%)
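The same arithmetic can be scripted when planning a schedule. This is a back-of-the-envelope sketch using the measured currents from above and a 90 Ah usable capacity; your battery capacity and currents may differ.

```python
# Battery lifetime check for a device schedule, using the figures quoted above:
# 1.07 A active, 0.48 A standby, 90 Ah usable capacity (example values).
ACTIVE_A, STANDBY_A, CAPACITY_AH = 1.07, 0.48, 90.0

def consumed_ah(active_hours: float, standby_hours: float) -> float:
    return active_hours * ACTIVE_A + standby_hours * STANDBY_A

# Example from above: 13 h active + 59 h standby over 3 days
used = consumed_ah(13, 59)
print(f"Consumed: {used:.1f} Ah of {CAPACITY_AH} Ah ({used / CAPACITY_AH:.0%})")

# Always-on vs. always-standby lifetime in days
print(f"Always active:  {CAPACITY_AH / ACTIVE_A / 24:.1f} days")
print(f"Always standby: {CAPACITY_AH / STANDBY_A / 24:.1f} days")
```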
We have improved the latency of the BMA working together with the BMC to < 1 s, measured from the moment a vehicle is physically present in a zone to the point when the BMC switches a dry contact.
Fixed: On Curiosity startup the first Region of Interest event was always vacant (even if there were objects).
Improved: Curiosity performance by avoiding re-classification of already classified objects
Improved: The Number Plate Column for DataAnalytics Parking widgets is optional
2023.3 U1 → 2024.2: 155MB (BMA / P101 / Jetson Nano)
There are no API breaking changes
Release Date: 28.02.2024
Device Updates
Select individual devices for an update, e.g. this allows staged rollout site by site
Schedule the update, e.g. 5am to reduce the production impact to a minimum
Device Camera View
Auto-refresh the camera preview for up to 30 seconds for convenient calibration
The stream automatically expands when one stream is configured
Event triggers and the focus zone are now visible in every camera preview
Show or hide the event triggers/focus area
The focus area no longer hides event triggers
The MQTT QoS level (0 - at most once, 1 - at least once, 2 - exactly once) for custom brokers is configurable. Higher QoS levels are more reliable, but involve higher latency and bandwidth requirements.
Side Menu
API
Potentially breaking change: Authentication requests are throttled, limit: 100/hour. Note: The number of API calls for the Control Center is not throttled.
Potentially breaking change: Serial number format changed to string
The detection accuracy for Traffic & Parking (Standard and Accuracy+) has been improved, in particular for the classes car with trailer, truck (single unit, articulated, with trailer) and bicycle.
We improved the data quality by enhancing the tracking system for:
Non-linear movement (e.g. vehicles in roundabouts)
Short occlusions (e.g. vehicles occluded by poles or signs)
Track hijacking (one or more objects share the same track) when used in combination with the focus area
Calibration visualisation
Tracks are coloured in blue when they are too small to classify.
Long tracks will not be cut-off
This is not a marketing video, it shows the engine overlay we ship with 2024.1. No hand tuning has been performed.
Devices is the first item in the side menu. The last selected side menu item will be remembered locally in your browser cache.
The Virtual Perception Box (VPX) is a software-only product that will be provided as docker containers (SPS license).
You are fully responsible for the setup and maintenance of the hardware and software (OS, driver, networking, system applications, …)
Production-ready checklist for devices with VPX (Swarm Analytics recommendation)
Integrate remote management (e.g. VPN/SSH)
Integrate update management (e.g. Ansible) to update system packages like IotEdge, Jetpack, Docker
Device monitoring
Check security (firewall, security patches applied, strong passwords/certificates)
Quick Start Guide
If you have followed all the instructions above and the P101 is not online in the SWARM Control Center, or if you need to set a static IP configuration for the ethernet interface, please contact us via the SWARM Support Center.
Once Step 6 is complete, return to the Getting Set-Up page to continue with Device Configuration.
Quick Start Guide
The SWARM Perception Box is a managed black box; meaning you do not need to manage hardware/OS/driver/security patches. You also cannot access the system.
To get set up, follow these steps:
Your first step includes:
Mount the (O)P101/P401 to the desired location.
Ensure your (O)P101/P401 is online in your SWARM Control Center.
After the Perception Box is successfully connected to the SWARM Control Center, you must configure the camera used and the scenario you want to cover.
Release Date: 15.11.2023
Adaptive traffic control enables you to interface with hardware devices like traffic controllers using dry contacts. Use cases and benefits:
'Smart Prio' System: Prioritise certain traffic classes and ensure fluid traffic behavior in real-time (e.g. pedestrians, bicyclists, e-scooters, heavy traffic).
Simplify infrastructure maintenance: Replace multiple induction loops with a single Swarm Perception Box. The installation does not require excavation work, and reduces the maintenance effort/costs.
The following metrics are supported:
Device Uptime, Status, Restarts, Available Disk Space
Device Temperature (support for P101/OP101/Jetson Nano)
LTE Modem Traffic, Signal Strength and Reconnects (support for OP100/OP101)
Camera status and Camera processing speed (FPS)
Generated and Pending events
We improved the classification accuracy for both variants Standard and Accuracy+, especially for the classes: articulated truck, truck with trailer and car with trailer.
We fixed the class output: the event class is now person (previously head). Affected by this change are the devices P101/OP101/VPX; not affected are the devices P100/OP100.
We improved the accuracy due to higher resolution processing.
We improved the accuracy due to higher resolution processing.
The model has been deprecated and will not be updated in the future. The model will continue to work as long as there are devices using it. Please consider switching to the Traffic & Parking model.
Organize devices and generate events containing pre-defined device metadata. You can define up to five key-value pairs for a device. The keys and values can be freely defined; we support autocompletion for keys to avoid typos.
Track calibration overlays the last 100 object tracks on a current camera frame. This enables you to position event triggers (e.g. counting lines) for optimal results. We extend the functionality with a history over the last 24 hours.
With track calibration history enabled you will be able to access the track calibration for every hour of the past 24 hours.
You decide when your devices get an update. Please update soon in order to use the latest features (e.g. Adaptive Traffic Control, Track Calibration History) and to benefit from quality improvements (e.g. Model updates), bug fixes and security updates.
Devices will be updated automatically by 13.12.2023.
Control Center - User Management - Invite users and manage permission of existing users
Control Center - MQTT client id can be defined for custom MQTT brokers
Data Analytics - Speed-up of Origin-Destination widgets
Data Analytics - Fix widget visualisation of the chord diagram (OD) and line charts
Under some conditions an ROI in combination with a rule did not trigger events
Quick Start Guide
Once you have received the OP101AC, it is recommended that the instructions below are followed before mounting.
For the already configured scenarios, you can gather your data either via or create Dashboards in our Data Analytics. Those dashboards offer out-of-the-box visualizations as well as REST APIs to gather more insights.
To get started, check out the .
The device health metrics allow you to provide evidence for reliable and continuous data collection and to self-diagnose (e.g. stable network connectivity/power supply/camera connection/processing speed,... )
We replaced the model Parking (Single-/Multispace) with the model Traffic & Parking (Accuracy+). The model is tuned for accuracy while the processing speed (measured in FPS) is reduced. Ideally for use cases with less dynamic objects like and (Single- and Multispace).
Once defined, metadata allows you to filter the list of devices by metadata values and the generated events will include the pre-defined metadata for further processing by your application. For details have a look at the .
Find the .
Make sure to use a SIM card with sufficient data volume. For normal use, approx. 1 GB per month per device is needed.
If your specific use case is not specified, please select 1080p (1920 x 1080).
Enabling ONVIF during initial setup not only saves time for possible support cases in the future, but you can also benefit from applying Swarm's recommendations on camera parameters with a single click in your Control Center.
If you have followed all the instructions above and the OP101AC is not online in the SWARM Control Center, please check out our .
Please see our for the number of cameras that can be used.
Once your OP101AC is mounted and online, return to the page to continue with .
Quick Start Guide
If you have followed all the instructions above and the P401 is not online in the SWARM Control Center, or if you need to set a static IP configuration for the ethernet interface, please contact us via the SWARM Support Center.
Once Step 6 is complete, return to the Getting Set-Up page to continue with Device Configuration.
Use Cases for Traffic Scenarios
There are two major use cases for understanding the traffic situation across your city, urban streets as well as highways.
Quick Start Guide
Once you have received the OP101DC, it is recommended that the instructions below are followed before mounting.
Find the Product Datasheet here.
If you have followed all the instructions above and the OP101DC is not online in the SWARM Control Center, please check out our troubleshooting guidelines.
Once your OP101DC is mounted and online, return to the Getting Set-Up page to continue with Device Configuration.
Flash the system image JetPack 5.1.2 (L4T 35.4.1) onto your Jetson device. Follow the documentation from NVIDIA. Depending on your hardware capability, you have the option to use an SSD or internal storage.
Jetpack 5.1.2 requires at least 32GB or more of storage. In order to free up more storage for additional software, either use a bigger storage device or follow the NVIDIA reference.
With our installer script, installing the VPX agent is easy. Make sure to get the serial(s) from us in advance.
After the installation script is complete, the IoT Edge runtime will pull four docker containers as outlined below.
The device type needs to be set in the SWARM Control Center. Currently, only our support team can do that, so please create a support ticket.
Make sure that the container curiosity-arm64-jetpack5 is used.
You will see in the SWARM Control Center an "Unnamed Device" with the corresponding registration ID:
How to succeed in traffic counting including the classification of vehicles according to our Classes/Subclasses on dedicated urban & highway streets
Do you want to know the traffic situation of an urban street or highway? SWARM software provides the solution: the number of vehicles passing on the street, split per vehicle type (Classes) and direction.
For this use case, SWARM software provides you with all data needed for traffic counting - the counts of vehicles including classification. The counts are split between two directions (IN/OUT). Furthermore, several counts can be made in one video camera, e.g. counting each lane separately. On top, you have the opportunity to add a second counting line, calibrate the distance in between, and estimate the speed of the vehicles passing both lines.
Make sure to use a SIM card with sufficient data volume. For normal use, approx. 1 GB per month per device is needed.
If your specific use case is not listed, please select 1080p (1920 x 1080).
Enabling ONVIF during initial setup not only saves time for possible support cases in the future, but you can also benefit from applying Swarm's recommendations on camera parameters with a single click in your Control Center.
The configuration of the solution can be managed centrally in the SWARM Control Center. Below, you can see how the standard traffic counting needs to be configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration.
To receive the best counting and classification accuracy, the Counting Line should be placed approximately in the middle of the video frame so that vehicles from both directions are seen long enough for good detection and classification.
You can assign the IN/OUT directions as you prefer in order to retrieve the data as needed. In addition, you have the option to give custom names to the IN and OUT directions.
In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.
Here is an example of a Traffic counting widget. You have different options to choose the data you want for a certain time period as well as the output format (e.g. bar chart, table, ...).
If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.
You can visualize data via Data Analytics in different widgets.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the request to use for retrieving the data of each widget.
Object velocity: < 130 km/h
Day/Night/Lighting: Daytime / well illuminated / night vision
Indoor/Outdoor: Outdoor
Expected Accuracy (Counting + Classification, when all environmental, hardware, and camera requirements are met): Counting >95% (vehicles, bicycles); classification of main classes >95%; classification of subclasses >85%
Supported Products: VPX, P401, P101/OP101, P100/OP100
Frames Per Second: 25
System requirements for our virtual software offering
The VPX agent needs the following hardware and software requirements to be met.
Hardware (NVIDIA Jetson)
Supported NVIDIA Jetson devices: Orin Nano 4GB/8GB, Orin NX 8GB/16GB, Orin AGX 32GB/64GB
Memory: at least 4 GB available
Storage: at least 6 GB free
Hardware (x86)
CPU: x86-64
Memory: at least 4 GB available
Storage: at least 15 GB free
NVIDIA Workstation GPUs: RTX series (e.g. RTX A2000), Quadro RTX series (e.g. Quadro RTX 4000)
Software
Ubuntu 20.04 LTS
NVIDIA Driver Version 470 (or newer)
Docker 19.0.1+
IotEdge 1.4
You can find the benchmark results here: How many cameras can my Perception Box compute?
In case of uncertainty, please contact support
Flash the system image JetPack 4.6.0 (L4T 32.6.1) onto your Jetson device. Follow the documentation from NVIDIA. Depending on your hardware capability, you have the option to use an SD card or internal storage.
Make sure to match the exact JetPack version. Don't use newer or older versions.
With our installer script, installing the VPX agent is easy. Make sure to get the serial(s) from us in advance.
After the installation script is complete, the IoT Edge runtime will pull four docker containers as outlined below.
Make sure that the container curiosity-arm64-tensorrt is used.
You will see in the SWARM Control Center an "Unnamed Device" with the corresponding registration ID:
Check our system requirements first!
Install the NVIDIA drivers and CUDA
Install the NVDIA container toolkit
After you have followed the installation guidelines, you should be able to get similar output.
Install IotEdge 1.4 (only the package aziot-edge is required)
You will receive a ZIP file from Swarm with configuration files. (Replace $ID with the device ID you received from SWARM)
At this point check the IotEdge logs for any errors
You will now see your deployment in the SWARM Control Center as "Unnamed Device" with the registration ID:
At this stage, the IoT Edge runtime will pull the docker images and once finished the device can be configured in the Control Center.
Next steps: Configure your use case.
Use Cases for Parking Scenarios
With the SWARM Perception Platform, you are able to find a solution for each parking environment thanks to the following use cases.
How to succeed in setting up a Barrierless Parking Scenario to gather data about utilization
If you have a parking space where you simply want to know your utilization by making an Entry/Exit count, SWARM provides a solution for doing that easily. See for yourself:
For this use case, SWARM software provides you with all relevant data for your Entry/Exit parking space. The solution gathers the number of vehicles in your parking space as well as the number of vehicles entering and exiting your parking space for customizable time frames.
The vehicles are classified into all classes the SWARM software can detect. Nevertheless, consider that the following configuration set-up is optimized to detect vehicles, not people and bicycles.
How to succeed in traffic counting including speed estimates of vehicles according to our Classes/Subclasses on dedicated urban & highway streets
Do you want to know the average speed of your traffic in given areas? SWARM software provides the solution: the number of vehicles passing on the street, split into speed segments (10 km/h), or the average speed of the given count over an aggregated time period.
How to succeed in getting the information for crossings or roundabouts as well as vehicle movements from Origin (Entry) to Destination (Exit).
Do you want to know the flow of your intersections? The SWARM Perception Platform provides the solution: the number of vehicles starting in an origin zone and ending up in a destination zone, per vehicle type (Classes).
For this use case, SWARM software provides you with all data needed for Origin-Destination zones - the number of vehicles from one zone to another, including classification according to BAST/TLS standards.
Recommended camera requirements:
Pixels Per Meter (PPM) - a measurement used to define the amount of potential image detail that a camera offers at a given distance: > 30 PPM for object classes car, truck; > 60 PPM for object classes person, bicycle, motorbike. Using the camera parameters defined below ensures that the minimum required PPM value is achieved. Tip: Use the Axis lens calculator or generic lens calculator.
Camera video resolution: 1280×720 pixel
Camera video protocol/codec: RTSP/H264
Camera focal length: 2,8 mm - 12 mm
Camera mounting - distance to object center: object classes car, truck: 5-30 meters (2,8 mm focal length), 35-100 meters (12 mm focal length); object classes person, bicycle, scooter: 3-12 meters (2,8 mm focal length), 25-50 meters (12 mm focal length)
Camera mounting height: up to 10 meters. Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks.
Camera mounting - vertical angle to the object: <50°. Note: Setting the correct distance to the vehicle and camera mounting height should result in the correct vertical angle to the vehicle.
Camera mounting - horizontal angle to the object: 0° - 90°. Note: An angle of about 15° provides better classification results due to more visible object details (e.g. wheels/axes).
Wide Dynamic Range: Can be enabled
Recommended cameras:
HikVision Bullet Camera - 2,8 mm fixed focal length
HikVision Bullet Camera - 2,8 mm - 12 mm motorised focal length
Model / Configuration option:
Counting Line (optional speed)
ANPR: Disabled
Raw Tracks: Disabled
The configuration of the solution can be managed centrally in the SWARM Control Center. Below, you can see how the Entry/Exit parking with license plate detection needs to be configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration.
To receive the best counting and classification accuracy, the Counting Line should be placed approximately in the middle of the video frame so that vehicles from both directions are seen long enough for good detection and classification.
Consider that the IN/OUT direction of the counting line is important, as it is relevant for the calculation of the utilization (IN = entry to parking, OUT = exit of parking).
In our Parking Scenario section, you can find more details about the possible Widgets to be created in the Parking Scenario Dashboards.
You are able to visualize the data for any Entry/Exit you have configured with the Counting Lines. You can see the number of vehicles with their classes/subclasses that entered or left your parking spot, either aggregated over several Entry/Exits or separately per Entry/Exit. We deliver the two standard widgets Current & Historic Parking Utilization out of the box when creating a Parking Scenario Dashboard.
If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.
For this use case, SWARM software provides you with all data needed for traffic counting as explained in the .
The configuration of the solution can be managed centrally in the SWARM Control Center. Below, you can see how a standard traffic counting needs to be configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration.
When you have enabled the speed estimation, the Counting Line will transform into two lines with a distance calibration measurement. In order to get a good result on speed estimates it is crucial that the calibration distance between the two speed lines is accurate. The distance can be changed in the trigger settings on the left sidebar.
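For intuition, the estimate behind the double counting line is simply the calibrated distance divided by the time between the two line crossings. The sketch below illustrates this; the timestamps and the 8 m calibration distance are example values, not product defaults.

```python
# Speed estimate from a double counting line: calibrated distance between the two
# lines divided by the time between the crossings. Values below are examples.
from datetime import datetime

def estimate_speed_kmh(distance_m: float, t_first: datetime, t_second: datetime) -> float:
    dt = (t_second - t_first).total_seconds()
    return distance_m / dt * 3.6   # m/s -> km/h

t1 = datetime.fromisoformat("2024-01-10T12:00:00.000")
t2 = datetime.fromisoformat("2024-01-10T12:00:00.600")
print(f"{estimate_speed_kmh(8.0, t1, t2):.0f} km/h")  # 8 m in 0.6 s -> 48 km/h
```

An inaccurate calibration distance scales the result directly, which is why measuring the distance between the two lines carefully matters most.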
You can assign the IN/OUT directions as you prefer in order to retrieve the data as needed. In addition, you have the option to give custom names to the IN and OUT directions.
In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.
Here is an example of a Traffic counting widget split by 10 km/h groups of speed estimates. You have different options to choose the data you want for a certain time period.
If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.
The configuration of the solution can be managed centrally in the SWARM Control Center. Below you can see how an Origin-Destination analysis needs to be configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration accordingly.
For Origin/Destination, at least two zones need to be configured. The zones can be placed as needed on the video frame. Consider that the first zone the vehicle passes will be considered the Origin zone and the last one the Destination zone.
In addition, it is important that the zones are configured as large as possible so that there is enough space/time to detect the vehicles successfully.
In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.
You have different options to choose the data you want in the preferred output format (e.g. bar chart, table, ...). For Origin-Destination analysis there is a special chart - the Chord Chart - for visualizing the flow with the counts.
If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.
If you would like to integrate the data in your IT environment, you can use our SWARM API. In data discovery, you will find a description of the Request to use to retrieve the data of each widget.
You can visualize data via Data Analytics in different widgets.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the request to use for retrieving the data of each widget.
You can visualize data via Data Analytics in different widgets.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the request to use for retrieving the data of each widget.
You can visualize data via Data Analytics in different widgets.
Object velocity: < 30 km/h
Day/Night/Lighting: Daytime or well illuminated
Indoor/Outdoor: Indoor or outdoor
Expected Accuracy (Counting only, when all environmental, hardware and camera requirements are met): >95%. Only vehicles are considered; for parking spaces, people, bicycles and motorbikes are not part of our test scenarios as they don't occupy parking spaces.
Supported Products: VPX, P401, P101/OP101, P100/OP100
Frames Per Second (FPS): 12
Object velocity: < 130 km/h
Day/Night/Lighting: Daytime / well illuminated / night vision
Indoor/Outdoor: Outdoor
Supported Products: VPX, P401, P101/OP101, P100/OP100
Frames Per Second: 25
Object velocity: < 50 km/h
Day/Night/Lighting: Daytime / well illuminated / night vision
Indoor/Outdoor: Outdoor
Expected Accuracy (From Origin to Destination, when all environmental, hardware, and camera requirements are met): Origin-Destination counts + correct main class >85% (for vehicles); classification of main classes >95%; classification of subclasses >85%
Supported Products: VPX, P401, P101/OP101, P100/OP100
Frames Per Second (FPS): 25
How to succeed in setting up an Entry/Exit parking system with ANPR
If you have a parking space where you want to know your utilization and the parking times of your customers, you can use the SWARM solution as follows.
For this use case, SWARM software provides you with all relevant data for your Entry/Exit parking space. The solution gathers the number of vehicles in your parking space as well as the number of vehicles entering and exiting your parking space for customizable time frames.
The vehicles are classified into all classes the SWARM software can detect. Nevertheless, consider that the following configuration set-up is optimized to detect vehicles, not people and bicycles.
Thanks to the license plate recognition, the parking duration of your customers can be analyzed. On top of the license plate text, the license plate origin country as well as the license plate area code are available as meta information. The country codes follow the ISO 3166 alpha-2 standard. The country classification works with an excellent accuracy of 99%.
Find below some general settings for the installation of this use case. As the automatic number plate reading needs more detailed information, you will find additional and more detailed information on how to set it up on the following page:
Licence plate types: European countries. Note: square two-line license plates (e.g. motorbike) are not supported.
Object velocity: < 40 km/h with low-light conditions
Area of focus: Single lane when the camera is mounted on the side; two lanes when mounted above the center of both lanes
Day/Night/Lighting: Daytime or well illuminated only (min. 500 lux)
Indoor/Outdoor: Indoor & outdoor
Expected Accuracy (Counting + License Plate, when all environmental, hardware and camera requirements are met): >90%. Only vehicles are considered; for parking spaces, people, bicycles and motorbikes are not part of our test scenarios as they don't occupy parking spaces.
The license plate recognition is not supported on the P100 SWARM Perception Boxes. For this use case, a P401, a P101 SWARM Perception Box, or a VPX deployment on NVIDIA-based hardware is needed.
Automatic number plate recognition (ANPR) works in four steps. It needs to detect vehicles and license plates, read the plates, and trigger events.
1. Detect vehicle
Detect vehicles as cars, trucks, and buses and follow them in the video stream.
2. Detect license plate
For each detected vehicle, detect license plates and map them to the vehicles.
3. Read license plate
For each detected license plate, apply an optical character recognition (OCR) to read the plate.
4. Send event
If the vehicle crosses a counting line, send an event with the text from the detected license plate.
The main challenge for ANPR setups consists of clearly readable license plates. This means a sharp and well-illuminated image without occlusions or blurry objects is required to obtain correct results. The following guide shows how to set up our ANPR system and helps to avoid the most common issues.
The system is designed for two typical setups which are described here.
For this setup, the camera is mounted at around 2m height, as closely as possible to the side of the lane to avoid a high horizontal angle. If possible, enforce vehicles to stay in lane for the ANPR section, as switching lanes can lead to inaccurate results.
When positioning the camera above the cars (e.g. entry/exit of a garage) a maximum of two lanes can be covered.
To work properly, vehicles should drive straight through the scene to have the license plate visible in the entire scene. The camera should be at a height of 3m and facing vehicles directly from the front or back to avoid a high horizontal angle to the license plate.
Camera setup can sometimes be tricky and often requires some experimentation with the camera position and parameters to get optimal results.
The following sections describe common camera issues and how to avoid them.
License plates need to be visible with 250 pixel-per-meter (PPM). For a standard European plate, this gives us a minimum height of 30px and a minimum width of 100px to get good recognition results.
For camera setups with object distances within the specification, a FullHD (1080p) resolution is sufficient. In some cases, it might help to choose a higher resolution (4MP or 2K) for a sharper image.
It is recommended to check the size of license plate crops manually during the setup phase.
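A rough sanity check of the plate size can be scripted. The sketch below assumes a simple pinhole-style approximation (pixels-per-meter ≈ image width in pixels divided by the scene width covered at the plate's distance); the 7 m scene width is an assumed example, while the standard EU plate dimensions of roughly 0.52 m × 0.11 m and the 250 PPM target come from the text above.

```python
# Approximate check of license plate size in pixels for a given camera setup.
# Assumption: PPM ~= image_width_px / scene_width_m at the plate's distance.
def pixels_per_meter(image_width_px: float, scene_width_m: float) -> float:
    return image_width_px / scene_width_m

def plate_size_px(ppm: float, plate_w_m: float = 0.52, plate_h_m: float = 0.11):
    # Standard EU plate is roughly 0.52 m x 0.11 m
    return ppm * plate_w_m, ppm * plate_h_m

# Example: FullHD stream covering a 7 m wide section of the lane (assumed value)
ppm = pixels_per_meter(1920, 7.0)          # ~274 PPM, above the 250 PPM target
w, h = plate_size_px(ppm)
print(f"{ppm:.0f} PPM -> plate approx. {w:.0f}x{h:.0f} px")
```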
License plates need to be visible from a direct viewing angle. While small angles (<20° horizontal, <30° vertical) and tilting <5° can be handled, larger angles cannot work at all. If view angles get bigger, the system is more likely to mix up characters or is not able to recognize characters close to the edges.
For camera positions from the side only a single lane is recommended, while with camera views from above, a maximum of two lanes works.
Scene illumination has two major effects.
With good illumination, a faster shutter speed (shorter exposure time) can be chosen, and images are less blurred, especially for fast-moving vehicles.
Good lighting reduces the ISO value of the camera and images appear less grainy and sharper.
Some cameras offer additional illumination which can be useful. If the camera light is not sufficient, an external illumination of the scene is required.
Digital noise reduction (DNR) should be kept in a low range to further reduce graininess.
A fast shutter speed (short exposure time) is important for moving objects to get a sharp image and avoid motion blur.
While in general faster is better, the usable shutter speed depends on the available light in the scene.
Depending on vehicle speed, a shutter speed of 1/250 s is a bare minimum for moving objects below 15 km/h. For faster vehicles, up to 40 km/h, a shutter speed of 1/500 s is a good choice. For faster objects, an even faster shutter speed is required, which only works with good illumination.
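The guidance above can be condensed into a simple lookup. This sketch only reflects the thresholds quoted in the text (1/250 s up to roughly 15 km/h, 1/500 s up to roughly 40 km/h, faster beyond that); it is not an exhaustive camera-tuning rule.

```python
# Shutter speed suggestion derived from the thresholds stated above.
def recommended_shutter(speed_kmh: float) -> str:
    if speed_kmh <= 15:
        return "1/250 s or faster"
    if speed_kmh <= 40:
        return "1/500 s or faster"
    return "faster than 1/500 s (requires good illumination)"

for v in (10, 30, 60):
    print(f"{v} km/h -> {recommended_shutter(v)}")
```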
With the P101 we only support ANPR on vehicles passing at a maximum of 15 km/h.
To stream the camera image, data is encoded. Different encodings can save data and reduce image quality. For the ANPR use case, high image quality is required. Select H.264 codec and a high bitrate of >6000kbps for FullHD (1080p) content and >8000 kbps for 4MP video material with 25 FPS.
Additional features such as BLC and WDR are not recommended, as postprocessing can reduce details. If they are necessary, the impact on the video quality should be checked.
A constant bitrate (CBR) usually leads to a better quality than variable bitrate (VBR).
When setting up the camera, it is recommended to take a few short test videos in different lighting conditions (morning, midday, evening, night) to check if license plates are clearly visible in all conditions.
If license plates are not clearly recognizable for a human, ANPR cannot work. Make sure to get good and clear camera images for best results.
A suitable position of the event trigger (counting line) is essential for good ANPR results. If the line is positioned far in the back, the ANPR system has no time to detect and recognize the plate before an event is sent. If the line is in a position, where the license plate is only visible at a suboptimal angle, results will not be accurate.
For an optimal counting line position, a short debug video of the scene with 3-5 vehicles is required. In the analysis of the video, one should follow the vehicle through the optimal section with the best view on the plate (see Example 6). Just as the view on the plate gets worse (see Example 7), position the line right behind the center of the vehicle.
By this method, it’s guaranteed that the system can utilize the best video parts to detect and recognize the license plate and send the event just before suboptimal views worsen the result.
Our ANPR system is thoroughly tested under various conditions. In our test setup, we have around five different scenes, and accuracies are calculated on the basis of > 800 European vehicles. Overall accuracy means the percentage of correctly identified vehicles plus license plates compared to all passing vehicles with readable license plates.
Under the specified conditions, the system reaches >95% overall accuracy in slow parking environments and >90% in environments with fast vehicles.
For a detailed analysis of potential errors, see limitations described below.
A base limitation that cannot be solved is the general readability of license plates. Plates with occlusions, covered with dust or snow or incorrectly mounted plates cannot be read. Environmental limitations such as strong rain or snow which blocks the clear view on plates can also lead to inaccurate results.
There are a few hard limitations where the system cannot provide good results.
1. Illumination (day-only)
Currently, the system supports well-illuminated scenes only. This usually means day-only operation; however, it will also work for well-lit night scenes if the license plates are clearly recognizable.
2. Single-line plates only
License plates with two lines (such as motorcycle plates) are not supported and recognition will not work.
3. EU license plates only
The recognition system is limited to standard EU license plates. It can work with license plates from other countries (and some older non-standard EU plates), but there are no accuracy guarantees.
There are four potential errors that can occur within the ANPR system.
No vehicle is detected (< 1% error rate)
No plate is detected (< 0.1% error rate)
Event is sent without a vehicle passing (< 0.1% error rate)
Wrong plate text is recognized (< 5-10% error rate, depending on the scenario)
The OCR system identifies character by character. In most error cases it’s a single character that is misclassified or a character that was missed. As in some countries, license plate characters look very similar (sometimes even exactly the same), most errors are caused by mixing up characters with lookalikes.
B and 8 can be mixed up
D and 0 can be mixed up
0 and O can be mixed up
I and 1 can be mixed up
5 and S can be mixed up
The best option to avoid these mixups is to get a clear front view on the plate. However, for systems that need to match in- and outgoing license plates, it might make sense to match them with a fuzzy search that takes mixups and duplicated characters into account. For example, the system could still match plate texts like S123A0 and SI23AO, when the second event exists on the same or the following day.
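One simple way to implement such a fuzzy match is to normalize each plate onto lookalike character groups before comparing. The sketch below only covers the mixups listed above; a production matcher would also handle dropped or duplicated characters, for example via an edit-distance threshold.

```python
# Fuzzy plate match tolerating the OCR lookalike swaps listed above (B/8, D/0/O, I/1, 5/S).
LOOKALIKE_GROUPS = [{"B", "8"}, {"D", "0", "O"}, {"I", "1"}, {"5", "S"}]

def normalize(plate: str) -> str:
    # Replace every character by a canonical token for its lookalike group.
    out = []
    for ch in plate.upper().replace(" ", ""):
        token = ch
        for i, group in enumerate(LOOKALIKE_GROUPS):
            if ch in group:
                token = f"<{i}>"
                break
        out.append(token)
    return "".join(out)

def fuzzy_match(plate_a: str, plate_b: str) -> bool:
    return normalize(plate_a) == normalize(plate_b)

print(fuzzy_match("S123A0", "SI23AO"))  # True: only lookalike characters differ
```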
If all setup recommendations are implemented and the camera configuration cannot be improved any further, there are some external improvements that can be made. Best results are achieved when combined.
Slow down vehicles in the ANPR section to have more time to detect and recognize the license plate.
Reduce distance from vehicle to camera, for example by limiting the entry to a single or narrow lane, reducing variation and angle to the camera.
If possible, use camera zoom to focus on the section with the best view on license plates. This can also help with a low resolution of plate crops.
If the Swarm system performance is an issue (low FPS), it can help to blackout any unnecessary image parts with vehicles (e.g. with a privacy zone). By this, the focus of the system is set on the ANPR section only.
Gather real time occupancy state about specific parking spaces - free or occupied
If you have a parking space where you simply want to know whether specific parking spaces are occupied or free, SWARM provides a solution for doing that easily. See for yourself:
For this use case, SWARM software provides you with all relevant data for single space detection within your parking space. The solution provides you with the occupancy state of each of your configured parking spaces.
The single space detection will give you information about the occupancy state of your parking space (free or occupied) as well as information about the object in your parking space, including its classification. Nevertheless, consider that the following configuration set-up is optimized to detect vehicles, not people and bicycles. In addition, the classification depends on the camera installation; for a more top-down view the classification will be less accurate.
The main challenge in planning a camera installation is to avoid potential occlusions by other cars. We advise using the Axis lens calculator or generic lens calculator and testing your parking setup for the following conditions:
put a car on one of the parking spaces
put a large vehicle (high van, small truck - the largest vehicle that you expect in your parking) on all parking spaces next to your car
if you still can see >70 % of the car, then this parking spot is valid.
Parking spots have to be fully visible (inside the field of view of the camera). We do not guarantee full accuracy for cropped single parking spaces.
Avoid objects (trees, poles, flags, walls, other vehicles) that occlude the parking spaces. Avoid camera positions, where cars (especially high cars like vans) occlude other cars.
Occlusions by other parked cars mainly happen if parking spaces are aligned along the camera's viewing direction.
Find detailed information about camera requirements/settings as well as camera positioning in the table below.
Recommended camera requirements:
Pixels Per Meter (PPM) - a measurement used to define the amount of potential image detail that a camera offers at a given distance: > 60 PPM. Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Camera video resolution: 1280×720 pixel
Camera video protocol/codec: RTSP/H264
Camera focal length: 2,8 mm - 4 mm
Camera mounting - distance to object center: 5-30 m (cars in the center of the image). For 5 meters distance we guarantee high accuracy for 3 parking spaces aligned orthogonally to the camera; the larger the distance to the camera, the more parking spaces can be monitored.
Camera mounting height: Indoor 2,5 - 5 m; Outdoor 2,5 - 10 m. Higher is better: vehicles can potentially occlude the parked cars, hence we recommend higher mounting points.
Wide Dynamic Range: Must be enabled
Night-mode: Enabled
The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how to configure a Single Space Parking use case to get the best results
In order to start your configuration, take care that you have configured your camera and data configuration.
Model / Configuration option:
Single Space (RoI) or Multi Space (RoI)
Raw tracks: Disabled
In the Parking event templates you will find the two options Single Space (RoI) and Multi Space (RoI). These event types are the ones you need to set up this use case. Use a Single Space (RoI) if you configure a parking space for a single car; if you have an area where you expect more than one car, choose the Multi Space (RoI). The difference between these two event types is the maximum capacity that you can set in the trigger settings.
Place the Region of Interest (RoI) on the parking space you would like to configure. Consider that a vehicle is in the RoI if the center point of the object is in the RoI.
As the center point of the object defines whether the object is in an RoI or not, please take care to configure the RoI with the camera perspective in mind.
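To make the center-point rule concrete, the following sketch checks whether an object's bounding-box center lies inside an RoI polygon using a standard ray-casting test. This is an illustration of the rule described above, not the engine's actual implementation; the coordinates are example values.

```python
# Occupancy rule illustration: an object counts as "inside" the RoI when its
# bounding-box center point lies within the RoI polygon (ray-casting test).
def point_in_polygon(x, y, polygon):
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def center(bbox):
    # bbox = (x_min, y_min, x_max, y_max)
    return (bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2

roi = [(100, 200), (300, 200), (300, 400), (100, 400)]   # example parking space RoI
car_bbox = (150, 250, 280, 380)                           # example detection
cx, cy = center(car_bbox)
print("occupied" if point_in_polygon(cx, cy, roi) else "vacant")
```

This also explains why perspective matters: a vehicle whose visible footprint overlaps the RoI may still be counted as outside if its center point falls beyond the polygon.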
You can visualize data via Data Analytics in different widgets.
In our Parking Scenario section, you can find more details about the possible Widgets to be created in the Parking Scenario Dashboards.
You are able to visualize the data for any Single- or Multispace parking area you have configured with the Parking RoI. You can see the occupancy status as well as the number of vehicles in each RoI, or aggregated across one or several camera streams. You have the option to add the Current & Historic Parking Utilization or the Single/Multi Space Occupancy widgets for your data in this use case.
If you need your data for further local analysis, you have the option to export the data of any created widget as a CSV file for further processing in Excel.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.
Object velocity: 0 km/h
Day/Night/Lighting: Daytime; nighttime only if well illuminated or with night vision mode
Indoor/Outdoor: Indoor or outdoor
Expected Accuracy (when all environmental, hardware and camera requirements are met): >95% (classification is not considered)
Supported Products: VPX, P401, P101/OP101, P100/OP100
Frames Per Second (FPS): 5
Here you can find some examples showing what can be detected in different theoretical installation cases.
The following definitions will be used:
Camera-angle: measured from a horizontal line.
Camera-height: mounting height of the camera
Camera distance: distance from the camera to the point in the image center. This distance is already determined by the camera angle and the camera mounting height, but we add it as an additional parameter for a better estimation of your camera setup.
In the following, we give standard examples, with the following assumptions:
Car-width = 2 m
Parking length/car-length = 5 m
Height of the occluding vehicle = 2.5 m
Color-code:
green: good accuracy
yellow: will work in most cases, but parking spots might be occluded if the neighboring spaces are occupied with a large vehicle
orange-red: not recommended: might work in some cases, but in general this spot has high potential to be occluded by a vehicle parking on the spot next to it.
black: not recommended at all!
This page describes how the detection and matching of journeys works from a technical perspective.
In order to detect journeys of vehicles, the same vehicle needs to be detected at several predefined locations. This means a dedicated identifier is needed to tell whether two detections belong to the same vehicle.
For vehicles, the obvious unique identifier is the license plate (LP). So the LP is taken as the unique identifier for matching vehicles across several locations. As LPs are considered personal data, a salted hashing function is applied to pseudonymize the personal data.
Based on the SWARM standard use case for traffic counting, the object (vehicle) is detected and classified. If the journey time feature is enabled, the algorithm runs an LP detection and an LP reading for each detected vehicle. The raw string of the LP is then pseudonymized with a hashing mechanism, and the pseudonymized text is sent within the standard Counting Line (CL) event over the encrypted network.
The following sections describe the individual steps in more detail:
In each frame of the video stream, vehicles are detected and classified as cars, trucks, and buses. Alongside this, the vehicle is tracked across the frames of the video stream.
For each classified vehicle, the license plate is detected and mapped to the object.
For each detected license plate, an optical character recognition (OCR) is applied to read the plate. The output of this part is a text which includes the raw string of the license plate.
In order to hash the LP, a salt shaker generates random salts in the backend (Cloud) and distributes them to the edge devices. A salt is random data used as an additional input when hashing data, for example passwords or, in our case, LPs. The salt is not saved in the backend; the only place where the salts are temporarily stored is the edge device (Perception Box).
To increase protection against potential attacks, each salt has a validity window of 12 hours. After the validity window, a new randomly generated salt is used. The graphic below illustrates an example of the hashing function used for LPs.
Salts 1-4 are generated by the salt shaker and distributed to each edge device. In order to always detect all journeys, each LP is hashed with two salts. Two salts are needed, as a journey could potentially have a longer travel time than the salt validity time. In the upcoming section, match event on possible journeys, it is shown why two salts per LP are needed.
If the vehicle crosses a counting line (CL), a CL event with the hashes (h) from the detected LP is sent via MQTT to the Cloud (Microsoft Azure) and saved in a structured database (DB).
On the cloud, the DB is regularly checked for possible matches within the hashes. As shown above, two hashes are created per detected vehicle. If one of the two hashes is the same for two different detections it will be saved as a journey with the journey time information, class, edge device names & GPS coordinates of the edge device.
In case the same hash is found in several locations, a multi-hop journey will be saved based on the sorting of the timestamps. (e.g.: Journey from location A to B to C)
After 12 h, which is the validity time of the salt used for pseudonymizing the license plate, the pseudonymized LP is deleted. This makes the pseudonymized data anonymous. In summary, 12 hours after the detection of the vehicle and LP, all data is anonymized.
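The following sketch illustrates the mechanism described above in simplified form: each plate is hashed with the two currently valid salts, and a journey is matched when any hash appears at two locations. It is an illustration only; the actual salt distribution, hash function, and storage are handled by the SWARM backend and edge devices, and the plate value below is an example.

```python
# Illustration of salted-hash pseudonymization and journey matching (simplified).
import hashlib
import secrets

def hash_plate(plate: str, salt: bytes) -> str:
    return hashlib.sha256(salt + plate.encode()).hexdigest()

# Two salts are active at any time so journeys spanning a salt rollover still match.
salt_a, salt_b = secrets.token_bytes(16), secrets.token_bytes(16)

# Counting-line events from two locations carry only the hashes, never the raw plate.
event_location_1 = {"hashes": {hash_plate("W12345A", salt_a), hash_plate("W12345A", salt_b)},
                    "timestamp": "2023-12-13T08:00:00Z"}
event_location_2 = {"hashes": {hash_plate("W12345A", salt_b)},   # detected later, with the newer salt
                    "timestamp": "2023-12-13T08:25:00Z"}

# A journey is recorded when any hash appears at both locations.
if event_location_1["hashes"] & event_location_2["hashes"]:
    print("journey matched: location 1 -> location 2")
```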
Journey time set up
This guide focuses on specific details to be considered for journey times and area-wide traffic flow on public roads, focusing on camera placement, camera settings, and event trigger configuration.
Perfect camera placement is critical in order to get a clear image and readable number plates. While some parameters such as distance from the camera to the number plate can be fine-tuned by zooming after installation, mounting height and angle between the camera and travel direction of vehicles can only be adjusted by physical and cost-intensive re-arrangement. The camera position has to be chosen in a way that passing vehicles are fully visible and can be captured throughout several frames of the video stream while making sure the number plates are large enough for the ANPR system to identify every single character.
We recommend mounting heights between 3 and 8 meters, therefore the suitable minimum capture distance ranges from 5 to 14 meters. Besides the vertical angle constraint, number plates should be visible with at least 250 pixels-per-meter (PPM), this constraint determines the minimum focal length (zoom) the camera has to be set to.
Why between 3 and 8 meters of camera mounting height?
The lower bound of 3 meters is determined by rather practical reasons and not technical limitations. Cameras mounted lower than 3 meters are often prone to vandalism. Also, headlights from passing vehicles can lead to reflections on the camera. The upper bound of 8 meters is determined by the resulting minimum capture distance of at least 14 meters for the needed camera resolution of 1920x1080p. License plates need to be visible with 250 pixel-per-meter (PPM).
As the Swarm Perception Box and cameras are mainly mounted on existing infrastructure such as traffic light poles, there are two general options to mount the cameras: side mounting or overhead mounting.
When positioning the camera above the vehicles, two lanes can be covered with one sensor.
Consider mounting height (1) and capture distance (2) which determine the vertical angle (3) between the camera and the travel direction of the vehicle. The distance between the center of the lane (4) and the camera determines the horizontal angle (5) between the camera and the travel direction of the vehicle.
When mounting the camera to the side of the road, two lanes can be covered, assuming the horizontal angle between the camera and the travel direction of the vehicles is not exceeding 20°.
Position the camera as close as possible to the side of the road to avoid a horizontal angle larger than 20°. Larger angles can lead to lower accuracy because parts of the number plate can become unreadable. While traveling directions (1) and (2) are the same for both vehicles, horizontal angle (3) is much larger than (4).
While capturing sharp images during the day with good lighting conditions is relatively easy, low-light and dark conditions make it a lot more difficult for cameras to deliver readable number plates from moving vehicles. The following section of this guide therefore provides an overview of how to fine-tune your camera to deliver readable number plates in such conditions.
However, the setting of the following parameters strongly depends on the specific camera mounting position and its environment. A light source such as a streetlamp or a vehicle passing on a different lane can send light to the camera sensors and influence the resulting image to a great extent. For this reason, this guide can only provide a general overview of relevant settings and their effect on image quality.
We recommend that the Auto day/night switch mode from the cameras is used. As you can see in the examples below, it is crucial that the camera changes to night mode reliably.
Detailed information on the solution for Journey time and area-wide traffic flow in terms of data generation, camera set up and Analytics options.
In addition to the traffic frequency at given locations, you may want statistics on how long vehicles take from one location to another and how traffic flows across your city and municipality. With this solution, you can generate that data with a single-sensor solution from SWARM.
For this use case, SWARM software provides you with the most relevant traffic insights - the counts of vehicles including the classification into the SWARM main classes. On top, you have the opportunity to add a second counting line, calibrate the distance in between, and estimate the speed of the vehicles passing both lines. By combining more sensors in different locations, the journey time as well as the statistical traffic flow distribution is generated.
The journey time and traffic flow distribution can be generated for vehicles only (car, bus and truck).
In this technical documentation, accuracy refers to the penetration rate of a single sensor, which is the percentage of correctly identified license plates divided by the total number of vehicles counted during a ground truth count.
The current penetration rate for this use case is 60%, taking into account different day/nighttimes, weather conditions, and traffic situations. When calculating journey time between two sensors, approximately 36% of journeys are used as the baseline, which is calculated by multiplying the penetration rate of both sensors.
The accuracy is sufficient to generate data that can be used to make valid conclusions about vehicle traffic patterns and journey times.
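The 36% baseline follows directly from multiplying the per-sensor penetration rates, as the text states. A two-line check:

```python
# Baseline share of matched journeys between two sensors = product of the
# per-sensor penetration rates (values taken from the text above).
penetration_sensor_a = 0.60
penetration_sensor_b = 0.60
journey_baseline = penetration_sensor_a * penetration_sensor_b
print(f"{journey_baseline:.0%} of journeys are expected to be matched")  # ~36%
```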
Adaptive traffic control enables you to interface with hardware devices like traffic controllers using dry contacts. Use cases and benefits:
'Smart Prio' System: Prioritise certain traffic classes and ensure fluid traffic behavior in real-time (e.g. pedestrians, bicyclists, e-scooters, heavy traffic).
Simplify infrastructure maintenance: Replace multiple induction loops with a single Swarm Perception Box. The installation does not require excavation work, and reduces the maintenance effort/costs.
Requirements:
Supported event types: Region of Interest (ROI) in combination with rules
First, enable the IO device, then specify the Quido device type in use and its endpoint (IP address or hostname).
Define at least one ROI and create an associated rule. As long as the rule is valid, the associated Quido relay output is enabled (contact closed). One or more rules can be created for the same ROI.
Recommended
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
> 60 PPM
Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Tip: Use the Axis lens calculator or generic lens calculator.
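If you prefer to sanity-check the PPM value yourself rather than using a lens calculator, the following Python sketch uses a simple pinhole-camera approximation; the sensor width in the example is an assumed value, so check your camera's datasheet:

```python
import math

def pixels_per_meter(h_resolution_px: int, focal_length_mm: float,
                     sensor_width_mm: float, distance_m: float) -> float:
    """Approximate PPM at a given distance using a pinhole-camera model."""
    # Horizontal field of view covered by the sensor at that distance
    hfov_rad = 2 * math.atan(sensor_width_mm / (2 * focal_length_mm))
    scene_width_m = 2 * distance_m * math.tan(hfov_rad / 2)
    return h_resolution_px / scene_width_m

# Example: 1280 px wide stream, 2.8 mm lens, assumed ~5.4 mm wide sensor, 15 m away
print(round(pixels_per_meter(1280, 2.8, 5.4, 15)))  # roughly 44 PPM
```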
Camera video resolution
1280×720 pixel
Camera video protocol/codec
RTSP/H264
USB 3.0/UYVY, YUY2, YVYU
Camera Focal Length
2.8mm
Camera mounting - distance to object center
5-20 meters
Camera mounting height
3-6 meters
Camera mounting - vertical angle to vehicle
<50°
Note: setting correct distance to vehicle and camera mounting height should result in the correct vertical angle to vehicle
Camera mounting - horizontal angle to vehicle
0° - 90°
Wide Dynamic Range
Must be enabled
Camera
Link
Comment
HikVision
DS-2CD2046G2-IU
2,8 mm Focal Length
Configuration
Model
Configuration option
CL (Counting Line)
Events for repeated CL crossings
Enabled
ANPR
Disabled
Raw tracks
Disabled
Recommended
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
> 30 PPM for object classes car, truck
> 60 PPM for object classes person, bicycle, motorbike
Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Tip: Use the Axis lens calculator or generic lens calculator.
Camera video resolution
1280×720 pixel
Camera video protocol/codec
RTSP/H264
Camera Focal Length
2,8mm-12mm
Camera mounting - distance to object center
Object classes car, truck
5-30 meters (2,8mm focal length)
35-100 meters (12mm focal length)
Object classes person, bicycle, scooter
3-12 meters (2,8mm focal length)
25-50 meters (12mm focal length)
Camera mounting height
Up to 10 meters
Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks
Camera mounting - vertical angle to the object
<50°
Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle
Camera mounting - horizontal angle to the object
0° - 90°
Note: An angle of about 15° provides better classification results due to more visible object details (e.g. wheels/axes)
Wide Dynamic Range
Can be enabled
Camera
Comment
HikVision
Bullet Camera
2,8 mm fixed focal length
HikVision
Bullet Camera
2,8mm - 12mm motorised focal length
Model
Configuration option
Counting Line (optional speed)
ANPR
Disabled
Raw Tracks
Disabled
Speed Estimation
Enabled
Recommended
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
> 30 PPM for object classes car, truck
> 60 PPM for object classes person, bicycle, motorbike
Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Tip: Use the Axis lens calculator or generic lens calculator.
Camera video resolution
1280×720 pixel
Camera video protocol/codec
RTSP/H264
Camera Focal Length
2.8mm-12mm
Camera mounting - distance to object center
Object classes car, truck
5-30 meters (2,8mm focal length)
35-100 meters (12mm focal length)
Object classes person, bicycle, scooter
3-12 meters (2,8mm focal length)
25-50 meters (12mm focal length)
Camera mounting height
Up to 10 meters
Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks
Camera mounting - vertical angle to the object
<50°
Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle
Camera mounting - horizontal angle to the object
0° - 360°
Camera Infrared Mode
Can be enabled
Wide Dynamic Range
Can be enabled
Camera
Comment
HikVision
Bullet Camera
2,8 mm fixed focal length
HikVision
Bullet Camera
2,8mm - 12mm motorised focal length
Configuration
Model
Configuration option
Origin Destination Zones
ANPR
Disabled
Raw tracks
Disabled
Especially for Automatic Number Plate Recognition (ANPR) the camera choice and positioning are essential.
The requirements for accurate number plate recognition can be aligned with respective norms for the accurate operation of (human-based) surveillance systems.
The standards give a recommended pixels-per-meter measure ("pixels on target") needed to reliably perform a given task (by a human). The relevant category for clearly reading license plates or identifying a person "without reasonable doubt" is "identify". A bullet camera is recommended.
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
> 250 PPM
To clearly read a license plate, at least 250 PPM are required. Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Resolution
min. 1920×1080 (H264)
Focal Length
min 3.6-8 mm motorized adjustable focal length recommended
Mounting
Distance and height of installation
Note: setting the correct distance to the license plate and camera mounting height should result in the correct vertical angle to the license plate
Horizontal angle to license plate
max. 20° (see the camera positioning guidance above)
Exposure / Shutter speed
max. 1/250 for objects not moving faster than 40 km/h
Hikvision
Bullet Camera
DS-2CD2645FWD-IZS
Motorized varifocal lens
The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the Entry/Exit parking with license plate detection needs to be configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration.
Configuration
Model
Configuration option
CL (Counting Line)
ANPR
Enabled
Raw tracks
Disabled
To receive the utilization of your parking space, including the parking durations of your customers, a CL needs to be configured for each Entry/Exit. The CL should be placed approximately at the beginning of the last third of the frame so that the object is visible over several frames and the license plate detection and classification are as accurate as possible.
Consider that the IN/OUT direction of the counting line is important, as it is used to calculate the parking duration (IN = entry to parking, OUT = exit from parking).
You can visualize data via Data Analytics in different widgets.
In our Parking Scenario section, you can find more details about the possible Widgets to be created in the Parking Scenario Dashboards.
You can visualize the data for any Entry/Exit you have configured with counting lines, i.e. the number of vehicles with their classes/subclasses and license plates for each entry or exit.
Furthermore, you can gather a list of customers, with the corresponding license plates, who have parked longer than your preconfigured parking duration. For the purpose of provability, you can also see a picture of the incoming and outgoing vehicle. Please note that you need to configure the parking time according to your data privacy restrictions.
If you need your data for further local analysis, you have the option to export the data of any created widget as csv file for further processing.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.
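As a rough illustration of such an integration (the host, path and token handling below are placeholders and not the documented Data Analytics API), a widget data request could be issued like this:

```python
import requests

# Hypothetical values - replace them with the request described for your widget
# in Data Analytics and with your own credentials.
BASE_URL = "https://example.invalid/data-analytics"   # placeholder, not the real host
WIDGET_ID = "<your-widget-id>"
TOKEN = "<your-api-token>"

response = requests.get(
    f"{BASE_URL}/widgets/{WIDGET_ID}/data",            # placeholder path
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
response.raise_for_status()
print(response.json())
```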
In case you are using your custom MQTT broker, you can also retrieve the raw data there. We provide a special option to add the license plate capture to the event schema. This enables you to retrieve the capture within the pushed MQTT message. The picture is encoded in BASE64. In order to enable this option, please contact our support.
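Assuming the capture arrives as a BASE64 string in a field of the JSON event payload (the field name used below is an assumption for illustration, not the exact schema), decoding it could look like this:

```python
import base64
import json

def save_plate_capture(mqtt_payload: bytes, out_path: str = "capture.jpg") -> None:
    """Decode a BASE64-encoded license plate capture from an MQTT event.

    The key 'plateImage' is a hypothetical example; check the event schema
    provided by support for the actual field name.
    """
    event = json.loads(mqtt_payload)
    image_bytes = base64.b64decode(event["plateImage"])
    with open(out_path, "wb") as f:
        f.write(image_bytes)
```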
Examples: 'standard' car license plates
If the distance from the camera to the object (parking space) is larger, the perspective has a greater impact and you need to adapt the RoI accordingly. To support the calibration, you can use the calibration mode, which can be activated on the top right of the configuration frame. There you will see the detection boxes and center points of the vehicles currently in the camera view. Make sure to configure the RoI so that the center point falls within the RoI.
Journey time is defined as the time passed between sightings of the same vehicle across two or more camera streams. The identification of vehicles is based on number plates. Please refer to our detailed section about ANPR for a general overview.
The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the standard journey time setup is configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration.
In order to achieve the best accuracy, we strongly recommend configuring a focus area covering at most the two lanes relevant for the use case.
Think of focus areas as inverted privacy zones - the model only "sees" objects inside an area, the rest of the image is black.
In order to receive the counting data as well as the journey time data, a Counting Line needs to be configured as an event trigger.
To achieve the best counting accuracy, including the journey time information, the Counting Line should be placed at a point where the vehicle and its plate are visible for approximately 10 m of travel. In addition, make sure to place the Counting Line where the track calibration still shows stable tracks.
You can choose the IN/OUT direction as needed to retrieve the data the way you want. In addition, you have the option to give the IN and OUT directions custom names.
In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.
Here is an example for a Journey time widget. Journey time can be shown as average, median or up to two different percentiles.
Another example below visualizes the journey distribution. A slider lets you move through the different time periods of the chosen aggregation level, and the figures can easily be switched between absolute and relative values.
If you need your data for further local analysis, you have the option to export the data of any created widget as .csv file for further processing in Excel.
The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the standard traffic counting needs to be configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration.
Supported hardware: , ,
All you need to get started: define a Region of Interest (ROI), define a rule and select which relay output to trigger.
You can visualize data via Data Analytics in different widgets.
Make sure to place focus areas in a way that they cover enough space before an event trigger, so that the model can "see" the objects for a similar amount of time as if the focus area wasn't there. The model ignores all objects outside a focus area, so no detection, classification, tracking or ANPR reading is performed there.
3 | 5 | 19 | 4-12
4 | 7 | 18 | 5.4-12
5 | 9 | 18 | 6.6-12
6 | 10 | 18 | 10-12
7 | 12 | 18 | 11-12
8 | 14 | 17 | 12
Object velocity
< 80 km/h
Day/Night/Lighting
Daytime/Well illuminated/Night vision
Indoor/Outdoor
Outdoor
Supported Products
VPX, P401, P101/OP101
Frames Per Second
25
Camera
Note
HikVision
Bullet Camera
2,8mm - 12mm motorised focal length
Model
Configuration option
Region of Interest + Rule
ANPR
Disabled
Raw Tracks
Disabled
Object velocity
< 40 km/h
Day/Night/Lighting
Daytime/Well illuminated/Night vision
Indoor/Outdoor
Outdoor
Expected Accuracy
(when all environmental, hardware, and camera requirements are met)
Presence Detection: >95%
Classification of main classes: >95%
Classification of subclasses: >85%
Supported Products
VPX, P401, P101
Frames Per Second
12
Recommended
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
> 30 PPM for object classes cars, trucks
> 60 PPM for object classes person, bicycle, motorbike
Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Tip: Use the Axis lens calculator or generic lens calculator.
Camera video resolution
1280×720 pixel
Camera video protocol/codec
RTSP/H264
Camera Focal Length
2,8mm-12mm
Camera mounting - distance to object
Object classes cars, trucks
5-30 meters (2,8mm focal length)
35-100 meters (12mm focal length)
Object classes person, bicycle, scooter
3-12 meters (2,8mm focal length)
25-50 meters (12mm focal length)
Camera mounting height
Up to 10 meters
Note: Higher mounting is preferred to minimize occlusions from larger objects like trucks
Camera mounting - vertical angle to the object
<50°
Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle
Camera mounting - horizontal angle to the object
0° - 45°
Note: An angle of about 15° provides better classification results due to more visible object details (e.g. wheels/axes)
Wide Dynamic Range
Can be enabled
Configure the connection to your camera
SWARM offers Multi-Camera support, allowing you to process more than one camera per Perception Box.
To open the configuration page of a Perception Box, click on the row of the Box. There you can manage all cameras running on one device.
Although you are completely free in naming your Perception Boxes, you might want to have a logical naming scheme.
Device information
By clicking on the pen, you can change the name of the Perception Box. There are almost no limitations: you may use any special characters and as many characters as you want. Below that, you find the Device ID and the serial number of the device, each with a copy option. The Device ID is required for any support case you open. The serial number is that of the Perception Box, which you also find on the label of the box.
Here you can see the individual name of each camera on the device, which can be changed in the next steps when configuring the camera settings. Clicking on the row of a camera opens its camera settings.
Add Camera
In the configuration step of your Perception Box you might need to add new cameras which can be achieved by clicking on this button.
Retrieve Logs & Reboot Device
The Camera Status represents basic monitoring of the SWARM software and indicates whether the software, the camera input and the MQTT connection are up and running on camera level.
Depending on your subscription, you will have a pre-defined number of cameras you may use with your Perception Box. If you need to process more cameras, contact our Sales team.
Clicking on a camera expands the corresponding settings. You can name the camera, and you also have the option to deactivate the camera stream. If a stream is deactivated, it is not taken into consideration by the SWARM software and does not impact performance, but its configuration is kept.
A GPS coordinate needs to be set for each camera. The GPS coordinate is mandatory and can be set by entering the coordinates or by using the location picker directly on the map.
We are currently able to process camera streams over RTSP, as well as streams coming over USB. You can select the different options as Connection Type.
RTSP cameras must be configured with H264 or H264+ video codec. For more details, head over to Solution areas
USB cameras must be available as V4L device at /dev/video0.
The following specifications are supported:
RAW color format: UYVY, YUY2, YVYU
Resolution: 1080p
Other camera settings:
Shutter speed, brightness, FPS are camera/sensor dependent and have to be individually calibrated for optimal results
Make sure to use a USB 3.0 camera in order to benefit from the full frame rate.
The other fields for the Camera Connection can be found in the manual of the camera and/or can be configured on the camera itself.
There are some special characters, which could lead to problems with the connection to the camera. If possible, avoid characters like "?", "%" and "+" in the password field.
As soon as you have configured the camera connection, you will see one frame of the camera as a preview. From here, you can start with the Scenario Configuration.
The Swarm Perception Box sends the results of the real-time analysis to an MQTT broker. The default configuration sends data to the Azure Cloud and to Data Analytics for retrieving the data. If you want to configure a custom MQTT broker, see the Advanced set-up section of the documentation.
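For a custom broker setup, a minimal subscriber such as the following Python sketch (using the paho-mqtt client, with placeholder host and topic values) is enough to inspect the incoming events:

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.local"   # placeholder - your custom MQTT broker
TOPIC = "#"                            # placeholder - subscribe to your configured topic

def on_message(client, userdata, message):
    # Each message carries one event generated by the Perception Box
    event = json.loads(message.payload)
    print(message.topic, event)

# paho-mqtt 1.x style constructor; on paho-mqtt 2.x pass
# mqtt.CallbackAPIVersion.VERSION1 as the first argument.
client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```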
In the device configuration, you have seen the overall status of the cameras included in one Perception Box. On the camera level, you have the option to see the individual status to better identify the root cause of the issue (see mark 4 in the overview above).
As soon as you see a frame of your camera, you have the option to configure your Scenarios. This is where the magic happens! --> See next page!
Configure your scenario according to your covered use cases
Now, as you see your camera, you have the option to configure it. This is where the magic happens!
As SWARM software is mostly used in dedicated use cases, you can find all information for a perfect set-up in our Use Cases for Traffic Insights, Parking Insights and Advanced Traffic Insights.
In the configuration, you can select the best model for your use case as well as configure any combination of different event triggers and additional features to mirror your own use case.
Each event trigger generates a unique ID in the background. To keep track of all your configured triggers, you can give each one a custom name in the left side panel of the configuration screen. --> This name is then used for choosing the right data in Data Analytics.
Please find the abbreviation and explanation of each event type below.
We provide templates for the three different areas in order to have everything set for your use case.
Parking events --> Templates for any use case for Parking monitoring
Traffic events --> Templates for use cases around Traffic Monitoring and Traffic Safety.
People events --> Templates for using the People Full Body or People Head model.
This helps you configure your scene more easily with the corresponding available settings. You can find the description of the available event triggers and their individual trigger settings below.
Counting Lines will trigger a count as soon as the center of an object crosses the line. While configuring a CL you should consider the perspective of the camera and keep in mind that the center of the object will trigger the count.
The CL also logs the direction (IN or OUT) in which the object crossed the line. You may toggle IN and OUT at any time to change the direction according to your needs. In addition, custom names for the IN and OUT directions can be configured. The custom direction name can then be used as segmentation in Data Analytics and is part of the event output.
By default, a CL counts each object only once. In case every crossing should be counted, there is an option to enable events for repeated CL crossings. The only limitation is that repeated crossings are only counted if they are at least 5 seconds apart.
Available Trigger Settings: ANPR, Speed Estimation, Events for repeated CL crossing
You can enable the Speed Estimation feature as a specific trigger setting of a Counting Line in the left side bar. This adds one additional line; configure the real-world distance between the two lines in your scenario. For best results, use a straight stretch without bends.
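Conceptually, the estimate is the calibrated distance divided by the time between the two line crossings; the sketch below illustrates the idea and is not the internal implementation:

```python
def estimated_speed_kmh(distance_m: float, t_first_cross_s: float,
                        t_second_cross_s: float) -> float:
    """Speed estimate from two counting-line crossings of the same object."""
    travel_time_s = t_second_cross_s - t_first_cross_s
    return (distance_m / travel_time_s) * 3.6   # m/s -> km/h

# Example: 20 m between the lines, crossed 1.5 s apart -> 48 km/h
print(estimated_speed_kmh(20.0, 0.0, 1.5))
```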
RoIs count objects within the specified region. This trigger also provides the class and the dwell time, which tells you how long an object has been in the region.
Depending on the scenario type, we differentiate between three types of RoIs, for which we offer the predefined templates described below:
Event Trigger: Time / Time / Time or Occupancy
Type: Parking / Parking / People & Traffic Events
Default number of objects: 1 / 5 / 1
Color: dark green / purple / light green
Zones are used for Origin-Destination (OD) counting. Counts are generated if an object moves through OD 1 and afterwards through OD 2. For OD, at least two zones need to be configured.
The first zone the object passes will be the origin zone and the last one it moved through the destination zone.
A VD covers the need for 3D counting lines. The object needs either to move into the field and then vanish, or to appear within the field and move out. Objects appearing and disappearing within the field, as well as objects passing through the field, are not counted.
Learn more about the Virtual Door logic.
The Virtual Door is designed for scenes to obtain detailed entry/exits count for doors or entrances of all kinds.
The logic for the Virtual Door is intentionally simple. Each head or body is continuously tracked as it moves through the camera's view. Where the track starts and ends defines whether an entry or exit event has occurred.
Entry: When the track start point starts within the Virtual Door and ends outside the Virtual Door, an in event is triggered
Exit: When the track start point starts outside the Virtual Door and ends within the Virtual Door, an out event is triggered
Walk by: When a track starts outside the Virtual Door and ends outside the Virtual Door, no event is triggered
Stay inside: When a track starts inside the Virtual Door and ends inside the Virtual Door, no event is triggered
Note: There is no need to configure the in and out directions of the door (like (legacy) Crossing Lines) as this is automatically set.
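The decision can be illustrated with the following simplified sketch, assuming the Virtual Door is an axis-aligned rectangle and only the track's start and end points are considered (the actual implementation may differ):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Rect:
    """Axis-aligned rectangle in frame coordinates (simplified Virtual Door)."""
    x1: float
    y1: float
    x2: float
    y2: float

    def contains(self, point: Tuple[float, float]) -> bool:
        x, y = point
        return self.x1 <= x <= self.x2 and self.y1 <= y <= self.y2

def virtual_door_event(door: Rect,
                       track_start: Tuple[float, float],
                       track_end: Tuple[float, float]) -> Optional[str]:
    start_inside = door.contains(track_start)
    end_inside = door.contains(track_end)
    if start_inside and not end_inside:
        return "in"   # track appeared inside the door and moved out into the scene
    if not start_inside and end_inside:
        return "out"  # track ended (disappeared) inside the door
    return None       # walk-by or track staying on one side: no event
```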
You can enable the ANPR feature with a Counting Line, which will add the license plate of vehicles as an additional parameter to the generated events. When enabling the ANPR feature, please consider your local data privacy laws and regulations, as number plates are sensitive information.
The Image Retention Time can be manually set. After this time, any number plate raw information as well as screen captures will be deleted.
You can enable the Journey time feature in the Global Settings on the left side bar. This feature generates journey time and traffic flow data and is required for Advanced Traffic Insights. Find more technical details on the generated data in the following section.
In the Global Settings section, you have the option to add focus areas. A focus area defines the area of detection on the frame: if focus areas are defined, detections are only taken into consideration within these areas. Configured focus areas are shown on the preview frame and in the table below it, where you also have the option to delete them.
Attention: When a focus area is drawn, the live and track calibration will only show detections and tracks within these areas. Therefore, check the track calibration before drawing focus areas, so you can see where the tracks are on the frame and do not miss essential detections when defining the focus areas.
In the configuration, there are two trigger actions to choose from: either a time or an occupancy change, depending on the use case.
In the Global Trigger settings you can adjust the RoI time interval.
The RoI time interval is used accordingly depending on the chosen trigger action:
Time --> The status of the region will be sent at the fixed configured time interval.
Occupancy --> You will receive an event if the occupancy state (vacant/occupied) changes. The RoI time interval is a pause time after an event was sent: the occupancy change will not be checked for the configured time interval, so you will receive at most one event per time frame. The state is always compared with the state sent in the last event.
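The following sketch models the Occupancy trigger action with its pause interval, under the assumption that an event is only emitted when the vacant/occupied state differs from the last sent event and the RoI time interval has elapsed:

```python
import time

class OccupancyTrigger:
    """Illustrative model of the 'Occupancy' trigger action with a pause interval."""

    def __init__(self, roi_interval_s: float):
        self.roi_interval_s = roi_interval_s
        self.last_sent_state = None
        self.last_sent_time = float("-inf")

    def update(self, object_count: int):
        state = "occupied" if object_count > 0 else "vacant"
        now = time.monotonic()
        if now - self.last_sent_time < self.roi_interval_s:
            return None                  # still within the pause interval
        if state == self.last_sent_state:
            return None                  # state unchanged since the last event
        self.last_sent_state = state
        self.last_sent_time = now
        return state                     # emit one occupancy-change event
```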
In raw track mode, an event is generated as soon as an object leaves the camera frame. The event contains the exact track of the object as X/Y coordinates in the camera frame.
Raw Tracks should only be used if you opt for the advanced setup with a custom MQTT connection.
To create your own solution, select a model and then place your event trigger type (or select raw tracks mode).
When a type is active, left-click and hold the white circles to move the single corner points. You can create any tetragon (four-sided polygon). To move the entire type, left-click and hold anywhere on the type.
How to succeed in setting up counting for people entering and exiting a dedicated area
SWARM software provides the solution to count the number of people either entering or leaving your configured area via a Virtual Door in different sceneries.
For this use case, SWARM software provides you with counts of people split by direction (IN/OUT). In addition, several counts can be made with one camera, e.g. counting each door separately.
Object velocity
< 10 km/h (walking speed)
Day/Night/Lighting
Daytime or well illuminated
Indoor/Outdoor
Indoor or Outdoor
Expected Accuracy
(when all environmental, hardware, and camera requirements are met)
>95%
Supported Products
VPX, P401, P101/OP101, P100/OP100
Frames Per Second (FPS)
12
How to get insights on traffic congestions in terms of data generation, camera set up and Analytics options.
Besides the traffic frequency at given locations, you may want to know the length of a queue when traffic congestion occurs. In combination with the speed of the detected vehicles, you can get proper insights into the length and speed of the current queue.
For this use case, SWARM software provides you with the most relevant traffic insights: vehicle counts including classification into the SWARM main classes. In addition, you have the opportunity to add a second counting line, calibrate the distance in between and estimate the speed of the vehicles passing both lines. By combining this with different Regions of Interest (RoIs), you can retrieve the needed insights into traffic congestion.
For traffic frequency, all SWARM main classes can be generated. Depending on the camera settings, we can detect present vehicles up to 70 m.
This page describes the different statuses that devices and camera streams can have and what to expect.
In the camera overview of your devices and dashboards, you will find the camera monitoring, which tells you if your camera is working as expected. In the device configuration, you find the device monitoring, which shows the worst state of all cameras running on the device.
The device monitoring depends on the worst status of the stream monitoring in order to give you an overview in your device list on devices where a camera is not working as expected.
The monitoring takes into consideration the system, the camera input, and the MQTT connection.
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
250 PPM (vehicle)
Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Tip: Use the Axis lens calculator or generic lens calculator.
Camera video resolution
1920x1080 pixel
Camera video protocol/codec
RTSP/H264
USB 3.0/UYVY, YUY2, YVYU
Camera Focal Length
min. 3.6-12 mm motorized adjustable focal length
Camera mounting - distance to object center
5-20 meters Please consider that the zoom needs to be adjusted according to the capture distance. More details are in the installation set-up guide.
Camera mounting height
3-8 meters Please follow the installation set-up guide in detail.
Camera mounting - vertical angle to the object
<40°
Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle More information in the Installation set-up guide.
Camera mounting - horizontal angle to the object
0° - 20°
Dahua
IPC_HFW5442EP-ZE-B
HikVision
DS-2CD2646G2-IZS
Model
Configuration option
Counting Line
Journey Time
Choose Journey time mode on Global settings
Raw Tracks
Disabled
In addition, you have the option to retrieve and display the SWARM software logs to get a more detailed overview in case the box is not running as expected. There you can see whether the box is able to connect to the camera. If the connection to the camera is not successful, please check the camera and network settings on your side. As every hardware needs a reboot from time to time, we have included the "Reboot device" function here. In case you still experience issues, please contact our support team.
See the definition of the statuses on the Camera and Device Monitoring page.
Find detailed information about camera requirements/settings as well as camera positioning in the table below.
Recommended
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
> 80 PPM
Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Camera video resolution
1280×720 pixel
Camera video protocol/codec
RTSP/H264
USB 3.0/UYVY, YUY2, YVYU
Camera Focal Length
2.8mm
Camera mounting - distance to object center
2-8 meters
Camera mounting height
2-4 meters
Camera mounting - vertical angle to the object
<45°
Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle
Camera mounting - horizontal angle to the object
0° - 360°
Camera FPS
> 25 FPS
Wide Dynamic Range
Should be enabled
Camera
Link
Comment
HikVision
DS-2CD2046G2-IU
2,8 mm Focal Length
The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how a standard people counting needs to be configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration.
Configuration
Model
People Full Body (if distance > 5m to Virtual Door)
People Head (if distance < 5m to Virtual Door)
Configuration option
VD (Virtual Door)
ANPR
Disabled
Heatmap
Disabled
For best accuracy, the Virtual Door should be placed approximately in the middle of the video frame and not too close to the camera, so that people are detected before their center point is already within the Virtual Door.
The direction IN/OUT cannot be chosen. If a person is detected outside the VD and disappears inside the VD, it is counted as direction IN.
You can visualize data via Data Analytics in different widgets. To perform people entry/exit counting, we offer a generic scenario, which offers a bundle of metrics to analyze your raw data.
In our Generic Scenario section, you can find more details about the possible metrics to use for creating your Generic Scenario Dashboards.
As people counting is based on a Virtual Door, you need to choose the Metrics "VD count" or "VD IN/OUT Difference". You have then the different options to choose the data you want for a certain time period as well as choosing the output format (e.g.: bar chart, number, table, ...).
If you need your data for further local analysis, you have the option to export the data of any created widget as csv file for further processing in Excel.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.
Find detailed information about camera requirements/settings as well as camera positioning in the table below.
*The higher the distance of the objects to the camera and the higher the focal length, the larger the dead zone. In order to achieve the PPM needed for the detection of objects (30 PPM), please consider the following table:
Possible cameras for this use case
In order to receive the counting data including speed as well as the RoIs occupancy, a Counting Line and several RoIs need to be configured as event triggers. Depending on the specific use case and object distance, several triggers might need to be combined.
In order to receive information on how fast vehicles are driving and how many objects are currently present in a specific region, you need to configure counting lines with speed estimation and generic RoIs.
You can choose the direction IN/OUT as you want in order to retrieve the data as needed and give a custom name to that direction.
In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards for Speed Events and combined trigger.
If you need your data for further local analysis, you have the option to export the data of any created widget as .csv file for further processing in Excel.
Find out how to configure automatic email alerts for status changes in our documentation.
If your device appears offline and this is not intended, please follow our troubleshooting guide.
The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the standard scenario is configured for optimal results.
In order to start your configuration, take care that you have configured your camera and data configuration accordingly.
You can visualize data via Data Analytics in different widgets.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the request to use for retrieving the data of each widget.
30 | 2,8 mm | 2,8 mm
50 | 5 mm | 5 m
70 | 7 mm | >8 m
HikVision
DS-2CD2646G2-IZS
Model
Configuration option
Counting Line & RoIs
ANPR
Disabled
Raw Tracks
Disabled
Object velocity
< 130 km/h
Day/Night/Lighting
Daytime/Well illuminated/Night vision
Indoor/Outdoor
Outdoor
Expected Accuracy (Counting+Classification)
(when all environmental, hardware, and camera requirements are met)
Counting: >95% (vehicles, bicycles)
Classification of main classes: >95%
Classification of subclasses: >85%
Supported Products
VPX, P401, P101/OP101, P100/OP100
Frames Per Second
25
The device is up and running (powered, connected to the internet)
The device is offline (no power, no internet, etc.). There are several easy steps to check before you can contact our support team.
Everything is fine. All the cameras configured on your device are running as expected.
At least one of the cameras on the device is not configured. Check the camera monitoring status for more details.
At least one of the cameras on the device has an issue and is not sending data as expected.
At least one of the cameras on the device has a Warning status.
The device is offline. Check if the hardware is connected to the power supply and has a running network connection.
When you have just changed the configuration of one of the cameras on the device, the status will remain pending for up to 5 minutes until the correct status is determined.
One or more camera streams are disabled.
Everything is fine. Your camera is running as expected.
Software is running smoothly, camera connection is available and MQTT broker is connected.
The camera is not configured.
You need to configure the camera and data connection as well as your specific configuration according to your use case.
This status means that data is still generated and delivered, but there are specific issues that could have an impact on data accuracy. Issue types: Video frames cannot be retrieved correctly - at least 10% of the frames the camera delivers are broken. Performance issues - the performance (frames per second) drops below the limit required for the configured event types.
Something unexpected happened and the software is not running --> no data is generated
Issue types:
Docker container is not running correctly - please contact support - Software is not running.
Data cannot be sent to MQTT endpoint - more than 10 MQTT events have not been delivered to the MQTT broker successfully for at least 10 seconds. Please check whether your MQTT broker is up and running.
Camera not connected - the camera connection can't be established. Please check whether the camera is up and running and whether the camera details, such as user & password, are configured correctly.
The Perception Box or your hardware is offline. Check if the hardware is connected to the power supply and has a running network connection.
When you have just changed the configuration, the status will remain pending for approx. 5 minutes until the correct status is determined.
The respective stream is disabled and can only be enabled again if there are enough licenses available. This state can also be used to save the current configuration, while you don't have a need for the device to run.
Option to change camera parameters to optimize video stream settings for the SWARM solution.
Make sure to enable ONVIF on your camera and create an admin user with the same user credentials as for the camera itself. In case the camera is delivered by SWARM, ONVIF is enabled by default.
The camera settings section is split into two tabs. One tab is for checking if the Basic settings needed for the Swarm Analytics processing are correctly set. In the Advanced settings, camera parameters can be manually adjusted and optimized.
In the basic settings tab, the current main configuration of the camera is shown and compared with the recommended settings for your configuration. The icons per setting indicate if the applied settings match Swarm's recommendations.
There is an option to automatically apply the recommended settings in order to have the camera configured for achieving the best results.
As each installation is different, especially in terms of illumination and distance as well as further external factors, you can configure the camera settings individually for receiving the best image quality for data analysis with the SWARM solution.
Change and apply settings. When settings are applied, the preview frame is refreshed and you will see how the changes impact the image quality. If you are not happy with the changes you just made, click on revert settings. The settings are then reverted to those that were applied when the camera settings page was opened.
The following configuration options are available:
Brightness
Defines how dark or bright the camera image is
From 0 (dark) to 10 (bright)
Contrast
Difference between bright and dark areas of the camera image
From 0 (low contrast) to 10 (very high contrast)
Saturation
Describes the depth or intensity of colors in the camera image
From 0 (low color intensity) to 10 (high color intensity)
Sharpness
Defines how clearly details are rendered in camera image
From 0 (low details) to 10 (high details)
Shutter speed
Speed at which the shutter of the camera closes (illumination time)
Generally, a fast shutter can prevent blurry images; however, low-light conditions sometimes require a higher value. Values are in seconds, for example 1/200 s = 0.005 s
Day/Night mode
Choose between day-, night-, or auto-mode, which will apply the IR-cut filter depending on camera sensor inputs
Day, Night, Auto
WDR (Wide dynamic range)
For high-contrast illumination scenarios WDR helps to get details even in dark and bright areas
When WDR is activated, the intensity level of WDR can be adjusted
Zoom
Motorized optical zoom of cameras
Two levels of zoom distance are available indicated by the + and - buttons. Zoom is applied instantly to the camera and cannot be reverted automatically.
You can find the ONVIF setting in the following section of the camera settings on the Hikvision UI: Network --> Advanced Settings --> Integration protocol
Enable Open Network Video Interface
Make sure to select "Digest&ws-username token"
Add user
User Name: <same as for camera access>
Password: <same as for camera access>
Level: Administrator
Save
Time Synchronization needs to be correct for ONVIF calls to work
System --> System settings --> Time
Enable NTP for time synchronization
In order to configure the stream properly for best data accuracy, there are two options which will support you in the configuration process.
For easy calibration, you can use our Live calibration in the top right corner drop down of the preview frame. As you can see in the screenshot below, this mode offers visibility about what objects the software is able to detect in the current previewed frame.
The detected objects are surrounded by a so-called bounding box. Each bounding box also displays the center of the object. In order to distinguish the objects, the calibration mode uses differentiated colors for the main classes. Any event that gets delivered via MQTT is triggered by the center of the object (the dot in the center of the bounding box).
The track calibration feature overlays a relevant amount of object tracks on the screen. With the overlay of the tracks, it is clearly visible where in the frame the objects are detected best. Based on this input, it is much easier to configure your use cases properly and get good results with the first configuration attempt.
With track calibration history enabled you will be able to access the track calibration for every hour of the past 24 hours.
The colors of the tracks are split by object class so that cars, trucks, buses, people and bicycles can be distinguished.
The colors of the tracks and bounding boxes are differentiated per main class. Find the legend for the colors on the question mark in the preview frame as shown in the Screenshot below.
The essence of our computer vision engine's ability to detect and classify lies in its models.
Have a look at our documentation for the use cases (Traffic Insights, Parking Insights, Advanced Traffic Insights); we recommend a model for each use case. If unsure, use the model Traffic & Parking (Standard).
Events will contain class and subclass according to the following definitions.
car
Cars include small to medium sized cars up to SUVs, Pickups and Minivans (for example VW Caddy).
The class does not include cars pulling a trailer.
car
van
Vans are vehicles for transporting a larger number of people (between 6 and 9) or used for delivery.
car
car with trailer
Cars and vans that are pulling a trailer of any kind are defined as car with trailer.
For a correct classification, the full car and at least one of the trailer axles have to be visible.
truck
single unit truck
Single unit trucks are defined as large vehicles with two or more axles where the towing vehicle cannot be separated from the semi-trailer and is designed as a single unit.
truck
articulated truck
Articulated trucks are large vehicles with more than two axles where the towing vehicle can be separated from the semi-trailer. A towing vehicle without a semi-trailer is not included and is classified as a single unit truck.
truck
truck with trailer
Single unit trucks or articulated trucks pulling an additional trailer are defined as truck with trailer.
bus
-
A bus is defined as a vehicle transporting a large number of people.
motorbike
-
The class motorbike is defined as a person riding a motorized single-lane vehicle. Motorbikes with a sidecar are included, whereas e-bikes are not part of this class.
Motorbikes without a rider are not considered.
bicycle
-
The class bicycle is defined as a person actively riding a bicycle. People walking and pushing a bicycle are not included in this class and are considered as person.
Bicycles without a rider are not considered.
person
-
The class person includes pedestrians. People walking or riding Segways, skateboards, etc. are defined as pedestrians.
People pushing bicycles or strollers are included in this class.
scooter
The class scooter includes a person riding a so-called kick scooter, which can be either motorized or human-powered. A scooter usually consists of two wheels and a handlebar.
tram
The class tram is a public transportation vehicle operating on tracks along streets or dedicated tramways. Trams are typically electrically powered, drawing electricity from overhead wires.
other
-
Vehicles not matching the classes above are considered in the class other.
The healthiness of your device at a glance
Device Uptime
See how long this device has been up and running.
Device Status and Device Restarts
Device Free Disk Space
If the disk space of your device is filling up, you can see an early indication here.
Device Temperature
Supported for: P101/OP101/Jetson Nano
If the device is running at a high temperature (depending on specifications defined by the manufacturer) we will state a warning here. The temperature could impact the performance (throttle processing performance).
Modem Traffic, Signal Strength and Reconnects
supported for OP100/OP10
Camera status
Camera processing speed
If the FPS are dropping, there might be a problem with the camera, or the device may be getting too hot.
Generated and Pending Events
Here you can find details on how to use the Rule Engine for your customized Scenario Configuration
With the Rule Engine, you can customize your event triggers. Reducing Big Data to relevant data is possible with just a few clicks: From simple adjustments to only get counts for one direction of the Counting Line to more complex rules to monitor a Region of Interest status when a vehicle crosses a Counting Line.
For rule creation, an event trigger has to be chosen to attach it to. Depending on the type of the event trigger, options are available to set flexible filter conditions.
You can create combined conditions for RoI and CL. When they are chosen as an event trigger, the option to add another condition appears below. This subcondition needs to be based on a second RoI or CL. They will then be combined by an AND connection.
Combined rules trigger an event only in case an object is crossing the CL and the rule of the additional CL or RoI is met.
In the example below, the rule sends an event if a car, bus, truck or motorbike crosses the speed line at more than 50 km/h while, at the same time, a person has been in the RoI for longer than 5 seconds.
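Expressed as a sketch, such a combined rule only fires when both conditions hold at the moment of the crossing; the field names below are illustrative and not the exact event schema:

```python
def combined_rule_fires(crossing_event: dict, roi_state: dict) -> bool:
    """Sketch of the example rule: a vehicle crosses the speed line at > 50 km/h
    while a person has dwelled in the RoI for more than 5 seconds.
    Field names ('class', 'speed_kmh', 'objects', 'dwell_time_s') are assumptions."""
    vehicle_classes = {"car", "bus", "truck", "motorbike"}
    crossing_ok = (crossing_event["class"] in vehicle_classes
                   and crossing_event["speed_kmh"] > 50)
    person_ok = any(obj["class"] == "person" and obj["dwell_time_s"] > 5
                    for obj in roi_state["objects"])
    return crossing_ok and person_ok
```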
If you are deleting a rule that is tagged as a template, the template will be removed. In case a rule is created on a trigger (e.g.: CL) and the trigger gets deleted, the rule will disappear as well.
Overview about the Device Configuration in the Swarm Control Center
In the Device Configuration tab of the SWARM Control Center, you can centrally manage all your Perception Boxes and configure the cameras in order to capture the data as needed for your use cases.
You can see the different parts of the device configuration described below.
In this page you can find examples of rules for real world use cases.
Detect vehicles (motorized traffic) passing a street in the wrong direction (e.g.: one-way-streets of highway entrances).
Create a new rule, name it and choose Origin destination as the trigger for the rule. For U-turns, a predefined template can be used; you still have the opportunity to adapt it to your needs. Connect the existing origin and destination zones in your scenario; if an object goes from one zone back to the same zone, it can be assumed that this was a U-turn.
In traffic situations, there are several situations where a given class of street users should not use dedicated areas, e.g.:
people in the center of an intersection
Vehicles in fire-service zones
In order to check when and how often it happens, you can create a rule based on a predefined RoI in these dedicated areas. Create a new rule, name it and choose the RoI as triggers for the rule. You can find a template as an example for "person on street".
In the subcondition, you can choose "Object" as a parameter and define the minimum number of objects that need to fulfil the conditions. You can define which classes are expected or not. In addition, a dwell time condition can be added in order to only take objects into account which stay in the area longer than a given time (e.g. jaywalking, wrong parking in fire-service zones).
How often has it happened that you were cut off at a pedestrian crossing while crossing or waiting to cross? This happens on a daily basis, and quite often it comes very close to a severe incident. To know if and how often this happens, we provide a solution with our rule engine, giving you the basis to decide where to take dedicated actions. The solution is a combined rule with a CL that detects the vehicles and an RoI that focuses on pedestrians and bicycles. Configure a CL or speed line in front of the pedestrian crossing. In addition, an RoI can be configured at the pedestrian crossing and/or the waiting area next to it.
With that configuration, one or several rules can be created. In this example, one rule for this high-risk situation is defined: you can detect when at least one person is on the pedestrian crossing and a vehicle is crossing the speed line at more than 10 km/h.
Here is a short video showing how such a rule is applied.
Define at least one ROI and create an associated rule. As long as the rule is valid, the associated Quido relay output is enabled (contact closed). One or more rules can be created for the same ROI.
Pixels Per Meter is a measurement used to define the amount of potential image detail that a camera offers at a given distance.
> 60 PPM
Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
Tip: Use the Axis lens calculator or generic lens calculator.
Camera video resolution
1280×720 pixel
Camera video protocol/codec
RTSP/H264
Camera Focal Length
min. 2.8 mm varifocal lens*
Camera mounting - distance to object center
5–70 meters*
Camera mounting height
3–8 meters
Camera mounting - vertical angle to the object
<50°
Note: setting the correct distance to vehicle and camera mounting height should result in the correct vertical angle to the vehicle
Camera mounting - horizontal angle to the object
0° - 90°
Smaller vans as the VW Multivan are included as well as vehicles similar to the Fiat Ducato.
This includes autobuses, coaches, double-decker, motor buses, motor coaches, omnibuses, passenger vehicles and school buses.
This class includes tractors (with or without trailer), ATVs and quads, forklifts, road rollers, excavators and snow plows.
The device health metrics allow you to provide evidence for reliable and continuous data collection and to self-diagnose (e.g. stable network connectivity/power supply/camera connection/processing speed,... )
Gives an overview of the device status and potential restarts of the device
Gives an overview of the camera status per camera stream
In case any Device Health Metric is not showing the expected values, please follow our troubleshooting guide.
Name your rule - This name is used to create widgets in Data Analytics, and will be part of the event you receive via MQTT.
Choose the event trigger the rule should be based on. Any of your already configured event triggers can be chosen. In case Origin/Destination is selected, all configured zones are used automatically.
You have the option to choose from predefined templates or your individual rules, which you have tagged as your templates yourself. --> See later in this section how to tag a rule as a template.
Set your subconditions. With subconditions, you can filter down to only gather the relevant data for this rule. The parameter options for the subconditions depend on the chosen event trigger.
After creating a rule, the Scenario Configuration of the camera needs to be saved so that the rule is applied accordingly.
In the actions section, you can click on the tag symbol in order to save the rule as a template. If the rule is tagged as such, the symbol is highlighted.
Rules can be edited by clicking on the edit symbol, which opens the edit mode of the rule. By clicking on the bin symbol, you can delete a rule. A confirmation of the deletion is required to finalize the action.
As a first step, the scenario needs to be configured on camera level. Follow the setup guideline for a standard traffic counting. Create a new rule, name it and choose the configured counting line (CL). For wrong-way drivers, a predefined template can be used; you still have the opportunity to adapt it to your needs. For the wrong-way driver, you can create a rule that the direction needs to equal "out", which in your configured scene needs to be the wrong direction.
At an intersection, only detect objects which are performing a U-turn. As a first step, the scenario needs to be configured on camera level. Follow the setup guideline for a standard Origin/Destination configuration.
Please contact our support team if you would like to try out this feature or if you have any further questions.
Analyze any scenario that can be configured with our available event triggers
In case you need a dashboard for another use case that is not covered with Parking or Traffic Scenario, the Generic Scenario will give you this option.
In the Generic Scenario, you can create widgets based on the data generated with any event type.
You will be able to choose between the widget type described in the table below. At the widget creation process, you have the same selection options as in any other scenario.
The measures extract certain key metrics from the SWARM generated raw data. In general, there are metrics around the following areas of use:
Counts: It's always a sort of counting for either Counting Line (CL), Virtual Door (VD) or Origin/Destination Zones (OD).
Region of Interest (ROI): Calculates the number of objects reported within a certain region.
Counting Line Count
The number of objects that crossed a Counting line (CL). This can be split by direction or classification.
Counting Line IN/OUT difference
The difference of objects which crossed the CL in IN direction and OUT direction.
Counting Line IN/OUT difference = Counting Line IN - Counting Line OUT
Origin Destination Count
The number of objects that flowed from origin zone to destination zone in a scene.
Region of Interest Average Person / Cars / Trucks / Buses
Average number of objects (Person, Cars, Trucks or Buses) reported within the configured regions. For this widget type, you have the option to choose multiple classes.
Region of Interest Min Person / Cars / Trucks / Buses
Minimum number of objects (Person, Cars, Trucks or Buses) reported within the configured regions. For this widget type, you have the option to choose multiple classes
Region of Interest Max Person / Cars / Trucks / Buses
Maximum number of objects reported within the configured regions. For this widget type, you have the option to choose multiple classes
Virtual Door count
The number of objects that passed through a Virtual Door. Please remember that only objects are counted which either appear outside the VD and disappear inside it, or the other way around.
Virtual Door IN/OUT difference
The difference of objects which were counted as IN direction and OUT direction at Virtual doors.
Virtual Door IN/OUT difference = Virtual Door IN - Virtual Door OUT
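To make these measures concrete, the following sketch computes a Counting Line count, its IN/OUT difference and an RoI average from a small list of simplified events (the field names are illustrative, not the exact event schema):

```python
from statistics import mean

# Illustrative, simplified events - field names are assumptions, not the schema.
cl_events = [
    {"direction": "in", "class": "car"},
    {"direction": "in", "class": "truck"},
    {"direction": "out", "class": "car"},
]
roi_events = [{"person_count": 2}, {"person_count": 0}, {"person_count": 1}]

cl_in = sum(1 for e in cl_events if e["direction"] == "in")
cl_out = sum(1 for e in cl_events if e["direction"] == "out")

print("Counting Line Count:", len(cl_events))
print("Counting Line IN/OUT difference:", cl_in - cl_out)           # IN - OUT
print("RoI Average Person:", mean(e["person_count"] for e in roi_events))
```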
Toggle to change between Data Analytics and Device Configuration.
Sort, Search & Filter
Especially when hosting a large number of devices, you can benefit from our options to search for a specific device you want to manage. Furthermore, we offer the option to sort the list or filter for specific monitoring statuses of the camera connections. When a filter is set, this is indicated at the top, including the option to quickly clear all filters.
Device Name / ID of your Perception Boxes or your Hardware. You can change the Device Name of the Boxes according to your preferences.
The Unique ID is used for communication between edge devices (Perception Box) and Azure Cloud.
This status indicates if the connection between the Perception Box and the Management Hub (Azure) is established. Possible values are Online, Offline or Unknown. If a device is offline unexpectedly, please check out our troubleshooting guide.
The Status represents basic monitoring of SWARM software and gives an indication if the software is up and running on device level.
See the definition of the status in the Camera and Device Monitoring page.
Auto refresh Button: Whenever something has been changed in the configuration, or a status changes, this option helps you to automatically refresh the Device Configuration page.
Analyze your traffic at your counting areas or intersections across cities or urban areas
Overview about Data Analytics in the SCC
Data Analytics allows you to digitize your traffic and parking scenarios and visualize both live data and historical trend analyses. In addition, you can organize your parking areas, intersections or counting areas into different groups and display them in a list and map view.
As our detection can also gather data for other use cases, we provide the option to use Data Analytics for any generic scenario as well.
How to manage Widgets in Data Analytics Dashboards
The Dashboard overview can be customized with your widgets according to your needs. On the top left, you can select the time frame filter, which will be applied to any widget in this dashboard. The time filter is persisted individually for each dashboard on your browser. So as soon as you open the dashboard again, your last time filter will be applied.
You can move widgets across your dashboard by simply dragging and dropping them. The size of the widget can be adjusted by using the left bottom corner.
In addition, there is a full-screen option in the top right corner to display the widget dashboards in full size on your screen.
In order to create a new data widget, you can click on New Widget. In the widget creation process, the selection options vary per widget type. The widget type options depend on the scenario you have chosen for your dashboard (Parking, Traffic or Generic).
Below, you can find the description of different selection options at the widget creation process. This will give you an overview of the result of the selection options. (* mandatory)
Widget Name* - You can name your widget as you want. The name will be displayed for each widget on your Dashboard Overview.
Data aggregation - You can choose the time frame over which your data should be aggregated: per hour, day, week, month or year. E.g. if you aggregate your data per day for a traffic counting use case, all counts of the day are summed.
Data segmentation (split by) - You can split your data on given parameters of the created events. E.g. to see the counts per day split by class and subclass, choose the data segmentation fields class and subclass.
Filter data - In order to narrow down your data you can filter on the given parameters by using one of the following operators: contains, does not contain, equals, does not equal, is set, is not set
Define Output* - For displaying your data in the right output, we have different options based on the widget types.
Available output options: Table, Number, Bar Chart, Line Chart, Pie Chart, Chord Diagram
How to manage dashboards in Data Analytics
Within Data Analytics, you can create several dashboards (digital parking & traffic scenarios) and organize them in Dashboard groups in order to keep a structured control of any Analysis across your parking areas or cities.
Dashboard groups are available to bring structure to your collection of dashboards. You can create dashboard groups on the top bar by clicking on the + symbol. Name your dashboard group and simply click on Save.
By clicking on the dashboard group, you can simply navigate between the dashboards of each group. The groups are sorted alphabetically.
If you have chosen a group, you can edit the name by clicking on the pen next to the name.
By clicking on the x symbol at the Dashboard group navigation bar, you can delete the dashboard group. You will be asked to confirm the deletion.
Deleting a dashboard group will delete any dashboard linked to that group.
Dashboards can be created by clicking on New Dashboard. Each Dashboard has three tabs (Overview, Cameras and Configuration). At the creation of the Dashboard, you will be directed to the Configuration tab in order to set any specific information for your dashboard. After you have set your configuration, go to Cameras tab in order to add one or several cameras which should be taken into consideration for the dashboard.
As soon as you have allocated the cameras to the dashboard, you can start to create your dashboard by adding widgets in the Overview tab.
The overview tab will be your actual Dashboard where you can customize and analyze the data according to your needs.
See more information in the next section of the documentation.
You can add one or more cameras to your dashboard. This needs to be done in order to select from which cameras you want to analyze the data.
Select the cameras in the drop-down and click on add cameras.
Your cameras will then be displayed, and you will have the same view on the cameras as you have in the device configuration. You can see the frame of the camera and directly jump to the Scenario configuration where you can change the event type configuration.
Name your Dashboard and give it a description in order to remember what the dashboard includes. Choose the Scenario according to the use case you want to cover with the dashboard. (Traffic, Parking or Generic)
The Scenario can't be changed in editing mode, so take care to choose the right scenario during dashboard creation.
Link the Dashboard to the dashboard group of your choice. The dashboard group you were located in while creating the new dashboard will be preselected.
Paste the coordinates of the installation location into the dashboard configuration. The coordinates help you navigate across the dashboards in the map view. In addition, the coordinates automatically set the local time in your dashboard. If no coordinates are set, the time will be displayed in the UTC timezone.
For the Parking Scenario you can set further parameters. See more specific information on the dedicated Parking Scenario page.
Digitize your parking area for smoother and easier operation
In the Parking Scenario, you have the option to configure additional parameters to define your parking area. You can configure the maximum capacity and the maximum parking time. In addition, you have the option to set the current utilization in order to calibrate the parking area once in a while.
The parameters can be set and changed in the Configuration tab of the dashboard.
Consider that changing the current utilization will overwrite the current value.
In a Parking Scenario, the two standard widgets Current & Historic Parking Utilization will be automatically created for you.
For every widget, a predefined filter excludes the classes bicycle, motorbike and person in order to only consider vehicles that need a parking spot.
If you want to know the current utilization of your parking area, the Current Parking Utilization widget will tell you with one click.
Just select the widget type Current Parking Utilization, name the widget and choose if it should be calculated via Single-/Multispace detection or Entry/Exit counting. This choice will depend on the use case you have installed and configured.
To see utilization trends of your parking area, you can use the Historic Parking Utilization widget, which shows the utilization with the option to aggregate the data over given time periods. The utilization can be calculated based on Single-/Multispace detection or Entry/Exit counting.
You can choose to display the average, minimum or maximum utilization of the chosen aggregation period. In addition, you can choose between absolute and percentage figures.
The standard defined output is a line chart that you can change according to your needs.
If you want to find out how frequently your parking users enter or exit through the different entries and exits of your parking area, you can display that information with an Entry Exit Frequency widget.
You can choose the different Entries or Exits you want to consider, and aggregate and segment the data as needed.
In the example below, you see how often vehicles are using one location for entry and exit (CL direction) per day.
The parking time is shown as soon as a vehicle has entered and exited your parking area. If the license plate is not detected at either entry or exit, no parking time is calculated (this avoids falsifying the statistics).
The output for this widget is a table with the standard columns License Plate and Parking Time. If you want more information, you can add data segmentation to show, for example, where the vehicle with the given license plate entered and exited, or to see a capture of the vehicle with the license plate at entry and exit.
Please consider that ANPR can't be configured on the older SWARM Perception Box P100, so parking time widgets will not retrieve any data if you use a P100.
The Historic Parking Time widgets will show you the minimum, maximum or average parking time of your parking users by saving the parking time based on the License plates.
If the license plate is not detected at either entry or exit, no parking time is calculated (this avoids falsifying the statistics).
The standard defined output is a line chart that you can change according to your needs. On top, the data aggregation period can be changed.
Please consider that ANPR can't be configured on the older SWARM Perception Box P100, so parking time widgets will not retrieve any data if you use a P100.
Do you have parking users exceeding the maximum parking time of your parking area quite often?
You can start to automate the enforcement process by using the SWARM solution, which automatically tells you the license plates that exceeded the parking time. To provide evidence, the SWARM software takes a picture of the vehicle with the license plate and the timestamps at which the vehicle entered and exited the parking area.
You simply need to choose the Parking Time Violation widget and everything else will be done in the background for you based on the maximum parking time parameter you have set in the Dashboard Configuration tab.
You can preview the evidence picture by clicking on show. In order to download the information required for the enforcement process, you can export the table in CSV format as well as export the evidence pictures as a ZIP folder.
Please consider that ANPR can't be configured on the older SWARM Perception Box P100, so parking time widgets will not retrieve any data if you use a P100.
In order to display the occupancy of your configured Single- or Multispace parking, you can use the widget type Single / Multi Space Parking Occupancy.
You will see the occupancy level of each of your configured parking spaces in a grid. If you only want to display some dedicated parking spaces, you can select those parking spaces (RoI).
The Data Analytics widgets "Journey Distributions" and "License Plates" allow segmenting by "License Plate Area"
If you want information on your traffic based on the counts and classification of any object passing your counting area, the Traffic Counting widget is exactly what you need.
First, you need to choose the Counting Lines you want to have the count from.
You can display the count of the traffic aggregated over the chosen time period and split by any direction. Another option in this widget is to display the modal split of your traffic, which shows the distribution of the different object classes. If you have configured speed estimation for your counting line, you will be able to retrieve the average speed per counting aggregation or even split the counts into different speed estimate ranges (10 km/h ranges).
The "Include average speed in data" toggle will only give you results in case you have configured speed estimation on the chosen Counting Line.
For analyzing an intersection in order to see how your traffic is moving across the intersection, you can use the Origin-Destination Analysis, which is based on the Intersection monitoring use case configuration.
You simply choose the widget type and your output format, and you see the counts from an Origin zone to a Destination zone. You can display these in a dedicated output format called a Chord diagram.
If you want the average speed of your traffic over a given time period, then Speed Estimation is the widget type to choose.
You simply need to choose the Counting Line where you have configured your speed estimates, choose a level of aggregation and you will get a line chart or table with the average speed over your chosen time period.
If a rule has been created on the chosen cameras, there is the option to display how often the defined rule was triggered across a given time interval. Simply choose the widget type Rule trigger and the rule whose occurrence frequency you would like to see. The data aggregation can be chosen according to your individual needs.
By using the ANPR feature, the parking time of your parking users will be calculated.
Orchestration of Control Center Parameters
The administration section of your Control Center consists of the following 3 subsections.
How to Integrate your generated data in external applications
To get every generated event directly from the box without processing it through the cloud, use the MQTT option.
Be aware that with this option you receive the data of each event yourself, and you can't use Data Analytics in our SWARM Perception Platform.
API to gather the specific data out of the Swarm Control Center
Every setting and information available in the Control Center can be gathered via an API. The Swagger documentation for our demo instance as an example can be found here: Swagger UI
Generally, we stick to the OAuth flow documented here: OAuth 2.0 client credentials flow on the Microsoft identity platform - Microsoft Entra.
Make sure to add your tenant ID as a header in the authentication flow.
To gather the first part of the URL for your specific API Documentation/Swagger UI you can either contact our support or grab it from the source code of your Control Center.
1. Go to the Swagger UI.
2. The API call above will give you the status of a device and returns the following:
3. The states are defined in the API documentation below.
4. You can also get the state of the individual streams. The API returns the following (a hedged request sketch follows this list):
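As an illustration only, the following Python sketch shows how such a status request could look when using the client credentials flow. The token URL pattern follows the Microsoft identity platform; the scope, API base URL, endpoint path and the exact name of the tenant header are placeholders/assumptions that you should take from your own Swagger UI and from Swarm support.

```python
# Hedged sketch of the OAuth 2.0 client credentials flow plus a device status call.
# Scope, API base URL, endpoint path and tenant header name are assumptions.
import requests

TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<your-client-id>"
CLIENT_SECRET = "<your-client-secret>"
TOKEN_URL = f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token"
API_BASE = "https://<your-control-center-api>"   # first part of your Swagger UI URL
SCOPE = "<api-scope>/.default"                   # assumption: scope exposed by the API

def get_token() -> str:
    # Client credentials grant: exchange client ID/secret for a bearer token.
    resp = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": SCOPE,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def get_device_status(device_id: str) -> dict:
    # Hypothetical endpoint path - check your Swagger UI for the real route.
    resp = requests.get(
        f"{API_BASE}/devices/{device_id}/status",
        headers={
            "Authorization": f"Bearer {get_token()}",
            "X-Tenant-Id": TENANT_ID,  # assumption: tenant ID passed as a header
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    print(get_device_status("<device-id>"))
```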
Getting started with your custom MQTT connection
As soon as you have configured your use case in the Swarm Control Center, the SWARM software generates events. These events are transferred as standard JSON.
For higher security, you can use MQTT over SSL. Simply add the ssl:// prefix to the broker configuration.
If message compression is configured, the events are compressed with zlib (deflate/inflate).
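As a minimal sketch (assuming a TLS-enabled broker and the paho-mqtt library), the following Python subscriber receives events, optionally inflates zlib-compressed payloads, and parses the JSON. Broker address, port and topic are placeholders to be taken from your own stream configuration.

```python
# Hedged sketch of receiving SWARM events from a custom MQTT broker.
# paho-mqtt 1.x style; with paho-mqtt >= 2.0 pass mqtt.CallbackAPIVersion.VERSION1
# as the first argument to Client().
import json
import ssl
import zlib
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # placeholder
PORT = 8883                     # 8883 with TLS, typically 1883 without
TOPIC = "#"                     # placeholder: the topic configured for your stream
COMPRESSED = False              # set True if message compression is enabled

def on_message(client, userdata, msg):
    payload = msg.payload
    if COMPRESSED:
        payload = zlib.decompress(payload)  # events are zlib/inflate compressed
    event = json.loads(payload)             # events are standard JSON
    print(event.get("timestamp"), event)

client = mqtt.Client()
client.tls_set(cert_reqs=ssl.CERT_REQUIRED)  # equivalent of the ssl:// prefix
client.on_message = on_message
client.connect(BROKER, PORT)
client.subscribe(TOPIC)
client.loop_forever()
```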
A counting line event is triggered if an object crosses a virtual line (identified by the property lineId). The line has a user-defined name (property lineName). A timestamp (property timestamp) is set when the event occurred. The object can cross the line in two directions (property direction) and is either moving in or out. Additionally, the object that crosses the line is classified (property class & subclass). The classes are dependent on the use case.
In case the ANPR feature is enabled, the license plate (property plateNumber) and the license plate country (property numberPlateOrigin) will be added to the event.
With ANPR, captures of the license plate are taken at entries and exits. The license plate capture can be attached to the MQTT message in JPG format, encoded with Base64.
If speed estimation is enabled and configured, the speed estimate (property speedestimate) will give the speed estimate output in km/h.
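For orientation only, the following Python dict sketches the shape of a counting line event assembled from the properties described above. The concrete values and the exact field layout are illustrative, not the authoritative schema; validate real events against the published JSON schema.

```python
# Purely illustrative counting line event; values are made up, optional fields
# depend on which features (ANPR, speed estimation) are enabled.
counting_line_event = {
    "timestamp": "2023-12-13T08:15:30Z",   # timestamp of the crossing
    "lineId": "3f6c2a9e-0000-0000-0000-000000000000",  # identifier of the crossed line
    "lineName": "Main entrance",           # user-defined line name
    "direction": "in",                     # "in" or "out"
    "class": "car",                        # main class of the crossing object
    "subclass": "van",                     # subclass, depending on the use case
    # present only if ANPR is enabled:
    "plateNumber": "W12345A",
    "numberPlateOrigin": "A",
    # present only if speed estimation is enabled:
    "speedestimate": 42.5,                 # km/h
}
```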
The Region of Interest event depends on the type of the Region of Interest. A Region of Interest with RoI type Parking will generate a ParkingEvent, and the RoI type Generic will generate a RegionOfInterestEvent.
A parking event is triggered at a fixed time interval of 10 seconds. The information of all configured Parking RoIs is aggregated in one single event. In parkingSummary, all RoIs are listed with the configured capacity and the current count of vehicles in the RoI. As a total summary, you will have totalCapacity and totalVehicles, which give a complete overview of all configured Parking RoIs in this camera stream.
As an Early Availability feature, you can enable ANPR for Parking RoIs. This will provide the license plate (property plateNumber) and the license plate country (property numberPlateOrigin) in string format.
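A small, hedged helper that computes per-RoI and total utilization from such a ParkingEvent could look like this; the exact field names inside parkingSummary are assumed for illustration and should be checked against the published schema.

```python
# Hedged sketch: per-RoI and total utilization from a ParkingEvent.
# Field names inside "parkingSummary" entries are assumptions.
def parking_utilization(event: dict) -> None:
    for roi in event.get("parkingSummary", []):
        capacity = roi["capacity"]      # configured capacity of the RoI
        vehicles = roi["vehicles"]      # current count of vehicles in the RoI
        print(f"{roi.get('roiName', 'RoI')}: {vehicles}/{capacity} occupied")
    total_cap = event["totalCapacity"]
    total_veh = event["totalVehicles"]
    print(f"Total: {total_veh}/{total_cap} "
          f"({100 * total_veh / max(total_cap, 1):.0f}% utilization)")
```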
A region of interest event is triggered either by a state change or by a time interval (property triggerType). The state (property state) can change from occupied to vacant or vice versa. It is occupied if the number of objects in the RoI is at least as high as the configured capacity. Every event contains a user-defined name (property roiName) and a timestamp (property timestamp) of when the event occurred. Detected objects and their associated class and dwell times are listed (property objects). The classes are dependent on the use case.
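A hedged Python sketch of reacting to such RegionOfInterestEvents might look as follows; the exact enum value of triggerType and the field names inside objects are assumptions for illustration.

```python
# Hedged sketch of handling a RegionOfInterestEvent based on the properties above.
def handle_roi_event(event: dict) -> None:
    # assumption: exact triggerType values may differ from this string
    if event.get("triggerType") == "state_change":
        print(f"RoI '{event['roiName']}' is now {event['state']} at {event['timestamp']}")
    for obj in event.get("objects", []):
        # each detected object carries its class and dwell time (field names assumed)
        print(obj.get("class"), obj.get("dwellTime"))
```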
In case a rule is created on an event trigger, a rule event is sent. The rule event is triggered based on the chosen event trigger logic in combination with the defined conditions. A timestamp (property timestamp) is set when the event occurred. The rule event includes generic information such as the rule name, device and stream UUID. On top of that, the chosen standard event information is part of the message in the same format as the standard messages of the chosen event triggers.
Raw track mode traces objects as they move through the field of view. A complete trace of the route that the object took is generated as soon as the object exits the field of view. This trace includes the classification of the object (property class) and the path of the object throughout the field of view. The class is dependent on the particular use case. The track is described as a series of path elements, which include a timestamp and the top-left coordinates along with the width and height of the tracked object. There is a maximum of 10 path elements in every event.
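As an illustration, the following Python helper turns the path elements of a raw track event into object centre points (e.g. as input for heat maps). The coordinate field names (x, y, w, h) inside each path element are assumptions based on the description above.

```python
# Hedged sketch: centre points of a tracked object from the "path" elements.
def track_centres(event: dict) -> list[tuple[float, float]]:
    centres = []
    for p in event.get("path", []):       # at most 10 path elements per event
        # top-left coordinates plus width/height -> centre point (field names assumed)
        centres.append((p["x"] + p["w"] / 2, p["y"] + p["h"] / 2))
    return centres
```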
Breakdown of Object related attributes:
Example Counting Line detecting a Van. Note that the class is "car" and the subclass is "van".
Access Data Analytics widgets underlying data via API
The REST API makes generated event data available to third-party applications, retrieved from your Data Analytics widgets.
Once you configure a widget, find the item "API call" in the side menu.
In the GitHub repository below you can find example code that highlights how to integrate the data into your own application. It showcases how to handle the required authentication as well as how to perform queries.
You can see a Data Analytics widget for bicycle counting as an example below. The respective type of widget (Traffic Counting) is selected, data is aggregated per day, split by object class and direction, and we filter for bicycles only.
API Request
The API-Call option shows the respective GET request for this data, as you can see below.
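A minimal Python sketch of issuing that GET request outside the dialog could look like this; the URL and token are placeholders copied from the dialog, and the exact authorization header format should be taken from the displayed command.

```python
# Hedged sketch: querying a widget's underlying data with the token from the dialog.
import requests

API_CALL_URL = "<GET URL copied from the widget's API call dialog>"
ACCESS_TOKEN = "<token copied from the dialog or a permanent token from support>"

resp = requests.get(API_CALL_URL, headers={"Authorization": ACCESS_TOKEN}, timeout=30)
resp.raise_for_status()
for row in resp.json().get("data", []):   # Cube.js-style responses carry rows under "data"
    print(row)
```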
API Response (shortened)
In case you don't want to use Data Analytics and retrieve data via the API, we provide the option to configure a custom MQTT broker.
The Swarm Perception Box will send events in the form of JSON to an MQTT broker you configure. MQTT is used to deliver events. In case events cannot be delivered, e.g. due to missing connectivity, we cache up to 24k messages. The stream UUID is set automatically as the MQTT client ID.
There are several ways to validate a JSON document against a schema. As a starting point, we recommend an online validation tool to manually validate a JSON event against our schema.
The header of the JSON is defined by the version of the format being used (property version). The format is major.minor; a major version change denotes a breaking change, whereas a minor version change indicates backward compatibility. For unique identifiers, we rely on UUIDs. Timestamps are defined in ISO 8601 format.
Please contact our support team to enable the addition of license plate captures via MQTT.
Every event contains the class of the detected object. We arranged these objects into classes and subclasses for a better overview. You can see the classes and subclasses as well as examples in the dedicated section of this documentation.
For every Data Analytics widget, the underlying data can be queried via the provided REST API. This makes integration with third-party applications fast and easy.
The provided dialog pop-up shows detailed information on how the API request for the generated data of this particular widget looks. Copy/paste the command into a terminal and execute it. You can also test the call directly within the dialog, including the response format, by clicking on "Try it out!", which does not require the usage of a terminal.
The provided access token is temporary. For a permanent integration into third-party applications, please request a permanent access token from our support team.
We strictly follow the OAuth 2.0 client credentials flow documented by Microsoft. There are several client libraries that you can use.
The REST API is based on Cube.js. More information on the functionality of the API can be found in the Cube.js documentation.
Overview about your licensed streams
The license management section provides an overview of your software licenses. This means in detail:
The number of licenses currently in use
Number of Camera streams activated. Disabled streams don't count.
The total number of licenses that were purchased
In general, all SPS (Swarm Perception Subscriptions) can be used with any hardware that belongs to you.
The current status of each license
ACTIVE = The license is currently valid, and the expiration date lies in the future
EXPIRED = The license is no longer valid and, therefore, expired. Either the license was already renewed or you decided to let it run out.
INACTIVE = The license period starts on a future date.
The start and end date of each license validity
The order and invoice number as well as the number of streams that are included
Adding or activating additional streams is only possible if sufficient SPS licenses are available.
E-Mail Alerts to monitor the status of your devices and streams
Automatic E-Mail alerts can be created to get immediate notifications on potential issues with your Swarm Perception Boxes. In the section "Monitoring Alerts" in your Swarm Control Center, custom alerts can be created and managed. Choose from several predefined alerting conditions, choose the relevant devices, define E-Mail recipients, and get instant E-Mail notifications if a device changes the status from "Operational" to "Not operational" or "Warning".
Only admins can set and maintain monitoring alerts. For standard users and viewers, the section Monitoring Alerts is not visible in the Control Center at all.
The creation of the alert is split into three steps.
Alert conditions are based on the connection status and the stream monitoring status. The table below explains the three predefined alert conditions.
| Alert condition | Trigger | Status before | Status after |
| --- | --- | --- | --- |
| Device offline | Gets triggered if a device changes its status from 'Online' to 'Offline'. | Connection: Online | Connection: Offline |
| Device Error | Gets triggered if the stream status of one or more streams on the device changes to 'Not Running', from either 'Running' or 'Warning' (because they cannot deliver messages, connect to the camera, ...). | Stream status: Running or Warning | Stream status: Not Running |
| Device Warning | Gets triggered if the stream status of one or more streams on the device changes from 'Running' to 'Warning' (due to a degraded camera connection, ...). | Stream status: Running | Stream status: Warning |
In case one chosen condition is true, an alert will be sent. You have the option to multi-select the available conditions.
On top, you can choose to get a resolution notification as soon as the error condition is resolved.
In the multi-select table, choose the devices to which the alert should be applied. The select-all option in the top left corner of the table selects all devices on the current page of the table. To search for the right devices, use the search field in the top right corner.
If there are multiple pages of devices in the selection table, note that the multi-select will only select the devices on the active page.
In the last step of the Alert creation process, the recipients need to be defined. By clicking on add, an E-Mail address can be added. There is no limitation on the number of recipients.
In the overview table where all created Alerts are displayed, they can be edited or deleted. In the last column, you can find the action buttons to perform this.
The editing workflow looks the same as the creation process.
In this section, you can find our White papers for different use cases
Manage users having access to your Control Center
The user management section provides an overview of all users that have access to your control center as well as the possibility to add, remove, or edit users and user roles.
To add a new user, simply click on "New User" and fill out all required fields. The new user needs to set a personal password by verifying the email address via the workflow "Forgot your password" on the Login Page.
You can only change the role of existing users. If you have to change users' names or email addresses, you need to delete the user and subsequently create a new user.
Viewer: This is read-only permission for data analytics. It allows access to existing scenes and dashboards.
User: Can access device configuration and data analytics in a read/write fashion. Is allowed to reconfigure devices, create new scenes, dashboards, etc.
Admin: Can do everything a “User” is allowed to do. Additionally, an admin has access to the Administration section.
Overview about how we measure the performance of our released models
We calculate accuracy by comparing the counts obtained by our traffic counting solution against a manually obtained ground truth (GT). Delivering correct and realistic accuracy measures is most important to us, and therefore we put real effort into obtaining our GT data.
We also make sure that scenes from any performance measurement never find their way into our training dataset, thereby avoiding overtraining and unrealistic performance measurements that cannot be reached in real-world use cases.
The following example describes counting accuracy calculation for crossing lines.
Given the following results table:
Scene 1 has 2 errors (1 missed, 1 overcount)
Scene 2 has 1 error (1 missed)
In total, there are 3 errors and 16 Ground truth counts (5 + 3 + 3 + 5)
This gives us an accuracy of (16 - 3) / 16 = 81.25%.
The following describes the counting accuracy calculation for origin/destination.
Scene 1 has 2 errors (1 missed, 1 overcount)
Scene 2 has 1 error (1 missed)
In total, there are 3 errors and 11 GT counts (5 + 3 + 3)
This gives us an accuracy of (11 - 3) / 11 = 72.72%.
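Both examples follow the same general formula, where the error count covers missed counts as well as overcounts:

```latex
\text{accuracy} = \frac{N_{\mathrm{GT}} - N_{\mathrm{errors}}}{N_{\mathrm{GT}}} \times 100\%
\qquad \text{e.g. } \frac{16-3}{16} = 81.25\%, \quad \frac{11-3}{11} \approx 72.72\%
```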
For automated number plate recognition (ANPR), the accuracy logic is the same as for crossing lines, with two additional restrictions:
vehicle class is not taken into account
the number plate sent in the event is compared and has to fully match the ground truth
| image-crop | ground-truth | model-result | correct |
| --- | --- | --- | --- |
| (image) | 85BZHP | 85BZHP | YES |
| (image) | BNW525 | BNW555 | NO |
| (image) | DY741WJ | DY741WJ | YES |
| (image) | GU278MB | GU278MB | YES |
| (image) | FW749XA | FW749XA | YES |
| (image) | ERZM551 | ERZM55 | NO |
For this example, we obtain an accuracy of 4/6 × 100% ≈ 66.7%.
For our performance measures, we use different types of hardware to guarantee a stable version of our software. When we receive different results across devices in our Performance Lab, we announce the minimum percentage as the accuracy to be achieved.
In the table below, you can see the 4 different devices we are testing, as well as an example of results achieved. In this case, we would publish an accuracy of 90% as a target.
| Device | Accuracy |
| --- | --- |
| P101 | 91% |
| Nvidia Jetson AGX | 91% |
| Nvidia Jetson NX | 91% |
| Nvidia GTX 1080 | 90% |
Use case for counting traffic on dedicated urban & highway streets with the classification of vehicles according to our Classes/Subclasses
Would you like to know the traffic situation of an urban street or highway? SWARM software provides the solution to get the number of vehicles passing the street, split by object type (Classes) and direction.
In order to efficiently organize and plan strategic infrastructure and traffic installations, gathering accurate and reliable data is key. Our traffic counting solution builds on Artificial Intelligence based software that is designed to detect, count and classify objects taking part in road traffic scenarios such as highways, urban and country roads, broad walks, intersections, and roundabouts. Generated traffic data can be used as an information basis helping decision-making processes in large Smart City projects as well as to answer basic questions about local traffic situations such as:
How many trucks are using an inner-city intersection every day?
Smart Insights about traffic load — Do I need to expand the road?
How many people are driving in the wrong direction?
Why/When and Where are people parking/using side strips on the highway?
What areas are more frequently used than others on the road?
…
Technology wise our traffic counting system consists of the following parts: object detection, object tracking, counting objects crossing a virtual line in the field of interest as well as object classification. The following section of this article will briefly describe those pretrained technologies used for traffic counting.
The main task here is to distinguish objects from the background of the video stream. This is accomplished by training our algorithm to recognize a car as an object of interest in contrast to, for example, a tree. This computer vision technology deals with the localization of the object. While framing the item in the image frames retrieved from the analyzed stream, it is labeled with one of the predefined classes.
The recognized objects are furthermore classified to differentiate the different types of vehicles taking part in traffic. Depending on weight, axles and other features, the software can assign the recognized images to predefined and trained classes. For each item, our machine learning model will provide one of the object classes detected by SWARM as an output.
Where was the object initially detected, and where did it leave the retrieved camera image? We equip you with the information to answer this question. Our software detects the same object again and again, thereby tracking it from one frame to the next within the stream. The gathered data enables you to visualize the exact path of the object, e.g. for generating heat maps, analyzing frequented areas in the scene and/or planning strategic infrastructure needs.
Another technology available in our traffic counting is used to monitor the streamed scene. By manually drawing a virtual line in our Swarm Control Center (SCC), we offer the opportunity to quantify the objects of interest crossing your counting line (CL). When objects are successfully detected and tracked until they reach a CL, our software triggers an event, setting the counter for this line accordingly.
In traffic counting we distinguish between the following use cases: highway, roundabout, urban traffic and country road. We measure the accuracy values individually for each scene. This ensures that every new version of our model not only improves accuracy in some use cases but delivers more stable and more accurate measurements across all possible scenarios.
Scene description: Highway with four lanes
Task: Count cars and trucks in both directions
Conditions: daylight
Camera setup: 1280×720 resolution, 6 m height, 20 m distance
Object velocity: 60-130 km/h
Objects: >900
Scene description: Roundabout with four exits
Task: Count cars and trucks in all eight directions
Conditions: daylight
Camera setup: 1280×720 resolution, 4 m height, 30 m distance
Object velocity: 5-30 km/h
Objects: >100
Our Traffic Counting solution achieves an accuracy of over 93.59%*.
*SWARM main classes detected only (Person, Rider, Vehicle — PRV)
In order to optimize the usage of our SWARM Control Center, you need to use one of the recommended and supported browsers below.
We recommend using the most up-to-date browser that's compatible with your operating system. The following browsers are supported:
Newer browser versions are generally the safest, as they lower the chance of intrusion and other security risks. Older browsers and operating systems are not recommended since they do not always support the operation and security of our Control Center. We do not support older, out-of-date versions or browsers not mentioned below.
We test new browser versions to ensure they work properly with our websites, although not usually right away after they are released.
Use case for barrierless parking, including utilization of the parking area by recognizing the license plate of the parking customer
Would you like to get more insights about your parking spaces and the customers using them? SWARM software provides the solution to gather the data needed for parking monitoring.
In order to efficiently manage your parking infrastructure, gathering accurate and reliable data is key. Our parking monitoring solution builds on Artificial Intelligence based software that is designed to detect, count and classify objects entering indoor or outdoor parking spots. The generated data can be used as an information basis, helping to predictively guide customers in parking garages and outdoor facilities or to manage parking violations. Basically speaking, we can help you to answer questions such as:
How is the current utilization of my parking spot?
What is the historic parking utilization at an average level?
How long are my customers parking in the garage?
Is there any possibility to see and prove parking violations (e.g. long-term parking)?
…
Technology wise our parking monitoring system consists of the following parts: object detection, object tracking, counting objects crossing a virtual line in the field of interest as well as object classification and ANPR. The following section of this article will briefly describe those pretrained technologies used for parking monitoring.
The main task here is to distinguish objects from the background of the video stream. This is accomplished by training our algorithm to recognize a car as an object of interest in contrast to, for example, a tree. This computer vision technology deals with the localization of the object. While framing the item in the image frames retrieved from the analyzed stream, it is labeled with one of the predefined classes.
The recognized objects are furthermore classified to differentiate the different types of vehicles taking part in traffic. Depending on weight, axles and other features, the software can assign the recognized images to predefined and trained classes. For each item, our machine learning model will provide one of the object classes detected by SWARM as an output.
Where was the object initially detected, and where did it leave the retrieved camera image? We equip you with the information to answer this question. Our software detects the same object again and again, thereby tracking it from one frame to the next within the stream. The gathered data enables you to visualize the exact path of the object, e.g. for generating heat maps, analyzing frequented areas in the scene and/or planning strategic infrastructure needs.
Another technology available in our traffic counting is used to monitor the streamed scene. By manually drawing a virtual line in our Swarm Control Center (SCC), we offer the opportunity to quantify the objects of interest crossing your counting line (CL). When objects are successfully detected and tracked until they reach a CL, our software triggers an event, setting the counter for this line accordingly.
Before sending an event including the license plate information of a vehicle entering a parking zone, our system performs the following steps:
Our performance laboratory ("Performance Lab") is set up like a real-world installation. For each scene, we send a test video from an RTSP server to all of our supported devices using an Ethernet connection. The models and software versions to be tested run on the devices, sending messages to an MQTT broker. Retrieved messages are compared with ground-truth counts, delivering accuracy measurements as well as ensuring overall system stability.
Our ANPR parking test scenario includes the following scenes:
Our ANPR solution achieves an accuracy of over 90%.
Crossing lines are not positioned correctly
Vehicles + license plates are occluded by another vehicle
The camera image is too dark, too bright or too blurry in order to correctly detect an object. Please see our requirements for the Parking Monitoring on the page linked below.
Our performance laboratory ("Performance Lab") is set up like a real-world installation. For each scene, we send a test video from our RTSP server to all of our supported devices using an Ethernet connection. The following columns provide an overview of two scenes before we present the performance values gathered in our accuracy measurement tests.
When measuring the performance of our traffic counting solution, a crucial point is the selection of the scene. We choose real-world scenarios from some of our installations as well as publicly available video material. We make sure that accuracy values obtained in our test laboratory reflect real-life use cases in the best possible way. All video material used to test performance fulfills the specification requirements, which can be found in our documentation.
In order to understand how to interpret our accuracy numbers, we have given some more technical details on our traffic monitoring solution. The detailed way of our accuracy calculation and an explanation of our test setup is documented in our performance measurement section.
In general, there are several reasons why traffic counting systems cannot be expected to reach 100% accuracy. Those reasons can be split into various categories (technological, environmental and software side) that lead either to missed counts or to overcounts. Given the technical and environmental prerequisites specified in our requirements, we identified the following limitations in the provided software.
The SWARM Control Center is a web-based application and runs in the browser of all modern desktop and tablet devices. To log in to your Control Center, you must have JavaScript enabled in your browser.
(the latest version)
(the latest version)
(the latest version)
(the latest version, Mac only)
ANPR stands for automated number plate recognition. For detailed settings and camera requirements, we refer to our use case description. To identify the number plate of the parking customer, we use optical character recognition to read the letters and digits that identify the vehicle.
OCR stands for optical character recognition and means, basically, converting an image of text into the text itself. In our ANPR solution, we scan the image of the retrieved, classified vehicle. From this picture, our OCR solution reads the combination on the license plate to identify the customer. Recognizing the plate at entry and exit level makes it possible to track the parking time of each single vehicle.
In order to understand how to interpret our accuracy numbers, we have given some more technical details on the ANPR solution. The detailed way of our accuracy calculation and an explanation of our test setup is documented in our performance measurement section.
In general, there are several reasons why parking monitoring systems cannot be expected to reach 100% accuracy. Those reasons can be split into various categories (technological, environmental and software side) that lead either to missed counts or to overcounts. Given the technical and environmental prerequisites specified in our requirements, we identified the following limitations in the provided software.
Hardware: Jetson Orin Nano 4 GB, 32 GB SD Card, Forecr DSBOX-ORN
In order to replicate the above results, we describe our test setup in the following. To emulate RTSP cameras, we use an RTSP server. All tests have been conducted at room temperature.
| Device | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- |
| P101 | 1 | 2 | 1 | 2 | 1 | 4 | 2 |
| P401 | 3 | 6 | 3 | 6 | 3 | 12 | 6 |
| OP101AC | 1 | - | 1 | 2 | 1 | 2 | 2 |
| OP101DC | 1 | - | 1 | 1 | 1 | 1 | 1 |
| P100 | 1 | - | 0 | 2 | 0 | 3 | 2 |
| OP100 | 1 | - | 0 | 1 | 0 | 1 | 1 |

| Power mode | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| 10 W | 4 | 2 | 5 | 3 | 6 | 4 |
| MAX N | 1 | 1 | 2 | 1 | 4 | 2 |
| 20W 6 core | 3 | 3 | 6 | 3 | 12 | 6 |
| MAX N | 5 | 3 | 8 | 5 | 10 | 8 |
Needed Requirements for your SWARM Perception Box
IPv4 is required (IPv6 is not supported)
A private IPv4 address is okay. A publicly routable IPv4 address is not required.
Make sure the MTU size is at least 1500 bytes.
At least 1Mbit/s down/up
The P101/OP101/VPX Agent needs to connect to the SWARM Control Center, which is hosted in the Microsoft Azure Cloud. This requires the following outgoing ports to be open in your firewall. Incoming ports are not required to be open.
| Port | Protocol | Direction |
| --- | --- | --- |
| 80 | IPv4 - TCP/UDP | Outgoing |
| 123 | IPv4 - UDP | Outgoing |
| 443 | IPv4 - TCP/UDP | Outgoing |
| 1194 | IPv4 - UDP | Outgoing |
| 8883 | IPv4 - TCP | Outgoing |
| 5671 | IPv4 - TCP | Outgoing |
Connect your PC to the network the Perception Box is connected to.
Make sure IPv4 is supported
Make sure the DNS is able to resolve *.azure-devices.net, *.azure-devices-provisioning.net.
Make sure that all above listed outgoing ports are open.
Make sure the TLS certificate is valid (and not inspected). Watch out for Verification: OK.
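As a quick, hedged check from the same network, the following Python snippet verifies DNS resolution and outbound TCP reachability for the TCP ports listed above (UDP ports such as 123 and 1194 cannot be checked this way). The hostname is a placeholder; use the Azure endpoints of your own IoT Hub / provisioning service.

```python
# Hedged connectivity check: DNS resolution plus outbound TCP reachability.
import socket

HOST = "<your-hub>.azure-devices.net"   # placeholder, see DNS requirement above
TCP_PORTS = [443, 8883, 5671]           # outgoing TCP ports from the table above

print("resolves to:", socket.gethostbyname(HOST))
for port in TCP_PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f"port {port}: reachable")
    except OSError as err:
        print(f"port {port}: NOT reachable ({err})")
```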
Upgrading Jetpack is documented by NVIDIA.
For 2023.1 and later IotEdge 1.4 is required. For new devices our installer is handling the installation process. Existing VPX devices must be upgraded by the partner. P10X/OP10X are upgraded by Swarm with the rollout of 2023.1.
Please read the instructions from the Microsoft documentation first; roughly, these steps are required. Note: you don't need the package defender-iot-micro-agent-edge.
Can I delete /etc/iotedge/config.yaml after the upgrade?
No, please keep the file. Our containers still need access to the file. In a future version we will remove this dependency.
Early Availability Feature for supported countries
In addition to the number plate raw text and country code, we now support the number plate area code. Some number plates include an area code associated with a geographical area, such as "W" for Vienna (Austria) or "M" for Munich (Germany).
The following 13 countries are supported
Austria
Bulgaria
Switzerland
Czech Republic
Germany
Greece
Croatia
Ireland
Norway
Poland
Romania
Slovakia
Slovenia
For supported countries, we detect spaces between letters and parse the raw license plate text according to a predefined format.
In the case of countries that are not supported (e.g. Italy), the generated event won't contain an area code.
All use cases based on ANPR are supported, no additional configuration is required.
Crossing line or item behind a big obstacle
Object (PRV), with high distance to the camera
Camera perspective is not matching our set-up requirements
Objects overlap strongly, so our detection model detects more than 1 object as only 1
Color and/or shape of objects are very similar to the background, so our detection model is not able to distinguish between object and background (e.g. grey cars, persons dressed in grey/white)
Different objects (classes) look very similar from certain perspectives, e.g. single-unit-trucks are barely distinguishable from articulated-trucks when only seen from the front or behind
1. Detect the vehicle.
2. Detect the license plate of the vehicle.
3. Track the vehicle and check if it crossed the entry/exit line.
4. Recognize the letters on the detected number plate (OCR*), and send the information if the vehicle track crossed the line.
(Example: license plate image recognized as BNW525)
Scene description: Parking garage entrance
Task: Count entering vehicles and recognize the number plate
Conditions: daylight/indoor
Camera setup:
2688 × 1520 resolution
height: 3 m
distance: 10 m
focal length: 6 mm
Object velocity: 0-15 km/h
Objects: >150
Scene description: Parking garage exit
Task: Count exiting vehicles and recognize the number plate
Conditions: daylight/indoor
Camera setup:
2688 × 1520 resolution
height: 3 m
distance: 10 m
focal length: 6 mm
Object velocity: 0-15 km/h
Objects: >150
How to contact the SWARM team
If this is your first time contacting us, we will need to create an account for you. In this case, kindly find our support email here: support@swarm-analytics.com
Please find any details around our subscription and support terms in the official document.
This page provides a collection of common issues that might occur as well as steps to resolve them.
Check if the device is powered
Is the device powered with DC barrel (PoE is not supported)
Is the power supply fulfilling the recommended specs (12V, >2A)
Is the LED next to the ethernet port on? Please take a picture
Check internet connectivity
Does the P101 respond to ping in the local network?
Check if the device is powered
Is the device powered with DC via an external power supply?
Are the red and the yellow wires connected to the +-pole?
Is the power supply fulfilling the recommended specs (24VDC/4A)
Is the LED next to the connection ports on? Please take a picture
Check internet connectivity
Does the P401 respond to ping in the local network?
Check if the device is powered
Check internet connectivity
Check if your SIM card has enough data left (e.g. online portal)
Check if the SIM card works with your LTE stick
Plug the LTE stick with the SIM card into your PC or Notebook
Deactivate WLAN, unplug ethernet cable
In case you are using a Huawei stick provided by SWARM, the stick's LED has to be solid blue or green. If the LED blinks, there is no internet connection
Check if the PC/Notebook is connected to the internet by opening a website
If your PC/Notebook can connect to the internet using the LTE stick, it should work with your OP101 as well
Check OP101 Hardware
Open OP101
Can you spot any damage (e.g. loose cables)? Please take a picture
Check if the LTE stick as well as the USB connector are properly plugged in
Check if all ethernet cables connecting the P101, PoE switch and cameras are properly plugged in
How to record debug videos for calibration and performance checks of the configured scenarios
For more detailed calibration insights and performance check, you can use the advanced debug mode.
Debug mode lets you visually see the SWARM software in action. It is designed mainly for debugging on the Swarm side, but can also be used for adjusting and understanding SWARM.
Remember GDPR/DSGVO, whenever you work with the debug mode! Technically, it is possible to record the debug mode and breach data privacy!
Once the debug mode is enabled, you can access the stream on all IPs, which are configured on the SWARM Perception Box (or your own Hardware) via a browser and port 8090 with the stream ID as path. http://IP:8090/STREAM-ID
With the stream ID in the path, you can choose the dedicated stream
You may also use any kind of video streaming application, like VLC, to access the stream.
The official documentation from Microsoft
Azure IotHub specialities
The IotHub device ID must correspond to the MQTT client ID
You can only connect with one client for a given IotHub device
The SAS token expires after a pre-defined time and needs to be refreshed. You need to update the token and update the MQTT password once in a while for every Stream in the Control Center.
What does this mean in terms of Swarm?
You can either:
create a corresponding IotHub device ID for every stream (recommended and used below) OR
create random IotHub device IDs and assign one to each stream by setting the MQTT client ID.
Steps
Create an IotHub device, copy the stream ID from the Control Center
az iot hub device-identity create --hub-name <hubname> --device-id "<stream-id>" --edge-enabled
Generate a SAS token for the IotHub device.
az iot hub generate-sas-token --hub-name <hubname> --duration 51840000 --device-id <stream-id>
Monitor incoming events
az iot hub monitor-events --hub-name <hubname> -d "<stream-id>"
Test with an MQTT client (e.g. mosquitto) to publish a message. We used this root.pem file.
mosquitto_pub -p 8883 -i <stream-id> -u '<hubname>.azure-devices.net/<stream-id>/?api-version=2021-04-12' -P '<SAS token>' -t 'devices/<stream-id>/messages/events' --cafile root.pem -d -V mqttv311 -m '{"swarm":"test"}'
Make sure you receive messages at this point. Don't proceed unless this step works.
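If mosquitto is not available, a roughly equivalent test can be done with Python and paho-mqtt; the username, password (SAS token), topic and CA file follow the exact pattern of the mosquitto_pub command above.

```python
# Hedged equivalent of the mosquitto_pub test using paho-mqtt.
# paho-mqtt 1.x style; with paho-mqtt >= 2.0 pass mqtt.CallbackAPIVersion.VERSION1
# as the first argument to Client().
import ssl
import paho.mqtt.client as mqtt

HUB = "<hubname>"
STREAM_ID = "<stream-id>"
SAS_TOKEN = "<SAS token>"

client = mqtt.Client(client_id=STREAM_ID, protocol=mqtt.MQTTv311)
client.username_pw_set(
    username=f"{HUB}.azure-devices.net/{STREAM_ID}/?api-version=2021-04-12",
    password=SAS_TOKEN,
)
client.tls_set(ca_certs="root.pem", cert_reqs=ssl.CERT_REQUIRED)  # same CA file as above
client.connect(f"{HUB}.azure-devices.net", 8883)
client.loop_start()
info = client.publish(f"devices/{STREAM_ID}/messages/events", '{"swarm":"test"}', qos=1)
info.wait_for_publish()   # confirm the broker accepted the test message
client.loop_stop()
client.disconnect()
```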
Enter URL, username, password and topic as custom broker in the Control Center.
Frequently Asked Questions
In this section you will find all frequently asked questions. We hope to be able to support you with this. This area will be updated continuously.
We set the stream ID as MQTT client ID (default behaviour). You can overwrite the MQTT client ID if needed.
With the SWARM software, multiple motorized as well as non-motorized traffic classes can be surveyed - from cars to trucks with trailers to pedestrians. For motorized traffic in Germany, we have followed the classification guidelines as far as visually possible, but we also offer other standards. More details regarding the classes can be found in our technical documentation.
We appreciate your understanding that training new classes is a complex process that requires extensive research and testing to ensure that our models perform accurately and reliably at our known level of quality. If you have a relevant use case in mind, feel free to get in touch with our team to discuss the opportunities and scaling of the project.
Nevertheless, we can work with an accuracy between 95 and 99% for standardized applications, such as parking lots or traffic counting on designated highways and urban streets.
See the next questions for further information and support. For further information regarding our technology's accuracy, we recommend the accuracy section of this documentation.
Please find further details in our technical documentation.
You can find the relevant documents on our website. Please let us know via our support email if you need further documents and information.
You can find all electrical and building requirements to mount the different Perception Boxes in our documentation. Please also have a look at the quick start guide, where the setup is explained step by step. If you encounter issues in this process, have a look at our troubleshooting guidelines. Please feel free to contact our support if you need further assistance.
No videos are stored. In addition, the system only sends data, so active attacks from the outside are prevented. While the AI collects information about the objects detected in the camera images, it does not collect biometric data or video footage. More information can be found in our documentation.
Only the data of the configured events is stored. Events can be configured around traffic counts (motorized and non-motorized traffic), origin-destination analysis, and information about objects in a given zone. It is also possible to specify additional parameters that will be included in the event output. Examples are: speeds, number plates in parking lots for parking time analysis, and the maximum number of objects in a zone for a utilization analysis. The event data is transmitted from the Perception Box to the cloud in JSON format to an MQTT broker. More information can be found in our technical documentation.
The main difference lies in the data retention. While our standard model (SWARM Perception Subscription) offers a data retention period of 30 days, this can be extended to three years with the SWARM Data Subscription. Further details can be found on our website.
The final costs depend, of course, strongly on the scope and timeframe of the project. You can find our pricing model with all cost factors on our website, as well as a project example with sample costs.
Needless to say, we will provide you with support for all installations and tests, and we are also happy to send hardware for testing purposes. However, we ask for your understanding that we cannot provide this free of charge and that the expenses have to be covered. Please feel free to contact our team for further details.