Version 2023.1
Release Date: 15.03.2023
Update size (data consumption):
- P101/OP101 & NVIDIA Jetson devices: 227 MB
- P100/OP100: 469 MB
This new feature enables you to combine data from several Swarm Perception Boxes to generate journey time and journey distribution data. In a fully GDPR-compliant way, journeys are matched using pseudonymized license plates. In addition to traffic counting data with a classification into three classes (car, bus, and truck), the journeys between several locations are matched.
In Data Analytics, you can analyze the average, median, and different percentiles of the journey time aggregated over time. In addition, the number and distribution of journeys can be analyzed in the form of a matrix, as shown below.
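A minimal sketch of the idea behind this feature: plate reads at two locations are matched on a pseudonymized plate and the differences are aggregated into journey-time statistics. The SHA-256 pseudonymization, the read format, and all values below are illustrative assumptions, not the actual Perception Box implementation.

```python
import hashlib
import statistics
from datetime import datetime

def pseudonymize(plate: str) -> str:
    """Replace the clear-text plate with an irreversible hash (assumed scheme)."""
    return hashlib.sha256(plate.encode("utf-8")).hexdigest()

# Plate reads at two locations: (pseudonymized plate, timestamp)
reads_a = [
    (pseudonymize("W1234A"), datetime(2023, 3, 15, 8, 0, 10)),
    (pseudonymize("G5678B"), datetime(2023, 3, 15, 8, 1, 5)),
    (pseudonymize("L9012C"), datetime(2023, 3, 15, 8, 2, 30)),
]
reads_b = [
    (pseudonymize("W1234A"), datetime(2023, 3, 15, 8, 12, 40)),
    (pseudonymize("G5678B"), datetime(2023, 3, 15, 8, 15, 0)),
]

# Match journeys from location A to location B on the pseudonymized plate
lookup_a = dict(reads_a)
journey_times = [
    (ts_b - lookup_a[plate]).total_seconds()
    for plate, ts_b in reads_b
    if plate in lookup_a
]

# Aggregate the matched journeys: average, median, and an example percentile
print("journeys:", len(journey_times))
print("avg     :", statistics.mean(journey_times), "s")
print("median  :", statistics.median(journey_times), "s")
print("p85     :", statistics.quantiles(journey_times, n=100)[84], "s")
```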
Journey Time Visualization

Aggregated journey times
Changing camera settings in the SWARM Control Center makes installation and configuration faster and even easier. You can now adjust different camera settings directly from the SWARM Control Center. For the basic, most important settings needed to run the SWARM solution, a recommendation is provided that can be applied. On a second tab, advanced camera settings such as shutter speed or zoom can be changed to optimize the installation for the given environment.

Advanced camera settings
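The split between recommended basic settings and optional advanced settings can be pictured as a simple configuration structure. The keys and values below are purely illustrative assumptions and not the actual SWARM Control Center schema.

```python
# Illustrative only: these keys and values are assumptions, not the actual
# SWARM Control Center settings schema.
camera_settings = {
    "basic": {
        # recommended values that can be applied directly
        "resolution": "1920x1080",
        "frame_rate": 25,
    },
    "advanced": {
        # optional fine-tuning for the given environment (second tab)
        "shutter_speed_ms": 4,
        "zoom_factor": 1.5,
    },
}

def effective_settings(settings: dict) -> dict:
    """Merge the recommended basics with any advanced overrides."""
    return {**settings["basic"], **settings["advanced"]}

print(effective_settings(camera_settings))
```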
Find all details and requirements for using this feature in the corresponding section of the documentation.
Thanks to model improvements and performance boosts, accuracy on our Barrierless Parking with ANPR use case improved by 4-5%. In the real-world test cases of our performance laboratory, we now achieve an average accuracy of 97%.
Traffic & parking models stability improvement
The Traffic & Parking model as well as the Parking (Single-/Multispace) model have been improved with a focus on stability. We were able to reduce false detections that might have caused issues in some installations, especially when using RoI event triggers.
Focus area is introduced as a configuration option
To focus detection on a given area of the video stream, you can now configure focus areas. If a focus area is configured, only detections within that area are considered. This improves processing performance and increases output quality for installations where only a certain area is relevant for delivering the data needed by the use case. Especially for Advanced Traffic Insights as well as Barrierless Parking and Barrierless Parking with ANPR, a focus area can be very helpful for achieving outstanding accuracy.

Focus areas
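The effect of a focus area can be sketched as a simple filter: only detections whose center lies inside the configured polygon are kept. The polygon format, normalized coordinates, and helper names below are assumptions for illustration, not the actual configuration schema.

```python
def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: is the point (x, y) inside the polygon?"""
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Focus area given as normalized image coordinates (illustrative values)
focus_area = [(0.1, 0.2), (0.9, 0.2), (0.9, 0.8), (0.1, 0.8)]

detections = [
    {"class": "car", "center": (0.5, 0.5)},     # inside -> kept
    {"class": "truck", "center": (0.95, 0.9)},  # outside -> ignored
]
kept = [d for d in detections if point_in_polygon(*d["center"], focus_area)]
print(kept)
```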
Blurred preview frame
In order to ensure user data privacy, users will only be granted access to a pixelated preview frame. For use cases that require a clear image, such as Advanced Traffic Insights or Barrierless Parking with ANPR, admins will still have the option to retrieve an unblurred frame.
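The pixelation idea behind the blurred preview can be sketched with OpenCV by downscaling the frame heavily and scaling it back up with nearest-neighbour interpolation. This illustrates the concept only and is not the exact mechanism used in the SWARM Control Center; the file names are placeholders.

```python
import cv2

def pixelate(frame, blocks: int = 24):
    """Return a pixelated copy of the frame for privacy-preserving previews."""
    h, w = frame.shape[:2]
    small = cv2.resize(frame, (blocks, blocks), interpolation=cv2.INTER_LINEAR)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)

frame = cv2.imread("preview.jpg")          # placeholder input image
preview = pixelate(frame)                  # what a regular user would see
cv2.imwrite("preview_pixelated.jpg", preview)
```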
- Edge-to-cloud communication is stable again thanks to an update of our IoTEdge services. All SWARM Perception Boxes are updated to the new version automatically. If you are running our software as a VPX version on your own hardware, make sure to update IoTEdge to version 1.4. For more details, read Upgrade IotEdge from 1.1 to 1.4.
- For USB cameras, the software now analyzes the full frame rate that the camera delivers. If the camera delivers 30 fps, the software can process all of them, depending on the scenario. The supported formats for USB cameras are UYVY, YUY2, and YVYU (see the sketch below).
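A hedged example of grabbing the full frame rate from a USB camera with OpenCV. Requesting the YUY2 pixel format via FOURCC and the 30 fps value are illustrative assumptions to show how one of the supported formats (UYVY, YUY2, YVYU) could be selected; this is not the Perception Box pipeline itself.

```python
import cv2

cap = cv2.VideoCapture(0)                                    # first USB camera
cap.set(cv2.CAP_PROP_FOURCC, cv2.VideoWriter_fourcc(*"YUY2"))
cap.set(cv2.CAP_PROP_FPS, 30)                                # request 30 fps

print("camera reports", cap.get(cv2.CAP_PROP_FPS), "fps")
for _ in range(300):                                         # ~10 s at 30 fps
    ok, frame = cap.read()                                   # one frame per delivered frame
    if not ok:
        break
    # ... hand the frame to the analysis pipeline here ...
cap.release()
```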