Journey time set up
Journey time is defined as the time elapsed between sightings of the same vehicle across two or more camera streams. Vehicles are identified by their number plates. Please refer to our detailed guide about ANPR for a general overview.
This guide focuses on specific details to be considered for journey times and area-wide traffic flow on public roads, focusing on camera placement, camera settings, and event trigger configuration.
Please be aware that the camera settings need to be adjusted according to the installed location, as lighting conditions might differ.
Perfect camera placement is critical to getting a clear image and readable number plates. While some parameters, such as the distance from the camera to the number plate, can be fine-tuned by zooming after installation, the mounting height and the angle between the camera and the travel direction of vehicles can only be changed by a physical, cost-intensive re-installation. The camera position must be chosen so that passing vehicles are fully visible and captured across several frames of the video stream, while the number plates remain large enough for the ANPR system to identify every single character.
We recommend mounting heights between 3 and 8 meters; the corresponding minimum capture distance then ranges from 5 to 14 meters. Besides the vertical angle constraint, number plates should be visible with at least 250 pixels per meter (PPM); this constraint determines the minimum focal length (zoom) the camera has to be set to.
Why between 3 and 8 meters of camera mounting height?
The lower bound of 3 meters is determined by practical rather than technical reasons: cameras mounted lower than 3 meters are often prone to vandalism, and headlights from passing vehicles can cause reflections on the camera. The upper bound of 8 meters follows from the resulting minimum capture distance of at least 14 meters at the required camera resolution of 1920x1080, since license plates need to be visible with 250 pixels per meter (PPM).
As the Swarm Perception Box and cameras are mainly mounted on existing infrastructure such as traffic light poles, there are two general options to mount the cameras: side mounting or overhead mounting.
When positioning the camera above the vehicles, two lanes can be covered with one sensor.
Consider mounting height (1) and capture distance (2) which determine the vertical angle (3) between the camera and the travel direction of the vehicle. The distance between the center of the lane (4) and the camera determines the horizontal angle (5) between the camera and the travel direction of the vehicle.
When mounting the camera to the side of the road, two lanes can be covered, assuming the horizontal angle between the camera and the travel direction of the vehicles does not exceed 20°.
Position the camera as close as possible to the side of the road to avoid a horizontal angle larger than 20°. Larger angles can lead to lower accuracy because parts of the number plate can become unreadable. While traveling directions (1) and (2) are the same for both vehicles, horizontal angle (3) is much larger than (4).
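The horizontal-angle rule can be checked quickly with basic trigonometry. The sketch below is illustrative; the lateral offset and capture distance values are hypothetical examples, not measurements from an actual installation:

```python
import math

def horizontal_angle_deg(lateral_offset_m: float, capture_distance_m: float) -> float:
    """Angle between the camera's line of sight and the vehicle's travel
    direction, projected onto the ground plane."""
    return math.degrees(math.atan2(lateral_offset_m, capture_distance_m))

# A camera mounted 3 m to the side of the lane center, capturing plates
# 10 m down the road, stays within the 20° limit:
angle = horizontal_angle_deg(3.0, 10.0)
print(f"{angle:.1f} deg")   # ~16.7 deg
print(angle <= 20.0)        # True
```

Moving the camera further from the roadside (e.g. a 4 m offset at the same 10 m distance) already pushes the angle to roughly 21.8°, which is why the camera should sit as close to the road as possible.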
While capturing sharp images during the day with good lighting conditions is relatively easy, low-light and dark conditions make it a lot more difficult for cameras to deliver readable number plates from moving vehicles. The following section of this guide therefore provides an overview of how to fine-tune your camera to deliver readable number plates in such conditions.
However, the setting of the following parameters strongly depends on the specific camera mounting position and its environment. A light source such as a streetlamp or a vehicle passing on a different lane can send light to the camera sensors and influence the resulting image to a great extent. For this reason, this guide can only provide a general overview of relevant settings and their effect on image quality.
We recommend that the Auto day/night switch mode from the cameras is used. As you can see in the examples below, it is crucial that the camera changes to night mode reliably.
Mounting height [m] | Minimum capture distance [m] | Maximum capture distance [m] | Range of focal length [mm]
---|---|---|---
3 | 5 | 19 | 4-12
4 | 7 | 18 | 5.4-12
5 | 9 | 18 | 6.6-12
6 | 10 | 18 | 10-12
7 | 12 | 18 | 11-12
8 | 14 | 17 | 12
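The focal-length recommendations above can be sanity-checked with a simple pinhole-camera estimate. In the sketch below, the 7.2 mm sensor width is an assumption (a typical 1/1.8" sensor); the exact value depends on the camera model, so the results are approximate:

```python
def min_focal_length_mm(ppm: float, distance_m: float,
                        sensor_width_mm: float = 7.2,
                        image_width_px: int = 1920) -> float:
    """Pinhole-camera estimate of the focal length needed to reach a given
    pixels-per-meter (PPM) density at a given capture distance.

    PPM = image_width_px * f / (sensor_width * distance), solved for f.
    """
    return ppm * sensor_width_mm * distance_m / image_width_px

# 250 PPM at the extremes of the recommended capture distances:
print(round(min_focal_length_mm(250, 5), 1))   # 4.7 (mm)
print(round(min_focal_length_mm(250, 14), 1))  # 13.1 (mm)
```

Under these assumptions the required focal lengths land close to the 4-12 mm range in the table, which is why longer capture distances need the lens zoomed toward its maximum.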
This page describes how the detection and matching of journeys work from a technical perspective.
In order to detect journeys of vehicles, the same vehicle needs to be detected at several predefined locations. This requires a dedicated identifier to tell whether two detections belong to the same vehicle.
For vehicles, the natural unique identifier is the license plate (LP), so the LP is used for matching vehicles across several locations. As LPs are considered personal data, a salted hashing function is applied to pseudonymize them.
Based on the SWARM standard use case for traffic counting, the object (vehicle) will be detected and classified. If the journey time feature is enabled, the algorithm will run an LP detection and an LP reading for each detected vehicle. The raw string of the LP will then be pseudonymized with a so-called hashing mechanism, and the pseudonymized random text will be sent within the standard Counting Line (CL) event over the encrypted network.
The individual steps are described in more detail below:
In each frame of the video stream, vehicles are detected and classified as cars, trucks, and buses. Alongside this, the vehicle is tracked across the frames of the video stream.
For each classified vehicle, the license plate is detected and mapped to the object.
For each detected license plate, an optical character recognition (OCR) is applied to read the plate. The output of this part is a text which includes the raw string of the license plate.
In order to hash the LP, a salt shaker generates random salts in the backend (Cloud) and distributes them to the edge devices. A salt is random data used as an additional input when hashing data, for example passwords or, in our case, LPs. The salt is not saved in the backend; the only place where salts are temporarily stored is on the edge device (Perception Box).
To harden the system against potential attacks, each salt has a validity window of 12 hours; after that, a new randomly generated salt is used. The graphic below illustrates an example of the hashing function used for LPs.
Salts 1-4 are generated by the salt shaker and distributed to each edge device. In order to always detect all journeys, each LP is hashed with two salts. Two salts are needed, as a journey could potentially have a longer travel time than the salt validity time. In the upcoming section, match event on possible journeys, it is shown why two salts per LP are needed.
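The salting scheme can be sketched as follows. The source does not specify the hash algorithm or salt size, so SHA-256 with a 16-byte salt and the plate string below are illustrative assumptions:

```python
import hashlib
import secrets

def make_salt() -> bytes:
    """Random salt; in the real system the backend rotates it every 12 hours."""
    return secrets.token_bytes(16)

def hash_plate(plate: str, salt: bytes) -> str:
    """Pseudonymize a raw license-plate string with a salted SHA-256 hash."""
    return hashlib.sha256(salt + plate.encode("utf-8")).hexdigest()

current_salt, previous_salt = make_salt(), make_salt()

# Each detection is published with two hashes, so a journey that spans a
# salt rotation can still be matched on the previous salt.
event_hashes = [hash_plate("W12345X", current_salt),
                hash_plate("W12345X", previous_salt)]
```

The same plate hashed with the same salt always yields the same value, which is what makes cross-location matching possible without ever transmitting the raw plate.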
If the vehicle crosses a counting line (CL), a CL event with the hashes (h) from the detected LP is sent via MQTT to the Cloud (Microsoft Azure) and saved in a structured database (DB).
On the cloud, the DB is regularly checked for possible matches within the hashes. As shown above, two hashes are created per detected vehicle. If one of the two hashes is the same for two different detections it will be saved as a journey with the journey time information, class, edge device names & GPS coordinates of the edge device.
In case the same hash is found in several locations, a multi-hop journey will be saved based on the sorting of the timestamps. (e.g.: Journey from location A to B to C)
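The matching step can be sketched schematically as follows; the device names, timestamps, and hash values are made up for illustration and do not reflect the actual database schema:

```python
from collections import defaultdict

# Schematic CL events as stored in the DB: (edge device, timestamp in s, hashes).
events = [
    ("cam-A", 100, {"h1", "h2"}),
    ("cam-B", 460, {"h2", "h3"}),
    ("cam-C", 900, {"h3", "h4"}),
]

# Group sightings by hash; a hash seen at several locations forms a journey.
by_hash = defaultdict(list)
for device, ts, hashes in events:
    for h in hashes:
        by_hash[h].append((ts, device))

journeys = []
for sightings in by_hash.values():
    if len(sightings) > 1:
        sightings.sort()  # order the hops by timestamp
        for (t0, a), (t1, b) in zip(sightings, sightings[1:]):
            journeys.append((a, b, t1 - t0))  # (from, to, journey time in s)

print(journeys)  # [('cam-A', 'cam-B', 360), ('cam-B', 'cam-C', 440)]
```

Sorting by timestamp is what turns a hash seen at three locations into an ordered multi-hop journey (A to B to C) rather than an unordered set of sightings.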
After 12 hours, the validity time of the salt used for pseudonymizing the license plate, the pseudonymized LP is deleted. This makes the pseudonymized data anonymous: 12 hours after the detection of a vehicle and its LP, all data is anonymized.
Detailed information on the Journey time and area-wide traffic flow solution in terms of data generation, camera set-up, and analytics options.
Beyond the traffic frequency at given locations, you may want statistics on how long vehicles take to get from one location to another and how traffic flows across your city or municipality. With this solution, you can generate that data with a single-sensor solution from SWARM.
For this use case, SWARM software provides you with the most relevant traffic insights: vehicle counts, including classification into the SWARM main classes. On top, you have the opportunity to add a second counting line, calibrate the distance in between, and estimate the speed of the vehicles passing both lines. By combining more sensors in different locations, the journey time as well as the statistical traffic flow distribution is generated.
The journey time and traffic flow distribution can be generated for vehicles only (car, bus and truck).
The configuration of the solution can be managed centrally in SWARM Control Center. Below, you can see how the standard is configured for optimal results.
Before you start, make sure that you have completed your camera and data configuration.
In order to retrieve the best accuracy, we strongly recommend configuring a focus area on the (at most two) lanes that should be covered for the use case.
Think of focus areas as inverted privacy zones - the model only "sees" objects inside an area, the rest of the image is black.
In order to receive the counting data as well as the journey time data, a Counting Line needs to be configured as an event trigger.
For the best counting accuracy, including journey time information, the Counting Line should be placed at a point where the vehicle and its plate are visible for approx. 10 m of travel. Also take care to configure the Counting Line at a place where the track calibration still shows stable tracks.
You can choose the IN/OUT direction as needed to retrieve the data you want. On top, you have the option to give custom direction names for the IN and OUT directions.
You can visualize data via Data Analytics in different widgets.
In our Traffic Scenario section, you can find more details about the possible Widgets to be created in the Traffic Scenario Dashboards.
Here is an example for a Journey time widget. Journey time can be shown as average, median or up to two different percentiles.
Another example below visualizes the journey distribution. A slider lets you step through the different time periods of the chosen aggregation level. On top, the figures can easily be switched between absolute and relative values.
If you need your data for further local analysis, you can export the data of any created widget as a .csv file for further processing in Excel.
If you would like to integrate the data in your IT environment, you can use the API. In Data Analytics, you will find a description of the Request to use for retrieving the data of each widget.
In this technical documentation, accuracy refers to the penetration rate of a single sensor, which is the percentage of correctly identified license plates divided by the total number of vehicles counted during a ground truth count.
The current penetration rate for this use case is 60%, taking into account different day/nighttimes, weather conditions, and traffic situations. When calculating journey time between two sensors, approximately 36% of journeys are used as the baseline, which is calculated by multiplying the penetration rate of both sensors.
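The 36% baseline follows directly from multiplying the per-sensor penetration rates, assuming each sensor reads plates independently of the other:

```python
# Penetration rate of a single sensor: correctly read plates / counted vehicles.
sensor_penetration = 0.60

# A journey is only matched if BOTH sensors read the plate, so the expected
# share of matchable journeys is the product of the two penetration rates.
journey_baseline = sensor_penetration * sensor_penetration

print(f"{journey_baseline:.0%}")  # prints "36%"
```

The independence assumption is a simplification; conditions that degrade one sensor (night, heavy rain) often degrade both, so the real matched share can deviate from this product.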
The accuracy is sufficient to generate data that can be used to make valid conclusions about vehicle traffic patterns and journey times.
Make sure to place focus areas in a way that they cover enough space before an event trigger, so that the model is able to "see" the objects for a similar amount of time as if the focus area wasn't there. The model ignores all objects outside a focus area, so there is no detection, no classification, no tracking, and no ANPR reading conducted.

Set up parameters | Recommended
---|---
Object velocity | < 80 km/h
Day/Night/Lighting | Daytime / well illuminated / night vision
Indoor/Outdoor | Outdoor
Supported Products | VPX, P401, P101/OP101
Frames Per Second | 25
Pixels Per Meter (PPM) | 250 PPM (vehicle). PPM is a measurement used to define the amount of potential image detail that a camera offers at a given distance. Using the camera parameters defined below ensures achieving the minimum required PPM value. Tip: use the Axis lens calculator or a generic lens calculator.
Camera video resolution | 1920x1080 pixel
Camera video protocol/codec | RTSP/H264 or USB 3.0/UYVY, YUY2, YVYU
Camera focal length | min. 3.6-12 mm motorized adjustable focal length
Camera mounting - distance to object center | 5-20 meters. Please consider that the zoom needs to be adjusted according to the capture distance. More details are in the installation set-up guide.
Camera mounting height | 3-8 meters. Please follow the installation set-up guide in detail.
Camera mounting - vertical angle to the object | <40°. Note: setting the correct distance to the vehicle and camera mounting height should result in the correct vertical angle to the vehicle. More information in the installation set-up guide.
Camera mounting - horizontal angle to the object | 0°-20°

Camera | Model
---|---
Dahua | IPC_HFW5442EP-ZE-B
HikVision | DS-2CD2646G2-IZS

Configuration | Settings
---|---
Counting Line | Journey Time: choose Journey time mode in Global settings
Raw Tracks | Disabled