The essence of our computer vision engine's ability to detect and classify objects lies in its models.
Have a look at our documentation for the use cases (Traffic Insights, Parking Insights, Advanced Traffic Insights); we recommend a model for each use case. If unsure, use the Traffic & Parking (Standard) model.
Events will contain a class and subclass according to the definitions below (see the sketch after the table).
The following images are intended as examples only and are not supposed to provide information about exact camera distances or perspectives.
| Class | Subclass | Definition |
|---|---|---|
| car | - | Cars include small to medium-sized cars up to SUVs, pickups, and minivans (for example a VW Caddy). The class does not include cars pulling a trailer. |
| car | van | Vans are vehicles for transporting a larger number of people (between 6 and 9) or used for delivery. Smaller vans such as the VW Multivan are included, as well as vehicles similar to the Fiat Ducato. |
| car | car with trailer | Cars and vans that are pulling a trailer of any kind are defined as car with trailer. For a correct classification, the full car and at least one of the trailer axles have to be visible. |
| truck | single unit truck | Single unit trucks are large vehicles with two or more axles where the towing vehicle cannot be separated from the semi-trailer and is designed as a single unit. |
| truck | articulated truck | Articulated trucks are large vehicles with more than two axles where the towing vehicle can be separated from the semi-trailer. A towing vehicle without a semi-trailer is not included and is classified as a single unit truck. |
| truck | truck with trailer | Single unit trucks or articulated trucks pulling an additional trailer are defined as truck with trailer. |
| bus | - | A bus is defined as a vehicle transporting a large number of people. This includes autobuses, coaches, double-deckers, motor buses, motor coaches, omnibuses, passenger vehicles, and school buses. |
| motorbike | - | The class motorbike is defined as a person riding a motorized single-lane vehicle. Motorbikes with a sidecar are included, whereas e-bikes are not part of this class. Motorbikes without a rider are not considered. |
| bicycle | - | The class bicycle is defined as a person actively riding a bicycle. People walking and pushing a bicycle are not included in this class and are considered as person. Bicycles without a rider are not considered. |
| person | - | The class person includes pedestrians as well as people riding Segways, skateboards, etc. People pushing bicycles or strollers are included in this class. |
| scooter | - | The class scooter includes a person riding a so-called kick scooter, which can be either motorized or human-powered. A scooter usually consists of two wheels and a handlebar. |
| tram | - | A tram is a public transportation vehicle operating on tracks along streets or dedicated tramways. Trams are typically electrically powered, drawing electricity from overhead wires. |
| other | - | Vehicles not matching the classes above are considered in the class other. This class includes tractors (with or without trailer), ATVs and quads, forklifts, road rollers, excavators, and snow plows. |
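To make this concrete, here is a minimal sketch of reading the class and subclass from a received event; the payload fields and values shown are illustrative assumptions, not a guaranteed schema.

```python
import json

# Illustrative only: the exact event schema of your installation may
# differ. The fields shown here are assumptions for this sketch.
raw_event = '{"class": "car", "subclass": "car with trailer"}'

event = json.loads(raw_event)
main_class = event.get("class")   # one of: car, truck, bus, motorbike, ...
subclass = event.get("subclass")  # e.g. "van", "articulated truck", or "-"
print(f"Detected {main_class} ({subclass})")
```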
To configure the stream properly for the best data accuracy, there are two options that will support you in the configuration process.
For easy calibration, you can use our Live calibration in the top-right drop-down of the preview frame. As you can see in the screenshot below, this mode shows which objects the software is able to detect in the currently previewed frame.
We especially suggest using this calibration view for calibrating your Single & Multi Space use case configurations with Regions of Interest.
Detected objects are surrounded by a so-called bounding box, which also displays the center of the object. To distinguish the objects, the calibration mode uses differentiated colors for the main classes. Any event that gets delivered via MQTT is triggered by the center of the object (the dot in the center of the bounding box).
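Because every delivered event is triggered by this center point, the following minimal sketch shows how the center is derived from the bounding-box corners (coordinate values are illustrative):

```python
# Minimal sketch: the trigger point of an object is the center of its
# axis-aligned bounding box, derived from the corner coordinates.
def bbox_center(x_min: float, y_min: float, x_max: float, y_max: float) -> tuple[float, float]:
    """Return the center point (the dot shown in the calibration view)."""
    return ((x_min + x_max) / 2, (y_min + y_max) / 2)

print(bbox_center(100, 50, 180, 120))  # (140.0, 85.0)
```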
The track calibration feature gives you the option to overlay a relevant number of object tracks on the screen. With the tracks overlaid, it is clearly visible where in the frame objects are detected best. Based on this input, it is much easier to configure your use cases properly and achieve good results on the first configuration attempt.
We suggest using this calibration support for any Traffic Monitoring use case as well as the Barrierless Parking use case.
With track calibration history enabled you will be able to access the track calibration for every hour of the past 24 hours.
The track calibration images will be stored on the edge device and are only accessible through the Control Center. Make sure that viewing, storing, and processing of these images for up to 24 hours is compliant with your applicable data privacy regulations.
The colors of the tracks are split by object class so that cars, trucks, buses, people, and bicycles can be distinguished.
The colors of the tracks and bounding boxes are differentiated per main class. You can find the legend for the colors behind the question mark in the preview frame, as shown in the screenshot below.
Configure your scenario according to your covered use cases
Now that you can see your camera, you have the option to configure it. This is where the magic happens!
As SWARM software is mostly used in dedicated use cases, you can find all the information for a perfect setup in our Use Cases for Traffic Insights, Parking Insights and Advanced Traffic Insights.
In the configuration, you can select the best model for your use case as well as configure any combination of different event triggers and additional features to mirror your own use case.
Each event trigger will generate a unique ID in the background. To keep track of all your configured types, you can give each one a custom name in the left side panel of the configuration screen. This name is then used for choosing the right data in Data Analytics.
Please find the abbreviation and explanation of each event type below.
We provide templates for the three different areas in order to have everything set for your use case.
Parking events --> Templates for any use case for Parking monitoring
Traffic events --> Templates for use cases around Traffic Monitoring and Traffic Safety.
People events --> Templates for using the People Full Body or People Head model.
This will help you configure your scene more easily with the corresponding available settings. You can find the descriptions of the available Event Triggers and the individual available Trigger Settings below.
Counting Lines will trigger a count as soon as the center of an object crosses the line. While configuring a CL, you should consider the perspective of the camera and keep in mind that it is the center of the object that triggers the count.
The CL also logs the direction in which the object crossed the line as IN or OUT. You may toggle IN and OUT at any time to change the direction according to your needs. Additionally, custom names for the IN and OUT directions can be configured; the custom direction names can then be used for segmentation in Data Analytics and are part of the event output.
By default, a CL counts each object only once. If every crossing should be counted, there is an option to enable events for repeated CL crossings. The only limitation is that crossings are only counted if they are at least 5 seconds apart (see the sketch below).
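As a hedged sketch of that rule (mirroring the described behavior, not the actual implementation), repeated crossings per tracked object could be debounced like this:

```python
import time

# Sketch of the repeated-crossing rule: a crossing is only counted if at
# least 5 seconds have passed since the last counted crossing of the
# same object. Illustrative logic only.
MIN_GAP_SECONDS = 5.0
last_counted: dict[str, float] = {}  # track id -> time of last counted crossing

def should_count(track_id: str, now: float | None = None) -> bool:
    now = time.monotonic() if now is None else now
    last = last_counted.get(track_id)
    if last is not None and now - last < MIN_GAP_SECONDS:
        return False  # too soon after the previous counted crossing
    last_counted[track_id] = now
    return True
```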
Available Trigger Settings: ANPR, Speed Estimation, Events for repeated CL crossing
You can enable the Speed Estimation feature as a specific trigger setting of a Counting Line in the left side bar. This adds one additional line, and you configure the real-world distance between the two lines in your scenario. For best results, use a straight distance without bends.
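The underlying principle can be sketched in a few lines: given the configured distance between the two lines and the timestamps of the two crossings, the average speed follows directly (function name and values are illustrative):

```python
# Sketch of the speed-estimation principle: two lines a known straight
# distance apart, speed from the time between the two crossings.
def estimate_speed_kmh(distance_m: float, t_first: float, t_second: float) -> float:
    """Average speed between the two lines in km/h."""
    return distance_m / (t_second - t_first) * 3.6

print(estimate_speed_kmh(20.0, t_first=0.0, t_second=1.5))  # 48.0 km/h
```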
RoIs count objects in the specified region. This type also provides the Class and the Dwell Time, which tells you how long the object has been in the region.
Depending on the scenario, we differentiate between 3 types of RoIs, for which we offer the predefined templates described below:
Zones are used for OD (Origin-Destination) counting. Counts are generated when an object moves through OD 1 and afterwards through OD 2. For OD, at least two zones need to be configured.
The first zone the object passes is the origin zone, and the last one it moves through is the destination zone (see the sketch below).
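As an illustration of this rule, here is a minimal sketch that derives the origin/destination pair from the sequence of zones a track passed (zone names are illustrative):

```python
# Sketch of the Origin-Destination rule: first zone passed = origin,
# last zone passed = destination; fewer than two zones -> no OD count.
def origin_destination(zones_passed: list[str]) -> tuple[str, str] | None:
    if len(zones_passed) < 2:
        return None
    return zones_passed[0], zones_passed[-1]

print(origin_destination(["OD 1", "OD 2"]))          # ('OD 1', 'OD 2')
print(origin_destination(["OD 2", "OD 3", "OD 1"]))  # ('OD 2', 'OD 1')
```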
A VD covers the need for a 3D counting line. The object either needs to move into the field and then vanish, or appear within the field and move out. Objects appearing and disappearing within the field, as well as objects passing through the field, are not counted.
Learn more about the Virtual Door logic.
The Virtual Door is designed to obtain detailed entry/exit counts for doors or entrances of all kinds.
The logic of the Virtual Door is intentionally simple. Each head or body is continuously tracked as it moves through the camera's view. Where the track starts and ends determines whether an entry or exit event has occurred.
Entry: When a track starts within the Virtual Door and ends outside of it, an in event is triggered.
Exit: When a track starts outside the Virtual Door and ends within it, an out event is triggered.
Walk by: When a track starts outside the Virtual Door and also ends outside of it, no event is triggered.
Stay inside: When a track starts inside the Virtual Door and also ends inside it, no event is triggered.
Note: There is no need to configure the in and out directions of the door (unlike with (legacy) Crossing Lines), as this is set automatically.
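To make the rules above concrete, here is a hedged sketch of the start/end classification using a standard ray-casting point-in-polygon test; it mirrors the described logic, not the product's internal implementation:

```python
# Sketch of the Virtual Door logic: only where a track starts and ends
# relative to the door region decides the event type.
Point = tuple[float, float]

def point_in_polygon(p: Point, polygon: list[Point]) -> bool:
    """Standard ray-casting test: is point p inside the polygon?"""
    x, y = p
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

def classify_track(start: Point, end: Point, door: list[Point]) -> str | None:
    start_in = point_in_polygon(start, door)
    end_in = point_in_polygon(end, door)
    if start_in and not end_in:
        return "entry"  # appeared at the door, moved into the scene
    if not start_in and end_in:
        return "exit"   # moved towards the door and vanished there
    return None         # walk by or stay inside: no event

door = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)]
print(classify_track((50.0, 25.0), (300.0, 200.0), door))  # entry
```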
You can enable the ANPR feature with a Counting Line, which will add the license plate of vehicles as an additional parameter to the generated events. When enabling the ANPR feature, please consider your local data privacy laws and regulations, as number plates are sensitive information.
The Image Retention Time can be set manually. After this time, any raw number plate information as well as screen captures is deleted.
Please consult our Use Case specification to use this feature properly. The feature is especially relevant for the Barrierless Parking use case.
You can enable the Journey Time feature in the Global Settings on the left side bar. This feature generates journey time and traffic flow data. This setting is needed for Advanced Traffic Insights. You can find more technical details on the generated data in the following section:
In the Global Settings section, you have the option to add focus areas. A focus area defines the area of detection on the frame: if focus areas are defined, detections are only taken into consideration within these areas. Configured focus areas are shown on the preview frame and in the table below, where you also have the option to delete them.
Attention: When a focus area is drawn, the live and track calibration will only show detections and tracks within these areas. Therefore, before drawing focus areas, check the track calibration to see where the tracks are in the frame, so that you do not miss essential detections when defining the focus areas (see the sketch below).
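As an illustration of this behavior, here is a minimal sketch that discards detections outside the focus areas; focus areas are simplified to axis-aligned rectangles, which is an assumption for brevity (real focus areas are drawn shapes):

```python
# Sketch of focus-area filtering: only detections whose center lies in
# at least one focus area are taken into consideration.
Rect = tuple[float, float, float, float]  # x_min, y_min, x_max, y_max

def in_any_focus_area(center: tuple[float, float], areas: list[Rect]) -> bool:
    x, y = center
    return any(x0 <= x <= x1 and y0 <= y <= y1 for x0, y0, x1, y1 in areas)

detections = [((140.0, 85.0), "car"), ((620.0, 40.0), "person")]
focus_areas: list[Rect] = [(0.0, 0.0, 400.0, 300.0)]
print([cls for center, cls in detections if in_any_focus_area(center, focus_areas)])
# ['car']
```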
In the configuration, there are two trigger actions to choose from: either a time interval or an occupancy change, depending on the use case.
In the Global Trigger settings, you can adjust the RoI time interval.
The RoI time interval is applied differently depending on the chosen trigger action:
Time --> The status of the region will be sent at the fixed configured time interval.
Occupancy --> You will receive an event if the occupancy state (vacant/occupied) changes. The RoI time interval acts as a pause time after an event has been sent: the occupancy state is not re-checked for the configured time interval, so you will receive at most one event per time frame. The state is always compared with the state sent in the last event (see the sketch below).
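A hedged sketch of this pause-interval behavior (mirroring the description above, not the actual implementation):

```python
# Sketch of the occupancy trigger: after an event is sent, the state is
# not re-checked until the RoI time interval has elapsed; an event is
# only sent when the state differs from the one sent last.
class OccupancyTrigger:
    def __init__(self, interval_s: float):
        self.interval_s = interval_s
        self.last_state: bool | None = None  # state sent in the last event
        self.last_time: float | None = None  # time the last event was sent

    def update(self, occupied: bool, now: float) -> bool:
        """Return True if an event should be sent for this sample."""
        if self.last_time is not None and now - self.last_time < self.interval_s:
            return False  # still inside the pause interval
        if occupied == self.last_state:
            return False  # no change compared to the last sent event
        self.last_state, self.last_time = occupied, now
        return True

t = OccupancyTrigger(interval_s=60.0)
print(t.update(True, now=0.0))    # True: occupancy changed, event sent
print(t.update(False, now=30.0))  # False: inside the pause interval
print(t.update(False, now=90.0))  # True: changed vs. last sent state
```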
In raw track mode, an event is generated as soon as an object leaves the camera frame. With this event, the exact track of the object is delivered as X/Y coordinates in the camera frame.
Raw Tracks should only be used if you opt for the advanced setup with a custom MQTT connection (see the sketch below).
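For such a custom connection, here is a minimal consumer sketch using the third-party paho-mqtt package (version 2.x); the broker address, topic name, and payload fields are placeholders, not the actual values of your installation:

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"  # placeholder: your MQTT broker
TOPIC = "swarm/raw-tracks"          # placeholder: your configured topic

def on_message(client, userdata, message):
    event = json.loads(message.payload)
    # Assumption for this sketch: the event carries the track as a list
    # of X/Y coordinates in the camera frame.
    for point in event.get("track", []):
        print(point["x"], point["y"])

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_message = on_message
client.connect(BROKER_HOST)
client.subscribe(TOPIC)
client.loop_forever()
```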
To create your own solution, select a model for your solution and then place your type (or select raw tracks mode).
When a type is active, left-click and hold the white circles to move the individual corner points. You can create any tetragon (four-sided polygon). To move the entire type, left-click and hold anywhere on it.
This section offers the option to change camera parameters to optimize video stream settings for the SWARM solution.
The connection for changing camera settings from the SWARM Perception Platform is established via the open ONVIF standard.
Make sure to enable ONVIF on your camera and create an admin user with the same credentials as for the camera itself (a verification sketch follows the Hikvision checklist below). If the camera is delivered by SWARM, ONVIF is enabled by default.
The camera settings section is split into two tabs. One tab is for checking if the Basic settings needed for the Swarm Analytics processing are correctly set. In the Advanced settings, camera parameters can be manually adjusted and optimized.
In the basic settings tab, the current main configuration of the camera is shown and compared with the recommended settings for your configuration. The icons per setting indicate if the applied settings match Swarm's recommendations.
There is an option to automatically apply the recommended settings so that the camera is configured to achieve the best results.
As each installation is different, especially in terms of illumination, distance, and other external factors, you can configure the camera settings individually to get the best image quality for data analysis with the SWARM solution.
Change and apply settings. When settings are applied, the preview frame refreshes so you can see how the changes impact the image quality. If you are not happy with the changes you just made, click revert settings; the settings are then reverted to those that were applied when the camera settings page was opened.
The following configuration options are available:
You can find the ONVIF setting in the following section of the camera settings on the Hikvision UI: Network --> Advanced Settings --> Integration protocol
Enable Open Network Video Interface
Make sure to select "Digest&ws-username token"
Add user
User Name: <same as for camera access>
Password: <same as for camera access>
Level: Administrator
Save
Time Synchronization needs to be correct for ONVIF calls to work
System --> System settings --> Time
Enable NTP for time synchronization
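If you want to check outside the Control Center that the ONVIF user and time settings work, a minimal sketch using the third-party onvif-zeep Python package may help; the package choice is an assumption (not part of the SWARM product), and host, port, and credentials are placeholders:

```python
from onvif import ONVIFCamera  # third-party package: onvif-zeep

# Placeholders: replace with your camera's address and the admin user
# created above (same credentials as for camera access).
cam = ONVIFCamera("192.168.0.64", 80, "admin-user", "password")

# Reading the device time is a simple authenticated ONVIF call; it also
# lets you check the time synchronization required for ONVIF to work.
info = cam.devicemgmt.GetSystemDateAndTime()
print(info.UTCDateTime)
```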
| Configuration | Description | Value |
|---|---|---|
| Brightness | Defines how dark or bright the camera image is | From 0 (dark) to 10 (bright) |
| Contrast | Difference between bright and dark areas of the camera image | From 0 (low contrast) to 10 (very high contrast) |
| Saturation | Describes the depth or intensity of colors in the camera image | From 0 (low color intensity) to 10 (high color intensity) |
| Sharpness | Defines how clearly details are rendered in the camera image | From 0 (low details) to 10 (high details) |
| Shutter speed | Speed at which the shutter of the camera closes (illumination time) | Generally, a fast shutter can prevent blurry images; however, low-light conditions sometimes require a higher value. Values are in seconds, for example 1/200 s = 0.005 s |
| Day/Night mode | Choose between day, night, or auto mode, which applies the IR-cut filter depending on camera sensor inputs | Day, Night, Auto |
| WDR (Wide Dynamic Range) | For high-contrast illumination scenarios, WDR helps to retain details in both dark and bright areas | When WDR is activated, the intensity level of WDR can be adjusted |
| Zoom | Motorized optical zoom of cameras | Two levels of zoom distance are available, indicated by the + and - buttons. Zoom is applied instantly to the camera and cannot be reverted automatically. |
The predefined RoI templates differ in the following default settings:

| | Single Space Parking RoI | Multi Space Parking RoI | Generic RoI |
|---|---|---|---|
| Event Trigger | Time | Time | Time or Occupancy |
| Type | Parking | Parking | People & Traffic Events |
| Default number of objects | 1 | 5 | 1 |
| Color | dark green | purple | light green |