This page describes the different statuses that devices and camera streams can have and what to expect from each.
In the SWARM Control Center, you will find a basic monitoring status at both the camera and the device level. This status shows whether your cameras are up and running or whether action is needed to get them running again.
In the camera overview of your devices and dashboards, you will find the camera monitoring, which tells you if your camera is working as expected. In the device configuration, you will find the device monitoring, which shows the worst state of all cameras running on the device.
Find out how to configure automatic email alerts for status changes in our Monitoring Alerts section.
If your device appears offline and this is not intended, please follow our Troubleshooting Guidelines!
The device monitoring reflects the worst status across the stream monitoring of all cameras on a device, so that your device list gives you an overview of devices where a camera is not working as expected.
The monitoring takes into consideration the system, the camera input, and the MQTT connection.
Device Status | Description |
---|---|
Online | The device is up and running (powered, connected to the internet). |
Offline | The device is offline (no power, no internet, etc.). There are several easy steps to check before you contact our support team. |

Device Monitoring Status | Description |
---|---|
OK | Everything is fine. All the cameras configured on your device are running as expected. |
Not configured | At least one of the cameras on the device is not configured. Check the camera monitoring status for more details. |
Error | At least one of the cameras on the device has an issue and is not sending data as expected. |
Warning | At least one of the cameras on the device has a Warning status. |
Offline | The device is offline. Check if the hardware is connected to the power supply and has a running network connection. |
Pending | When you have just changed the configuration of one of the cameras on the device, the status stays Pending for a maximum of 5 minutes until the correct status is determined. |
Disabled | One or more camera streams are disabled. |
Camera Status | Description |
---|---|
OK | Everything is fine. Your camera is running as expected: the software is running smoothly, the camera connection is available, and the MQTT broker is connected. |
Not configured | The camera is not configured. You need to configure the camera and data connection as well as the specific configuration for your use case. |
Warning | Data is still generated and delivered, but there are issues that could impact data accuracy. Issue types: video frames cannot be retrieved correctly (at least 10% of the frames the camera delivers are broken); performance issues (the frames per second drop below the limit required by the configured event types). |
Error | Something unexpected happened and the software is not running, so no data is generated. Issue types: the Docker container is not running correctly (software is not running; please contact support); data cannot be sent to the MQTT endpoint (more than 10 MQTT events have failed to reach the MQTT broker for at least 10 seconds; check that your MQTT broker is up and running); camera not connected (the camera connection cannot be established; check that the camera is up and running and that the camera details, for example user & password, are configured correctly). |
Offline | The Perception Box or your hardware is offline. Check if the hardware is connected to the power supply and has a running network connection. |
Pending | When you have just changed the configuration, the status stays Pending for approx. 5 minutes until the correct status is determined. |
Disabled | The respective stream is disabled and can only be enabled again if enough licenses are available. This state can also be used to keep the current configuration while you don't need the device to run. |
Overview of the Device Configuration in the SWARM Control Center
In the Device Configuration tab of the SWARM Control Center, you can centrally manage all your Perception Boxes and configure the cameras in order to capture the data as needed for your use cases.
You can see the different parts of the device configuration described below.
Configure the connection to your camera
SWARM offers Multi-Camera support, allowing you to process more than one camera per Perception Box.
To open the configuration page of a Perception Box, click on the row of the Box. There you can manage all cameras running on one device.
Although you are completely free in naming your Perception Boxes, you might want to have a logical naming scheme.
Depending on your subscription, you will have a pre-defined number of cameras you may use with your Perception Box. If you need to process more cameras, contact our Sales team.
The maximum number of cameras you can use depends on your hardware. While our SWARM Perception Box supports a fixed number of cameras, we have run tests and benchmarks for recommended hardware.
Clicking on a camera expands or collapses the corresponding settings. You can name the camera. At the top, you have the option to deactivate the camera stream. If a stream is deactivated, it is not taken into consideration by the SWARM software and does not impact performance, but its configuration is kept.
A GPS coordinate needs to be set for each camera. The GPS coordinate is mandatory and can be set by entering the coordinates or with the location picker directly on the map.
We currently support processing camera streams over RTSP as well as streams coming over USB. You can select between these options as the Connection Type.
RTSP cameras must be configured with the H264 or H264+ video codec. For more details, head over to Traffic Insights.
USB cameras must be available as V4L device at /dev/video0.
The following specifications are supported:
RAW color format: UYVY, YUY2, YVYU
Resolution: 1080p
Other camera settings:
Shutter speed, brightness, FPS are camera/sensor dependent and have to be individually calibrated for optimal results
Make sure to use a USB 3.0 camera in order to benefit from the full frame rate.
You can use VLC Media Player to test the video stream of the camera beforehand. If you are unsure which parts of the streaming URL you should use, select Custom Connection String and copy and paste the working string from VLC Media Player.
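If you prefer a scriptable check, the following is a minimal sketch using OpenCV (assuming the opencv-python package is installed; the URL and credentials are placeholders, not your real values) that verifies the stream opens and a frame can be read:

```python
# Minimal RTSP connectivity check; URL and credentials are placeholders.
import cv2

url = "rtsp://user:password@192.168.1.64:554/stream1"  # hypothetical example

cap = cv2.VideoCapture(url)
ok, frame = cap.read()
if ok:
    print(f"Stream OK, frame size: {frame.shape[1]}x{frame.shape[0]}")
else:
    print("Could not read a frame - check URL, credentials and codec (H264/H264+).")
cap.release()
```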
The other fields for the Camera Connection can be found in the manual of the camera and/or can be configured on the camera itself.
There are some special characters, which could lead to problems with the connection to the camera. If possible, avoid characters like "?", "%" and "+" in the password field.
As soon as you have configured the Camera Connection, you will see one frame from the camera as a preview. You can now start with the Scenario Configuration from here.
The Swarm Perception Box sends the results of the real-time analysis to an MQTT broker. The default configuration sends data to Azure Cloud and to Data Analytics for retrieving the data. If you want to configure a custom MQTT broker, see the Advanced set-up section of the documentation.
Using Message Compression can save up to 50% of the bandwidth used for sending events to the MQTT broker. Be aware that the broker needs to be configured for compression as well.
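For a custom MQTT set-up, a minimal consumer sketch might look like the following (using the paho-mqtt package; the broker address and topic are placeholders, and the payload structure depends on your configuration):

```python
# Minimal sketch of a custom MQTT consumer; broker and topic are placeholders.
import json
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)  # payload schema depends on your set-up
    print(msg.topic, event)

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("swarm/events/#")  # hypothetical topic filter
client.loop_forever()
```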
In the device configuration, you have seen the overall status of the cameras included in one Perception Box. On the camera level, you have the option to see the individual status to better identify the root cause of the issue (see mark 4 in the overview above).
As soon as you see a frame of your camera, you have the option to configure your Scenarios. This is where the magic happens! --> See next page!
Configure your scenario according to your covered use cases
Now, as you see your camera, you have the option to configure it. This is where the magic happens!
As SWARM software is mostly used in dedicated use cases, you can find all information for a perfect set-up in our Use Cases for Traffic Insights, Parking Insights and Advanced Traffic Insights.
In the configuration, you can select the best model for your use case as well as configure any combination of different event triggers and additional features to mirror your own use case.
Each event trigger will generate a unique ID in the background. In order for you to keep track of all your configured types, you are able to give it a custom name on the left side panel of the configuration screen. --> This name is then used for choosing the right data in Data Analytics.
Please find the abbreviation and explanation of each event type below.
We provide templates for the three different areas in order to have everything set for your use case.
Parking events --> Templates for any use case for Parking monitoring
Traffic events --> Templates for use cases around Traffic Monitoring and Traffic Safety.
People events --> Templates for using the People Full Body or People Head model.
This helps you configure your scene more easily with the corresponding available settings. You can find the description of the available Event Triggers and the individual available Trigger Settings below.
Counting Lines will trigger a count as soon as the center of an object crosses the line. While configuring a CL you should consider the perspective of the camera and keep in mind that the center of the object will trigger the count.
The CL also logs the direction in which the object crossed the line, as IN or OUT. You may toggle IN and OUT at any time to change the direction according to your needs. In addition, a custom name for the IN and OUT direction can be configured. The custom direction name can then be used as segmentation in Data Analytics and is part of the event output.
By default, a CL counts each object only once. If every crossing should be counted, there is an option to enable events for repeated CL crossings. The only limitation is that repeated crossings are only counted if they are at least 5 seconds apart.
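To illustrate the documented behavior (this is a sketch, not the actual SWARM implementation), a crossing can be detected by checking on which side of the line the object's center lies in consecutive frames:

```python
# Sketch of counting-line logic: a count fires when the object's center
# changes sides of the line between two consecutive positions.

def side(line, point):
    """Sign of the cross product: which side of the line the point is on."""
    (x1, y1), (x2, y2) = line
    px, py = point
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

def crossed(line, prev_center, curr_center):
    a, b = side(line, prev_center), side(line, curr_center)
    if a == 0 or b == 0:
        return None   # center exactly on the line - no decision yet
    if a < 0 < b:
        return "IN"   # which sign maps to IN/OUT is a configuration choice
    if b < 0 < a:
        return "OUT"
    return None

line = ((100, 400), (700, 380))               # hypothetical pixel coordinates
print(crossed(line, (300, 350), (310, 420)))  # -> "IN"
```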
Available Trigger Settings: ANPR, Speed Estimation, Events for repeated CL crossing
You can enable the Speed Estimates feature as a specific trigger setting of a Counting Line in the left side bar. This adds one additional line that can be used to configure the distance between the two lines in your scenario. For best results, use a straight stretch without bends.
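Conceptually, the estimate is the configured distance divided by the time the object needs between the two crossings; for example (our own arithmetic, not the exact internal algorithm):

```python
# Illustrative speed estimate from two line crossings.
distance_m = 20.0   # configured distance between the two lines (example value)
elapsed_s = 1.2     # time between the two crossings
speed_kmh = distance_m / elapsed_s * 3.6
print(f"{speed_kmh:.1f} km/h")  # -> 60.0 km/h
```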
RoIs count objects in the specified region. This type also provides the Class and the Dwell Time, which tells you how long the object has been in the region.
Depending on the scenario type, we differentiate between 3 types of RoIs. For these 3 types, we offer the predefined templates described below:
Zones are used for Origin-Destination (OD) analysis. Counts are generated if an object moves through OD 1 and afterwards through OD 2. At least two zones need to be configured for OD.
The first zone the object passes is the origin zone, and the last one it moves through is the destination zone.
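A minimal sketch of this origin/destination logic (illustrative only, not the actual implementation):

```python
# The first zone a track passes is the origin, the last one the destination.
def origin_destination(zone_hits):
    """zone_hits: ordered list of zone names the track moved through."""
    if len(zone_hits) < 2:
        return None  # OD needs at least two zone passes
    return zone_hits[0], zone_hits[-1]

print(origin_destination(["OD 1", "OD 2"]))  # -> ('OD 1', 'OD 2')
print(origin_destination(["OD 3", "OD 3"]))  # same zone twice: a U-turn
```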
A VD covers the need for a 3D counting line. The object either needs to move into the field and then vanish, or appear within the field and move out. Objects appearing and disappearing within the field, as well as objects passing through the field, are not counted.
Learn more about the Virtual Door logic.
The Virtual Door is designed for scenes to obtain detailed entry/exits count for doors or entrances of all kinds.
The logic for the Virtual Door is intended to be very simple. Each head or body is continuously tracked as it moves through the camera's view. Where the track starts and ends is used to define if an entry or exit event has occurred.
Entry: When a track starts within the Virtual Door and ends outside the Virtual Door, an in event is triggered
Exit: When a track starts outside the Virtual Door and ends within the Virtual Door, an out event is triggered
Walk by: When a track starts outside the Virtual Door and ends outside the Virtual Door, no event is triggered
Stay outside: When a track starts inside the Virtual Door and ends inside the Virtual Door, no event is triggered
Note: There is no need to configure the in and out directions of the door (like (legacy) Crossing Lines) as this is automatically set.
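A minimal sketch of this start/end logic (illustrative only, not the actual implementation):

```python
# Virtual Door classification from where a track starts and ends
# relative to the door region.
from typing import Optional

def classify(start_in_door: bool, end_in_door: bool) -> Optional[str]:
    if start_in_door and not end_in_door:
        return "in"    # entry
    if not start_in_door and end_in_door:
        return "out"   # exit
    return None        # walk by / stay outside: no event

print(classify(True, False))   # -> "in"
print(classify(False, False))  # -> None (walk by)
```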
You can enable the ANPR feature with a Counting Line, which will add the license plate of vehicles as an additional parameter to the generated events. When enabling the ANPR feature, please consider your local data privacy laws and regulations, as number plates are sensitive information.
The Image Retention Time can be manually set. After this time, any number plate raw information as well as screen captures will be deleted.
Please consider our use case specification to properly use this feature. The feature is especially relevant for the Barrierless Parking use case.
You can enable the Journey Time feature in the Global Settings on the left side bar. This feature generates journey time and traffic flow data. This setting is needed for Advanced Traffic Insights. You can find more technical details on the generated data in the following section.
In the Global Settings section, you have the option to add focus areas. A focus area defines the area of detection on the frame: if focus areas are defined, detections are only taken into consideration within these areas. Configured focus areas are shown on the preview frame and in the table below it, where you also have the option to delete them.
Attention: When a focus area is drawn, the live and track calibration will only show detections and tracks in these areas. So before drawing focus areas, check the track calibration to see where the tracks are on the frame, in order not to miss essential detections when defining the focus areas.
In the configuration, there are two trigger actions to choose from. Either a time or an occupancy change, depending on the use case.
In the Global Trigger settings you can adjust the RoI time interval.
The RoI time interval is used accordingly depending on the chosen trigger action:
Time --> The status of the region will be sent at the fixed configured time interval.
Occupancy --> You will receive an event if the occupancy state (vacant/occupied) changes. The RoI time interval acts as a pause time after an event was sent: the occupancy change is not checked for the configured time interval, so you receive at most one event per time frame. The state is always compared with the state sent in the last event.
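As a sketch of the occupancy behavior described above (read_state and send_event are hypothetical callbacks, not part of the product API):

```python
# Occupancy trigger sketch: an event fires when the state differs from the
# last *sent* state; afterwards checking pauses for the RoI time interval.
import time

def watch_occupancy(read_state, send_event, interval_s):
    last_sent = read_state()
    send_event(last_sent)              # initial state
    while True:                        # runs until the process is stopped
        state = read_state()
        if state != last_sent:
            send_event(state)          # at most one event per interval
            last_sent = state
            time.sleep(interval_s)     # pause before checking again
        else:
            time.sleep(0.1)
```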
In raw track mode, an event is generated as soon as an object leaves the camera frame. This event contains the exact track of the object, given as X/Y coordinates of the camera frame.
Raw Tracks should only be used if you opt for the advanced set-up with a custom MQTT connection.
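For example, a custom consumer could read the track coordinates like this (the payload field names here are hypothetical; check the actual event schema of your set-up):

```python
# Illustrative parsing of a raw track event; field names are hypothetical.
import json

payload = '{"track": [[120, 340], [135, 338], [152, 330]]}'  # example only
track = json.loads(payload)["track"]

# X/Y coordinates are in the camera frame, e.g. for drawing or map projection.
for x, y in track:
    print(f"({x}, {y})")
```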
To create your own solution, select a model and then place your event trigger type (or select raw track mode).
When a type is active, left-click and hold the white circles to move the single corner points. You can create any tetragon (four-sided polygon). To move the entire type, left-click and hold anywhere on the type.
Device information
By clicking on the pen, you may change the name of the Perception Box. There are nearly no limitations to doing so: you may use any special characters and as many characters as you want. In addition, you find the Device ID and the serial number of the device, with a copy option. The Device ID is necessary for any support case you open. The serial number is the one of the Perception Box, which you find on the label of the box.
Here you can see the individual naming of each camera on the device, which can be changed in the next steps when configuring the camera settings. Clicking on the row of a camera opens the camera settings.
Add Camera
In the configuration step of your Perception Box, you might need to add new cameras, which can be done by clicking on this button.
Retrieve Logs & Reboot Device
On top, you have the option to retrieve and display the SWARM software logs to get a more detailed overview in case the box is not running as expected; for example, you can see whether the box is able to connect to the camera. If the connection to the camera is not successful, please check the camera & network settings on your side. As every piece of hardware needs a reboot from time to time, we included the "Reboot device" function here. If you still experience issues, please contact our support team.
The Camera Status represents basic monitoring of SWARM software and gives an indication if the software, camera input and the MQTT connection is up and running on camera level.
See the definition of the status in the Camera and Device Monitoring page.
Toggle to change between Data Analytics and Device Configuration.
Sort, Search & Filter
Especially when hosting a large number of devices, you can benefit from our options to search for the specific device you want to manage. Furthermore, we offer the option to sort the list or filter for a specific monitoring status of the camera connections. When a filter is set, this is indicated at the top, including the option to quickly clear all filters.
Device Name / ID of your Perception Boxes or your Hardware. You can change the Device Name of the Boxes according to your preferences.
The Unique ID is used for communication between edge devices (Perception Box) and Azure Cloud.
This status indicates if the connection between the Perception Box and the Management Hub (Azure) is established. Possible values are Online, Offline or Unknown. If a device is offline unexpectedly, please check out our troubleshooting guide.
The Status represents basic monitoring of SWARM software and gives an indication if the software is up and running on device level.
See the definition of the status in the Camera and Device Monitoring page.
Auto refresh Button: Whenever something has been changed in the configuration, or a status changes, this option helps you to automatically refresh the Device Configuration page.
| | Single Space Parking RoI | Multi Space Parking RoI | Generic RoI |
|---|---|---|---|
| Event Trigger | Time | Time | Time or Occupancy |
| Type | Parking | Parking | People & Traffic Events |
| Default number of objects | 1 | 5 | 1 |
| Color | dark green | purple | light green |
In order to configure the stream properly for the best data accuracy, there are two options which support you in the configuration process.
For easy calibration, you can use our Live Calibration in the drop-down in the top right corner of the preview frame. As you can see in the screenshot below, this mode shows which objects the software is able to detect in the currently previewed frame.
We suggest using this calibration view especially for calibrating your Single & Multi Space use case configurations with Regions of Interest.
The detected objects are surrounded by a so-called bounding box. Each bounding box also displays the center of the object. To distinguish the objects, the calibration mode uses different colors for the main classes. Any event that is delivered via MQTT is triggered by the center of the object (the dot in the center of the bounding box).
The track calibration feature offers the option to overlay a relevant number of object tracks on the screen. With the overlay of the tracks, it becomes clearly visible where in the frame objects are detected best. Based on this input, it is much easier to configure your use cases properly and achieve good results on the first configuration attempt.
We suggest using this calibration support for any traffic monitoring use case as well as the Barrierless Parking use case.
With track calibration history enabled you will be able to access the track calibration for every hour of the past 24 hours.
The track calibration images will be stored on the edge device and are only accessible through the Control Center. Make sure that viewing, storing, and processing of these images for up to 24 hours is compliant with your applicable data privacy regulations.
The colors of the tracks are split by object class, so that cars, trucks, buses, people and bicycles can be distinguished.
The colors of the tracks and bounding boxes are differentiated per main class. You can find the legend for the colors behind the question mark in the preview frame, as shown in the screenshot below.
Here you can find details on how to use the Rule Engine for your customized Scenario Configuration
With the Rule Engine, you can customize your event triggers. Reducing Big Data to relevant data is possible with just a few clicks: From simple adjustments to only get counts for one direction of the Counting Line to more complex rules to monitor a Region of Interest status when a vehicle crosses a Counting Line.
For rule creation, an event trigger has to be chosen to attach it to. Depending on the type of the event trigger, options are available to set flexible filter conditions.
For conditions combined via AND, all conditions need to be fulfilled. In the example above, events are only sent if a bicycle or a person crosses the Counting Line in the IN direction.
You can create combined conditions for RoI and CL. When they are chosen as an event trigger, the option to add another condition appears below. This subcondition needs to be based on a second RoI or CL. They will then be combined by an AND connection.
Combined rules trigger an event only in case an object is crossing the CL and the rule of the additional CL or RoI is met.
In the example below, the rule sends an event if a car, bus, truck or motorbike crosses the speed line at more than 50 km/h while, at the same time, a person has been in the RoI for longer than 5 seconds.
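As an illustration of how such a combined rule behaves (the field names and structures here are examples, not the actual rule-engine schema):

```python
# Sketch of a combined rule: CL condition AND RoI condition must both hold.
def rule_matches(cl_event, roi_state):
    vehicle = cl_event["class"] in {"car", "bus", "truck", "motorbike"}
    speeding = cl_event["speed_kmh"] > 50
    person_dwelling = any(
        obj["class"] == "person" and obj["dwell_s"] > 5
        for obj in roi_state["objects"]
    )
    return vehicle and speeding and person_dwelling

cl_event = {"class": "car", "speed_kmh": 62}
roi_state = {"objects": [{"class": "person", "dwell_s": 7.5}]}
print(rule_matches(cl_event, roi_state))  # -> True
```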
Any created rule can be tagged as a template. This provides the option to use the same logic on any camera stream within the same Control Center.
If you are deleting a rule that is tagged as a template, the template will be removed. In case a rule is created on a trigger (e.g.: CL) and the trigger gets deleted, the rule will disappear as well.
Name your rule - This name is used to create widgets in Data Analytics, and will be part of the event you receive via MQTT.
Choose the event trigger the rule should be based on. Any of your already configured event triggers can be chosen. If Origin/Destination is selected, all configured zones are used automatically.
You have the option to choose from predefined templates or your individual rules, which you have tagged as your templates yourself. --> See later in this section how to tag a rule as a template.
Set your subconditions. With subconditions, you can filter the data down to only what is relevant for this rule. The parameter options for the subconditions depend on the chosen event trigger.
After creating a rule, the Scenario Configuration of the camera needs to be saved so that the rule is applied accordingly.
In the actions section, you can click on the tag symbol to save the rule as a template. If the rule is tagged as such, the symbol is highlighted.
Rules can be edited by clicking on the edit symbol, which opens the edit mode of the rule. By clicking on the bin symbol, you can delete a rule. A confirmation of the deletion is required to finalize the action.
Option to change camera parameters to optimize video stream settings for the SWARM solution.
The connection for changing camera settings from the SWARM Perception Platform is established via the open ONVIF standard.
Make sure to enable ONVIF on your camera and create an admin user with the same user credentials as for the camera itself. In case the camera is delivered by SWARM, ONVIF is enabled by default.
The camera settings section is split into two tabs. One tab is for checking if the Basic settings needed for the Swarm Analytics processing are correctly set. In the Advanced settings, camera parameters can be manually adjusted and optimized.
In the basic settings tab, the current main configuration of the camera is shown and compared with the recommended settings for your configuration. The icons per setting indicate if the applied settings match Swarm's recommendations.
There is an option to automatically apply the recommended settings in order to have the camera configured for achieving the best results.
As each installation is different, especially in terms of illumination and distance as well as further external factors, you can configure the camera settings individually for receiving the best image quality for data analysis with the SWARM solution.
Change and apply settings. When settings are applied, the preview frame is refreshed and you will see how the changes impact the image quality. If you are not happy with the changes you just made, click on revert settings. The settings are then reverted to those that were applied when the camera settings page was opened.
The following configuration options are available:
You can find the ONVIF setting in the following section of the camera settings on the Hikvision UI: Network --> Advanced Settings --> Integration protocol
Enable Open Network Video Interface
Make sure to select "Digest&ws-username token"
Add user
User Name: <same as for camera access>
Password: <same as for camera access>
Level: Administrator
Save
Time Synchronization needs to be correct for ONVIF calls to work
System --> System settings --> Time
Enable NTP for time synchronization
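To verify from a workstation that ONVIF responds with the configured credentials, a small sketch using the python-onvif-zeep package can help (host, port and credentials are placeholders):

```python
# Quick ONVIF reachability check; connection details are placeholders.
from onvif import ONVIFCamera

cam = ONVIFCamera("192.168.1.64", 80, "admin", "password")
info = cam.devicemgmt.GetDeviceInformation()
print(info.Manufacturer, info.Model, info.FirmwareVersion)
```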
Configuration | Description | Value |
---|---|---|
Brightness | Defines how dark or bright the camera image is | From 0 (dark) to 10 (bright) |
Contrast | Difference between bright and dark areas of the camera image | From 0 (low contrast) to 10 (very high contrast) |
Saturation | Describes the depth or intensity of colors in the camera image | From 0 (low color intensity) to 10 (high color intensity) |
Sharpness | Defines how clearly details are rendered in the camera image | From 0 (low details) to 10 (high details) |
Shutter speed | Speed at which the shutter of the camera closes (illumination time) | Generally, a fast shutter can prevent blurry images; however, low-light conditions sometimes require a higher value. Values are in seconds, for example 1/200 s = 0.005 s |
Day/Night mode | Choose between day, night, or auto mode, which applies the IR-cut filter depending on camera sensor inputs | Day, Night, Auto |
WDR (Wide dynamic range) | For high-contrast illumination scenarios, WDR helps to get details even in dark and bright areas | When WDR is activated, the intensity level of WDR can be adjusted |
Zoom | Motorized optical zoom of cameras | Two levels of zoom distance are available, indicated by the + and - buttons. Zoom is applied instantly to the camera and cannot be reverted automatically. |
On this page, you can find examples of rules for real-world use cases.
Detect vehicles (motorized traffic) passing a street in the wrong direction (e.g. one-way streets or highway entrances).
As a first step, the scenario needs to be configured on camera level. Follow the set-up guideline for a standard traffic counting use case. Create a new rule, name it and choose the configured counting line (CL). For wrong-way drivers, a predefined template can be used; you still have the opportunity to adapt it to your needs. For the wrong-way driver, you can create a rule that the direction needs to equal "out", which in your configured scene is the wrong direction.
At an intersection, only detect objects which are performing a U-turn. As a first step, the scenario needs to be configured on camera level. Follow the set-up guideline for a standard intersection monitoring use case.
Create a new rule, name it and choose Origin/Destination as the trigger for the rule. For U-turns, a predefined template can be used; you still have the opportunity to adapt it to your needs. You can connect the existing origin and destination zones in your scenario: if an object goes from one zone back to the same zone, one can assume that this was a U-turn.
In traffic situations, there are several situations where a given class of street users should not use dedicated areas, e.g.:
people in the center of an intersection
vehicles in fire-service zones
In order to check when and how often this happens, you can create a rule based on a predefined RoI in these dedicated areas. Create a new rule, name it and choose the RoI as the trigger for the rule. You can find a template as an example for "person on street".
In the subcondition, you can choose "Object" as a parameter and set the minimum number of objects which need to meet the conditions. You can define which classes are expected or not. In addition, a dwell time condition can be added in order to only take objects into account which stay in the area longer than a given time (e.g. jaywalking, wrong parking in fire-service zones).
How often have you been cut off at a pedestrian crossing while crossing or waiting to cross? This happens on a daily basis, and quite often it comes close to a severe incident. In order to know if and how often this happens, we provide a solution with our rule engine, giving you the basis to decide where to take dedicated actions. The solution is a combined rule with a CL that detects the vehicles and an RoI that focuses on pedestrians and bicycles. Configure a CL or speed line in front of the pedestrian crossing. In addition, an RoI can be configured at the pedestrian crossing and/or the waiting area next to it.
With that configuration, one or several rules can be created. In this example, one rule for this high-risk situation is defined: you can detect when at least one person is on the pedestrian crossing and a vehicle crosses the speed line at more than 10 km/h.
Here is a short video showing how such a rule is applied.
Define at least one ROI and create an associated rule. As long as the rule is valid, the associated Quido relay output is enabled (contact closed). One or more rules can be created for the same ROI.
Please contact Support if you would like to try out this feature or if you have any further questions.
The essence of our computer vision engine's ability to detect and classify lies in its models.
Have a look at our documentation for the use cases (Traffic Insights, Parking Insights, Advanced Traffic Insights); we recommend a model for each use case. If unsure, use the Traffic & Parking (Standard) model.
Events will contain a class and subclass according to the definitions below.
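As a purely illustrative example (the field names are hypothetical; consult the event schema of your set-up), an event for a classified vehicle would carry both values:

```python
# Hypothetical event excerpt showing class and subclass; names illustrative.
event = {"class": "truck", "subclass": "articulated truck"}
print(event["class"], "/", event["subclass"])
```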
The following images are intended as examples only and are not supposed to provide information about exact camera distances or perspectives.
The health of your device at a glance
Device Uptime
See how long this device has been up and running.
Device Status and Device Restarts
Device Free Disk Space
If the disk space of your device is about to run full, you can see an early indication here.
Device Temperature
Supported for: P101/OP101/Jetson Nano
If the device is running at a high temperature (depending on specifications defined by the manufacturer) we will state a warning here. The temperature could impact the performance (throttle processing performance).
Modem Traffic, Signal Strength and Reconnects
Supported for: OP100/OP10
Camera status
Camera processing speed
If the fps drop, there might be a problem with the camera, or the device might be getting too hot.
Generated and Pending Events
The device health metrics allow you to provide evidence for reliable and continuous data collection and to self-diagnose (e.g. stable network connectivity, power supply, camera connection, processing speed, ...).
Gives an overview of the device status and potential restarts of the device.
Gives an overview of the camera status per camera stream.
In case any Device Health Metric is not showing the expected values, please follow our Troubleshooting Guidelines.
Class | Subclass | Definition |
---|---|---|
car | car | Cars include small to medium sized cars up to SUVs, pickups and minivans (for example VW Caddy). The class does not include cars pulling a trailer. |
car | van | Vans are vehicles for transporting a larger number of people (between 6 and 9) or used for delivery. Smaller vans such as the VW Multivan are included, as well as vehicles similar to the Fiat Ducato. |
car | car with trailer | Cars and vans that are pulling a trailer of any kind are defined as car with trailer. For a correct classification, the full car and at least one of the trailer axles have to be visible. |
truck | single unit truck | Single unit trucks are defined as large vehicles with two or more axles where the towing vehicle cannot be separated from the semi-trailer and is designed as a single unit. |
truck | articulated truck | Articulated trucks are large vehicles with more than two axles where the towing vehicle can be separated from the semi-trailer. A towing vehicle without a semi-trailer is not included and is classified as single unit truck. |
truck | truck with trailer | Single unit trucks or articulated trucks pulling an additional trailer are defined as truck with trailer. |
bus | - | A bus is defined as a vehicle transporting a large number of people. This includes autobuses, coaches, double-deckers, motor buses, motor coaches, omnibuses, passenger vehicles and school buses. |
motorbike | - | The class motorbike is defined as a person riding a motorized single-lane vehicle. Motorbikes with a sidecar are included, whereas e-bikes are not part of this class. Motorbikes without a rider are not considered. |
bicycle | - | The class bicycle is defined as a person actively riding a bicycle. People walking and pushing a bicycle are not included in this class and are considered as person. Bicycles without a rider are not considered. |
person | - | The class person includes pedestrians; people riding Segways, skateboards, etc. are also defined as pedestrians. People pushing bicycles or strollers are included in this class. |
scooter | - | The class scooter includes a person riding a so-called kick scooter, which can be either motorized or human-powered. A scooter usually consists of two wheels and a handlebar. |
tram | - | The class tram is a public transportation vehicle operating on tracks along streets or dedicated tramways. Trams are typically electrically powered, drawing electricity from overhead wires. |
other | - | Vehicles not matching the classes above are considered in the class other. This class includes tractors (with or without trailer), ATVs and quads, forklifts, road rollers, excavators and snow plows. |