Technical Documentation
Version 2023.1

Set-up Single Space/Multi Space Parking

Gather the real-time occupancy state of specific parking spaces: free or occupied
If you simply want to know whether your specific parking spaces are occupied or free, SWARM provides a solution that makes this easy. See for yourself:

What data can be generated?

For this use case, the SWARM software provides all relevant data for Single Space detection within your parking area. The solution provides the occupancy state of each of your configured parking spaces.
Single Space detection gives you the occupancy state of each parking space (free or occupied) as well as information about the object in the space, including its classification. Note, however, that the following configuration set-up is optimized to detect vehicles, not people or bicycles. In addition, classification accuracy depends on the camera installation: the more top-down the view, the less accurate the classification will be.

Camera placement

Good camera placement and an understanding of the following section are key to accurate detections for Single Space Parking.
The main challenge in planning a camera installation is avoiding potential occlusions by other cars. We advise using the Axis lens calculator or a generic lens calculator and testing your parking setup against the following conditions:
  • put a car on one of the parking spaces
  • put a large vehicle (high van, small truck - the largest vehicle that you expect in your parking) on all parking spaces next to your car
  • if you can still see >70 % of the car, then this parking spot is valid.
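The three-step check above can be approximated with simple line-of-sight geometry. The following sketch is a rough model, not part of the SWARM software: the camera height, vehicle heights, and distances are assumed example values, and vehicles are modelled as flat vertical rectangles.

```python
def visible_fraction(cam_height, occ_height, occ_dist, car_height, car_dist):
    """Approximate fraction of a parked car's height visible past an occluder.

    Models the camera as a point at cam_height (m) and both vehicles as
    vertical rectangles at their ground distances (m) from the camera.
    """
    if car_dist <= occ_dist:
        return 1.0  # the car is in front of the occluder
    # Height below which the car is hidden: the sight line from the camera
    # over the occluder's top edge, extended to the car's distance.
    shadow = cam_height - (cam_height - occ_height) * car_dist / occ_dist
    hidden = min(max(shadow, 0.0), car_height)
    return 1.0 - hidden / car_height

# Example: camera mounted at 8 m, a 2.5 m high van at 10 m in front of
# a 1.5 m high car at 14 m.
print(f"{visible_fraction(8.0, 2.5, 10.0, 1.5, 14.0):.0%} visible")  # 80% visible
# The same scene with a 3 m mounting height hides the car completely:
print(f"{visible_fraction(3.0, 2.5, 10.0, 1.5, 14.0):.0%} visible")  # 0% visible
```

By the rule above, a spot counts as valid only if the result stays above 70 %, which is one way to see why higher mounting points are recommended.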

General & easy recommendations for deciding where to place the camera:

  • Parking spots have to be fully visible (inside the camera's field of view). We do not guarantee full accuracy for cropped single parking spaces.
  • Avoid objects (trees, poles, flags, walls, other vehicles) that occlude the parking spaces. Avoid camera positions where cars (especially high cars like vans) occlude other cars.
  • Occlusions by other parked cars mainly happen if parking spaces are aligned along the camera's viewing direction.

For a better overview of installations, see the details on camera distance to objects and mounting height below:

What needs to be considered for a successful analysis?

Camera Set-up
Find detailed information about camera requirements/settings as well as camera positioning in the table below.
  • Pixels Per Meter (PPM): > 30 PPM. PPM is a measurement used to define the amount of potential image detail that a camera offers at a given distance. Using the camera parameters defined below ensures that the minimum required PPM value is achieved.
  • Camera video resolution: 1280×720 pixel
  • Camera video protocol/codec:
  • Camera focal length: 2.8 mm - 4 mm
  • Camera mounting - distance to object center: 5-30 m (cars in the center of the image). For a distance of 5 m we guarantee high accuracy for 3 parking spaces aligned orthogonally to the camera. The higher the distance to the camera, the more parking spaces can be monitored.
  • Camera mounting height: indoor 2.5-5 m, outdoor 2.5-10 m. Higher is better: vehicles can occlude the parked cars, hence we recommend higher mounting points.
  • Camera FPS: > 25 FPS
  • Wide Dynamic Range: must be enabled
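As a plausibility check for the requirements above, the PPM value can be estimated with a pinhole-camera model. The sensor width used below is an assumption (a typical 1/2.8" sensor, roughly 5.4 mm wide); check your camera's datasheet for the actual value.

```python
import math

def pixels_per_meter(h_resolution_px, focal_length_mm, sensor_width_mm, distance_m):
    """Horizontal pixels per meter of scene width at a given distance."""
    h_fov = 2 * math.atan(sensor_width_mm / (2 * focal_length_mm))
    scene_width_m = 2 * distance_m * math.tan(h_fov / 2)
    return h_resolution_px / scene_width_m

# 1280 px wide image, assumed 5.4 mm sensor width
for focal in (2.8, 4.0):
    for dist in (5, 15, 30):
        ppm = pixels_per_meter(1280, focal, 5.4, dist)
        print(f"f={focal} mm, {dist:>2} m: {ppm:5.1f} PPM")
```

Under these assumptions the 2.8 mm lens drops below 30 PPM near the 30 m end of the mounting range, so the longer 4 mm focal length (or a higher-resolution stream) is the safer choice for distant spaces.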
The configuration of the solution can be managed centrally in SWARM Control Center. Below you can see how to configure a Single Space Parking use case to get the best results.
Before starting your configuration, make sure you have completed your camera and data configuration.

Configuration settings

Parking (Single-/Multi Space)
  • Configuration option: ROI (Region of Interest)
  • Raw tracks

How to place the configuration type?

In the Parking Event templates you will find two options: Single Space (RoI) and Multi Space (RoI). These are the event types you need to set up this use case. Use Single Space (RoI) if you are configuring a parking space for a single car. If you have an area where you expect more than one car, choose Multi Space (RoI). The difference between these two event types is the maximum capacity that you can set in the trigger settings.
Place the Region of Interest (RoI) on the parking space you would like to configure. A vehicle counts as inside the RoI if the center point of the object lies within the RoI.
Because the center point determines whether an object is in the RoI, take the camera perspective into account when drawing the RoI.
The greater the distance from the camera to the parking space, the stronger the effect of perspective, so adapt the RoI accordingly. To support the calibration, you can use the calibration mode, which can be activated at the top right of the configuration frame.
It shows the detection boxes and center points of the vehicles currently in the camera view, so draw the RoI such that the center points fall inside it.
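The center-point rule can be illustrated in a few lines. This is a sketch, not SWARM code: the RoI corner coordinates and the detection box are made-up pixel values, with the RoI drawn as a perspective-skewed quadrilateral.

```python
def point_in_roi(point, roi):
    """Ray-casting test: is (x, y) inside the polygon given as [(x, y), ...]?"""
    x, y = point
    inside = False
    n = len(roi)
    for i in range(n):
        x1, y1 = roi[i]
        x2, y2 = roi[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the polygon edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Perspective-skewed quadrilateral RoI (image coordinates, illustrative values)
roi = [(100, 400), (300, 380), (340, 500), (80, 520)]
bbox = (150, 350, 290, 480)  # hypothetical detection box (x1, y1, x2, y2)
center = ((bbox[0] + bbox[2]) / 2, (bbox[1] + bbox[3]) / 2)
print(point_in_roi(center, roi))  # True: the center point lies in the RoI
```

Note that only the center point matters: the detection box itself may extend well outside the RoI without changing the result.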

Visualize data

You can visualize data via Data Analytics in different widgets.


In our Parking Scenario section, you can find more details about the possible Widgets to be created in the Parking Scenario Dashboards.


You can visualize the data for any Single or Multi Space parking lot you have configured with the Parking RoI: the occupancy status as well as the number of vehicles in each RoI, or aggregated across one or several camera streams. For this use case you can add the Current & Historic Parking Utilization or the Single Multi Space Occupancy widgets.

Retrieve your data

If you need your data for further local analysis, you can export the data of any created widget as a CSV file for further processing, for example in Excel.
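An exported widget CSV can also be post-processed with a few lines of Python instead of Excel. The column names below are illustrative only; the actual export headers depend on the widget:

```python
import csv
import io

# Hypothetical widget export: occupancy samples for one Single Space RoI.
sample = """timestamp,roi,state
2023-05-01T08:00:00Z,Space 1,occupied
2023-05-01T08:05:00Z,Space 1,occupied
2023-05-01T08:10:00Z,Space 1,free
2023-05-01T08:15:00Z,Space 1,occupied
"""

rows = list(csv.DictReader(io.StringIO(sample)))
occupied = sum(r["state"] == "occupied" for r in rows)
print(f"occupancy rate: {occupied / len(rows):.0%}")  # occupancy rate: 75%
```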
If you would like to integrate the data into your IT environment, you can use the API. In Data Analytics you will find a description of the request to use for retrieving the data of each widget.
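Once you have the request for a widget, the returned data can be folded into your own systems. The JSON payload below is purely hypothetical (the real request and response schema are documented per widget in Data Analytics); the sketch only shows how such a response could be summarized:

```python
import json

# Hypothetical API response shape -- illustrative only, not the real schema.
payload = json.loads("""{
  "widget": "Single Multi Space Occupancy",
  "rois": [
    {"name": "Space 1", "capacity": 1, "vehicles": 1},
    {"name": "Row A",   "capacity": 4, "vehicles": 3}
  ]
}""")

free = sum(r["capacity"] - r["vehicles"] for r in payload["rois"])
for r in payload["rois"]:
    state = "occupied" if r["vehicles"] >= r["capacity"] else "free"
    print(f'{r["name"]}: {r["vehicles"]}/{r["capacity"]} ({state})')
print(f"free spaces total: {free}")  # free spaces total: 1
```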

Environment requirements

  • Object velocity: 0 km/h
  • Nighttime: only well-illuminated scenes or night vision mode
  • Location: indoor or outdoor
  • Expected accuracy (when all environmental, hardware and camera requirements are met): > 95%; classification is not considered

Hardware Specifications

  • Supported Products: VPX, P101/OP101, P100/OP100
  • Frames Per Second (FPS)