Barrierless Parking and ANPR
Use case for barrierless parking, including monitoring utilization of the parking area by recognizing the license plates of parking customers
Would you like to gain more insight into your parking spaces and the customers using them? SWARM software provides the solution for gathering the data needed for parking monitoring.
To manage your parking infrastructure efficiently, gathering accurate and reliable data is key. Our parking monitoring solution builds on artificial-intelligence-based software designed to detect, count and classify objects entering indoor or outdoor parking spots. The generated data can serve as an information basis, helping to predictively guide customers in parking garages and outdoor facilities, or to manage parking violations. Put simply, we can help you answer questions such as:
- What is the current utilization of my parking spot?
- What is the average historic parking utilization?
- How long do my customers park in the garage?
- Is there any way to see and prove parking violations (e.g. long-term parking)?
Technology-wise, our parking monitoring system consists of the following parts: object detection, object tracking, counting of objects crossing a virtual line in the field of interest, as well as object classification and ANPR. The following sections briefly describe these pretrained technologies used for parking monitoring.
The main task here is to distinguish objects from the background of the video stream. This is accomplished by training our algorithm to recognize a car as an object of interest, in contrast to a tree, for example. This computer vision technology deals with localizing the object: in each frame extracted from the analyzed stream, the item is framed and labeled with one of the predefined classes.
The recognized objects are then classified to differentiate the types of vehicles found in traffic. Based on features such as weight, axles and other characteristics, the software assigns the recognized images to predefined, trained classes. For each item, our machine learning model outputs one of the object classes detected by SWARM.
Where was the object initially detected, and where did it leave the camera image? Our software equips you with the information to answer this question: it re-detects the same object in every frame and thereby tracks it from one frame to the next within the generated stream. The gathered data enables you to visualize the exact path of the object, e.g. for generating heat maps, analyzing frequented areas in the scene and/or planning strategic infrastructure needs.
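To illustrate the idea of tracking an object from frame to frame, here is a minimal centroid-matching sketch: each detection is linked to the nearest tracked position from the previous frame, so the same object keeps its ID. This is an illustrative assumption only; the actual SWARM tracking algorithm is not described here, and the function and parameter names are invented for the example.

```python
from math import dist

def update_tracks(tracks, detections, max_distance=50.0):
    """Sketch of centroid tracking.

    tracks:     {track_id: (x, y)} centroids from the previous frame
    detections: list of (x, y) centroids detected in the current frame
    Returns the updated {track_id: (x, y)} mapping.
    """
    next_id = max(tracks, default=0) + 1
    updated = {}
    unmatched = dict(tracks)  # previous tracks not yet re-detected
    for point in detections:
        # Match the detection to the closest previously tracked centroid.
        best = min(unmatched, key=lambda t: dist(unmatched[t], point), default=None)
        if best is not None and dist(unmatched[best], point) <= max_distance:
            updated[best] = point   # same object, same ID -> a track segment
            del unmatched[best]
        else:
            updated[next_id] = point  # a new object entered the scene
            next_id += 1
    return updated

tracks = update_tracks({}, [(10, 10), (200, 50)])      # two new objects, IDs 1 and 2
tracks = update_tracks(tracks, [(14, 12), (205, 55)])  # IDs persist across frames
```

Chaining the per-frame positions of one ID yields the object's path through the scene, which is exactly the data a heat map or frequented-area analysis would build on.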
Another technology from our traffic counting is used to monitor the streamed scene. By manually drawing a virtual line in our Swarm Control Center (SCC), you can quantify the objects of interest crossing your counting line (CL). When an object is successfully detected and tracked until it reaches a CL, our software triggers an event and increments the counter for that line accordingly.
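The counting-line trigger described above can be reduced to a geometric test: an event fires when the segment between an object's previous and current tracked position intersects the virtual line. The sketch below shows this check; the coordinates and event handling are illustrative assumptions, not the SCC's actual implementation.

```python
def _side(p, a, b):
    """Sign of the cross product: which side of line a->b the point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossed(prev_pos, cur_pos, line_a, line_b):
    """True if the movement prev_pos -> cur_pos intersects the CL segment line_a-line_b."""
    d1 = _side(prev_pos, line_a, line_b)
    d2 = _side(cur_pos, line_a, line_b)
    d3 = _side(line_a, prev_pos, cur_pos)
    d4 = _side(line_b, prev_pos, cur_pos)
    # The endpoints must lie on opposite sides of each other's segment.
    return d1 * d2 < 0 and d3 * d4 < 0

counter = 0
# A vehicle tracked from below to above a horizontal CL fires the counter.
if crossed((5, -3), (5, 3), (0, 0), (10, 0)):
    counter += 1
```

Because the test uses the track segment rather than a single point, a vehicle that jumps several pixels between frames is still counted exactly once.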
ANPR stands for automatic number plate recognition. For detailed settings and camera requirements, we refer to our Barrierless Parking (ANPR) use case description. To identify the number plate of the parking customer, we use optical character recognition to read the letters and digits that identify the vehicle.
OCR stands for optical character recognition and, put simply, means converting an image of characters into the characters themselves. In our ANPR solution, we scan the image of the retrieved, classified vehicle. From this picture, our OCR solution reads the license plate's character combination to identify the customer. Recognizing the plate at both entry and exit makes it possible to track the parking time of each individual vehicle.
Before sending an event including the license plate information of a vehicle (entering a parking zone), our system performs the following steps:
Our performance laboratory ("Performance Lab") is set up like a real-world installation. For each scene, we send a test video from an RTSP server to all of our supported devices over an Ethernet connection. The models and software versions to be tested run on the devices, sending messages to an MQTT broker. The retrieved messages are compared with ground-truth counts, delivering accuracy measurements and ensuring overall system stability.
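The final comparison step can be sketched as follows: the event counts collected from the MQTT messages are compared per class against manually labelled ground-truth counts. Note that the relative-error formula below is an illustrative assumption; SWARM's actual accuracy calculation is documented in its "How do we measure Performance" section.

```python
def count_accuracy(counted, ground_truth):
    """Per-class count accuracy: 1 - |counted - truth| / truth, floored at 0.

    counted:      {class_name: count} aggregated from device MQTT messages
    ground_truth: {class_name: count} from manual labelling of the test video
    """
    result = {}
    for cls, truth in ground_truth.items():
        got = counted.get(cls, 0)
        # None marks classes that never appear in the ground-truth video.
        result[cls] = max(0.0, 1.0 - abs(got - truth) / truth) if truth else None
    return result

acc = count_accuracy({"car": 98, "truck": 12}, {"car": 100, "truck": 10})
# acc: car accuracy ~0.98, truck accuracy ~0.8 (both missed and over-counts reduce it)
```

Taking the absolute difference means over-counts penalize accuracy just as missed counts do, matching the distinction between missed and over-counts drawn in the limitations below.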
Our ANPR parking test scenario includes the following scenes:
To help you interpret our accuracy numbers, we provide some more technical details on the ANPR solution here. The detailed accuracy calculation and an explanation of our test setup are documented in our "How do we measure Performance" section.
In general, there are several reasons why parking monitoring systems cannot be expected to reach 100% accuracy. These reasons can be split into several categories (technological, environmental and software-side) that lead either to missed counts or to over-counts. Given the technical and environmental prerequisites specified in our set-up documentation, we identified the following limitations in the provided software:
- Counting lines are not positioned correctly
- Vehicles and license plates are occluded by another vehicle
- The camera image is too dark, too bright or too blurry to correctly detect an object. Please see our requirements for Parking Monitoring on the page linked below.