Number plate recognition has a wide range of applications, from solving crimes to locating lost cars washed away during high-intensity floods.
This post walks through a number plate recognition demo system built with TensorFlow and Agora.io, giving you a quick understanding of the Python code, function by function.
Agora.io is a Real-Time Engagement provider delivering voice, video, and live streaming on a global scale for mobile, native and desktop apps.
We will use Agora.io’s live interactive video streaming to detect number plates in real time.
Let’s dive straight into the code!
Step 1: We need to set up a video call. To do so, sign up on Agora.io here. After signing up, log in and head to the ‘Project Management’ tab. Create a new project with a suitable name. Copy the app-id to your clipboard and save it somewhere accessible; you will need it later while writing the code.
Step 2: Go to my GitHub repository here.
Understanding the code:
In the above code, we use the AgoraRTC class from the agora_community_sdk package to connect to the video call from a remote terminal over the internet, using the Chromium driver and the Agora app-id you created. Enter your app-id, channel name, and the paths to the Chromium driver executable and in.png as directed in the code. The first frame of the live video is then extracted and saved.
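A minimal sketch of this capture step is below. The `AgoraRTC` method names (`create_watcher`, `join_channel`, `get_users`, `unwatch`) and the assumption that `user.frame` returns raw PNG bytes are taken from the demo’s described flow — check your installed version of agora_community_sdk for the exact API:

```python
def save_frame(binary_image, path):
    """Write a captured frame (raw image bytes) to disk and return the path."""
    with open(path, "wb") as f:
        f.write(binary_image)
    return path

def capture_first_frame(app_id, channel_name, chromedriver_path, out_path="in.png"):
    """Join the channel headlessly through the Chromium driver and save the
    first frame of the first remote stream. Method names are assumptions --
    verify them against the agora_community_sdk you have installed."""
    from agora_community_sdk import AgoraRTC

    client = AgoraRTC.create_watcher(app_id, chromedriver_path)
    client.join_channel(channel_name)
    users = client.get_users()        # remote streams currently in the call
    binary_image = users[0].frame     # first frame of the first stream
    try:
        return save_frame(binary_image, out_path)
    finally:
        client.unwatch()              # leave the channel cleanly
```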
This is the function where the parameters obtained by training the model are used to infer the probable bounding boxes.
`bbox_tl` and `bbox_br` are the top-left and bottom-right corners of the bounding box, and `letter_probs` is a 7×36 matrix giving, for each of the seven character positions, a probability distribution over the 36 possible characters in the captured frame.
The image is rescaled to multiple sizes, and the model detects number plates over a sliding window at each scale.
Finally, it keeps the number plate(s) detected with a greater than 50% presence probability across those scales.
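The scale-and-slide loop can be sketched as follows. The window size, stride, and halving factor here are illustrative (not the demo’s actual values), and `presence_prob` stands in for the trained network’s plate-presence output:

```python
import numpy as np

WINDOW = (64, 128)  # (height, width) of the detection window -- illustrative

def sliding_windows(im, stride=8):
    """Yield (top-left, window) pairs over the image."""
    h, w = WINDOW
    for y in range(0, im.shape[0] - h + 1, stride):
        for x in range(0, im.shape[1] - w + 1, stride):
            yield (y, x), im[y:y + h, x:x + w]

def detect(im, presence_prob, threshold=0.5):
    """Slide the window over successively halved copies of the image and
    keep every box whose presence score exceeds the threshold. Boxes are
    returned as (top-left, bottom-right) in original-image coordinates."""
    boxes = []
    scale = 1.0
    while im.shape[0] >= WINDOW[0] and im.shape[1] >= WINDOW[1]:
        for (y, x), window in sliding_windows(im):
            if presence_prob(window) > threshold:
                tl = np.array([y, x]) / scale
                br = (np.array([y, x]) + WINDOW) / scale
                boxes.append((tl, br))
        im = im[::2, ::2]   # crude nearest-neighbour half-scale
        scale *= 0.5
    return boxes
```

Because nearby windows at several scales can all fire on the same plate, `detect` typically returns several overlapping boxes per plate — which is why the grouping step described next is needed.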
The above functions are used for the following two purposes:
- Finding sets of overlapping rectangles previously detected over the frame.
- Finding the intersection of each such set, along with the code corresponding to the rectangle with the highest presence parameter.
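A simplified stand-in for that logic is sketched below, assuming each detection is a `(box, presence, code)` tuple where `box = ((y1, x1), (y2, x2))` — the tuple layout and names are illustrative, not the demo’s exact ones:

```python
def overlaps(a, b):
    """Axis-aligned overlap test for boxes ((y1, x1), (y2, x2))."""
    (ay1, ax1), (ay2, ax2) = a
    (by1, bx1), (by2, bx2) = b
    return ay1 < by2 and by1 < ay2 and ax1 < bx2 and bx1 < ax2

def group_overlapping(detections):
    """Partition detections into transitively overlapping groups."""
    groups = []
    for det in detections:
        hits = [g for g in groups if any(overlaps(det[0], d[0]) for d in g)]
        merged = [det]
        for g in hits:          # merge every group this detection touches
            merged.extend(g)
            groups.remove(g)
        groups.append(merged)
    return groups

def best_per_group(detections):
    """For each group, return the intersection rectangle of its boxes and
    the code of the detection with the highest presence score."""
    results = []
    for g in group_overlapping(detections):
        tl = (max(b[0][0] for b, _, _ in g), max(b[0][1] for b, _, _ in g))
        br = (min(b[1][0] for b, _, _ in g), min(b[1][1] for b, _, _ in g))
        _, _, code = max(g, key=lambda d: d[1])
        results.append(((tl, br), code))
    return results
```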
The above function joins the probable letters detected on the number plate and returns them as a string.
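Under the 7×36 interpretation above (seven character positions, 36 classes covering 0–9 and A–Z), the join can be sketched as:

```python
import numpy as np

CHARS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"  # 36 character classes

def letter_probs_to_code(letter_probs):
    """Take the most probable character at each position (argmax over the
    36 classes) and join the seven characters into the plate string."""
    return "".join(CHARS[i] for i in np.argmax(letter_probs, axis=1))
```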
This is the main function, where the paths to ‘in.png’ (the input frame in which the number plate and registration number are to be detected), ‘out.png’ (in.png after processing, with bounding boxes drawn around the number plate and the registration number), and ‘weights.npz’ (the model weights obtained after training) are given. The functions explained above are called and executed here.
Step 3: To build the system described in this blog, download my GitHub repository as a .zip folder here. You will need:
- All Python libraries listed in requirements.txt
- An Agora app-id
- The latest version of any text editor that supports Python, preferably Sublime Text 3
- Create an Agora.io account:
- Sign up and log in here.
- Navigate to the ‘project management’ tab in the dashboard.
- Create a new project with a name of your choice.
- Copy the app-id to your clipboard and save it somewhere you can access later while developing the code.
- Install all the dependencies listed in requirements.txt using:
pip install -r requirements.txt
or, if you hit permission errors:
sudo pip install -r requirements.txt
- Download a zip file of this repository.
- Download chromedriver.exe and weights.npz.
- Open detect.py in sublime or any compatible text editor of your choice.
- Paste the app-id and the path to chromedriver.exe as directed on line 20 of the code, and a channel name on line 21.
- Paste the link to a high-resolution (approx. 2000 × 1500) image on line 30.
- Give the paths to in.png on line 161 and weights.npz on line 164 as directed in the code.
- Give the path to out.png on line 194.
- Go here.
- Paste the app-id and channel name in the respective input boxes on the sender and receiver sides.
- Click Join to start the call on both sides.
- Execute detect.py from the master folder in the terminal.
- The desired result will be in out.png.