Parking Slot Detection on GitHub
Related code: the dohoseok/context-based-parking-slot-detect repository on GitHub.

PARKING SLOT DETECTION. Finding a vacant spot in a parking lot is a tough task, and it is even harder to manage such a facility under varying levels of incoming traffic. Which slots are vacant at this instant? When do we need more slots? Are commuters finding it difficult to reach a particular slot?

In VISSLAM, apart from low-level visual features and IMU (inertial measurement unit) motion data, parking slots in surround-view images are also detected and geometrically associated, forming semantic constraints. Specifically, each parking slot can impose a surround-view constraint that can be split into an adjacency term and a registration term.

Methods in [26]-[34] detect parking slot markings in a fully automatic manner. [26] proposed a method that recognizes parking slot markings using neural-network-based color segmentation, and [27] detected parking slots by finding parallel line pairs using a specialized filter and the Hough transform.

Contours are a useful tool for shape analysis and for object detection and recognition. For better accuracy, use binary images: before finding contours, apply thresholding or Canny edge detection. The findContours function modifies the source image, so if you want the source image even after finding contours, store it in another variable first.
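As a quick illustration of that contour note (not from the original sources; assumes OpenCV 4.x and a placeholder file name), binarize first and keep a copy of the source before calling findContours:

import cv2

# Binarize first for better accuracy, and keep a copy of the source,
# since (per the note above) findContours may modify it.
image = cv2.imread('parking_markings.jpg')        # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)

source_copy = binary.copy()                        # preserved source image
contours, hierarchy = cv2.findContours(binary, cv2.RETR_TREE,
                                        cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0, 255, 0), 2)
cv2.imwrite('contours.jpg', image)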
This question was found at https://www.careercup.com/question?id=5750868554022912

Suppose a row of a parking lot with n spots; one of them is empty and the other n-1 are occupied by cars.
Only one operation is allowed: move one car from its position to the empty spot.
Given an initial order of cars and a final order, output the steps needed to convert the initial order to the final order using that operation.

Solution

This type of planning problem can be solved with the AI planning algorithm STRIPS.

Solution at:
http://stripsfiddle.herokuapp.com/?d=n8edpSRhwdgQP4fq8&p=HnYPMpoRm6zf5F6SG&a=BFS
Click 'Run' to see a demo of it running!

For a tutorial on STRIPS, see:
'Artificial Intelligence Planning with STRIPS, A Gentle Introduction'
http://www.primaryobjects.com/2015/11/06/artificial-intelligence-planning-with-strips-a-gentle-introduction/
Problem 1
Start: {1 2 3 X 4 5}
End: {X 2 3 1 4 5}

Solution found in 1 step!
1. move c1 s1 s4

Problem 2
Start: {1 2 3 X 4 5}
End: {5 1 X 3 2 4}

Solution found in 6 steps!
1. move c1 s1 s4
2. move c5 s6 s1
3. move c4 s5 s6
4. move c2 s2 s5
5. move c1 s4 s2
6. move c3 s3 s4
;; Suppose a row of a parking lot with n spots; one of them is empty and the other n-1 are occupied by cars.
;; Only one operation is allowed: move one car from its position to the empty spot.
;; Given an initial order of cars and a final order, output the steps needed to convert the initial order to the final order with that operation.
(define (domain parking) |
(:requirements :strips :typing) |
(:types car spot) |
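;; move: relocate car ?c from its current spot ?s1 into the empty spot ?s2.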
(:action move |
:parameters (?c - car ?s1 - spot ?s2 - spot) |
:precondition (and (vehicle ?c) (location ?s1) (location ?s2) (at ?c ?s1) (empty ?s2)) |
:effect (and (at ?c ?s2) (empty ?s1) (not (empty ?s2)) (not (at ?c ?s1)))
) |
) |
(define (problem 123x56) |
(:domain parking) |
(:objects |
c1 c2 c3 c4 c5 - car |
s1 s2 s3 s4 s5 s6 - spot) |
(:init (and (vehicle c1) (vehicle c2) (vehicle c3) (vehicle c4) (vehicle c5) |
(location s1) (location s2) (location s3) (location s4) (location s5) (location s6) |
(at c1 s1) (at c2 s2) (at c3 s3) (at c4 s5) (at c5 s6) |
(empty s4)))
(:goal (and (at c1 s4))) |
) |
(define (problem 123x56to51x324) |
(:domain parking) |
(:objects |
c1 c2 c3 c4 c5 - car |
s1 s2 s3 s4 s5 s6 - spot) |
(:init (and (vehicle c1) (vehicle c2) (vehicle c3) (vehicle c4) (vehicle c5) |
(location s1) (location s2) (location s3) (location s4) (location s5) (location s6) |
(at c1 s1) (at c2 s2) (at c3 s3) (at c4 s5) (at c5 s6) |
(empty s4)))
(:goal (and (at c5 s1) (at c1 s2) (at c3 s4) (at c2 s5) (at c4 s6))) |
) |
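For comparison, the same puzzle can also be solved directly, without a planner, by a simple greedy rule: if the goal wants some car in the spot that is currently empty, move that car in; otherwise move any misplaced car into the empty spot. Below is a minimal Python sketch (not part of the original post; the function name and output format are illustrative, and spot numbers are 1-indexed to match the planner output above).

def rearrange(start, goal, empty='X'):
    # Return the list of moves (car, from_spot, to_spot) that transforms
    # `start` into `goal`, moving one car into the empty spot at a time.
    cur = list(start)
    moves = []
    pos = {symbol: i for i, symbol in enumerate(cur)}   # symbol -> current index

    while cur != list(goal):
        e = pos[empty]                 # where the empty spot is now
        wanted = goal[e]               # what the goal wants in that spot
        if wanted != empty:
            src = pos[wanted]          # bring the wanted car straight in
        else:
            # The empty spot already matches the goal; move any misplaced
            # car into it so progress can continue.
            src = next(i for i, c in enumerate(cur) if c != empty and c != goal[i])
        car = cur[src]
        moves.append((car, src + 1, e + 1))             # 1-indexed spots
        cur[e], cur[src] = car, empty
        pos[car], pos[empty] = e, src
    return moves

# Problem 2 from above: {1 2 3 X 4 5} -> {5 1 X 3 2 4}
for step, (car, s_from, s_to) in enumerate(rearrange('123X45', '51X324'), start=1):
    print(f'{step}. move c{car} s{s_from} s{s_to}')

Run on Problem 2, this also produces a 6-move sequence, though not necessarily the same moves the BFS planner chose.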
For a fun weekend project, I decided to play around with the OpenCV (Open Source Computer Vision) library in python.
OpenCV is an extensive open source library (available in python, Java, and C++) that’s used for image analysis and is pretty neat.
The lofty goal for my OpenCV experiment was to take any static image or video of a parking lot and be able to automatically detect whenever a parking space was available or occupied.
Through research and exploration, I discovered just how lofty a goal that was (at least for the scope of a weekend). What I was able to accomplish was detecting how many spots were available in a parking lot, with just a bit of upfront work by the user.
This page is a walkthrough of my process and what I learned along the way.
I’ll start with an overview, then talk about my process, and end with some ideas for future work.
Overview
The above link takes you to a video of the parking space detection program in action.
To run:
Program flow is as follows:
- User inputs file name for a video, a still image from the video, and a path for the output file of parking space coordinates.
- User clicks 4 corners for each spot they want tracked. Presses ‘q’ when all desired spots are marked.
- The video begins with the user-provided boxes overlaid on it. Occupied spots are initialized with red boxes, available spots with green.
- When a car leaves a space, its red box turns green.
- When a car drives into a free space, its green box turns red.
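To make that flow concrete, here is a rough sketch of the main loop. This is not the project's actual code: the video name and spot corners are placeholders, and spot_status() stands in for the per-rectangle check sketched under "Finishing touches" below.

import cv2
import numpy as np

# Placeholder inputs: in the real flow the spot corners come from the
# user-generated coordinates file and the video name is user-supplied.
spots = [[(100, 200), (160, 200), (160, 260), (100, 260)]]   # one made-up spot
capture = cv2.VideoCapture('parking_lot.mp4')

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    for corners in spots:
        # spot_status() is assumed here -- see the sketch later in the post.
        occupied = spot_status(frame, corners) == 'occupied'
        color = (0, 0, 255) if occupied else (0, 255, 0)      # red vs. green (BGR)
        cv2.polylines(frame, [np.array(corners, dtype=np.int32)],
                      isClosed=True, color=color, thickness=2)
    cv2.imshow('parking', frame)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

capture.release()
cv2.destroyAllWindows()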
The data on the entering and exiting of these cars can be used for a number of purposes: closest spot detection, analytics on parking lot usage, and for those counters outside of parking garages that tell you how many cars are on each level (to name a few).
This project was my first tour through computer vision, so to get it working in a weekend, I went the “express learning” route. That consisted of auditing this Computer Vision and Image Analytics course, reading through OpenCV documentation, querying the net, and toggling OpenCV function parameters to see what happened. Overall, a lot of learning and a ton of fun.
Process
The beginning
My first thought was how can I tell whether a parking space is empty?
Well, if a space is empty, it would be the color of the pavement. Otherwise, it wouldn’t be.
I also knew that I needed a way to mark the boundaries of the space, so that I could return the number of spots available.
Let’s grab an image and head to the OpenCV docs!
Line Detection
To detect the parking spots, I knew I could take advantage of the lines demarcating their boundaries.
The Hough Transform is a popular feature extraction technique for detecting lines. OpenCV encapsulates the math of the Hough Transform in HoughLines(). Further abstraction is captured in HoughLinesP(), the probabilistic variant, which returns the endpoints of detected line segments rather than the (rho, theta) parameters that HoughLines() returns. For more info, check out the OpenCV Hough Lines tutorial.
The following is a walkthrough to prepare an image to detect lines with the Hough Transform. Links point to OpenCV documentation for each function. Arguments for each function are given as keyword args for clarity.
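Here's a minimal sketch of that pipeline using the probabilistic variant; the file name and numeric parameters below are placeholders to experiment with, not the values used in the project.

import cv2
import numpy as np

# Placeholder file name -- substitute a still frame from your parking lot video.
image = cv2.imread('parking_lot_still.jpg')

# Reduce the information in the frame before looking for lines.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, ksize=(5, 5), sigmaX=0)
edges = cv2.Canny(blurred, threshold1=50, threshold2=150)

# Probabilistic Hough transform: returns the endpoints of detected segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=10)

# Draw whatever was found back onto the original frame.
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(image, (x1, y1), (x2, y2), color=(0, 0, 255), thickness=2)

cv2.imwrite('hough_lines.jpg', image)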
Reading in this image:
I converted it to gray scale to reduce the info in the photo:
Gave it a good Gaussian blur to remove even more unnecessary noise:
Detected the edges with Canny:
And then, a few behind-the-scenes rhos and thetas later, we have our Hough Line results.
Well that wasn’t quite what I expected.
I experimented a bit with the Hough line parameters, but toggling them kept giving me the same single line.
A bit of digging later, I found a promising post on Stack Overflow.
After following the directions of the top answer, I got this:
That gave me more lines, but I still had to figure out which lines were part of a parking space and which weren't. Then I would also need to detect when a car moved out of a spot.
I was running into a challenge: with this approach, I would need an image of the empty parking lot to overlay on an image of the non-empty lot, which would also call for a mask to cover unimportant information (trees, light posts, etc.).
Given my scope for the weekend, it was time to find another approach.
Drawing Rectangles
If my program wasn’t able to detect parking spots on it’s own, maybe it was reasonable to expect that the user give positions for each of the parking spots.
Now, the goal was to find a way to click on the parking lot image and to store the 4 points that made up a parking space for all of the spaces in the lot.
I discovered that I could do this by using the mouse as a “paintbrush”.
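A minimal sketch of that idea with OpenCV's mouse callback (the file and window names are placeholders; the real program also writes the collected coordinates to the output file):

import cv2

points = []   # corners clicked so far for the current spot
spots = []    # completed spots, four corners each

def mark_corner(event, x, y, flags, param):
    # Record a corner on each left click; every fourth click closes a spot.
    if event == cv2.EVENT_LBUTTONDOWN:
        points.append((x, y))
        if len(points) == 4:
            spots.append(list(points))
            points.clear()

image = cv2.imread('parking_lot_still.jpg')   # placeholder still-frame name
cv2.namedWindow('mark spots')
cv2.setMouseCallback('mark spots', mark_corner)

while True:
    display = image.copy()
    for spot in spots:
        for corner in spot:
            cv2.circle(display, corner, radius=3, color=(0, 255, 0), thickness=-1)
    cv2.imshow('mark spots', display)
    if cv2.waitKey(20) & 0xFF == ord('q'):    # press 'q' when all spots are marked
        break

cv2.destroyAllWindows()
# `spots` now holds the four corners of each marked space, ready to be
# written out to the coordinates file.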
After some calculations for the center of the rectangle (to label each space), I got this:
Finishing touches
After drawing the rectangles, all that was left to do was examine the area of each rectangle to see whether there was a car in it or not.
By taking each (filtered and blurred) rectangle and averaging its pixels, I could tell when there wasn't a car in the spot: the average was high (more dark pixels) when the space was empty. I changed the color of the bounding box accordingly and voila, a parking detection program!
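Here's a rough sketch of that per-spot check. It is not the post's actual code: the function name and the cutoff value are placeholders to tune for your own footage.

import cv2
import numpy as np

def spot_status(frame, corners, cutoff=120):
    # `corners` is the list of four (x, y) points clicked for one spot;
    # `cutoff` is a made-up starting value to tune per lot and lighting.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)

    # Mask off everything outside the user-drawn rectangle, then average
    # the remaining pixels.
    mask = np.zeros(gray.shape, dtype=np.uint8)
    cv2.fillPoly(mask, [np.array(corners, dtype=np.int32)], 255)
    average = cv2.mean(blurred, mask=mask)[0]

    # Per the write-up, a high average over the filtered, blurred spot
    # corresponds to an empty space.
    return 'empty' if average > cutoff else 'occupied'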
The code for drawing the rectangles and the motion detection is pretty generic. It's separated out into classes and should be reusable outside of the context of a parking lot. I have tested this with two different parking lot videos and it worked pretty well. I plan to make other improvements and to try to separate out the OpenCV references to make the code easier to test. I'm open to ideas and feedback.
Check out the code for more!
Future work
- Hook up a webcam to a Raspberry Pi and have live parking monitoring at home!
- Transform the parking lot video to a top-down (bird's-eye) perspective (for cleaner rectangles)
- Experiment with HOG descriptors to detect people or other objects of interest