I'm Martin's friend who was working on this earlier this year. This was my first ever coding project, and kinda ended in a bit of a rush, so the code needs some errr...decoding...
I'll give a few tips from what I've seen you doing already, and then sort out my code on my day off tomorrow.
First tip: OpenCV and Python are awesome, move to them as soon as possible. :D
Instead of removing small objects and/or noise, lower the Canny thresholds so it accepts more edges, and then find the largest closed contour (in OpenCV use findContours() with some simple parameters; I think I used CV_RETR_LIST). It might still struggle when the object is on a white piece of paper, but this was definitely giving the best results.
For the HoughLines2() transform, try CV_HOUGH_STANDARD as opposed to CV_HOUGH_PROBABILISTIC. It will give you rho and theta values, defining each line in polar coordinates, and you can then group the lines within a certain tolerance of those values.
My grouping worked as a look-up table: each line output by the Hough transform gives a rho and theta pair. If these values were within, say, 5% of a pair already in the table, the line was discarded; if they were outside that 5%, a new entry was added to the table.
You can then do analysis of parallel lines or distance between lines much more easily.
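The look-up-table grouping described above can be sketched like this. The exact tolerance handling is my assumption (a relative 5% comparison with a small absolute floor, since rho or theta can be near zero); the original may have compared differently:

```python
def group_lines(lines, tol=0.05):
    """Group (rho, theta) pairs: discard a line if it is within `tol`
    (relative) of an entry already in the table, otherwise add it
    as a new entry. Returns the table of distinct lines."""
    table = []
    for rho, theta in lines:
        for r, t in table:
            # Relative comparison; the max(..., 1.0) floor avoids
            # rejecting nothing when rho or theta is near zero.
            if (abs(rho - r) <= tol * max(abs(r), 1.0)
                    and abs(theta - t) <= tol * max(abs(t), 1.0)):
                break  # near-duplicate of an existing entry: discard
        else:
            table.append((rho, theta))
    return table

# Two near-identical lines collapse into one entry; the others survive.
print(group_lines([(100.0, 1.57), (102.0, 1.58), (30.0, 0.0), (250.0, 1.57)]))
```

With the grouped (rho, theta) table, parallel lines are simply entries sharing theta, and the distance between two parallel lines is the difference of their rho values.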
Hope this helps.
First of all, to detect lines you need to work on a boolean (or binary) image matrix, I mean: every pixel is either black or white, with no grayscale in between.
HoughLines() requires this kind of image as input to work properly. That's the reason you have to use Canny or a threshold first: to convert the colored image matrix into a binary one.
Hough transformation
A line in a picture is actually an edge. The Hough transform scans the whole image and applies a transformation that converts the cartesian coordinates of every white pixel into polar coordinates; the black pixels are left out. So you won't be able to get lines if you don't detect edges first, because HoughLines() doesn't know how to behave on a grayscale image.
Best Answer
The best option is to filter the image before applying the edge detector. In order to keep the edges sharp you need a more sophisticated filter than a Gaussian blur.
Two easy options are the bilateral filter and the guided filter. These two filters are easy to implement and provide good results in most cases: gaussian noise removal while preserving edges. If you need something more powerful, you can try the BM3D filter, which is one of the state-of-the-art denoisers, and you can find an open source implementation here.