I'm developing a proof-of-concept algorithm for iris-related biometrics. I'd like to be able to test it on a series of images, but in order to do so, I need to know the iris boundaries. Following th
I'm working with a Hough transform (polar coordinates). I'd like to compute a vector representation of a line from a coordinate produced by the Hough transform.
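In the normal (Hesse) parametrisation used by the Hough transform, a line is rho = x·cos(theta) + y·sin(theta); the foot of the perpendicular from the origin is (rho·cos(theta), rho·sin(theta)) and the line's direction is (-sin(theta), cos(theta)). A minimal sketch of the conversion (function names are mine, not from any library):

```python
import math

def hough_to_vector(rho, theta):
    """Convert a Hough-space (rho, theta) pair to a point on the line
    plus a unit direction vector, using the Hesse normal form
    rho = x*cos(theta) + y*sin(theta)."""
    # Foot of the perpendicular from the origin to the line.
    x0 = rho * math.cos(theta)
    y0 = rho * math.sin(theta)
    # The line runs perpendicular to its normal (cos(theta), sin(theta)).
    direction = (-math.sin(theta), math.cos(theta))
    return (x0, y0), direction

def two_points(rho, theta, length=1000.0):
    """Two endpoints far along the line in both directions,
    handy for drawing the line across an image."""
    (x0, y0), (dx, dy) = hough_to_vector(rho, theta)
    p1 = (x0 + length * dx, y0 + length * dy)
    p2 = (x0 - length * dx, y0 - length * dy)
    return p1, p2
```

For example, (rho, theta) = (5, 0) is the vertical line x = 5: the point is (5, 0) and the direction is (0, 1).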
I am trying to find horizontal and vertical lines in an image that came from a "document". The documents are scanned pages from contracts, so the lines look like what you would see in a table.
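Hough works here, but for strictly axis-aligned table rules a simpler projection profile (row and column sums of the binarised image) is often enough; that is a different technique from Hough, offered only as a lightweight alternative. A pure-Python sketch on a 0/1 image stored as a list of rows (names and the `min_fill` threshold are mine):

```python
def find_horizontal_lines(img, min_fill=0.8):
    """Return row indices whose fraction of 'ink' pixels is at least
    min_fill -- candidate horizontal table rules.
    img is a binary image as a list of rows of 0/1 values."""
    width = len(img[0])
    return [y for y, row in enumerate(img)
            if sum(row) / width >= min_fill]

def find_vertical_lines(img, min_fill=0.8):
    """Same idea on columns, for vertical rules."""
    height = len(img)
    width = len(img[0])
    return [x for x in range(width)
            if sum(img[y][x] for y in range(height)) / height >= min_fill]
```

This assumes the scan is deskewed; if the page is rotated even slightly, the votes smear across several rows and Hough (or prior deskewing) becomes the better option.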
Does anyone know how to use the Hough transform to detect the strongest lines in the binary image: A = zeros(7,7);
What is the best way to detect the corners of an invoice/receipt/sheet-of-paper in a photo? This is to be used for subsequent perspective correction, before OCR.
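One common pipeline is: binarise, find the page outline (e.g. the largest contour after edge detection), then pick the four corners. Assuming you already have the outline as a point set, a classic heuristic selects the corners by the sum and difference of coordinates (remember image y grows downward); a sketch, with names of my own choosing:

```python
def four_corners(points):
    """Pick the four corner points of a roughly rectangular outline,
    in image coordinates (y grows downward):
    top-left minimises x+y, bottom-right maximises x+y,
    top-right maximises x-y, bottom-left minimises x-y.
    points is an iterable of (x, y) tuples."""
    pts = list(points)
    top_left = min(pts, key=lambda p: p[0] + p[1])
    bottom_right = max(pts, key=lambda p: p[0] + p[1])
    top_right = max(pts, key=lambda p: p[0] - p[1])
    bottom_left = min(pts, key=lambda p: p[0] - p[1])
    return top_left, top_right, bottom_right, bottom_left
```

The four points, in this order, can then feed a perspective-warp step before OCR. The heuristic breaks down for strongly rotated pages, where a polygon-approximation approach is more robust.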
Using the Hough transform, how can I detect an ellipse in 2D space and get the coordinates of its centre (x0, y0) and its axes "a" and "b"?
Does OpenCV method HoughLines2 has a memory leak that\'s not been fixed since now (version 2.1.0.6), or there\'s something wrong with this part of my code ?
I basically understand the theory behind using the Hough transform to detect parabolas (i.e. y = a(x - x_c)^2 + y_c).
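For that parametrisation the parameter space is three-dimensional: (a, x_c, y_c). The usual trick is to quantise two of the parameters and solve for the third, so each edge point casts one vote per (a, x_c) candidate. A deliberately small sketch (a real implementation would bin y_c into an accumulator array rather than hash exact values):

```python
from collections import defaultdict

def hough_parabola(points, a_values, xc_values):
    """Vote in the 3-parameter space (a, x_c, y_c) of
    y = a*(x - x_c)**2 + y_c.  For each edge point and each
    candidate (a, x_c), solve for y_c and accumulate a vote;
    return the (a, x_c, y_c) cell with the most votes."""
    acc = defaultdict(int)
    for (x, y) in points:
        for a in a_values:
            for xc in xc_values:
                yc = round(y - a * (x - xc) ** 2, 6)
                acc[(a, xc, yc)] += 1
    return max(acc, key=acc.get)
</imports>

For points sampled from y = 2(x - 1)^2 + 3, the winning cell is (2, 1, 3), since every sample votes for the same y_c there while other (a, x_c) candidates scatter their votes.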
I have a vector of lines produced by calling the Hough transform function in OpenCV, and I need to convert them back to image coordinates. I found this piece of sample code in OpenCV's official documentation
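Each returned line is a (rho, theta) pair describing rho = x·cos(theta) + y·sin(theta). To get a concrete segment in image coordinates, one option is to clip the infinite line against the image rectangle; a self-contained sketch (not the OpenCV sample itself, and the function name is mine):

```python
import math

def line_to_image_segment(rho, theta, width, height):
    """Clip the infinite Hough line rho = x*cos(theta) + y*sin(theta)
    to the image rectangle [0, width-1] x [0, height-1], returning two
    endpoints in pixel coordinates, or None if the line misses it."""
    c, s = math.cos(theta), math.sin(theta)
    candidates = []
    # Intersections with the vertical borders x = 0 and x = width-1.
    if abs(s) > 1e-12:
        for x in (0.0, width - 1.0):
            y = (rho - x * c) / s
            if 0.0 <= y <= height - 1.0:
                candidates.append((x, y))
    # Intersections with the horizontal borders y = 0 and y = height-1.
    if abs(c) > 1e-12:
        for y in (0.0, height - 1.0):
            x = (rho - y * s) / c
            if 0.0 <= x <= width - 1.0:
                candidates.append((x, y))
    # Drop duplicate corner hits and keep two distinct points.
    unique = []
    for p in candidates:
        if all(abs(p[0] - q[0]) > 1e-6 or abs(p[1] - q[1]) > 1e-6
               for q in unique):
            unique.append(p)
    return (unique[0], unique[1]) if len(unique) >= 2 else None
```

For instance, (rho, theta) = (5, 0) in a 10x10 image is the vertical line x = 5, clipped to the segment from (5, 0) to (5, 9).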
I am just being adventurous and taking my first baby steps toward computer vision. I tried to implement the Hough transform on my own, but I just don't get the whole picture. I read
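Stripped to its core, the line Hough transform is just a voting loop: every edge point votes for every quantised (rho, theta) line through it, and the accumulator peak is the strongest line. A minimal from-scratch sketch (dictionary accumulator for brevity; real implementations use a 2D array):

```python
import math

def hough_lines(points, width, height, n_theta=180):
    """Minimal Hough line transform over a list of (x, y) edge points.
    Each point votes for every (rho, theta) line passing through it;
    rho is quantised to whole pixels and theta to n_theta steps over
    [0, pi).  Returns (rho, theta, votes) of the strongest line."""
    max_rho = int(math.hypot(width, height))
    # Accumulator keyed by (rho_index, theta_index); rho_index is
    # shifted by max_rho so negative rho values fit.
    acc = {}
    for (x, y) in points:
        for t in range(n_theta):
            theta = t * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            key = (int(round(rho)) + max_rho, t)
            acc[key] = acc.get(key, 0) + 1
    (rho_idx, t), votes = max(acc.items(), key=lambda kv: kv[1])
    return rho_idx - max_rho, t * math.pi / n_theta, votes
```

Feeding it ten points on the vertical line x = 4 yields a peak at rho = 4, theta ≈ 0 with all ten votes, which is exactly the Hesse-normal description of that line.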