The algorithms that find edges in the image require the image to be in grayscale.
In the Bitmap tutorial, example code was provided to capture the full pixel data of the original image. For finding the sprockets, only about the top 1/3 of the image is needed, so the first step is to convert that top 1/3 of the full image data to grayscale. This does not affect the original image, and it has several advantages:
1. Simplicity: Each pixel can be represented by a single byte
2. Memory: Required storage space in memory will be less
3. Processor Time: Fewer numbers to crunch = less core time
I keep saying that we only need about the top 1/3 of the image, but to be exact, I take the frame width dimension (given in the background section), convert it from inches to pixels, and subtract that from the total height of the scanned image in pixels. The remaining strip is all that is required until we are ready to pick off the actual frames.
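To make that concrete, here is a minimal sketch of the crop calculation. The names (frame_width_in, scan_dpi, image_height_px) are placeholders of mine, not the variables in the actual program; the real frame width and scan resolution come from the background section.

```c
/* Sketch only: how many rows of the scan to keep for sprocket detection.
   frame_width_in and scan_dpi are placeholder names for the values
   discussed in the background section. */
int crop_height_px(double frame_width_in, int scan_dpi, int image_height_px)
{
    int frame_width_px = (int)(frame_width_in * scan_dpi + 0.5); /* inches -> pixels */
    return image_height_px - frame_width_px;                     /* rows kept at the top */
}
```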
Many algorithms for converting full color images to grayscale have been developed. I found one on the internet that works well, but I have lost the source. If you happen to know the source, please email me so that I may update this page (digireel@gmail.com). The algorithm I use is:
gray = 0.2 * blue + 0.5 * green + 0.3 * red
For 24-bit color, gray will end up as an integer from 0 to 255 (the weights sum to 1, so the result can never exceed 255).
For example, a pixel with BGR color [125 200 50] would be:
0.2 * 125 + 0.5 * 200 + 0.3 * 50 = 140
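As a sketch, the same conversion for a single 24-bit BGR pixel might look like this in C (the function name is mine, not from the original code):

```c
/* 0.2*B + 0.5*G + 0.3*R; the cast truncates to an integer in 0..255. */
unsigned char bgr_to_gray(unsigned char blue, unsigned char green, unsigned char red)
{
    return (unsigned char)(0.2 * blue + 0.5 * green + 0.3 * red);
}

/* bgr_to_gray(125, 200, 50) returns 140, matching the example above. */
```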
When truncating the image and converting it to grayscale, the remainder must not be neglected (see the Bitmap tutorial for a discussion of the remainder). Forgetting to account for the remainder bytes at the end of each row can produce a final image that is slanted as much as 45 degrees to the left and that exhibits a color shift. Once the image has been converted to grayscale, the remaining processing can be done without worrying about the remainder until it is time to pick off the frames from the original image, or until a debugging image is to be created.
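Here is a rough sketch of the whole truncate-and-convert step, assuming the 24-bit pixel data is laid out row by row with each row padded to a multiple of 4 bytes. The names, and the assumption that row 0 of the buffer is the top of the image, are mine; see the Bitmap tutorial for the actual data layout.

```c
#include <stdlib.h>

/* Convert the top crop_height rows of a padded 24-bit BGR buffer into a
   tightly packed grayscale buffer, one byte per pixel, no remainder. */
unsigned char *top_strip_to_gray(const unsigned char *bgr, int width, int crop_height)
{
    int src_stride = (width * 3 + 3) & ~3;   /* 24-bit rows are padded to 4-byte multiples */
    unsigned char *gray = malloc((size_t)width * crop_height);
    if (gray == NULL)
        return NULL;

    for (int y = 0; y < crop_height; y++) {
        const unsigned char *row = bgr + (size_t)y * src_stride;  /* stride skips each row's remainder bytes */
        for (int x = 0; x < width; x++) {
            unsigned char b = row[x * 3 + 0];
            unsigned char g = row[x * 3 + 1];
            unsigned char r = row[x * 3 + 2];
            gray[(size_t)y * width + x] = (unsigned char)(0.2 * b + 0.5 * g + 0.3 * r);
        }
    }
    return gray;
}
```

Because the grayscale buffer has no padding, the later steps can index it as a simple width-by-height array.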
With the truncated image converted to grayscale, the data is ready for edge detection.
Go on to Canny Edge Detection
Thursday, October 25, 2007
2 comments:
Thanks for this post. How about using Sobel edge detection? What's the difference between the two? I've used Sobel edge detection in my special project; it uses a 3x3 convolution. It offers edge detection for MRI.
I am not familiar with the different edge detection algorithms. I have heard of Sobel, but don't know much about it. My approach copied the work already set down by the three credited sites (although one of them has since been taken down). I just used what already worked.
I would be very interested in comparing the results of the two methods, but I won't be able to pursue it myself. If Sobel offers savings in processor time, it would be well worth the research.