I’ve started exploring different methods for figuring out the location of drawing surfaces for my yet-to-be-named Drawing Robot. The main options available to me are OpenCV, for both C++ and Python, and the image processing tools within MATLAB. There are a few differences between them, and I’ve decided to go with OpenCV for several reasons:
- As the name might suggest, OpenCV is free to use and doesn’t require a MATLAB license (I don’t know how much longer I’ll be able to keep my university’s student license).
- OpenCV, when using C++, is allegedly faster than MATLAB at most tasks.
- OpenCV can run on several platforms, including embedded Linux, which means the drawing machine could eventually do its own image processing.
Of course, MATLAB’s main advantage is its interactive workspace, so unfortunately I won’t be able to use that.
OpenCV provides pre-built libraries for Windows, which makes installation much simpler. I considered using a Linux virtual machine, or appropriating a Raspberry Pi or a dedicated Linux PC to do the processing, but I realized that sticking with Windows would be the easiest approach for now. This tutorial made installation and integration into Visual Studio a very simple process.
Once OpenCV was up and running, I decided to try a bit of simple image processing relevant to my project: finding the edges of a piece of paper placed in the robot’s workspace. One way to accomplish this is with a Canny Edge Detector, which works roughly like this:

- Apply a slight Gaussian blur to the image to remove high-frequency pixel noise (like film grain). This does degrade the accuracy of the process somewhat, so smarter filters can also be used.
- Use a Sobel filter (or something similar) to get the pixel gradients. This is similar to taking the derivative of each pixel value with respect to position. It’s done in both the x (horizontal) and y (vertical) directions by convolving the image with a kernel that has the expected gradient built in. The x and y derivatives can then be combined into a vector with a magnitude and angle, which is the gradient at that pixel. This gives you the image on the right above.
- A process called non-maximum suppression is used to find the center line of each edge. It looks at every pixel and keeps it only if its neighbors along the direction of the gradient have smaller values, discarding it otherwise. This leaves only the pixels in the “brightest” regions of the edges found by the Sobel filtering.
- Some final filtering is done to remove any small points or low values that were left over. This is done by thresholding; Canny actually uses two thresholds (hysteresis), keeping strong edges outright and weak edges only where they connect to strong ones. A rough sketch of the first couple of stages, built from OpenCV’s own functions, follows below.
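For reference, here’s roughly what the blur and gradient stages look like when wired up by hand from OpenCV’s building blocks (the file name, blur size, and Sobel kernel size are placeholder values I picked for illustration, not anything from my actual setup). Non-maximum suppression and the thresholding are left out, since the built-in function handles those:

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    // Load the capture as grayscale (path is hypothetical).
    cv::Mat gray = cv::imread("workspace.jpg", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return 1;

    // Stage 1: Gaussian blur to suppress high-frequency pixel noise.
    cv::Mat blurred;
    cv::GaussianBlur(gray, blurred, cv::Size(5, 5), 1.5);

    // Stage 2: Sobel derivatives in x and y...
    cv::Mat gradX, gradY;
    cv::Sobel(blurred, gradX, CV_32F, 1, 0, 3);
    cv::Sobel(blurred, gradY, CV_32F, 0, 1, 3);

    // ...combined into a per-pixel gradient magnitude and angle.
    cv::Mat magnitude, angle;
    cv::cartToPolar(gradX, gradY, magnitude, angle, true); // angle in degrees

    // Normalize for display; this is roughly the "gradient image" described above.
    cv::Mat display;
    cv::normalize(magnitude, display, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imshow("Gradient magnitude", display);
    cv::waitKey(0);
    return 0;
}
```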
Of course, I don’t need to implement all of that, because OpenCV has a Canny function built in! I used the example code and a picture from a webcam of a table with a sheet of paper on it, and got a very nice result:
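For anyone following along, the core of that example boils down to something like the sketch below. The image path and thresholds are placeholders for illustration, not the exact values from my test:

```cpp
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

int main()
{
    cv::Mat src = cv::imread("table_with_paper.jpg", cv::IMREAD_GRAYSCALE);
    if (src.empty()) return 1;

    // Blur first so the detector isn't chasing sensor noise.
    cv::Mat blurred, edges;
    cv::blur(src, blurred, cv::Size(3, 3));

    // cv::Canny takes a low and a high threshold for the hysteresis stage;
    // the OpenCV tutorial suggests keeping the ratio around 2:1 to 3:1.
    cv::Canny(blurred, edges, 50, 150);

    cv::imshow("Edges", edges);
    cv::waitKey(0);
    return 0;
}
```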

Damn. That was much easier than I expected. Of course, there’s much more to do (the page isn’t the only object being detected here, so I’ll need to account for that). But this data can easily be used to perform Principal Component Analysis, so it’s a step in the right direction.
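To give a sense of where that’s headed, here’s a hypothetical sketch of feeding edge pixels into OpenCV’s PCA class to get a centroid and dominant axes. The synthetic rectangle just stands in for a real Canny result so the snippet runs on its own:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>
#include <iostream>

int main()
{
    // 'edges' would normally be the binary output of cv::Canny; a drawn
    // rectangle stands in for it here so the sketch is self-contained.
    cv::Mat edges = cv::Mat::zeros(480, 640, CV_8U);
    cv::rectangle(edges, cv::Rect(200, 150, 240, 180), cv::Scalar(255), 1);

    // Collect the coordinates of every edge pixel.
    std::vector<cv::Point> edgePoints;
    cv::findNonZero(edges, edgePoints);

    // Pack the points into an N x 2 float matrix, one point per row.
    cv::Mat data(static_cast<int>(edgePoints.size()), 2, CV_32F);
    for (int i = 0; i < data.rows; ++i) {
        data.at<float>(i, 0) = static_cast<float>(edgePoints[i].x);
        data.at<float>(i, 1) = static_cast<float>(edgePoints[i].y);
    }

    // PCA gives the centroid (mean) and the dominant axes (eigenvectors),
    // which hint at the page's position and orientation in the workspace.
    cv::PCA pca(data, cv::Mat(), cv::PCA::DATA_AS_ROW);
    std::cout << "centroid: " << pca.mean << "\n"
              << "axes: " << pca.eigenvectors << std::endl;
    return 0;
}
```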