Edges play a major role in both human and computer vision. We, as humans, can easily recognize many object types and their pose just by seeing a backlit silhouette or a rough sketch. Indeed, when art emphasizes edges and poses, it often seems to convey the idea of an archetype, such as Rodin's The Thinker or Joe Shuster's Superman. Software, too, can reason about edges, poses, and archetypes. We will discuss these kinds of reasoning in later chapters.
OpenCV provides many edge-finding filters, including Laplacian(), Sobel(), and Scharr(). These filters are supposed to turn non-edge regions to black while turning edge regions to white or saturated colors. However, they are prone to misidentifying noise as edges. This flaw can be mitigated by blurring an image before trying to find its edges. OpenCV also provides many blurring filters, including blur() (simple average), medianBlur(), and GaussianBlur(). The arguments for the edge-finding and blurring filters vary but always include ksize, an odd whole number that represents the width and height (in pixels) of a filter's kernel.
For blurring, let's use medianBlur(), which is effective in removing digital video noise, especially in color images. For edge-finding, let's use Laplacian(), which produces bold edge lines, especially in grayscale images. After applying medianBlur(), but before applying Laplacian(), we should convert the image from BGR to grayscale.
Once we have the result of Laplacian(), we can invert it to get black edges on a white background. Then, we can normalize it (so that its values range from 0 to 1) and multiply it with the source image to darken the edges. Let's implement this approach in filters.py:
def strokeEdges(src, dst, blurKsize = 7, edgeKsize = 5):
    if blurKsize >= 3:
        blurredSrc = cv2.medianBlur(src, blurKsize)
        graySrc = cv2.cvtColor(blurredSrc, cv2.COLOR_BGR2GRAY)
    else:
        graySrc = cv2.cvtColor(src, cv2.COLOR_BGR2GRAY)
    cv2.Laplacian(graySrc, cv2.CV_8U, graySrc, ksize = edgeKsize)
    normalizedInverseAlpha = (1.0 / 255) * (255 - graySrc)
    channels = cv2.split(src)
    for channel in channels:
        channel[:] = channel * normalizedInverseAlpha
    cv2.merge(channels, dst)
Note that we allow kernel sizes to be specified as arguments for strokeEdges(). The blurKsize argument is used as ksize for medianBlur(), while edgeKsize is used as ksize for Laplacian(). With my webcams, I find that a blurKsize value of 7 and an edgeKsize value of 5 look best. Unfortunately, medianBlur() is expensive with a large ksize, such as 7.