Some notes about the Laplacian

Let's take a look at the following notes:

  • The Laplacian, ∇²I, is a scalar (unlike the gradient, which is a vector)
  • A single kernel (mask) is used to compute the Laplacian (unlike the gradient where we usually have two kernels, the partial derivatives in the x and y directions, respectively)
  • Being a scalar, it does not have any direction and hence we lose the orientation information
  • The Laplacian, ∇²I = ∂²I/∂x² + ∂²I/∂y², is the sum of the second-order partial derivatives (whereas the gradient is a vector of the first-order partial derivatives), and the higher the order of the derivative, the more the noise is amplified
  • The Laplacian is very sensitive to noise
  • Hence, the Laplacian is typically preceded by a smoothing operation (for example, with a Gaussian filter); otherwise, the noise can be greatly amplified
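The effect described in the last two points can be sketched as follows. This is a minimal illustration (not from the book's code) that applies the Laplacian kernel to a synthetic noisy image, once directly and once after Gaussian smoothing, and compares the noise response in a flat region:

```python
import numpy as np
from scipy import ndimage, signal

# synthetic test image: a bright square on a dark background, plus noise
rng = np.random.default_rng(0)
im = np.zeros((64, 64))
im[16:48, 16:48] = 1.0
im_noisy = im + 0.2 * rng.standard_normal(im.shape)

# negative-Laplacian kernel
ker_laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])

# Laplacian applied directly to the noisy image: the noise is amplified
lap_raw = signal.convolve2d(im_noisy, ker_laplacian, mode='same')

# Gaussian smoothing first, then the Laplacian (together, a LoG filter)
im_smooth = ndimage.gaussian_filter(im_noisy, sigma=2)
lap_smooth = signal.convolve2d(im_smooth, ker_laplacian, mode='same')

# the smoothed version has a much weaker noise response in flat regions
print(lap_raw[2:14, 2:14].std(), lap_smooth[2:14, 2:14].std())
```

In the flat background, the direct Laplacian output is dominated by amplified noise, while the smoothed version is nearly quiet, which is exactly why the smoothing step comes first.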

The following code snippet shows how to compute the Laplacian of an image using convolution with the kernel shown previously:

import numpy as np
import pylab
from scipy import signal
from skimage.io import imread
from skimage.color import rgb2gray

# the 3 x 3 negative-Laplacian kernel (mask) shown previously
ker_laplacian = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]])
im = rgb2gray(imread('../images/chess.png'))
# convolve with the kernel and clip the output to [0, 1] for display
im1 = np.clip(signal.convolve2d(im, ker_laplacian, mode='same'), 0, 1)
pylab.gray()
pylab.figure(figsize=(20, 10))
pylab.subplot(121), plot_image(im, 'original')
pylab.subplot(122), plot_image(im1, 'laplacian convolved')
pylab.show()
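As an aside (this is an alternative, not part of the book's snippet), SciPy's `scipy.ndimage.gaussian_laplace` fuses the Gaussian smoothing and the Laplacian into a single Laplacian-of-Gaussian (LoG) call. The sketch below runs it on a synthetic checkerboard standing in for the chess image, since `../images/chess.png` may not be available:

```python
import numpy as np
from scipy import ndimage

# synthetic 64 x 64 checkerboard standing in for the chess image (assumption)
im = np.kron([[0.0, 1.0] * 4, [1.0, 0.0] * 4] * 4, np.ones((8, 8)))

# Gaussian smoothing + Laplacian in one call (Laplacian of Gaussian)
log_im = ndimage.gaussian_laplace(im, sigma=1)
```

The LoG response is near zero inside each flat square and large near the square boundaries, so it behaves like a smoothed edge detector.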

The following screenshot shows the output of the preceding code snippet. As can be seen, the Laplacian output also finds the edges in the image:
