© Nicolas Modrzyk 2018
Nicolas Modrzyk, Java Image Processing Recipes, https://doi.org/10.1007/978-1-4842-3465-5_3

3. Imaging Techniques

Nicolas Modrzyk, Tokyo, Japan

The most perfect technique is that which is not noticed at all.

Pablo Casals

The previous chapter was an introduction to Origami and how to perform mostly single-step processing operations on simple mats and images.

While that was already a good showcase of the library's ease of use, this third chapter takes you one step further by combining simple processing steps to reach a bigger goal: content analysis, contour detection, shape finding, and shape movements, all the way to computer-based sketching and landscape art. Many an adventure awaits here.

We will start again on familiar ground by manipulating OpenCV mats at the byte level, to grasp in even more detail the ins and outs of image manipulation.

The learning will be split into two big sections. First will be a slightly art-focused section, where we play with lines, gradations, and OpenCV functions to create new images from existing ones. You will be using already known origami/opencv functions, but a few other ones will also be introduced as needed to go with the creative flow.

Creating drawings was one of the original goals of Origami. It just happened that, while working out how simple concepts fit together, I had to play with image compositions and wireframes that came out better than I expected. Even better, it was easy to add your own touch and reuse the creations later on. So that first part is meant to share this experience.

Then, in the second part, we will move on to techniques more focused on image processing. The processing steps will be easier to grasp at that stage, after the immediate feedback of the art section.

Processing steps in OpenCV are easy most of the time, but the original C++ samples make it quite hard to read between the lines of pointers. I personally find that, even with the Clojure learning curve included, Origami is an easier way to get started with OpenCV: you can focus on the direct impact of your lines of code and try writing each step in different ways, with instant feedback each time and without restarting everything, until it eventually falls into place nicely. Hopefully, the second part of the chapter will make you comfortable enough that you will want to go and challenge the examples even further.

Note that it is probably a good idea to read this chapter linearly so that you do not miss new functions or new tricks along the way. However, nothing prevents you from just jumping in where you feel like it, of course. It is a recipe book after all!

3.1 Playing with Colors

Problem

In the previous chapter, you already saw various techniques to change colors in a mat.

You would like to get control over how to specify and alter colors, for example increasing or decreasing their intensity, by applying specific factors or functions to the mats.

Solution

Here, you will learn about the following: how to combine operations like converting an image color channel using the already known cvt-color; how to use other OpenCV functions like threshold to limit channel values; how to create masks and use them with the function set-to; and how to use functions to combine separate versions of a mat.

You will also review in more detail how to use the transform! function to create basic art effects.

How it works

To play with mats, we will be using another set of cats and flowers, but you can of course try applying the functions on your own photos any time.

The namespace header of the chapter, with all its dependencies, will use the same namespaces required in the last chapter, namely opencv3.core and opencv3.utils, as well as opencv3.colors.rgb from Origami's opencv3 namespaces.

The :require section looks like the following code snippet.

(ns opencv3.chapter03
  (:require
    [opencv3.core :refer :all]
    [opencv3.colors.rgb :as rgb]
    [opencv3.utils :as u]))

It is usually a good idea to create a new notebook for each experiment, and to save them separately.

Applying Threshold on a Colored Mat

Back to basics. Do you remember how to apply a threshold to a mat, keeping only the values above 150?

Yes, you’re correct: use the threshold function.

(-> (u/matrix-to-mat [[100 255 200]
                      [100 255 200]
                      [100 255 200]])
    (threshold! 150 255 THRESH_BINARY)
    (dump))

The input matrix contains various values, some below and some above the threshold value of 150. When applying threshold, the values below are set to 0 and the ones above are set to threshold’s second parameter value, 255.

This results in the following matrix:

[0 255 255]
[0 255 255]
[0 255 255]
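Under the hood, THRESH_BINARY is a simple per-value rule that any language can express. Here is the same 3×3 example sketched in plain Python (no OpenCV involved), just to confirm the output above:

```python
def thresh_binary(value, thresh=150, maxval=255):
    # OpenCV's THRESH_BINARY rule: maxval when strictly above the threshold, else 0.
    return maxval if value > thresh else 0

matrix = [[100, 255, 200],
          [100, 255, 200],
          [100, 255, 200]]

result = [[thresh_binary(v) for v in row] for row in matrix]
print(result)  # [[0, 255, 255], [0, 255, 255], [0, 255, 255]]
```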

That was for a one-channel mat, but what happens if we do the same on a three-channel mat?

(-> (u/matrix-to-mat [[0 0 170]
                      [0 0 170]
                      [100 100 0]])
    (cvt-color! COLOR_GRAY2BGR)
    (threshold! 150 255 THRESH_BINARY)
    (dump))

Converting the colors to BGR duplicates each value of the one-channel mat across the three channels of the same pixel.

Applying the OpenCV threshold function right afterward applies the threshold to the values of every channel. The resulting mat thus loses the 100 values of the original mat (they fall to 0), while the 170 values turn into 255s.

[0 0 0 0 0 0 255 255 255]
[0 0 0 0 0 0 255 255 255]
[0 0 0 0 0 0 0 0 0]

A 3×3 matrix is a bit too small to show onscreen, so let’s use resize on the input matrix first.

(-> (u/matrix-to-mat [[0 0 170]
                      [0 0 170]
                      [100 100 0]])
    (cvt-color! COLOR_GRAY2BGR)
    (resize! (new-size 50 50) 1 1 INTER_AREA)
    (u/mat-view))
Figure 3-1. Black and white mat

Applying a similar threshold on the preceding mat keeps the light gray, which has a value above the threshold, but removes the darker gray by turning it to black.

(-> (u/matrix-to-mat [[0 0 170]
                      [0 0 170]
                      [100 100 0]])
    (cvt-color! COLOR_GRAY2BGR)
    (threshold! 150 255 THRESH_BINARY)
    (resize! (new-size 50 50) 0 0 INTER_AREA)
    (u/mat-view))
This gives us Figure 3-2.
Figure 3-2. Thresholded!

Notice the use of a specific interpolation parameter with resize, INTER_AREA, which keeps the shape's edges sharp instead of interpolating and forcing a blur.

Just for some extra info, the default resize interpolation gives something like Figure 3-3, which can be useful in other circumstances, but is not what we want here.
Figure 3-3. Resize with default interpolation

Anyway, back to the exercise, and you probably have it at this point: applying a standard threshold pushes forward vivid colors.

Let’s see how that works on a mat loaded from an image, and let’s load our first image of the chapter (Figure 3-4).

(def rose
  (imread "resources/chapter03/rose.jpg" IMREAD_REDUCED_COLOR_4))
Figure 3-4. Some say love it is a river

We start by applying the same threshold that was applied to the mat loaded from a matrix, but this time on the rose image.

(-> rose
    (clone)
    (threshold! 100 255 THRESH_BINARY)
    (u/mat-view))
You get a striking result! (Figure 3-5)
Figure 3-5. Vivid colors

In a nicely shot photograph, this actually gives you an artistic feeling that you can build upon for cards and Christmas presents!

Let’s now apply a similar technique on a completely different image. We’ll turn the picture to black and white first and see what the result is.

This time, the picture is of playful kittens, as shown in Figure 3-6.

(-> "resources/chapter03/ai6.jpg"
    (imread IMREAD_REDUCED_COLOR_2)
    (u/mat-view))
Figure 3-6. Playful cats

If you apply a similar threshold to the grayscale version, something rather interesting happens.

(-> "resources/chapter03/ai6.jpg"
  (imread  IMREAD_REDUCED_GRAYSCALE_2)
  (threshold! 100 255 THRESH_BINARY)
  (u/mat-view))
The two cats are actually standing out and being highlighted (Figure 3-7).
Figure 3-7. Playful, highlighted cats

Cool; this means that the shape we wanted to stand out has been highlighted.

Something similar can be used to find shapes and moving objects; more on this in Recipes 3-6 and 3-7.

For now, and to keep things artistic, let’s work on a small function that will turn all the colors under a given threshold to one color, and all the values above the threshold to another one.

We can achieve this by
  • first, converting the image to a different color space, namely HSV

  • creating a mask by applying threshold with the THRESH_BINARY flag

  • creating a second mask by applying threshold with the THRESH_BINARY_INV flag, thus producing a mask with values opposite to the first one

  • converting the two masks to gray, so they are made of a single channel

  • setting the first color on the work mat using set-to, following the first mask

  • setting the second color on the work mat, again using set-to, following the second mask

That's it!

In coding happiness, we will create a low-high! function that implements the algorithm described in the preceding list.

The low-high! function is composed of cvt-color!, threshold, and set-to, all functions you have already seen.

(defn low-high!
  ([image t1 color1 color2]
   (let [_copy (-> image clone (cvt-color! COLOR_BGR2HSV))
         _work (clone image)
         _thresh-1 (new-mat)
         _thresh-2 (new-mat)]
     (threshold _copy _thresh-1 t1 255 THRESH_BINARY)
     (cvt-color! _thresh-1 COLOR_BGR2GRAY)
     (set-to _work color1 _thresh-1)
     (threshold _copy _thresh-2 t1 255 THRESH_BINARY_INV)
     (cvt-color! _thresh-2 COLOR_BGR2GRAY)
     (set-to _work color2 _thresh-2)
     _work)))

We will call it on the rose picture, with a threshold of 150 and a white-smoke to light-blue split.

(->
 (imread "resources/chapter03/rose.jpg" IMREAD_REDUCED_COLOR_4)
 (low-high! 150 rgb/white-smoke- rgb/lightblue-1)
 (u/mat-view))
Executing the preceding snippet gives us Figure 3-8.
Figure 3-8. White on light blue rose

Great. But, you ask, do we really need to create two masks for this? Indeed, you do not: a bitwise operation on the first mask works perfectly. To do this, simply comment out the second mask creation and use bitwise-not! before calling set-to the second time.

    ;(threshold _copy _thresh-2 t1 255 THRESH_BINARY_INV)
    ;(cvt-color! _thresh-2 COLOR_BGR2GRAY)
    (set-to _work color2 (bitwise-not! _thresh-1))

From there, you could also apply thresholds on different color maps, or create ranges to use as threshold values.

Another idea here is, obviously, to just hot-space-queen-ize any picture.

In case you are wondering, the following snippet does that for you.

(def freddie-red (new-scalar 26 48 231))
(def freddie-blue (new-scalar 132 46 71))
(def bryan-yellow (new-scalar 56 235 255))
(def bryan-grey (new-scalar 186 185 181))
(def john-blue (new-scalar 235 169 0))
(def john-red (new-scalar 32 87 233))
(def roger-green (new-scalar 72 157 53))
(def roger-pink (new-scalar 151 95 226))
(defn queen-ize [mat thresh]
  (vconcat!
    [(hconcat!
       [(-> mat clone (low-high! thresh freddie-red freddie-blue))
        (-> mat clone (low-high! thresh john-blue john-red))])
     (hconcat!
       [(-> mat clone (low-high! thresh roger-pink roger-green))
        (-> mat clone (low-high! thresh bryan-yellow bryan-grey))])]))

This really is just calling low-high! four times, each time with colors from Queen's 1982 album Hot Space.

And the old-fashioned result is shown in Figure 3-9.
Figure 3-9. Cats and Queen

You really know how to set the mood

And you really get inside the groove

Cool cat

Queen – “Cool Cat”

Channels by Hand

Whenever you are about to play with the channels of a mat, remember the OpenCV split function. It separates the channels into a list of independent mats, so you can focus entirely on one of them.

You can then apply transformations to that specific mat without touching the others and, when finished, return to a multichannel mat using the merge function, which does the reverse: it takes a list of mats, one per channel, and combines them into a single target mat.

To see that in action, suppose you have a simple orange mat (Figure 3-10).

(def orange-mat
  (new-mat 3 3 CV_8UC3 rgb/orange-2))
Figure 3-10. Orange mat

If you want to turn the orange mat into a red one, you would simply set all the values of the green channel to 0.

So, you start by splitting the three color channels into three mats; then, set all the values of the second mat (the green channel) to 0 and merge the three mats back into one.

First, let’s split the mat into channels, and see the content of each of them.

In happy coding, this gives

(def channels (new-arraylist))
(split orange-mat channels)

The three channels are now separated into three elements in the list. You can look at the content of each channel simply by using dump.

For example, dump of the blue channel:

(dump (nth channels 0))
; no blue
;[0 0 0]
;[0 0 0]
;[0 0 0]

or dump of the green channel :

(dump (nth channels 1))
; quite a bit of green
;[154 154 154]
;[154 154 154]
;[154 154 154]

Finally, dump of the red channel:

(dump (nth channels 2))
; almost max of red
;[238 238 238]
;[238 238 238]
;[238 238 238]

From there, let’s turn all those 154 values in the green channel to 0.

(set-to (nth channels 1) (new-scalar 0.0))

And then, let’s merge all the different mats back to a single mat and get Figure 3-11.

(def red-mat (new-mat))
(merge channels red-mat)
Figure 3-11. Red mat

The green intensity on all pixels in the mat was uniformly set to 0, and so with all the blue channel values already set to 0, the resulting mat is a completely red one.
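The split / set-to / merge round-trip boils down to simple list surgery. Here is a plain-Python sketch of the same idea, reusing the B, G, R values dumped above (pixels stored as [B, G, R] triplets purely for illustration):

```python
# A small mat of orange pixels, stored pixel-wise as [B, G, R].
orange = [[0, 154, 238] for _ in range(9)]

# "split": one flat list of values per channel.
channels = [[px[c] for px in orange] for c in range(3)]

# "set-to 0" on the green channel (index 1).
channels[1] = [0] * len(channels[1])

# "merge": recombine the channels back into pixels.
red = [[channels[c][i] for c in range(3)] for i in range(len(orange))]
print(red[0])  # [0, 0, 238] -> a pure red pixel
```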

We can combine all the different steps of this small exercise and create the function update-channel!, which takes a mat, a function, and the channel to apply the function to and then returns the resulting mat.

Let’s try a first version using u/mat-to-bytes and u/bytes-to-mat! to convert back and forth between mat and byte arrays.

This gets complicated, but it is actually the easiest version I could come up with to explain the flow of the transformation.

The code flow will be as follows:
  • split the channels into a list

  • retrieve the target channel’s mat

  • convert the mat to bytes

  • apply the function to every byte of the channel mat

  • turn the byte array back to a mat

  • set that mat to the corresponding channels in the list

  • merge the channels into the resulting mat

This should now, at least, read almost sequentially as in the following:

(defn update-channel! [mat fnc chan]
  (let [channels (new-arraylist)]
    (split mat channels)
    (let [old-ch (nth channels chan)
          new-ch (u/bytes-to-mat!
                   (new-mat (.height mat) (.width mat) (.type old-ch))
                   (byte-array (map fnc (u/mat-to-bytes old-ch))))]
      (.set channels chan new-ch)
      (merge channels mat)
      mat)))

Now let’s get back to my sister’s cat, who’s been sleeping on the couch for some time. Time to tease him a bit and wake him up.

(def my-sister-cat
  (-> "resources/chapter03/emilie1.jpg"
      (imread IMREAD_REDUCED_COLOR_8)))

With the help of the update-channel! function, let’s turn all the blue and green channel values to their maximum possible values of 255. We could have written a function that applies multiple functions at the same time, but for now let’s just call the same function one by one in a row.

(-> my-sister-cat
    clone
    (update-channel! (fn [x] 255) 1)
    (update-channel! (fn [x] 255) 0)
    u/mat-view)
This is not very useful as far as imaging goes, nor very useful for my sister’s cat either, but by maxing out all the values of the blue and green channels, we get a picture that is all cyan (Figure 3-12).
Figure 3-12. Cyan cat

This newly created function can also be combined with converting colorspace.

Thus, switching to HSV color space before calling update-channel! gives you full control over the mat’s color.

(-> my-sister-cat
    clone
    (cvt-color! COLOR_RGB2HSV)
    (update-channel! (fn [x] 10) 0) ; blue filter
    (cvt-color! COLOR_HSV2RGB)
    (u/mat-view))

The preceding code applies a blue filter, leaving saturation and brightness untouched, thus still keeping the image dynamics.

Of course, you could try with a pink filter, setting the filter’s value to 150, or red, by setting the filter’s value to 120, or any other possible value. Try it out!

For now, enjoy the blue variation in Figure 3-13.
Figure 3-13. Blue-filtered cat

Personally, I also like the YUV switch combined with maximizing all the luminance values (Y).

(-> my-sister-cat
    clone
    (cvt-color! COLOR_BGR2YUV)
    (update-channel! (fn [x] 255) 0)
    (cvt-color! COLOR_YUV2BGR)
    (u/mat-view))
This gives a kind of watercolor feel to the image (Figure 3-14).
Figure 3-14. Artful cat

Transform

You can also apply different sorts of transformations using the OpenCV transform function.

To understand the background of transform a bit, let’s get back once again to the usual byte-per-byte matrix manipulation, first on a one-channel 3×3 mat that we would like to make slightly darker.

(def s-mat (new-mat 3 3 CV_8UC1))
(.put s-mat 0 0 (byte-array [100 255 200
                             100 255 200
                             100 255 200]))

This can be viewed with the following code (Figure 3-15).

(u/mat-view (-> s-mat clone (resize! (new-size 30 30) 1 1 INTER_AREA)))
Figure 3-15. Flag in black and white

Then we define a 1×1 transformation matrix, with one value of 0.7.

(def t-mat
  (new-mat 1 1 CV_32F (new-scalar 0.7)))

Next, we apply the transformation in place and also dump the result to see the values out from the transformation.

(-> s-mat
    (transform! t-mat)
    (dump))

Calling the transform function has the effect of turning all the values of the input matrix to their original value multiplied by 0.7.

The result is shown in the following matrix:

[70 178 140]
[70 178 140]
[70 178 140]
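The 70/178/140 values can be checked by hand: transform multiplies each byte by 0.7, rounds to the nearest integer, and saturates to the 0–255 byte range. A plain-Python sketch of that per-byte rule:

```python
def scale_byte(v, factor=0.7):
    # Multiply, round to the nearest integer, and clamp to byte range,
    # mimicking OpenCV's saturate_cast behavior for 8-bit mats.
    return min(255, max(0, int(round(v * factor))))

row = [100, 255, 200]
print([scale_byte(v) for v in row])  # [70, 178, 140]
```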

It also means that the visuals of the mat have become darker (Figure 3-16):

(u/mat-view (-> s-mat (resize! (new-size 30 30) 1 1 INTER_AREA)))
Figure 3-16. Darker flag

This is a simple matrix computation, but it already shows two things:
  • The bytes of the source mat are all multiplied by the value in the 1×1 mat;

  • It’s actually easy to apply custom transformation .

Those transformations work much the same for mats with multiple channels. So, let's grab an example and move to a colored colorspace (yeah, I know) using cvt-color!:

(def s-mat (new-mat 3 3 CV_8UC1))
(.put s-mat 0 0 (byte-array [100 255 200
                             100 255 200
                             100 255 200]))
(cvt-color! s-mat COLOR_GRAY2BGR)

Because the mat is now made of three channels, we now need a 3×3 transformation matrix.

The following transformation mat will give more strength to the blue channel.

[ 2 0 0  ; B out = 2*B
  0 1 0  ; G out = G
  0 0 1] ; R out = R
The transformation matrix is read row by row: each row builds one output channel as a weighted sum of the input channels. So there are three values per row, one weight per input channel, and three rows, one per output channel.
  • [2 0 0] makes the output blue channel twice the input blue value, with no contribution from green or red

  • [0 1 0] keeps the green channel as is, with no contribution from the other channels

  • [0 0 1] keeps the red channel as is, likewise with no contribution from the other channels
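Per pixel, transform is just a matrix–vector product: each output channel is the dot product of one matrix row with the input [B, G, R] vector, saturated to the byte range. A plain-Python check with the blue-boosting matrix:

```python
def transform_pixel(matrix, pixel):
    # dst[i] = sum_j matrix[i][j] * src[j], clamped to byte range.
    return [min(255, max(0, int(round(sum(m * p for m, p in zip(row, pixel))))))
            for row in matrix]

boost_blue = [[2, 0, 0],
              [0, 1, 0],
              [0, 0, 1]]

print(transform_pixel(boost_blue, [100, 100, 100]))  # [200, 100, 100]
print(transform_pixel(boost_blue, [170, 170, 170]))  # saturates: [255, 170, 170]
```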

(def t-mat (new-mat 3 3 CV_32F))
(.put t-mat 0 0 (float-array [2 0 0
                              0 1 0
                              0 0 1]))
Applying the transformation to the newly colored mat gives you Figure 3-17, where blue is prominently standing out.
Figure 3-17. Blue flag

Since there is definitely no way we can leave my sister’s cat in peace, let’s apply a similar transformation to it.

The code is exactly the same as in the preceding small mat example, but applied to an image.

(-> my-sister-cat   
  clone
  (transform! (u/matrix-to-mat [ [2 0 0] [0 1 0] [0 0 1]])))
And Figure 3-18 shows a blue version of a usually white cat.
Figure 3-18. Blue meeoooww

If you wanted red in the input to also influence blue in the output, you could use a matrix similar to the following:

[2 0 1.1
 0 1 0
 0 0 1  ]

You can understand why by now, right? The first row, [2 0 1.1], means that the output blue channel doubles the input blue intensity, and that the input red also contributes a little to it.

You should probably try a few transformation matrices by yourself to get a feel for them.

So, now, how could you increase the luminosity of a mat using a similar technique?

Yes, that’s right: by converting the matrix to HSV colorspace first, then multiplying the third channel and keeping the others as they are.

The following sample increases the luminosity by 1.5 in the same fashion .

(-> my-sister-cat
    clone
    (cvt-color! COLOR_BGR2HSV)
    (transform! (u/matrix-to-mat [[1 0 0] [0 1 0] [0 0 1.5]]))
    (cvt-color! COLOR_HSV2BGR)
    u/mat-view)
Figure 3-19 shows the image output of the preceding snippet.
Figure 3-19. Luminous cat

Artful Transformations

To conclude this recipe, let’s play a bit with luminosity and contours to create something a bit artistic.

We want to create a watercolor version of the input picture by maximizing the luminosity. We also want to create a "contour" version of the image, using OpenCV's canny edge detection. Then, finally, we will combine the two mats for a pencil-over-watercolor effect.

First, let’s work on the background. The background is created by performing two transformations in a row: one to max out the luminosity in the YUV color space, the other to get it more vivid by increasing blue and red colors.

(def usui-cat
  (-> my-sister-cat
      clone
      (cvt-color! COLOR_BGR2YUV)
      (transform! (u/matrix-to-mat [[20 0 0]
                                    [0 1 0]
                                    [0 0 1]]))
      (cvt-color! COLOR_YUV2BGR)
      (transform! (u/matrix-to-mat [[3 0 0]
                                    [0 1 0]
                                    [0 0 2]]))))

If the result looks too washed out, you can also add another transformation at the end of the pipeline to increase the contrast; this is easily done in another colorspace, HSV.

  (cvt-color! COLOR_BGR2HSV)    
  (transform! (u/matrix-to-mat
                          [[1 0 0]
                          [0 3 0]
                          [0 0 1]]))
  (cvt-color! COLOR_HSV2BGR)    
This gives us a nice pink-y background (Figure 3-20).
Figure 3-20. Pink cat for background

Next is the foreground. The front cat is created using a call to opencv’s canny function. This time, this is done in the one-channel gray color space.

(def line-cat
  (-> my-sister-cat
      clone
      (cvt-color! COLOR_BGR2GRAY)
      (canny! 100.0 150.0 3 true)
      (cvt-color! COLOR_GRAY2BGR)
      (bitwise-not!)))
The canny version of my sister’s cat gives the following (Figure 3-21):
Figure 3-21. Cartoon cat

Then, the two mats are combined using a simple call to the function bitwise-and, which merges two mats together by doing simple “and” bit operations.

(def target (new-mat))
(bitwise-and usui-cat line-cat target)
This gives the nice artful cat in Figure 3-22.
Figure 3-22. Pink and art and cat
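Why does a bitwise AND merge the two layers so neatly? The line mat is white (255) everywhere except on the black (0) contours, and x & 255 == x while x & 0 == 0: the background shows through everywhere except under the pencil lines. In plain Python:

```python
def and_merge(background, lines):
    # Per channel value: 255 in the line mat lets the background through,
    # while 0 (a contour pixel) forces black.
    return [bg & ln for bg, ln in zip(background, lines)]

background = [200, 120, 250]  # some colorful background pixel (B, G, R)
print(and_merge(background, [255, 255, 255]))  # off the contour -> [200, 120, 250]
print(and_merge(background, [0, 0, 0]))        # on the contour  -> [0, 0, 0]
```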

While pink may not be your favorite color, you now have all the tools to modify the flows presented in this recipe to your liking, and to create many variations of artful cats with different background colors and different foregrounds.

But please. No dogs.

3.2 Creating Cartoons

Be yourself. No one can say you’re doing it wrong.

Charles M. Schulz

Problem

You have seen a very simple way of doing cartoon artwork using canny, but you would like to master a few more variations of doing cartoony artwork.

Solution

Most of the cartoon-looking transformations can be created using a variation of the grayscale, blur, canny, and channel-filter functions that were seen in the previous recipe.

How it works

You have already seen the canny function, famous for easily highlighting shapes in a picture. It can also be used for a bit of cartooning. Let's see that with my friend Johan.

Johan is a sharp Belgian guy who sometimes gets tricked into having a glass of good Pinot Noir (Figure 3-23).
Figure 3-23. Johan

In this recipe, Johan was loaded with the following snippet:

(def source
  (-> "resources/chapter03/johan.jpg"
  (imread IMREAD_REDUCED_GRAYSCALE_8)))

A naïve canny call would look like this, where 10.0 and 90.0 are the bottom and top thresholds for the canny function, 3 is the aperture size, and the final boolean selects a more accurate gradient computation (true) or the standard one (false).

(-> source
    clone
    (canny! 10.0 90.0 3 false))
Johan has now been turned into a canny version of himself (Figure 3-24).
Figure 3-24. Naïve canny usage

You already know that we can use the result of the canny function as a mask and for example do a copy of blue over white (Figure 3-25).

(def c (-> source clone (canny! 10.0 90.0 3 false)))
(def colored (u/mat-from source))
(set-to colored rgb/blue-2)
(def target (u/mat-from source))
(set-to target rgb/white)
(copy-to colored target c)
Figure 3-25. Copy blue over white

That is quite a few lines showing in the picture. By reducing the range between the two threshold values, we can make the picture significantly clearer and look less messy.

(canny! 70.0 90.0 3 false)
This indeed makes Johan a bit clearer (Figure 3-26).
Figure 3-26. Clearer Johan

The result is nice, but it still seems that there are quite a few extra lines that should not be drawn.

The technique usually used to remove those extra lines is to apply a median-blur or a gaussian-blur before calling the canny function.

Gaussian blur is usually more effective; do not hesitate to go big and increase the size of the blur to at least 13×13 or even 21×21, as shown in the following:

(-> source
    clone
    (cvt-color! COLOR_BGR2GRAY)
    (gaussian-blur! (new-size 13 13) 1 1)
    (canny! 70.0 90.0 3 false))
That code snippet gives a neatly clearer picture (Figure 3-27).
Figure 3-27. Even better Johan

Do you remember the bilateral filter function? If you use it after calling the canny function, it also gives some interesting cartoon shapes, by putting emphasis where more lines come out of the canny effect.

(-> source
    clone
    (cvt-color! COLOR_BGR2GRAY)
    (canny! 70.0 90.0 3 false)
    (bilateral-filter! 10 80 30))
Figure 3-28 shows the bilateral-filter! applied through a similar processing pipeline.
Figure 3-28. Applying a bilateral filter

You would remember that the focus of the bilateral filter is on reinforcing the contours. And indeed, that is what is achieved here.

Note also that the bilateral filter parameters are very sensitive: increasing the second parameter to 120 gives a Picasso-like rendering (Figure 3-29).
Figure 3-29. Johasso

So, play around with parameters and see what works for you. The whole Origami setup is there to give immediate feedback anyway.

Also, canny is not the only option. Let’s see other techniques to achieve cartoon effects.

Bilateral Cartoon

The bilateral filter is actually doing a lot of the cartoon work, so let’s see if we can skip the canny processing and stick with just using the bilateral filter step.

We will create a new function called cartoon-0. That new function will
  • turn the input image to gray

  • apply a very large bilateral filter

  • apply successive smoothing functions

  • then turn back to an RGB mat

A possible implementation is shown in the following:

(defn cartoon-0!
  [buffer]
  (-> buffer
    (cvt-color! COLOR_RGB2GRAY)
    (bilateral-filter! 10 250 30)
    (median-blur! 7)
    (adaptive-threshold! 255 ADAPTIVE_THRESH_MEAN_C THRESH_BINARY 9 3)
    (cvt-color! COLOR_GRAY2BGR)))

The output of cartoon-0! applied to Johan is shown in Figure 3-30.

(-> "resources/chapter03/johan.jpg"
    (imread IMREAD_REDUCED_COLOR_8)
    cartoon-0!
    u/mat-view)
Figure 3-30. No canny cartoon

Here again, the parameters of the bilateral filter pretty much do all the work.

Changing (bilateral-filter! 10 250 30) to (bilateral-filter! 9 9 7) gives a completely different feeling.

(defn cartoon-1!
  [buffer]
  (-> buffer
    (cvt-color! COLOR_RGB2GRAY)
    (bilateral-filter! 9 9 7)
    (median-blur! 7)
    (adaptive-threshold! 255 ADAPTIVE_THRESH_MEAN_C THRESH_BINARY 9 3)
    (cvt-color! COLOR_GRAY2BGR)))
And Johan now looks even more artistic and thoughtful (Figure 3-31).
Figure 3-31. Thoughtful Johan

Grayed with Update Channel

The last technique of this recipe takes us back to the update-channel! function written in the previous recipe.

This new method uses update-channel! with a function that
  • turns the gray channel's value to 0 if the original value is less than 70;

  • turns it to 100 if the original value is at least 70 but less than 180; and

  • turns it to 255 otherwise.
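That three-way split is an ordinary piecewise function; in plain Python it reads:

```python
def posterize(x):
    # Three gray bands: dark values go black, mid values to a flat gray,
    # bright values to white.
    if x < 70:
        return 0
    elif x < 180:
        return 100
    else:
        return 255

print([posterize(v) for v in (50, 120, 220)])  # [0, 100, 255]
```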

This gives the following slightly long but simple pipeline:

(->
  "resources/chapter03/johan.jpg"
  (imread IMREAD_REDUCED_COLOR_8)
  (median-blur! 1)
  (cvt-color! COLOR_BGR2GRAY)
  (update-channel! (fn[x] (cond (< x 70) 0 (< x 180) 100 :else 255)) 0)
  (bitwise-not!)
  (cvt-color! COLOR_GRAY2BGR)
  (u/mat-view))
This is nothing you would not understand by now, but the pipeline is quite a pleasure to write, and its result even more so: it gives more depth to the output than the other techniques used so far (Figure 3-32).
Figure 3-32. In-depth Johan

The output of the pipeline looks great, but the pixels have had quite a bit of processing, so it is hard to tell what’s inside each of them at this stage, and postprocessing after that needs a bit of care.

Say you want to increase the luminosity or change the color of the preceding output; it is usually better to switch again to HSV color space and increase the luminosity before changing anything on the colors, as highlighted in the following:

(-> "resources/chapter03/shinji.jpg"
    (imread IMREAD_REDUCED_COLOR_4)
    (cartoon! 70 180 false)
    (cvt-color! COLOR_BGR2HSV)
    (update-channel! (fn [x] 250) 1)
    (update-channel! (fn [x] 5) 0)
    (cvt-color! COLOR_HSV2BGR)
    (bitwise-not!)
    (flip! 1)
    (u/mat-view))
The final processing pipeline gives us a shining blue Johan (Figure 3-33). The overall color is blue due to channel 0’s value set to 5 in HSV range, and the luminosity set to 250, almost the maximum value.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig33_HTML.jpg
Figure 3-33

Flipped and blue

As a bonus, we also just flipped the image horizontally to end this recipe on a forward-looking picture!

3.3 Creating Pencil Sketches

Problem

You have seen how to do some cartooning for portraits , but would like to give it a more artistic sense by combining front sketching with deep background colors.

Solution

To create backgrounds with impact, you will see how to use pyr-down and pyr-up combined with smoothing methods you have already seen.

To merge the result, we will again be using bitwise-and.

How it works

My hometown is in the French Alps, near the Swiss border, and there is a very nice canal flowing between the houses right in the middle of the old town (Figure 3-34).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig34_HTML.jpg
Figure 3-34

Annecy, France, in the summertime

The goal here is to create a painted-looking version of that picture.

The plan is to proceed in three phases.

A goal without a plan is just a wish.

Antoine de Saint-Exupéry

Phase 1: we completely remove all the contours of the picture by smoothing out the edges and doing loops of decreasing the resolution of the picture. This will be the background picture.

Phase 2: We do the opposite, meaning we focus on the contours, by applying similar techniques to what was done in the cartoon recipe, where we turn the picture to gray, find all the edges, and give them as much depth as possible. This will be the front part.

Phase 3: Finally, we combine the results of phase 1 and phase 2 to get the painting effect that we are looking for.

Background

pyr-down! is probably new to you. It decreases the resolution of an image. Let’s compare the mats before and after applying the change of resolution done by the following snippet.

(def factor 1)
(def work (clone img))
(dotimes [_ factor] (pyr-down! work))

Before:

#object[org.opencv.core.Mat 0x3f133cac "Mat [ 431*431*CV_8UC3...]"]

After:

#object[org.opencv.core.Mat 0x3f133cac "Mat [ 216*216*CV_8UC3...]"]

Basically, the resolution of the mat has been divided by 2, rounded to the pixel. (Yes, I have heard stories of 1/2 pixels before, but beware… those are not true!!)

Using a factor of 4, and thus applying the resolution downgrade four times, we get a mat that is now 27×27 and looks like the mat in Figure 3-35.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig35_HTML.jpg
Figure 3-35

Changed resolution
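As a side note, you can predict these sizes without running the pipeline: pyrDown’s documented default output size is (size + 1) / 2 per dimension, in integer arithmetic. The following plain-Java sketch of just that arithmetic (no OpenCV required) reproduces the numbers above:

```java
// Sketch of the size arithmetic only; OpenCV's pyrDown documents its default
// output size as ((size + 1) / 2) per dimension, halving and rounding up.
public class PyrSize {
    static int pyrDownSize(int size, int factor) {
        for (int i = 0; i < factor; i++) {
            size = (size + 1) / 2; // integer division, as in the OpenCV docs
        }
        return size;
    }

    public static void main(String[] args) {
        System.out.println(pyrDownSize(431, 1)); // 216, matching the mat dump above
        System.out.println(pyrDownSize(431, 4)); // 27: 431 -> 216 -> 108 -> 54 -> 27
    }
}
```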

To create the background effect, we actually need a mat of the same size as the original, so the output has to be resized back to the original size.

The first idea is of course to simply try the usual resize! function:

(resize! work (.size img))
But that results in something not very satisfying to the eye. Figure 3-36 indeed shows quite visible pixelization of the resized mat.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig36_HTML.jpg
Figure 3-36

Hmmm… resizing

Let’s try something else. There is a reverse function of pyr-down, named pyr-up, which doubles the resolution of a mat. To use it effectively, we can apply pyr-up in a loop, iterating the same number of times as was done with pyr-down.

(dotimes [_ factor] (pyr-up! work))
The resulting mat is similar to Figure 3-36, but is much smoother, as shown in Figure 3-37.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig37_HTML.jpg
Figure 3-37

Smooth blurring

The background is finalized by applying a bilateral filter to the mat in between the pyr-down and pyr-up dance.

So:

(dotimes [_ factor] (pyr-down! work))
(bilateral-filter! work 11 11 7)
(dotimes [_ factor] (pyr-up! work))

The output is kept for later, and that’s it for the background; let’s move to the edge-finding part for the foreground.

Foreground and Result

The foreground is going to be mostly a copy-paste exercise of the previous recipe. You can of course create your own variation at this stage; we will use here a cartooning function made of a median-blur and an adaptive-threshold step.

(def edge
  (-> img
    clone
    (resize! (new-size (.cols output) (.rows output)))
    (cvt-color! COLOR_RGB2GRAY)
    (median-blur! 7)
    (adaptive-threshold! 255 ADAPTIVE_THRESH_MEAN_C THRESH_BINARY 9 7)
    (cvt-color! COLOR_GRAY2RGB)))
Using the old town image as input, this time you get a mat showing only the prominent edges, as shown in Figure 3-38.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig38_HTML.jpg
Figure 3-38

Edges everywhere

To finish the exercise, we now combine the two mats using bitwise-and. Since the edge pixels are black (value 0), a bitwise-and operation keeps them black, and so the edges are copied over unchanged onto the target result.

The remaining part of the edges mat is white (value 255), and 255 AND x equals x, so for those pixels bitwise-and takes the value of the other mat: the color of the background mat takes precedence.
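At the byte level, the merge rule is just a bitwise AND of unsigned bytes; this tiny plain-Java check (independent of OpenCV) makes the two cases concrete:

```java
// Byte-level rule behind the bitwise-and merge: a black edge pixel (0)
// forces the output to black, a white edge pixel (255) is "transparent"
// and lets the background value through unchanged.
public class AndDemo {
    static int and(int edge, int background) {
        return edge & background;
    }

    public static void main(String[] args) {
        System.out.println(and(0, 200));   // 0: the black edge wins
        System.out.println(and(255, 200)); // 200: the background color shows
    }
}
```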

(let [result (new-mat) ]
  (bitwise-and work edge result)
  (u/mat-view result))
This gives you the sketching effect of Figure 3-39.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig39_HTML.jpg
Figure 3-39

Sketching like the pros

With the adaptive threshold step, you can tune the way the front sketching looks.

(adaptive-threshold! 255 ADAPTIVE_THRESH_MEAN_C THRESH_BINARY
    edges-thickness edges-number)

We used 9 as edges-thickness and 7 as edges-number in the first sketch; let’s see what happens if we set those two parameters to 5.

This gives more space to the color of the background, by reducing the thickness of the edges (Figure 3-40).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig40_HTML.jpg
Figure 3-40

Thinner edges

It’s now up to you to play and improvise from there!

Summary

Finally, let’s get you equipped with a ready-to-use sketch! function. This is an exact copy of the code that has been used up to now, with places for the most important parameters for this sketching technique:
  • the factors, i.e., the number of loops in the dance, used to turn the resolution down and then up again

  • the parameters of the bilateral filter of the background

  • the parameters of the adaptive threshold of the foreground

The sketch! function is made of smoothing! and edges!. First, let’s use smoothing! to create the background.

(defn smoothing!
  [img factor filter-size filter-value]
  (let [work (clone img) output (new-mat)]
    (dotimes [_ factor] (pyr-down! work))
    (bilateral-filter work output filter-size filter-size filter-value)
    (dotimes [_ factor] (pyr-up! output))
    (resize! output (new-size (.cols img) (.rows img)))))

Then edges! to create the foreground.

(defn edges!
  [img e1 e2 e3]
  (-> img
    clone
    (cvt-color! COLOR_RGB2GRAY)
    (median-blur! e1)
    (adaptive-threshold! 255 ADAPTIVE_THRESH_MEAN_C THRESH_BINARY e2 e3)
    (cvt-color! COLOR_GRAY2RGB)))

Finally, we can use sketch!, the combination of background and foreground.

(defn sketch!
  [img s1 s2 s3 e1 e2 e3]
  (let [output (smoothing! img s1 s2 s3) edge (edges! img e1 e2 e3)]
    (bitwise-and output edge output)
    output))

Calling sketch! is relatively easy. You can try the following snippet on a loaded landscape mat:

  (sketch! img 6 9 7 7 9 11)
And instantly turn the landscape picture of Figure 3-41
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig41_HTML.jpg
Figure 3-41

Trees

into the sketched version of Figure 3-42.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig42_HTML.jpg
Figure 3-42

Sketch landscape

A few others have been put in the samples, but now is indeed the time to take your own pictures and give those functions and parameters a shot.

3.4 Creating a Canvas Effect

Problem

Creating landscape art seems to have no more secrets for you, but you would like to emboss a canvas onto it, to make it more like a painting .

Solution

This short recipe will reuse techniques you have seen, along with two new mat functions: multiply and divide.

With divide, it is possible to create burning and dodging effects of a mat, and we will use those to create the wanted effect.

With multiply, it is possible to combine mats back with a nice depth effect, and so by using a paper-looking background mat, it will be possible to have a special draw on canvas output.

How it works

We will take another picture from the French Alps—I mean why not!—and since we would like to make it look slightly vintage, we will use an image of an old castle.

(def img
  (-> "resources/chapter03/montrottier.jpg"
  (imread IMREAD_REDUCED_COLOR_4)))
Figure 3-43 shows the castle of Montrottier, which you should probably visit when you have the time, or vacation (I do not even know what the second word means anymore).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig43_HTML.jpg
Figure 3-43

Wish upon a star

We first start by applying a bitwise-not!, then a gaussian-blur on a gray clone of the source picture; this is pretty easy to do with Origami pipelines.

We will need a grayed version for later as well, so let’s keep the two mats gray and gaussed separate.

(def gray
  (-> img clone (cvt-color! COLOR_BGR2GRAY)))
(def gaussed
  (-> gray
      clone
      bitwise-not!
      (gaussian-blur! (new-size 21 21) 0.0 0.0)))
Figure 3-44 shows the gaussed mat, which looks like a spooky version of the input image.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig44_HTML.jpg
Figure 3-44

Spooky castle

We will use this gaussed mat as a mask. The magic happens in the function dodge!, which uses the opencv function divide on the original picture, and an inverted version of the gaussed mat.

(defn dodge! [img_ mask]
  (let [output (clone img_)]
    (divide img_ (bitwise-not! (-> mask clone)) output 256.0)
    output))

Hmmm… okay. What does divide do? I mean, you know it divides things, but at the byte level, what is really happening?

Let’s take two matrices, a and b, and call divide on them for an example.

(def a (u/matrix-to-mat [[1 1 1]]))
(def b (u/matrix-to-mat [[0 1 2]]))
(def c (new-mat))
(divide a b c 10.0)
(dump c)

The output of the divide call is

[0 10 5]

which is

[ (a0 / b0) * 10.0, (a1 / b1) * 10.0, (a2 / b2) * 10.0]

which gives

[ 1 / 0 * 10.0, 1 / 1 * 10.0, 1 / 2 * 10.0]

then, given that OpenCV considers that dividing by 0 equals 0:

[0, 10, 5]
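If you want to double-check that arithmetic outside OpenCV, here is a plain-Java sketch of divide’s per-element semantics (scale, division by zero yielding 0, round, then saturate to the 0–255 byte range); it mimics the behavior described above rather than calling OpenCV itself:

```java
// Per-element sketch of divide's semantics for 8-bit mats:
// dst = saturate(round(scale * a / b)), with a zero divisor producing 0.
public class DivideDemo {
    static int[] divide(int[] a, int[] b, double scale) {
        int[] dst = new int[a.length];
        for (int i = 0; i < a.length; i++) {
            if (b[i] == 0) {
                dst[i] = 0; // OpenCV considers that dividing by 0 equals 0
            } else {
                long v = Math.round(scale * a[i] / b[i]);
                dst[i] = (int) Math.min(255, Math.max(0, v)); // saturate to byte range
            }
        }
        return dst;
    }

    public static void main(String[] args) {
        // Same inputs as the Origami example: a = [1 1 1], b = [0 1 2], scale 10.0
        int[] c = divide(new int[]{1, 1, 1}, new int[]{0, 1, 2}, 10.0);
        System.out.println(java.util.Arrays.toString(c)); // [0, 10, 5]
    }
}
```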

Now, let’s call dodge! on the gray mat and the gaussed mat:

(u/mat-view (dodge! gray gaussed))
And see the sharp result of Figure 3-45.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig45_HTML.jpg
Figure 3-45

Sharp pencil

Apply the Canvas

Now that the main picture has been turned to a crayon-styled art form, it would be nice to lay this out on a canvas-looking mat. As presented, this is done using the multiply function from OpenCV.

We want the canvas to look like a very old parchment, and we will use the one from Figure 3-46.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig46_HTML.jpg
Figure 3-46

Old parchment

Now we will create the apply-canvas! function, which takes the foreground sketch and the canvas, and applies the multiply function between them. The scale used for the multiplication is (/ 1 256.0); since these are gray bytes, the bigger the value the whiter, and this scale makes the dark lines stand out quite nicely on the final result.

(defn apply-canvas! [sketch canvas]
  (let [out (new-mat)]
    (resize! canvas (new-size (.cols sketch) (.rows sketch)))
    (multiply
      (-> sketch clone (cvt-color! COLOR_GRAY2RGB)) canvas out (/ 1 256.0))
    out))
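To see why a scale of (/ 1 256.0) works, you can check multiply’s per-element arithmetic in plain Java. This is only a sketch of the semantics for 8-bit values (scale, multiply, round, saturate), not OpenCV itself:

```java
// Per-element sketch of multiply's semantics for 8-bit values:
// dst = saturate(round(scale * a * b)).
public class MultiplyDemo {
    static int multiply(int a, int b, double scale) {
        long v = Math.round(scale * a * b);
        return (int) Math.min(255, Math.max(0, v));
    }

    public static void main(String[] args) {
        double scale = 1 / 256.0;
        // A white sketch pixel on a white canvas stays close to white...
        System.out.println(multiply(255, 255, scale)); // 254
        // ...while a dark sketch line stays dark whatever the canvas value.
        System.out.println(multiply(0, 255, scale));   // 0
    }
}
```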

Whoo-hoo. Almost there; now let’s call this newly created function

(u/mat-view (apply-canvas! sketch canvas))
And enjoy the drawing on the canvas (Figure 3-47).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig47_HTML.jpg
Figure 3-47

Castle on old Parchment

Now is obviously the time for you to go and find/scan your own old papers, to try a few things using this technique ; or why not reuse the cartoon functions from previous recipes to lay on top of the different papers?

3.5 Highlighting Lines and Circles

Problem

This recipe is about finding and highlighting lines, circles, and segments in a loaded mat.

Solution

A bit of preprocessing is usually needed to prepare the image to be analyzed with some canny and smoothing operations .

Once this first preparation step is done, finding circles is done with the opencv function hough-circles.

The version to find lines is called hough-lines, with its sibling hough-lines-p, which uses a probabilistic approach to find better lines.

Finally, we will see how to use a line-segment-detector to draw the found segments.

How it works

Find Lines of a Tennis Court with Hough-Lines

The first part of this tutorial shows how to find lines within an image. We will take the example of a tennis court .

(def tennis (-> "resources/chapter03/tennis_ground.jpg" imread ))
You have probably seen a tennis court before, and this one is not so different from the others (Figure 3-48). If you have never seen a tennis court before, this is a great introduction all the same, but you should probably stop reading and go play a game already.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig48_HTML.jpg
Figure 3-48

Tennis court

Preparing the target for the hough-lines function is done by converting the original tennis court picture to gray, then applying a simple canny transformation.

(def can  
    (-> tennis
        clone
        (cvt-color! COLOR_BGR2GRAY)
        (canny! 50.0 180.0 3 false)))
With the expected result of the lines standing out on a black background, as shown in Figure 3-49.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig49_HTML.jpg
Figure 3-49

Canny tennis court

Lines are collected in a mat in the underlying Java version of OpenCV, so, with no way to avoid it, we also prepare a mat to receive the resulting lines.

The hough-lines function itself is called with a bunch of parameters. The full underlying polar system explanation for the hough transformation can be found on the OpenCV web site:

https://docs.opencv.org/3.3.1/d9/db0/tutorial_hough_lines.html

You don’t really need to read everything just now, but it’s good to realize what can be done and what cannot.

For now, we will just apply the same parameters suggested in the linked tutorial.

(def lines (new-mat))
(hough-lines can lines 1 (/ Math/PI 180) 100)

The resulting mat of lines is made of a list of rows with two values, rho and theta, on each row.

Creating the two points required to draw a line from rho and theta is a bit complicated but is described in the opencv tutorial .
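Before going through the drawing loop, it may help to check that conversion on a simple value in plain Java. The formulas are the same ones used in this recipe: take the point on the line closest to the origin and walk 1000 pixels along the line in both directions. For theta = 0 the line is vertical at x = rho:

```java
// Polar-to-Cartesian conversion for a Hough line (rho, theta):
// from the closest point to the origin (x0, y0), pick two endpoints
// 1000 pixels away on each side of the line.
public class HoughPoints {
    static long[] endpoints(double rho, double theta) {
        double a = Math.cos(theta), b = Math.sin(theta);
        double x0 = a * rho, y0 = b * rho;
        return new long[]{
            Math.round(x0 + 1000 * (-b)), Math.round(y0 + 1000 * a),
            Math.round(x0 - 1000 * (-b)), Math.round(y0 - 1000 * a)};
    }

    public static void main(String[] args) {
        // theta = 0 describes a vertical line at x = rho
        System.out.println(java.util.Arrays.toString(endpoints(100, 0)));
        // prints [100, 1000, 100, -1000]
    }
}
```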

For now, the following snippet does the work for you.

(def result (clone tennis))
(dotimes [ i (.rows lines)]
   (let [ val_ (.get lines i 0)
          rho (nth val_ 0)
          theta (nth val_ 1)
          a (Math/cos theta)
          b (Math/sin theta)
          x0 (* a rho)
          y0 (* b rho)
          pt1 (new-point
              (Math/round (+ x0 (* 1000 (* -1 b))))
              (Math/round (+ y0 (* 1000 a))))
          pt2 (new-point
              (Math/round (- x0 (* 1000 (* -1 b))))
              (Math/round (- y0 (* 1000 a))))       
     ]
  (line result pt1 pt2 color/black 1)))
Drawing the found lines on top of the tennis court mat creates the image in Figure 3-50.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig50_HTML.jpg
Figure 3-50

Hough-lines result

Note that when calling hough-lines, changing the rho parameter from 1 to 2 gives you way more lines, but you may need to filter the lines yourself afterward.

Also, by experience, changing the theta divisor from 180 to 90 gives fewer lines but better results.

Hough-Lines-P

Another variant of the hough-lines function, named hough-lines-p, is an enhanced version with probabilistic mathematics added; it usually gives a better set of lines by performing guesses.

To try hough-lines-p, we will this time take the example of… a soccer field.

(def soccer-field
  (-> "resources/chapter03/soccer-field.jpg"
      (imread IMREAD_REDUCED_COLOR_4)))
(u/mat-view soccer-field)

As per the original hough-lines example, we turn the soccer field to gray and apply a slight gaussian blur to remove possible imperfections in the source image.

(def gray
  (-> soccer-field
      clone
      (cvt-color! COLOR_BGR2GRAY)
      (gaussian-blur! (new-size 1 1) 0)))
The resulting grayed version of the soccer field is shown in Figure 3-51.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig51_HTML.jpg
Figure 3-51

Gray soccer field

Let’s now make a canny version of the court to create the edges.

(def edges (-> gray clone (canny! 100 220)))

Now, we call hough-lines-p. The parameters used are explained inline as comments in the following code snippet. Lines are expected to be collected from the newly created edges mat.

; distance resolution in pixels of the Hough grid
(def rho 1)
; angular resolution in radians of the Hough grid
(def theta  (/ Math/PI 180))
; minimum number of votes (intersections in Hough grid cell)
(def min-intersections 30)
; minimum number of pixels making up a line
(def min-line-length  10)
; maximum gap in pixels between connectable line segments
(def max-line-gap  50)

The parameters are ready; let’s call hough-lines-p, with the result being stored in the lines mat.

(def lines (new-mat))
(hough-lines-p
  edges
  lines
  rho
  theta
  min-intersections
  min-line-length
  max-line-gap)

This time, the lines are slightly easier to draw than with the regular hough-lines function. Each line of the result mat is made of four values, for the two points needed to draw the line.

(def result (clone soccer-field))
(dotimes [ i (.rows lines)]
(let [ val (.get lines i 0)]
  (line result
    (new-point (nth val 0) (nth val 1))
    (new-point (nth val 2) (nth val 3))
    color/black 1)))
The result of drawing the results of hough-lines-p is displayed in Figure 3-52.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig52_HTML.jpg
Figure 3-52

Lines on a soccer field

Finding Pockets on a Pool Table

No more running around on a court; let’s move to… the billiard table!

In a similar way, opencv has a function named hough-circles to look for circle-looking shapes. What’s more, the function is pretty easy to put in action.

This time, let’s try to find the ball pockets of a billiard table. The exercise is slightly difficult because it is easy to wrongly count the regular balls as pockets.

You can’t knock on opportunity’s door and not be ready.

Bruno Mars

Let’s get the pool table ready first.

(def pool
  (->
    "resources/chapter03/pooltable.jpg"
    (imread IMREAD_REDUCED_COLOR_2)))

With hough-circles, it seems you can actually get better results by bypassing the canny step in the preprocessing .

The following snippet now shows where to put values for the min and max radius of the circles to look for in the source mat.

(def gray (-> pool clone (cvt-color! COLOR_BGR2GRAY)))
(def minRadius 13)
(def maxRadius 18)
(def circles (new-mat))
(hough-circles gray circles CV_HOUGH_GRADIENT 1
  minRadius 120 10 minRadius maxRadius)

Here again, circles are collected in a mat, with each line containing the x and y position of the center of the circle and its radius.

Finally, we simply draw circles on the result mat with the opencv circle function.

(def output (clone pool))
(dotimes [i (.cols circles)]
  (let [_circle (.get circles 0 i)
        x (nth _circle 0)
        y (nth _circle 1)
        r (nth _circle 2)
        p (new-point x y)]
    (circle output p (int r) color/white 3)))
All the pockets are now highlighted in white in Figure 3-53.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig53_HTML.jpg
Figure 3-53

Pockets of the pool table in white!

Note that if you put the minRadius value too low, you quickly get false positives with the regular balls, as shown in Figure 3-54.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig54_HTML.jpg
Figure 3-54

False pockets

So defining precisely what is searched for is the recipe for success in most of your OpenCV endeavors (and maybe other ones too…).

And so, to avoid false positives here, it is also probably a good idea to filter on colors before accepting and drawing the lines. Let’s see how to do this next.

Finding Circles

In this short example, we will be looking for red circles in a mat where circles of multiple colors can be found.

(def bgr-image
  (-> "resources/detect/circles.jpg" imread (u/resize-by 0.5) ))
The bgr-image is shown in Figure 3-55.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig55_HTML.jpg
Figure 3-55

Colorful circles

You may not see it if you are reading straight from the black-and-white version of the book, but we will be focusing on the large bottom left circle, which is of a vivid red.

If you remember lessons from the previous recipes, you already know we need to change the color space to HSV and then filter on a hue range between 0 and 10.

The following snippet shows how to do this along with some extra blurring to ease processing later on.

(def ogr-image
  (-> bgr-image
   (clone)
   (median-blur! 3)
   (cvt-color! COLOR_BGR2HSV)
   (in-range! (new-scalar 0 100 100) (new-scalar 10 255 255))
   (gaussian-blur! (new-size 9 9) 2 2)))
All the circles we are not looking for have disappeared from the mat resulting from the small pipeline, and the only circle we are looking for is now standing out nicely (Figure 3-56).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig56_HTML.jpg
Figure 3-56

Red circle showing in white

Now we can apply the same hough-circles call as was seen just previously; again, the circle will be collected in the circles mat, which will be a 1×1 mat with three channels.

(def circles (new-mat))
(hough-circles ogr-image circles CV_HOUGH_GRADIENT 1 (/ (.rows bgr-image) 8) 100 20 0 0)
(dotimes [i (.cols circles)]
  (let [_circle (.get circles 0 i)
        x (nth _circle 0)
        y (nth _circle 1)
        r (nth _circle 2)
        p (new-point x y)]
    (circle bgr-image p (int r) rgb/greenyellow 5)))
The result of drawing the circle with a border is shown in Figure 3-57. The red circle has been highlighted with a green-yellow color and a thickness of 5.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig57_HTML.jpg
Figure 3-57

Highlighted red circle

Using Draw Segment

Sometimes, the easiest may be to simply use a technique using the provided segment detector. It is less origami friendly, since the methods used are straight Java method calls (so prefixed with a dot “.”), but the snippet is rather self-contained.

Let’s try that on the previously seen soccer field. We’ll load it straight to gray this time and see how the segment detector behaves.

(def soccer-field
    (-> "resources/chapter03/soccer-field.jpg"
    (imread IMREAD_REDUCED_GRAYSCALE_4)))
(def det (create-line-segment-detector))
(def lines (new-mat))
(def result (clone soccer-field))

We call detect on the line-segment-detector, using Clojure Java Interop for now.

(.detect det soccer-field lines)

At this stage, the lines mat metadata is 161*1*CV_32FC4, meaning 161 rows, each made of 1 column and 4 channels per dot, meaning 2 points per value.

The detector has a helpful drawSegments function, which we can call to get the resulting mat .

(.drawSegments det result lines)
The soccer field mat is now showing in Figure 3-58, this time with all the lines highlighted, including circles and semicircles.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig58_HTML.jpg
Figure 3-58

Première mi-temps (first period)

3.6 Finding and Drawing Contours and Bounding Boxes

Problem

Since identifying and counting shapes are at the forefront of OpenCV usage, you would probably like to know how to use contour-finding techniques in Origami .

Solution

Apart from the traditional cleanup and image preparation, this recipe will introduce the find-contours function to fill in a list of contours.

Once the contours are found, we need to apply a simple filter to remove extremely large contours, such as one covering the whole picture, as well as contours that are too small to be useful.

Once filtering is done, we can draw the contours using either handmade circles and rectangles or the provided function draw-contours.

How it works

Sony Headphones

They are not so new anymore, but I love my Sony headphones . I simply bring them everywhere, and you can feed your narcissism and get all the attention you need by simply wearing them. They also get you the best sound, whether on the train or on the plane…

Let’s have a quick game of finding my headphones’ contours.

(def headphones
  (-> "resources/chapter03/sonyheadphones.jpg"
        (imread IMREAD_REDUCED_COLOR_4)))

My headphones still have a cable, because I like the sound better still, whatever some big companies are saying.

Anyway, the headphones are shown in Figure 3-59.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig59_HTML.jpg
Figure 3-59

Sony headphones with a cable

First, we need to prepare the headset to be easier to analyze. To do this, we create a mask of the interesting part, the headphones themselves.

(def mask
  (-> headphones
      (clone)
      (cvt-color! COLOR_BGR2GRAY)
      (threshold! 250 255 THRESH_BINARY_INV)
      (median-blur! 7)))
The inverted thresh binary output is shown in Figure 3-60.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig60_HTML.jpg
Figure 3-60

Masked headphones

Then with the use of the mask, we create a masked-input mat that will be used to ease the finding contours step.

(def masked-input
  (clone headphones))
(set-to masked-input (new-scalar 0 0 0) mask)
(set-to masked-input (new-scalar 255 255 255) (bitwise-not! mask))

Have you noticed? Yes, there was an easier way to create the input, by simply creating a noninverted mask in the first place, but this second method gives more control for preparing the input mat.

So here we basically proceed in two steps. First, set all the pixels of the original mat to black where the mask is nonzero. Next, set all the other values to white, using the inverted version of the mask.

The prepared result mat is in Figure 3-61.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig61_HTML.jpg
Figure 3-61

Preparation of the input mat

Now that the mat that will be used to find contours is ready, you can almost directly call find-contours on it.

find-contours takes a few obvious parameters, along with two final ones that are a bit more obscure.

RETR_LIST is the simplest one and returns all the contours as a flat list, while RETR_TREE is the most often used and means that the contours are hierarchically ordered.

CHAIN_APPROX_NONE means all the points of the found contours are stored. Usually though, when drawing those contours, you do not need all of the points defining them. In case you do not need all of the points, you can use CHAIN_APPROX_SIMPLE, which reduces the number of points defining the contours.

It eventually depends on how you handle the contours afterward. But for now, let’s keep all the points!

(def contours
  (new-arraylist))
(find-contours
  masked-input
  contours
  (new-mat) ; mask
  RETR_TREE
  CHAIN_APPROX_NONE)

Alright, now let’s draw rectangles to highlight each found contour. We loop over the contour list, and for each contour we use the bounding-rect function to get a rectangle that wraps the contour itself.

The rectangle retrieved from the bounding-rect call can be used almost as is, and we will draw our first contours with it.

(def exercise-1 (clone headphones))
(doseq [c contours]
  (let [ rect (bounding-rect c)]
   (rectangle
     exercise-1
     (new-point (.x rect) (.y rect))
     (new-point (+ (.width rect) (.x rect)) (+ (.y rect) (.height rect)))
     (color/->scalar "#ccffcc")
     2)))
Contours are now showing in Figure 3-62.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig62_HTML.jpg
Figure 3-62

Headphone contours

Right. Not bad. It is pretty obvious from the picture that the big rectangle spreading over the whole picture is not very useful. That’s why we need a bit of filtering.

Let’s filter the contours, by making sure they are
  • not too small, meaning that the area they should cover is at least 10,000, which is a surface of 125×80,

  • nor too big, meaning that the height shouldn’t cover the whole picture.

That filtering is now done in the following snippet.

(def interesting-contours
  (filter
    #(and
       (> (contour-area %) 10000 )
       (< (.height (bounding-rect %)) (- (.height headphones) 10)))
    contours))

And so, drawing only the interesting-contours this time gives something quite accurate.

(def exercise-1 (clone headphones))
(doseq [c interesting-contours]
    ...)
Figure 3-63 this time shows only useful contours.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig63_HTML.jpg
Figure 3-63

Headphones’ interesting contours

Drawing circles instead of rectangles should not be too hard, so here we go with the same loop on interesting-contours , but this time, drawing a circle based on the bounding-rect.

(def exercise-2 (clone headphones))
(doseq [c interesting-contours]
 (let [ rect (bounding-rect c) center (u/center-of-rect rect) ]
   (circle exercise-2
           center
           (u/distance-of-two-points center (.tl rect))
           (color/->scalar "#ccffcc")
            2)))
The resulting mat, exercise-2, is shown in Figure 3-64.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig64_HTML.jpg
Figure 3-64

Circling on it

Finally, while it’s harder to use for detection processing, you can also use the opencv function draw-contours to nicely draw the free shape of the contour.

We will still be looping on the interesting-contours list. Note that the parameters may feel a bit strange, since draw-contours uses an index along with the list instead of the contour itself, so be careful when using draw-contours.

(def exercise-3 (clone headphones))
(dotimes [ci (.size interesting-contours)]
 (draw-contours
   exercise-3
   interesting-contours
   ci
   (color/->scalar "#cc66cc")
   3))
And finally, the resulting mat can be found in Figure 3-65.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig65_HTML.jpg
Figure 3-65

Headset and pink contours

Things are not always so easy, so let’s take another example up in the sky!

Up in the Sky

This second example takes hot-air balloons in the sky, and wants to draw contours on them.

The picture of hot-air balloons in Figure 3-66 seems very innocent and peaceful.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig66_HTML.jpg
Figure 3-66

Hot-air balloons

Unfortunately, using the same technique as previously shown to prepare the picture does not produce a very sexy result.

(def wrong-mask
  (-> kikyu
      clone
      (cvt-color! COLOR_BGR2GRAY)
      (threshold! 250 255 THRESH_BINARY)
      (median-blur! 7)))
It’s pretty pitch-black in Figure 3-67.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig67_HTML.gif
Figure 3-67

Anybody up here?

So, let’s try another technique. What would you do to get a better mask?

Yes—why not? Let’s filter all this blue and create a blurred mask from it. This should give you the following snippet.

(def mask
  (-> kikyu
      (clone)
      (cvt-color! COLOR_RGB2HSV)
      (in-range! (new-scalar 10 30 30) (new-scalar 30 255 255))
      (median-blur! 7)))
Nice! Figure 3-68 shows that this actually worked out pretty neatly.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig68_HTML.jpg
Figure 3-68

Useful mask

We will now use the complement version of the mask to find the contours.

(def work (-> mask bitwise-not!))

Using the find-contours function has no more secrets to hide from you. Or maybe it does? What’s the new-point doing in the parameter list? Don’t worry; it is just an offset value, and here we specify no offset, so 0 0.

(def contours (new-arraylist))
(find-contours work contours (new-mat) RETR_LIST CHAIN_APPROX_SIMPLE (new-point 0 0))
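To make the offset parameter concrete, here is a tiny language-neutral sketch in plain Python (hypothetical points): the offset is simply added to every point of every contour that gets returned, so an offset of 0 0 leaves the coordinates untouched.

```python
# Hypothetical sketch of how a contour offset works: each (x, y)
# point is shifted by (dx, dy); with (0, 0) nothing changes.

def apply_offset(contour, offset):
    """Shift every (x, y) point of a contour by the given offset."""
    dx, dy = offset
    return [(x + dx, y + dy) for (x, y) in contour]

contour = [(10, 20), (30, 20), (30, 40)]
print(apply_offset(contour, (0, 0)))   # unchanged: [(10, 20), (30, 20), (30, 40)]
print(apply_offset(contour, (5, -5)))  # shifted:   [(15, 15), (35, 15), (35, 35)]
```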

Contours are in! Let’s filter on the size and draw circles around them. This is simply a rehash of the previous example.

(def output (clone kikyu))
(doseq [c contours]
  (if (> (contour-area c) 50)
    (let [rect (bounding-rect c)]
      (if (and (> (.height rect) 40) (> (.width rect) 60))
        (circle
          output
          (new-point (+ (/ (.width rect) 2) (.x rect))
                     (+ (.y rect) (/ (.height rect) 2)))
          100
          rgb/tan
          5)))))
Nice. You are getting pretty good at those things. Look at and enjoy the result of Figure 3-69.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig69_HTML.jpg
Figure 3-69

Circles over the hot-air balloons

Next, let’s filter ahead of the drawing, and let’s use the bounding-rect again to draw rectangles.

(def my-contours
  (filter
    #(and
       (> (contour-area %) 50)
       (> (.height (bounding-rect %)) 40)
       (> (.width (bounding-rect %)) 60))
    contours))

And yes indeed, if you checked its content, my-contours has only three elements.

(doseq [c my-contours]
  (let [ rect (bounding-rect c)]
   (rectangle
     output
     (new-point (.x rect) (.y rect))
     (new-point (+ (.width rect) (.x rect)) (+ (.y rect) (.height rect)))
     rgb/tan
     5)))
Now drawing those rectangles results in Figure 3-70.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig70_HTML.jpg
Figure 3-70

Rectangles over hot-air balloons

3.7 More on Contours: Playing with Shapes

Problem

Following on the previous recipe, you would like to see what’s returned by the function find-contours. Drawing contours with all the dots is nice, but what if you want to highlight different shapes in different colors?

Also, what if the shapes are hand-drawn, or not showing properly in the source mat?

Solution

We still are going to use find-contours and draw-contours as we have done up to now, but we are going to do some preprocessing on each contour before drawing them to find out how many sides they have.

approx-poly-dp is the function that will be used to approximate shapes, reducing the number of points and keeping only the most important dots of polygonal shapes. We will create a small function, approx, to turn shapes into polygons and count the number of sides they have.

We will also look at fill-convex-poly to see how we can draw the approximated contours of handwritten shapes.

Lastly, another opencv function named polylines will be used to draw only wireframes of the found contours.

How it works

Highlight Contours

We will use a picture with many shapes for the first part of this exercise, like the one in Figure 3-71.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig71_HTML.jpg
Figure 3-71

Shapes

The goal here is to draw the contours of each shape with different colors depending on the number of sides of each shape.

The shapes mat is loaded simply with the following snippet:

(def shapes
  (-> "resources/morph/shapes3.jpg" (imread IMREAD_REDUCED_COLOR_2)))

As was done in the previous recipe, we first prepare a thresh mat from the input by converting a clone of the input to gray, then applying a simple threshold to highlight the shapes.

(def thresh
  (-> shapes
      clone
      (cvt-color! COLOR_BGR2GRAY)
      (threshold! 210 240 1)))
(def contours (new-arraylist))
Looking closely at Figure 3-72, we can see that the thresh mat is indeed nicely highlighting the shapes.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig72_HTML.jpg
Figure 3-72

Functional thresh

Ok, the thresh is ready, so you can now call find-contours on it.

(find-contours thresh contours (new-mat) RETR_LIST CHAIN_APPROX_SIMPLE)

To draw the contours, we first write a dumb function that loops on the contours list and draws each one in magenta.

(defn draw-contours! [img contours]
  (dotimes [i (.size contours)]
    (draw-contours img contours i rgb/magenta-2 3))
  img)
(-> shapes
    (draw-contours! contours)
    (u/mat-view))
(-> shapes
      (draw-contours! contours)
      (u/mat-view))
The function works as expected, and the result is shown in Figure 3-73.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig73_HTML.jpg
Figure 3-73

Magenta contours

But, as we have said, we would like to use a different color for each contour, so let’s write a function that selects a color depending on the sides of the contour.

(defn which-color[c]
  (condp = (how-many-sides c)
   1 rgb/pink
   2 rgb/magenta-
   3 rgb/green
   4 rgb/blue
   5 rgb/yellow-1-
   6 rgb/cyan-2
   rgb/orange))

Unfortunately, even with CHAIN_APPROX_SIMPLE passed as parameter to find-contours, the number of points for each shape is way too high to make any sense.

8, 70, 132, 137...

So, let’s work on reducing the number of points by converting the shapes to approximations.

Two functions from opencv are used: arc-length and approx-poly-dp. The factor 0.02 is the default proposed by opencv; we will see the impact of different values slightly later in this recipe.

(defn approx [c]
  (let [m2f (new-matofpoint2f (.toArray c))
        len (arc-length m2f true)
        ret (new-matofpoint2f)]
    (approx-poly-dp m2f ret (* 0.02 len) true)
    ret))
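Under the hood, approx-poly-dp implements the Ramer-Douglas-Peucker algorithm. For intuition only, here is a minimal, library-free sketch of it in plain Python (toy points, simplified recursion): any point closer than epsilon to the line joining the endpoints of its segment is dropped, so only the real corners survive.

```python
import math

# Minimal Ramer-Douglas-Peucker sketch (the idea behind approx-poly-dp):
# keep the point farthest from the chord and recurse; points closer
# than epsilon to the chord are dropped.

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    if (ax, ay) == (bx, by):
        return math.hypot(px - ax, py - ay)
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    return num / math.hypot(bx - ax, by - ay)

def rdp(points, epsilon):
    """Recursively simplify a polyline, keeping points farther than epsilon."""
    if len(points) < 3:
        return points
    dists = [point_line_distance(p, points[0], points[-1])
             for p in points[1:-1]]
    idx = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[idx - 1] > epsilon:
        left = rdp(points[:idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]

# A noisy closed "triangle" collapses to its three corners.
noisy = [(0, 0), (2, 0.1), (4, 0), (5, 2), (6, 4), (3, 2.1), (0, 0)]
print(rdp(noisy, 0.5))  # [(0, 0), (4, 0), (6, 4), (0, 0)]
```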

Using this new approx function, we can now count the number of sides by counting the number of points of the approximation.

The following is the how-many-sides function that simply does that.

(defn how-many-sides [c]
  (.size (.toList (approx c))))

Everything is in place; let’s rewrite the dumb draw-contours! function into something slightly more evolved using which-color .

(defn draw-contours! [img contours]
  (dotimes [i (.size contours)]
    (let [c (.get contours i)]
      (draw-contours img contours i (which-color c) 3)))
  img)
And now calling the updated function properly highlights the polygons, counting the number of sides on an approximation of each of the found shapes (Figure 3-74).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig74_HTML.jpg
Figure 3-74

Different shapes, different colors

Note how the circle still goes slightly overboard, with too many sides, but that was to be expected.

Hand-Drawn Shapes

But perhaps you were going to say that the shapes were showing nicely already, so you still have some doubts about whether the approximation is really useful. So, let's head to a beautiful piece of hand-drawn art that was prepared just for this example.

(def shapes2
  (-> "resources/chapter03/hand_shapes.jpg"
      (imread IMREAD_REDUCED_COLOR_2)))
Figure 3-75 shows the newly loaded shapes.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig75_HTML.jpg
Figure 3-75

Piece of art

First, let’s call find-contours and draw the shapes defined by them.

Reusing the same draw-contours! function and drawing over the art itself gives Figure 3-76.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig76_HTML.jpg
Figure 3-76

Contours over art

Now this time, let’s try something different and use the function fill-convex-poly from the core opencv package.

It’s not very different from draw-contours, and we indeed just loop on the list and use fill-convex-poly on each of the contours .

(def drawing (u/mat-from shapes2))
(set-to drawing rgb/white)
(let [contours (new-arraylist)]
  (find-contours thresh contours (new-mat) RETR_LIST CHAIN_APPROX_SIMPLE)
  (doseq [c contours]
    (fill-convex-poly drawing c rgb/blue-3- LINE_4 1)))
And so, we get the four shapes turned to blue (Figure 3-77).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig77_HTML.jpg
Figure 3-77

Piece of art turned to blue

As we can see, the contours and shapes are found and can be drawn.

Another way to draw the contours is to use the function polylines. Luckily, polylines hides the loop over each element of the contours, and you can just pass in the contour list as is.

(set-to drawing rgb/white)
(let[ contours (new-arraylist)]
    (find-contours
            thresh
            contours
            (new-mat)
            RETR_LIST
            CHAIN_APPROX_SIMPLE)
    (polylines drawing contours true rgb/magenta-2))
(-> drawing clone (u/resize-by 0.5) u/mat-view)
And this time, we nicely get the wireframe only of the contours (Figure 3-78).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig78_HTML.jpg
Figure 3-78

Wireframe of art

Alright, but again those shapes for now all have too many points.

Let’s again use the approx function that was created, and enhance it so we can specify the factor used by approx-poly-dp.

(defn approx_
  ([c] (approx_ c 0.02))
  ([c factor]
  (let[m2f (new-matofpoint2f (.toArray c))
       len (arc-length m2f true)
       ret (new-matofpoint2f)]
    (approx-poly-dp m2f ret (* factor len) true)
    (new-matofpoint (.toArray ret)))))

A higher factor means we force a greater reduction of the number of points. So, to that effect, let's increase the usual value of 0.02 to 0.03.

(set-to drawing rgb/white)
(let[ contours (new-arraylist)]
    (find-contours thresh contours (new-mat) RETR_LIST CHAIN_APPROX_SIMPLE)
  (doseq [c contours]
   (fill-convex-poly drawing
                     (approx_ c 0.03)
                     (which-color c) LINE_AA 1)))
The shapes have been greatly simplified, and the number of sides has quite diminished: the shapes are now easier to identify (Figure 3-79).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig79_HTML.jpg
Figure 3-79

Art with simpler shapes

3.8 Moving Shapes

Problem

This recipe is based on a problem found on Stack Overflow.

https://stackoverflow.com/questions/32590277/move-area-of-an-image-to-the-center-using-opencv

The problem was “Move area of an image to the center,” with the base picture shown in Figure 3-80.

The goal is to move the yellow shape and the black mark inside to the center of the mat.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig80_HTML.jpg
Figure 3-80

Moving shapes

Solution

I like this recipe quite a lot, because it brings in a lot of origami functions working together toward one goal, which is also the main theme of this chapter.

The plan to achieve our goal is as follows:
  • First, add borders to the original picture to see the boundaries

  • Switch to the HSV color space

  • Create a mask by selecting only the color in-range for yellow

  • Create a submat in the original picture from the bounding rect of the preceding mask

  • Create the target result mat, of the same size as the original

  • Create a submat in the target mat, in which to place the content. That submat must be of the same size, and it will be located in the center.

  • Set the rest of the target mat to any color …

  • We’re done!

Let’s get started.

How it works

Alright, so the first step was to highlight the border of the mat, because we could not really see how far it was extending.

We will start by loading the picture and adding borders at the same time.

(def img
  (-> "resources/morph/cjy6M.jpg"
      (imread IMREAD_REDUCED_COLOR_2)
      (copy-make-border! 1 1 1 1 BORDER_CONSTANT (->scalar "#aabbcc"))))
The bordered input with the rounded yellow mark is shown in Figure 3-81.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig81_HTML.jpg
Figure 3-81

Yellow mark and borders

We then switch to the hsv color space and create a mask on the yellow mark, and this is where Origami pipelines make it so much easier to pipe the functions one after the other.

(def mask-on-yellow
  (->
    img
    (clone)
    (cvt-color! COLOR_BGR2HSV)
    (in-range! (new-scalar 20 100 100) (new-scalar 30 255 255))))
Our yellow mask is ready (Figure 3-82).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig82_HTML.jpg
Figure 3-82

Mask on yellow mark

Next is to find the contours in the newly created mask mat. Note here the usage of RETR_EXTERNAL, meaning we are only interested in external contours, and so the lines inside the yellow mark will not be included in the returned contour list.

(def contours (new-arraylist))
(find-contours mask-on-yellow contours (new-mat) RETR_EXTERNAL CHAIN_APPROX_SIMPLE)

Let’s now create an item mat, a submat of the original picture, where the rectangle defining it is made from the bounding rect of the contours.

(def background-color  (->scalar "#000000"))
; mask type CV_8UC1 is important !!
(def mask (new-mat (rows img) (cols img) CV_8UC1 background-color))
(def box
  (bounding-rect (first contours)))
(def item
  (submat img box))
The item submat is shown in Figure 3-83.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig83_HTML.jpg
Figure 3-83

Submat made of the bounding rect of the contour

We now create a completely new mat, of the same size as the item submat, and copy into it the content of the segmented item. The background color has to be the same as the background color of the result mat.

(def segmented-item
  (new-mat (rows item) (cols item) CV_8UC3 background-color))
(copy-to item segmented-item (submat mask box) )
The newly computed segmented item is shown in Figure 3-84.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig84_HTML.jpg
Figure 3-84

Segmented item

Now let’s find the location of the rect that will be the target of the copy. We want the item to be moved to the center, and the rect should be of the same size as the original small box mat.

(def center
  (new-point (/ (.cols img ) 2 ) (/ (.rows img) 2)))
(def center-box
  (new-rect
    (- (.-x center ) (/ (.-width box) 2))
    (- (.-y center ) (/ (.-height box) 2))
    (.-width box)
    (.-height box)))
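The arithmetic behind center-box is easy to double-check with a quick sketch (plain Python, made-up dimensions): the top-left corner of the centered box is the image center minus half the box size.

```python
# Sketch of the center-box arithmetic with hypothetical sizes:
# top-left = image center minus half of the box dimensions.

def center_box(img_w, img_h, box_w, box_h):
    """Return (x, y, w, h) of a box of size (box_w, box_h) centered in the image."""
    cx, cy = img_w // 2, img_h // 2
    return (cx - box_w // 2, cy - box_h // 2, box_w, box_h)

# A 100x60 box centered in a 640x480 image:
print(center_box(640, 480, 100, 60))  # (270, 210, 100, 60)
```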

Alright, everything is in place; now we create the result mat and copy the content of the segmented item, via the submat, at the previously computed centered location.

(def result (new-mat (rows img) (cols img) CV_8UC3 background-color))
(def final (submat result center-box))
(copy-to segmented-item final (new-mat))

And that’s it.

The yellow shape has been moved to the center of a new mat. We made sure the white color of the original mat was not copied over, by specifically using a black background for the final result mat (Figure 3-85).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig85_HTML.jpg
Figure 3-85

Victory

3.9 Looking at Trees

Problem

This is another recipe based on a Stack Overflow question. The interest this time is a tree plantation: before counting the trees, we want to be able to highlight them in an aerial picture.

The referenced question is here:

https://stackoverflow.com/questions/31310307/best-way-to-segment-a-tree-in-plantation-aerial-image-using-opencv

Solution

Recognizing the trees will be done with a call to in-range as usual. But the results, as we will see, will still be connected to each other, making it quite hard to actually count anything.

We will introduce the usage of morphology-ex! to erode and then open the created mask, thus producing a better preprocessed mat, ready for counting.

How it works

We will use a picture of a hazy morning forest to work on (Figure 3-86).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig86_HTML.jpg
Figure 3-86

Hazy trees

Eventually, you would want to count the trees, but right now it is even difficult to see them with human eyes. (Any androids around?)

Let’s start by creating a mask on the green of the trees.

(def in-range-pict
  (-> trees
      clone
      (in-range! (new-scalar 100 80 100) (new-scalar 120 255 255))
      (bitwise-not!)))
We get a mask of dots … as shown in Figure 3-87.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig87_HTML.jpg
Figure 3-87

Black and white

The trick of this recipe comes here. We will apply a MORPH_ERODE followed by a MORPH_OPEN on the in-range-pict mat. This will have the effect of clearing up the forest and giving each tree its own space.

Morphing is done by passing, as a parameter, a kernel matrix created from a small ellipse.

(def elem
  (get-structuring-element MORPH_ELLIPSE (new-size 3 3)))

If you call dump on elem, you will find its internal representation.

[0 1 0]
[1 1 1]
[0 1 0]
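To get a feel for why eroding with this cross-shaped element shrinks each blob, here is a library-free sketch of binary erosion in plain Python (toy 5×5 grid): a pixel survives only if it and all the neighbors under the kernel's ones are set.

```python
# Minimal sketch of binary erosion with the 3x3 cross element
# [0 1 0; 1 1 1; 0 1 0]: a pixel survives only if itself and its
# 4-connected neighbors are all 1 (out-of-bounds counts as 0).

CROSS = [(-1, 0), (0, -1), (0, 0), (0, 1), (1, 0)]

def erode(grid):
    h, w = len(grid), len(grid[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = int(all(
                0 <= y + dy < h and 0 <= x + dx < w and grid[y + dy][x + dx]
                for dy, dx in CROSS))
    return out

# Two blobs joined by a thin diagonal bridge shrink to their cores.
grid = [
    [1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
]
for row in erode(grid):
    print(row)
# [0, 0, 0, 0, 0]
# [0, 1, 0, 0, 0]
# [0, 0, 1, 0, 0]
# [0, 0, 0, 1, 0]
# [0, 0, 0, 0, 0]
```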

We then use this kernel matrix by passing it to morphology-ex!.

(morphology-ex! in-range-pict MORPH_ERODE elem (new-point -1 -1) 1)
(morphology-ex! in-range-pict MORPH_OPEN elem)
This has the desired effect of reducing the size of each tree dot, thus reducing the overlap between the trees (Figure 3-88).
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig88_HTML.jpg
Figure 3-88

Trees not overlapping after morph

To finish, we just apply a simple coloring on the original mat to highlight the position of the trees for the human eye. (Still no androids around?)

(def mask
  (->
    in-range-pict
    clone
    (in-range! (new-scalar 0 255 255) (new-scalar 0 0 0))))
(def target
  (new-mat (.size trees) CV_8UC3))
(set-to target rgb/greenyellow)
(copy-to trees target mask)

This could be great to do in real time over a video stream.

You also already know what exercise awaits you next. Count the number of trees in the forest by using a quick call to find-contours …

This is of course left as a free exercise to the reader!

3.10 Detecting Blur

Problem

You have tons of pictures to sort, and you would like to have an automated process to just trash the ones that are blurred.

Solution

The solution is inspired by the pyimagesearch web site entry http://pyimagesearch.com/2015/09/07/blur-detection-with-opencv/ , which itself points at the variance-of-the-Laplacian method from the paper by Pech-Pacheco et al., "Diatom autofocusing in brightfield microscopy: A comparative study."

It highlights cool ways of quickly putting OpenCV, and here Origami, into action for something useful.

Basically, you need to apply a Laplacian filter on the one-channel version of your image. Then, you compute the standard deviation of the result and check whether it is below a given threshold.

The filter itself is applied with filter-2-d!, while the deviation is computed with mean-std-dev.

How it works

The Laplacian matrix/kernel to be used for the filter puts emphasis on the center pixel and reduces emphasis on the left/right top/bottom ones.

This is the Laplacian kernel that we are going to use.

(def laplacian-kernel
  (u/matrix-to-mat
  [ [ 0 -1  0]
    [-1  4 -1]
    [ 0 -1  0]
   ]))

Let’s apply this kernel with filter-2-d!, followed by a call to mean-std-dev to compute the median and the deviation.

(filter-2-d! img -1 laplacian-kernel)
(def std (new-matofdouble))
(def median (new-matofdouble))
(mean-std-dev img median std)

When processing a picture, you can view the results of the averages with dump, since they are matrices. This is shown in the following:

(dump median)
; [19.60282552083333]
(dump std)
; [45.26957788759024]

Finally, the value used to detect blur will be the deviation raised to the power of 2, that is, the variance.

(Math/pow (first (.get std 0 0)) 2)

This value will then be compared to 50: lower than 50 means the image is blurred; greater than 50 means the image is showing as not blurred.

Let’s create an is-image-blurred? function made of all the preceding steps:

(defn std-laplacian [img]
  (let [ std (new-matofdouble)]
    (filter-2-d! img -1 laplacian-kernel)
    (mean-std-dev img (new-matofdouble) std)
    (Math/pow (first (.get std 0 0)) 2)))
(defn is-image-blurred?[img]
  (< (std-laplacian (clone img)) 50))
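For intuition, the whole check can also be sketched library-free in plain Python (toy 4×4 images and a hypothetical function name; the 50 threshold is the one from the text): convolve with the Laplacian kernel, take the standard deviation of the response, square it, and compare.

```python
import statistics

# Library-free sketch of the blur check: convolve a tiny grayscale
# image with the 3x3 Laplacian kernel, then compare the squared
# standard deviation (the variance) of the response to a threshold.

LAPLACIAN = [[0, -1, 0],
             [-1, 4, -1],
             [0, -1, 0]]

def laplacian_variance(img):
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):          # skip the 1-pixel border
        for x in range(1, w - 1):
            acc = sum(LAPLACIAN[ky][kx] * img[y + ky - 1][x + kx - 1]
                      for ky in range(3) for kx in range(3))
            responses.append(acc)
    return statistics.pstdev(responses) ** 2

sharp = [[0, 255, 0, 255],
         [255, 0, 255, 0],
         [0, 255, 0, 255],
         [255, 0, 255, 0]]             # checkerboard: lots of edges
flat = [[128] * 4 for _ in range(4)]   # uniform: no edges at all

print(laplacian_variance(flat))        # 0.0 -> blurred
print(laplacian_variance(sharp) > 50)  # True -> sharp
```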

Now let’s apply that function to a few pictures.

(-> "resources/chapter03/cat-bg-blurred.jpg"
    (imread IMREAD_REDUCED_GRAYSCALE_4)
    (is-image-blurred?))
And … our first test passes! The cat of Figure 3-89 indeed gives a deserved blurred result.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig89_HTML.jpg
Figure 3-89

Blurred cat

And what about one of the most beautiful cats on this planet? That worked too. The cat from Figure 3-90 is recognized as sharp!
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig90_HTML.jpg
Figure 3-90

Sharp but sleepy cat

Now, probably time to go and sort all your beachside summer pictures…

But yes, of course, yes, agreed, not all blurred pictures are to be trashed.

3.11 Making Photomosaics

Problem

In a project lab, now maybe 20 years ago, I saw a gigantic Star Wars poster, made of multiple small scenes of the first movie, A New Hope.

The poster was huge, and when seen from a bit far away, it was actually a picture of Darth Vader offering his hand to Luke.

The poster left a great impression, and I always wanted to do one of my own. Recently, I also learned there was a name for this type of created picture: photomosaic .

Solution

The concept is way simpler than what I originally thought. Basically, the hardest part is to download the pictures.

You mainly need two inputs: a final picture, and a set of pictures to use as subs.

The work consists of computing the mean average of the RGB channels for each picture, and creating an index from it.

Once this first preparation step is done, create a grid over the picture to be replicated, and then for each cell of the grid, compute the norm between the two averages: the one from the cell, and the one from each file of the index.

Finally, replace the sub of the big picture with the picture from the index at the lowest distance, meaning the picture that is visually closest to the submat.
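The index-and-lookup at the heart of the algorithm can be sketched in a few lines (plain Python; the file names and mean colors are entirely made up): each tile is replaced by the indexed picture whose mean BGR color lies at the smallest Euclidean distance.

```python
import math

# Sketch of the photomosaic lookup with made-up file names and colors:
# the index maps each candidate picture to its mean BGR color, and a
# tile picks the candidate at the smallest L2 distance.

index = {
    "cat1.jpg": (120, 130, 140),
    "cat2.jpg": (30, 40, 50),
    "cat3.jpg": (200, 210, 220),
}

def find_closest(tile_mean, index):
    """Return the file whose mean color is nearest to the tile's mean color."""
    return min(index, key=lambda f: math.dist(tile_mean, index[f]))

print(find_closest((35, 45, 55), index))     # cat2.jpg
print(find_closest((190, 200, 215), index))  # cat3.jpg
```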

Let’s put this in action!

How it works

The first step is to write a function that computes the mean average of the colors of a mat. We use mean-std-dev again for that purpose, and since we are only interested in the mean for this exercise, that is what the function returns.

(defn mean-average-bgr [mat]
  (let [_mean (new-matofdouble)]
    (-> mat
        clone
        (median-blur! 3)
        (mean-std-dev _mean (new-matofdouble)))
    _mean))

Let’s call this on any picture to see what happens.

(-> "resources/chapter03/emilie1.jpg"
    (imread IMREAD_REDUCED_COLOR_8)
    mean-average-bgr
    dump)

The return values are shown in the following. Those values are the mean average for each of the three RGB channels.

[123.182]
[127.38]
[134.128]

Let’s sidestep a bit and compare the norms of three matrices : ex1, ex2, and ex3. Looking at their content, you can “feel” that ex1 and ex2 are closer than ex1 and ex3.

(def ex1 (u/matrix-to-mat [[0 1 2]]))
(def ex2 (u/matrix-to-mat [[0 1 3]]))
(def ex3 (u/matrix-to-mat [[0 1 7]]))
(norm ex1 ex2)
; 1.0
(norm ex1 ex3)
; 5.0

This is confirmed by the result of the output of the norm function, which calculates the distance between the matrices.

And this is what we are going to use. First, we create an index of all the files available. The index is a map created by loading each image as a mat, and computing its mean-average-bgr.

(defn indexing [files for-size]
  (zipmap files
          (map #(-> % imread (resize! for-size) mean-average-bgr) files)))

The output of the function is a map where each entry is a key-value pair of the form filepath -> mean-average-bgr.

To find the closest image now that we have an index, we compute the norm between the mat (or submat, later on) under consideration and each of the mean-bgr matrices of our index.

We then sort and take the lowest value. This is what find-closest does.

(defn find-closest [target indexed]
  (let [mean-bgr-target (mean-average-bgr target)]
    (first
      (sort-by val <
               (apply-to-vals indexed #(norm mean-bgr-target %))))))

apply-to-vals is a function that takes a hashmap and a function, applies the function to all the values in the map, and leaves the rest as is.

(defn apply-to-vals [m f]
  (into {} (for [[k v] m] [k (f v)])))

The hardest part is done; let’s get to the meat of the photomosaic algorithm.

The tile function creates a grid over the input picture and retrieves submats, one for each tile of the grid.

It then loops over the submats one by one, computes each submat's mean color average using the same function, and calls find-closest with that average and the previously created index.

The call to find-closest returns a file path, from which we load a mat and then replace the tile's submat in the target picture, just by copying the loaded mat with the usual copy-to.

See this in the function tile written here.

(defn tile [org indexed ^long grid-x ^long grid-y]
  (let [dst (u/mat-from org)
        width (/ (.cols dst) grid-x)
        height (/ (.rows dst) grid-y)
        cache (java.util.HashMap.)]
    (doseq [^long i (range 0 grid-y)]
      (doseq [^long j (range 0 grid-x)]
        (let [square (submat org (new-rect (* j width) (* i height) width height))
              best (first (find-closest square indexed))
              img (get-cache-image cache best width height)
              sub (submat dst (new-rect (* j width) (* i height) width height))]
          (copy-to img sub))))
    dst))

The main entry point is a function named photomosaic, which simply creates the index of averages up front and passes it to the tile function.

(defn photomosaic
  [images-folder target-image grid-x grid-y]
  (let [indexed (indexing (collect-pictures images-folder)
                          (new-size grid-x grid-y))
        target  (imread target-image)]
    (tile target indexed grid-x grid-y)))
Whoo-hoo. It’s all there. Creating the photomosaic is now as simple as calling the function of the same name with the proper parameters:
  • Folder of jpg images

  • The picture we want to mosaic

  • The size of the grid

Here is a simple sample:

(def lechat
  (photomosaic
    "resources/cat_photos"
    "resources/chapter03/emilie5.jpg"
    100 100))
And the first photomosaic ever of Marcel the cat is shown in Figure 3-91.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig91_HTML.jpg
Figure 3-91

Mosaic of a sleeping cat

Another photomosaic input/output, this one of Kenji's cat, is in Figure 3-92.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig92_HTML.jpg
Figure 3-92

Kogure-san’s cat

And, a romantic mosaic in Figure 3-93.
../images/459821_1_En_3_Chapter/459821_1_En_3_Fig93_HTML.jpg
Figure 3-93

Neko from Fukuoka

The cats used in the pictures are all included in the examples, and not a single cat has been harmed, so now it is probably your turn to create your own awesome-looking mosaics … Enjoy!
