© Nicolas Modrzyk 2018
Nicolas Modrzyk, Java Image Processing Recipes, https://doi.org/10.1007/978-1-4842-3465-5_2

2. OpenCV with Origami

Nicolas Modrzyk1 
(1)
Tokyo, Japan
 

After staring at origami directions long enough, you sort of become one with them and start understanding them from the inside.

Zooey Deschanel

../images/459821_1_En_2_Chapter/459821_1_En_2_Figa_HTML.jpg

The Origami library was born out of the motivation that computer vision–related programming should be simple to set up, simple to keep running, and easy to experiment with.

These days, when artificial intelligence and neural networks are all the rage, I was on a mission to prepare and generate data for various neural networks. It quickly became clear that you cannot just dump any kind of image or video data to a network and expect it to behave efficiently. You need to organize all those images or videos by size, maybe colors or content, and automate the processing of images as much as possible, because sorting a billion images by hand may prove time-consuming indeed.

So, in this chapter we present Origami: a Clojure wrapper for the OpenCV library on the Java VM, along with a project template and samples to work with, all in a concise language.

The examples will be done in such a way that you will be introduced to the OpenCV code via Clojure.

The setup you have seen in the previous chapter can be almost entirely reused as is, so no time will be wasted learning what was already learned. Mainly, you will just need to add the library as a dependency to a newly created project.

Once this simple additional setup is done, we will review OpenCV concepts through the eyes of the Origami library.

2.1 Starting to Code with Origami

Life itself is simple…it’s just not easy.

Steve Maraboli

Problem

You have heard about this library wrapping OpenCV in a lightweight DSL named Origami and you would like to install it and give it a try on your machine.

Solution

If you have read or flipped through the first chapter of this book, you will remember that Leiningen was used to create a project template and lay out files in a simple project layout.

Here, you will use a different project template named clj-opencv, which will download the dependencies and copy the required files for you.

You will then be presented with the different coding styles that can be used with this new setup.

How it works

With Leiningen still installed on your machine, you can create a new project based on a template, in the same way you created a Java OpenCV-based project.

Project Setup with a Leiningen Template

The project template this time is named clj-opencv and is called with Leiningen using the following one-liner on the terminal or console:

lein new clj-opencv myfirstcljcv

This will download the new template and create a myfirstcljcv folder with approximately the following content:

├── notes
│   ├── empty.clj
│   └── practice.clj
├── output
├── project.clj
├── resources
│   ├── XML
│   │   ├── aGest.xml
│   │   ├── closed_frontal_palm.xml
│   │   ├── face.xml
│   │   ├── fist.xml
│   │   ├── haarcascade_eye_tree_eyeglasses.xml
│   │   ├── haarcascade_frontalface_alt2.xml
│   │   └── palm.xml
│   ├── cat.jpg
│   ├── minicat.jpg
│   ├── nekobench.jpg
│   ├── souslesoleil.jpg
│   └── sunflower.jpg
└── test
    └── opencv3
        ├── ok.clj
        ├── simple.clj
        ├── tutorial.clj
        └── videosample.clj
6 directories, 19 files
In the preceding file structure
  • notes is a folder containing code in the form of notes, for gorilla and lein-gorilla. We will review how to use those two beasts right after.

  • project.clj is the Leiningen project file you have already seen.

  • resources contains sample images and XML files for exercising the OpenCV recognition features.

  • test contains sample Clojure code showing how to get started with OpenCV and Origami.

The project.clj file, as you remember, holds almost all of the project metadata. This time we will use a version that is slightly updated from what you have seen in Chapter 1.

The main differences from the previous chapter are highlighted in the following, so let’s review it quickly.

 (defproject sample5 "0.1-SNAPSHOT"
:injections [
 (clojure.lang.RT/loadLibrary org.opencv.core.Core/NATIVE_LIBRARY_NAME)]
:plugins [[lein-gorilla "0.4.0"]]
:test-paths ["test"]
:resource-paths ["rsc"]
:main opencv3.ok
:repositories [
  ["vendredi" "https://repository.hellonico.info/repository/hellonico/"]]
:aliases {"notebook" ["gorilla" ":ip" "0.0.0.0" ":port" "10000"]}
:profiles {:dev {
  :resource-paths ["resources"]
  :dependencies [
  ; used for proto repl
  [org.clojure/tools.nrepl "0.2.11"]
  ; proto repl
  [proto-repl "0.3.1"]
  ; use to start a gorilla repl
  [gorilla-repl "0.4.0"]
  [seesaw "1.4.5"]]}}
:dependencies [
 [org.clojure/clojure "1.8.0"]
 [org.clojure/tools.cli "0.3.5"]
 [origami "0.1.2"]])

As expected, the origami library has been added as a dependency in the dependencies section.

A plug-in named gorilla has also been added. This will help you run notebook-style code, à la Python; we will cover that later on in this recipe.

The injections segment may be a bit obscure at first, but it mostly says that the native OpenCV library will be loaded when the environment starts, so you do not have to repeat the loading call in every example, as was the case in the first chapter.
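To make the role of :injections concrete, here is what every namespace would otherwise have to do itself before any OpenCV call, as in Chapter 1. The namespace name below is just for illustration; the loading call itself is exactly the one from the project.clj above.

```clojure
;; Without the :injections entry in project.clj, each namespace
;; would need to load the native OpenCV library itself, e.g.:
(ns opencv3.example
  (:import [org.opencv.core Core]))

;; Load the native library by its platform-independent name
;; before making any OpenCV call.
(clojure.lang.RT/loadLibrary Core/NATIVE_LIBRARY_NAME)
```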

Everything Is OK

The main namespace to run is opencv3.ok; let’s run it right now to make sure the setup is ready. This has not changed from the first chapter, and you still use the same command on the terminal or console to run the code:

lein run

After a short bit of output, you should be able to see something like

Using OpenCV Version:  3.3.1-dev ..
#object[org.opencv.core.Mat 0x69ce2f62 Mat [ 1200*1600*CV_8UC1, isCont=true, isSubmat=false, nativeObj=0x7fcb16cefa70, dataAddr=0x10f203000 ]]
A new gray neko has arisen!
The file grey-neko.jpg will have been created in the project folder and should look like the picture in Figure 2-1.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig1_HTML.jpg
Figure 2-1

Grey Neko

The code of the opencv3.ok namespace is written in full as follows:

(ns opencv3.ok
    (:require [opencv3.core :refer :all]))
(defn -main [& args]
  (println "Using OpenCV Version: " VERSION "..")
  (->
   (imread "resources/cat.jpg")
   (cvt-color! COLOR_RGB2GRAY)
   (imwrite "grey-neko.jpg")
   (println "A new gray neko has arisen!")))

You will recognize the imread, cvtColor, and imwrite OpenCV functions used in the previous chapter; indeed, the Java OpenCV functions are simply wrapped in Clojure.

This first code sequence flow written in the origami DSL is shown in Figure 2-2.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig2_HTML.gif
Figure 2-2

Code flow from the first Origami example

Webcam Check

If you have a webcam plugged in, there is another sample that starts the camera and streams in a video. The file to run this is videosample.clj.

As before, you can start the sample by specifying the namespace to the lein run command.

 lein run -m opencv3.videosample
When the command starts, you will be presented with a moving view of the coffee shop you are typing those few lines of code in, just as in Figure 2-3.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig3_HTML.jpg
Figure 2-3

Tokyo coffee shop

While this was just to run the examples included with the project template , you can already start writing your own experimental code in your own files and run them using the lein run command.

The Auto Plug-in Strikes Back

You will soon see why this is usually not the best way to work with Origami, because it recompiles all your source files each time. It is, however, a technique that can be used to check that all your code compiles and runs without errors.

So here is a quick reminder on how to set up the auto plug-in solution presented in Chapter 1 for Java, Scala, and Kotlin, this time for Clojure/Origami code.

Modify the project.clj file to add the lein-auto plug-in so it matches the following code:

 :plugins [[lein-gorilla "0.4.0"][lein-auto "0.1.3"]]
 :auto {:default {:file-pattern #"\.(clj)$"}}

This is not in the project template by default because it’s probably not needed most of the time.

Once you have added this, you can run the usual auto command by prefixing the command you want to execute with auto. Here:

lein auto run

This will execute the main namespace and wait for the file change to compile and execute again.

And so, after modifying the main method of the ok.clj file like in the following:

(defn -main [& args]
    (->
     (imread "resources/cat.jpg")
     (cvt-color! COLORMAP_JET)
     (imwrite "jet-neko.jpg")
     (println "A new jet neko has arisen!")))
You can see a new file jet-neko.jpg created and a new fun-looking cat, as in Figure 2-4.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig4_HTML.jpg
Figure 2-4

Jet cat

Now, while this setup with the auto plug-in is perfectly OK, let’s see how to minimize the latency between typing your code and seeing the processed output, by using a Clojure REPL.

At the REPL

We have just reviewed how to run samples and write some Origami code in a fashion similar to the setup with Java, Scala, and Kotlin, and saw again how to include and use the auto plug-in.

Better than that, Clojure comes with a Read-Eval-Print-Loop (REPL) environment, meaning you can type in lines of code, like commands, one by one, and get them executed instantly.

To start the Clojure REPL, Leiningen has a subcommand named repl, which can be started with

lein repl

After a few startup lines are printed on the terminal/console:

nREPL server started on port 64044 on host 127.0.0.1 - nrepl://127.0.0.1:64044
REPL-y 0.3.7, nREPL 0.2.11
Clojure 1.8.0
Java HotSpot(TM) 64-Bit Server VM 1.8.0_151-b12
    Docs: (doc function-name-here)
          (find-doc "part-of-name-here")
  Source: (source function-name-here)
 Javadoc: (javadoc java-object-or-class-here)
    Exit: Control+D or (exit) or (quit)
 Results: Stored in vars *1, *2, *3, an exception in *e

You will then be greeted with the REPL prompt:

opencv3.ok=>

opencv3.ok is the main namespace of the project, and you can type in code at the prompt just as if you were typing code in the opencv3/ok.clj file. For example, let’s check whether the underlying OpenCV library is loaded properly by printing its version:

(println "Using OpenCV Version: " opencv3.core/VERSION "..")
; Using OpenCV Version:  3.3.1-dev ..

The library is indeed loaded properly, and native binding is found via Leiningen’s magic.

Let’s use it right now for a kick-start. The following two lines get some functions from the utils namespace, mainly to open a frame, and then load an image and open it into that frame:

(require '[opencv3.utils :as u])
(u/show (imread "resources/minicat.jpg"))
The cute cat from Figure 2-5 should now be showing up on your computer as well.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig5_HTML.jpg
Figure 2-5

Cute cat

Origami encourages the notion of pipelines for image manipulation. So, to read an image, convert the color of the loaded image, and show the resulting image in a frame, you would usually pipe all the function calls one after the other, using the Clojure threading macro ->, just like in the following one-liner:

(-> "resources/minicat.jpg" imread (cvt-color! COLOR_RGB2GRAY) (u/show))
Which now converts the minicat.jpg from Figure 2-5 to its gray version as in Figure 2-6.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig6_HTML.jpg
Figure 2-6

Grayed cute cat

-> does nothing more than reorganize code so that the result of the first invocation goes to the input of the next line, and so on. This makes for very swift and compact image-processing code.
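As a quick illustration, the one-liner above is just a readable rewrite of the nested calls you would otherwise have to write:

```clojure
;; The threading macro rewrites this pipeline...
(-> "resources/minicat.jpg"
    imread
    (cvt-color! COLOR_RGB2GRAY)
    (u/show))

;; ...into the equivalent nested form, where each result becomes
;; the FIRST argument of the next call:
(u/show (cvt-color! (imread "resources/minicat.jpg") COLOR_RGB2GRAY))
```

Both forms behave identically; the threaded version simply reads top to bottom, in the order the image is processed.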

Note that the lines execute directly, so you don’t have to wait for file changes or anything and can just get the result onscreen as you press the Enter key.

Instant gratification.

Instant gratification takes too long.

Carrie Fisher

REPL from Atom

The REPL started by Leiningen is quite nice, with a bunch of other features you can discover through the documentation, but it’s hard to compete with the autocompletion provided by a standard text editor.

Using all the same project metadata from the project.clj file, the Atom editor can actually provide, via a plug-in, instant and visual completion choices.

The plug-in to install is named proto-repl. Effectively, you will need to install two plug-ins
  • the ink plug-in, required by proto-repl

  • the proto-repl plug-in

to get the same setup on your atom editor, as shown in Figure 2-7.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig7_HTML.jpg
Figure 2-7

Install two plug-ins in Atom: ink and proto-repl

The same Leiningen-based REPL can be started either from the Atom menu, as in Figure 2-8, or with the equivalent key shortcut.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig8_HTML.jpg
Figure 2-8

Start a REPL from within Atom

When starting the REPL, a window named Proto-REPL opens on the right-hand side of the Atom editor. This is exactly the same REPL that you have used when executing the lein repl command directly from the terminal. So, you can type in code there too.

But the real gem of this setup is to have autocompletion and choice presented to you when typing code, as in Figure 2-9.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig9_HTML.jpg
Figure 2-9

Instant completion

You can now retype the code to read and convert the color of an image directly in a file, let’s say ok.clj. Your setup should now be similar to that shown in Figure 2-10.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig10_HTML.jpg
Figure 2-10

Atom editor + Clojure code

Once you have typed the code in, you can select code and execute the selected lines of code by using Ctrl-Alt+s (on Mac, Command-Ctrl+s).

You can also execute the code block before the cursor by using Ctrl-Alt+b (on Mac, Command-Ctrl+b) and get your shot of instant gratification.

After code evaluation, and a slight tab arrangement, you can have instant code writing on the left-hand side, and the image transformation feedback on the right-hand side, just as in Figure 2-11.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig11_HTML.jpg
Figure 2-11

The ideal editor-based computer vision environment

The jet-set cat is now showing in the output.jpg file, and can be updated by updating and executing code in the opened editor tab.

For example, see by yourself what happens when adding the resize! function call in the processing flow, as in the following code.

(->
    (imread "resources/cat.jpg")
    (resize! (new-size 150 100))
    (cvt-color! COLORMAP_JET)
    (imwrite "output.jpg"))

Nice. A newly resized jet-set cat is now instantly showing on your screen.

Gorilla Notebook

To complete this recipe, let’s present how to use gorilla from within an Origami project.

Gorilla is a Leiningen plug-in with which you can write and run notebooks, à la Python’s Jupyter.

This means you can write code alongside documentation, and even better, you can also share those notes to the outside world.

How does that work? Gorilla takes your project setup and uses it to execute the code in a background REPL. Hence, it will find the Origami/OpenCV setup from the project.clj file.

It will also start a web server whose goal is to serve notes or worksheets. Worksheets are pages where you can write lines of code and execute them.

You can also write documentation in the sheet itself in the form of markdown markup, which renders to HTML.

As a result, each of the notes, or worksheets, ends up being effectively a miniblog.

The project.clj file that comes with the clj-opencv template defines a convenient leiningen alias to start gorilla via the notebook alias:

:aliases {"notebook" ["gorilla" ":ip" "0.0.0.0" ":port" "10000"]}

This effectively tells leiningen to convert the notebook subcommand to the following gorilla command:

lein gorilla :ip 0.0.0.0 :port 10000

Let’s try it, by using the following command on a console or terminal:

lein notebook

After a few seconds, the Gorilla REPL is started. You can access it already at the following location:

http://localhost:10000/worksheet.html?filename=notes/practice.clj

You will be presented with a worksheet like in Figure 2-12.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig12_HTML.jpg
Figure 2-12

Gorilla notebook and a cat

In a gorilla notebook , every block of the page is either Clojure code or markdown text. You can turn the currently highlighted block to text mode by using Alt+g, Alt+m (or Ctrl+g, Ctrl+m on Mac) where m is for markdown, as in Figure 2-13.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig13_HTML.jpg
Figure 2-13

Markdown text mode

You can also turn back the highlighted block into code mode by using Alt+g, Alt+j (or Ctrl+g, Ctrl+j on Mac), where j is for Clojure, as in Figure 2-14.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig14_HTML.jpg
Figure 2-14

Block of code

To execute the highlighted block of code , you would use Shift+Enter, and the block turns into executed mode, as in Figure 2-15.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig15_HTML.jpg
Figure 2-15

Clojure code was executed

What that does is read from the code block, send the input to the background REPL via a websocket, retrieve the result, and print it in the underlying div of the code block.

To make it easy to navigate a worksheet, the most used shortcuts have been gathered in Table 2-1.
Table 2-1

Most Used Key Shortcuts for the Gorilla REPL

Shortcut (Windows/Linux)   Shortcut (Mac)      Usage
                                               Go to the block above
                                               Go to the block below
Shift+Enter                Shift+Enter         Evaluate the highlighted block
Alt+g, Alt+b               Ctrl+g, Ctrl+b      Insert a block before the current one
Alt+g, Alt+n               Ctrl+g, Ctrl+n      Insert a block next to the current one
Alt+g, Alt+u               Ctrl+g, Ctrl+u      Move the current block up one block
Alt+g, Alt+d               Ctrl+g, Ctrl+d      Move the current block down one block
Alt+g, Alt+x               Ctrl+g, Ctrl+x      Delete the current block
Alt+space                  Ctrl+space          Autocompletion options
Alt+g, Alt+s               Ctrl+g, Ctrl+s      Save the current worksheet
Alt+g, Alt+l               Ctrl+g, Ctrl+l      Load a worksheet (a file)
Alt+g, Alt+e               Ctrl+g, Ctrl+e      Save the current worksheet to a new file name

Alright; so now you know all that is needed to start typing code in the gorilla REPL. Let’s try this out right now. In a new code block of the worksheet, try to type in the following Clojure code .

(-> "http://eskipaper.com/images/jump-cat-1.jpg"
 (u/mat-from-url)
 (u/resize-by 0.3)
 (u/mat-view))
And now… Shift+Enter! This should bring you close to Figure 2-16 and a new shot of instant gratification.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig16_HTML.jpg
Figure 2-16

Instant jumping cat

Remember that all of this is happening in the browser, which has three direct positive consequences.

The first one is that remote people can actually view your worksheets, and they can contribute documentation from their own machines by connecting to the URL directly.

Second, they can also execute code directly block by block to understand the flow.

Third, the saved format of the worksheets is such that they can be used as standard namespaces and can be used through normal code-writing workflow. Conversely, it also means that standard Clojure files can be opened, and documentation can be added via the Gorilla REPL.
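If you open a saved worksheet in a plain text editor, you can see how this dual nature works: the sheet is a regular .clj file in which Gorilla marks its segments with special comments, so the file still loads as an ordinary namespace. The exact markers may vary between Gorilla versions; the general shape looks something like this:

```clojure
;; gorilla-repl.fileformat = 1

;; **
;;; # My worksheet title
;;; Markdown documentation lives inside comment blocks,
;;; so the file still compiles as a normal Clojure namespace.
;; **

;; @@
(ns opencv3.practice
  (:require [opencv3.core :refer :all]
            [opencv3.utils :as u]))
;; @@

;; @@
(-> "resources/minicat.jpg" imread (u/mat-view))
;; @@
```

Because the markdown segments are just comments, the Clojure compiler ignores them, while the Gorilla web UI renders them as HTML.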

From now on, we won’t impose using either the Gorilla REPL or the Atom environment, or even simply typing on the REPL. Effectively, these are three different views on the same project setup.

Simply remember for now that to show a picture, the function to use is slightly different depending on whether you are in the Gorilla REPL or in a standard REPL.

In the Gorilla REPL:

(u/mat-view)

In the standard REPL:

(u/show)

In Atom, you would save the mat to a file:

(imwrite mat "output.jpg")
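Put together, the same pipeline differs only in its last step depending on the environment. A quick sketch, assuming the usual opencv3.core and opencv3.utils requires are in place:

```clojure
;; Same processing pipeline, three ways to look at the result.
;; Assumes (:require [opencv3.core :refer :all]
;;                   [opencv3.utils :as u]) in the namespace.
(def gray-cat
  (-> "resources/minicat.jpg"
      (imread)
      (cvt-color! COLOR_RGB2GRAY)))

(u/mat-view gray-cat)            ; Gorilla REPL: render inline in the worksheet
(u/show gray-cat)                ; standard REPL: open a frame on screen
(imwrite gray-cat "output.jpg")  ; Atom / scripts: save to a file
```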

OK, this time you are really all set! Time for some computer vision basics.

2.2 Working with Mats

Problem

As you remember from Chapter 1, Mat is your best friend when working with OpenCV. You also remember functions like new Mat(), setTo, copyTo, and so on to manipulate a Mat. Now, you wonder how you can do basic Mat operations using the Origami library.

Solution

Since Origami is mainly a wrapper around OpenCV, all the same functions are present in the API. This recipe shows basic Mat operations again, and takes them further by presenting code tricks made possible by using Clojure.

How it works

Creating a Mat

Remember that you need a height, a width, and a number of channels to create a mat. This is done using the new-mat function. The following snippet creates a 30×30 Mat, with one channel per pixel, each value being an integer.

(def mat (new-mat 30 30 CV_8UC1))
If you try to display the content of the mat, either with u/mat-view (Gorilla REPL) or u/show (standard REPL), you will see that the memory assigned to the mat is actually left as is. See Figure 2-17.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig17_HTML.jpg
Figure 2-17

New Mat with no assigned color

Let’s assign a color, the same to every pixel of the Mat. This is either done when creating the Mat, or can be done with set-to, which is a call to the .setTo Java function of OpenCV.

(def mat (new-mat 30 30 CV_8UC1 (new-scalar 105)))
; or
(def mat (new-mat 30 30 CV_8UC1))
(set-to mat (new-scalar 105))
Every pixel in the mat now has value 105 assigned to it (Figure 2-18).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig18_HTML.jpg
Figure 2-18

Mat with assigned color

To understand most of the underlying matrix concepts of OpenCV, it is usually a good idea to check the values of the underlying mat using .dump, or simply dump.

This will be done a few times in this chapter. To use it, simply call dump on the mat you want to see the internals from.

(->>
  (new-scalar 128.0)
  (new-mat 3 3 CV_8UC1)
  (dump))

And the expected output is shown in the following, with the mat points all set to the value of 128.

[128 128 128]
[128 128 128]
[128 128 128]

.dump calls the original OpenCV function and will print all the row and column pixel values in one string.

"[128, 128, 128; 128, 128, 128; 128, 128, 128]"

Creating a Colored Mat

With one channel per pixel, you can only specify the white intensity of each pixel, and thus you can only create gray mats.

To create a colored mat, you need three channels, with each channel’s value representing, by default, the intensity of blue, green, and red: OpenCV stores channels in BGR order, not RGB, so be careful.

To create a 30×30 red mat, the following snippet creates a three-channel mat with each point in the mat set to the BGR value of [0 0 255]:

(def red-mat    (new-mat 30 30 CV_8UC3 (new-scalar 0 0 255)))

In a similar way, to create a blue or green mat:

(def green-mat  (new-mat 30 30 CV_8UC3 (new-scalar 0 255 0)))
(def blue-mat   (new-mat 30 30 CV_8UC3 (new-scalar 255 0 0)))
If you execute all this in the gorilla REPL, each of the mats shows up, as in Figure 2-19.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig19_HTML.jpg
Figure 2-19

Red, green, and blue mats
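You can convince yourself of the BGR channel ordering with a quick dump of a tiny red mat; each row prints three values per pixel, in blue, green, red order, just as in the one-channel dump seen earlier:

```clojure
;; A 2×2 three-channel mat, every pixel set to red.
;; dump prints each row with three values per pixel, in B G R order,
;; so red comes out as 0 0 255.
(dump (new-mat 2 2 CV_8UC3 (new-scalar 0 0 255)))
```

The output should be two rows of the form [0 0 255 0 0 255], one line per row of the mat.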

Using a Submat

You will remember that we saw how to use a submat in Chapter 1; let’s review how to work with submats using Origami.

Here, we first create an RGB mat with three channels per pixel, and set all the pixels to a cyan color.

A submat can then be created using, well, the submat function and a rectangle to define the size of the submat.

This gives the following code snippet:

(def mat (new-mat 30 30 CV_8UC3 (new-scalar 255 255 0)))
(def sub (submat mat (new-rect 10 10 10 10)))
(set-to sub (new-scalar 0 255 255))
The resulting main mat, with yellow inside where the submat was defined, and the rest of the mat in cyan color, is shown in Figure 2-20.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig20_HTML.jpg
Figure 2-20

Submats with Origami

Just for kicks at this stage, see what a one-liner of Origami code can do, by using hconcat!, a function that concatenates multiple mats together, and clojure.core/repeat, which creates a sequence of the same item. Here, mat is the cyan mat with its yellow submat from the previous snippet.

(u/mat-view (hconcat! (clojure.core/repeat 10 mat)))
The resulting pattern is shown in Figure 2-21.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig21_HTML.jpg
Figure 2-21

Origami fun

At this point, you can already figure out some creative generative patterns by yourself.
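For instance, a checkerboard-like pattern can be sketched by alternating two colored tiles with hconcat! and stacking the rows with vconcat!, its vertical sibling mentioned later in this chapter. This is a sketch, assuming vconcat! accepts a sequence of mats just as hconcat! does:

```clojure
;; A checkerboard sketch: alternate cyan and yellow tiles in a row,
;; then stack alternating rows vertically.
(def cyan   (new-mat 30 30 CV_8UC3 (new-scalar 255 255 0)))
(def yellow (new-mat 30 30 CV_8UC3 (new-scalar 0 255 255)))

;; cycle produces an infinite alternating sequence; take cuts it to size.
(def row-a (hconcat! (take 8 (cycle [cyan yellow]))))
(def row-b (hconcat! (take 8 (cycle [yellow cyan]))))

(u/mat-view (vconcat! (take 8 (cycle [row-a row-b]))))
```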

Setting One Pixel Color

Setting all the colors of a mat was done using set-to. Setting one pixel to a color is done using the Java method put. The put function takes a position in the mat, and a byte array with the channel values (again in BGR order) for that pixel.

So, if you want to create a 3×3 mat with all its pixels to yellow, you would use the following code snippet.

(def yellow (byte-array [0 238 238]))
(def a (new-mat 3 3 CV_8UC3))
(.put a 0 0 yellow)
(.put a 0 1 yellow)
(.put a 0 2 yellow)
(.put a 1 0 yellow)
(.put a 1 1 yellow)
(.put a 1 2 yellow)
(.put a 2 0 yellow)
(.put a 2 1 yellow)
(.put a 2 2 yellow)

Unfortunately, the 3×3 mat is a bit too small for this book, so you should type in the code yourself.

The dump function works nicely here though, and you can see the content of the yellow mat in the following:

[0 238 238 0 238 238 0 238 238]
[0 238 238 0 238 238 0 238 238]
[0 238 238 0 238 238 0 238 238]

Typing all this line by line is a bit tiring though, so this is where you use Clojure code to loop over the pixels as needed.

A call to Clojure core’s doseq becomes convenient to reduce the boilerplate.

(doseq [x [0 1 2]
        y [0 1 2]]
  (prn  "x=" x "; y=" y))

The preceding doseq snippet simply loops over all the pixel coordinates of a 3×3 mat.

"x=" 0 "; y=" 0
"x=" 0 "; y=" 1
"x=" 0 "; y=" 2
"x=" 1 "; y=" 0
"x=" 1 "; y=" 1
...

So, to have a bit more fun, let’s display some random red variants for each pixel of a 100×100 colored mat. This would be pretty tiresome by hand, so let’s use a doseq loop here too.

(def height 100)
(def width 100)
(def a (new-mat height width CV_8UC3))
(doseq [x (range width)
        y (range height)]
  (.put a x y (byte-array [0 0 (rand 255)])))
Figure 2-22 gives one version of the executed snippet.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig22_HTML.jpg
Figure 2-22

Randomly filled mat with variant of red pixels

Piping Process and Some Generative Art

You can already see how Origami makes it quite simple and fun to integrate generative work with OpenCV mats.

This short section will also be a quick introduction to the piping process that is encouraged by Origami.

Clojure has two main constructs (called macros) named -> and ->>. They pipe results throughout consecutive function calls.

The result of the first function call is passed as a parameter to the second function, and then the result of that call to the second function is passed on the third one, and so on.

The first macro, ->, passes the result as the first parameter to the next function call.

The second macro, ->>, passes the result as the last parameter to the next function call.
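The difference between the two macros is easiest to see with a function where argument position matters, such as division; this is plain Clojure, no OpenCV needed:

```clojure
;; -> threads the value as the FIRST argument of each call:
(-> 100 (/ 10))   ; expands to (/ 100 10), i.e. 10

;; ->> threads the value as the LAST argument of each call:
(->> 100 (/ 10))  ; expands to (/ 10 100), i.e. 1/10
```

OpenCV wrappers like cvt-color! take the mat first, so they fit naturally in ->, while constructors like new-mat take the scalar last, which is why the following example uses ->>.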

For example, creating a random gray mat could be done this way:

(->> (rand 255)
     (double)
     (new-scalar)
     (new-mat 30 30 CV_8UC1)
     (u/mat-view))
Which, read line by line, gives the following steps:
  • A random value is generated with rand; that value is between 0 and 255.

  • The generated value is then coerced to a double, which is the type new-scalar expects.

  • new-scalar is used to create the equivalent of a byte array that OpenCV can conveniently handle.

  • We then create a new 30×30 mat of one channel and pass the scalar to the new-mat function to set the color of the mat to the randomly generated value.

  • Finally, we can view the generated mat (Figure 2-23).
    ../images/459821_1_En_2_Chapter/459821_1_En_2_Fig23_HTML.jpg
    Figure 2-23

    Generated random gray mat

You could do the same with a randomly colored mat as well. This time, the rand function is called three times (Figure 2-24).

(->> (new-scalar (rand 255) (rand 255) (rand 255))
     (new-mat 30 30 CV_8UC3)
     (u/mat-view))

Or, with the same result, but using a few more Clojure core functions:

(->>
 #(rand 255)
 (repeatedly 3)
 (apply new-scalar)
 (new-mat 30 30 CV_8UC3)
 (u/mat-view))
where
  • #(...) creates an anonymous function

  • repeatedly calls the preceding function three times to generate an array of three random values

  • apply uses the array as parameters to new-scalar

  • new-mat, as you have seen before, creates a mat

  • u/mat-view displays the mat (Figure 2-24) in the gorilla REPL
    ../images/459821_1_En_2_Chapter/459821_1_En_2_Fig24_HTML.jpg

    Figure 2-24.

You can see now how you could build on those mini code flows to create different generative variations of mats. You can also combine those mats, of course, using the hconcat! or vconcat! functions of OpenCV.

The following new snippet generates a sequence of 25 elements using range and then creates gray mats in the range of 0–255 by scaling the range values (Figure 2-25).

 (->> (range 25)
      (map #(new-mat 30 30 CV_8UC1 (new-scalar (double (* % 10)))))
      (hconcat!)
      (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig25_HTML.jpg
Figure 2-25

A gray gradient of 25 mats

You can also smooth things up by generating a range of 255 values, and making each created mat slightly smaller, of size 20×2 (Figure 2-26).

(->> (range 255)
     (map #(new-mat 20 2 CV_8UC1 (new-scalar (double %))))
     (hconcat!)
     (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig26_HTML.jpg
Figure 2-26

Smooth gray gradient of 255 mats
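Combining the two directions, a rough two-dimensional gradient can be sketched by building one horizontal gradient per row and stacking the rows with vconcat!. This is a sketch, assuming vconcat! accepts a sequence of mats just as hconcat! does:

```clojure
;; Stack 10 horizontal gradients, each starting a bit brighter,
;; to sketch a rough two-dimensional gray gradient.
(->> (range 10)
     (map (fn [row]
            (->> (range 10)
                 ;; intensity grows with both the row and column index
                 (map #(new-mat 20 20 CV_8UC1
                                (new-scalar (double (* (+ row %) 12)))))
                 (hconcat!))))
     (vconcat!)
     (u/mat-view))
```

The maximum intensity here is (9 + 9) * 12 = 216, safely inside the 0–255 range of a CV_8UC1 channel.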

2.3 Loading, Showing, Saving Mats

Problem

You have seen how to create and generate mats; now you would like to save them, reopen them, and open mats located in a URL.

Solution

Origami wraps the two main opencv functions to interact with the filesystem, namely, imread and imwrite.

It also presents a new function called imshow, which you may have seen if you have used standard OpenCV before. It will be covered in greater detail here.

Finally, u/mat-from-url is an origami utility function that allows you to retrieve a mat that is hosted on the net.

How it works

Loading

imread works exactly the same as its OpenCV equivalent; this mostly means that you simply give it a path from the filesystem, and the file is read and converted to a ready-to-be-used Mat object.

In its simplest form, loading an image can be done as in the following short code snippet:

(def mat (imread "resources/kitten.jpg"))

The file path, resources/kitten.jpg, is relative to the project, or can also be a full path on the file system.

The resulting loaded Mat object is shown in Figure 2-27.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig27_HTML.jpg
Figure 2-27

“This is not a cat.”

Following the OpenCV documentation, these image file formats are currently supported by Origami:
  • Windows bitmaps - *.bmp, *.dib

  • JPEG files - *.jpeg, *.jpg, *.jpe

  • Portable Network Graphics - *.png

  • Sun rasters - *.sr, *.ras

The following are also usually supported by OpenCV but may not be supported on all platforms coming with Origami:
  • JPEG 2000 files - *.jp2

  • WebP - *.webp

  • Portable image format - *.pbm, *.pgm, *.ppm

  • TIFF files - *.tiff, *.tif
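One OpenCV behavior worth knowing here: when a file is missing or in an unsupported format, imread does not throw an exception; it returns an empty mat. A small hedged guard sketch (safe-imread is not part of Origami, just an illustration):

```clojure
;; imread returns an empty mat on failure rather than throwing.
;; This hypothetical helper turns that into an explicit error.
(defn safe-imread [path]
  (let [m (imread path)]
    (if (.empty m)
      (throw (ex-info (str "Could not read image: " path) {:path path}))
      m)))
```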

When loading an image, you can refer to Table 1-3 to specify the option used to load the image, such as grayscale, and resize at the same time.

To load in grayscale and resize the image to a quarter of its size, you could use the following snippet, written using the pipeline style you have just seen.

(-> "resources/kitten.jpg"
    (imread IMREAD_REDUCED_GRAYSCALE_4)
    (u/mat-view))
It loads the same picture, but the mat looks different this time, as its color has been converted, as in Figure 2-28.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig28_HTML.jpg
Figure 2-28

“This is not a gray cat.”

Saving

The imwrite function from Origami is based on OpenCV’s imwrite, but reverses the order of the parameters to make the function easy to use in processing pipes.

For example, to write the previously loaded gray cat to a new file, you would use

(imwrite mat "grey-neko.png")
A new file, grey-neko.png , will be created from the loaded mat object (Figure 2-29).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig29_HTML.jpg
Figure 2-29

grey-neko.png

You can observe that the resulting file image has actually been converted from jpg to png for you, just by specifying it as the extension in the file name.

The parameter order was reversed so that, as in this case, you can save images from within the pipeline code flow.

See in the following how the image is saved during the flow of transformation.

(-> "resources/kitten.jpg"
    (imread IMREAD_REDUCED_GRAYSCALE_4)
    (imwrite "grey-neko.png")
    (u/mat-view))

The mat will be saved in the file image grey-neko.png, and the processing will go on to the next step, here mat-view.

Showing

Origami comes with a quick way of previewing images and streams, in the form of the imshow function from the opencv3.utils namespace.

(-> "resources/kitten.jpg"
    (imread)
    (u/imshow))
The imshow function takes a mat as the parameter and opens a Java frame with the mat inside, as shown in Figure 2-30.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig30_HTML.jpg
Figure 2-30

Framed cat

The frame opened by imshow has a few default key shortcuts, as shown in Table 2-2.
Table 2-2

Default Keys in Quick View
  • Q: Close the frame

  • F: Full-screen the frame; press again to return to window mode

  • S: Quick-save the picture currently showing

This is not all; you can pass a map when using imshow to define various settings from the background color of the frame to its size and so forth. Also, a handlers section can be added to the map, where you can define your own key shortcuts.

See an example of the configuration map for the following frame.

{:frame
  {:color "#000000" :title "image" :width 400 :height 400}
 :handlers
  { 85 #(gamma! % 0.1) 86 #(gamma! % -0.1)}}

In the handlers section, each entry of the map is made of an ASCII key code and a function. The function takes a mat and has to return a mat. Here, you can suppose gamma! is a function changing brightness on a mat, depending on a brightness parameter.
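
The gamma! function is not defined in the snippet above; as a minimal sketch, such a brightness-changing function could be built on Origami's multiply! and u/matrix-to-mat-of-double (both covered later in this chapter), scaling all three channels by the same factor. The exact implementation is up to you.

```clojure
; Hypothetical gamma!: scales every channel by (1 + delta),
; so a delta of 0.1 brightens the mat slightly and -0.1 darkens it.
(defn gamma! [mat delta]
  (let [f (+ 1.0 delta)]
    (multiply! mat (u/matrix-to-mat-of-double [[f f f]]))))
```

Since the function both modifies and returns the mat, it fits the mat-in, mat-out contract expected by the handlers map.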

Figure 2-31 shows the mat after pressing u.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig31_HTML.jpg
Figure 2-31

Dark cat

Figure 2-32 shows the mat after pressing v.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig32_HTML.jpg
Figure 2-32

Bright cat

This is not the most important section of this book, but the quick frame becomes quite handy when playing with the video streams later on in Chapter 4.

Loading from URL

While the picture can usually be accessed from the filesystem the code is running on, there is often a need to process a picture that is remotely hosted.

Origami provides a basic mat-from-url function that takes a URL and turns it into an OpenCV mat.

The standard way to do this in origami is shown in the following snippet:

(-> "http://www.hellonico.info/static/cat-peekaboo.jpg"
    (u/mat-from-url)
    (u/mat-view))
And the resulting image is shown in Figure 2-33.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig33_HTML.jpg
Figure 2-33

Cat from the Internet

This was the only way to load a picture until recently. But then, most of the time, you would be doing something like

(-> "http://www.hellonico.info/static/cat-peekaboo.jpg"
    (u/mat-from-url)
    (u/resize-by 0.5)
    (u/mat-view))

to resize the picture right after loading it. Now, u/mat-from-url also accepts imread parameters. So, to load the remote picture in gray, and reduce its size altogether, you can directly pass in the IMREAD_* parameter. Note that this has the side effect of creating a temporary file on the filesystem.

(-> "http://www.hellonico.info/static/cat-peekaboo.jpg"
    (u/mat-from-url IMREAD_REDUCED_GRAYSCALE_4)
    (u/mat-view))
The same remote picture is now both smaller and loaded in black and white, as shown in Figure 2-34.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig34_HTML.jpg
Figure 2-34

Return of the cat in black and white

2.4 Working with Colors, ColorMaps, and ColorSpaces

Color is the place where our brain and the universe meet.

Paul Klee

Problem

You want to learn a bit more about how to handle colors in OpenCV. Up to now, we have only seen colors using the RGB encoding. There must be some more!

Solution

Origami provides two simple namespaces, opencv3.colors.html and opencv3.colors.rgb, to create the scalar values used for basic coloring, so we will start by reviewing how to use those two namespaces to set colors on a mat.

A color map works like a color filter, where you make the mat redder or bluer, depending on your mood.

apply-color-map! and transform! are the two opencv core functions used to achieve the color switch.

Finally, cvt-color! is another core opencv function that brings a mat from one color space to another one, for example from RGB to black and white. This is an important key feature of OpenCV, as most recognition algorithms cannot be used properly in standard RGB.

How it works

Simple Colors

Colors from the origami packages need to be required, and so when you use them, you need to update your namespace declaration at the top of the notebook.

(ns joyful-leaves
   (:require
    [opencv3.utils :as u]
    [opencv3.colors.html :as html]
    [opencv3.colors.rgb :as rgb]
    [opencv3.core :refer :all]))

With the namespace rgb, you can create scalars for RGB values instead of guessing them.

So, if you want to use a red color , you can get your environment to help you find and autocomplete the scalar you are looking for, as shown in Figure 2-35.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig35_HTML.jpg
Figure 2-35

RGB colors

And so, using this in action, you can indeed use the following snippet to create a 20×20 mat of a red color.

(-> (new-mat 20 20 CV_8UC3 rgb/red-2)
    (u/mat-view ))

Note that since rgb/red-2 is a scalar, you can dump the values for each channel by just printing it:

#object[org.opencv.core.Scalar 0x4e73ed0 "[0.0, 0.0, 205.0, 0.0]"]

This is pretty nice to find color codes quickly.

The opencv3.colors.html namespace was created so that you could also use the traditional hexadecimal notation used in css. For a nice light green with a bit of blue, you could use this:

(html/->scalar "#66cc77")

In full sample mode, and using threading ->>, this gives

(->> (html/->scalar "#66cc77")
     (new-mat 20 20 CV_8UC3 )
     (u/mat-view ))
which creates a small mat of a light green/blue color (Figure 2-36).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig36_HTML.jpg
Figure 2-36

Colors using HTML codes

Printing the color itself gives you the assigned channel values (note that OpenCV stores them in BGR order):

(html/->scalar "#66cc77")
; "[119.0, 204.0, 102.0, 0.0]"

And you can indeed check that the colors match by creating the RGB scalar yourself.

(->> (new-scalar 119 204 102)
     (new-mat 20 20 CV_8UC3 ))

This will give you a mat with the exact same RGB-based color .
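
As a side note, the hexadecimal-to-scalar conversion itself is plain arithmetic. The following is a hedged sketch in pure Clojure (the real html/->scalar implementation may differ) showing where the [119 204 102] values come from:

```clojure
;; Parse a "#rrggbb" CSS color into OpenCV's BGR channel order.
(defn hex->bgr [s]
  (let [n (Long/parseLong (subs s 1) 16)]
    [(bit-and n 0xff)                      ; blue  (0x77 = 119)
     (bit-and (bit-shift-right n 8) 0xff)  ; green (0xcc = 204)
     (bit-shift-right n 16)]))             ; red   (0x66 = 102)

(hex->bgr "#66cc77")
; [119 204 102]
```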

Color Maps

Color maps can be understood as a simple color change, applied like a filter, resulting in something similar to the effects of your favorite smartphone photo application.

There are a few default maps that can be used with OpenCV; let’s try one of them, say COLORMAP_AUTUMN, which turns the mat into a quite autumnal red.

To apply the map to a Mat, for example the cat from Figure 2-37, simply use the apply-color-map! function .
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig37_HTML.jpg
Figure 2-37

Cat to be colored

The following snippet shows how to make use of the usual imread and the apply-color-map sequentially.

(-> "resources/cat-on-sofa.jpg"
    (imread IMREAD_REDUCED_COLOR_4)
    (apply-color-map! COLORMAP_AUTUMN)
    (u/mat-view))
The resulting cat is shown in Figure 2-38.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig38_HTML.jpg
Figure 2-38

Autumn cat

Here is the full list of standard color maps available straight out of the box; try them out!
  • COLORMAP_HOT

  • COLORMAP_HSV

  • COLORMAP_JET

  • COLORMAP_BONE

  • COLORMAP_COOL

  • COLORMAP_PINK

  • COLORMAP_RAINBOW

  • COLORMAP_OCEAN

  • COLORMAP_WINTER

  • COLORMAP_SUMMER

  • COLORMAP_AUTUMN

  • COLORMAP_SPRING

You can also define your own color space conversion. This is done by a matrix multiplication, which sounds geeky but is actually simpler than it seems.

We will take the example of rgb/yellow-2. You may not remember its values, so if you print it, you'll find that it is actually coded as no blue, some green, and some red, which gives the following channel values: [0 238 238].

Then, we define a transformation matrix made of three columns and three rows; since we are working with RGB mats, we will do this in three-channel mode .

[0 0 0]     ; blue
[0 0.5 0]   ; green
[0 1 0.5]   ; red

What does this matrix do? Remember that we want to apply a color transformation for each pixel, meaning in output we want a set of RGB values for each pixel.

For any given pixel , the new RGB values are such that
  • Blue is 0 × Input Blue + 0 × Input Green + 0 × Input Red

  • Green is 0 × Input Blue + 0.5 × Input Green + 0 × Input Red

  • Red is 0 × Input Blue + 1 × Input Green + 0.5 × Input Red

And so, since our Mat is all yellow, we have the following input:

[0 238 238]

And the output of each pixel is such as follows:

[0×0 + 0×238 + 0×238, 0×0 + 0.5×238 + 0×238, 0×0 + 1×238 + 0.5×238]

Or, after clamping to 255, the maximum value for a channel:

[0 119 255]
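
You can double-check that arithmetic in plain Clojure, with no OpenCV involved; each output channel is the dot product of one matrix row with the input pixel, clamped at 255:

```clojure
;; Apply one row of the transformation matrix to a BGR pixel,
;; clamping the result to the 255 channel maximum.
(defn apply-row [row pixel]
  (min 255 (long (reduce + (map * row pixel)))))

(mapv #(apply-row % [0 238 238])
      [[0 0 0]      ; blue
       [0 0.5 0]    ; green
       [0 1 0.5]])  ; red
; [0 119 255]
```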

Now in origami code, this gives the following:

(def custom
  (u/matrix-to-mat [
  [0 0 0]          ; blue
  [0 0.5 0]        ; green
  [0 1 0.5]        ; red
  ]))
(-> (new-mat 3 3 CV_8UC3 rgb/yellow-2)
    (dump))

Here, the mat content is shown with dump:

[0 238 238 0 238 238 0 238 238]
[0 238 238 0 238 238 0 238 238]
[0 238 238 0 238 238 0 238 238]

Then:

(-> (new-mat 30 30 CV_8UC3 rgb/yellow-2) u/mat-view)
(-> (new-mat 3 3 CV_8UC3 rgb/yellow-2)
    (transform! custom)
    (dump))

And the result of the transformation, shown in the following, consists as expected of a matrix of [0 119 255] values.

[0 119 255 0 119 255 0 119 255]
[0 119 255 0 119 255 0 119 255]
[0 119 255 0 119 255 0 119 255]
(-> (new-mat 30 30 CV_8UC3 rgb/yellow-2)
    (transform! custom)
    u/mat-view)

Make sure you execute the statements one by one to see the different RGB values in the output, along with the colored mats.

You may look around in the literature, but a nice sepia transformation would use the following matrix:

(def sepia-2 (u/matrix-to-mat [
  [0.131 0.534 0.272]
  [0.168 0.686 0.349]
  [0.189 0.769 0.393]]))
(-> "resources/cat-on-sofa.jpg"
    (imread IMREAD_REDUCED_COLOR_4)
    (transform! sepia-2)
    (u/mat-view))
With the resulting sepia cat in Figure 2-39.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig39_HTML.jpg
Figure 2-39

Sepia cat

Time to go out and make your own filters!

We have seen how transform is applied to each pixel in RGB. Later on, when switching to other colorspaces, you can also remember that even though the values won’t be red, blue, green anymore, this transform! function can still be used in the same way.

Color Space

You have been working almost exclusively in the RGB color space up to now, which is the simplest one to use. In most computing cases, though, RGB is not the most efficient, so many other color spaces have been created over the years and are available for use. With Origami, to switch from one to another, you usually use the function cvt-color!.

What does a color space switch do?

It basically means that the three-channel values for each pixel have different meanings.

For example, red can be encoded in RGB as 0 0 238 (its graphical representation is shown in Figure 2-40):

(-> (new-mat 1 1 CV_8UC3 rgb/red-2)
    (.dump))
; "[  0,   0, 238]"
(-> (new-mat 30 30 CV_8UC3 rgb/red-2)
    (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig40_HTML.jpg
Figure 2-40

Red in RGB color space

However, when you change the color space and convert the mat to another space, say HSV, Hue-Saturation-Value, the values of the matrix change.

(-> (new-mat 1 1 CV_8UC3 rgb/red-2)
    (cvt-color! COLOR_RGB2HSV)
    (.dump))
(-> (new-mat 30 30 CV_8UC3 rgb/red-2)
    (cvt-color! COLOR_RGB2HSV)
    (u/mat-view))
And of course, the simple display of the mat content is not really relevant anymore; as shown in Figure 2-41, it turned to yellow!!
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig41_HTML.jpg
Figure 2-41

Red in HSV color space

Changing the color space does not change the colors of the mat itself; it changes the way those colors are represented internally.

Why would you want to change colorspace?

While each color space has its own advantages, HSV is widely used because it makes it easy to use value ranges to identify and find shapes of a given color in a mat.

In RGB, as you remember, each value of each channel represents the intensity of red, green, or blue.

In OpenCV terms, let's say we want to see a linear progression of red; we can increase or decrease the values of the two other channels, green and blue.

(->> (range 255)
     (map #(new-mat 20 1 CV_8UC3 (new-scalar % % 255)))
     (hconcat!)
     (u/mat-view))
That shows the line of Figure 2-42.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig42_HTML.jpg
Figure 2-42

Linear intensity of red in RGB

But what if in a picture, we are trying to look for orange-looking shapes? Hmm… How does that orange color look in RGB again?

Yes, it starts to get slightly difficult. Let’s take a different approach and look into the HSV color space.

As mentioned, HSV stands for Hue-Saturation-Value :
  • Hue is the color as you would understand it: it is usually a value between 0 and 360 degrees, even though OpenCV's eight-bit pictures, the ones we use the most, actually use a range between 0 and 180, or half.

  • Saturation is the amount of gray, and it ranges between 0 and 255.

  • Value stands for brightness, and it ranges between 0 and 255.

In that case, let’s see what happens if we draw this ourselves, with what we have learned so far.

The function hsv-mat creates a mat from a hue value.

As you can read, the code switches the color space of the mat twice, once to set the color space to HSV and set the hue, and then back to RGB so we can draw it later with the usual function imshow or mat-view.

(defn hsv-mat [h]
  (let[m (new-mat 20 3 CV_8UC3)]
    (cvt-color! m COLOR_BGR2HSV)
    (set-to m (new-scalar h 255 255))
    (cvt-color! m COLOR_HSV2BGR)
    m))

We have seen the hue ranges from 0 to 180 in OpenCV, so let’s do a range on it and create a concatenated mat of all the small mats with hconcat.

(->> (range 180)
     (map hsv-mat)
     (hconcat!)
     (u/mat-view))
The drawn result is shown in Figure 2-43.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig43_HTML.jpg
Figure 2-43

Hue values

First, you may notice that toward the end of the bar, the color goes back to red again. The hue space is often represented as a cylinder for that reason.

The second thing you may notice is that it is easier to just tell which color you are looking for by providing a hue range; 20 to 25 is usually used for yellow, for example.

Because red sits at both ends of the hue range, it can be annoying to select it with a single range; as a workaround, you can reverse the channel order during the color conversion: instead of using COLOR_BGR2HSV, you can try COLOR_RGB2HSV (Figure 2-44).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig44_HTML.jpg
Figure 2-44

Inverted hue spectrum

This makes it easier to select red colors, with a hue range between 105 and 150.
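
The inverted spectrum of Figure 2-44 can be reproduced by swapping the conversion constants in the hsv-mat function from earlier; the following variant is a sketch based on the same pattern:

```clojure
;; Same as hsv-mat, but the channels are treated as RGB
;; instead of BGR, which reverses the displayed hue spectrum.
(defn hsv-mat-inverted [h]
  (let [m (new-mat 20 3 CV_8UC3)]
    (cvt-color! m COLOR_RGB2HSV)
    (set-to m (new-scalar h 255 255))
    (cvt-color! m COLOR_HSV2RGB)
    m))

(->> (range 180)
     (map hsv-mat-inverted)
     (hconcat!)
     (u/mat-view))
```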

Let’s try that on a red cat . It is hard to find a red cat in nature, so we will use a picture instead.

The cat is loaded with the following snippet (Figure 2-45).

(-> "resources/redcat.jpg"
    (imread IMREAD_REDUCED_COLOR_2)
    (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig45_HTML.jpg
Figure 2-45

Natural red cat

Then, we define a lower red and an upper red bound. The remaining saturation and value components are set to 30 30 (sometimes 50 50) for the lower bound and 255 255 (sometimes 250 250) for the upper one, thus covering everything from very dark and grayed out to the full-blown hue color.

(def lower-red  (new-scalar 105 30 30))
(def upper-red  (new-scalar 150 255 255))

Now, we use the opencv in-range function, which we will see again later in recipe 2-7, to say we want to find colors in a specified range and store the result in a mask, which is initialized as an empty mat.

(def mask (new-mat))
(-> "resources/redcat.jpg"
    (imread IMREAD_REDUCED_COLOR_2)
    (cvt-color! COLOR_RGB2HSV)
    (in-range lower-red upper-red mask))
(u/mat-view mask)
Et voila: the resulting mask mat is in Figure 2-46.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig46_HTML.jpg
Figure 2-46

Mask of the red colors from the picture

We will see that finding-color technique in more detail later, but now you see why you would want to switch color space from RGB to something that is easier to work with, here again HSV.

2.5 Rotating and Transforming Mats

I shall now recall to mind that the motion of the heavenly bodies is circular, since the motion appropriate to a sphere is rotation in a circle.

Nicolaus Copernicus

Problem

You would like to start rotating mats and applying simple linear transformations.

Solution

There are three ways of achieving rotation in OpenCV.

In very simple cases, you can simply use flip, which will flip the picture horizontally, vertically, or both.

Another way is to use the rotate function, a simple function that takes only an orientation constant and rotates the mat according to that constant.

The all-star way is to use the function warp-affine. More can be done with it, but it is slightly harder to master, as it makes use of matrix computation to perform the transformation.

Let’s see how all this works!

How it works

We will make use of a base image throughout this tutorial, so let’s start by loading it now for further reference (Figure 2-47). And of course, yes, you can already load your own at this stage.

(def neko (imread "resources/ai3.jpg" IMREAD_REDUCED_COLOR_8))
(u/mat-view neko)
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig47_HTML.jpg
Figure 2-47

Kitten ready for flipping and rotation

Flipping

Alright; this one is rather easy. You just need to call flip on the image with a parameter telling how you want the flip to be done.

Note here the first-time usage of clone in the image-processing flow.

While flip! does transformation in place, thus modifying the picture that it is passed, clone creates a new mat, so that the original neko is left untouched.

(->  neko
     (clone)
     (flip! 0)
     (u/mat-view))
And the result is shown in Figure 2-48.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig48_HTML.jpg
Figure 2-48

Flipped Neko

Most of the Origami functions work like this. The standard version, here flip, needs an input mat and an output mat, while flip! does the conversion in place and only needs an input/output mat. Also, while flip has no return value, flip! returns the output mat so it can be used in a pipeline.

Similarly, you have already seen cvt-color, and cvt-color!, or hconcat and hconcat!, and so on.
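
As a quick sketch of the non-destructive convention, and assuming the plain flip wrapper mirrors OpenCV's Core.flip(src, dst, flipCode) signature, it would be used like this:

```clojure
;; flip writes into a separate destination mat;
;; neko itself is left untouched.
(let [dst (new-mat)]
  (flip neko dst 0)
  (u/mat-view dst))
```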

Let’s play a bit with Clojure and use a sequence to show all the possible flips on a mat.

(->> [1 -1 0]
     (map #(-> neko clone (flip! %)))
     (hconcat!)
     (u/mat-view))
This time, all the flips are showing (Figure 2-49).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig49_HTML.jpg
Figure 2-49

Flip-flop

Rotation

The function rotate! also takes a rotation parameter and turns the image according to it.

(->  neko
     (clone)
     (rotate! ROTATE_90_CLOCKWISE)
     (u/mat-view))
Note again the use of clone to create an intermediate mat in the processing flow, and the result in Figure 2-50.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig50_HTML.jpg
Figure 2-50

Clockwise-rotated cat

Note also how clone and ->> can be used to create multiple mats from a single source.

(->> [ROTATE_90_COUNTERCLOCKWISE ROTATE_90_CLOCKWISE]
     (map #(-> neko clone (rotate! %)))
     (hconcat!)
     (u/mat-view))
In the final step, the multiple mats are concatenated using hconcat! (Figure 2-51) or vconcat! (Figure 2-52).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig51_HTML.jpg
Figure 2-51

Using hconcat! on rotated mats

../images/459821_1_En_2_Chapter/459821_1_En_2_Fig52_HTML.jpg
Figure 2-52

Using vconcat! on rotated mats

Thanks to the usage of clone , the original mat is left untouched and can still be used in other processing pipelines as if it had just been freshly loaded.

Warp

The last one, as promised, is the slightly more complicated way of rotating a picture: using the opencv function warp-affine along with a rotation matrix.

The rotation matrix is created using the function get-rotation-matrix-2-d and three parameters:
  • a rotation point,

  • a rotation angle,

  • a zoom value.

In this first example, we keep the zoom factor to 1 and take a rotation angle of 45 degrees.

We also make the rotation point the center of the original mat.

(def img (clone neko))
(def rotation-angle 45)
(def zoom 1)
(def matrix
  (get-rotation-matrix-2-d
    (new-point (/ (.width img) 2) (/ (.height img) 2))
    rotation-angle
    zoom))

matrix is also a 2×3 Mat, made of Float values, as you can see if you print it out. The rotation matrix can then be passed to the warp function. Warp also takes a size to create the resulting mat with the proper dimension.

(warp-affine! img matrix (.size img))
(u/mat-view img)
And the 45-degrees-rotated cat is shown in Figure 2-53.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig53_HTML.jpg
Figure 2-53

45 degrees

Let’s now push the fun a bit further with some autogeneration techniques. Let’s create a mat made of the concatenation of multiple rotated cats, each cat rotated by a different angle.

For this purpose, let’s create a function rotate-by! , which takes an image and an angle and applies the rotation internally, using get-rotation-matrix-2-d.

(defn rotate-by! [img angle]
  (let [M2
   (get-rotation-matrix-2-d  
    (new-point (/ (.width img) 2) (/ (.height img) 2)) angle 1)]
    (warp-affine! img M2 (.size img))))

Then you can use that function in a small pipeline. The pipeline takes a range of rotations between 0 and 360, and applies each angle in sequence to the original neko mat.

(->> (range 0 360 40)
     (map #(-> neko clone (rotate-by! % )))
     (hconcat!)
     (u/mat-view))
And the fun concatenated mats are shown in Figure 2-54.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig54_HTML.jpg
Figure 2-54

Range and rotation

Furthermore, let’s enhance the rotate-by! function to also use an optional zoom parameter. If the zoom factor is not specified, its value defaults to 1.

(defn rotate-by!
  ([img angle] (rotate-by! img angle 1))
  ([img angle zoom]
   (let
     [M2
       (get-rotation-matrix-2-d
          (new-point (/ (.width img) 2) (/ (.height img) 2)) angle zoom)]
    (warp-affine! img M2 (.size img)))))

The zoom parameter is then passed to the get-rotation-matrix-2-d function.

This time, the snippet simply does a range over seven random zoom values.

(->> (range 7)
     (map (fn[_] (-> neko clone (rotate-by! 0 (rand 5)))))
     (hconcat!)
     (u/mat-view))
And the result is shown in Figure 2-55. Also note that when the zoom value is too small, default black borders can be seen in the resulting small mat.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig55_HTML.jpg
Figure 2-55

Seven randomly zoomed cats

In the same way, many other image transformations can be done with warp-affine, by passing matrixes created with a transformation matrix using get-affine-transform, get-perspective-transform, and so forth.

Most of the transformations take a source matrix of points and a target matrix of points, and each of the opencv get-* functions creates a transformation matrix that maps one set of points onto the other.

When OpenCV requires a mat of “something,” you can use the origami constructors, matrix-to-matofxxx from the util package.

(def src
  (u/matrix-to-matofpoint2f [[0 0]
                             [5 5]
                             [4 6]]))
(def dst
  (u/matrix-to-matofpoint2f [[2 0]
                             [5 5]
                             [4 6]]))
(def transform-mat (get-affine-transform src dst))

Applying the transformation is done in the same way with warp-affine.

(-> neko clone (warp-affine! transform-mat (.size neko)) u/mat-view)
Figure 2-56 shows the result of the affine transformation .
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig56_HTML.jpg
Figure 2-56

Feline affine transformation

2.6 Filtering Mats

Problem

In contrast to mat transformations, where shapes are distorted and points are moved, filtering applies an operation to each pixel of the original mat.

This recipe is about getting to know the different filtering methods available.

Solution

In this recipe, we will first look at how to create and apply a manual filter by manually changing the values of each pixel in the mat.

Since this is boring, we will then move on to using multiply! to efficiently change the colors and luminosity of the mat by applying a coefficient to each channel value.

Next, we will move to some experiments with filter-2-d, which is used to apply a custom-made filter to the mat.

The recipe will finish with examples of how to use threshold and adaptive-threshold to keep only part of the information in a mat.

How it works

Manual Filter

The first example is a function that, in a three-channel picture, sets the values of all but one channel to 0. That has the effect of completely changing the color of the mat.

Notice how the function internally creates a fully sequential byte array of all the bytes of the mat. 3 is used here because we are supposing that we are working with a mat made of three channels per pixel.

(defn filter-buffer! [image _mod]
  (let [total (* 3 (.total image))
        bytes (byte-array total)]
    (.get image 0 0 bytes)
    (doseq [^int i (range 0 total)]
      (if (not (= 0 (mod (+ i _mod) 3)))
        (aset-byte bytes i 0)))
    (.put image 0 0 bytes)
    image))

The mod check zeroes the bytes of all channels except the selected one, for every pixel in the mat.

We then use a new cat picture (Figure 2-57).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig57_HTML.jpg
Figure 2-57

Beautiful French cat

And simply put our function into action. The value 0 in the parameter means that all but the blue channel will be set to 0.

(->
  "resources/emilie1.jpg"
  (imread)
  (filter-buffer! 0)
  (u/mat-view))
And yes, the resulting picture is overly blue (Figure 2-58).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig58_HTML.jpg
Figure 2-58

Blue cat

Playing with Clojure code generative capability here again, we range over the channels to create a concatenated mat of all three mats (Figure 2-59).

(def source
    (imread "resources/emilie1.jpg"))
(->> (range 0 3)
     (map #(filter-buffer! (clone source) %))
     (hconcat!)
     (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig59_HTML.jpg
Figure 2-59

Three cats

Multiply

It was nice to create a filter manually to see the details of how filters work, but OpenCV actually has a function called multiply that does all of this for you already.

The function takes a mat, created with origami’s matrix-to-mat-of-double, to apply a multiplication to the value of each channel in a pixel.

So, in an RGB-encoded picture, using matrix [1.0 0.5 0.0] means that
  • the blue channel will stay as is; the blue channel value will be multiplied by 1.0

  • the green channel value will be halved; its values will be multiplied by 0.5

  • The red channel value will be set to 0; its values will be multiplied by 0.

Putting this straight into action, we use the following short snippet to turn the white cat into a mellow blue picture (Figure 2-60).

(->
  "resources/emilie1.jpg"
  (imread)
  (multiply! (u/matrix-to-mat-of-double [ [1.0 0.5 0.0]] ))
  (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig60_HTML.jpg
Figure 2-60

Mellow cat

Luminosity

Combined with what you have learned earlier in this chapter about changing the channels, you may remember that while RGB is great at changing the intensity of a specific color channel, changing the luminosity is easily done in the HSV color space.

Here again, we use the multiply function of OpenCV, but this time, the color space of the mat is changed to HSV ahead of the multiplication.

(->
  "resources/emilie1.jpg"
  (imread)
  (cvt-color! COLOR_BGR2HSV)
  (multiply! (u/matrix-to-mat-of-double [[1.0 1.0 1.5]]))
  (cvt-color! COLOR_HSV2RGB)
  (u/mat-view))
Note how the matrix used with multiply only applies a 1.5 factor to the third channel of each pixel, which in the HSV color space is indeed the luminosity. A bright result is shown in Figure 2-61.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig61_HTML.jpg
Figure 2-61

Bright cat

Highlight

The preceding short snippet actually gives you a nice way of highlighting an element in a mat. Say you create a submat, or you have access to one through some shape-finding algorithm; you can apply the luminosity effect to highlight only that part of the whole mat.

This is what the following new snippet does:
  • It loads the main mat into the img variable

  • It creates a processing pipeline focusing on a submat of img

  • The color conversion and the multiply operation are done only on the submat

(def img
  (-> "resources/emilie1.jpg"
      (imread)))
(-> img
    (submat (new-rect 100 50 100 100))
    (cvt-color! COLOR_RGB2HLS)
    (multiply! (u/matrix-to-mat-of-double [[1.0 1.3 1.3]]))
    (cvt-color! COLOR_HLS2RGB))
(u/mat-view img)
The resulting highlight mat is shown in Figure 2-62.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig62_HTML.jpg
Figure 2-62

Cat face

Filter 2d

filter-2-d, the new OpenCV function introduced here, also performs operations on bytes. But this time, it computes the value of each pixel of the target mat from the value of the corresponding source pixel and the values of the surrounding pixels.

To understand how it is possible to do absolutely nothing, let's take an example where the filter keeps the value of each pixel as is, by multiplying the value of the current pixel by 1 and ignoring the values of its neighbors. For this effect, the 3×3 filter matrix has a value of 1 in the center (the target pixel) and 0 for all the other ones, the surrounding neighbor pixels.

(-> "resources/emilie4.jpg"
    (imread)
    (filter-2-d! -1 (u/matrix-to-mat
      [[0 0 0]
       [0 1 0]
       [0 0 0]]))
    (u/mat-view))
This does nothing! Great. We all want more of that. The filter-2-d function call really just keeps the image as is, as shown in Figure 2-63.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig63_HTML.jpg
Figure 2-63

Undisturbed cat

Let’s get back to matrixes and raw pixel values to understand a bit more about how things work under the hood, with an example using a simple gray matrix.

(def m (new-mat 100 100 CV_8UC1 (new-scalar 200.0)))
The preceding snippet, as you know by now, creates a small 100×100 gray mat (Figure 2-64).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig64_HTML.jpg
Figure 2-64

Gray mat

Now, we’ll focus on a portion of that gray mat using submat and apply the filter-2-d function only on the submat.

We take a 3×3 matrix for the operation and use a 0.25 value for the main center pixel. This means that when we apply the filter, the value of the corresponding pixel in the target matrix will be 200×0.25=50.

(def s (submat m (new-rect 10 10 50 50)))
(filter-2-d! s -1
    (u/matrix-to-mat
      [[0 0 0]
       [0 0.25 0]
       [0 0 0]]))
Here, that means the entire submat will be darker than the pixels not located in the submat, as confirmed in Figure 2-65.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig65_HTML.jpg
Figure 2-65

Submat has changed

And if you look at the pixel values themselves on a much smaller mat, you’ll see that the value of the center pixel (the submat) has been divided by exactly 4.

(def m (new-mat 3 3 CV_8UC1 (new-scalar 200.0)))
(def s (submat m (new-rect 1 1 1 1)))
(filter-2-d! s -1 (u/matrix-to-mat
      [[0 0 0]
       [0 0.25 0]
       [0 0 0]]))
(dump m)
;  [200 200 200]
;  [200  50 200]
;  [200 200 200]

What else can you do with filter-2-d? It can be used for art effects as well; you can create your own filters with your custom values. So, go ahead and experiment.

(-> "resources/emilie4.jpg"
    (imread)
    (filter-2-d! -1 (u/matrix-to-mat
     [[17.8824    -43.5161     4.11935]
      [ -3.45565    27.1554    -3.86714]
      [ 0.0299566   0.184309   -1.46709]]))
    (bitwise-not!)
    (u/mat-view))
The preceding filter turns the cat image into a mat ready to receive some brushes of watercolors (Figure 2-66).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig66_HTML.jpg
Figure 2-66

Artful cat

Threshold

Threshold is another filtering technique: it resets each pixel to a fixed value, depending on whether the original value is above or below a given threshold.

Uh, what did you say?

To understand how that works, let’s go back to a small mat at the pixel level again, with a simple 3×3 mat.

(u/matrix-to-mat [[0 50 100] [100 150 200] [200 210 250]])
; [0,   50,  100
;  100, 150, 200
;  200, 210, 250]
We can apply a threshold that sets the value of a pixel to
  • 0, if the original pixel is below 150

  • 250 otherwise

Here is how this works.

(->
  (u/matrix-to-mat [[0 50 100] [100 150 200] [200 210 250]])
  (threshold! 150 250 THRESH_BINARY)
  (.dump))

And the resulting matrix is

[0,   0,   0
 0,   0,   250
 250, 250, 250]

As you can see, only pixels with values strictly greater than 150 are left nonzero.
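The binary threshold rule can be sketched in a few lines (plain Python, not the Origami API; note the strict comparison, which is why the pixel at exactly 150 went to 0):

```python
def threshold_binary(image, thresh, maxval):
    # dst = maxval if src > thresh else 0 (strictly greater than)
    return [[maxval if px > thresh else 0 for px in row] for row in image]

m = [[0, 50, 100], [100, 150, 200], [200, 210, 250]]
print(threshold_binary(m, 150, 250))
# [[0, 0, 0], [0, 0, 250], [250, 250, 250]]
```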

You can create the complementary matrix by using THRESH_BINARY_INV, as seen in the following.

(->
  (u/matrix-to-mat [[0 50 100] [100 150 200] [200 210 250]])
  (threshold! 150 250 THRESH_BINARY_INV)
  (.dump))
; [250, 250, 250
;  250, 250,   0
;    0,   0,   0]

Now, applying this technique to a picture makes things quite interesting, by leaving only the prominent shapes of the content of the mat.

(-> "resources/emilie4.jpg"
  (imread)
  (cvt-color! COLOR_BGR2GRAY)
  (threshold! 150 250 THRESH_BINARY_INV)
  (u/mat-view))
Figure 2-67 shows the resulting mat after applying the threshold to my sister’s white cat.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig67_HTML.jpg
Figure 2-67

Thresholded cat

For reference, and for the next chapter’s adventures, there is also another method named adaptive-threshold, which computes the target value for each pixel depending on the values of its surrounding pixels.

(-> "resources/emilie4.jpg"
  (imread)
  (u/resize-by 0.07)
  (cvt-color! COLOR_BGR2GRAY)
  (adaptive-threshold! 255 ADAPTIVE_THRESH_MEAN_C THRESH_BINARY 9 20)
  (u/mat-view))
  • 255 is the value assigned to a pixel when its threshold test passes.

  • THRESH_BINARY is the thresholding mode; you have just seen THRESH_BINARY and THRESH_BINARY_INV.

  • 9 is the size of the neighborhood (a 9×9 block) used to compute the local mean.

  • 20 is a constant subtracted from that mean to obtain the local threshold.
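The local computation can be sketched as follows (plain Python; edge pixels are handled here by clamping coordinates, which only approximates OpenCV's border replication):

```python
def adaptive_threshold_mean(image, maxval, block_size, c):
    """Each pixel is compared against the mean of its block_size x block_size
    neighborhood minus the constant c, instead of one global threshold."""
    h, w = len(image), len(image[0])
    half = block_size // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # collect the neighborhood, clamping coordinates at the edges
            vals = [image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-half, half + 1)
                    for dx in range(-half, half + 1)]
            local_thresh = sum(vals) / len(vals) - c
            out[y][x] = maxval if image[y][x] > local_thresh else 0
    return out

# a bright pixel on a dark background survives; the uniform area goes to 0
m = [[10, 10, 10], [10, 200, 10], [10, 10, 10]]
print(adaptive_threshold_mean(m, 255, 3, 20))
# [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
```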

Figure 2-68 shows the result of the adaptive threshold.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig68_HTML.jpg
Figure 2-68

Adaptive cat

Adaptive threshold is usually combined with the blurring techniques of Recipe 2-8, which we will study very shortly.

2.7 Applying Simple Masking Techniques

Problem

Masks can be used in a variety of situations where you want to apply mat functions only to a certain part of a mat.

You would like to know how to create masks and how to put them into action.

Solution

We will review again the use of in-range to create masks based on colors.

Then, we will use copy-to and bitwise-and! to apply functions on the main mat, but only on the pixels selected by the mask.

How it works

Let’s start by picking a romantic rose from the garden and loading it with imread.

(def rose
  (-> "resources/red_rose.jpg"
      (imread IMREAD_REDUCED_COLOR_2)))
(u/mat-view rose)
Figure 2-69 shows the flower that will be the source of this exercise.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig69_HTML.jpg
Figure 2-69

Rose

To search for colors, as we have seen, let’s first convert the rose to a different color space.

You know how to achieve this by now. Since the color we will be looking for is red, let’s convert from RGB to HSV.

(def hsv
  (-> rose clone (cvt-color! COLOR_RGB2HSV)))
(u/mat-view hsv)
The converted mat is shown in Figure 2-70.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig70_HTML.jpg
Figure 2-70

Rose in HSV color space

Let’s then filter on red, and since the rose is a bit dark too, let’s make low values for saturation and luminosity on the lower bound red.

(def lower-red  (new-scalar 120 30 15))
(def upper-red (new-scalar 130 255 255))
(def mask (new-mat))
(in-range hsv lower-red upper-red mask)
(u/mat-view mask)

We used that method notably in Recipe 2-4, but we did not take a look at the created mask at the time. Basically, the mask is a mat of the same size as the input of in-range, with pixels set to 0 where the source pixel is not in range and to 255 where it is. Here indeed, in-range works a bit like a threshold.
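The per-pixel rule can be sketched like this (plain Python; the tuples below are made-up HSV values for illustration): a pixel passes only when every channel lies within the inclusive [lower, upper] bounds.

```python
def in_range(image, lower, upper):
    # keep (255) a pixel only when every channel is inside its bounds
    return [[255 if all(lo <= ch <= up for ch, lo, up in zip(px, lower, upper)) else 0
             for px in row]
            for row in image]

# two made-up HSV pixels: one inside the red range, one outside
hsv = [[(125, 100, 100), (60, 100, 100)]]
print(in_range(hsv, (120, 30, 15), (130, 255, 255)))  # [[255, 0]]
```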

The resulting mask is shown in Figure 2-71.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig71_HTML.jpg
Figure 2-71

Mask of the red rose

The mask can now be used along with bitwise-and! and the original source rose so that we copy pixels only where the mask mat has values not equal to 0.

(def res (new-mat))
(bitwise-and! rose res mask)
(u/mat-view res)
And now you have a resulting mat (Figure 2-72) of only the red part of the picture.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig72_HTML.jpg
Figure 2-72

Only the rose

As a small exercise, we’ll change the luminosity of the mat by using convert-to, which applies the following formula to each pixel:

original × alpha + beta
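As a sketch of that formula (plain Python, including the saturation to the 0–255 range that OpenCV applies to 8-bit mats):

```python
def convert_to(image, alpha, beta):
    # new = old * alpha + beta, clamped ("saturated") to [0, 255]
    return [[min(255, max(0, round(px * alpha + beta))) for px in row]
            for row in image]

print(convert_to([[40, 120, 200]], 1, 100))  # [[140, 220, 255]] -- 300 saturates to 255
```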

And so, the following code snippet does just that by calling convert-to, with an alpha of 1 and a beta of 100.

(def res2 (new-mat))
(convert-to res res2 -1 1 100)
(u/mat-view res2)
The resulting masked rose is a slightly brighter version of the original rose (Figure 2-73).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig73_HTML.jpg
Figure 2-73

Bright rose

Let’s copy that resulting bright rose back to the original picture , or a clone of it (Figure 2-74).

(def cl (clone rose))
(copy-to res2 cl mask)
(u/mat-view cl)
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig74_HTML.jpg
Figure 2-74

Coming together

The concepts are nicely coming together.

Finally, let’s try something different, for example, copying a completely different mat in place of the rose, again using a mask.

We can reuse the mask that was created in the preceding, and in a similar fashion use copy-to to copy only specific points of a mat.

To perform the copy, we need the source and the target in copy-to to be of the exact same size, as well as the mask. You will get quite a bad error when this is not the case.

The resizing of mat is done as a first step.

(def cl2
  (imread "resources/emilie1.jpg"))
(resize! cl2 (new-size (cols mask) (rows mask)))

Then, on a clone of the original rose picture, we can perform the copy, specifying the mask as the last parameter of copy-to.

(def cl3
  (clone rose))
(copy-to cl2 cl3 mask)
(u/mat-view cl3)
The cat mat is thus copied onto the rose, but only where the mask allows the copy to happen (Figure 2-75).
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig75_HTML.jpg
Figure 2-75

The cat and the rose

2.8 Blurring Images

I’m giving in to my tendency to want to blur and blend the lines between art and life […]

Lia Ices

Problem

As promised, this is a recipe to review blur techniques. Blurring is a simple and frequent technique used in a variety of situations.

You would like to see the different kinds of blur available, and how to use them with Origami.

Solution

There are four main ways to blur in OpenCV: blur, gaussian-blur, median-blur, and bilateral-filter.

Let’s review each of them one by one.

How it works

As usual, let’s load a base cat picture to use throughout this exercise.

(def neko
  (-> "resources/emilie5.jpg"
      (imread)
      (u/resize-by 0.07)))
(u/mat-view neko)
Figure 2-76 shows another picture of my sister’s cat.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig76_HTML.jpg
Figure 2-76

Cat on bed

Simple Blur and Median Blur

The flow to apply a simple blur is straightforward. Like many other image-processing techniques, it uses a kernel: a square matrix, such as 3×3 or 5×5, centered on the pixel being processed, in which each position is given a coefficient.

In its simplest form, we just need to give it a kernel size for the area to consider for the blur: the bigger the kernel area, the more blurred the resulting picture will be.

Basically, each pixel of the output is the mean of its kernel neighborhood (the pixel itself included).
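That mean computation can be sketched as follows (plain Python; edges are handled by clamping coordinates, a simplification of OpenCV's border handling):

```python
def mean_blur(image, k):
    """Every output pixel is the average of its k x k neighborhood."""
    h, w = len(image), len(image[0])
    half = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in range(-half, half + 1)
                    for dx in range(-half, half + 1)]
            out[y][x] = round(sum(vals) / len(vals))
    return out

# a single bright pixel gets spread evenly over its neighborhood
print(mean_blur([[0, 0, 0], [0, 90, 0], [0, 0, 0]], 3))
# [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```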

(-> neko
    (clone)
    (blur! (new-size 3 3))
    (u/mat-view))
The result can be seen in Figure 2-77.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig77_HTML.jpg
Figure 2-77

Blurred cat on bed

And the bigger the kernel, the more blurred the picture will be. Figure 2-78 shows the result of using increasing kernel sizes with the blur function.

(->> (range 3 10 2)
     (map #(-> neko  clone (u/resize-by 0.5) (blur! (new-size % %))))
     (hconcat!)
     (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig78_HTML.jpg
Figure 2-78

Bigger kernels

Gaussian Blur

This type of blur gives more weight to the center of the kernel. As we will see in the next chapter, it is particularly good at removing noise from pictures.
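As an illustration of that center weighting, here is the classic 3×3 Gaussian approximation, with integer weights summing to 16 (an illustrative kernel, not what gaussian-blur! builds internally for an arbitrary sigma):

```python
# center-heavy weights, falling off with distance from the center
kernel = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

def gaussian_pixel(neigh):
    """Weighted average of a 3x3 neighborhood using the kernel above."""
    total = sum(neigh[y][x] * kernel[y][x] for y in range(3) for x in range(3))
    return round(total / 16)

# the bright center keeps more of its value than with a plain mean,
# which would give round(80 / 9) = 9
print(gaussian_pixel([[0, 0, 0], [0, 80, 0], [0, 0, 0]]))  # 20
```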

(-> neko clone (gaussian-blur! (new-size 5 5) 17) (u/mat-view))
The result of the gaussian blur is shown in Figure 2-79.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig79_HTML.jpg
Figure 2-79

Gaussian blurred cat

Bilateral Filter

Bilateral filters are used when you want to smooth the picture, but at the same time would also like to keep the edges.

What are edges? Edges are contours that define the shapes available in a picture.
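A very rough sketch of the bilateral idea (assumption-laden, and in 1-D for clarity): each neighbor's weight combines closeness in space AND closeness in intensity, so pixels across a sharp edge barely contribute, and the edge survives the smoothing.

```python
import math

def bilateral_1d(signal, sigma_space=1.0, sigma_color=20.0, radius=2):
    out = []
    for i, v in enumerate(signal):
        num = den = 0.0
        for j in range(max(0, i - radius), min(len(signal), i + radius + 1)):
            # spatial closeness x intensity closeness
            w = (math.exp(-((i - j) ** 2) / (2 * sigma_space ** 2)) *
                 math.exp(-((signal[j] - v) ** 2) / (2 * sigma_color ** 2)))
            num += w * signal[j]
            den += w
        out.append(round(num / den))
    return out

# a step edge: neighbors across the jump get near-zero weight,
# so the edge is preserved instead of smeared
print(bilateral_1d([10, 10, 10, 200, 200, 200]))  # [10, 10, 10, 200, 200, 200]
```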

The first example shows a simple usage of this bilateral filter.

(-> neko
    clone
    (bilateral-filter! 9 9 7)
    (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig80_HTML.jpg
Figure 2-80

Bilateral filter

The second example shows a case where we want to keep the edges. Edges can easily be found with the famous OpenCV function canny. We will spend some more time with canny in the next chapter.

For now, let’s focus on the output and lines of Figure 2-81.

(-> neko
    clone
    (cvt-color! COLOR_BGR2GRAY)
    (bilateral-filter! 9 9 7)
    (canny! 50.0 250.0 3 true)
    (bitwise-not!)
    (u/mat-view))
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig81_HTML.jpg
Figure 2-81

Bilateral filter and canny

The third example quickly shows why you would want to use a bilateral filter instead of a simple blur. We keep the same small processing pipeline but this time use a simple blur instead of a bilateral filter.

(-> neko
    clone
    (cvt-color! COLOR_BGR2GRAY)
    (blur! (new-size 3 3))
    (canny! 50.0 250.0 3 true)
    (bitwise-not!)
    (u/mat-view))
The output clearly highlights the problem: the defining lines have disappeared, and Figure 2-82 shows a disappearing cat…
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig82_HTML.jpg
Figure 2-82

Lines and cats have disappeared!

Median Blur

Median blur is a close cousin of the simple blur: instead of the mean, each output pixel takes the median value of its kernel neighborhood.

(-> neko
    clone
    (median-blur! 27)
    (u/mat-view))
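The median rule can be sketched as follows (plain Python, edges clamped): unlike the mean, the median discards outliers entirely, so an isolated noisy pixel vanishes instead of being smeared into its neighbors.

```python
from statistics import median

def median_blur(image, k):
    h, w = len(image), len(image[0])
    half = k // 2
    # each output pixel is the median of its (clamped) k x k neighborhood
    return [[int(median(image[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                        for dy in range(-half, half + 1)
                        for dx in range(-half, half + 1)))
             for x in range(w)] for y in range(h)]

# the lone 255 "noise" pixel disappears completely
print(median_blur([[10, 10, 10], [10, 255, 10], [10, 10, 10]], 3))
# [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
```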

It is worth noting that with a large kernel length, greater than 21 or so, we get something more artistic.

It is less useful for shape detection, as seen in Figures 2-83 and 2-84, but it still combines well with other mats for creative effect, as we will see in Chapter 3.
../images/459821_1_En_2_Chapter/459821_1_En_2_Fig83_HTML.jpg
Figure 2-83

Artistic cat (kernel length 31)

../images/459821_1_En_2_Chapter/459821_1_En_2_Fig84_HTML.jpg
Figure 2-84

Median blur with kernel 7 makes lines disappear

Voila! Chapter 2 has been an introduction to Origami and its ease of use: the setup, the conciseness, the processing pipelines, and the various transformations.

This is only the beginning. Chapter 3 will be taking this setup to the next level by combining principles and functions of OpenCV to find shapes, count things, and move specific parts of mats to other locations.

The future belongs to those who prepare for it today.

Malcolm X
