
1. OpenCV on the JavaVM


A few years ago, while on a trip to Shanghai, a very good friend of mine bought me a massive book on OpenCV. It had tons of photography manipulation samples, real-time video analysis examples, and in-depth explanations that were very appealing, and I just could not wait to get things up and running on my local environment.

OpenCV, as you may know, stands for Open Source Computer Vision ; it is an open source library that gives you ready-to-use implementations of advanced imaging algorithms, going from simple-to-use but advanced image manipulations to shape recognition and real-time video analysis spy powers.

The very core of OpenCV is a multidimensional matrix object named Mat. Mat is going to be our best friend all along this book. Input objects are Mat, operations are run on Mat, and the output of our work is also going to be Mat.

Mat , even though it is going to be our best friend, is a C++ object, and as such, is not the easiest of friends to bring and show around. You have to recompile, install, and be very gentle about the new environment almost anywhere you take him.

But Mat can be packaged.

Mat, even though he is a native (i.e., runs natively), can be dressed to run on the Java Virtual Machine almost without anyone noticing.

This first chapter introduces you to working with OpenCV in some of the main languages of the Java Virtual Machine: Java of course, but also the easier-to-read Scala and the Google-hyped Kotlin.

To run all these different languages in a similar fashion, you will first get (re-?)introduced to a Java build tool named Leiningen, and then you will move on to using simple OpenCV functions with it.

The road of this first chapter will take you to the door of the similarly JVM-based language Clojure, which will give your OpenCV code instant visual feedback for great creativity. That will be for Chapter 2.

1.1 Getting Started with Leiningen

Problem

You remember the write-once-run-everywhere quote, and you would like to compile Java code and run the Java program in an easy and portable manner across different machines. Obviously, you can always revert to using the plain javac command to compile Java code, and pure Java on the command line to run your compiled code, but we are in the 21st century, and hey, you are looking for something more.

Whatever the programming language, setting up your working environment by hand is quite a task, and when you are done, it is hard to share with other people.

Using a build tool, you can define in simple ways what is required to work on your project, and get other users to get started quickly.

You would like to get started with an easy-to-work-with build tool.

Solution

Leiningen is a build tool targeting (mostly) the JavaVM. It is similar in that sense to other famous ones like (Remember? The) Ant, (Oh My God) Maven, and (it used to work) Gradle.

Once the leiningen command is installed, you can use it to create new JavaVM projects based on templates, and run them without the usual headaches.

This recipe shows how to install Leiningen quickly and run your first Java program with it.

How it works

You will start by simply installing Leiningen where you need it, and then creating a blank Java project with it.

Note

Installing Leiningen requires Java 8 to be installed on your machine. Note also that due to the fact that Java 9 is solving old problems by breaking current solutions, we will choose to keep Java 8 for now.

Installing Leiningen

The Leiningen web site can be found at

https://leiningen.org/

At the top of the Leiningen page, you can find the four easy steps to install the tool manually yourself.

So here it goes, on macOS and Unix:
  1. Download the lein script.

  2. Place it on your $PATH where your shell can find it (e.g., ~/bin).

  3. Set it to be executable (chmod a+x ~/bin/lein).

  4. Run it from a terminal, lein, and it will download the self-install package.

And on Windows:
  1. Download the lein.bat script.

  2. Place it in your C:/Windows/System32 folder, using admin permissions.

  3. Open a command prompt and run it, lein, and it will download the self-install package.

On Unix, you can almost always use a package manager; Brew, on macOS, has a package for Leiningen.

On Windows, there is also a good installer, located at https://djpowell.github.io/leiningen-win-installer/ .

If you are a Chocolatey fan, there is a Chocolatey package as well: https://chocolatey.org/packages/Lein .

If you finished the install process successfully, you should be able to check the version of the installed tool from a terminal or command prompt. On the first run, Leiningen downloads its own internal dependencies, but subsequent runs will be fast.

NikoMacBook% lein -v  
Leiningen 2.7.1 on Java 1.8.0_144 Java HotSpot(TM) 64-Bit Server VM

Creating a New OpenCV-Ready Java Project with Leiningen

Leiningen mainly works around a text file, named project.clj, where the metadata, dependencies, plug-ins, and settings of a project are defined in a simple map.

When you execute commands on a project with the lein command, lein looks into that project.clj to find the relevant information it needs.

Leiningen comes with ready-to-use project templates, but in order to understand them properly, let's first walk through an example step by step.

For a leiningen Java project, you need two files:
  • One that describes the project, project.clj

  • One file with some Java code in it, here Hello.java

The directory structure of this simple first project looks like this:

.
├── java
│   └── Hello.java
└── project.clj
1 directory, 2 files

For peace of mind, we will keep the code of this first Java example pretty simple.

public class Hello {
    public static void main(String[] args) {
            System.out.println("beginning of a journey");
    }
}

Now let’s see the content of the text file named project.clj in a bit of detail:

(defproject hellojava "0.1"
  :java-source-paths ["java"]
  :dependencies [[org.clojure/clojure "1.8.0"]]
  :main Hello)

This is actually Clojure code, but let’s simply think of it as a domain specific language (DSL), a language to describe a project in simple terms.

For convenience, each term is described in Table 1-1.
Table 1-1. Leiningen Project Metadata

  Word                             Usage
  defproject                       Entry point to define a project
  hellojava                        The name of the project
  "0.1"                            A string describing the version
  :java-source-paths               A list of directories, relative to the project folder, where you will put Java code files
  :dependencies                    The list of external libraries, and their versions, needed to run the project
  [[org.clojure/clojure "1.8.0"]]  By default, the list contains Clojure, which is needed to run Leiningen. You will put the OpenCV libraries here later on
  :main                            The name of the Java class that will be executed by default

Now go ahead and create the preceding directory and file structure, and copy-paste the content of each file accordingly.

Once done, run your first leiningen command:

lein run

The command will generate the following output on your terminal or console depending on your environment.

Compiling 1 source files to /Users/niko/hellojava/target/classes
beginning of a journey

Whoo-hoo! The journey has begun! But, wait, what happened just there?

A bit of magic was involved. The lein run command makes Leiningen execute the main method of a compiled Java class. The class to be executed was defined in the project's metadata, and as you remember, that would be Hello.

Before executing the compiled Java class there is a need to… compile it. By default, Leiningen performs compilation before the run command, and this is where the "Compiling …" message came from.

Along the way, you may have noticed that a target folder was created inside your project folder, with a classes folder, and a Hello.class file inside.

.
├── dev
├── java
│   └── Hello.java
├── project.clj
├── src
├── target
│   ├── classes
│   │   ├── Hello.class

The target/classes folder is where the compiled Java bytecode goes by default, and that same target folder is then added to the Java execution runtime (classpath).

The execute phase triggered by “lein run” follows, and the code block from the main method of the Hello class is executed; then the message prints out.

beginning of a journey.

You may ask: “What if I have multiple Java files, and want to run a different one than the main one?”

This is a very valid question, as you will be probably doing that a few times in this first chapter to write and run the different code samples.

Say you write a second Java class in a file named Hello2.java in the same Java folder, along with some updated journey content.

import static java.lang.System.out;
public class Hello2 {
    public static void main(String[] args) {
            String[] text = new String[]{
                    "Sometimes it's the journey that ",
                    "teaches you a lot about your destination.",
                    "--",
                    "- Drake"};
            for(String t : text) out.println(t);
    }
}

To run that exact main method from the Hello2.java file, you would call lein run with the optional -m option, where m stands for main, followed by the name of the main Java class to use.

lein run -m Hello2

This gives you the following output:

Compiling 1 source files to /Users/niko/hellojava/target/classes
Sometimes it's the journey that
teaches you a lot about your destination.
--
- Drake

Great. With those instructions, you now know enough to go ahead and run your first OpenCV Java program.

1.2 Writing Your First OpenCV Java Program

Problem

You would like to use Leiningen to have a Java project setup where you can use OpenCV libraries directly.

You would like to run Java code making use of OpenCV, but you got headaches already (while compiling the opencv wrapper yourself), so you would like to make this step as simple as possible.

Solution

Recipe 1-1 presented Leiningen to help you with all the basic required setup. From there, you can add a dependency on the OpenCV C++ library and its Java wrapper.

How it works

For this first OpenCV example, we will get set up with a Leiningen project template, where the project.clj file and the project folders are already defined for you. Leiningen project templates do not have to be downloaded separately; they can be called upon to create new projects using Leiningen's integrated new command.

To create this project on your local machine, let's call the lein new command.

Whether on Windows or Mac, the command is the same:

lein new jvm-opencv hellocv
What the preceding command basically does is
  1. Create a new project folder named hellocv.

  2. Create directories and files, with the content of the folder based on a template named jvm-opencv.

After running the command, the rather simple following project files are created:

.
├── java
│   └── HelloCv.java
└── project.clj

That does not seem too impressive, but actually those are almost the same as the two files from the previous recipe: a project descriptor and a Java file.

The project.clj content is a slightly modified version from before:

(defproject hellocv "0.1.0-SNAPSHOT"
  :java-source-paths ["java"]
  :main HelloCv
  :repositories [
   ["vendredi" "http://hellonico.info:8081/repository/hellonico/"]]
  :dependencies [[org.clojure/clojure "1.8.0"]
                 [opencv/opencv "3.3.1"]
                 [opencv/opencv-native "3.3.1"]])

You probably have noticed straightaway three new lines you have not seen before.

First of all is the repositories section, which indicates a new repository location to find dependencies. The one provided here is the author’s public repository where custom builds of opencv (and others) can be found.

The opencv core dependency and the native dependency have been compiled and uploaded on that public repository and provided for your convenience.

The two dependencies are as follows:
  • opencv

  • opencv-native

Why two dependencies, you might ask?

Well, one of them, opencv-native, contains the OpenCV C++ code compiled for macOS, Windows, and Linux. The other, opencv, is the platform-independent Java wrapper that calls into that platform-dependent C++ code.

This is actually the way the opencv code is delivered when you do the compilation of OpenCV yourself.

For convenience, the packaged opencv-native dependency contains the native code for Windows, Linux, and macOS.

The Java code in file HelloCv.java , located in the Java folder, is a simple helloworld kind of example, which will simply load OpenCV native libraries; its content is shown in the following.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
public class HelloCv {
    public static void main(String[] args) throws Exception {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME); // ①
        Mat hello = Mat.eye(3,3, CvType.CV_8UC1); // ②
        System.out.println(hello.dump()); // ③
    }
}
What does the code do?
  • ① It tells the Java runtime to load the native OpenCV library via loadLibrary. This is a required step when working with OpenCV and needs to be done only once in the lifetime of your application (a more defensive variant is sketched right after this list).

  • ② A native Mat object can then be created via a Java object.

  • Mat is basically an image container, like a matrix, and here we tell it to be of size 3×3: a height of three pixels and a width of three pixels, where each pixel is of type 8UC1, a weird name that simply means one channel (C1) of eight-bit unsigned integers (8U).

  • ③ Finally, it prints the content of the mat (matrix) object.
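
The loadLibrary call is also the step that fails first when the native library cannot be found by the JVM. The following is a minimal, hypothetical variant of the same main method (the class name HelloCvSafe is made up) that simply catches the UnsatisfiedLinkError thrown in that case and prints a friendlier hint; the rest of the program stays the same.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
public class HelloCvSafe {
    public static void main(String[] args) {
        try {
            // fails with an UnsatisfiedLinkError when the JVM
            // cannot locate the native OpenCV library
            System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        } catch (UnsatisfiedLinkError e) {
            System.err.println("Could not load " + Core.NATIVE_LIBRARY_NAME
                + "; check the opencv-native dependency in project.clj");
            return;
        }
        Mat hello = Mat.eye(3, 3, CvType.CV_8UC1);
        System.out.println(hello.dump());
    }
}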

The project is ready to be run as you have done before, and whichever platform you are running on, the same leiningen run command will do the job:

NikoMacBook% lein run

The command output is shown in the following.

Retrieving opencv/opencv-native/3.3.1/opencv-native-3.3.1.jar from vendredi
Compiling 1 source files to /Users/niko/hellocv2/target/classes
[  1,   0,   0;
   0,   1,   0;
   0,   0,   1]

The 1s and 0s you see printed are the actual content of the matrix that was created.

1.3 Automatically Compiling and Running Code

Problem

While the lein command is pretty versatile, you would like to start the process in the background and get the code to be automatically run for you as you change the code.

Solution

Leiningen has an auto plug-in, lein-auto. Once enabled, that plug-in watches files matching a pattern for changes and triggers a command. Let's use it!

How it works

When you create a project using the jvm-opencv template (see Recipe 1-2), you will notice that the content of the file project.clj is slightly longer than presented in the recipe. It was actually more like this:

(defproject hellocv3 "0.1.0-SNAPSHOT"
  :java-source-paths ["java"]
  :main HelloCv
  :repositories [
   ["vendredi" "http://hellonico.info:8081/repository/hellonico/"]]
  :plugins [[lein-auto "0.1.3"]]
  :auto {:default {:file-pattern #".(java)$"}}
  :dependencies [[org.clojure/clojure "1.8.0"]
                 [opencv/opencv "3.3.1"]
                 [opencv/opencv-native "3.3.1"]])

Two extra lines have been added. One line is the addition of the lein-auto plug-in in a :plugins section of the project metadata.

The second line, the :auto section, defines the file pattern to watch for changes; here, files that end in .java will trigger the auto subcommand to rerun.

Let’s go back to the command line, where now we will be prepending the auto command before the usual run command. The command you need to write is now as follows:

lein auto run

The first time you run it, it will give the same output as the previous recipe, but with some added extra lines:

auto> Files changed: java/HelloCv.java
auto> Running: lein run
Compiling 1 source files to /Users/niko/hellocv3/target/classes
[  1,   0,   0;
   0,   1,   0;
   0,   0,   1]
auto> Completed.

Nice; note here that the leiningen command has not finished running and is actually listening for file changes.

From there, go ahead and update the Java code of HelloCv, with a Mat object of a different size. So replace the following line:

Mat hello = Mat.eye(3,3, CvType.CV_8UC1);

with

Mat hello = Mat.eye(5,5, CvType.CV_8UC1);

The updated code says that the Mat object is now a 5×5 matrix, each pixel still being represented by a one-byte integer.

And look at the terminal or console where the leiningen command was running to see the output being updated:

auto> Files changed: java/HelloCv.java
auto> Running: lein run
Compiling 1 source files to /Users/niko/hellocv3/target/classes
[  1,   0,   0,   0,   0;
   0,   1,   0,   0,   0;
   0,   0,   1,   0,   0;
   0,   0,   0,   1,   0;
   0,   0,   0,   0,   1]
auto> Completed.

Note how this time the printed matrix of the mat object is made of five rows of five columns.

1.4 Using a Better Text Editor

Problem

You may have used your own text editor to type in code up to now, but you would like a slightly better working environment for working with OpenCV.

Solution

While this is not a final solution and other environments may be more productive for you, I found a simple setup based on GitHub's Atom editor to be quite effective. That editor will be of great use as well when typing code in real time.

One of the main reasons to enjoy working in Atom is that pictures are reloaded on the fly, so that when working on an image, updates to that image will be automatically reflected directly on your screen. As far as I know, this is the only editor with such support. Let's see how it works!

How it works

Installing the base Atom editor should be a simple matter of going to the web site and downloading the software, so simply go ahead and download the installer.

https://atom.io/

Not only is Atom a good editor by default, but it is easy to add plug-ins to match your work style.

Here for OpenCV, we would like to add three plug-ins:
  • one generic IDE plug-in (ide-ui)

  • one plug-in for the Java language, which builds on the generic IDE plug-in

  • one plug-in for a terminal inside the editor (ide-terminal)

The three plug-ins are shown in Figures 1-1, 1-2, and 1-3.
Figure 1-1. Atom ide-ui plug-in

Figure 1-2. Atom Java language plug-in

Figure 1-3. Atom ide-terminal plug-in

The terminal that opens at the bottom will let you type the same “lein auto run” command, so you do not need a separate command prompt or terminal window for the autorunning feature of Leiningen. That keeps all your code writing in a single window.

Ideally, your Atom layout would look something like either Figure 1-4 or Figure 1-5.
Figure 1-4. Atom IDE standard layout

Figure 1-5. Atom IDE clean layout

Note that autocompletion for Java is now enabled through Atom’s Java plug-in too, so typing function names will show a drop-down menu of available options, as shown in Figure 1-6:
Figure 1-6. Atom IDE autocompletion

Finally, image updates cannot quite be seen in real time, but they can be seen on each file save: if you keep the image file open in a background pane, it is refreshed every time it is written, a write being done with OpenCV's imwrite function.

So, with the leiningen auto run command running in the background, when the Java file is saved, the compilation/run cycle is triggered and the image is updated.

Figure 1-7 shows how the picture onscreen is visually updated, even without a single user action (apart from file save).
Figure 1-7. Automatically updated image on Java file save

You will see that later in this chapter, but for reference right now, here is the code snippet changing the color of one subsection of the Mat object using the submat function.

import org.opencv.core.Core;
import org.opencv.core.CvType;
import org.opencv.core.Mat;
import org.opencv.core.Scalar;
import static org.opencv.imgcodecs.Imgcodecs.imwrite;
public class HelloCv {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);
        Mat hello = new Mat(150,150, CvType.CV_8UC3);
        hello.setTo(new Scalar(180,80,250));
        Mat sub = hello.submat(0,50,0,50);
        sub.setTo(new Scalar(0,0,100));        
        imwrite("dev/hello.png", hello);
    }
}

You now have a setup to enjoy full-blown OpenCV powers. Let’s use them.

1.5 Learning the Basics of the OpenCV Mat Object

Problem

You would like to get a better grasp of the OpenCV object Mat, since it is at the core of the OpenCV framework.

Solution

Let’s review how to create mat objects and inspect their content through a few core samples.

How it works

This recipe needs the same setup that was completed in the previous recipe.

To create a very simple matrix with only one channel per “dot,” you would usually use one of the following three static functions from the Mat class: zeros, eye, ones.

It is easier to see what each of those does by looking at their output in Table 1-2.
Table 1-2. Static Functions to Create One Channel per Pixel Mat

  zeros  Mat.zeros(3, 3, CV_8UC1)
         When you want the new mat to be all zeros.
         Output: [0, 0, 0; 0, 0, 0; 0, 0, 0]

  eye    Mat.eye(3, 3, CV_8UC1)
         When you want all zeros except when x=y.
         Output: [1, 0, 0; 0, 1, 0; 0, 0, 1]

  ones   Mat.ones(3, 3, CV_8UC1)
         When you want all ones.
         Output: [1, 1, 1; 1, 1, 1; 1, 1, 1]

  (any of the preceding)  Mat.ones(1, 1, CV_8UC3)
         Each pixel is of 3 channels.
         Output: [1, 0, 0]

If you have used OpenCV before (and if you haven't yet, please trust us), you will remember that CV_8UC1 is the OpenCV slang for eight bits unsigned per channel, with one channel per pixel, so a 3×3 matrix will therefore have nine values.

Its cousin CV_8UC3, as you would have guessed, assigns three channels per pixel, and thus a 1×1 Mat object has three values. You would usually use this type of Mat when working with Red, Green, Blue (RGB) images. It is also the default format when loading images.

This first example simply shows three ways of loading a one-channel-per-pixel Mat object and one way to load a three-channels-per-pixel Mat object.

import org.opencv.core.Core;
import org.opencv.core.Mat;
import static java.lang.System.loadLibrary;
import static java.lang.System.out;
import static org.opencv.core.CvType.CV_8UC1;
import static org.opencv.core.CvType.CV_8UC3;
public class SimpleOpenCV {
    static {
            loadLibrary(Core.NATIVE_LIBRARY_NAME);
    }
    public static void main(String[] args) {
        Mat mat = Mat.eye(3, 3, CV_8UC1);
        out.println("mat = ");
        out.println(mat.dump());
        Mat mat2 = Mat.zeros(3,3,CV_8UC1);
        out.println("mat2 = ");
        out.println(mat2.dump());
        Mat mat3 = Mat.ones(3,3,CV_8UC1);
        out.println("mat3 = " );
        out.println(mat3.dump());
        Mat mat4 = Mat.zeros(1,1,CV_8UC3);
        out.println("mat4 = " );
        out.println(mat4.dump());
    }
}

The last Mat object, mat4, is the one containing three channels per pixel. As you can see when you dump the object, an array of three zeros is created.

CV_8UC1 and CV_8UC3 are the two most common types of format per pixel, but many others exist and are defined in the CvType class.

When doing mat-to-mat computations, you may also need to use floating-point values per channel. Here is how to achieve that:

Mat mat5 = Mat.ones(3,3,CvType.CV_64FC3);
out.println("mat5 = " );
out.println(mat5.dump());

And the output matrix:

mat5 =
[1, 0, 0, 1, 0, 0, 1, 0, 0;
 1, 0, 0, 1, 0, 0, 1, 0, 0;
 1, 0, 0, 1, 0, 0, 1, 0, 0]

In many situations, you would probably not create the matrix from scratch yourself, but simply load the image from a file.

1.6 Loading Images from a File

Problem

You would like to load an image file to convert it to a Mat object for digital manipulation.

Solution

OpenCV has a simple function to read an image from a file, named imread. It usually takes only a file path to the image on the local file system, but it can also take a flag parameter. Let's see how to use the different forms of imread.

How it works

The imread function is located in the Imgcodecs class of the package of the same name.

Its standard usage is down to simply giving the path of the file. Supposing you have downloaded an image of kittens from a Google search and stored it in images/kitten.jpg (Figure 1-8), the code gives the following:

Mat mat = Imgcodecs.imread("images/kitten.jpg");
out.println("mat ="+mat.width()+" x "+mat.height()+","+mat.type());
Figure 1-8. Running kitten

If the kitten image is found and loaded properly, the following message will be shown on the output of the console:

mat =350 x 234,16

Note that if the file is not found, no exception is thrown, and no error message is reported, but the loaded Mat object will be empty, with no rows and no columns:

mat =0 x 0,0

Depending on how you code, you may feel the need to wrap the loading code with a size check to make sure that the file was found and the image decoded properly.
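
A minimal sketch of such a check, reusing the kitten image and imports from the snippet above: Mat has an empty() method that tells you whether imread actually decoded something.

Mat mat = Imgcodecs.imread("images/kitten.jpg");
if (mat.empty()) {
    // imread returns an empty Mat instead of throwing
    // when the file is missing or cannot be decoded
    throw new IllegalArgumentException("Could not load images/kitten.jpg");
}
out.println("mat ="+mat.width()+" x "+mat.height()+","+mat.type());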

It is also possible to load the picture in black-and-white mode (Figure 1-9). This is done by passing another parameter to the imread function.

Mat mat = Imgcodecs.imread(
  "images/kitten.jpg",
  Imgcodecs.IMREAD_GRAYSCALE);
Figure 1-9. Grayscale loading

That other parameter is taken from the same Imgcodecs class.

Here, IMREAD_GRAYSCALE forces the reencoding of the image on load, and turns the Mat object into grayscale mode.

Other options can be passed to the imread function for some specific handling of channels and depth of the image; the most useful of them are described in Table 1-3.
Table 1-3. Image Reading Options

  IMREAD_REDUCED_GRAYSCALE_2, IMREAD_REDUCED_COLOR_2,
  IMREAD_REDUCED_GRAYSCALE_4, IMREAD_REDUCED_COLOR_4,
  IMREAD_REDUCED_GRAYSCALE_8, IMREAD_REDUCED_COLOR_8
      Reduce the size of the image on load by a factor of 2, 4, or 8, meaning the width and the height are divided by that number. At the same time, specify the color or grayscale mode: grayscale means one-channel grayscale mode, color means three-channel RGB.

  IMREAD_LOAD_GDAL
      Use the GDAL driver to load raster format images.

  IMREAD_GRAYSCALE
      Load the picture in one-channel grayscale mode.

  IMREAD_IGNORE_ORIENTATION
      If set, do not rotate the image according to EXIF's orientation flag.

Figure 1-10 shows what happens when the image is loaded in REDUCED_COLOR_8.
Figure 1-10. Reduced size loading
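
If you want to check the factor yourself, here is a small sketch, again with the kitten image, that loads it with IMREAD_REDUCED_COLOR_8 and prints the new dimensions; the width and height come out roughly eight times smaller than the 350 x 234 original.

Mat reduced = Imgcodecs.imread(
  "images/kitten.jpg",
  Imgcodecs.IMREAD_REDUCED_COLOR_8);
// each dimension is divided by 8 on load
out.println("reduced = " + reduced.width() + " x " + reduced.height());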

As you may have noticed, no indication of the image format was needed when loading the image with imread. OpenCV does all the image decoding, depending on a combination of the file extension and binary data found in the file.

1.7 Saving Images into a File

Problem

You want to be able to save an image using OpenCV.

Solution

OpenCV has a sibling function to imread used to write files, named imwrite, similarly hosted by the Imgcodecs class. It usually takes only a file path on the local file system pointing to where to store the image, but it can also take some parameters to modify the way the image is stored.

How it works

The function imwrite works similarly to imread, except of course it also needs the Mat object to store, along with the path.

The first code snippet simply saves the cat image that was loaded in color:

Mat mat = imread("images/glasses.jpg");
imwrite("target/output.jpg", mat);
Figure 1-11 shows the content of output.jpg picture.
Figure 1-11. JPEG formatted image on disk

Now, you can also change the format while saving the Mat object simply by specifying a different extension. For example, to save in Portable Network Graphics (PNG) format, just use a different extension when calling imwrite.

Mat mat = imread("images/glasses.jpg");
imwrite("target/output.png", mat);

Without working with encoding and crazy byte manipulation, your output file is indeed saved in PNG format.

You can give saving parameters to imwrite, the most needed ones being compression parameters.

For example, as per the official documentation:
  • For JPEG, you can use the parameter CV_IMWRITE_JPEG_QUALITY, whose value is in the range 0 to 100 (the higher, the better the quality). The default value is 95.

  • For PNG, it can be the compression level (CV_IMWRITE_PNG_COMPRESSION), from 0 to 9. A higher value means a smaller size and a longer compression time. The default value is 3.

Compressing the output file by using a compression parameter is done through another opencv object named MatOfInt, which is a matrix of integers, or in simpler terms, an array.

MatOfInt moi = new MatOfInt(CV_IMWRITE_PNG_COMPRESSION, 9);
Imgcodecs.imwrite("target/output.png", mat, moi);

This will enable compression on the PNG, and by checking the file size you can see that the PNG file is at least 10% smaller.
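
The same MatOfInt mechanism works for the JPEG quality parameter mentioned in the list above; a minimal sketch, still with the glasses picture and assuming the same static imports as the PNG snippet, where 50 is just an example value:

Mat mat = imread("images/glasses.jpg");
// quality between 0 and 100; lower means a smaller but uglier file
MatOfInt quality = new MatOfInt(CV_IMWRITE_JPEG_QUALITY, 50);
Imgcodecs.imwrite("target/output-low.jpg", mat, quality);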

1.8 Cropping a Picture with Submat

Problem

You would like to save only a given subsection of an image.

Solution

The main focus of this short recipe is to introduce the submat function. Submat gives you back a Mat object that is a submatrix or subsection of the original.

How it works

We will take a cat picture and extract only the part we want with submat. The cat picture used for this example is shown in Figure 1-12.
Figure 1-12. A cat

Of course, you can use whichever cat picture you like. Let's start by reading the file normally, with imread.

Mat mat = Imgcodecs.imread("images/cat.jpg");
out.println(mat);

As you may notice, println gives you some info about the Mat object itself. Most of it is informative memory addressing, so you could hack at the memory directly, but it also shows whether the Mat object is a submat or not. In this case, since this is the original picture, isSubmat is set to false.

Mat [ 1200*1600*CV_8UC3,
  isCont=true,
  isSubmat=false,
  nativeObj=0x7fa7da5b0a50,
  dataAddr=0x122c63000 ]
Autocompletion in the Atom editor presents you with the different versions of the submat function, as shown in Figure 1-13.
Figure 1-13. Submat with different parameters

Now let’s use the submat function in its first form, where submat takes start and end parameters, first for rows, then for columns:

Mat submat = mat.submat(250,650,600,1000);
out.println(submat);

Printing the object shows that the newly created Mat object is indeed a submat.

Mat [ 400*400*CV_8UC3,
isCont=false,
isSubmat=true,
nativeObj=0x7fa7da51e730,
dataAddr=0x122d88688 ]

You can act directly on the submat just like a regular Mat, so you could start for example by saving it.

Imgcodecs.imwrite("output/subcat.png", submat);
With the range nicely adapted to the original cat picture, the output of the saved image is shown in Figure 1-14:
Figure 1-14. Sub-cat

The nice thing is that not only can you act on the submat, but changes to it are also reflected on the original Mat object. So if you apply a blur effect to the cat’s face on the submat and save the whole mat (not the submat), only the cat’s face will look blurry. See how that works:

Imgproc.blur(submat,submat, new Size(25.0, 25.0));
out.println(submat);
Imgcodecs.imwrite("output/blurcat.png", mat);

blur is a key function of the class org.opencv.imgproc.Imgproc. It takes a Size object as a parameter, to specify the surface to consider around each pixel when applying the blur effect; the bigger the size, the stronger the blur effect.
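
To convince yourself of the effect of the Size parameter, here is a small sketch that blurs the same face region of two fresh copies of the cat picture with a small and a large kernel (the output file names are just examples):

Mat light = Imgcodecs.imread("images/cat.jpg");
Mat lightFace = light.submat(250, 650, 600, 1000);
Imgproc.blur(lightFace, lightFace, new Size(5.0, 5.0));    // barely visible blur
Imgcodecs.imwrite("output/blurcat-5.png", light);

Mat heavy = Imgcodecs.imread("images/cat.jpg");
Mat heavyFace = heavy.submat(250, 650, 600, 1000);
Imgproc.blur(heavyFace, heavyFace, new Size(75.0, 75.0));  // very strong blur
Imgcodecs.imwrite("output/blurcat-75.png", heavy);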

See the result in Figure 1-15, where if you look carefully, only the face of the cat is actually blurred, and this is the exact face we saved earlier on.
Figure 1-15. Poor blurred cat

As you have seen in the contextual helper menu for the submat function, there are two other ways to grab the submat.

One way is with two ranges, the first one for a range of rows (y, or height), and the second one for a range of columns (x, or width), both created using the Range class.

Mat submat2 = mat.submat(new Range(250,650), new Range(600,1000));

Another way is with a rectangle, where you give the top left coordinates first, then the size of the rectangle.

Mat submat3 = mat.submat(new Rect(600, 250, 400, 400));

This last way of using submat is one of the most used, since it is the most natural. Also, when finding objects within a picture, you can use the bounding box of that object, whose type is a Rect object.
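
For example, once you have found such an object as a contour (a MatOfPoint, which you will meet in Recipe 1-12), Imgproc.boundingRect gives you exactly that Rect, which you can feed straight to submat. A small sketch, assuming a contour variable obtained elsewhere:

// contour is a MatOfPoint, for example one element of the list
// returned by findContours (see Recipe 1-12)
Rect box = Imgproc.boundingRect(contour);
Mat detected = mat.submat(box);
Imgcodecs.imwrite("output/detected.png", detected);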

Note that, as you have seen, changing a submat has collateral effects on the underlying Mat. So if you decide to set the color of a submat to blue:

submat3.setTo(new Scalar(255,0,0));
Imgcodecs.imwrite("output/submat3_2.png", submat3);
Imgcodecs.imwrite("output/submat3_3.png", submat2);
Imgcodecs.imwrite("output/submat3_4.png", mat);
Then Figure 1-16 shows the blue cat face of both submat3_2.png and submat3_3.png.
Figure 1-16. Blue cat face

But those changes to the submat also update the underlying mat, as shown in Figure 1-17!!
Figure 1-17. Blue cat face in big picture

So the idea here is to be careful where and when you use submat; most of the time, though, it is a powerful technique for image manipulation.

1.9 Creating a Mat from Submats

Problem

You would like to create a Mat manually from scratch, made of different submats.

Solution

setTo and copyTo are two important functions of OpenCV. setTo will set the color of all the pixels of a mat to the color specified, and copyTo will copy an existing Mat to another one. When using either setTo or copyTo you will probably work with submats, thus affecting only parts of the main mat.

To use setTo, we will use colors defined using OpenCV’s Scalar object, which, for now, will be created with a set of values in the RGB color space. Let’s see all this in action.

How it works

The first example will use setTo to create a mat made of submats, each of them of a different color.

Mat of Colored Submats

First, let’s define the colors using RGB values. As mentioned, colors are created using a Scalar object with three int values, where each value is between 0 and 255.

The first value is the blue intensity, the second is the green intensity, and the last one is the red intensity. Thus, to create red, green, or blue, you set its main channel to the maximum intensity, 255, and the others to 0.

See how it goes for red, green, and blue:

Scalar RED   = new Scalar(0, 0, 255); // Blue=0, Green=0, Red=255
Scalar GREEN = new Scalar(0, 255, 0); // Blue=0, Green=255, Red=0
Scalar BLUE  = new Scalar(255, 0, 0); // Blue=255, Green=0, Red=0

To define cyan, magenta, and yellow, let’s think of those colors as the complementary colors of RGB, so we set the other channels to the max value of 255, and the main one to 0.

Cyan is complementary to red, so the red channel value is set to 0, and the other two channels are set to 255:

Scalar CYAN    = new Scalar(255, 255, 0);

Magenta is complementary to green, and yellow to blue. These are defined as follows:

 Scalar MAGENTA = new Scalar(255, 0, 255);
 Scalar YELLOW  = new Scalar(0, 255, 255);

Alright. We have the colors all set up; let’s use them to create a mat of all the defined colors. The following setColors method takes the main mat object and fills a row with either the main RGB colors or the complementary colors CMY.

See how the submat content is filled using the setTo function on a submat with a scalar color.

  static void setColors(Mat mat, boolean comp, int row) {
      for(int i = 0 ; i < 3 ; i ++) {
        Mat sub = mat.submat(row*100, row*100+100, i*100, i*100+100);
        if(comp) {  // RGB
          if(i==0) sub.setTo(RED);
          if(i==1) sub.setTo(GREEN);
          if(i==2) sub.setTo(BLUE);
        } else {    // CMY
          if(i==0) sub.setTo(CYAN);
          if(i==1) sub.setTo(MAGENTA);
          if(i==2) sub.setTo(YELLOW);
        }
      }
}

Then, the calling code creates the mat in three-channel RGB color mode and fills the first and second rows.

    Mat mat = new Mat(200,300,CV_8UC3);
    setColors(mat, false, 1);
    setColors(mat, true, 0);
    Imgcodecs.imwrite("output/rgbcmy.jpg", mat);
The result is a mat made of two rows, each of them filled with the created colored submats, as shown in Figure 1-18.
Figure 1-18. Mat of colored submats

Mat of Picture Submats

Colors are great, but you will probably be working with pictures. This second example is going to show you how to use submats filled with a picture content.

First start by creating a 200×200 mat and two submats: one for the top of the main mat, one for the bottom of the main mat.

int width = 200,height = 200;
Mat mat = new Mat(height,width,CV_8UC3);
Mat top = mat.submat(0,height/2,0,width);
Mat bottom = mat.submat(height/2,height,0,width);

Let’s then create another small Mat by loading a picture into it and resizing it to the size of the top (or bottom) submat. Here you are introduced to the resize function of the Imgproc class.

Mat small = Imgcodecs.imread("images/kitten.jpg");
Imgproc.resize(small,small,top.size());
You are free to choose the picture, of course; for now, let’s suppose the loaded small mat looks like Figure 1-19:
Figure 1-19. Kitten power

The small cat mat is then copied to both the top and bottom submats.

Note that the preceding resize step is crucial; the copy succeeds because the small mat and the submat sizes are identical, and thus no problem occurs while copying.

small.copyTo(top);
small.copyTo(bottom);
Imgcodecs.imwrite("output/matofpictures.jpg", mat);
This gives a matofpictures.jpg file of two kittens as shown in Figure 1-20.
Figure 1-20. Double kitten power

If you forget to resize the small mat, the copy fails very badly, resulting in something like Figure 1-21.
Figure 1-21. Kitten gone wrong

1.10 Highlighting Objects in a Picture

Problem

You have a picture with a set of objects, animals, or shapes that you would like to highlight, maybe because you want to count them.

Solution

OpenCV offers a famous function named Canny, which can highlight lines in a picture. You will see how to use Canny in more detail later in this chapter; for now, let’s focus on the basic steps using Java.

OpenCV’s Canny works on a grayscale mat for contour detection. While you can leave that conversion to Canny, let’s explicitly change the color space of the input mat to grayscale.

Changing color space is easily done with OpenCV using the cvtColor function found in the Imgproc class.

How it works

Suppose you have a picture of tools as shown in Figure 1-22.
Figure 1-22. Tools at work

We start by loading the picture into a Mat as usual:

Mat tools = imread("images/tools.jpg");

We then convert the color of that tools mat using the cvtColor function, which takes a source mat, a target mat, and a target color space. Color space constants are found in the Imgproc class and have a prefix like COLOR_.

So to turn the mat to black and white, you can use the COLOR_RGB2GRAY constant.

cvtColor(tools, tools, COLOR_RGB2GRAY);
The black-and-white picture is ready to be sent to canny. Parameters for the canny function are as follows:
  • Source mat

  • Target mat

  • Low threshold: we will use 150.0

  • High threshold: usually approximately low threshold*2 or low threshold*3

  • Aperture: an odd value between 3 and 7; we will use 3. The higher the aperture, the more contours will be found.

  • L2Gradient value, for now set to true

Canny computes a gradient value for each pixel, using a convolution matrix over the center pixel and its neighboring pixels. If the gradient value is higher than the high threshold, the pixel is kept as an edge. If it is between the two thresholds, it is kept only if it is connected to a pixel with a high gradient.

Now, we can call the Canny function.

Canny(tools,tools,150.0,300.0,3,true);
imwrite("output/tools-01.png", tools);
This outputs a picture as shown in Figure 1-23:
Figure 1-23. Canny tools

For the sake of your eyes, the printer, and the trees, it may sometimes be easier to draw the inverted Mat, where white is turned to black and black is turned to white. This is done using the bitwise_not function from the Core class.

Mat invertedTools = tools.clone();
bitwise_not(invertedTools, invertedTools);
imwrite("output/tools-02.png", invertedTools);
The result is shown in Figure 1-24.
Figure 1-24. Inverted canny tools

You can of course apply the same Canny processing to ever more kitten pictures. Figures 1-25, 1-26, and 1-27 show the same code applied to a picture of kittens.
Figure 1-25. Ready to be canny kittens

Figure 1-26. Canny kittens

Figure 1-27. Inverted canny kittens

1.11 Using a Canny Result as a Mask

Problem

While canny is awesome at edge detection, another way of using its output is as a mask, which will give you a nice artistic image.

Let’s experiment drawing the result of a canny operation on top of another picture.

Solution

When performing a copy operation, you can use what is called a mask as a parameter. A mask is a regular one-channel Mat, where what matters is whether a pixel value is zero or not.

When performing a copy with a mask, if the mask value for a pixel is 0, the corresponding source pixel is not copied, and if the value is nonzero, the source pixel is copied to the target Mat.

How it works

As in the previous recipe, we obtain a new Mat object from the result of the bitwise_not function.

Mat kittens = imread("images/three_black_kittens.jpg");
cvtColor(kittens,kittens,COLOR_RGB2GRAY);
Canny(kittens,kittens,100.0,300.0,3, true);
bitwise_not(kittens,kittens);

If you decide to dump the kittens mat (probably not a good idea, because the output is pretty big…), you will see a bunch of zeros and 255s; this is what will be used as the mask.

Now that we have the mask, let’s create a white mat, named target, to be the target of the copy function. WHITE here is simply a Scalar constant, new Scalar(255, 255, 255), defined like the colors of Recipe 1-9.

Mat target = new Mat(kittens.height(), kittens.width(), CV_8UC3, WHITE );

Then we load a source for the copy, and as you remember, we need to make sure it is of the same size as the other component of the copy operation, target.

Let’s perform a resize operation on the background object.

Mat bg = imread("images/light-blue-gradient.jpg");
Imgproc.resize(bg, bg, target.size());

There you go; you are ready for the copy.

bg.copyTo(target, kittens);
imwrite("output/kittens-03.png", target);
The resulting Mat is shown in Figure 1-28.
Figure 1-28. Kittens on blue background

Now can you answer the following question: Why are the cats drawn in white?

The correct answer is indeed that the underlying Mat was initialized to be all white; see the new Mat(…, WHITE) statement. When the mask prevents the copy of a pixel, that is, when its value for that pixel is zero, then the original color of the mat will show up, here WHITE, and this is how the kittens are shown in white in Figure 1-28. You could of course go ahead and try with a black underlying Mat, or a picture of your choice.
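
Trying the black variant is a one-line change; a quick sketch, with BLACK being simply new Scalar(0, 0, 0):

Scalar BLACK = new Scalar(0, 0, 0);
Mat darkTarget = new Mat(kittens.height(), kittens.width(), CV_8UC3, BLACK);
bg.copyTo(darkTarget, kittens);
// the kittens now show up in black instead of white
imwrite("output/kittens-04.png", darkTarget);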

We will see some more of those examples in the coming chapters.

1.12 Detecting Edges with Contours

Problem

From the result of the canny operation, you would like to find a list of drawable contours, as well as drawing them on a Mat.

Solution

OpenCV has a set of two functions that often go hand in hand with the canny function: these functions are findContours and drawContours.

findContours takes a Mat and finds the edges, or the lines that define shapes, in that Mat. Since the original picture probably contains a lot of noise from colors and brightness, you usually pass it a preprocessed image: a black-and-white Mat to which the canny function has been applied.

drawContours takes the results of findContours, a list of contour objects, and allows you to draw them with specific features, like the thickness of the line used to draw and the color.

How it works

As presented in the solution, OpenCV’s findContours method takes a preprocessed picture along with other parameters:
  1. The preprocessed Mat

  2. An empty List that will receive the contour objects (MatOfPoint)

  3. A hierarchy Mat; you can ignore this for now and leave it as an empty Mat

  4. The contour retrieval mode, for example whether to create relationships between contours or return all of them

  5. The type of approximation used to store the contours; for example, keep all the points or only key defining points

First, let’s wrap the preprocessing of the original picture, and the finding of contours, in our own custom method, find_contours.

static List<MatOfPoint> find_contours(Mat image, boolean onBlank) {
    Mat imageBW = new Mat();
    Imgproc.cvtColor(image, imageBW, Imgproc.COLOR_BGR2GRAY);
    Canny(imageBW, imageBW, 100.0, 300.0, 3, true);
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
    Imgproc.findContours(imageBW, contours, new Mat(),
            Imgproc.RETR_LIST,
            Imgproc.CHAIN_APPROX_SIMPLE);
    return contours;
}

This method returns the list of found contours, where each contour is itself a list of points, or in OpenCV terms, a MatOfPoint object.
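
Since the problem statement mentioned counting objects, note that you can already do simple measurements on that list before drawing anything; a small sketch, loading the same kittens picture used below, where the 1000-pixel threshold is just an example:

Mat kittens = imread("images/three_black_kittens.jpg");
List<MatOfPoint> contours = find_contours(kittens, true);
System.out.println("found " + contours.size() + " contours");
for (MatOfPoint contour : contours) {
    // contourArea gives the surface enclosed by the contour, in pixels
    double area = Imgproc.contourArea(contour);
    if (area > 1000.0)
        System.out.println("big shape, area = " + area);
}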

Next, we write a draw_contours method that takes the original Mat (to find out the size of the target to draw on), the contours found in the previous step, and the thickness we want each contour to be drawn with (BLACK and WHITE here are Scalar constants, new Scalar(0, 0, 0) and new Scalar(255, 255, 255)).

To draw the contours à la opencv, you usually use a for loop and give the index of the contour to draw to the drawContours method.

static Mat draw_contours(Mat originalMat, List<MatOfPoint> contours, int thickness) {
    Mat target =
      new Mat(originalMat.height(), originalMat.width(), CV_8UC3, WHITE);
    for (int i = 0; i < contours.size(); i++)
      Imgproc.drawContours(target, contours, i, BLACK, thickness);
    return target;
}

Great; the building blocks of this recipe have been written so you can put them in action. You can use the same picture of kittens as before as the base picture.

Mat kittens = imread("images/three_black_kittens.jpg");
List<MatOfPoint> contours = find_contours(kittens, true);
Mat target = draw_contours(kittens, contours, 7);
imwrite("output/kittens-contours-7.png", target);
The draw_contours result is shown in Figure 1-29.
Figure 1-29. Kitten contours, thickness=7

Go ahead and change the thickness used when drawing contours. For example, with the thickness set to 3, the slightly different result, with thinner lines, is shown in Figure 1-30.
Figure 1-30. Kitten contours, thickness=3

From there, we can again use the resulting Mat as a mask when doing a background copy.

The following snippet is code taken from the previous recipe; the function takes a mask and does a copy using that mask.

    static Mat mask_on_bg(Mat mask, String backgroundFilePath) {
        Mat target = new Mat(mask.height(),mask.width(),CV_8UC3,WHITE);
        Mat bg = imread(backgroundFilePath);
        Imgproc.resize(bg, bg, target.size());
        bg.copyTo(target, mask);
        return target;
    }
Figure 1-31 shows the result of a copy with the mask created while drawing contours with the thickness set to 3.
Figure 1-31. White kittens on blue background

Notably in Chapter 3, you will be introduced to cooler ways of using masks and backgrounds for some artsy results, but for now, let’s wrap this recipe up.

1.13 Working with Video Streams

Problem

You would like to use OpenCV on a video stream and do image processing in real time.

Solution

The Java version of OpenCV provides a videoio package, and in particular a VideoCapture object, that gives you ways to read a Mat object directly from a connected video device.

You will see first how to retrieve a Mat object from the video device, with a given size, and then save the Mat to a file.

Using a Java frame, you will also see how to plug the previous processing code into real-time image acquisition.

How it works

Taking Still Pictures

Let’s introduce the do_still_captures function. It takes a number of frames to grab, how much time to wait between each frame, and which camera_id to take pictures from.

A camera_id is the index of the capture device connected to your machine. You would usually use 0, but you may come to plug in and use other external devices, and in that case, use the corresponding camera_id.
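
Whatever camera_id you pick, it is worth checking that the device actually opened; a small defensive sketch you could put at the top of a method like do_still_captures, assuming device 0:

VideoCapture camera = new VideoCapture(0);
if (!camera.isOpened()) {
    // nothing will ever be read if the index is wrong
    // or the camera is grabbed by another application
    System.err.println("Could not open capture device 0");
    return;
}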

First a VideoCapture object is created, with the camera_id in parameter.

Then you create a blank Mat object and pass it to the camera.read() function, which fills it with data.

The Mat object is the standard OpenCV Mat object you have worked with up to now, and so you can easily apply the same transformations you have learned.

For now, let’s simply save the frames one by one, with timestamped file names.

Once finished, you can put the camera back to standby mode with the release method on the VideoCapture object.

See how it goes in the following listing.

static void do_still_captures(int frames, int lapse, int camera_id) {
      VideoCapture camera = new VideoCapture(camera_id);
      camera.set(Videoio.CV_CAP_PROP_FRAME_WIDTH, 320);
      camera.set(Videoio.CV_CAP_PROP_FRAME_HEIGHT, 240);
      Mat frame = new Mat();
      for(int i = 0 ; i <frames;i++) {
          if (camera.read(frame)){
             String filename = "video/"+new Date()+".jpg";
             Imgcodecs.imwrite(filename, frame);
             try {Thread.sleep(lapse*1000);}
             catch (Exception e) {e.printStackTrace();}
          }
      }
      camera.release();
}

Calling the newly created function is simply a matter of filling the parameters, and so the following will take ten pictures from device with ID 0, and will wait 1 second between each shot.

do_still_captures(10,1,0);
As is shown in Figure 1-32, ten pictures should be created in the video folder of the project. And, indeed, time flies; it is already past midnight.
Figure 1-32. Mini–time lapse of still bedroom

Working in Real Time

Alright; so the bad news here is that the OpenCV Java wrapper does not include an obvious way to convert a Mat to a BufferedImage, which is the de facto object to work with images in the Java graphics packages.

Without going into much detail here, let’s say you need such a MatToBufferedImage method to work in real time in a Java frame: by converting a Mat object to a BufferedImage, you can render it into standard Java GUI components.

Let’s quickly write a method that converts an OpenCV Mat object to a standard Java BufferedImage.

public static BufferedImage MatToBufferedImage(Mat frame) {
    int type = 0;
    if(frame==null) return null;
    if (frame.channels() == 1) {
        type = BufferedImage.TYPE_BYTE_GRAY;
    } else if (frame.channels() == 3) {
        type = BufferedImage.TYPE_3BYTE_BGR;
    }
    BufferedImage image =
        new BufferedImage(frame.width(), frame.height(), type);
    WritableRaster raster = image.getRaster();
    DataBufferByte dataBuffer = (DataBufferByte) raster.getDataBuffer();
    byte[] data = dataBuffer.getData();
    frame.get(0, 0, data);
    return image;
}

Once you have this building block of code, it actually gets easier, but you will still need one more piece of glue code: a custom panel that extends the Swing JPanel class.

This custom panel, which we will call MatPanel, holds a single field, the Mat object to draw. MatPanel extends JPanel so that its paint() method converts the Mat on the fly using the method you have just seen: MatToBufferedImage.

class MatPanel extends JPanel {
    public Mat mat;
    public void paint(Graphics g) {
        g.drawImage(WebcamExample.MatToBufferedImage(mat), 0, 0, this);
    }
}

Alright; the glue code missing from the default OpenCV packages has now been written, and you can create a Java frame ready to receive Mat objects.

MatPanel t = new MatPanel();
JFrame frame0 = new JFrame();
frame0.getContentPane().add(t);
frame0.setSize(320, 240);
frame0.setVisible(true);
frame0.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);

The final step of this exercise is to use code similar to the do_still_captures method, but instead of stopping after a number of frames, you will write a forever loop to give the impression of video streaming.

VideoCapture camera = new VideoCapture(0);
camera.set(Videoio.CV_CAP_PROP_FRAME_WIDTH, 320);
camera.set(Videoio.CV_CAP_PROP_FRAME_HEIGHT, 240);
Mat frame = new Mat();
while(true){
  if (camera.read(frame)){
    t.mat=frame;
    t.repaint();
  }
}
Figure 1-33 gives a real-time view of a Japanese room at 1 am, painted in real time in a Java frame.
Figure 1-33. Real-time stream in Java frame

Obviously, the goal of this is to be able to work with the Mat object in real time, so now a good exercise for you is to write the necessary code that leads to the screenshot seen in Figure 1-34.
Figure 1-34. Canny picture in real time

The answer is shown in the following code snippet, and as you would have guessed, this is a simple matter of applying the already seen canny transformation to the Mat object read from the camera.

if (camera.read(frame)){
    Imgproc.cvtColor(frame, frame, Imgproc.COLOR_RGB2GRAY);
    Mat target = new Mat();
    Imgproc.Canny(frame, target, 100.0, 150.0, 3, true);
    t.mat = target;
    t.repaint();
}

1.14 Writing OpenCV Code in Scala

Problem

Now that you can write a bit of OpenCV code in Java, you are starting to enjoy it, but would like to use Scala instead to reduce boilerplate code.

Solution

The current OpenCV setup you have done so far makes it easy to run any class compiled for the JavaVM. So if you manage to compile Scala classes, and there is a Leiningen plug-in just for that, then the rest is pretty much identical.

What that means is that with the current Leiningen setup you have used so far, you will just need to update the project metadata, in project.clj , in a few places to get things going.

This works in two steps: first, add the Scala compiler and libraries; then, update the directory where the files with Scala code are found.

How it works

Basic Setup

The project.clj needs to be updated in a few places, as described in the following.
  • The project name; that is optional, of course.

  • The main class; you may keep the same name, but if you do, make sure to delete the old Java code with lein clean.

  • Next, the lein-zinc plug-in is the all-in-one scala plug-in for Leiningen and needs to be added.

  • The lein-zinc plug-in needs to be triggered before lein performs compilation, so we will add a step to the prep-tasks key of the project metadata. The prep-tasks key defines tasks that need to be executed before commands such as compile or run.

  • Finally, the scala library dependency is added to the dependencies key.

The resulting project.clj file can be found in the following.

(defproject opencv-scala-fun "0.1.0-SNAPSHOT"
  :java-source-paths ["scala"]
  :repositories [["vendredi"    
     "http://hellonico.info:8081/repository/hellonico/"]]
  :main SimpleOpenCV
  :plugins [
  [lein-zinc "1.2.0"]
  [lein-auto "0.1.3"]]
  :prep-tasks ["zinc" "compile"]
  :auto {:default {:file-pattern #".(scala)$"}}
  :dependencies [
   [org.clojure/clojure "1.8.0"]
   [org.scala-lang/scala-library "2.12.4"]
   [opencv/opencv "3.3.1"]
   [opencv/opencv-native "3.3.1"]
])
Your new project file setup for scala should look something like the structure shown in Figure 1-35.
Figure 1-35. Scala project directory structure

As you can see, not much has changed from the Java setup, but make sure your source files are now in the scala folder.

To confirm that the whole thing is in place and set up properly, let’s try a simplistic OpenCV example again, but this time in Scala.

You will need to load the OpenCV native library as was done in the Java examples. If you put the loadLibrary call anywhere in the body of the Scala object definition, it is executed as static initialization on the JVM and loads the library when the newly written Scala SimpleOpenCV class is loaded.

The rest of the code is a rather direct translation of the Java code.

import org.opencv.core._
import org.opencv.core.CvType._
import clojure.lang.RT.loadLibrary
object SimpleOpenCV {
    loadLibrary(Core.NATIVE_LIBRARY_NAME)
    def main(args: Array[String]) {
      val mat = Mat.eye(3, 3, CV_8UC1)
      println("mat = " + mat.dump())
    }
}

When the preceding code is compiled, JVM bytecode is generated from the Scala sources into the target folder, in the same way as with the Java code.

Thus, you can run the scala code in the exact same way as you were doing with Java, or in command terms:

lein auto run

The output in the console shows the expected OpenCV 3x3 eye mat dumped onscreen.

NikoMacBook% lein auto run
auto> Files changed: scala/DrawingContours.scala, scala/SimpleOpenCV.scala, scala/SimpleOpenCV1.scala, scala/SimpleOpenCV2.scala, scala/SimpleOpenCV3.scala
auto> Running: lein run
scala version:  2.12.4
sbt   version:  0.13.9
fork java?      false
[warn] Pruning sources from previous analysis, due to incompatible CompileSetup.
mat =
[  1,   0,   0;
   0,   1,   0;
   0,   0,   1]
auto> Completed.
An overview of the updated setup in Atom for scala can be found in Figure 1-36.
Figure 1-36. Scala setup

Blurred

Agreed, the first Scala example was a little bit too simple, so let’s use some of the power of the OpenCV blurring effect in Scala now.

import clojure.lang.RT.loadLibrary
import org.opencv.core._
import org.opencv.imgcodecs.Imgcodecs._
import org.opencv.imgproc.Imgproc._
object SimpleOpenCV2 {
  loadLibrary(Core.NATIVE_LIBRARY_NAME)
  def main(args: Array[String]) {
    val neko = imread("images/bored-cat.jpg")
    imwrite("output/blurred_cat.png", blur_(neko, 20))
  }
  def blur_(input: Mat, numberOfTimes:Integer) : Mat = {
    for(_ <- 1 to numberOfTimes )
      blur(input, input, new Size(11.0, 11.0))
    input
  }
}

As you can see, the blur function is called many times in a row to incrementally apply the effect to the same Mat object.

And the bored cat from Figure 1-37 can be blurred to a blurred bored cat in Figure 1-38.
Figure 1-37. Bored cat

Figure 1-38. Blurred and bored

Surely you have tried this on your local machine and found two things that are quite nice with the Scala setup.

Compilation times are reduced a bit, so it is actually quicker to see your OpenCV code in action: the Scala compiler seems to determine the required compilation steps from incremental code changes.

Also, static imports, even though they already exist in Java, seem to be more naturally integrated with the Scala setup.

Canny Effect

In an effort to reduce boilerplate code a little bit more, Scala makes it easy to import not only classes but also methods.

This third example in the scala recipe will show how to apply the canny transformation after changing the color space of a loaded OpenCV Mat.

The code is quite clean; the only sad part is that the OpenCV function vconcat expects a java.util.List and cannot take native Scala collections as parameters, so you'll need to build the list with the Java Arrays.asList function instead.

import java.util.Arrays
import org.opencv.core._
import org.opencv.core.CvType._
import org.opencv.core.Core._
import org.opencv.imgproc.Imgproc._
import org.opencv.imgcodecs.Imgcodecs._
import clojure.lang.RT.loadLibrary
object SimpleOpenCV3 {
    loadLibrary(Core.NATIVE_LIBRARY_NAME)
    def main(args: Array[String]) {
      val cat = imread("images/cat3.jpg")
      cvtColor(cat,cat,COLOR_RGB2GRAY)
      Canny(cat,cat, 220.0,230.0,5,true)
      val cat2 = cat.clone()
      bitwise_not(cat2,cat2)
      val target = new Mat
      vconcat(Arrays.asList(cat,cat2), target)
      imwrite("output/canny-cat.png", target)
    }
}
The Canny parameters here were chosen to produce something closer to simple art than to effective edge detection. Figures 1-39 and 1-40 show the before and after of the Canny effect on a loaded cat image.
Figure 1-39. Not afraid of Scala

Figure 1-40. I has been warned

The drawing contours example written for Java has also been ported to Scala and is included in the sample code accompanying this book; for now, porting it is left as a simple exercise for the reader.

1.15 Writing OpenCV Code in Kotlin

Problem

Writing OpenCV transformations in Scala was quite exciting, but now that Google is pushing for Kotlin, you would like to be like the new kids on the block and write OpenCV code in Kotlin.

Solution

Of course, there is also a Kotlin plug-in for Leiningen. As for the Scala setup, you will need to update the project metadata, again in the file project.clj.

You will mostly need to add the Kotlin plug-in, as well as the path to the Kotlin source files.

How it works

Basic Setup

The places to update in the project.clj file are very similar to those required for the Scala setup and are highlighted in the following snippet.

(defproject opencv-kotlin-fun "0.1.0-SNAPSHOT"
  :repositories [
   ["vendredi" "http://hellonico.info:8081/repository/hellonico/"]]
  :main First
  :plugins [
   [hellonico/lein-kotlin "0.0.2.1"]
   [lein-auto "0.1.3"]]
  :prep-tasks ["javac" "compile" "kotlin" ]
  :kotlin-source-path "kotlin"
  :java-source-paths ["kotlin"]
  :auto {:default {:file-pattern #".(kt)$"}}
  :dependencies [
   [org.clojure/clojure "1.8.0"]
   [opencv/opencv "3.3.1"]
   [opencv/opencv-native "3.3.1"]])

Since the Kotlin classes are compiled to JavaVM bytecode transparently by the plug-in, you can refer to the compiled classes as you have done up to now.

Obviously, the first test is to find out whether you can load a Mat object and dump its nice zero and one values.

The following ultrashort Kotlin snippet does just that.

import org.opencv.core.*
import org.opencv.core.CvType.*
import clojure.lang.RT
object First {
    @JvmStatic fun main(args: Array<String>) {
        RT.loadLibrary(Core.NATIVE_LIBRARY_NAME)
        val mat = Mat.eye(3, 3, CV_8UC1)
        println(mat.dump())
    }
}

The First.kt file should be in the kotlin folder before you run the usual Leiningen run command.

lein auto run -m First

The command output, showing the OpenCV object properly created and its content printed on the console, follows.

auto> Files changed: kotlin/Blurring.kt, kotlin/ColorMapping.kt, kotlin/First.kt, kotlin/ui/World0.kt, kotlin/ui/World1.kt, kotlin/ui/World2.kt, kotlin/ui/World3.kt, kotlin/ui/World4.kt
auto> Running: lein run -m First
[  1,   0,   0;
   0,   1,   0;
   0,   0,   1]
auto> Completed.

That was an easy one. Let’s see how to do something slightly more complex with Kotlin and OpenCV.

Color Mapping

The following new example shows how to switch between different color maps using the applyColorMap function of Imgproc, everything now coded in Kotlin.

import org.opencv.core.*
import org.opencv.imgproc.Imgproc.*
import org.opencv.imgcodecs.Imgcodecs.*
import clojure.lang.RT
object ColorMapping {
    @JvmStatic fun main(args: Array<String>) {
        // Load the native library the same way as in the other examples.
        RT.loadLibrary(Core.NATIVE_LIBRARY_NAME)
        val mat = imread("resources/kitten.jpg")
        applyColorMap(mat,mat,COLORMAP_WINTER)
        imwrite("output/winter.png", mat)
        applyColorMap(mat,mat,COLORMAP_BONE)
        imwrite("output/bone.png", mat)
        applyColorMap(mat,mat,COLORMAP_HOT)
        val mat2 = mat.clone()
        val newSize =
              Size((mat.width()/2).toDouble(),(mat.height()/2).toDouble())
        resize(mat2,mat2,newSize)
        imwrite("output/hot.png", mat2)
    }
}

As you may know, constructor calls in Kotlin do not need the verbose new keyword, and just like in Scala, methods can be statically imported.

You can see this in action and with the original input image in Figure 1-41.
Figure 1-41. Cat ready for anything

You can see three files being created; those three output files are shown in Figures 1-42, 1-43, and 1-44.
Figure 1-42. Bone cat

Figure 1-43. Winter cat

Figure 1-44. Hot cat, resized

Proper type conversion takes a bit of care in Kotlin, but the code is again very compact and, just like in Scala, removes quite a bit of boilerplate.
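
If the explicit toDouble() calls bother you, a small Kotlin extension function can hide them. The following is only a minimal sketch, assuming the same OpenCV 3.3.1 bindings used above; the scaledSize helper is my own invention and not part of OpenCV.

import org.opencv.core.Mat
import org.opencv.core.Size

// Hypothetical helper: build a Size from a Mat's dimensions while hiding the
// Int -> Double conversions; in Kotlin, Int * Double is already a Double.
fun Mat.scaledSize(factor: Double): Size =
    Size(this.width() * factor, this.height() * factor)

// Possible usage in the preceding example:
//   resize(mat2, mat2, mat2.scaledSize(0.5))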

User Interface

One main reason you may want to use Kotlin is its quite incredible tornadofx library, which makes it easier to write simple user interfaces on top of JavaFX, the JVM's underlying GUI framework.

Small applications like these are quite useful for giving the user the chance to change OpenCV parameters and see the results in pseudo–real time.

Kotlin Setup

The tornadofx library can be added to the project.clj file in the dependencies section, like the extracted snippet in the following:

(defproject opencv-kotlin-fun "0.1.0-SNAPSHOT"
  ...
  :dependencies [
   [org.clojure/clojure "1.8.0"]
   [opencv/opencv "3.3.1"]
   [no.tornado/tornadofx "1.7.11"]
   [opencv/opencv-native "3.3.1"]])

Since the goal of this recipe is to spark creativity, we are not going to go deep into learning how to write Kotlin code with tornadofx. But you will quickly enjoy a few Kotlin examples showing how to integrate it with OpenCV.

The first of these examples shows you how to bootstrap your Kotlin code to show an image within a frame.

UI for Dummies
A simple tornadofx application basically follows a given Launcher ➤ App ➤ View structure, as shown in the graph of Figure 1-45.
Figure 1-45. Tornadofx application graph

With this diagram in mind, we need to create three classes.
  • HelloWorld0: the main view of the User Interface application

  • MyApp0: the JavaFX application object to send to the JavaFX launcher

  • World0: the main class, created only once, thus using object instead of class to define it, to start the JVM-based application

A view in tornadofx is made of a root panel, which can be customized with the javafx widgets as you want.
  • The following code creates a single view, where the view is composed of an image embedded with the imageview widget.

  • The size of the image of the imageview is set within the block defining the widget.

  • The view initialization is done in the init {..} block, and since the root object cannot be reassigned, it is configured through the magical with function.

package ui;
import tornadofx.*
import javafx.application.Application
import javafx.scene.layout.*
class HelloWorld0 : View() {
    override val root = VBox()
    init {
        with(root) {
            imageview("cat.jpg") {
              fitHeight = 160.0
              fitWidth = 200.0
            }
        }
    }
}

The rest of the code is standard tornadofx/javafx boilerplate to start the JavaFX-based application properly.

class MyApp0: App(HelloWorld0::class)
object World0 {
    @JvmStatic fun main(args: Array<String>) {
            Application.launch(MyApp0::class.java, *args)
    }
}

Running the preceding code with leiningen in auto mode is done as you have done up to now with

lein auto run –m ui.World0
And a graphical frame should show up on your screen (Figure 1-46).
Figure 1-46. Image in frame

Actually, the code and the frame are slightly different. A title was set by adding the following snippet at the proper location in the view. You should find out where!

title = "Image in Frame"
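
If you get stuck, one placement that works is inside the view's init block, before the with(root) call, mirroring what the counter example in the next section does. The following sketch is the earlier HelloWorld0 unchanged except for that one line.

package ui;
import tornadofx.*
import javafx.scene.layout.*
class HelloWorld0 : View() {
    override val root = VBox()
    init {
        // One possible spot: at the top of the init block, before configuring root.
        title = "Image in Frame"
        with(root) {
            imageview("cat.jpg") {
              fitHeight = 160.0
              fitWidth = 200.0
            }
        }
    }
}
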
UI with Reactive Buttons

The next example builds on the previous one and adds a button that, when clicked, increments an internal counter; the value of that counter is then displayed onscreen in real time.

A reactive value can be created with a SimpleIntegerProperty, or more generally a SimpleXXXProperty, from the javafx.beans.property package.

That reactive value can then be bound to a widget; in the coming example it will be bound to a label, so that the text of the label always reflects the value of the property.

A button is a simple UI widget on which you can define an action handler. The handler code can live either inside the block or in a separate Kotlin function.

With the goal and explanation in place, let’s go to the following code snippet.

package ui;
import tornadofx.*
import javafx.application.Application
import javafx.scene.layout.*
import javafx.beans.property.SimpleIntegerProperty
import javafx.geometry.Pos
class CounterView : View() {
  override val root = BorderPane()
  val counter = SimpleIntegerProperty()
  init {
    title = "Counter"
    with (root) {
      style {
        padding = box(20.px)
      }
      center {
        vbox(10.0) {
          alignment = Pos.CENTER
          label() {
            bind(counter)
            style { fontSize = 25.px }
          }
          button("Click to increment") {
            action {increment()} }}}}}
  fun increment() {counter.value += 1}
}
class CounterApp : App(CounterView::class)
object Counter {
  @JvmStatic fun main(args: Array<String>) {
    Application.launch(CounterApp::class.java, *args)
  }
}
The result of running the counter application is shown in Figure 1-47.
Figure 1-47. Simple counter app

And after a few clicks on the beautiful button, you will get something as in Figure 1-48.
Figure 1-48. A few button clicks to increase the counter

Blurring Application

Well, that was cool, but it looked like a course on creating GUIs and did not have much to do with OpenCV.

Right.

So, this last Kotlin application builds on the two previous examples and shows how to build a blurring application, where the amount of blur is set by a reactive property.

You have to go back and forth between the Image object of the Java land and the Mat object of the OpenCV land. The following example shows a quick way of doing this by using the imencode function from OpenCV, which encodes a Mat object to bytes in memory, without going through a file.
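
Pulled out in isolation, that Mat-to-Image bridge might look like the following minimal sketch; the matToImage name is my own and not part of OpenCV or tornadofx, but the same three calls appear in the blurImage function of the full example below.

import java.io.ByteArrayInputStream
import javafx.scene.image.Image
import org.opencv.core.Mat
import org.opencv.core.MatOfByte
import org.opencv.imgcodecs.Imgcodecs.imencode

// Encode the Mat to an in-memory JPEG, then hand the bytes to JavaFX's Image.
fun matToImage(mat: Mat): Image {
    val bytes = MatOfByte()
    imencode(".jpg", mat, bytes)
    return Image(ByteArrayInputStream(bytes.toArray()))
}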

The blurring application has a val of type SimpleObjectProperty; when its value changes, the graphical view bound to it is updated.

The longer list of imports is a bit annoying, but you will probably not need many more for your own custom application.

package ui.cv;
import org.opencv.core.*
import org.opencv.imgproc.Imgproc.*
import org.opencv.imgcodecs.Imgcodecs.*
import clojure.lang.RT
import tornadofx.*
import javafx.application.Application
import javafx.scene.layout.*
import javafx.scene.paint.Color
import javafx.application.Platform
import javafx.beans.property.SimpleIntegerProperty
import javafx.beans.property.SimpleObjectProperty
import javafx.geometry.Pos
import javafx.scene.image.Image
class CounterView : View() {
    override val root = BorderPane()
    val counter = SimpleIntegerProperty(1)
    val imageObj = SimpleObjectProperty(Image("/cat.jpg"))
    val source = imread("images/cat.jpg")
    init {
        title = "Blur"
        with (root) {
            style {
                padding = box(20.px)
            }
            center {
                vbox(10.0) {
                    alignment = Pos.CENTER
                    label() {
                        bind(counter)
                        style { fontSize = 25.px }
                    }
                    imageview(imageObj) {
                        fitWidth = 150.0
                        fitHeight = 100.0
                    }
                    button("Click to increment") {
                            action {
                          increment()
                            randomImage()
                          }
                    }
                    button("Click to decrement {
                          action {
                         decrement()
                         randomImage()
                          }
                    }
                }
            }
        }
    }
    fun blurImage() {
      val result_mat = Mat()
      blur(source, result_mat,
         Size(counter.value.toDouble(),counter.value.toDouble()))
      val mat_of_bytes = MatOfByte()
      imencode(".jpg", result_mat, mat_of_bytes)
      imageObj.value =
         Image(java.io.ByteArrayInputStream(mat_of_bytes.toArray()))
    }
    fun increment() {
        counter.value += 6
    }
    fun decrement() {
        if(counter.value>6)
          counter.value -= 6
    }
}
class MyBlurApp : App(CounterView::class)
object Blur {
    @JvmStatic fun main(args: Array<String>) {
      RT.loadLibrary(Core.NATIVE_LIBRARY_NAME)
      Application.launch(MyBlurApp::class.java, *args)
    }
}
As usual, Leiningen takes care of doing all the Kotlin compilation automatically for you on file change, and the blurring application appears as in Figure 1-49.
Figure 1-49. Blurring application

When you click the increment button, the cat image becomes more blurred, and when you click decrement, it becomes sharper again.

There are a few more tornadofx examples in the code samples accompanying this book, so do not hesitate to check them out. You will probably get more ideas for combining UI with OpenCV; for example, a drag-and-drop panel of images, where dropped images can be blurred at will. Doesn't sound that out of reach anymore, does it?
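
As a concrete starting point, here is a rough sketch of that drag-and-drop idea, assuming the same tornadofx 1.7.11 and OpenCV 3.3.1 setup as the previous examples; the DropBlurView, DropBlurApp, and DropBlur names are mine, not from the book's samples, and the blur amount is hard-coded rather than reactive.

package ui.cv;
import java.io.ByteArrayInputStream
import clojure.lang.RT
import javafx.application.Application
import javafx.scene.image.Image
import javafx.scene.input.TransferMode
import javafx.scene.layout.VBox
import org.opencv.core.Core
import org.opencv.core.MatOfByte
import org.opencv.core.Size
import org.opencv.imgcodecs.Imgcodecs.imencode
import org.opencv.imgcodecs.Imgcodecs.imread
import org.opencv.imgproc.Imgproc.blur
import tornadofx.*
class DropBlurView : View() {
    override val root = VBox()
    init {
        title = "Drop an image to blur it"
        with(root) {
            val view = imageview("cat.jpg") {
                fitWidth = 300.0
                fitHeight = 200.0
            }
            // Accept image files dragged from the desktop onto the panel.
            setOnDragOver { event ->
                if (event.dragboard.hasFiles())
                    event.acceptTransferModes(TransferMode.COPY)
                event.consume()
            }
            // On drop, read the first file with OpenCV, blur it, and show the result.
            setOnDragDropped { event ->
                val file = event.dragboard.files.firstOrNull()
                if (file != null) {
                    val mat = imread(file.absolutePath)
                    blur(mat, mat, Size(15.0, 15.0))
                    val bytes = MatOfByte()
                    imencode(".png", mat, bytes)
                    view.image = Image(ByteArrayInputStream(bytes.toArray()))
                }
                event.isDropCompleted = true
                event.consume()
            }
        }
    }
}
class DropBlurApp : App(DropBlurView::class)
object DropBlur {
    @JvmStatic fun main(args: Array<String>) {
        RT.loadLibrary(Core.NATIVE_LIBRARY_NAME)
        Application.launch(DropBlurApp::class.java, *args)
    }
}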

This first chapter has been filled with recipes: starting from creating a small OpenCV project on the JavaVM, working through gradually more complicated image manipulation examples in Java, and then taking advantage of the JavaVM runtime to write the same kind of code in Scala and in Kotlin, along with the expressive tornadofx library.

The door is now wide open to introduce the origami library, which is a Clojure wrapper for OpenCV. That environment will bring you even more concise code and more interactivity to try new things and be creative. Time to get excited.

I have a general sense of excitement about the future, and I don’t know what that looks like yet. But it will be whatever I make it.

Amanda Lindhout
