Intelligent Video Content Analytics
This chapter focuses on the development of Java programs that use the Watson Visual Recognition service and OpenCV classes to analyze video files and generate video content descriptions.
The Intelligent Video Content Analytics sample application in this chapter performs object classification and face detection on videos instead of still images.
VideoCapture and other classes of the OpenCV library permit reading a video file and retrieving individual frames from it. See the OpenCV website for details.
The program can be run in Eclipse on Linux or Windows.
The following topics are covered in this chapter:
4.1 Getting started
To start, read through the objectives, prerequisites, and expected results of this use case.
4.1.1 Objectives
After completing this chapter, you should be able to accomplish these objectives:
Investigate the built-in classes of Watson Visual Recognition and OpenCV to perform object classification and face detection on video files instead of still images.
Use the Watson Visual Recognition service and OpenCV in your own projects, using video captured from any source (file, camera, or others).
4.1.2 Prerequisites
You must have the following accounts, resources, knowledge, and experiences:
An IBM Bluemix account (register for a new account or log in to Bluemix if you already have an account)
Eclipse IDE Luna
Java 8
OpenCV 3.x.x for Java, installed
4.1.3 Expected results
The video file you analyze in this chapter contains various scenes that IBM created. It shows a variety of objects and people in different, realistic daily situations and serves as a good test of the program.
The following images illustrate a subset of sample output results that are displayed when running the sample program:
Figure 4-1 on page 85: Result obtained for a control center scene in video input
Figure 4-2 on page 85: Result obtained for road scene in input video
Figure 4-3 on page 86: Result obtained for surveillance system scene in input video
Figure 4-4 on page 86: Result obtained for person in scene in input video
Figure 4-1 Result obtained for a control center scene in video input
Figure 4-2 Result obtained for road scene in input video
Figure 4-3 Result obtained for surveillance system scene in input video
Figure 4-4 Result obtained for person in scene in input video
4.2 Architecture
Figure 4-5 summarizes the main steps of the program:
1. First, the video is loaded using VideoCapture (an OpenCV class).
2. The video is divided into individual frames that are processed sequentially.
3. Each frame is passed to the Watson Visual Recognition service, which detects faces and classifies objects contained in the frame.
4. The results are sent to the display method, which displays the video frame, the detected objects (or faces), and additional descriptive information.
Figure 4-5 Flow chart of the Intelligent Video Content Analytics program
Before starting, you will need an input video file and credentials of a Watson Visual Recognition service instance. The program reads the input video file and displays a JSON object describing its content:
1. VideoCapture (an OpenCV class) captures video from the input video file.
Steps 2, 3, and 4 are repeated until the video ends.
2. The video is read frame by frame.
3. The current frame is used to create an options object (either the ClassifyImagesOptions class or the VisualRecognitionOptions class).
This options object is used as an argument when calling the Watson Visual Recognition service (either the classify or the detectFaces method of the VisualRecognition class), depending on whether you want to classify objects or detect faces.
4. The result of both methods is a JSON object that describes the frame content. An internal display method is called to display the current frame and the description.
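The flow described above can be sketched in Java as follows. This is a minimal outline, not the book's actual program: it assumes the OpenCV 3.x Java bindings are installed and on the build path, and the video file name is a placeholder. The Watson calls are left as comments because they are covered step by step in 4.3.5.

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class FlowSketch {
    public static void main(String[] args) {
        // Load the OpenCV native library before using any OpenCV class
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Step 1: capture video from the input file (path is a placeholder)
        VideoCapture video = new VideoCapture("ibmvideo.mp4");
        Mat frame = new Mat();

        // Steps 2-4 repeat until the video ends
        while (video.read(frame)) {
            // Step 3: build an options object from the current frame and call
            //         the Watson Visual Recognition service (classify or detectFaces)
            // Step 4: pass the frame and the resulting JSON description
            //         to a display method
        }
        video.release();
    }
}
```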
4.3 Implementation
Implementing this use case involves the following steps:
4.3.1 Creating a Visual Recognition service instance
Before you can use the Watson services, you must create an instance of the service in Bluemix. For this use case, create a Visual Recognition service instance as described in 1.2, “Creating a Watson Visual Recognition service instance and getting the API key” on page 2.
After creating the service instance, view the credentials (Figure 4-6). Copy and save the following values for later use:
url, which is the API endpoint
api-key, which is the API key
Figure 4-6 Credentials of Visual Recognition service instance
4.3.2 Downloading the project from Git
A Git repository is provided for this use case. It includes the code to implement the IntelligentVideoContentAnalytics application, with comments to make it easier to understand.
1. Download the repository from the following GitHub location:
2. Download IntelligentVideoContentAnalytics_student.zip file.
3. Extract the file, which then creates a Java Eclipse Project folder.
4.3.3 Importing the project to Eclipse
In this section you will import the IntelligentVideoContentAnalytics project into the Eclipse workspace as an existing project.
After you extract the project, complete these steps:
1. Launch the Eclipse IDE. When prompted for a workspace, keep the existing workspace or change the workspace as desired, and click OK.
2. In the Eclipse environment, select File → Import (Figure 4-7).
Figure 4-7 Import project menu
3. Select General → Existing Projects into Workspace (Figure 4-8) and click Next. The import process has three pages.
Figure 4-8 Type imported project dialog
4. Select a root directory. Click Browse to navigate to your project’s directory (Figure 4-9).
Figure 4-9 Select root directory
5. Find the IntelligentVideoContentAnalytics folder (Figure 4-10), and then click OK.
Figure 4-10 Navigation window to import project
6. Under Projects, select the IntelligentVideoContentAnalytics check box and click Finish (Figure 4-11).
Figure 4-11 Last import project dialog
7. Verify that the IntelligentVideoContentAnalytics project folder is imported to Eclipse Package Explorer (Figure 4-12) and explore its structure (for more details, see the README.txt file).
Figure 4-12 Eclipse Package Explorer dialog
4.3.4 Importing Watson Java SDK and additional OpenCV libraries
You might notice some errors when you import the source code. Correcting those errors requires adding an extra dependency and libraries.
Fix Java problems
Figure 4-13 shows Java problems that you might see.
Figure 4-13 Java problems
To correct the problem, complete these steps:
1. Right-click the IntelligentVideoContentAnalytics project, and select Build Path → Configure Build Path (Figure 4-14).
Figure 4-14 Configure Build Path
2. Select the Libraries tab, click the library that shows errors, and click Edit (Figure 4-15).
Figure 4-15 Select the library in error
3. Do one of the following steps:
 – If no default JRE was previously defined: Skip to step 4 on page 97.
 – If a default JRE was previously defined: Select Workspace default JRE, and click Finish (Figure 4-16). You can now skip to “Add Watson Java SDK with dependencies to your project” on page 101.
Figure 4-16 Select Workspace default JRE, if one was previously defined
4. Steps 4 through step 9 on page 101 are needed only if no default JRE was previously installed. Click Installed JREs (Figure 4-17).
Figure 4-17 Installed JREs
5. Click Add (Figure 4-18).
Figure 4-18 Add a JRE definition
6. Select Standard VM and click Next (Figure 4-19).
Figure 4-19 Standard VM installed JRE type
7. Click Directory, select a JDK installation path, and click OK (Figure 4-20).
Figure 4-20 Select root directory of JRE installation
8. Your panel should look similar to the one shown in Figure 4-21. Click Finish.
Figure 4-21 Sample valid Java library
9. Now you can select Workspace default JRE, and click Finish (Figure 4-22).
Figure 4-22 Select Workspace default JRE
Add Watson Java SDK with dependencies to your project
Complete the following steps:
1. Download the Watson Java SDK dependencies JAR (with dependencies) files:
2. Scroll to the Downloads section and click java-sdk-3.7.0-jar-with-dependencies.jar (Figure 4-23).
Figure 4-23 Download Watson Java SDK
3. After the JAR file is downloaded, open Eclipse, right-click the project name, and then select Build Path → Configure Build Path (Figure 4-24).
Figure 4-24 Configure Build Path
4. Open the Libraries tab, and then click Add External JARs (Figure 4-25).
Figure 4-25 Configure Java Build Path
5. Navigate to the JAR file (java-sdk-3.7.0-jar-with-dependencies.jar), select it, and then click OK (Figure 4-26).
 
Note: The JAR file name (java-sdk-x.x.x-jar-with-dependencies.jar) will vary depending on the version available when you download it.
Figure 4-26 Select the Java SDK JAR file
6. Check that the JAR file is added to your project and click OK (Figure 4-27).
Figure 4-27 Window to check the addition of Java SDK
7. After the Watson Java SDK is imported to the project, verify that the Java errors concerning Visual Recognition are resolved (as shown in lines 23 and 24 of Figure 4-28).
Figure 4-28 Import of Visual Recognition classes
Create OpenCV 3.x.x Java as a user library in Eclipse
To resolve import errors of OpenCV, define OpenCV as a user library in Eclipse. Complete the following steps:
 
Note: These steps are from the Using OpenCV Java with Eclipse web page.
1. After the OpenCV 3.x.x Java library is installed, return to Eclipse and select Window → Preferences (Figure 4-29).
Figure 4-29 Select Preferences
2. Expand Java → Build Path → User Libraries and click New (Figure 4-30 on page 107).
Figure 4-30 Add new user library
3. Provide a name for your new user library, for example opencv3.x.x (Figure 4-31), and then click OK.
Figure 4-31 Fill user library name dialog
4. Select your new user library (opencv3.x.x) and click Add External JARs. A dialog opens where you can navigate folders (Figure 4-32 on page 108) to find the opencv-3xx.jar file.
Select the opencv-3xx.jar file that is in the installation folder of the OpenCV library. The location of the JAR file depends on the operating system you use:
 – For Linux:   /opencv3.x.x/build/bin/
 – For Windows:  C:\OpenCV-3.x.x\build\java\x64 (or x86 if you have a 32-bit OS)
After you select the opencv-3xx.jar, click OK.
Figure 4-32 Navigate folders dialog
5. Select Native library location and click Edit. The Native Library Folder Configuration dialog opens (Figure 4-33).
Figure 4-33 Native Library Folder Configuration dialog
6. Click External Folder and browse to select the folder of the Native Library Location:
 – For Linux:   /opencv3.x.x/build/lib
 – For Windows:   C:\OpenCV-3.x.x\build\java\x64 (if you have a 32-bit OS, select the x86 folder instead of x64).
After the OpenCV Native Library Location is determined, click OK on the Native Library Folder Configuration dialog and then click OK on the User Libraries page (Figure 4-34).
Figure 4-34 Native library folder configuration dialog
7. After you add the OpenCV library, right-click the project name and select Build Path → Configure Build Path (Figure 4-35 on page 110).
Figure 4-35 Configure Build Path
8. Click the Libraries tab and click Add Library to open the Add Library wizard (Figure 4-36).
Figure 4-36 Add Library window
9. Select User Library and click Next (Figure 4-37).
Figure 4-37 Add Library dialog
10. Select the opencv3.x.x check box and click Finish (Figure 4-38).
Figure 4-38 Add Library dialog
11. Now that all required libraries are added, verify that no import errors exist (Figure 4-39).
Figure 4-39 No errors
4.3.5 Exploring and completing the sample code provided with the use case
You imported the project and resolved the import errors. Now you can use the Java editor in Eclipse to explore and understand the code and make a few changes to the source code in order to complete it. These steps focus mainly on removing comments around several key instructions and customizing the program with your Watson Visual Recognition service credentials.
1. The starting point of the execution of a stand-alone Java program is the main method. Figure 4-40 shows a snippet of the main method.
 
Update the code: Remove the block comment around the first three instructions (lines 121, 122, and 123 in Figure 4-40).
On line 121 the VisualRecognition class is instantiated. This Java class is used to access the Watson Visual Recognition service.
Figure 4-40 Instantiation of Visual Recognition service code
2. The first instruction in Figure 4-41 on page 113 instantiates a new VisualRecognition object to access the Watson Visual Recognition V3 service.
 
Update the code: In the next two lines of code (121 and 122), replace the values of the ApiKey and EndPoint with your values that you copied previously in 4.3.1, “Creating a Visual Recognition service instance” on page 88.
Your code should now appear similar to Figure 4-41.
Figure 4-41 Code overview after removing comments and setting ApiKey and EndPoint URL
3. Copy a video file, for example ibmvideo.mp4, to the project file directory (Figure 4-42). You can download the video file from this location:
 
Figure 4-42 A video file in the project directory
4. Now, load the input video file using VideoCapture (an OpenCV class).
 
Update the code: This simple step involves removing the comment characters in line 135. Figure 4-43 shows how the code looks before and after the change.
Figure 4-43 Load video line code
5. A while loop reads the video frame by frame (Figure 4-44) and analyzes the video content. Note that the program does not analyze every frame; the main reason is that consecutive frames contain a great deal of redundancy. This sample program analyzes one out of every 40 frames. You can change this by updating the frequency variable.
Figure 4-44 while loop to read frames from video
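The sampling logic can be isolated as plain Java, independent of OpenCV. The following sketch is an assumption about how such a filter might look; the variable name frequency follows the text, and the class and method names are hypothetical:

```java
public class FrameSampler {

    /** Returns true for the frames that should be analyzed:
     *  one out of every 'frequency' frames, starting with frame 0. */
    static boolean shouldAnalyze(int frameIndex, int frequency) {
        return frameIndex % frequency == 0;
    }

    public static void main(String[] args) {
        int frequency = 40;           // analyze 1 out of every 40 frames
        int analyzed = 0;
        for (int i = 0; i < 400; i++) {
            if (shouldAnalyze(i, frequency)) {
                analyzed++;           // here the frame would be sent to Watson
            }
        }
        System.out.println(analyzed); // 400 frames at 1-in-40 → 10 analyzed
    }
}
```

Lowering frequency makes the description more responsive to scene changes but multiplies the number of service calls.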
6. Figure 4-45 shows the code that classifies the objects in the video frame.
Figure 4-45 Classify objects code with comments
 
Update the code: Remove the block comments so your code looks identical to Figure 4-46.
Figure 4-46 Classify objects code
About the code:
 – The first line of code shows how to create a ClassifyImagesOptions object based on the current video frame image. Consider this information:
 • To create the options for the new image, instantiate a new builder (ClassifyImagesOptions.Builder()) and call the images() method to set the new image to classify. This method accepts an image file as the parameter and returns the builder.
 • At the end, call the build() method, without any argument, which builds and returns the options object (ClassifyImagesOptions).
 – The second line of code shows how to call the Watson Visual Recognition service that performs the actual classification of objects within the current video frame image. The result of the classification is saved in the result variable. Consider this information:
 • To classify an object, call the classify() method, service.classify(ClassifyImagesOptions).
 • It accepts options (ClassifyImagesOptions) as argument and returns a VisualClassification JSON object. The classify() method of the Visual Recognition service analyzes images and detects details of objects.
 • The execute() function is used to run the service.
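The two lines described above might look like the following sketch. It assumes the Watson Java SDK 3.x is on the build path, that the current frame has already been saved to a temporary JPEG file named frame.jpg, and that the API key is a placeholder for your own credentials:

```java
import java.io.File;

import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.ClassifyImagesOptions;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.VisualClassification;

public class ClassifySketch {
    public static void main(String[] args) {
        VisualRecognition service =
                new VisualRecognition(VisualRecognition.VERSION_DATE_2016_05_20);
        service.setApiKey("your-api-key"); // placeholder credential

        // Build the options object from the current frame image
        ClassifyImagesOptions options = new ClassifyImagesOptions.Builder()
                .images(new File("frame.jpg"))
                .build();

        // classify() prepares the service call; execute() runs it and returns
        // a VisualClassification JSON object describing the frame content
        VisualClassification result = service.classify(options).execute();
        System.out.println(result);
    }
}
```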
7. Figure 4-47 shows how the sample program calls the display() method to display the video frame and the result of the classification, before moving on to the next frame. This method receives two arguments:
 – Frame
 – Frame description (str = result.toString())
Figure 4-47 Display frame and description objects code
 
Update the code: As before, remove the comment from line 193. Your code should now look like the code in Figure 4-48.
Figure 4-48 Result
The code of the display() method is shown in Figure 4-49.
Figure 4-49 The display method code
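The actual display() method is shown in Figure 4-49. One common way to render an OpenCV Mat in a Swing interface is to encode it to an in-memory JPEG and wrap the bytes in an ImageIcon; the following sketch illustrates that pattern, with hypothetical component names (videoLabel, descriptionArea) standing in for the ones declared in the project:

```java
import javax.swing.ImageIcon;
import javax.swing.JLabel;
import javax.swing.JTextArea;

import org.opencv.core.Mat;
import org.opencv.core.MatOfByte;
import org.opencv.imgcodecs.Imgcodecs;

public class DisplaySketch {
    // Hypothetical Swing components; the real declarations are in Figure 4-51
    private final JLabel videoLabel = new JLabel();
    private final JTextArea descriptionArea = new JTextArea();

    /** Render one OpenCV frame plus its JSON description. */
    void display(Mat frame, String description) {
        // Encode the Mat to an in-memory JPEG so Swing can show it
        MatOfByte buffer = new MatOfByte();
        Imgcodecs.imencode(".jpg", frame, buffer);
        videoLabel.setIcon(new ImageIcon(buffer.toArray()));
        descriptionArea.setText(description);
    }
}
```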
8. To enhance this application, a graphical interface (GUI) is created to display the video and the description of the content (Figure 4-50).
Figure 4-50 instantiate VideoAnalytics class to create graphical interface
9. Declare all graphic components as class attributes (Figure 4-51).
Figure 4-51 Graphic components declaration
Figure 4-52 shows the code in the class constructor that builds the graphical interface. The graphical interface is used to display video and its content description.
Figure 4-52 Creation of graphic interface
10. Save the project (File → Save) and run the application as described in the next section.
4.3.6 Running the application
To run the Intelligent Video Content Analytics application, complete these steps:
1. Copy the path of your video or use the paths described in this project. You can get the video (IBM Intelligent Video Analytics Overview) at either of the following locations:
2. Run the project: Right-click the project and select Run As → Run Configurations (Figure 4-53).
Figure 4-53 Run Configurations
3. Select Java Application and click the New button (Figure 4-54) to create a configuration.
Figure 4-54 The New button
4. On the Main page (Figure 4-55), click Browse to find and select the project (IntelligentVideoContentAnalytics), click Search to find and select the main class, and then click Run.
Figure 4-55 Select the project and main class
The program runs and displays the results shown in 4.1.3, “Expected results” on page 84.
4.4 Changing your application to detect faces
You can change the application to detect faces instead of performing object classification. Complete these steps:
1. To detect faces, use the detectFaces() method of the VisualRecognition class instead of the classify() method.
 
Update the code: Comment out the first two lines of code (lines 174 and 175) and remove the comments for the next two (lines 180 and 181). Figure 4-56 shows what your code should look like after you update the code.
Figure 4-56 Change to detectFaces instead of classify
2. Understand the code. Figure 4-56 shows the code that detects faces in the video frame:
 – The first line of code shows how to create a VisualRecognitionOptions object based on the current video frame image.
 • To create the options for the new image, instantiate a new builder (VisualRecognitionOptions.Builder()) and call the images() method to set the new image to analyze. This method accepts an image file as the parameter and returns the builder.
 • At the end, call the build() method, without any argument, which builds and returns the options object (VisualRecognitionOptions).
 – The second line of code shows how to call the Watson Visual Recognition service which performs the actual detection of faces within the current video frame image. The result of the face detection is saved in the result variable.
 • To detect faces, call the detectFaces() method:
service.detectFaces(VisualRecognitionOptions)
 • It accepts options (VisualRecognitionOptions) as argument and returns a DetectedFaces JSON object. The detectFaces() method of the Visual Recognition service analyzes images and detects faces.
 • The execute() function is used to run the service.
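Under the same assumptions as before (Watson Java SDK 3.x on the build path, the frame saved as frame.jpg, and a placeholder API key), the face-detection variant might look like this sketch:

```java
import java.io.File;

import com.ibm.watson.developer_cloud.visual_recognition.v3.VisualRecognition;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.DetectedFaces;
import com.ibm.watson.developer_cloud.visual_recognition.v3.model.VisualRecognitionOptions;

public class DetectFacesSketch {
    public static void main(String[] args) {
        VisualRecognition service =
                new VisualRecognition(VisualRecognition.VERSION_DATE_2016_05_20);
        service.setApiKey("your-api-key"); // placeholder credential

        // Build the options object from the current frame image
        VisualRecognitionOptions options = new VisualRecognitionOptions.Builder()
                .images(new File("frame.jpg"))
                .build();

        // detectFaces() prepares the service call; execute() runs it and
        // returns a DetectedFaces JSON object describing any faces found
        DetectedFaces result = service.detectFaces(options).execute();
        System.out.println(result);
    }
}
```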
3. After you change your code to detect faces instead of classifying objects, save the change and rerun the program. Figure 4-57 shows the results for the same input video when no faces are detected in the video frame.
Figure 4-57 Result if no person is in the scene
If a person appears in the video, the results differ, as shown in Figure 4-58.
Figure 4-58 Result if a person appears in the scene
 
Using video from the camera: You can extend this program to use video from the camera:
1. Find this instruction:
VideoCapture camera = new VideoCapture("path of video file ")
2. Change that instruction as follows:
VideoCapture camera = new VideoCapture(0)
The argument 0 selects the default camera; use 1, 2, and so on for additional cameras.
This program can be extended to other use cases.
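A minimal camera-capture sketch, assuming the OpenCV 3.x Java bindings and their native library are installed, might look like this; each captured frame can then be passed to the same classify or detectFaces code shown earlier:

```java
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.videoio.VideoCapture;

public class CameraSketch {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // 0 selects the default camera; use 1, 2, ... for additional devices
        VideoCapture camera = new VideoCapture(0);
        Mat frame = new Mat();
        if (camera.isOpened() && camera.read(frame)) {
            // frame now holds one image captured from the camera
            System.out.println("Captured frame: "
                    + frame.width() + "x" + frame.height());
        }
        camera.release();
    }
}
```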
4.5 References
See the following resources:
OpenCV 3.0.0-dev documentation (Using OpenCV Java with Eclipse):
Move your Java application into a hybrid cloud using Bluemix, Part 3 (IBM developerWorks):
Watson Developer Cloud:
 