The UI and Java part of the project

Our application will have a simple user interface. It has a button that we can use to launch the native camera application to take a photo, a text view to show the image classification description, and an image view to show the photo. We should define these UI elements in the activity_main.xml file of the project. The following snippet shows this file:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:paddingLeft="10dp"
    android:paddingRight="10dp">

    <TextView
        android:id="@+id/textViewClass"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_above="@+id/btnTakePicture"
        android:layout_centerHorizontal="true"
        android:text="@string/btn_name"
        android:textStyle="bold" />

    <Button
        android:id="@+id/btnTakePicture"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_alignParentBottom="true"
        android:layout_centerHorizontal="true"
        android:text="@string/btn_name"
        android:textStyle="bold" />

    <ImageView
        android:id="@+id/capturedImage"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_above="@+id/textViewClass"
        android:contentDescription="@string/img_desc" />
</RelativeLayout>

We should also define the text captions for the UI elements in the strings.xml file, which you can find in the res folder. The following snippet shows an interesting part of this file:

<resources>
    <string name="app_name">Camera2</string>
    <string name="btn_name">Take a photo</string>
    <string name="img_desc">Photo</string>
</resources>

Now that the UI elements have been defined, we can connect them to event handlers in the MainActivity class to make our application respond to users' actions. The following code sample shows how we can modify the MainActivity class so that it suits our needs:

public class MainActivity extends AppCompatActivity {
    private ImageView imgCapture;
    private static final int Image_Capture_Code = 1;
    ...
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        initClassifier(getAssets());

        setContentView(R.layout.activity_main);
        imgCapture = findViewById(R.id.capturedImage);
        Button btnCapture = findViewById(R.id.btnTakePicture);
        btnCapture.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                Intent cInt = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
                startActivityForResult(cInt, Image_Capture_Code);
            }
        });
    }
    ...
}

Here, we added the imgCapture member to the MainActivity class to hold a reference to the ImageView element. We also defined the Image_Capture_Code value in order to identify the activity result that corresponds to the user's request for image classification.

We made the connections between the UI elements and their event handlers in the onCreate() method of the MainActivity class. In this method, we defined the UI layout by calling the setContentView() method and passing it the identifier of our main activity XML definition. Then, we saved the reference to the ImageView element in the imgCapture variable; the findViewById() method was used to get the UI element's object reference from the activity layout. In the same way, we obtained the reference to the button element. With the setOnClickListener() method of the button element, we defined the event handler for the button click event. This event handler is an OnClickListener class instance in which we overrode the onClick() method. In the onClick() method, we asked the Android system to capture a photo with the default camera application by instantiating an Intent object with the MediaStore.ACTION_IMAGE_CAPTURE parameter.

We identified the capture request with our previously defined Image_Capture_Code value and passed it into the startActivityForResult() method together with the Intent object, cInt. The startActivityForResult() method launches the image capturing software and then passes the result to the onActivityResult() event handler of our Activity object. The following code shows its implementation:

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (requestCode == Image_Capture_Code) {
        if (resultCode == RESULT_OK) {
            Bitmap bp = (Bitmap) Objects.requireNonNull(data.getExtras()).get("data");
            if (bp != null) {
                Bitmap argb_bp = bp.copy(Bitmap.Config.ARGB_8888, true);
                if (argb_bp != null) {
                    float ratio_w = (float) bp.getWidth() / (float) bp.getHeight();
                    float ratio_h = (float) bp.getHeight() / (float) bp.getWidth();

                    int width = 224;
                    int height = 224;

                    int new_width = Math.max((int) (height * ratio_w), width);
                    int new_height = Math.max(height, (int) (width * ratio_h));

                    Bitmap resized_bitmap = Bitmap.createScaledBitmap(argb_bp,
                            new_width, new_height, false);
                    Bitmap cropped_bitmap = Bitmap.createBitmap(resized_bitmap,
                            0, 0, width, height);

                    int[] pixels = new int[width * height];
                    cropped_bitmap.getPixels(pixels, 0, width, 0, 0, width, height);
                    String class_name = classifyBitmap(pixels, width, height);

                    imgCapture.setImageBitmap(cropped_bitmap);

                    TextView class_view = findViewById(R.id.textViewClass);
                    class_view.setText(class_name);
                }
            }
        } else if (resultCode == RESULT_CANCELED) {
            Toast.makeText(this, "Cancelled", Toast.LENGTH_LONG).show();
        }
    }
}

The onActivityResult() method processes the activity's result. The Android system calls this method when the user has taken a photo after pressing the button on the main application view. In the first lines of this method, we checked that the request code matched Image_Capture_Code, which indicates that the Intent contains a bitmap. We also checked for errors by comparing resultCode with the predefined RESULT_OK value. Then, we obtained a Bitmap object from the Intent by accessing the data field of the Bundle object returned by the getExtras() method. If the Bitmap object isn't null, we convert it into the ARGB format with the copy() method of the Bitmap object; the Bitmap.Config.ARGB_8888 parameter specifies the desired format. The acquired Bitmap object was scaled and cropped to 224x224, as required by the ResNet architecture. The Bitmap class from the Android framework already has a method named createScaledBitmap() for bitmap scaling. We also used the createBitmap() method to crop the image, because the createScaledBitmap() method only creates a new bitmap with the dimensions passed as parameters. The resize preserves the original width-to-height ratio, so one of the dimensions can end up larger than 224; that is why we used cropping to produce the final image.
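To see how the scaling formulas behave, consider a hypothetical 640x480 photo (the dimensions are assumed for illustration only). The shorter side maps exactly to 224, the longer side keeps the aspect ratio, and the crop then trims the excess:

```java
public class ResizeMath {
    public static void main(String[] args) {
        // Hypothetical captured photo size (for illustration only)
        int srcWidth = 640, srcHeight = 480;

        float ratio_w = (float) srcWidth / (float) srcHeight;  // ~1.333
        float ratio_h = (float) srcHeight / (float) srcWidth;  // 0.75

        int width = 224;
        int height = 224;

        // The same formulas that onActivityResult() uses
        int new_width = Math.max((int) (height * ratio_w), width);   // 298
        int new_height = Math.max(height, (int) (width * ratio_h));  // 224

        System.out.println(new_width + "x" + new_height);  // 298x224
        // createScaledBitmap() would produce a 298x224 bitmap, and
        // createBitmap() would then crop the 224x224 region at (0, 0)
    }
}
```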

The Bitmap getPixels() method was used to get the raw color values from the Bitmap object. This method fills an array with color values of the int type. Each of the 4 bytes in an int value represents one color component: the highest byte is the Alpha value, while the lowest one represents the Blue value. The method fills the array in row-major order. Then, the pixel values were passed to the native library for classification; see the classifyBitmap() method call. When the native library finished performing classification, we displayed the cropped image that was used for classification by passing it into the ImageView object with the setImageBitmap() method call. We also displayed the classification text in the TextView object by calling the setText() method.
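This byte layout can be illustrated with bit operations on a single ARGB value (the sample color here is arbitrary):

```java
public class ArgbDemo {
    public static void main(String[] args) {
        // One pixel as stored by getPixels(): 0xAARRGGBB
        int pixel = 0xFF336699;  // arbitrary sample color

        int alpha = (pixel >>> 24) & 0xFF;  // highest byte: Alpha
        int red   = (pixel >>> 16) & 0xFF;
        int green = (pixel >>> 8)  & 0xFF;
        int blue  = pixel & 0xFF;           // lowest byte: Blue

        System.out.printf("A=%d R=%d G=%d B=%d%n", alpha, red, green, blue);
        // A=255 R=51 G=102 B=153
    }
}
```

The native side performs the same unpacking when it converts the int array into floating-point channel planes for the model.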

There are two methods, classifyBitmap() and initClassifier(), which are JNI calls to native library functions implemented in C++. To connect the native library with the Java code, we use the Java Native Interface (JNI). This is the standard mechanism for calling C/C++ functions from Java. First, we have to load the native library with a System.loadLibrary() call. Then, we have to declare the methods that are implemented in the native library as public native. The following snippet shows how to define these methods in Java:

public class MainActivity extends AppCompatActivity {
    ...
    static {
        System.loadLibrary("native-lib");
    }

    public native String classifyBitmap(int[] pixels, int width, int height);
    public native void initClassifier(AssetManager assetManager);
    ...
}

Notice that we called the initClassifier() method in the onCreate() method and passed it the AssetManager object, which was returned by the getAssets() method of the Activity class. The AssetManager object allows us to read assets, such as data files, that were packaged into the Android APK application bundle. To add assets to the Android application, you have to create the assets folder in the main folder of the project and place the required files there. For this example, we used the model.pt and synset.txt files as application assets. The model.pt file is the TorchScript model snapshot, which we use for classification. The synset.txt file, on the other hand, contains the ordered list of classification descriptions; their order is the same as the order of the class indices that was used for model training. You can download the file from https://github.com/onnx/models/blob/master/vision/classification/synset.txt.
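The synset.txt file is a plain text file with one class description per line, so a class index returned by the model maps directly to a line number. As a minimal sketch in plain Java (outside Android, using a string instead of the asset stream), parsing it into an ordered list looks like this:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class SynsetDemo {
    // Read synset-style lines into an ordered list so that the class
    // index predicted by the model indexes directly into the list
    static List<String> parseSynset(BufferedReader reader) throws IOException {
        List<String> classes = new ArrayList<>();
        String line;
        while ((line = reader.readLine()) != null) {
            classes.add(line.trim());
        }
        return classes;
    }

    public static void main(String[] args) throws IOException {
        // The first lines of the real synset.txt (abbreviated here)
        String sample = "n01440764 tench, Tinca tinca\n"
                      + "n01443537 goldfish, Carassius auratus\n";
        List<String> classes =
                parseSynset(new BufferedReader(new StringReader(sample)));
        System.out.println(classes.get(1));  // n01443537 goldfish, Carassius auratus
    }
}
```

In the real application this parsing happens on the C++ side, which receives the file contents through the AssetManager object.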

In the next section, we will discuss the C++ part of the project.
