LSTM on Google Cloud Shell

After having thoroughly analyzed the Python code, it is time to run it to classify the images contained in the dataset. To do this, we proceed in a similar way to the CNN example: we will use the Google Cloud Shell. Google Cloud Shell provides command-line access to cloud resources directly from your browser. You can easily manage projects and resources without having to install the Google Cloud SDK or other tools on your system. With Cloud Shell, the gcloud command-line tool from the Cloud SDK and other necessary utilities are always available, up to date, and fully authenticated when you need them.

To start Cloud Shell, just click the Activate Google Cloud Shell button at the top of the console window, as shown in the following screenshot:

A Cloud Shell session opens inside a new frame at the bottom of the console and displays a command-line prompt. It can take a few seconds for the shell session to be initialized. Now, our Cloud Shell session is ready to use, as shown in the following screenshot:
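Before going further, it can be useful to confirm that the session really is authenticated and pointed at the right project. The following two gcloud commands (their output will, of course, differ for your account) print the active account and the currently configured project:

gcloud auth list
gcloud config list project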

At this point, we need to transfer the rnn_hwr.py file containing the Python code to the Google Cloud Platform. As we have seen, we can use the resources made available by Google Cloud Storage to do so. We open the Google Cloud Storage browser and create a new bucket.

To transfer the rnn_hwr.py file to Google Storage, follow these steps (a command-line equivalent is sketched after the list):

  1. Click on the CREATE BUCKET icon
  2. Type the name of the new bucket (rnn-hwr) in the Create a bucket window
  3. After this, the new bucket is available in the buckets list
  4. Click on the rnn-hwr bucket
  5. Click on the UPLOAD FILES icon in the window that opens
  6. Select the file in the dialog window that opens
  7. Click Open
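The console steps above can also be reproduced entirely from the command line with gsutil. The following is a sketch, assuming the file is present on the machine where you run it and that the bucket name rnn-hwr is still available (bucket names are globally unique):

# Create the bucket, upload the file, and list the bucket contents
gsutil mb gs://rnn-hwr/
gsutil cp rnn_hwr.py gs://rnn-hwr/
gsutil ls gs://rnn-hwr/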

At this point, our file will be available in the new bucket, as shown in the following screenshot:

Now we can access the file from the Cloud Shell. To do this, we create a new folder in the shell. Type this command in the shell prompt:

mkdir RNN-HWR

Now, to copy the file from the Google Storage bucket to the RNN-HWR folder, simply type the following command in the shell prompt:

gsutil cp gs://rnn-hwr/rnn_hwr.py RNN-HWR

The following output is displayed:

giuseppe_ciaburro@progetto-1-191608:~$ gsutil cp gs://rnn-hwr/rnn_hwr.py RNN-HWR
Copying gs://rnn-hwr/rnn_hwr.py...
/ [1 files][ 4.0 KiB/ 4.0 KiB]
Operation completed over 1 objects/4.0 KiB.

Now let's move into the folder and verify the presence of the file:

$ cd RNN-HWR
$ ls
rnn_hwr.py
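
Before running it, it is worth recalling what the script does. The following is a minimal sketch of the kind of code rnn_hwr.py contains, assuming the classic TensorFlow 1.x LSTM-on-MNIST layout that matches the output shown later; the names and hyperparameters are illustrative and may differ from the actual file:

# A sketch of an LSTM classifier for MNIST (TensorFlow 1.x)
import tensorflow as tf
from tensorflow.contrib import rnn
from tensorflow.examples.tutorials.mnist import input_data

# Download and extract MNIST (produces the "Extracting /tmp/data/..." lines)
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)

learning_rate = 0.001
training_steps = 20000
batch_size = 128
display_step = 1000

num_input = 28    # pixels per row: input vector size at each time step
timesteps = 28    # rows per image, that is, 28 time steps per image
num_hidden = 128  # LSTM hidden units
num_classes = 10  # digits 0-9

X = tf.placeholder("float", [None, timesteps, num_input])
Y = tf.placeholder("float", [None, num_classes])

weights = tf.Variable(tf.random_normal([num_hidden, num_classes]))
biases = tf.Variable(tf.random_normal([num_classes]))

# Unstack the input into a length-28 list of (batch, 28) tensors
x = tf.unstack(X, timesteps, 1)
lstm_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)

# Classify using the LSTM output at the last time step
logits = tf.matmul(outputs[-1], weights) + biases
loss_op = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss_op)

correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1, training_steps + 1):
        batch_x, batch_y = mnist.train.next_batch(batch_size)
        batch_x = batch_x.reshape((batch_size, timesteps, num_input))
        sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
        if step % display_step == 0 or step == 1:
            loss, acc = sess.run([loss_op, accuracy],
                                 feed_dict={X: batch_x, Y: batch_y})
            print("Step " + str(step) + ", Minibatch Loss= " +
                  "{:.4f}".format(loss) + ", Training Accuracy= " +
                  "{:.3f}".format(acc))
    print("Optimization Finished!")
    # Evaluate on a batch of 128 test images
    test_data = mnist.test.images[:128].reshape((-1, timesteps, num_input))
    test_label = mnist.test.labels[:128]
    print("Testing Accuracy:",
          sess.run(accuracy, feed_dict={X: test_data, Y: test_label}))

The key design choice here is to treat each 28 x 28 image as a sequence of 28 rows of 28 pixels each, which turns a static image classification task into a sequence problem that an LSTM can handle.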

We just have to run the file:

$ python rnn_hwr.py

A series of preliminary messages is displayed:

Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz

They indicate that the data download was successful, as was the invocation of the TensorFlow library. From this point, the training of the network begins, which, as we anticipated, may take quite a long time. At the end of the run, the following information is returned:

Step 1, Minibatch Loss= 2.9727, Training Accuracy= 0.117
Step 1000, Minibatch Loss= 1.8381, Training Accuracy= 0.430
Step 2000, Minibatch Loss= 1.4021, Training Accuracy= 0.602
Step 3000, Minibatch Loss= 1.1560, Training Accuracy= 0.672
Step 4000, Minibatch Loss= 0.9748, Training Accuracy= 0.727
Step 5000, Minibatch Loss= 0.8156, Training Accuracy= 0.750
Step 6000, Minibatch Loss= 0.7572, Training Accuracy= 0.758
Step 7000, Minibatch Loss= 0.5930, Training Accuracy= 0.812
Step 8000, Minibatch Loss= 0.5583, Training Accuracy= 0.805
Step 9000, Minibatch Loss= 0.4324, Training Accuracy= 0.914
Step 10000, Minibatch Loss= 0.4227, Training Accuracy= 0.844
Step 11000, Minibatch Loss= 0.2818, Training Accuracy= 0.906
Step 12000, Minibatch Loss= 0.3205, Training Accuracy= 0.922
Step 13000, Minibatch Loss= 0.4042, Training Accuracy= 0.891
Step 14000, Minibatch Loss= 0.2918, Training Accuracy= 0.914
Step 15000, Minibatch Loss= 0.1991, Training Accuracy= 0.938
Step 16000, Minibatch Loss= 0.2815, Training Accuracy= 0.930
Step 17000, Minibatch Loss= 0.1790, Training Accuracy= 0.953
Step 18000, Minibatch Loss= 0.2627, Training Accuracy= 0.906
Step 19000, Minibatch Loss= 0.1616, Training Accuracy= 0.945
Step 20000, Minibatch Loss= 0.1017, Training Accuracy= 0.992
Optimization Finished!
Testing Accuracy: 0.9765625

In this case, we've achieved an accuracy of approximately 97.7 percent on our test data. Note that 0.9765625 is exactly 125/128, which suggests that the final accuracy is evaluated on a single batch of 128 test images rather than on the full 10,000-image test set.
