How to do it...

We proceed with the recipe as follows:

  1. Import the pre-built models and additional modules needed for processing and showing images:
from keras.applications import ResNet50
from keras.applications import InceptionV3
from keras.applications import Xception # TensorFlow ONLY
from keras.applications import VGG16
from keras.applications import VGG19
from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input
from keras.preprocessing.image import img_to_array
from keras.preprocessing.image import load_img
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
from PIL import Image
%matplotlib inline
  2. Define a map that stores the input image size each network was trained on. These are well-known constants for each model (a quick way to verify them is sketched after the code):
MODELS = {
    "vgg16": (VGG16, (224, 224)),
    "vgg19": (VGG19, (224, 224)),
    "inception": (InceptionV3, (299, 299)),
    "xception": (Xception, (299, 299)), # TensorFlow ONLY
    "resnet": (ResNet50, (224, 224))
}
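These sizes can be checked programmatically. Here is a minimal sketch, assuming the standard keras.applications API; passing weights=None builds each architecture without downloading the ImageNet weights:

for name, (Network, size) in MODELS.items():
    # architecture only, no weight download;
    # Xception additionally requires the TensorFlow backend
    net = Network(weights=None)
    print(name, net.input_shape, "expected", size)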
  3. Define an auxiliary function for loading and converting each image. Note that the pre-trained networks were trained on tensors whose shape includes an additional batch_size dimension, so we need to add this dimension to our image for compatibility; a quick shape check follows the function:
def image_load_and_convert(image_path, model):
    pil_im = Image.open(image_path, 'r')
    imshow(np.asarray(pil_im))
    # initialize the input image shape and the pre-processing function
    # (this might need to be changed depending on the selected model)
    inputShape = MODELS[model][1]
    preprocess = imagenet_utils.preprocess_input
    image = load_img(image_path, target_size=inputShape)
    image = img_to_array(image)
    # the original networks were trained with an additional
    # dimension taking into account the batch size;
    # we need to add this dimension for consistency,
    # even though we have one image only
    image = np.expand_dims(image, axis=0)
    image = preprocess(image)
    return image
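As a quick sanity check, the tensor returned by this function should carry the extra batch dimension. A minimal usage sketch, reusing the parrot image from the steps below:

img = image_load_and_convert("images/parrot.jpg", "vgg16")
print(img.shape)  # (1, 224, 224, 3): the leading 1 is the batch dimension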
  4. Define an auxiliary function that classifies an image, then loops over the predictions and displays the top-5 predictions along with their probabilities:
def classify_image(image_path, model):
    img = image_load_and_convert(image_path, model)
    Network = MODELS[model][0]
    # note: use a new name here so the model-name string is not shadowed
    net = Network(weights="imagenet")
    preds = net.predict(img)
    P = imagenet_utils.decode_predictions(preds)
    # loop over the predictions and display the top-5 predictions
    # along with their probabilities
    for (i, (imagenetID, label, prob)) in enumerate(P[0]):
        print("{}. {}: {:.2f}%".format(i + 1, label, prob * 100))

  5. Now test the different types of pre-trained networks:

classify_image("images/parrot.jpg", "vgg16")

You will see the following list of predictions along with their probabilities:
1. macaw: 99.92%
2. jacamar: 0.03%
3. lorikeet: 0.02%
4. bee_eater: 0.02%
5. toucan: 0.00%

classify_image("images/parrot.jpg", "vgg19")

1. macaw: 99.77%
2. lorikeet: 0.07%
3. toucan: 0.06%
4. hornbill: 0.05%
5. jacamar: 0.01%

classify_image("images/parrot.jpg", "resnet")

1. macaw: 97.93%
2. peacock: 0.86%
3. lorikeet: 0.23%
4. jacamar: 0.12%
5. jay: 0.12%

classify_image("images/parrot_cropped1.jpg", "resnet")

1. macaw: 99.98%
2. lorikeet: 0.00%
3. peacock: 0.00%
4. sulphur-crested_cockatoo: 0.00%
5. toucan: 0.00%

classify_image("images/incredible-hulk-180.jpg", "resnet")

1. comic_book: 99.76%
2. book_jacket: 0.19%
3. jigsaw_puzzle: 0.05%
4. menu: 0.00%
5. packet: 0.00%

classify_image("images/cropped_panda.jpg", "resnet")

1. giant_panda: 99.04%
2. indri: 0.59%
3. lesser_panda: 0.17%
4. gibbon: 0.07%
5. titi: 0.05%

classify_image("images/space-shuttle1.jpg", "resnet")

1. space_shuttle: 92.38%
2. triceratops: 7.15%
3. warplane: 0.11%
4. cowboy_hat: 0.10%
5. sombrero: 0.04%

classify_image("images/space-shuttle2.jpg", "resnet")

1. space_shuttle: 99.96%
2. missile: 0.03%
3. projectile: 0.00%
4. steam_locomotive: 0.00%
5. warplane: 0.00%

classify_image("images/space-shuttle3.jpg", "resnet")

1. space_shuttle: 93.21%
2. missile: 5.53%
3. projectile: 1.26%
4. mosque: 0.00%
5. beacon: 0.00%

classify_image("images/space-shuttle4.jpg", "resnet")

1. space_shuttle: 49.61%
2. castle: 8.17%
3. crane: 6.46%
4. missile: 4.62%
5. aircraft_carrier: 4.24%

Note that some errors are possible. In particular, InceptionV3 and Xception expect inputs preprocessed with their own preprocess_input function (imported in step 1 but never used above), which scales pixel values to the range [-1, 1]; the generic imagenet_utils.preprocess_input applied here performs Caffe-style mean subtraction instead, so these two networks receive badly scaled inputs and return meaningless predictions. For instance (a possible fix is sketched after the examples):

classify_image("images/parrot.jpg", "inception")

1. stopwatch: 100.00%
2. mink: 0.00%
3. hammer: 0.00%
4. black_grouse: 0.00%
5. web_site: 0.00%

classify_image("images/parrot.jpg", "xception")

1. backpack: 56.69%
2. military_uniform: 29.79%
3. bib: 8.02%
4. purse: 2.14%
5. ping-pong_ball: 1.52%
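A possible remedy, sketched below under the assumption that you are running a standard Keras release, is to select the preprocessing function per model instead of always using the generic Caffe-style one:

from keras.applications import imagenet_utils
from keras.applications.inception_v3 import preprocess_input as inception_preprocess
from keras.applications.xception import preprocess_input as xception_preprocess

# InceptionV3 and Xception scale pixel values to [-1, 1];
# the remaining models expect Caffe-style mean subtraction
PREPROCESSORS = {
    "inception": inception_preprocess,
    "xception": xception_preprocess,
}

def get_preprocess(model):
    # fall back to the generic function for vgg16, vgg19, and resnet
    return PREPROCESSORS.get(model, imagenet_utils.preprocess_input)

Replacing the line preprocess = imagenet_utils.preprocess_input in image_load_and_convert with preprocess = get_preprocess(model) should yield sensible predictions for these two networks as well.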

  6. Define an auxiliary function that shows the internal architecture of each pre-built, pre-trained network (a lighter-weight alternative is sketched after the output):
def print_model(model):
    print("Model:", model)
    Network = MODELS[model][0]
    net = Network(weights="imagenet")  # downloads the weights on first use
    net.summary()

print_model('vgg19')
Model: vgg19
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_14 (InputLayer)        (None, 224, 224, 3)       0
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_conv4 (Conv2D)        (None, 56, 56, 256)       590080
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_conv4 (Conv2D)        (None, 28, 28, 512)       2359808
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_conv4 (Conv2D)        (None, 14, 14, 512)       2359808
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000
=================================================================
Total params: 143,667,240
Trainable params: 143,667,240
Non-trainable params: 0
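If you only need the layer names and output shapes rather than the full summary, a lighter-weight sketch (again with weights=None to skip the download) is:

net = VGG19(weights=None)
for layer in net.layers:
    print(layer.name, layer.output_shape)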