CNNs treat images as 3D objects. An RGB image's R, G, and B channels are ingested as three 2D layers stacked on top of one another along the depth axis. A standard color image is therefore perceived by a CNN as a rectangular volume: its height and width are determined by its pixel dimensions, while its depth is three layers because of the RGB encoding. As images move through a CNN, the input and output at each stage are described as volumes, with their multidimensionality expressed in a height × width × depth format such as 30×30×3.
Before working with CNNs, it is necessary to gain a rudimentary knowledge of linear algebra. Each pixel's intensity is represented by a number, and these numbers form stacked 2D matrices that together make up the volume of the image.
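This stacking can be sketched with NumPy, using random values as stand-ins for real pixel intensities (the 30×30×3 shape matches the example format above):

```python
import numpy as np

# A 30x30 RGB image as three stacked 2D matrices of pixel intensities.
# The values are random stand-ins for real pixel data.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(30, 30, 3), dtype=np.uint8)

height, width, depth = image.shape
print(height, width, depth)        # 30 30 3

# Each channel is one 2D matrix of the stack.
red_channel = image[:, :, 0]
print(red_channel.shape)           # (30, 30)
```

The trailing axis of length 3 is the depth the text describes: one 2D matrix per color channel.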
These values are the initial features the CNN takes as input. The purpose of the CNN is then to search for the right signals, from which these image values can be classified properly. A convolutional net does not deal with a single pixel at a time. Instead, it uses a filter to pass over square patches of pixels. The filter, also referred to as a kernel, is itself square and smaller than the image. Its purpose is to look in the pixels for a pattern.
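A minimal sketch of this patch-by-patch operation, assuming NumPy and a hand-written loop (real CNN libraries implement the same idea far more efficiently):

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a square kernel over an image one patch at a time
    (stride 1, no padding); each output value is the sum of the
    element-wise product of the kernel and the patch under it."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((3, 3)) / 9.0     # a simple 3x3 averaging filter
print(convolve2d(image, kernel))   # 2x2 map: [[5. 6.] [9. 10.]]
```

Because the 3×3 kernel is smaller than the 4×4 image, it fits at only four positions, producing a 2×2 output map of pattern responses.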
Bayesian Models
To understand Bayesian models, it is important to comprehend some standard terms.
Conditional Probability: The probability P(A|B) that an event A occurs given that an event B has occurred.
Random Variable: The value of such a variable is determined by the outcome of a
random event.
Probability Distribution: A function used to specify the probabilities of the possible
values of a random variable. It is classified into two types: discrete and continuous.
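The conditional probability definition above can be illustrated with a single fair-die roll (hypothetical numbers, using the standard identity P(A|B) = P(A and B) / P(B)):

```python
# P(A|B) = P(A and B) / P(B), for one roll of a fair six-sided die.
p_b = 3 / 6            # P(B): the roll is even -> {2, 4, 6}
p_a_and_b = 1 / 6      # P(A and B): the roll is exactly 4 (which is even)
p_a_given_b = p_a_and_b / p_b
print(p_a_given_b)     # 0.333...: P(roll is 4 | roll is even)
```

Knowing the roll is even shrinks the sample space to three outcomes, so the probability of a 4 rises from 1/6 to 1/3.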
Now, let's suppose we are working with the famous coin-flip example. You have a
brand-new coin and you are interested in determining its "fairness" before flipping it.
Assuming the coin to be unbiased, you might be inclined to think that, like other
conventional coins, its probability of landing "heads" is 50%, or 0.5.
Assume that the coin's fairness is assessed with 10 flips. You may then have two
possible observations. In the first, you get heads and tails equally, that is, 5 times
each. In the second, you get heads all 10 times.
The first scenario is relatively simple: you are carrying a fair coin. In the case of the
latter scenario, however, you have two approaches.
Either forget your prior understanding of coin flips and rely only on the fresh data for
predictions, estimating the probability of heads as heads/10.
Or retain your understanding of coin flips and adjust the freshly calculated heads
count to arrive at the probability of heads.
The second approach is more sensible because we cannot establish the fairness of a coin from
only 10 observations. It improves our decision making by integrating the latest
observations with our past understanding to predict an outcome. Such a strategy, which uses
both the recent results and the existing knowledge, is known as Bayesian thinking.
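The blending of prior belief with the 10 observed flips can be sketched with a Beta prior; the specific pseudo-counts here (5 prior heads, 5 prior tails for the "fair coin" belief) are an assumption for illustration:

```python
# Bayesian updating of the coin's heads-probability with a Beta prior.
# Assumption: Beta(5, 5) encodes the prior "this coin is fair" belief.
prior_heads, prior_tails = 5, 5    # pseudo-counts from prior knowledge
heads, tails = 10, 0               # the surprising all-heads run

post_heads = prior_heads + heads
post_tails = prior_tails + tails
posterior_mean = post_heads / (post_heads + post_tails)

frequentist = heads / (heads + tails)   # "forget the prior" estimate
print(frequentist)       # 1.0
print(posterior_mean)    # 0.75: pulled back toward fairness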
To gain a better insight, you have to familiarize yourself with Bayes' Theorem. It facilitates
calculating an event's conditional probability with the use of prior knowledge and evidence.
P(θ|X) = P(X|θ) P(θ) / P(X)
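Applying the theorem to the coin example, with θ the hypothesis "the coin is fair" and X the observation of 10 heads (the prior values 0.99 and 0.01 are assumptions for illustration):

```python
# Bayes' Theorem: P(theta|X) = P(X|theta) * P(theta) / P(X)
p_fair = 0.99                  # prior: coin is fair (assumed value)
p_biased = 0.01                # prior: coin always lands heads
lik_fair = 0.5 ** 10           # P(10 heads | fair coin)
lik_biased = 1.0 ** 10         # P(10 heads | always-heads coin)

# Evidence P(X): total probability of seeing 10 heads.
p_x = lik_fair * p_fair + lik_biased * p_biased
posterior_fair = lik_fair * p_fair / p_x
print(round(posterior_fair, 4))    # 0.0882
```

Even with a strong 0.99 prior for fairness, ten heads in a row drives the posterior probability of a fair coin down to about 9%, showing how the evidence term reweights prior belief.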
262 Internet of Things