How to do it...

Here is how we start with function approximation using MLP:

  1. Import the modules needed--TensorFlow and its contrib layers to build the network; sklearn for the dataset, for preprocessing the data, and for splitting it into train and test sets; Pandas to understand the dataset; and matplotlib and seaborn to visualize:
import tensorflow as tf 
import tensorflow.contrib.layers as layers 
from sklearn import datasets 
import matplotlib.pyplot as plt 
from sklearn.model_selection import train_test_split 
from sklearn.preprocessing import MinMaxScaler 
import pandas as pd 
import seaborn as sns 
%matplotlib inline
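
Note that tf.contrib.layers is available only in TensorFlow 1.x. If you want a quick sanity check of the installed version before proceeding (an optional check, not part of the original recipe):

# tf.contrib is only available in TensorFlow 1.x; verify the installed version
print(tf.__version__)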
  2. Load the dataset and create a Pandas dataframe to understand the data:
# Data 
boston = datasets.load_boston() 
df = pd.DataFrame(boston.data, columns=boston.feature_names) 
df['target'] = boston.target 
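
If you want a quick look at the raw values before computing summary statistics, you can peek at the first few rows (an optional check, not part of the original recipe):

# First five rows of the combined features + target dataframe
print(df.head())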
  3. Let us get some details about the data:
#Understanding Data 
df.describe() 

The following output gives a good statistical summary of the data:

  4. Find the correlation between the different input features and the target:
# Plotting correlation color map 
_, ax = plt.subplots(figsize=(12, 10))
corr = df.corr(method='pearson')
cmap = sns.diverging_palette(220, 10, as_cmap=True)
_ = sns.heatmap(corr, cmap=cmap, square=True, cbar_kws={'shrink': .9}, ax=ax, annot=True, annot_kws={'fontsize': 12})
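
If you prefer to read the same information numerically rather than from the heatmap, you can sort the absolute correlations with the target (an optional check using the corr dataframe computed above):

# Absolute Pearson correlation of each feature with the target, strongest first
print(corr['target'].abs().sort_values(ascending=False))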

Following is the output of the preceding code:

  5. From the preceding output, we can see that three parameters--RM, PTRATIO, and LSTAT--have a correlation greater than 0.5 in magnitude with the target. We choose them for training. Split the dataset into train and test sets. We also use MinMaxScaler to normalize the dataset. One important change to note is that, since our neural network uses the sigmoid activation function at the output (the output of sigmoid can only lie between 0 and 1), we have to normalize the target value Y as well:
# Create Test Train Split 
X_train, X_test, y_train, y_test = train_test_split(df[['RM', 'LSTAT', 'PTRATIO']], df[['target']], test_size=0.3, random_state=0)
# Normalize data
X_train = MinMaxScaler().fit_transform(X_train)
y_train = MinMaxScaler().fit_transform(y_train)
X_test = MinMaxScaler().fit_transform(X_test)
y_test = MinMaxScaler().fit_transform(y_test)
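
The four fit_transform calls above fit a separate scaler on each split. If you would rather keep the test set on exactly the training scale, and keep a handle for mapping predictions back to the original price units later, you can fit the scalers on the training split only and reuse them. A variant sketch that replaces the four lines above (the x_scaler/y_scaler names are mine, not the book's):

# Fit the scalers on the training split only, then apply them to both splits
x_scaler = MinMaxScaler().fit(X_train)
y_scaler = MinMaxScaler().fit(y_train)
X_train, X_test = x_scaler.transform(X_train), x_scaler.transform(X_test)
y_train, y_test = y_scaler.transform(y_train), y_scaler.transform(y_test)
# y_scaler.inverse_transform(predictions) later recovers prices in the original units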
  6. Define the constants and hyperparameters:
#Network Parameters 
m = len(X_train)
n = 3 # Number of features
n_hidden = 20 # Number of hidden neurons
# Hyperparameters
batch_size = 200
eta = 0.01
max_epoch = 1000
  7. Create a multilayer perceptron model with one hidden layer:
def multilayer_perceptron(x):
    fc1 = layers.fully_connected(x, n_hidden, activation_fn=tf.nn.relu, scope='fc1')
    out = layers.fully_connected(fc1, 1, activation_fn=tf.sigmoid, scope='out')
    return out
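
Under the hood, layers.fully_connected simply creates a weight matrix and a bias for each layer and applies the chosen activation. A hand-rolled equivalent, shown only for clarity (the variable names are mine; the recipe itself uses the contrib version above):

def multilayer_perceptron_manual(x):
    # Hidden layer: weights of shape [n, n_hidden], ReLU activation
    W1 = tf.Variable(tf.random_normal([n, n_hidden], stddev=0.1), name='W1')
    b1 = tf.Variable(tf.zeros([n_hidden]), name='b1')
    fc1 = tf.nn.relu(tf.matmul(x, W1) + b1)
    # Output layer: a single sigmoid unit, matching the normalized target range
    W2 = tf.Variable(tf.random_normal([n_hidden, 1], stddev=0.1), name='W2')
    b2 = tf.Variable(tf.zeros([1]), name='b2')
    return tf.sigmoid(tf.matmul(fc1, W2) + b2)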
  8. Declare the placeholders for the training data and define the loss and optimizer:
# build model, loss, and train op 
x = tf.placeholder(tf.float32, name='X', shape=[None, n])  # None lets us feed batches of any size (train or test)
y = tf.placeholder(tf.float32, name='Y')
y_hat = multilayer_perceptron(x)
squared_error = tf.square(y - y_hat)
mse = tf.reduce_mean(squared_error)
train = tf.train.AdamOptimizer(learning_rate=eta).minimize(mse)
init = tf.global_variables_initializer()
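
The loss here is the mean squared error, mean((y - y_hat)^2), over the training examples. If you prefer, TensorFlow 1.x ships a built-in helper that computes the same quantity; a drop-in alternative for the two loss lines above:

# Equivalent loss using TensorFlow's built-in helper
mse = tf.losses.mean_squared_error(labels=y, predictions=y_hat)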
  9. Execute the computational graph:
# Computation Graph 
with tf.Session() as sess:
    # Initialize variables
    sess.run(init)
    writer = tf.summary.FileWriter('graphs', sess.graph)
    # train the model for max_epoch epochs, logging the loss every 100 epochs
    for i in range(max_epoch):
        _, l, p = sess.run([train, mse, y_hat], feed_dict={x: X_train, y: y_train})
        if i % 100 == 0:
            print('Epoch {0}: Loss {1}'.format(i, l))
    print("Training Done")
    print("Optimization Finished!")
    # Calculate the mean squared error on the training data
    print(" Mean Error:", mse.eval({x: X_train, y: y_train}))
    # Plot predicted versus actual (normalized) target values
    plt.scatter(y_train, p)
    writer.close()
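
The recipe measures the error on the training data only. If you also want to check generalization, you can evaluate the same mse tensor on the held-out test split by adding the following inside the session block, just before writer.close() (a sketch, assuming the X_test and y_test arrays prepared in step 5 and an x placeholder with a flexible first dimension, as declared above):

    # Evaluate the trained network on the held-out test split
    test_mse = mse.eval({x: X_test, y: y_test})
    print("Test Mean Squared Error:", test_mse)
    # Compare predicted and actual (normalized) prices on the test set
    y_test_pred = sess.run(y_hat, feed_dict={x: X_test})
    plt.scatter(y_test, y_test_pred)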