How to do it...

We proceed with the recipe as follows:

  1. Install TFLearn with pip:
pip install -I tflearn
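To confirm the installation, you can optionally import the package and print its version. This quick check is an addition to the recipe; it assumes the package exposes __version__, as TFLearn releases do:
python -c "import tflearn; print(tflearn.__version__)"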
  2. Import a number of useful modules and download an example of text written by Shakespeare. In this case, we use the one available at https://raw.githubusercontent.com/tflearn/tflearn.github.io/master/resources/shakespeare_input.txt:
import os
import pickle
from six.moves import urllib
import tflearn
from tflearn.data_utils import *
path = "shakespeare_input.txt"
char_idx_file = 'char_idx.pickle'
# Download the corpus only if it is not already available locally
if not os.path.isfile(path):
    urllib.request.urlretrieve("https://raw.githubusercontent.com/tflearn/tflearn.github.io/master/resources/shakespeare_input.txt", path)
  3. Transform the input text into a vector and return the parsed sequences and targets, along with the associated dictionary, by using textfile_to_semi_redundant_sequences(), which reads the file and returns a tuple (inputs, targets, dictionary):
maxlen = 25
char_idx = None
if os.path.isfile(char_idx_file):
    print('Loading previous char_idx')
    char_idx = pickle.load(open(char_idx_file, 'rb'))
X, Y, char_idx = textfile_to_semi_redundant_sequences(path,
    seq_maxlen=maxlen, redun_step=3, pre_defined_char_idx=char_idx)
pickle.dump(char_idx, open(char_idx_file, 'wb'))
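As an optional sanity check (an addition to the recipe, assuming the helper returns NumPy arrays, as TFLearn's data_utils do), you can inspect the shapes: X is one-hot encoded with shape (number_of_sequences, maxlen, vocabulary_size), Y has shape (number_of_sequences, vocabulary_size), and char_idx maps each character to its index:
print(X.shape, Y.shape)  # e.g. (N, 25, V) and (N, V)
print(len(char_idx), "characters in the dictionary")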
  4. Define an RNN made up of three LSTM layers, each with 512 units, the first two of which return the full sequence rather than only the last output. Note that we insert dropout with a probability of 50% between the LSTM layers. The last layer is a fully connected layer that applies a softmax over as many units as the dictionary size. The loss function is categorical_crossentropy and the optimizer is Adam:
g = tflearn.input_data([None, maxlen, len(char_idx)])
g = tflearn.lstm(g, 512, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 512, return_seq=True)
g = tflearn.dropout(g, 0.5)
g = tflearn.lstm(g, 512)
g = tflearn.dropout(g, 0.5)
g = tflearn.fully_connected(g, len(char_idx), activation='softmax')
g = tflearn.regression(g, optimizer='adam', loss='categorical_crossentropy',
                       learning_rate=0.001)
  5. Given the network defined in step 4, we can now create a sequence generator with tflearn.models.generator.SequenceGenerator(network, dictionary=char_idx, seq_maxlen=maxlen, clip_gradients=5.0, checkpoint_path='model_shakespeare'):
m = tflearn.SequenceGenerator(g, dictionary=char_idx,
                              seq_maxlen=maxlen,
                              clip_gradients=5.0,
                              checkpoint_path='model_shakespeare')
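Because SequenceGenerator follows TFLearn's standard model interface, you can persist and restore the trained weights. The following sketch is an addition to the recipe; it assumes the usual save()/load() methods, and the filename is hypothetical:
m.save('model_shakespeare.tfl')  # hypothetical filename
# later, after rebuilding the same graph:
m.load('model_shakespeare.tfl')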
  6. For 50 iterations, we pick a random seed sequence from the input text, train for one epoch, and then generate new text. The temperature controls the novelty of the generated sequence: a temperature close to 0 produces text that closely resembles the training samples, while higher temperatures yield greater novelty (see the sampling sketch after this step's code):
for i in range(50):
    seed = random_sequence_from_textfile(path, maxlen)
    m.fit(X, Y, validation_set=0.1, batch_size=128,
          n_epoch=1, run_id='shakespeare')
    print("-- TESTING...")
    print("-- Test with temperature of 1.0 --")
    print(m.generate(600, temperature=1.0, seq_seed=seed))
    print("-- Test with temperature of 0.5 --")
    print(m.generate(600, temperature=0.5, seq_seed=seed))
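To make the effect of temperature concrete, here is a standalone sketch that illustrates the general idea (not TFLearn's internal implementation): dividing the log-probabilities by the temperature before renormalizing sharpens the distribution when the temperature is low and flattens it when it is high:
import numpy as np

def sample_with_temperature(probs, temperature=1.0):
    # Rescale the log-probabilities by the temperature and renormalize
    logits = np.log(np.asarray(probs) + 1e-8) / temperature
    scaled = np.exp(logits - np.max(logits))
    scaled /= np.sum(scaled)
    # Draw one character index from the rescaled distribution
    return np.random.choice(len(scaled), p=scaled)

# With probs = [0.6, 0.3, 0.1], a temperature of 0.1 almost always picks
# index 0, while a temperature of 2.0 samples far more uniformly.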