Wednesday, June 5, 2019

Pandas - 51 Develop neural networks with TensorFlow - Implementing the learning phase

Let's define two lists that will serve as containers for the results obtained during the learning phase. In avg_set we will store the cost value for each epoch (learning cycle), while in epoch_set we will store the corresponding epoch number. These data will be useful at the end to visualize the cost trend during the learning phase of the neural network, which is very useful for understanding the efficiency of the chosen learning method.

avg_set = []
epoch_set = []

Next we'll initialize all the variables with the function tf.global_variables_initializer():

init = tf.global_variables_initializer()

Now is the time to start the session:

with tf.Session() as sess:
    sess.run(init)

Using a for loop we'll iterate through the epochs, scanning all the values from 0 to training_epochs.

Within this loop, for each epoch we will run the optimization step with the sess.run(optimizer) command. Furthermore, every 50 epochs the condition if i % display_step == 0 will be satisfied. Then we will extract the cost value via sess.run(cost), store it in the variable c, print it on the terminal as the print() argument, and append it to the avg_set list using the append() function. Finally, when the for loop has completed, we will print a message on the terminal announcing the end of the learning phase.

with tf.Session() as sess:
    sess.run(init)
    for i in range(training_epochs):
        sess.run(optimizer, feed_dict = {x: inputX, y: inputY})
        if i % display_step == 0:
            c = sess.run(cost, feed_dict = {x: inputX, y: inputY})
            print("Epoch:", '%04d' % (i), "cost=", "{:.9f}".format(c))
            avg_set.append(c)
            epoch_set.append(i + 1)
    print("Training phase finished")

With this the learning phase is over, and it is good practice to print a summary on the terminal showing the final cost, the model parameters, and the last predictions. The following code, placed inside the same session, does this:

    training_cost = sess.run(cost, feed_dict = {x: inputX, y: inputY})
    print("Training cost =", training_cost, "\nW=", sess.run(W), "\nb=", sess.run(b))
    last_result = sess.run(y_, feed_dict = {x: inputX})
    print("Last result =", last_result)

The complete program is shown below:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

#Training set

inputX = np.array([[1.,3.],[1.,2.],[1.,1.5],[1.5,2.],[2.,3.],[2.5,1.5],
                   [2.,1.],[3.,1.],[3.,2.],[3.5,1.],[3.5,3.]])

inputY = [[1.,0.]]*6+ [[0.,1.]]*5
yc = [0]*6 + [1]*5

plt.scatter(inputX[:,0],inputX[:,1],c=yc, s=50, alpha=0.9)

learning_rate = 0.01
training_epochs = 2000
display_step = 50
n_samples = 11
batch_size = 11
total_batch = int(n_samples/batch_size)
n_input = 2 # size data input (# size of each element of x)
n_classes = 2 # n of classes

# tf Graph input
x = tf.placeholder("float", [None, n_input])
y = tf.placeholder("float", [None, n_classes])

# Set model weights

W = tf.Variable(tf.zeros([n_input, n_classes]))
b = tf.Variable(tf.zeros([n_classes]))

evidence = tf.add(tf.matmul(x, W), b)
y_ = tf.nn.softmax(evidence)

cost = tf.reduce_sum(tf.pow(y-y_,2))/ (2 * n_samples)

optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)

avg_set = []
epoch_set = []

init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)
    for i in range(training_epochs):
        sess.run(optimizer, feed_dict = {x: inputX, y: inputY})
        if i % display_step == 0:
            c = sess.run(cost, feed_dict = {x: inputX, y: inputY})
            print("Epoch:", '%04d' % (i), "cost=", "{:.9f}".format(c))
            avg_set.append(c)
            epoch_set.append(i + 1)
    print("\nTraining phase finished\n")
    training_cost = sess.run(cost, feed_dict = {x: inputX, y:inputY})
    print("Training cost =", training_cost, "\nW=", sess.run(W),"\nb=", sess.run(b))
    last_result = sess.run(y_, feed_dict = {x:inputX})
    print("\nLast result =",last_result)



The output of the program is shown below:

Epoch: 0000 cost= 0.249360293
Epoch: 0050 cost= 0.221041098
Epoch: 0100 cost= 0.198898271
Epoch: 0150 cost= 0.181669712
Epoch: 0200 cost= 0.168204829
Epoch: 0250 cost= 0.157555178
Epoch: 0300 cost= 0.149002746
Epoch: 0350 cost= 0.142023861
Epoch: 0400 cost= 0.136240512
Epoch: 0450 cost= 0.131379008
Epoch: 0500 cost= 0.127239138
Epoch: 0550 cost= 0.123672649
Epoch: 0600 cost= 0.120568052
Epoch: 0650 cost= 0.117840447
Epoch: 0700 cost= 0.115424216
Epoch: 0750 cost= 0.113267884
Epoch: 0800 cost= 0.111330733
Epoch: 0850 cost= 0.109580085
Epoch: 0900 cost= 0.107989423
Epoch: 0950 cost= 0.106537104
Epoch: 1000 cost= 0.105205178
Epoch: 1050 cost= 0.103978693
Epoch: 1100 cost= 0.102845177
Epoch: 1150 cost= 0.101793967
Epoch: 1200 cost= 0.100816056
Epoch: 1250 cost= 0.099903703
Epoch: 1300 cost= 0.099050261
Epoch: 1350 cost= 0.098249912
Epoch: 1400 cost= 0.097497642
Epoch: 1450 cost= 0.096789040
Epoch: 1500 cost= 0.096120216
Epoch: 1550 cost= 0.095487759
Epoch: 1600 cost= 0.094888613
Epoch: 1650 cost= 0.094320126
Epoch: 1700 cost= 0.093779817
Epoch: 1750 cost= 0.093265578
Epoch: 1800 cost= 0.092775457
Epoch: 1850 cost= 0.092307687
Epoch: 1900 cost= 0.091860712
Epoch: 1950 cost= 0.091433071

Training phase finished

Training cost = 0.09103153
W= [[-0.70927787  0.7092778 ]
 [ 0.6299924  -0.62999237]]
b= [ 0.34513065 -0.34513068]

Last result = [[0.9548542  0.04514586]
 [0.85713255 0.14286745]
 [0.76163834 0.23836163]
 [0.7469474  0.2530526 ]
 [0.83659446 0.16340555]
 [0.2756484  0.7243516 ]
 [0.29175714 0.70824283]
 [0.090675   0.909325  ]
 [0.26010245 0.7398975 ]
 [0.04676624 0.9532338 ]
 [0.37878013 0.6212199 ]]


From the output we can see that the cost gradually decreases over the epochs, down to a final training cost of about 0.091. It is also interesting to look at the values of the weights W and the bias b of the neural network. These values represent the parameters of the model, i.e., the neural network trained to analyze this type of data and to carry out this type of classification.

These parameters are very important, because once they are obtained, and knowing the structure of the neural network used, it will be possible to reuse them anywhere without repeating the learning phase. Our example takes only a minute to do the calculation; in real cases it may take days, and often many attempts are needed to adjust and calibrate the different parameters before an efficient neural network is developed that is very accurate at class recognition, or at performing any other task.
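
As a concrete illustration, here is a minimal sketch (not part of the original program; the new points are purely hypothetical) that reuses the learned values of W and b reported in the output above to classify new points with plain NumPy, without rerunning the training session:

import numpy as np

# Learned parameters copied from the training output shown above
W_val = np.array([[-0.70927787,  0.7092778 ],
                  [ 0.6299924 , -0.62999237]])
b_val = np.array([ 0.34513065, -0.34513068])

def classify(points):
    # Same model as in the program: softmax(x W + b)
    evidence = points @ W_val + b_val
    exp_e = np.exp(evidence)
    return exp_e / exp_e.sum(axis=1, keepdims=True)

# Hypothetical new samples to classify
new_points = np.array([[1.0, 2.5], [3.0, 1.5]])
print(classify(new_points))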

It's always better to have the results in graphical form, as they are easier to understand. Using matplotlib we can do this as shown below:

plt.plot(epoch_set,avg_set,'o',label = 'SLP Training phase')
plt.ylabel('cost')
plt.xlabel('epochs')
plt.legend()
plt.show()

Adding this code to our program yields a graph of the cost value against the epochs. The graph shows that the cost value decreases during the learning phase (cost optimization). Now let's see the results of the classification during the last step of the learning phase.

yc = last_result[:,1]
plt.scatter(inputX[:,0],inputX[:,1],c=yc, s=50, alpha=1)
plt.show()

Adding this code to our program yields a graph representing the points in the Cartesian plane, with colors ranging from blue (belonging 100% to the first group) to yellow (belonging 100% to the second group). As we can see, the division of the training set points into the two classes is quite good, with some uncertainty for the four points on the central diagonal (green).

This chart gives some indication of the learning ability of the neural network used. As we can see, despite all the learning epochs with this training set, the neural network failed to learn that point 6 (x = 2.5, y = 1.5) belongs to the first class. This is a result we could expect, as the point represents an exception and adds an element of uncertainty to the nearby points of the second class (the green dots).
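
To make this concrete, the short check below (not part of the original program) converts the softmax outputs stored in last_result into hard class labels with np.argmax() and compares them with the true labels, confirming that the sixth point is the only misclassified one:

predicted = np.argmax(last_result, axis=1)  # 0 = first class, 1 = second class
true_labels = np.array(yc)                  # yc = [0]*6 + [1]*5 from the training set
misclassified = np.where(predicted != true_labels)[0]
print("Predicted classes:", predicted)
print("Misclassified point indices:", misclassified)  # expected: [5], i.e. the sixth point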

Here I am ending today's discussion, wherein we learned about the implementation of the learning phase. In the next post I'll focus on the test phase and accuracy calculation. So till we meet again, keep learning and practicing Python, as Python is easy to learn!