In this tutorial we will implement a simple Convolutional Neural Network in TensorFlow, with
two convolutional layers followed by two fully-connected layers. The network
structure is shown in the following figure, and it achieves a classification accuracy above 99% on
the MNIST data.
Figure: the CNN architecture (two convolutional layers followed by two fully-connected layers).
0. Imports
First, we have to import the required libraries.
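For a TensorFlow 1.x tutorial like this one, the imports typically look as follows; numpy and matplotlib are assumptions based on the data handling and plotting code used later:

```python
# TensorFlow 1.x is assumed throughout this tutorial
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
```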
1. Load the MNIST data
Now we can use a data-loading helper function in "train" mode, which loads the training and
validation images and their corresponding labels. We'll also display their sizes:
2. Hyperparameters
[5] logs_path = "./logs" # path to the folder that we want to save the logs for TensorBoard
lr = 0.001 # The optimization initial learning rate
epochs = 10 # Total number of training epochs
batch_size = 100 # Training batch size
display_freq = 100 # Frequency of displaying the training results
3. Network configuration
# Fully-connected layer.
h1 = 128 # Number of neurons in fully-connected layer.
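Only the fully-connected size is shown above. For completeness, a typical configuration for the input and the two convolutional layers might look like the following sketch; all values other than h1 = 128 are illustrative assumptions:

```python
# Input configuration
img_h = img_w = 28     # MNIST images are 28 x 28 pixels
n_channels = 1         # grayscale images
n_classes = 10         # digits 0-9

# Convolutional layers (filter sizes and counts are assumptions)
filter_size1 = 5       # conv1: 5x5 filters
num_filters1 = 16      # conv1: number of filters
filter_size2 = 5       # conv2: 5x5 filters
num_filters2 = 32      # conv2: number of filters

# Fully-connected layer (value stated in the tutorial)
h1 = 128               # number of neurons in the fully-connected layer
```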
4. Helper functions
def bias_variable(name, shape):
    """
    Create a bias variable with appropriate initialization
    :param name: bias variable name
    :param shape: bias variable shape
    :return: initialized bias variable
    """
    initial = tf.constant(0., shape=shape, dtype=tf.float32)
    return tf.get_variable(name,
                           dtype=tf.float32,
                           initializer=initial)
5. Network graph
5.1. Placeholders for the inputs (x) and corresponding labels (y)
5.3. Define the loss function, optimizer, accuracy, and predicted class
6. Train
if iteration % display_freq == 0:
    # Calculate and display the batch loss and accuracy
    # (merged = tf.summary.merge_all() is assumed to be defined earlier)
    loss_batch, acc_batch, summary_tr = sess.run([loss, accuracy, merged],
                                                 feed_dict=feed_dict_batch)
    summary_writer.add_summary(summary_tr, global_step)
Training epoch: 1
iter 0: Loss=2.30, Training Accuracy=25.0%
iter 100: Loss=0.43, Training Accuracy=85.0%
iter 200: Loss=0.33, Training Accuracy=92.0%
iter 300: Loss=0.24, Training Accuracy=93.0%
iter 400: Loss=0.08, Training Accuracy=98.0%
iter 500: Loss=0.08, Training Accuracy=99.0%
---------------------------------------------------------
Epoch: 1, validation loss: 0.11, validation accuracy: 97.0%
---------------------------------------------------------
Training epoch: 2
iter 0: Loss=0.12, Training Accuracy=98.0%
iter 100: Loss=0.12, Training Accuracy=96.0%
iter 200: Loss=0.09, Training Accuracy=98.0%
iter 300: Loss=0.10, Training Accuracy=98.0%
iter 400: Loss=0.09, Training Accuracy=97.0%
iter 500: Loss=0.09, Training Accuracy=98.0%
---------------------------------------------------------
Epoch: 2, validation loss: 0.07, validation accuracy: 97.9%
---------------------------------------------------------
Training epoch: 3
iter 0: Loss=0.02, Training Accuracy=99.0%
iter 100: Loss=0.08, Training Accuracy=97.0%
iter 200: Loss=0.11, Training Accuracy=96.0%
iter 300: Loss=0.09, Training Accuracy=97.0%
iter 400: Loss=0.04, Training Accuracy=98.0%
iter 500: Loss=0.06, Training Accuracy=98.0%
---------------------------------------------------------
Epoch: 3, validation loss: 0.06, validation accuracy: 98.4%
---------------------------------------------------------
Training epoch: 4
7. Test
ax.set_title(ax_title)
if title:
    plt.suptitle(title, size=20)
plt.show(block=False)
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
---------------------------------------------------------
Test loss: 0.04, test accuracy: 98.9%
---------------------------------------------------------
After we are finished with testing, we will close the session to free the memory.
[19] # close the session after you are done with testing
sess.close()
At this point our coding is done. We can inspect our network further using TensorBoard.
Open your terminal and type:
tensorboard --logdir=logs --host localhost
Thanks for reading! If you have any questions or doubts, feel free to leave a comment on our
website.