Exercise 3

In the videos, you looked at how you would improve Fashion MNIST using convolutions. For your exercise, see if you can improve MNIST to 99.8% accuracy or more using only a single convolutional layer and a single MaxPooling2D layer. You should stop training once the accuracy goes above this amount. It should happen in fewer than 20 epochs, so it's OK to hard-code the number of epochs for training, but your training must end once it hits the above metric. If it doesn't, you'll need to redesign your layers.

I've started the code for you -- you need to finish it!

When 99.8% accuracy has been hit, you should print out the string "Reached 99.8% accuracy so cancelling training!"

In [1]:
import tensorflow as tf
from os import path, getcwd, chdir

# DO NOT CHANGE THE LINE BELOW. If you are developing in a local
# environment, then grab mnist.npz from the Coursera Jupyter Notebook
# and place it inside a local folder and edit the path to that location
path = f"{getcwd()}/../tmp2/mnist.npz"
In [2]:
# TensorFlow 1.x session setup: let the GPU allocate memory on demand
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
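The cell above is TensorFlow 1.x-specific (the course kernel runs TF 1.x); ConfigProto and Session no longer exist in TensorFlow 2.x. If you happen to run a local copy on TF 2.x, a rough sketch of the equivalent allow_growth setting, under that assumption, is:
In [ ]:
# TF 2.x only -- skip this on the course's TF 1.x kernel
import tensorflow as tf

for gpu in tf.config.list_physical_devices('GPU'):
    # Allocate GPU memory on demand instead of reserving it all upfront
    tf.config.experimental.set_memory_growth(gpu, True)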
In [3]:
# GRADED FUNCTION: train_mnist_conv
def train_mnist_conv():
    # Please write your code only where you are indicated.
    # please do not remove model fitting inline comments.

    # YOUR CODE STARTS HERE
    class Reached99Callback(tf.keras.callbacks.Callback):
        # Stop training as soon as training accuracy exceeds 99.8%
        def on_epoch_end(self, epoch, logs=None):
            logs = logs or {}
            if logs.get('acc', 0) > 0.998:
                print("Reached 99.8% accuracy so cancelling training!")
                self.model.stop_training = True
    # YOUR CODE ENDS HERE

    mnist = tf.keras.datasets.mnist
    (training_images, training_labels), (test_images, test_labels) = mnist.load_data(path=path)
    # YOUR CODE STARTS HERE
    # Reshape to (num_examples, 28, 28, 1) and scale pixel values to [0, 1]
    training_images = training_images.reshape(60000, 28, 28, 1)/255.0
    test_images = test_images.reshape(10000, 28, 28, 1)/255.0
    
    callback = Reached99Callback()
    # YOUR CODE ENDS HERE

    model = tf.keras.models.Sequential([
            # YOUR CODE STARTS HERE
            tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(28, 28, 1)),
            tf.keras.layers.MaxPooling2D((2, 2)),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(units=128, activation='relu'),
            tf.keras.layers.Dense(units=10, activation='softmax')
            # YOUR CODE ENDS HERE
    ])

    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    # model fitting
    history = model.fit(
        # YOUR CODE STARTS HERE
        training_images, training_labels,
        epochs=20,
        callbacks=[callback]
        # YOUR CODE ENDS HERE
    )
    # model fitting
    return history.epoch, history.history['acc'][-1]
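For reference, a quick way to sanity-check the architecture is to rebuild the same stack of layers outside the graded function and print its shapes: a 3x3 convolution without padding shrinks 28x28 to 26x26, and 2x2 max pooling halves that to 13x13 before the dense layers. This is purely optional inspection code, not part of the graded solution (the inspect_model name is just for illustration):
In [ ]:
# Optional: rebuild the same architecture just to inspect shapes and parameter counts
import tensorflow as tf

inspect_model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # -> (26, 26, 32)
    tf.keras.layers.MaxPooling2D((2, 2)),                                            # -> (13, 13, 32)
    tf.keras.layers.Flatten(),                                                        # -> (5408,)
    tf.keras.layers.Dense(units=128, activation='relu'),                              # -> (128,)
    tf.keras.layers.Dense(units=10, activation='softmax')                             # -> (10,)
])
inspect_model.summary()  # roughly 694k trainable parameters in total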
In [4]:
_, _ = train_mnist_conv()
Epoch 1/20
60000/60000 [==============================] - 14s 238us/sample - loss: 0.1505 - acc: 0.9543
Epoch 2/20
60000/60000 [==============================] - 11s 185us/sample - loss: 0.0479 - acc: 0.9855
Epoch 3/20
60000/60000 [==============================] - 11s 183us/sample - loss: 0.0304 - acc: 0.9906
Epoch 4/20
60000/60000 [==============================] - 11s 180us/sample - loss: 0.0192 - acc: 0.9938
Epoch 5/20
60000/60000 [==============================] - 11s 183us/sample - loss: 0.0128 - acc: 0.9961
Epoch 6/20
60000/60000 [==============================] - 11s 180us/sample - loss: 0.0102 - acc: 0.9967
Epoch 7/20
60000/60000 [==============================] - 11s 178us/sample - loss: 0.0076 - acc: 0.9976
Epoch 8/20
59488/60000 [============================>.] - ETA: 0s - loss: 0.0056 - acc: 0.9981
Reached 99.8% accuracy so cancelling training!
60000/60000 [==============================] - 10s 174us/sample - loss: 0.0056 - acc: 0.9981
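As an optional extra (the grader doesn't require it), you could also measure accuracy on the 10,000 held-out test images that mnist.load_data() returns. Note that in the graded function the model and the test arrays are local variables, so this sketch assumes you expose them in a local copy, for example by returning the trained model alongside the history:
In [ ]:
# Optional, local-copy only: evaluate generalization on the held-out test set.
# Assumes `model`, `test_images`, and `test_labels` are in scope (as written above,
# they are local to train_mnist_conv()).
test_loss, test_acc = model.evaluate(test_images, test_labels)
print("Test accuracy: {:.4f}".format(test_acc))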
In [5]:
# Now click the 'Submit Assignment' button above.
# Once that is complete, please run the following two cells to save your work and close the notebook
In [ ]:
%%javascript
// Save the notebook
IPython.notebook.save_checkpoint();
In [ ]:
%%javascript
IPython.notebook.session.delete();
window.onbeforeunload = null;
setTimeout(function() { window.close(); }, 1000);