TensorFlow (Keras) Tutorial -- 04 Convolutional Neural Networks
Published: 2021-06-29 15:45:30
Contents
- 1 Introduction
- 2 Improving Computer Vision Accuracy with Convolutions
- 3 Visualizing Convolutions and Pooling
1 Introduction
In this section, we will learn how to improve an image-classification model by using a convolutional neural network.
2 Improving Computer Vision Accuracy with Convolutions
In the previous lab, we classified Fashion-MNIST images with a three-layer deep neural network: an input layer (matching the shape of the input data), an output layer (matching the shape of the desired output), and one hidden layer in between.
For reference, first run the DNN code and print its test accuracy.
```python
import tensorflow as tf

fashion_mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Normalize pixel values to the [0, 1] range
training_images = training_images / 255.0
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(training_images, training_labels, epochs=5)

test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print('Test loss: {}, Test accuracy: {}'.format(test_loss, test_accuracy * 100))
```
```
Epoch 1/5
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4982 - acc: 0.8257
Epoch 2/5
60000/60000 [==============================] - 4s 74us/sample - loss: 0.3746 - acc: 0.8649
Epoch 3/5
60000/60000 [==============================] - 5s 77us/sample - loss: 0.3388 - acc: 0.8765
Epoch 4/5
60000/60000 [==============================] - 4s 74us/sample - loss: 0.3133 - acc: 0.8858
Epoch 5/5
60000/60000 [==============================] - 4s 73us/sample - loss: 0.2991 - acc: 0.8905
10000/10000 [==============================] - 0s 28us/sample - loss: 0.3888 - acc: 0.8607
Test loss: 0.38882760289907453, Test accuracy: 86.0700011253357
```
The DNN reaches about 86% accuracy on the test set. Next, add convolutional and max-pooling layers in front of the dense layers and retrain.
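Before running the full Keras model, a minimal NumPy sketch (an illustration, not part of the original lab code) shows what a single "valid" 3×3 convolution followed by 2×2 max pooling does to a 28×28 image; the resulting 26×26 and 13×13 shapes are exactly what will appear in `model.summary()` below:

```python
import numpy as np

def conv2d_valid(img, kernel):
    # "valid" convolution: no padding, so a k x k kernel
    # shrinks each spatial dimension by k - 1
    k = kernel.shape[0]
    h, w = img.shape
    out = np.zeros((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * kernel)
    return out

def max_pool2x2(img):
    # 2x2 max pooling with stride 2 halves each spatial dimension
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.random.rand(28, 28)          # same size as a Fashion-MNIST image
edge_kernel = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])  # a simple vertical-edge filter

conv = conv2d_valid(image, edge_kernel)
pooled = max_pool2x2(conv)
print(conv.shape)    # (26, 26)
print(pooled.shape)  # (13, 13)
```

A `Conv2D` layer learns 64 such kernels instead of using a fixed one, and applies them to all input channels, but the shape arithmetic is the same.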
```python
import tensorflow as tf
print(tf.__version__)

fashion_mnist = tf.keras.datasets.fashion_mnist
(training_images, training_labels), (test_images, test_labels) = fashion_mnist.load_data()

# Conv2D expects a channels dimension: (batch, height, width, channels)
training_images = training_images.reshape(60000, 28, 28, 1)
training_images = training_images / 255.0
test_images = test_images.reshape(10000, 28, 28, 1)
test_images = test_images / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D(2, 2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax")
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
model.fit(training_images, training_labels, epochs=5)

test_loss, test_accuracy = model.evaluate(test_images, test_labels)
print('Test loss: {}, Test accuracy: {}'.format(test_loss, test_accuracy * 100))
```
```
1.13.1
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_4 (Conv2D)            (None, 26, 26, 64)        640
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 13, 13, 64)        0
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 11, 11, 64)        36928
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 5, 5, 64)          0
_________________________________________________________________
flatten_2 (Flatten)          (None, 1600)              0
_________________________________________________________________
dense_4 (Dense)              (None, 128)               204928
_________________________________________________________________
dense_5 (Dense)              (None, 10)                1290
=================================================================
Total params: 243,786
Trainable params: 243,786
Non-trainable params: 0
_________________________________________________________________
Epoch 1/5
60000/60000 [==============================] - 75s 1ms/sample - loss: 13.4033 - acc: 0.1681
Epoch 2/5
60000/60000 [==============================] - 74s 1ms/sample - loss: 14.5063 - acc: 0.1000
Epoch 3/5
60000/60000 [==============================] - 72s 1ms/sample - loss: 14.5063 - acc: 0.1000
Epoch 4/5
60000/60000 [==============================] - 73s 1ms/sample - loss: 14.5063 - acc: 0.1000
Epoch 5/5
60000/60000 [==============================] - 73s 1ms/sample - loss: 14.5063 - acc: 0.1000
10000/10000 [==============================] - 4s 364us/sample - loss: 8.4485 - acc: 0.1000
Test loss: 8.44854741897583, Test accuracy: 10.000000149011612
```
Note: the run that produced this log trained on un-normalized pixel values (a variable-name typo reshaped the training images but never divided them by 255), so the loss diverged and accuracy stayed at 10%, which is chance level for 10 classes. With normalization applied correctly after reshaping, this CNN typically exceeds the DNN's 86% test accuracy.
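The shapes and parameter counts in `model.summary()` can be checked by hand. A small sketch, assuming "valid" padding for the convolutions and stride-2 2×2 pooling as in the model above:

```python
def conv_out(n, k):
    # "valid" 2D convolution output size along one dimension
    return n - k + 1

def pool_out(n):
    # 2x2 max pooling with stride 2
    return n // 2

n = 28
n = conv_out(n, 3); assert n == 26   # conv2d_4
n = pool_out(n);    assert n == 13   # max_pooling2d_4
n = conv_out(n, 3); assert n == 11   # conv2d_5
n = pool_out(n);    assert n == 5    # max_pooling2d_5
flat = n * n * 64;  assert flat == 1600  # flatten_2

# Parameters: (kernel_h * kernel_w * in_channels + 1 bias) * filters
assert (3 * 3 * 1 + 1) * 64 == 640       # conv2d_4
assert (3 * 3 * 64 + 1) * 64 == 36928    # conv2d_5
assert (flat + 1) * 128 == 204928        # dense_4
assert (128 + 1) * 10 == 1290            # dense_5
assert 640 + 36928 + 204928 + 1290 == 243786  # total params
```

Note that pooling layers have no parameters: they only downsample, which is why each `MaxPooling2D` row shows 0.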
3 Visualizing Convolutions and Pooling
The code below plots the output of the first four layers (conv, pool, conv, pool) for three test images, showing one chosen filter channel per image.
```python
import tensorflow as tf
import matplotlib.pyplot as plt

f, axarr = plt.subplots(3, 4)
FIRST_IMAGE = 0
SECOND_IMAGE = 23
THIRD_IMAGE = 28
CONVOLUTION_NUMBER = 6  # which of the 64 filter channels to display

# Build a model that returns the output of every layer of the trained CNN
layer_outputs = [layer.output for layer in model.layers]
activation_model = tf.keras.models.Model(inputs=model.input, outputs=layer_outputs)

# One row per image, one column per layer (conv, pool, conv, pool)
for x in range(0, 4):
    f1 = activation_model.predict(test_images[FIRST_IMAGE].reshape(1, 28, 28, 1))[x]
    axarr[0, x].imshow(f1[0, :, :, CONVOLUTION_NUMBER], cmap='inferno')
    axarr[0, x].grid(False)
    f2 = activation_model.predict(test_images[SECOND_IMAGE].reshape(1, 28, 28, 1))[x]
    axarr[1, x].imshow(f2[0, :, :, CONVOLUTION_NUMBER], cmap='inferno')
    axarr[1, x].grid(False)
    f3 = activation_model.predict(test_images[THIRD_IMAGE].reshape(1, 28, 28, 1))[x]
    axarr[2, x].imshow(f3[0, :, :, CONVOLUTION_NUMBER], cmap='inferno')
    axarr[2, x].grid(False)
```
Reprinted from: https://codingchaozhang.blog.csdn.net/article/details/90718102