Learning Python as I Go (14): A Simple Use of MNIST

MNIST is a dataset of handwritten digits. Today I worked through a simple MNIST example following the TensorFlow tutorial site.

 

# Load the MNIST data and build a softmax regression model (TensorFlow 1.x API).
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Each image is a flattened 28*28 = 784 vector; each label is a one-hot vector of length 10.
x = tf.placeholder("float", [None, 784])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, w) + b)
y_ = tf.placeholder("float", [None, 10])

# Cross-entropy loss, minimized by gradient descent with a learning rate of 0.01.
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

init = tf.initialize_all_variables()
print("init")
sess = tf.Session()
sess.run(init)
print("start training")

for i in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)  # key line: fetch the next mini-batch of 100 examples
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Evaluate accuracy on the test set.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

To summarize: the model is a softmax classifier, cross-entropy is used as the loss function, and the optimizer is mini-batch stochastic gradient descent.
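To make those two pieces concrete, here is a minimal NumPy sketch of softmax and the summed cross-entropy loss used above (my own illustration, not the TensorFlow internals; all names are made up):

import numpy as np

def softmax(logits):
    # Subtract the row-wise max for numerical stability, then normalize to probabilities.
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=1, keepdims=True)

def cross_entropy(y_pred, y_true):
    # y_true is one-hot, so only the log-probability of the correct class contributes;
    # summing (not averaging) over the batch matches -tf.reduce_sum(y_ * tf.log(y)) above.
    return -np.sum(y_true * np.log(y_pred + 1e-12))

logits = np.array([[2.0, 1.0, 0.1],
                   [0.5, 2.5, 0.3]])
labels = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
print(cross_entropy(softmax(logits), labels))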

batch_xs, batch_ys = mnist.train.next_batch(100)

This line is the heart of the script: it reads the next mini-batch of training data.
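Conceptually it just hands back the next 100 flattened images and their one-hot labels. A rough sketch of the same idea, assuming the training arrays are already in memory (train_images and train_labels are placeholder names, not the real input_data internals):

import numpy as np

def batches(images, labels, batch_size=100):
    # Shuffle once per epoch, then yield consecutive slices of batch_size examples.
    order = np.random.permutation(len(images))
    for start in range(0, len(images) - batch_size + 1, batch_size):
        idx = order[start:start + batch_size]
        yield images[idx], labels[idx]

# e.g. for batch_xs, batch_ys in batches(train_images, train_labels): ...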

I then looked up some references and found the storage layout of the MNIST data files:

TRAINING SET LABEL FILE (train-labels-idx1-ubyte):

[offset] [type]          [value]          [description]

0000     32 bit integer  0x00000801(2049) magic number (MSB first)
0004     32 bit integer  60000            number of items
0008     unsigned byte   ??               label
0009     unsigned byte   ??               label
........
xxxx     unsigned byte   ??               label

The label values are 0 to 9.
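Following that layout, a hand-rolled loader for the label file could look roughly like the sketch below (my own code, not what input_data actually does; note that the files input_data downloads are gzip-compressed, so they would need to be decompressed first):

import struct
import numpy as np

def load_idx1_labels(path):
    # Header: 4-byte magic number and 4-byte item count, both big-endian ("MSB first"),
    # followed by one unsigned byte per label.
    with open(path, "rb") as f:
        magic, num_items = struct.unpack(">II", f.read(8))
        assert magic == 2049, "not an idx1 label file"
        labels = np.frombuffer(f.read(num_items), dtype=np.uint8)
    return labels

# e.g. train_labels = load_idx1_labels("MNIST_data/train-labels-idx1-ubyte")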

TRAINING SET IMAGE FILE (train-images-idx3-ubyte):

[offset] [type]          [value]          [description]

0000     32 bit integer  0x00000803(2051) magic number
0004     32 bit integer  60000            number of images
0008     32 bit integer  28               number of rows
0012     32 bit integer  28               number of columns
0016     unsigned byte   ??               pixel
0017     unsigned byte   ??               pixel
........
xxxx     unsigned byte   ??               pixel

Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).
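The image file can be read the same way. The sketch below (again my own, assuming an already-decompressed file) flattens each 28x28 image to a 784-vector and scales the pixels to [0, 1], which is the form the tutorial code feeds into x:

import struct
import numpy as np

def load_idx3_images(path):
    # Header: magic number, image count, row count, and column count, all big-endian int32,
    # followed by rows*cols unsigned bytes per image in row-major order.
    with open(path, "rb") as f:
        magic, num_images, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an idx3 image file"
        data = np.frombuffer(f.read(num_images * rows * cols), dtype=np.uint8)
    return data.reshape(num_images, rows * cols).astype(np.float32) / 255.0

# e.g. train_images = load_idx3_images("MNIST_data/train-images-idx3-ubyte")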

TEST SET LABEL FILE (t10k-labels-idx1-ubyte):

[offset] [type]          [value]          [description]

0000     32 bit integer  0x00000801(2049) magic number (MSB first)
0004     32 bit integer  10000            number of items
0008     unsigned byte   ??               label
0009     unsigned byte   ??               label
........
xxxx     unsigned byte   ??               label

The label values are 0 to 9.

TEST SET IMAGE FILE (t10k-images-idx3-ubyte):

[offset] [type]          [value]          [description]

0000     32 bit integer  0x00000803(2051) magic number
0004     32 bit integer  10000            number of images
0008     32 bit integer  28               number of rows
0012     32 bit integer  28               number of columns
0016     unsigned byte   ??               pixel
0017     unsigned byte   ??               pixel
........
xxxx     unsigned byte   ??               pixel

Pixels are organized row-wise. Pixel values are 0 to 255. 0 means background (white), 255 means foreground (black).
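Since the test files use exactly the same layout (only with 10000 items), the two loaders sketched above would read them unchanged, for example:

# Hypothetical usage of the loaders sketched earlier (paths assume decompressed files):
test_images = load_idx3_images("MNIST_data/t10k-images-idx3-ubyte")
test_labels = load_idx1_labels("MNIST_data/t10k-labels-idx1-ubyte")
print(test_images.shape, test_labels.shape)  # expected: (10000, 784) (10000,)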

 

Reprinted from: https://blog.csdn.net/zhouzhouasishuijiao/article/details/85227023
