How to Train an MNIST Handwritten Digit Recognition Model with TensorFlow
Today we will look at how to train an MNIST handwritten digit recognition model with TensorFlow. Many readers may not be familiar with the process, so the following walkthrough summarizes the key steps; hopefully you will find it useful.
```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

INPUT_NODE = 784     # input layer nodes = image pixels = 28x28 = 784
OUTPUT_NODE = 10     # output layer nodes = number of image classes
LAYER1_NODE = 500    # hidden layer nodes (there is only one hidden layer)
BATCH_SIZE = 100     # samples per training batch; the smaller the value, the
                     # closer to stochastic gradient descent, the larger the
                     # value, the closer to (full-batch) gradient descent
LEARNING_RATE_BASE = 0.8      # base learning rate
LEARNING_RATE_DECAY = 0.99    # learning rate decay rate
REGULARIZATION_RATE = 0.0001  # regularization coefficient
TRAINING_STEPS = 30000        # number of training steps
MOVING_AVG_DECAY = 0.99       # moving average decay rate

# Helper function: given the network input and all parameters,
# compute the forward pass of the network
def inference(input_tensor, avg_class, weights1, biases1, weights2, biases2):
    # When no moving-average class is provided, use the current parameter values
    if avg_class is None:
        # Forward pass through the hidden layer
        layer1 = tf.nn.relu(tf.matmul(input_tensor, weights1) + biases1)
        # Forward pass through the output layer
        return tf.matmul(layer1, weights2) + biases2
    else:
        # First take the moving average of each variable,
        # then compute the forward pass
        layer1 = tf.nn.relu(
            tf.matmul(input_tensor, avg_class.average(weights1)) +
            avg_class.average(biases1))
        return tf.matmul(
            layer1, avg_class.average(weights2)) + avg_class.average(biases2)

# The training process
def train(mnist):
    x = tf.placeholder(tf.float32, [None, INPUT_NODE], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, OUTPUT_NODE], name='y-input')

    # Hidden layer parameters
    weights1 = tf.Variable(
        tf.truncated_normal([INPUT_NODE, LAYER1_NODE], stddev=0.1))
    biases1 = tf.Variable(tf.constant(0.1, shape=[LAYER1_NODE]))

    # Output layer parameters
    weights2 = tf.Variable(
        tf.truncated_normal([LAYER1_NODE, OUTPUT_NODE], stddev=0.1))
    biases2 = tf.Variable(tf.constant(0.1, shape=[OUTPUT_NODE]))

    # Forward pass without parameter moving averages (avg_class=None)
    y = inference(x, None, weights1, biases1, weights2, biases2)

    # Step counter, marked as non-trainable
    global_step = tf.Variable(0, trainable=False)

    # Initialize the moving-average class with the decay rate and step counter
    variable_avgs = tf.train.ExponentialMovingAverage(
        MOVING_AVG_DECAY, global_step)
    # Apply moving averages to all trainable variables of the network
    variables_avgs_op = variable_avgs.apply(tf.trainable_variables())

    # Forward pass using the moving-averaged parameters
    avg_y = inference(x, variable_avgs, weights1, biases1, weights2, biases2)

    # Cross entropy as the loss function
    cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=y, labels=tf.argmax(y_, 1))
    cross_entropy_mean = tf.reduce_mean(cross_entropy)

    # L2 regularization loss
    regularizer = tf.contrib.layers.l2_regularizer(REGULARIZATION_RATE)
    regularization = regularizer(weights1) + regularizer(weights2)
    loss = cross_entropy_mean + regularization

    # Exponentially decaying learning rate
    learning_rate = tf.train.exponential_decay(
        LEARNING_RATE_BASE,
        global_step,                            # current step
        mnist.train.num_examples / BATCH_SIZE,  # steps per pass over the data
        LEARNING_RATE_DECAY)

    # Optimize the loss function
    train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(
        loss, global_step=global_step)

    # Update the parameters and their moving averages in one backprop step
    with tf.control_dependencies([train_step, variables_avgs_op]):
        train_op = tf.no_op(name='train')

    # Check the predictions of the moving-average forward pass
    correct_prediction = tf.equal(tf.argmax(avg_y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

    # Initialize the session and start training
    with tf.Session() as sess:
        tf.global_variables_initializer().run()

        # Validation data: used to monitor progress and decide when to stop
        validate_feed = {x: mnist.validation.images,
                         y_: mnist.validation.labels}
        # Test data: used as the final evaluation of the model
        test_feed = {x: mnist.test.images, y_: mnist.test.labels}

        # Iteratively train the network
        for i in range(TRAINING_STEPS):
            if i % 1000 == 0:
                validate_acc = sess.run(accuracy, feed_dict=validate_feed)
                print("After %d training step(s), validation accuracy using average "
                      "model is %g " % (i, validate_acc))
            xs, ys = mnist.train.next_batch(BATCH_SIZE)
            sess.run(train_op, feed_dict={x: xs, y_: ys})

        # After training, measure the final accuracy on the test set
        test_acc = sess.run(accuracy, feed_dict=test_feed)
        print("After %d training steps, test accuracy using average model "
              "is %g " % (TRAINING_STEPS, test_acc))

# Program entry point
def main(argv=None):
    mnist = input_data.read_data_sets("/tmp/data", one_hot=True)
    train(mnist)

if __name__ == '__main__':
    tf.app.run()
```
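The moving-average mechanism used above can be illustrated without TensorFlow. `tf.train.ExponentialMovingAverage` keeps a shadow copy of each variable, updated as `shadow = decay * shadow + (1 - decay) * value`; when a step counter is supplied (as `global_step` is here), the decay is additionally capped at `(1 + step) / (10 + step)` so the average warms up quickly early in training. A minimal pure-Python sketch of that update rule (the function name `ema_update` is ours, for illustration only):

```python
# Sketch of the update rule behind tf.train.ExponentialMovingAverage.
# shadow = decay * shadow + (1 - decay) * value, with the decay capped at
# (1 + step) / (10 + step) when a step counter is provided.

def ema_update(shadow, value, step, base_decay=0.99):
    decay = min(base_decay, (1.0 + step) / (10.0 + step))
    return decay * shadow + (1.0 - decay) * value

# Track a parameter that jumps from 0.0 to 1.0 at step 0: the small
# early decay lets the shadow value catch up almost immediately.
shadow = 0.0
for step in range(5):
    shadow = ema_update(shadow, 1.0, step)
    print(step, round(shadow, 4))
```

With the cap, the shadow variable reaches about 0.9 after a single step instead of crawling up at a rate of 0.01 per step, which is why the averaged model already scores well at step 1000 in the log below.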
The output is as follows:
```
Extracting /tmp/data/train-images-idx3-ubyte.gz
Extracting /tmp/data/train-labels-idx1-ubyte.gz
Extracting /tmp/data/t10k-images-idx3-ubyte.gz
Extracting /tmp/data/t10k-labels-idx1-ubyte.gz
After 0 training step(s), validation accuracy using average model is 0.0462
After 1000 training step(s), validation accuracy using average model is 0.9784
After 2000 training step(s), validation accuracy using average model is 0.9806
After 3000 training step(s), validation accuracy using average model is 0.9798
After 4000 training step(s), validation accuracy using average model is 0.9814
After 5000 training step(s), validation accuracy using average model is 0.9826
After 6000 training step(s), validation accuracy using average model is 0.9828
After 7000 training step(s), validation accuracy using average model is 0.9832
After 8000 training step(s), validation accuracy using average model is 0.9838
After 9000 training step(s), validation accuracy using average model is 0.983
After 10000 training step(s), validation accuracy using average model is 0.9836
After 11000 training step(s), validation accuracy using average model is 0.9822
After 12000 training step(s), validation accuracy using average model is 0.983
After 13000 training step(s), validation accuracy using average model is 0.983
After 14000 training step(s), validation accuracy using average model is 0.9844
After 15000 training step(s), validation accuracy using average model is 0.9832
After 16000 training step(s), validation accuracy using average model is 0.9844
After 17000 training step(s), validation accuracy using average model is 0.9842
After 18000 training step(s), validation accuracy using average model is 0.9842
After 19000 training step(s), validation accuracy using average model is 0.9838
After 20000 training step(s), validation accuracy using average model is 0.9834
After 21000 training step(s), validation accuracy using average model is 0.9828
After 22000 training step(s), validation accuracy using average model is 0.9834
After 23000 training step(s), validation accuracy using average model is 0.9844
After 24000 training step(s), validation accuracy using average model is 0.9838
After 25000 training step(s), validation accuracy using average model is 0.9834
After 26000 training step(s), validation accuracy using average model is 0.984
After 27000 training step(s), validation accuracy using average model is 0.984
After 28000 training step(s), validation accuracy using average model is 0.9836
After 29000 training step(s), validation accuracy using average model is 0.9842
After 30000 training steps, test accuracy using average model is 0.9839
```
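The learning-rate schedule behind this run can also be reproduced in plain Python. With `staircase=False` (the default used above), `tf.train.exponential_decay` computes `lr = base_lr * decay_rate ** (global_step / decay_steps)`. A small sketch, assuming the standard MNIST split of 55000 training images, so one decay period is 55000 / 100 = 550 steps:

```python
# Sketch of the schedule computed by tf.train.exponential_decay (staircase=False):
# lr = base_lr * decay_rate ** (global_step / decay_steps)

def exponential_decay(base_lr, global_step, decay_steps, decay_rate):
    return base_lr * decay_rate ** (global_step / decay_steps)

# Values matching the training script: base 0.8, decay 0.99 per epoch,
# 550 steps per epoch (55000 training images / batch size 100)
for step in (0, 550, 5500, 27500):
    print(step, round(exponential_decay(0.8, step, 550, 0.99), 6))
```

After one epoch the rate has dropped from 0.8 to 0.8 * 0.99 = 0.792, and it keeps shrinking geometrically, which is why the validation accuracy plateaus rather than oscillating late in training.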
Having read the above, do you have a better understanding of how to train an MNIST handwritten digit recognition model with TensorFlow? Thank you for reading.