Visualizing data with TensorBoard in TensorFlow

Posted: 2022-01-13 23:53:27

Written: 2017-05-17

There are plenty of tutorials on this topic online, but most of them focus on tidying up the graph, i.e. adding name scopes so that the graph view looks nicer. This post focuses on how to get TensorBoard up and running in the first place.

Environment: Ubuntu 14.04, Python 2.7, TensorFlow 0.11

Creating summary ops

1. Create summary operations in the graph. The most commonly used ones are tf.summary.scalar and tf.summary.histogram.
Note: the graph must contain at least one summary node; otherwise an error is raised, as in the screenshot below.
[screenshot: the error raised when the graph contains no summary nodes]
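For example, a minimal sketch of attaching summary ops to tensors might look like this (the two tensors here are only stand-ins for illustration; in practice you attach the ops to your own loss, weights, etc.):

import tensorflow as tf

# stand-in tensors, just so the sketch is runnable
loss = tf.reduce_mean(tf.square(tf.random_normal([100])))
weights = tf.Variable(tf.truncated_normal([784, 10]))

tf.summary.scalar('loss', loss)            # one number per step -> SCALARS tab
tf.summary.histogram('weights', weights)   # value distribution per step -> HISTOGRAMS tab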

Merging the summaries

2. Call tf.summary.merge_all(). The reason is as follows.

In TensorFlow, an operation only runs when you execute it or when another operation depends on its output. The summary nodes we just created sit on the edge of the graph: nothing depends on their outputs, so in order to generate the summary data we would have to run every one of them explicitly. Doing that by hand is tedious, so tf.summary.merge_all() is used to combine them into a single op.
You can then run the merged op, and for a given step it produces one serialized Summary protobuf containing all the summary data. Finally, to write that data to disk, pass the protobuf to a tf.summary.FileWriter.

Creating the writer

summary_writer = tf.summary.FileWriter('/tmp/mnist_logs',sess.graph)

Running

This is the same as running a normal training step; if the graph contains placeholders, pass the feed_dict as usual.

summary_str = sess.run(merged_summary_op,feed_dict={x: batch[0], y_: batch[1]});
summary_writer.add_summary(summary_str, i);
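Optionally, once training is finished you can close the writer so that any buffered events are flushed to disk:

summary_writer.close()   # flush pending summaries and release the event file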

Finally, here is my code:

#coding:utf-8


#######softmax

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data


######prepare data
mnist = input_data.read_data_sets('MNIST_data/',one_hot = True);


#######create the graph

x = tf.placeholder(tf.float32,shape = [None,784],name = 'x');
y_ = tf.placeholder(tf.float32,shape = [None,10],name = 'y_');

#initialize weights and bias;
#with tf.name_scope('input_weight'):
W = tf.Variable(tf.zeros([784,10]));

#with tf.name_scope('input_bias'):
b = tf.Variable(tf.zeros([10]),name = 'input_bias');

#Predict class and loss function
#with tf.name_scope('y'):
y = tf.nn.softmax(tf.matmul(x,W) + b)
tf.summary.histogram('y',y);
#cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(y, y_));
#cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y, logits = y_))
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
tf.summary.scalar('loss_function', cross_entropy)

#train_step
#train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy);
#train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)


#####optimization: gradient descent
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)



#Cannot evaluate tensor using `eval()`: No default session is registered.
#Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`

#set sess as default
sess = tf.InteractiveSession();





########create session
#sess = tf.Session();
#init op
init = tf.global_variables_initializer();
sess.run(init);
####TensorBoard
merged_summary_op = tf.summary.merge_all()
summary_writer = tf.summary.FileWriter('/tmp/mnist_logs',sess.graph)


#train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for i in range(1000):
    batch = mnist.train.next_batch(100);
    sess.run(train_step,feed_dict={x: batch[0], y_: batch[1]});
    summary_str = sess.run(merged_summary_op,feed_dict={x: batch[0], y_: batch[1]});
    summary_writer.add_summary(summary_str, i);

    if i % 50 == 0:
        # note: this adds new accuracy ops to the graph every 50 steps;
        # the revised version below builds them once, outside the loop
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float32"))
        print "Step: ", i, "Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})




########evaluate
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(session=sess,feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

Revised code

Note: the code above was written when I was just starting out with TensorFlow, so its logic is rather messy. Below is a slightly cleaned-up version, which also answers the question from the first commenter. Apologies for any confusion the messy code above may have caused!

#coding:utf-8


#######softmax

import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data


######prepare data
mnist = input_data.read_data_sets('MNIST_data/',one_hot = True);


#######create the graph
x = tf.placeholder(tf.float32,shape = [None,784],name = 'x');
y_ = tf.placeholder(tf.float32,shape = [None,10],name = 'y_');
W = tf.Variable(tf.zeros([784,10]));
b = tf.Variable(tf.zeros([10]),name = 'input_bias');
y = tf.nn.softmax(tf.matmul(x,W) + b)
tf.summary.histogram('y',y);
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
tf.summary.scalar('loss_function', cross_entropy)
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float32"))
tf.summary.scalar('accuracy', accuracy)
merged_summary_op = tf.summary.merge_all()



#Cannot evaluate tensor using `eval()`: No default session is registered.
#Use `with sess.as_default()` or pass an explicit session to `eval(session=sess)`
#set sess as default
sess = tf.InteractiveSession();
init = tf.global_variables_initializer();
sess.run(init);
if tf.gfile.Exists("/tmp/mnist_logs"):
    tf.gfile.DeleteRecursively("/tmp/mnist_logs");
summary_writer = tf.summary.FileWriter('/tmp/mnist_logs',sess.graph)

#train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for i in range(1000):
    batch = mnist.train.next_batch(100);
    sess.run(train_step,feed_dict={x: batch[0], y_: batch[1]});
    summary_str = sess.run(merged_summary_op,feed_dict={x: batch[0], y_: batch[1]});
    summary_writer.add_summary(summary_str, i);

    if i % 50 == 0:
        print "Step: ", i, "Accuracy: ", sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels})

print(accuracy.eval(session=sess,feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

Launching TensorBoard

Run in a terminal:

tensorboard --logdir='/tmp/mnist_logs'   # the directory set when creating the FileWriter

Then open 0.0.0.0:6006 in a browser.

Result:

[screenshot: the resulting TensorBoard view]

An unsolved problem: when running in Jupyter, the first run works fine, but the second run raises an error; I have not solved this so far. Issue: https://github.com/tensorflow/tensorflow/issues/225

[screenshot: the error raised on the second run in Jupyter]

For now I have stopped using Jupyter and switched to Sublime + shell.
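A common workaround for this kind of error when re-running graph-building code in the same Jupyter kernel (I have not verified it against the issue above) is to reset the default graph at the top of the cell, so that nodes left over from the previous run do not pile up:

import tensorflow as tf

# clear any ops/variables created by a previous run of this cell
# before the graph is rebuilt and the session is created
tf.reset_default_graph()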

Once TensorBoard is set up and working, you can move on to the graph-organization techniques mentioned at the beginning (name scopes and the like); a rough sketch is given below.
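For example, a hedged sketch of what organizing the model above with name scopes might look like (the scope names are my own choice, nothing required by TensorBoard):

import tensorflow as tf

# group related ops under name scopes so the GRAPH tab shows one
# collapsible node per logical block instead of a flat list of ops
with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, shape=[None, 784], name='x')
    y_ = tf.placeholder(tf.float32, shape=[None, 10], name='y_')

with tf.name_scope('softmax_layer'):
    W = tf.Variable(tf.zeros([784, 10]), name='weights')
    b = tf.Variable(tf.zeros([10]), name='bias')
    y = tf.nn.softmax(tf.matmul(x, W) + b)

with tf.name_scope('loss'):
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
    tf.summary.scalar('loss_function', cross_entropy)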