TFboy养成记 (TFboy Training Notes): TensorBoard

Date: 2023-12-31 10:40:20

First, a few usages worth knowing:

    with tf.name_scope("inputs"):

A name scope groups nodes into a named region of the graph, e.g. train, inputs, and so on; in TensorBoard's graph view each scope collapses into a single box.
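To make the naming concrete: a scope acts as a path prefix for every node created inside it, so a placeholder named x_input inside scope inputs shows up as inputs/x_input. The toy sketch below mimics that behavior in plain Python; it is only an illustration of the idea, not TensorFlow's implementation:

```python
from contextlib import contextmanager

_scope_stack = []  # current nesting of scope names

@contextmanager
def name_scope(name):
    # push a prefix; every node named inside inherits it
    _scope_stack.append(name)
    try:
        yield
    finally:
        _scope_stack.pop()

def node_name(name):
    # full graph path of a node, e.g. "inputs/x_input"
    return "/".join(_scope_stack + [name])

with name_scope("inputs"):
    print(node_name("x_input"))  # inputs/x_input
```

Nested `with name_scope(...)` blocks extend the path the same way, which is why the layer code later in this post nests Weights and bias scopes inside a Layer scope.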


    xs = tf.placeholder(tf.float32, [None, 1], name="x_input")

The name argument names the individual node itself.

merged = tf.summary.merge_all()

Note: a lot of the code here differs from Morvan (莫烦)'s original, mainly because TensorFlow's API has changed across versions.

This step is essential! If you want to see the loss curve, be sure to add it. Also:

    with tf.name_scope("loss"):
        loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - l2), reduction_indices=[1]))
        tf.summary.scalar("loss", loss)

Keep in mind that the curves you plot are built point by point, so the training loop changes too: every scalar you want plotted has to be run as part of the merged summary and added to the writer at each logging step.
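As for what that loss line actually computes: tf.square(ys - l2) is the elementwise squared error, the reduce_sum over reduction_indices=[1] sums it within each sample, and reduce_mean averages over the batch. A plain-numpy check of the same arithmetic, using small hypothetical arrays rather than the model's tensors:

```python
import numpy as np

ys = np.array([[1.0], [2.0], [3.0]])   # targets, shape (3, 1)
l2 = np.array([[0.5], [2.0], [4.0]])   # predictions, shape (3, 1)

# sum of squared errors per sample (axis=1 matches reduction_indices=[1])
per_sample = np.sum(np.square(ys - l2), axis=1)   # [0.25, 0.0, 1.0]

# mean over the batch, as reduce_mean does
loss = np.mean(per_sample)
print(loss)  # 1.25 / 3 ≈ 0.4167
```

With a single output column the inner sum is trivial, but the same expression also works unchanged for multi-dimensional outputs.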


The training loop:

    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            rs = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
            writer.add_summary(rs, i)

As for launching TensorBoard:

First cd into the folder containing your script in cmd or a terminal, then run:

    tensorboard --logdir=logs/

(it seems to work without the trailing slash as well; you can try it, and passing a full path here also works).

It then prints a URL such as 0.0.0.0:6006. Many Windows users can't open that address; just replace the IP with localhost, i.e. open localhost:6006 instead.

Full code:

    # -*- coding: utf-8 -*-
    """
    Created on Wed Jun 14 17:26:15 2017
    @author: Jarvis
    """
    import tensorflow as tf
    import numpy as np

    def addLayer(inputs, inSize, outSize, level, actv_func=None):
        layername = "layer%s" % level
        with tf.name_scope("Layer"):
            with tf.name_scope("Weights"):
                Weights = tf.Variable(tf.random_normal([inSize, outSize]), name="W")
                # tf.summary.histogram(layername + "/Weights", Weights)
            with tf.name_scope("bias"):
                bias = tf.Variable(tf.zeros([1, outSize]), name="bias")
                # tf.summary.histogram(layername + "/bias", bias)
            with tf.name_scope("Wx_plus_b"):
                Wx_plus_b = tf.matmul(inputs, Weights) + bias
                # tf.summary.histogram(layername + "/Wx_plus_b", Wx_plus_b)
            if actv_func is None:
                outputs = Wx_plus_b
            else:
                outputs = actv_func(Wx_plus_b)
            tf.summary.histogram(layername + "/outputs", outputs)
            return outputs

    x_data = np.linspace(-1, 1, 300)[:, np.newaxis]
    noise = np.random.normal(0, 0.05, x_data.shape).astype(np.float32)
    y_data = np.square(x_data) + 0.5 + noise

    with tf.name_scope("inputs"):
        xs = tf.placeholder(tf.float32, [None, 1], name="x_input")
        ys = tf.placeholder(tf.float32, [None, 1], name="y_input")

    l1 = addLayer(xs, 1, 10, level=1, actv_func=tf.nn.relu)
    l2 = addLayer(l1, 10, 1, level=2, actv_func=None)

    with tf.name_scope("loss"):
        loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - l2), reduction_indices=[1]))
        tf.summary.scalar("loss", loss)

    with tf.name_scope("train"):
        train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

    sess = tf.Session()
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter("logs/", sess.graph)  # crucial: create the writer before running anything
    sess.run(tf.global_variables_initializer())

    for i in range(1000):
        sess.run(train_step, feed_dict={xs: x_data, ys: y_data})
        if i % 50 == 0:
            rs = sess.run(merged, feed_dict={xs: x_data, ys: y_data})
            writer.add_summary(rs, i)
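If you just want to watch the fit converge without TensorBoard at all, the same model (one hidden ReLU layer, squared-error loss, plain gradient descent at rate 0.1) can be sketched in numpy, with the logged points collected into a list instead of summary files. This is an illustrative reimplementation of the script above, not the script itself:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 300)[:, np.newaxis]
y = np.square(x) + 0.5 + rng.normal(0, 0.05, x.shape)

# parameters of a 1 -> 10 -> 1 network, as in addLayer
W1 = rng.normal(size=(1, 10)); b1 = np.zeros((1, 10))
W2 = rng.normal(size=(10, 1)); b2 = np.zeros((1, 1))
lr, history = 0.1, []

for i in range(1000):
    # forward: hidden ReLU layer, linear output layer
    h = np.maximum(x @ W1 + b1, 0.0)
    pred = h @ W2 + b2
    loss = np.mean(np.sum((y - pred) ** 2, axis=1))
    if i % 50 == 0:
        history.append((i, loss))  # stands in for writer.add_summary
    # backward: gradients of the mean squared error
    g = 2.0 * (pred - y) / len(x)
    gW2 = h.T @ g; gb2 = g.sum(axis=0, keepdims=True)
    gh = (g @ W2.T) * (h > 0)
    gW1 = x.T @ gh; gb1 = gh.sum(axis=0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(history[0][1], history[-1][1])  # the loss should drop substantially
```

Plotting the (step, loss) pairs in history gives essentially the curve TensorBoard draws from the scalar summary.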

Many Spyder users may keep hitting inexplicable errors; if that happens, try restarting the kernel.