How to quickly build a neural network with TensorFlow 2.0's Eager mode
How do you quickly build a neural network with TensorFlow 2.0's Eager mode? Many people without much experience are at a loss here, so this article walks through the approach step by step; hopefully it helps you solve the problem.
First, recall how the same kind of computation looks in the traditional graph mode of TensorFlow 1.x: you define placeholders and operations up front, then drive the graph through a session to get a result.
import tensorflow as tf
a = tf.constant(3.0)
b = tf.placeholder(dtype = tf.float32)
c = tf.add(a, b)
sess = tf.Session() # create a session object
init = tf.global_variables_initializer()
sess.run(init) # initialize global variables in the session
feed = {
    b: 2.0
} # feed a value for the placeholder b
c_res = sess.run(c, feed) # run the graph through the session to get the result
print(c_res)
With eager execution enabled, there is no graph building and no session: operations run immediately, so the same logic becomes an ordinary Python function.
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tf.enable_eager_execution()
def add(num1, num2):
    a = tf.convert_to_tensor(num1) # convert the numbers to TF tensors, which helps speed up the computation
    b = tf.convert_to_tensor(num2)
    c = a + b
    return c.numpy() # convert the result tensor back to a plain number
add_res = add(3.0, 4.0)
print(add_res)
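Note that tf.enable_eager_execution() and tensorflow.contrib.eager come from the TensorFlow 1.x API; in TensorFlow 2.x eager execution is on by default and tf.contrib has been removed. A minimal sketch of the same addition written against TF 2.x would be:
import tensorflow as tf # TF 2.x: eager execution is enabled by default

def add(num1, num2):
    a = tf.convert_to_tensor(num1) # convert plain numbers to tensors
    b = tf.convert_to_tensor(num2)
    return (a + b).numpy() # the op runs immediately; .numpy() gives back a plain value

print(add(3.0, 4.0)) # 7.0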
Next, load the iris dataset with scikit-learn and preprocess it for the network.
from sklearn import datasets, preprocessing, model_selection
data = datasets.load_iris() # load the dataset into memory
x = preprocessing.MinMaxScaler(feature_range = (-1, 1)).fit_transform(data['data']) # scale the features into (-1, 1) so the network can learn them more easily
# Represent the different species as one-hot vectors: with three species they become (1,0,0), (0,1,0) and (0,0,1)
y = preprocessing.OneHotEncoder(sparse = False).fit_transform(data['target'].reshape(-1, 1))
x_train, x_test, y_train, y_test = model_selection.train_test_split(x, y, test_size = 0.25, stratify = y) # split the data into a training set and a test set
print(len(x_train))
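As a quick sanity check (a small illustrative snippet using the variables defined above), you can print a few rows to confirm the scaling and the one-hot encoding:
print(x[:2]) # features scaled into [-1, 1]
print(data['target'][:2], y[:2]) # integer labels next to their one-hot vectors
print(x_train.shape, x_test.shape) # roughly a 75/25 split of the 150 samples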
Now define the classification model: a tanh hidden layer followed by a linear output layer.
class IrisClassifyModel(object):
    def __init__(self, hidden_unit, output_unit):
        # Only two layers are built here; the first one receives the input features
        self.hidden_layer = tf.keras.layers.Dense(units = hidden_unit, activation = tf.nn.tanh, use_bias = True, name = "hidden_layer")
        self.output_layer = tf.keras.layers.Dense(units = output_unit, activation = None, use_bias = True, name = "output_layer")

    def __call__(self, inputs):
        return self.output_layer(self.hidden_layer(inputs))
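The same two-layer network could also be written with tf.keras.Sequential; this is only an equivalent sketch for comparison, and the rest of the walkthrough keeps the hand-written class above.
sequential_model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation = tf.nn.tanh, name = "hidden_layer"),
    tf.keras.layers.Dense(3, activation = None, name = "output_layer"),
])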
# Push one batch of data through the network to check that it runs correctly
model = IrisClassifyModel(10, 3)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
for x, y in tfe.Iterator(train_dataset.batch(32)):
    output = model(x)
    print(output.numpy())
    break
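Under TF 2.x, tfe.Iterator is no longer needed: a tf.data.Dataset can be iterated directly with a plain Python for loop, for example:
for x, y in train_dataset.batch(32):
    print(model(x).numpy())
    break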
Define the loss as the softmax cross-entropy summed over the batch.
def make_loss(model, inputs, labels):
    return tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits_v2(logits = model(inputs), labels = labels))
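In TF 2.x the _v2 suffix is dropped and the op is called tf.nn.softmax_cross_entropy_with_logits; a sketch of the equivalent loss would be:
def make_loss_v2(model, inputs, labels):
    # same semantics as above, just the TF 2.x name of the op
    return tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits = model(inputs), labels = labels))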
Create an Adam optimizer and a training step; with eager execution enabled, minimize takes a callable that recomputes the loss.
opt = tf.train.AdamOptimizer(learning_rate = 0.01)
def train(model, x, y):
    opt.minimize(lambda: make_loss(model, x, y))
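In TF 2.x, tf.train.AdamOptimizer is replaced by tf.keras.optimizers.Adam and the usual pattern is an explicit tf.GradientTape. A sketch of an equivalent training step, assuming the model exposes its two Dense layers as in the class above, might look like this:
opt_v2 = tf.keras.optimizers.Adam(learning_rate = 0.01)

def train_step(model, x, y):
    with tf.GradientTape() as tape:
        logits = model(x) # forward pass recorded on the tape
        loss = tf.reduce_sum(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = y))
    variables = model.hidden_layer.trainable_variables + model.output_layer.trainable_variables
    grads = tape.gradient(loss, variables) # backpropagate through the recorded ops
    opt_v2.apply_gradients(zip(grads, variables))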
Use an accuracy metric to keep track of how often the predicted class matches the true class.
accuracy = tfe.metrics.Accuracy()
def check_accuracy(model, x_batch, y_batch): # measure how accurate the network's predictions are
    accuracy(tf.argmax(model(tf.constant(x_batch)), axis = 1), tf.argmax(tf.constant(y_batch), axis = 1))
    return accuracy
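In TF 2.x, tfe.metrics is replaced by tf.keras.metrics; an equivalent sketch would be:
accuracy_v2 = tf.keras.metrics.Accuracy()

def check_accuracy_v2(model, x_batch, y_batch):
    accuracy_v2.update_state(tf.argmax(y_batch, axis = 1),        # true classes
                             tf.argmax(model(x_batch), axis = 1)) # predicted classes
    return accuracy_v2.result().numpy()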
Now put everything together and train for 50 epochs, recording the accuracy after each epoch.
import numpy as np
model = IrisClassifyModel(10, 3)
epochs = 50
acc_history = np.zeros(epochs)
for epoch in range(epochs):
    for (x_batch, y_batch) in tfe.Iterator(train_dataset.shuffle(1000).batch(32)):
        train(model, x_batch, y_batch)
        acc = check_accuracy(model, x_batch, y_batch)
    # the Accuracy metric is never reset, so this records the running accuracy over all batches seen so far
    acc_history[epoch] = acc.result().numpy()
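Since a test set was also held out, the trained network can be checked for generalisation in the same way; a short sketch using the variables defined above:
test_accuracy = tfe.metrics.Accuracy()
test_accuracy(tf.argmax(tf.constant(y_test), axis = 1), # true classes first, then predictions
              tf.argmax(model(tf.constant(x_test)), axis = 1))
print("test accuracy:", test_accuracy.result().numpy())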
import matplotlib.pyplot as plt
plt.figure()
plt.plot(acc_history)
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.show()
Having read the above, do you now know how to quickly build a neural network with TensorFlow 2.0's Eager mode? If you would like to learn more skills or related content, feel free to follow the 创新互联 industry news channel. Thanks for reading!