Python Deep Learning: TensorFlow Basics

Preface

These notes are based on Andrew Ng's deeplearning.ai course; all code targets TensorFlow 2.x.

TensorFlow Basics

1. Data Preprocessing

1. Loading the data

import h5py
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.python.framework.ops import EagerTensor
from tensorflow.python.ops.resource_variable_ops import ResourceVariable
import time

Read the HDF5 files:

train_dataset = h5py.File('datasets/train_signs.h5', "r")
test_dataset = h5py.File('datasets/test_signs.h5', "r")
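
To see what a file contains, h5py exposes the stored dataset names via keys() (a quick check; the exact key names come from the file itself):

print(list(train_dataset.keys()))          # e.g. ['train_set_x', 'train_set_y', ...]
print(train_dataset['train_set_x'].shape)  # number of images x 64 x 64 x 3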

Build the training and test sets with tf.data.Dataset.from_tensor_slices; each of the four resulting variables is a TensorSliceDataset:

x_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_x'])
y_train = tf.data.Dataset.from_tensor_slices(train_dataset['train_set_y'])

x_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_x'])
y_test = tf.data.Dataset.from_tensor_slices(test_dataset['test_set_y'])

These Dataset objects are iterable; their element_spec describes the shape and dtype of each element:

print(x_train.element_spec)
print(y_train.element_spec)
print(x_test.element_spec)
print(y_test.element_spec)

out:

TensorSpec(shape=(64, 64, 3), dtype=tf.uint8, name=None)
TensorSpec(shape=(), dtype=tf.int64, name=None)
TensorSpec(shape=(64, 64, 3), dtype=tf.uint8, name=None)
TensorSpec(shape=(), dtype=tf.int64, name=None)

As shown above, each image is a 64x64x3 array. To inspect the actual values, use an iterator or a for loop with print():

# Method 1
print(next(iter(x_train)))
# Method 2
for element in x_train:
    print(element)
    break

2. tf.cast() and tf.reshape()

These are the type-casting and reshaping functions, respectively; float32 is the usual working dtype.

Normalize the data and flatten each image into a column vector:

def normalize(image):
    # Cast to float32 and scale pixel values into [0, 1)
    image = tf.cast(image, tf.float32) / 256.0
    # Flatten the 64x64x3 image into a (12288, 1) column vector
    image = tf.reshape(image, [-1, 1])
    return image

3. map()

To apply a function to every element of a dataset, use map():

new_train = x_train.map(normalize)
new_test = x_test.map(normalize)

After this transformation, the element spec of the training data is:

TensorSpec(shape=(12288, 1), dtype=tf.float32, name=None)
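
This can be confirmed directly on the mapped dataset (a quick check using the new_train variable defined above):

print(new_train.element_spec)
# TensorSpec(shape=(12288, 1), dtype=tf.float32, name=None)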

2. Forward Propagation Initialization

1. tf.constant(), tf.add(), and tf.matmul()

X = tf.constant(np.random.randn(3,1), name = "X")   # input, shape (3, 1)
W = tf.constant(np.random.randn(4,3), name = "W")   # weights, shape (4, 3)
b = tf.constant(np.random.randn(4,1), name = "b")   # bias, shape (4, 1)
Y = tf.add(tf.matmul(W,X), b, name = "Y")            # Y = WX + b, shape (4, 1)
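
For reference, TensorFlow overloads the Python operators for tensors, so the same linear step can be written more compactly:

Y2 = W @ X + b      # equivalent to tf.add(tf.matmul(W, X), b)
print(Y2.shape)     # (4, 1)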

2. tf.keras.activations.sigmoid() and tf.keras.activations.relu() compute activations
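
A minimal sketch of calling these two activations on a float tensor (the example values are arbitrary):

z = tf.constant([-1.0, 0.0, 2.5])
print(tf.keras.activations.sigmoid(z))   # elementwise 1 / (1 + e^-z): [0.269, 0.5, 0.924]
print(tf.keras.activations.relu(z))      # elementwise max(0, z): [0.0, 0.0, 2.5]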

3. tf.one_hot(label, depth, axis=0) creates one-hot encodings

Here label is a scalar or a vector of class indices, depth is the number of classes, and axis=0 places the new one-hot dimension along the rows.

For example, with a scalar input:

label = tf.constant(0)
depth = 4
tf.one_hot(label, depth, axis=0)

out:

tf.Tensor([1. 0. 0. 0.], shape=(4,), dtype=float32)

With a vector input:

label = tf.constant([1,2])
depth = 4
tf.one_hot(label, depth, axis=0)

out:

tf.Tensor(
[[0. 0.]
 [1. 0.]
 [0. 1.]
 [0. 0.]], shape=(4, 2), dtype=float32)
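
The same call can be mapped over an entire label dataset. The sketch below assumes the 6-class SIGNS labels loaded in section 1 and reshapes each encoding into a column vector to match the normalized images:

def one_hot_matrix(label, depth=6):
    # one-hot encode a single scalar label as a (depth, 1) column vector
    return tf.reshape(tf.one_hot(label, depth, axis=0), (depth, 1))

new_y_train = y_train.map(one_hot_matrix)
new_y_test = y_test.map(one_hot_matrix)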

4. tf.Variable() to initialize the network parameters W and b

Randomly initialize W and b, here using the Glorot (Xavier) normal initializer:

def initialize_parameters():
    initializer = tf.keras.initializers.GlorotNormal(seed=1)
    W = tf.Variable(initializer(shape=(25, 12288)))
    b = tf.Variable(initializer(shape=(25, 1)))
    parameters = {"W": W, "b": b}
    return parameters

Another approach (for reference only):

w = tf.Variable(tf.random.normal([25, 12288], stddev=0), name="w")
b = tf.Variable(tf.zeros([25, 1]), name="b")

Data created with tf.Variable() is mutable, while tf.constant() creates constants that cannot be changed.
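
A small illustration of the difference:

v = tf.Variable(3.0)
v.assign_add(1.0)     # variables can be updated in place; v is now 4.0
c = tf.constant(3.0)
# c.assign(4.0)       # constants have no assign method; this would raise an error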

5. tf.keras.losses.binary_crossentropy() computes the cross-entropy loss

cost = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true = labels, y_pred = logits, from_logits=True))
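
Wrapped in a helper, this becomes the compute_cost function used by the training loop below (a sketch; the exact version from the assignment may differ):

def compute_cost(logits, labels):
    # logits: raw outputs of the last linear layer; labels: matching one-hot targets
    cost = tf.reduce_mean(
        tf.keras.losses.binary_crossentropy(y_true=labels, y_pred=logits, from_logits=True))
    return cost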

3. Training the Model

This section mostly puts the pieces above together into a full training loop:

def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
          num_epochs = 1500, minibatch_size = 32, print_cost = True):  
    
    costs = []   # To keep track of the cost
    
    # Initialize your parameters
    parameters = initialize_parameters()

    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    optimizer = tf.keras.optimizers.SGD(learning_rate)

    # Batch the datasets; prefetch keeps the next batches loading while the current one trains
    X_train = X_train.batch(minibatch_size, drop_remainder=True).prefetch(8)
    Y_train = Y_train.batch(minibatch_size, drop_remainder=True).prefetch(8)

    # Do the training loop
    for epoch in range(num_epochs):

        epoch_cost = 0.
        
        for (minibatch_X, minibatch_Y) in zip(X_train, Y_train):
            # Select a minibatch
            with tf.GradientTape() as tape:
                # 1. predict
                Z3 = forward_propagation(minibatch_X, parameters)
                # 2. loss
                minibatch_cost = compute_cost(Z3, minibatch_Y)
                
            trainable_variables = [W1, b1, W2, b2, W3, b3]
            grads = tape.gradient(minibatch_cost, trainable_variables)
            optimizer.apply_gradients(zip(grads, trainable_variables))
            epoch_cost += minibatch_cost / minibatch_size

        # Print the cost every epoch
        if print_cost == True and epoch % 10 == 0:
            print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
        if print_cost == True and epoch % 5 == 0:
            costs.append(epoch_cost)

    # Plot the cost
    plt.plot(np.squeeze(costs))
    plt.ylabel('cost')
    plt.xlabel('iterations (per fives)')
    plt.title("Learning rate =" + str(learning_rate))
    plt.show()

    # Save the parameters in a variable
    print ("Parameters have been trained!")
    return parameters
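
A hypothetical end-to-end call, assuming new_train / new_test are the normalized image datasets from section 1 and new_y_train / new_y_test are the one-hot label datasets sketched above (initialize_parameters and forward_propagation must define the full three-layer network the loop expects):

parameters = model(new_train, new_y_train, new_test, new_y_test,
                   learning_rate=0.0001, num_epochs=100, minibatch_size=32)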

Reference

deeplearning.ai by Andrew Ng on Coursera

