A Pitfall in Andrew Ng's Deep Learning Course 2, Week 3 Programming Assignment (TensorFlow Tutorial)

Posted: 2022-11-19 16:09:18

Probably because Andrew Ng uses Python 3 while I'm on Python 2.7, I ran into the following pitfall:

In the helper file tf_utils.py, the function random_mini_batches(X, Y, mini_batch_size = 64, seed = 0) computes math.floor(m/mini_batch_size). In Python 3, math.floor returns an int, so the course code runs fine; in Python 2.7 it returns a float, and passing a float to range() raises a TypeError. Changing it to int(math.floor(m/mini_batch_size)) fixes the problem.
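A minimal sketch of the difference (my own demo, not part of the assignment); the failing call is guarded so the snippet runs under both interpreters:

    import math

    m, mini_batch_size = 148, 64
    num = math.floor(m / mini_batch_size)
    print(type(num))    # Python 2.7: <type 'float'>; Python 3: <class 'int'>
    try:
        range(0, num)   # Python 2.7 raises: range() integer end argument expected, got float
    except TypeError as e:
        print(e)

    # Wrapping the result in int() works under both interpreters:
    num = int(math.floor(m / mini_batch_size))
    # Integer floor division would also work, and is the idiomatic spelling:
    # num = m // mini_batch_size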

Here is the function in question:

    import math
    import numpy as np

    def random_mini_batches(X, Y, mini_batch_size=64, seed=0):
        """
        Creates a list of random minibatches from (X, Y)

        Arguments:
        X -- input data, of shape (input size, number of examples)
        Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
        mini_batch_size -- size of the mini-batches, integer
        seed -- this is only for the purpose of grading, so that your "random" minibatches are the same as ours.

        Returns:
        mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
        """
        m = X.shape[1]          # number of training examples
        mini_batches = []
        np.random.seed(seed)

        # Step 1: Shuffle (X, Y)
        permutation = list(np.random.permutation(m))
        shuffled_X = X[:, permutation]
        shuffled_Y = Y[:, permutation].reshape((Y.shape[0], m))

        # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
        # num_complete_minibatches = math.floor(m/mini_batch_size)      # original <------ the pitfall is here
        num_complete_minibatches = int(math.floor(m/mini_batch_size))   # <------ after the fix
        for k in range(0, num_complete_minibatches):
            mini_batch_X = shuffled_X[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
            mini_batch_Y = shuffled_Y[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        # Handling the end case (last mini-batch < mini_batch_size)
        if m % mini_batch_size != 0:
            mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
            mini_batch = (mini_batch_X, mini_batch_Y)
            mini_batches.append(mini_batch)

        return mini_batches
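As a quick sanity check of the fixed function (the shapes below are made up for illustration, not the assignment's data):

    import numpy as np

    X = np.random.randn(12288, 148)          # 148 examples, so the last mini-batch is partial
    Y = np.random.randint(0, 2, (1, 148))    # dummy 0/1 labels
    batches = random_mini_batches(X, Y, mini_batch_size=64, seed=0)
    print(len(batches))                      # 3: two full batches of 64 plus a final batch of 20
    for mini_batch_X, mini_batch_Y in batches:
        print(mini_batch_X.shape, mini_batch_Y.shape)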