Binary image classifier in Keras shows properly decreasing loss but constant accuracy [closed]


I have a basic Keras image classifier for grayscale 64×64 images pulled from a local folder. The classifier runs without errors, but accuracy stays at a near-constant 50% across epochs, so some bug is quietly breaking the program.

import numpy as np
import tensorflow
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dropout, MaxPool2D, Flatten, Dense

model = Sequential()
trainData = tensorflow.keras.preprocessing.image_dataset_from_directory(
    directory='TrainingData/',
    labels='inferred',
    label_mode='int',
    color_mode="grayscale",
    batch_size=5,
    image_size=(64, 64))
testData = tensorflow.keras.preprocessing.image_dataset_from_directory(
    directory='TestingData/',
    labels='inferred',
    label_mode='int',
    color_mode="grayscale",
    batch_size=5,
    image_size=(64, 64))

trainDataImage = np.concatenate([ x for x, y in trainData ], axis=0)
trainDataLabel = np.concatenate([ y for x, y in trainData ], axis=0)

testDataImage  = np.concatenate([ x for x, y in testData  ], axis=0)
testDataLabel  = np.concatenate([ y for x, y in testData  ], axis=0)
model.add(Conv2D(60, kernel_size = 1, activation='relu', input_shape=(64, 64, 1), padding='same'))
model.add(Dropout(0.2))
model.add(Conv2D(35, kernel_size = 1, activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(MaxPool2D(2))
model.add(Conv2D(20, kernel_size = 1, activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(MaxPool2D(2))
model.add(Conv2D(10, kernel_size = 1, activation='relu', padding='same'))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(2, activation='softmax'))
model.summary()
def Reshaper(var, imNumber, isImage):
    # np.reshape returns a new array, so the result must be assigned back to var.
    if isImage:
        var = np.reshape(var, (imNumber, 64, 64, 1))
    else:
        var = np.reshape(var, (imNumber, 1))
    return var

trainDataImage = Reshaper(trainDataImage, 2800, True)
trainDataLabel = Reshaper(trainDataLabel, 2800, False)

def OneHotEncode(DataLabel, labelNum):
    OneHot = np.zeros((labelNum, 2))
    count = 0
    while count < labelNum:
        OneHot[count][DataLabel[count].astype(int)] = 1
        count = count + 1
    return OneHot

trainDataLabel = OneHotEncode(trainDataLabel, 2800)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

model.fit(trainDataImage, trainDataLabel, epochs=5, batch_size=5)

testDataImage = Reshaper(testDataImage, 400, True)
testDataLabel = Reshaper(testDataLabel, 400, False)
testDataLabel = OneHotEncode(testDataLabel, 400)

output = model.evaluate(testDataImage, testDataLabel, verbose=True, batch_size=5)

print("The model loss and accuracy respectively:", output)

model.save('Model.h5')

Original link: https://stackoverflow.com//questions/71899472/binary-image-classifer-in-keras-shows-properly-decreasing-loss-but-constant-accu

Replies

  • Jasper Fadden replied:

    I was able to solve this. I'm still not sure exactly what the problem was, but the bug was in the lines that initialize trainData/testData and in the code that splits them into images and labels. I fixed it by simply importing image_dataset_loader, which lets me load from a file structure where the dataset is already split into images and labels.

    Here is what I used:

    (trainDataImage, trainDataLabel), (testDataImage, testDataLabel) = image_dataset_loader.load('./Dataset', ['TrainingData', 'TestingData'])
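
    One likely culprit, though the original poster never pinned it down: image_dataset_from_directory shuffles by default and reshuffles on every pass, so building the image array and the label array from two separate iterations over the same dataset can pair images with the wrong labels. A minimal sketch of a single-pass extraction that keeps them aligned (directory names are the ones from the question; this is not confirmed as the actual fix):

    import numpy as np
    import tensorflow

    trainData = tensorflow.keras.preprocessing.image_dataset_from_directory(
        directory='TrainingData/',
        labels='inferred',
        label_mode='int',
        color_mode="grayscale",
        batch_size=5,
        image_size=(64, 64))

    # One pass over the dataset, collecting images and labels together so the
    # shuffle order cannot drift between them.
    images, labels = [], []
    for x, y in trainData:
        images.append(x.numpy())
        labels.append(y.numpy())
    trainDataImage = np.concatenate(images, axis=0)
    trainDataLabel = np.concatenate(labels, axis=0)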
    
    2 years ago · 0 comments
  • Shreyash Pandey replied:

    Try using a balanced dataset. I ran into this problem on my last project.
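
    A quick way to check whether the classes are balanced is to count the inferred integer labels. A minimal sketch, assuming trainDataLabel still holds the raw integer labels from image_dataset_from_directory (i.e. before one-hot encoding):

    import numpy as np

    # Count how many training samples fall into each class (0 and 1).
    classes, counts = np.unique(trainDataLabel.astype(int), return_counts=True)
    print(dict(zip(classes.tolist(), counts.tolist())))

    # If one class dominates and rebalancing the folders is not an option,
    # weighting the loss by inverse class frequency is a common workaround,
    # e.g. model.fit(..., class_weight=class_weight).
    total = counts.sum()
    class_weight = {int(c): float(total) / (len(classes) * n) for c, n in zip(classes, counts)}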

    2 years ago · 0 comments