
Using the Keras loss function categorical_crossentropy for classification

The code is as follows:

from keras.models import Sequential
from keras.layers import Dense, Dropout

def prefix_cnn_show(train_x, train_y):
    model = Sequential()
    # Fully connected hidden layer over 10 input features
    model.add(Dense(64, activation='relu', input_dim=10))
    model.add(Dropout(0.2))

    # 6 output classes with softmax probabilities
    model.add(Dense(6, activation='softmax'))
    model.summary()

    # categorical_crossentropy expects one-hot labels;
    # sparse_categorical_crossentropy expects integer labels
    model.compile(loss='categorical_crossentropy',
                  optimizer="rmsprop",
                  metrics=['accuracy'])

    model.fit(train_x, train_y, epochs=50, batch_size=64)
    return model
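
The comment in compile() also mentions sparse_categorical_crossentropy. As a minimal sketch (same architecture as above; the function name prefix_cnn_show_sparse and the integer-label argument are only illustrative), the difference is that it takes integer class labels directly, so no one-hot encoding of train_y is needed:

from keras.models import Sequential
from keras.layers import Dense, Dropout

def prefix_cnn_show_sparse(train_x, train_y_int):
    # train_y_int holds integer class ids 0..5 instead of one-hot vectors
    model = Sequential()
    model.add(Dense(64, activation='relu', input_dim=10))
    model.add(Dropout(0.2))
    model.add(Dense(6, activation='softmax'))

    # sparse_categorical_crossentropy computes the same loss as
    # categorical_crossentropy but expects integer labels
    model.compile(loss='sparse_categorical_crossentropy',
                  optimizer="rmsprop",
                  metrics=['accuracy'])

    model.fit(train_x, train_y_int, epochs=50, batch_size=64)
    return model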

For multi-class Y values, the labels need to be converted to one-hot vectors (i.e. expanded by one dimension):

y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)

For example, print(np_utils.to_categorical([1,3,4,5,0,2])) outputs:

[[0. 1. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0.]
 [0. 0. 0. 0. 1. 0.]
 [0. 0. 0. 0. 0. 1.]
 [1. 0. 0. 0. 0. 0.]
 [0. 0. 1. 0. 0. 0.]]
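
To go the other way, from a one-hot vector back to the original integer label, np.argmax over the last axis can be used (a small illustration, not part of the original code):

import numpy as np
from keras.utils import np_utils

y = np_utils.to_categorical([1, 3, 4, 5, 0, 2])
# The position of the 1 in each row is the original class id
print(np.argmax(y, axis=1))  # [1 3 4 5 0 2]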

Therefore, with categorical_crossentropy the prediction for each sample is a vector of the same dimension, one value per class:

result = model.predict(X_test)
for r in result:
    print([round(x, 2) for x in r])

The output is as follows:

[0.0, 0.02, 0.97, 0.01, 0.0, 0.0]
[0.0, 0.97, 0.03, 0.0, 0.0, 0.0]
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.0, 0.02, 0.98, 0.01, 0.0]
[0.0, 0.96, 0.04, 0.0, 0.0, 0.0]
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
[0.0, 0.02, 0.97, 0.01, 0.0, 0.0]
[0.01, 0.0, 0.0, 0.02, 0.98, 0.0]
[1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
Test loss: 0.10338683462692821
Test accuracy: 0.9860748472267695
Baseline Error: 1.39%
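
The test loss, accuracy and baseline error shown above are the usual way of reporting such a run; a minimal sketch of how they can be computed, assuming X_test and a one-hot encoded y_test as in the earlier snippet:

# y_test must be one-hot encoded to match categorical_crossentropy
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
print('Baseline Error: %.2f%%' % (100 - score[1] * 100))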

For such a classification result, each prediction vector can be read as the probabilities that the sample x falls into each of the known classes.

The larger a probability, the closer the prediction is to that class being the true one.
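
To turn each probability vector into a concrete class prediction, the index of the largest probability is typically taken (a sketch, assuming result is the output of model.predict above):

import numpy as np

# The predicted class is the position of the highest probability
predicted_classes = np.argmax(result, axis=1)
print(predicted_classes)  # for the rows above: [2 1 0 3 1 0 2 4 0]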