Binary Cross-Entropy (BCE) loss is a special case of Cross-Entropy loss used for binary and multilabel classification (taggers). It is the cross-entropy loss when only two classes are involved, and it relies on the Sigmoid activation function to map raw model outputs (logits) to probabilities.
Mathematically, it is given as:

\[
\mathrm{BCE} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right]
\]

where \(y_i\) is the true label and \(p_i\) is the predicted probability of the label.
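To make the formula concrete, here is a minimal plain-Python sketch (an addition, not part of the original snippets; `binary_cross_entropy` is a hypothetical helper name) that evaluates the loss for a single prediction:

```python
import math

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    """BCE for one sample; eps guards against log(0)."""
    p = min(max(p_pred, eps), 1 - eps)
    return -(y_true * math.log(p) + (1 - y_true) * math.log(1 - p))

print(binary_cross_entropy(1.0, 0.9))  # ~0.105: confident and correct
print(binary_cross_entropy(1.0, 0.1))  # ~2.303: confident but wrong
```

Note how the loss grows sharply as a confident prediction moves away from the true label; this is what penalizes overconfident mistakes.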
In PyTorch, `torch.nn.BCEWithLogitsLoss` fuses the Sigmoid and the BCE computation into a single, numerically stable class, so it takes raw logits rather than probabilities:

```python
# importing the libraries
import torch
import torch.nn as nn

# Binary Cross-Entropy Loss
target = torch.ones([10, 64], dtype=torch.float32)  # 64 classes, batch size = 10
output = torch.full([10, 64], 1.5)  # A prediction (logit)
pos_weight = torch.ones([64])  # All weights are equal to 1

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = criterion(output, target)  # -log(sigmoid(1.5))
print(loss)  # tensor(0.2014)
```
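As a quick sanity check (an addition, not from the original snippet): with every target equal to 1 and every logit equal to 1.5, the loss reduces to -log(sigmoid(1.5)), which can be reproduced by hand:

```python
import torch

# Reproduce the BCEWithLogitsLoss result manually.
manual = -torch.log(torch.sigmoid(torch.tensor(1.5)))
print(manual)  # tensor(0.2014), matching criterion(output, target)
```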
In TensorFlow, `tf.keras.losses.BinaryCrossentropy` computes the same loss from predicted probabilities:

```python
# importing the library
import tensorflow as tf

y_true = [[0., 1.], [0., 0.]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]

# Using 'auto'/'sum_over_batch_size' reduction type.
bce = tf.keras.losses.BinaryCrossentropy()
print(bce(y_true, y_pred).numpy())  # ~0.815

# Calling with 'sample_weight': the second sample is masked out.
print(bce(y_true, y_pred, sample_weight=[1, 0]).numpy())  # ~0.458
```
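Keras can also consume raw logits directly, mirroring PyTorch's `BCEWithLogitsLoss` above. The sketch below is an addition, not from the original article; it uses the `from_logits` flag of `tf.keras.losses.BinaryCrossentropy`, and the logit values are purely illustrative:

```python
import tensorflow as tf

# When predictions are unbounded logits rather than probabilities, pass
# from_logits=True so the sigmoid is applied inside the loss (numerically stable).
y_true = [[0., 1.], [0., 0.]]
y_logits = [[1.5, -0.8], [-0.3, 2.1]]

bce_logits = tf.keras.losses.BinaryCrossentropy(from_logits=True)
print(bce_logits(y_true, y_logits).numpy())  # ~1.41
```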