{"id":452602,"date":"2025-03-21T21:00:03","date_gmt":"2025-03-21T21:00:03","guid":{"rendered":"http:\/\/savepearlharbor.com\/?p=452602"},"modified":"-0001-11-30T00:00:00","modified_gmt":"-0001-11-29T21:00:00","slug":"","status":"publish","type":"post","link":"https:\/\/savepearlharbor.com\/?p=452602","title":{"rendered":"<span>\u041a\u0430\u0441\u0442\u043e\u043c\u043d\u044b\u0435 loss-\u0444\u0443\u043d\u043a\u0446\u0438\u0438 \u0432 TensorFlow\/Keras \u0438 PyTorch<\/span>"},"content":{"rendered":"<div><!--[--><!--]--><\/div>\n<div id=\"post-content-body\">\n<div>\n<div class=\"article-formatted-body article-formatted-body article-formatted-body_version-2\">\n<div xmlns=\"http:\/\/www.w3.org\/1999\/xhtml\">\n<p>\u041f\u0440\u0438\u0432\u0435\u0442, \u0425\u0430\u0431\u0440!<\/p>\n<p>\u0421\u0442\u0430\u043d\u0434\u0430\u0440\u0442\u043d\u044b\u0435 loss\u2011\u0444\u0443\u043d\u043a\u0446\u0438\u0438, \u0442\u0430\u043a\u0438\u0435 \u043a\u0430\u043a\u00a0MSE \u0438\u043b\u0438\u00a0CrossEntropy, \u0445\u043e\u0440\u043e\u0448\u0438, \u043d\u043e\u00a0\u0447\u0430\u0441\u0442\u043e \u0438\u043c \u043d\u0435\u00a0\u0445\u0432\u0430\u0442\u0430\u0435\u0442 \u0433\u0438\u0431\u043a\u043e\u0441\u0442\u0438 \u0434\u043b\u044f\u00a0\u0441\u043b\u043e\u0436\u043d\u044b\u0445 \u0437\u0430\u0434\u0430\u0447. \u0414\u043e\u043f\u0443\u0441\u0442\u0438\u043c, \u0435\u0441\u0442\u044c \u0442\u043e\u0442\u00a0\u0436\u0435 \u043f\u0440\u043e\u0435\u043a\u0442 \u0441\u00a0\u043e\u0433\u0440\u043e\u043c\u043d\u044b\u043c \u0434\u0438\u0441\u0431\u0430\u043b\u0430\u043d\u0441\u043e\u043c \u043a\u043b\u0430\u0441\u0441\u043e\u0432, \u0438\u043b\u0438\u00a0\u0445\u043e\u0447\u0435\u0442\u0441\u044f \u0432\u043d\u0435\u0434\u0440\u0438\u0442\u044c \u0441\u043f\u0435\u0446\u0438\u0444\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u0440\u0435\u0433\u0443\u043b\u044f\u0440\u0438\u0437\u0430\u0446\u0438\u044e \u043f\u0440\u044f\u043c\u043e \u0432\u00a0\u0444\u0443\u043d\u043a\u0446\u0438\u044e \u043f\u043e\u0442\u0435\u0440\u044c. \u0421\u0442\u0430\u043d\u0434\u0430\u0440\u0442\u043d\u044b\u0439 \u0444\u0443\u043d\u043a\u0446\u0438\u043e\u043d\u0430\u043b \u0442\u0443\u0442 \u0431\u0435\u0441\u0441\u0438\u043b\u0435\u043d\u00a0\u2014 \u0442\u0443\u0442 \u043d\u0430\u00a0\u043f\u043e\u043c\u043e\u0449\u044c \u043f\u0440\u0438\u0445\u043e\u0434\u044f\u0442 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u044b\u0435 loss&#8217;\u044b.<\/p>\n<h3>Custom Loss Functions \u0432 TensorFlow\/Keras<\/h3>\n<p>TensorFlow\/Keras \u0440\u0430\u0434\u0443\u044e\u0442 \u0443\u0434\u043e\u0431\u043d\u044b\u043c API, \u043d\u043e\u00a0\u0437\u0430\u00a0\u043f\u0440\u043e\u0441\u0442\u043e\u0442\u0443 \u043f\u0440\u0438\u0445\u043e\u0434\u0438\u0442\u0441\u044f \u043f\u043b\u0430\u0442\u0438\u0442\u044c \u0432\u043d\u0438\u043c\u0430\u043d\u0438\u0435\u043c \u043a\u00a0\u0434\u0435\u0442\u0430\u043b\u044f\u043c. <\/p>\n<h4>Focal Loss<\/h4>\n<p>Focal Loss \u043f\u043e\u043c\u043e\u0433\u0430\u0435\u0442 \u0441\u043c\u0435\u0441\u0442\u0438\u0442\u044c \u0444\u043e\u043a\u0443\u0441 \u043e\u0431\u0443\u0447\u0435\u043d\u0438\u044f \u043d\u0430\u00a0\u0441\u043b\u043e\u0436\u043d\u044b\u0435 \u043f\u0440\u0438\u043c\u0435\u0440\u044b, \u0441\u043d\u0438\u0436\u0430\u044f \u0432\u043b\u0438\u044f\u043d\u0438\u0435 \u043b\u0435\u0433\u043a\u043e \u043a\u043b\u0430\u0441\u0441\u0438\u0444\u0438\u0446\u0438\u0440\u0443\u0435\u043c\u044b\u0445 \u0434\u0430\u043d\u043d\u044b\u0445:<\/p>\n<pre><code class=\"python\">import tensorflow as tf from tensorflow.keras import backend as K  def focal_loss(gamma=2., alpha=0.25):     \"\"\"     \u0420\u0435\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u044f Focal Loss \u0434\u043b\u044f \u0437\u0430\u0434\u0430\u0447 \u0441 \u0434\u0438\u0441\u0431\u0430\u043b\u0430\u043d\u0441\u043e\u043c \u043a\u043b\u0430\u0441\u0441\u043e\u0432.     :param gamma: \u0444\u043e\u043a\u0443\u0441\u0438\u0440\u0443\u044e\u0449\u0438\u0439 \u043f\u0430\u0440\u0430\u043c\u0435\u0442\u0440 \u0434\u043b\u044f \u0443\u0441\u0438\u043b\u0435\u043d\u0438\u044f \u0432\u043b\u0438\u044f\u043d\u0438\u044f \u0441\u043b\u043e\u0436\u043d\u044b\u0445 \u043f\u0440\u0438\u043c\u0435\u0440\u043e\u0432.     :param alpha: \u043a\u043e\u044d\u0444\u0444\u0438\u0446\u0438\u0435\u043d\u0442 \u0431\u0430\u043b\u0430\u043d\u0441\u0438\u0440\u043e\u0432\u043a\u0438 \u043a\u043b\u0430\u0441\u0441\u043e\u0432.     :return: \u0444\u0443\u043d\u043a\u0446\u0438\u044f \u043f\u043e\u0442\u0435\u0440\u044c, \u043f\u0440\u0438\u043d\u0438\u043c\u0430\u044e\u0449\u0430\u044f (y_true, y_pred).     \"\"\"     def focal_loss_fixed(y_true, y_pred):         # \u0417\u0430\u0449\u0438\u0442\u0430 \u043e\u0442 log(0) \u2013 \u043e\u0431\u0440\u0435\u0437\u0430\u0435\u043c \u0437\u043d\u0430\u0447\u0435\u043d\u0438\u044f \u043f\u0440\u0435\u0434\u0441\u043a\u0430\u0437\u0430\u043d\u0438\u0439.         y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())         # \u0412\u044b\u0447\u0438\u0441\u043b\u044f\u0435\u043c \u043a\u0440\u043e\u0441\u0441-\u044d\u043d\u0442\u0440\u043e\u043f\u0438\u044e \u0434\u043b\u044f \u043a\u0430\u0436\u0434\u043e\u0433\u043e \u043f\u0440\u0438\u043c\u0435\u0440\u0430.         cross_entropy = -y_true * tf.math.log(y_pred)         # \u041f\u0440\u0438\u043c\u0435\u043d\u044f\u0435\u043c \u0432\u0435\u0441 \u0434\u043b\u044f \"\u0442\u044f\u0436\u0451\u043b\u044b\u0445\" \u043f\u0440\u0438\u043c\u0435\u0440\u043e\u0432.         weight = alpha * tf.pow(1 - y_pred, gamma)         loss = weight * cross_entropy         # \u0423\u0441\u0440\u0435\u0434\u043d\u044f\u0435\u043c \u043f\u043e \u0431\u0430\u0442\u0447\u0443 \u0438 \u043a\u043b\u0430\u0441\u0441\u0430\u043c.         return tf.reduce_mean(tf.reduce_sum(loss, axis=-1))     return focal_loss_fixed  # \u041f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f Focal Loss: if __name__ == \"__main__\":     # \u0422\u0435\u0441\u0442\u043e\u0432\u044b\u0435 \u0434\u0430\u043d\u043d\u044b\u0435 \u0434\u043b\u044f \u043e\u0442\u043b\u0430\u0434\u043a\u0438 (\u0434\u0430, \u044f \u0442\u043e\u0436\u0435 \u043b\u044e\u0431\u043b\u044e \u043c\u0430\u043b\u0435\u043d\u044c\u043a\u0438\u0435 \u044d\u043a\u0441\u043f\u0435\u0440\u0438\u043c\u0435\u043d\u0442\u044b)     y_true = tf.constant([[1, 0], [0, 1]], dtype=tf.float32)     y_pred = tf.constant([[0.9, 0.1], [0.2, 0.8]], dtype=tf.float32)          loss_fn = focal_loss(gamma=2.0, alpha=0.25)     loss_value = loss_fn(y_true, y_pred)     print(\"Focal Loss:\", loss_value.numpy())<\/code><\/pre>\n<h4>\u0418\u043d\u0442\u0435\u0433\u0440\u0430\u0446\u0438\u044f \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0433\u043e loss \u0432 \u043c\u043e\u0434\u0435\u043b\u044c Keras<\/h4>\n<p>\u0421\u043e\u0437\u0434\u0430\u0434\u0438\u043c \u043f\u0440\u043e\u0441\u0442\u0443\u044e CNN\u2011\u043c\u043e\u0434\u0435\u043b\u044c \u0434\u043b\u044f\u00a0\u0440\u0430\u0441\u043f\u043e\u0437\u043d\u0430\u0432\u0430\u043d\u0438\u044f \u0438\u0437\u043e\u0431\u0440\u0430\u0436\u0435\u043d\u0438\u0439 \u0438 \u043f\u043e\u0434\u043a\u043b\u044e\u0447\u0438\u043c Focal Loss:<\/p>\n<pre><code class=\"python\">import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense  def create_model(input_shape=(28, 28, 1), num_classes=10):     model = Sequential([         Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape),         MaxPooling2D(pool_size=(2, 2)),         Flatten(),         Dense(128, activation='relu'),         Dense(num_classes, activation='softmax')     ])     return model  # \u041a\u043e\u043c\u043f\u0438\u043b\u0438\u0440\u0443\u0435\u043c \u043c\u043e\u0434\u0435\u043b\u044c \u0441 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0439 \u0444\u0443\u043d\u043a\u0446\u0438\u0435\u0439 \u043f\u043e\u0442\u0435\u0440\u044c model = create_model() model.compile(optimizer='adam', loss=focal_loss(gamma=2.0, alpha=0.25), metrics=['accuracy'])  # \u0421\u043e\u0437\u0434\u0430\u0434\u0438\u043c \u0442\u0435\u0441\u0442\u043e\u0432\u044b\u0435 \u0434\u0430\u043d\u043d\u044b\u0435 (\u043d\u0430\u0431\u043e\u0440 \u0438\u0437 \u0441\u043b\u0443\u0447\u0430\u0439\u043d\u044b\u0445 \u0438\u0437\u043e\u0431\u0440\u0430\u0436\u0435\u043d\u0438\u0439 \u0438 \u043c\u0435\u0442\u043e\u043a) import numpy as np X_train = np.random.rand(100, 28, 28, 1) y_train = tf.keras.utils.to_categorical(np.random.randint(0, 10, 100), num_classes=10)  print(\"\u0417\u0430\u043f\u0443\u0441\u043a\u0430\u0435\u043c \u043e\u0431\u0443\u0447\u0435\u043d\u0438\u0435 \u043c\u043e\u0434\u0435\u043b\u0438 \u0441 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u044b\u043c Focal Loss...\") model.fit(X_train, y_train, epochs=3, batch_size=16)<\/code><\/pre>\n<p>\u041c\u043e\u0434\u0435\u043b\u044c \u043e\u0431\u0443\u0447\u0430\u0435\u0442\u0441\u044f \u0438 \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u044b \u0441\u0445\u043e\u0434\u044f\u0442\u0441\u044f.<\/p>\n<h4>\u041d\u044e\u0430\u043d\u0441\u044b \u0432\u044b\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u044f \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u043e\u0432<\/h4>\n<p>\u041d\u0435\u043b\u044c\u0437\u044f \u0437\u0430\u0431\u044b\u0432\u0430\u0442\u044c\u00a0\u2014 \u043b\u044e\u0431\u044b\u0435 \u043e\u043f\u0435\u0440\u0430\u0446\u0438\u0438, \u0432\u044b\u043f\u043e\u043b\u043d\u044f\u0435\u043c\u044b\u0435 \u0441\u00a0numpy, \u043b\u043e\u043c\u0430\u044e\u0442 \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0435 \u0432\u044b\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435 \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u043e\u0432. \u041f\u0440\u0438\u043c\u0435\u0440 \u043f\u043b\u043e\u0445\u043e\u0439 \u043f\u0440\u0430\u043a\u0442\u0438\u043a\u0438:<\/p>\n<pre><code class=\"python\">import numpy as np import tensorflow as tf  def loss_with_numpy(y_true, y_pred):     # \u041f\u043b\u043e\u0445\u0430\u044f \u043f\u0440\u0430\u043a\u0442\u0438\u043a\u0430: \u043f\u0435\u0440\u0435\u0432\u043e\u0434\u0438\u043c \u0442\u0435\u043d\u0437\u043e\u0440\u044b \u0432 numpy \u0438 \u0440\u0430\u0437\u0440\u044b\u0432\u0430\u0435\u043c \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u043d\u044b\u0439 \u043f\u043e\u0442\u043e\u043a.     y_true_np = y_true.numpy()  # \u041e\u0439-\u043e\u0439, \u043e\u0448\u0438\u0431\u043a\u0430 \u0432\u043d\u0443\u0442\u0440\u0438 GradientTape!     y_pred_np = y_pred.numpy()     loss_np = np.mean((y_true_np - y_pred_np) ** 2)     return tf.constant(loss_np, dtype=tf.float32)  if __name__ == \"__main__\":     x = tf.constant([[1.0], [2.0]])     y_true = tf.constant([[1.5], [2.5]])          with tf.GradientTape() as tape:         tape.watch(x)         y_pred = x * 2         try:             loss = loss_with_numpy(y_true, y_pred)             grad = tape.gradient(loss, x)             print(\"Gradient:\", grad)         except Exception as e:             print(\"\u041e\u0448\u0438\u0431\u043a\u0430 \u043f\u0440\u0438 \u0432\u044b\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u0430:\", e)<\/code><\/pre>\n<p>\u041e\u0441\u0442\u0430\u0432\u0430\u0439\u0442\u0435\u0441\u044c \u0432\u00a0\u043c\u0438\u0440\u0435 \u0442\u0435\u043d\u0437\u043e\u0440\u043e\u0432\u00a0\u2014 TensorFlow \u0443\u043c\u0435\u0435\u0442 \u0432\u0441\u0451, \u0447\u0442\u043e\u00a0\u043d\u0443\u0436\u043d\u043e, \u0435\u0441\u043b\u0438 \u0432\u044b \u043d\u0435\u00a0\u0440\u0435\u0448\u0438\u0442\u0435 \u043f\u043e\u0434\u043c\u0435\u0448\u0430\u0442\u044c \u0442\u0443\u0434\u0430 numpy.<\/p>\n<h3>Custom Loss Functions \u0432 PyTorch<\/h3>\n<h4>\u0420\u0435\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u044f \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0439 loss \u0447\u0435\u0440\u0435\u0437 torch.autograd.Function<\/h4>\n<p>\u041d\u0430\u0447\u043d\u0435\u043c \u0441\u00a0\u043f\u0440\u043e\u0441\u0442\u0435\u0439\u0448\u0435\u0439 \u0440\u0435\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u0438 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0439 loss\u2011\u0444\u0443\u043d\u043a\u0446\u0438\u0438, \u043a\u043e\u0442\u043e\u0440\u0430\u044f \u0441\u0447\u0438\u0442\u0430\u0435\u0442 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u0438\u0447\u043d\u0443\u044e \u043e\u0448\u0438\u0431\u043a\u0443:<\/p>\n<pre><code class=\"python\">import torch  class CustomLossFunction(torch.autograd.Function):     @staticmethod     def forward(ctx, input, target):         \"\"\"         \u041f\u0440\u044f\u043c\u043e\u0439 \u043f\u0440\u043e\u0445\u043e\u0434: \u0432\u044b\u0447\u0438\u0441\u043b\u044f\u0435\u043c MSE.         \"\"\"         ctx.save_for_backward(input, target)         loss = torch.mean((input - target) ** 2)         return loss      @staticmethod     def backward(ctx, grad_output):         \"\"\"         \u041e\u0431\u0440\u0430\u0442\u043d\u044b\u0439 \u043f\u0440\u043e\u0445\u043e\u0434: \u0430\u043a\u043a\u0443\u0440\u0430\u0442\u043d\u043e \u0441\u0447\u0438\u0442\u0430\u0435\u043c \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u044b.         \"\"\"         input, target = ctx.saved_tensors         grad_input = grad_output * 2 * (input - target) \/ input.numel()         return grad_input, None  # \u0422\u0435\u0441\u0442\u043e\u0432\u044b\u0439 \u043f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f: if __name__ == \"__main__\":     x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)     y = torch.tensor([1.5, 2.5, 3.5])          loss = CustomLossFunction.apply(x, y)     print(\"Custom Loss (PyTorch):\", loss.item())          loss.backward()     print(\"Gradient (PyTorch):\", x.grad)<\/code><\/pre>\n<h4>Focal Loss \u0432 PyTorch<\/h4>\n<p>Focal Loss \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u0443\u0435\u0442 \u043d\u0435\u00a0\u0442\u043e\u043b\u044c\u043a\u043e \u0432\u00a0TensorFlow. \u0412\u00a0PyTorch \u043c\u043e\u0436\u043d\u043e \u0441\u0434\u0435\u043b\u0430\u0442\u044c \u043d\u0435\u00a0\u0445\u0443\u0436\u0435:<\/p>\n<pre><code class=\"python\">import torch import torch.nn as nn import torch.nn.functional as F  class FocalLoss(nn.Module):     def __init__(self, alpha=0.25, gamma=2.0, reduction='mean'):         super(FocalLoss, self).__init__()         self.alpha = alpha         self.gamma = gamma         self.reduction = reduction      def forward(self, inputs, targets):         # \u0415\u0441\u043b\u0438 inputs \u2013 \u043b\u043e\u0433\u0438\u0442\u044b, \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0435\u043c sigmoid \u0434\u043b\u044f \u043f\u0440\u0435\u043e\u0431\u0440\u0430\u0437\u043e\u0432\u0430\u043d\u0438\u044f         BCE_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction='none')         pt = torch.exp(-BCE_loss)         focal_loss = self.alpha * (1 - pt) ** self.gamma * BCE_loss                  if self.reduction == 'mean':             return focal_loss.mean()         elif self.reduction == 'sum':             return focal_loss.sum()         else:             return focal_loss  # \u0422\u0435\u0441\u0442\u0438\u0440\u0443\u0435\u043c Focal Loss \u0432 PyTorch: if __name__ == \"__main__\":     inputs = torch.tensor([[0.2, -1.0], [1.5, 0.3]], requires_grad=True)     targets = torch.tensor([[0, 1], [1, 0]], dtype=torch.float32)          criterion = FocalLoss(alpha=0.25, gamma=2.0)     loss = criterion(inputs, targets)     print(\"Focal Loss (PyTorch):\", loss.item())          loss.backward()     print(\"Gradients (Focal Loss):\", inputs.grad)<\/code><\/pre>\n<h4>\u0420\u0430\u0431\u043e\u0442\u0430 \u0441 \u044d\u043c\u0431\u0435\u0434\u0434\u0438\u043d\u0433\u0430\u043c\u0438<\/h4>\n<p>\u0414\u043b\u044f\u00a0\u0437\u0430\u0434\u0430\u0447, \u0433\u0434\u0435 \u043d\u0443\u0436\u043d\u043e \u0441\u0440\u0430\u0432\u043d\u0438\u0432\u0430\u0442\u044c \u0441\u0445\u043e\u0436\u0435\u0441\u0442\u044c \u043e\u0431\u044a\u0435\u043a\u0442\u043e\u0432, \u043f\u043e\u0434\u043e\u0439\u0434\u0443\u0442 Contrastive \u0438 Triplet Loss. \u0420\u0435\u0430\u043b\u0438\u0437\u0443\u0435\u043c \u0438\u0445 \u0432\u00a0PyTorch.<\/p>\n<p><strong>Contrastive Loss<\/strong><\/p>\n<pre><code class=\"python\">class ContrastiveLoss(nn.Module):     def __init__(self, margin=1.0):         super(ContrastiveLoss, self).__init__()         self.margin = margin      def forward(self, output1, output2, label):         # \u0415\u0432\u043a\u043b\u0438\u0434\u043e\u0432\u0430 \u0434\u0438\u0441\u0442\u0430\u043d\u0446\u0438\u044f \u043c\u0435\u0436\u0434\u0443 \u044d\u043c\u0431\u0435\u0434\u0434\u0438\u043d\u0433\u0430\u043c\u0438         euclidean_distance = F.pairwise_distance(output1, output2)         loss_contrastive = torch.mean((1 - label) * torch.pow(euclidean_distance, 2) +                                       (label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))         return loss_contrastive  # \u041f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f Contrastive Loss: if __name__ == \"__main__\":     output1 = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)     output2 = torch.tensor([[1.5, 2.5], [2.5, 3.5]], requires_grad=True)     # label: 0 \u0434\u043b\u044f \u043f\u043e\u0445\u043e\u0436\u0438\u0445 \u043f\u0430\u0440, 1 \u0434\u043b\u044f \u043d\u0435\u043f\u043e\u0445\u043e\u0436\u0438\u0445.     label = torch.tensor([0, 1], dtype=torch.float32)          criterion = ContrastiveLoss(margin=1.0)     loss = criterion(output1, output2, label)     print(\"Contrastive Loss:\", loss.item())     loss.backward()<\/code><\/pre>\n<p><strong>Triplet Loss<\/strong><\/p>\n<pre><code class=\"python\">class TripletLoss(nn.Module):     def __init__(self, margin=1.0):         super(TripletLoss, self).__init__()         self.margin = margin      def forward(self, anchor, positive, negative):         pos_distance = F.pairwise_distance(anchor, positive, p=2)         neg_distance = F.pairwise_distance(anchor, negative, p=2)         losses = torch.relu(pos_distance - neg_distance + self.margin)         return losses.mean()  # \u041f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f Triplet Loss: if __name__ == \"__main__\":     anchor = torch.tensor([[1.0, 2.0], [2.0, 3.0]], requires_grad=True)     positive = torch.tensor([[1.1, 2.1], [1.9, 2.9]], requires_grad=True)     negative = torch.tensor([[3.0, 4.0], [4.0, 5.0]], requires_grad=True)          criterion = TripletLoss(margin=1.0)     loss = criterion(anchor, positive, negative)     print(\"Triplet Loss:\", loss.item())     loss.backward()<\/code><\/pre>\n<hr\/>\n<p>\u0415\u0441\u043b\u0438 \u0432\u0430\u043c \u0445\u043e\u0447\u0435\u0442\u0441\u044f \u043f\u043e\u0434\u0435\u043b\u0438\u0442\u044c\u0441\u044f \u043e\u043f\u044b\u0442\u043e\u043c\u00a0\u2014 \u043f\u0438\u0448\u0438\u0442\u0435 \u0432\u00a0\u043a\u043e\u043c\u043c\u0435\u043d\u0442\u0430\u0440\u0438\u044f\u0445.<\/p>\n<p><em>\u0412\u0441\u0435 \u0430\u043a\u0442\u0443\u0430\u043b\u044c\u043d\u044b\u0435 \u043c\u0435\u0442\u043e\u0434\u044b \u0438 \u0438\u043d\u0441\u0442\u0440\u0443\u043c\u0435\u043d\u0442\u044b DS \u0438 ML \u043c\u043e\u0436\u043d\u043e \u043e\u0441\u0432\u043e\u0438\u0442\u044c \u043d\u0430 \u043e\u043d\u043b\u0430\u0439\u043d-\u043a\u0443\u0440\u0441\u0430\u0445 OTUS: <\/em><a href=\"https:\/\/otus.pw\/VXD7\/\"><em>\u0432 \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0435<\/em><\/a><em> \u043c\u043e\u0436\u043d\u043e \u043f\u043e\u0441\u043c\u043e\u0442\u0440\u0435\u0442\u044c \u0441\u043f\u0438\u0441\u043e\u043a \u0432\u0441\u0435\u0445 \u043f\u0440\u043e\u0433\u0440\u0430\u043c\u043c, \u0430 <\/em><a href=\"https:\/\/otus.pw\/R1pw\/\"><em>\u0432 \u043a\u0430\u043b\u0435\u043d\u0434\u0430\u0440\u0435<\/em><\/a><em> \u2014 \u0437\u0430\u043f\u0438\u0441\u0430\u0442\u044c\u0441\u044f \u043d\u0430 \u043e\u0442\u043a\u0440\u044b\u0442\u044b\u0435 \u0443\u0440\u043e\u043a\u0438.<\/em><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><!----><!----><\/div>\n<p><!----><!----><br \/> \u0441\u0441\u044b\u043b\u043a\u0430 \u043d\u0430 \u043e\u0440\u0438\u0433\u0438\u043d\u0430\u043b \u0441\u0442\u0430\u0442\u044c\u0438 <a href=\"https:\/\/habr.com\/ru\/articles\/892462\/\"> https:\/\/habr.com\/ru\/articles\/892462\/<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<div><!--[--><!--]--><\/div>\n<div id=\"post-content-body\">\n<div>\n<div class=\"article-formatted-body article-formatted-body article-formatted-body_version-2\">\n<div xmlns=\"http:\/\/www.w3.org\/1999\/xhtml\">\n<p>\u041f\u0440\u0438\u0432\u0435\u0442, \u0425\u0430\u0431\u0440!<\/p>\n<p>\u0421\u0442\u0430\u043d\u0434\u0430\u0440\u0442\u043d\u044b\u0435 loss\u2011\u0444\u0443\u043d\u043a\u0446\u0438\u0438, \u0442\u0430\u043a\u0438\u0435 \u043a\u0430\u043a\u00a0MSE \u0438\u043b\u0438\u00a0CrossEntropy, \u0445\u043e\u0440\u043e\u0448\u0438, \u043d\u043e\u00a0\u0447\u0430\u0441\u0442\u043e \u0438\u043c \u043d\u0435\u00a0\u0445\u0432\u0430\u0442\u0430\u0435\u0442 \u0433\u0438\u0431\u043a\u043e\u0441\u0442\u0438 \u0434\u043b\u044f\u00a0\u0441\u043b\u043e\u0436\u043d\u044b\u0445 \u0437\u0430\u0434\u0430\u0447. \u0414\u043e\u043f\u0443\u0441\u0442\u0438\u043c, \u0435\u0441\u0442\u044c \u0442\u043e\u0442\u00a0\u0436\u0435 \u043f\u0440\u043e\u0435\u043a\u0442 \u0441\u00a0\u043e\u0433\u0440\u043e\u043c\u043d\u044b\u043c \u0434\u0438\u0441\u0431\u0430\u043b\u0430\u043d\u0441\u043e\u043c \u043a\u043b\u0430\u0441\u0441\u043e\u0432, \u0438\u043b\u0438\u00a0\u0445\u043e\u0447\u0435\u0442\u0441\u044f \u0432\u043d\u0435\u0434\u0440\u0438\u0442\u044c \u0441\u043f\u0435\u0446\u0438\u0444\u0438\u0447\u0435\u0441\u043a\u0443\u044e \u0440\u0435\u0433\u0443\u043b\u044f\u0440\u0438\u0437\u0430\u0446\u0438\u044e \u043f\u0440\u044f\u043c\u043e \u0432\u00a0\u0444\u0443\u043d\u043a\u0446\u0438\u044e \u043f\u043e\u0442\u0435\u0440\u044c. \u0421\u0442\u0430\u043d\u0434\u0430\u0440\u0442\u043d\u044b\u0439 \u0444\u0443\u043d\u043a\u0446\u0438\u043e\u043d\u0430\u043b \u0442\u0443\u0442 \u0431\u0435\u0441\u0441\u0438\u043b\u0435\u043d\u00a0\u2014 \u0442\u0443\u0442 \u043d\u0430\u00a0\u043f\u043e\u043c\u043e\u0449\u044c \u043f\u0440\u0438\u0445\u043e\u0434\u044f\u0442 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u044b\u0435 loss&#8217;\u044b.<\/p>\n<h3>Custom Loss Functions \u0432 TensorFlow\/Keras<\/h3>\n<p>TensorFlow\/Keras \u0440\u0430\u0434\u0443\u044e\u0442 \u0443\u0434\u043e\u0431\u043d\u044b\u043c API, \u043d\u043e\u00a0\u0437\u0430\u00a0\u043f\u0440\u043e\u0441\u0442\u043e\u0442\u0443 \u043f\u0440\u0438\u0445\u043e\u0434\u0438\u0442\u0441\u044f \u043f\u043b\u0430\u0442\u0438\u0442\u044c \u0432\u043d\u0438\u043c\u0430\u043d\u0438\u0435\u043c \u043a\u00a0\u0434\u0435\u0442\u0430\u043b\u044f\u043c. <\/p>\n<h4>Focal Loss<\/h4>\n<p>Focal Loss \u043f\u043e\u043c\u043e\u0433\u0430\u0435\u0442 \u0441\u043c\u0435\u0441\u0442\u0438\u0442\u044c \u0444\u043e\u043a\u0443\u0441 \u043e\u0431\u0443\u0447\u0435\u043d\u0438\u044f \u043d\u0430\u00a0\u0441\u043b\u043e\u0436\u043d\u044b\u0435 \u043f\u0440\u0438\u043c\u0435\u0440\u044b, \u0441\u043d\u0438\u0436\u0430\u044f \u0432\u043b\u0438\u044f\u043d\u0438\u0435 \u043b\u0435\u0433\u043a\u043e \u043a\u043b\u0430\u0441\u0441\u0438\u0444\u0438\u0446\u0438\u0440\u0443\u0435\u043c\u044b\u0445 \u0434\u0430\u043d\u043d\u044b\u0445:<\/p>\n<pre><code class=\"python\">import tensorflow as tf from tensorflow.keras import backend as K  def focal_loss(gamma=2., alpha=0.25):     \"\"\"     \u0420\u0435\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u044f Focal Loss \u0434\u043b\u044f \u0437\u0430\u0434\u0430\u0447 \u0441 \u0434\u0438\u0441\u0431\u0430\u043b\u0430\u043d\u0441\u043e\u043c \u043a\u043b\u0430\u0441\u0441\u043e\u0432.     :param gamma: \u0444\u043e\u043a\u0443\u0441\u0438\u0440\u0443\u044e\u0449\u0438\u0439 \u043f\u0430\u0440\u0430\u043c\u0435\u0442\u0440 \u0434\u043b\u044f \u0443\u0441\u0438\u043b\u0435\u043d\u0438\u044f \u0432\u043b\u0438\u044f\u043d\u0438\u044f \u0441\u043b\u043e\u0436\u043d\u044b\u0445 \u043f\u0440\u0438\u043c\u0435\u0440\u043e\u0432.     :param alpha: \u043a\u043e\u044d\u0444\u0444\u0438\u0446\u0438\u0435\u043d\u0442 \u0431\u0430\u043b\u0430\u043d\u0441\u0438\u0440\u043e\u0432\u043a\u0438 \u043a\u043b\u0430\u0441\u0441\u043e\u0432.     :return: \u0444\u0443\u043d\u043a\u0446\u0438\u044f \u043f\u043e\u0442\u0435\u0440\u044c, \u043f\u0440\u0438\u043d\u0438\u043c\u0430\u044e\u0449\u0430\u044f (y_true, y_pred).     \"\"\"     def focal_loss_fixed(y_true, y_pred):         # \u0417\u0430\u0449\u0438\u0442\u0430 \u043e\u0442 log(0) \u2013 \u043e\u0431\u0440\u0435\u0437\u0430\u0435\u043c \u0437\u043d\u0430\u0447\u0435\u043d\u0438\u044f \u043f\u0440\u0435\u0434\u0441\u043a\u0430\u0437\u0430\u043d\u0438\u0439.         y_pred = K.clip(y_pred, K.epsilon(), 1. - K.epsilon())         # \u0412\u044b\u0447\u0438\u0441\u043b\u044f\u0435\u043c \u043a\u0440\u043e\u0441\u0441-\u044d\u043d\u0442\u0440\u043e\u043f\u0438\u044e \u0434\u043b\u044f \u043a\u0430\u0436\u0434\u043e\u0433\u043e \u043f\u0440\u0438\u043c\u0435\u0440\u0430.         cross_entropy = -y_true * tf.math.log(y_pred)         # \u041f\u0440\u0438\u043c\u0435\u043d\u044f\u0435\u043c \u0432\u0435\u0441 \u0434\u043b\u044f \"\u0442\u044f\u0436\u0451\u043b\u044b\u0445\" \u043f\u0440\u0438\u043c\u0435\u0440\u043e\u0432.         weight = alpha * tf.pow(1 - y_pred, gamma)         loss = weight * cross_entropy         # \u0423\u0441\u0440\u0435\u0434\u043d\u044f\u0435\u043c \u043f\u043e \u0431\u0430\u0442\u0447\u0443 \u0438 \u043a\u043b\u0430\u0441\u0441\u0430\u043c.         return tf.reduce_mean(tf.reduce_sum(loss, axis=-1))     return focal_loss_fixed  # \u041f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f Focal Loss: if __name__ == \"__main__\":     # \u0422\u0435\u0441\u0442\u043e\u0432\u044b\u0435 \u0434\u0430\u043d\u043d\u044b\u0435 \u0434\u043b\u044f \u043e\u0442\u043b\u0430\u0434\u043a\u0438 (\u0434\u0430, \u044f \u0442\u043e\u0436\u0435 \u043b\u044e\u0431\u043b\u044e \u043c\u0430\u043b\u0435\u043d\u044c\u043a\u0438\u0435 \u044d\u043a\u0441\u043f\u0435\u0440\u0438\u043c\u0435\u043d\u0442\u044b)     y_true = tf.constant([[1, 0], [0, 1]], dtype=tf.float32)     y_pred = tf.constant([[0.9, 0.1], [0.2, 0.8]], dtype=tf.float32)          loss_fn = focal_loss(gamma=2.0, alpha=0.25)     loss_value = loss_fn(y_true, y_pred)     print(\"Focal Loss:\", loss_value.numpy())<\/code><\/pre>\n<h4>\u0418\u043d\u0442\u0435\u0433\u0440\u0430\u0446\u0438\u044f \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0433\u043e loss \u0432 \u043c\u043e\u0434\u0435\u043b\u044c Keras<\/h4>\n<p>\u0421\u043e\u0437\u0434\u0430\u0434\u0438\u043c \u043f\u0440\u043e\u0441\u0442\u0443\u044e CNN\u2011\u043c\u043e\u0434\u0435\u043b\u044c \u0434\u043b\u044f\u00a0\u0440\u0430\u0441\u043f\u043e\u0437\u043d\u0430\u0432\u0430\u043d\u0438\u044f \u0438\u0437\u043e\u0431\u0440\u0430\u0436\u0435\u043d\u0438\u0439 \u0438 \u043f\u043e\u0434\u043a\u043b\u044e\u0447\u0438\u043c Focal Loss:<\/p>\n<pre><code class=\"python\">import tensorflow as tf from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense  def create_model(input_shape=(28, 28, 1), num_classes=10):     model = Sequential([         Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=input_shape),         MaxPooling2D(pool_size=(2, 2)),         Flatten(),         Dense(128, activation='relu'),         Dense(num_classes, activation='softmax')     ])     return model  # \u041a\u043e\u043c\u043f\u0438\u043b\u0438\u0440\u0443\u0435\u043c \u043c\u043e\u0434\u0435\u043b\u044c \u0441 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0439 \u0444\u0443\u043d\u043a\u0446\u0438\u0435\u0439 \u043f\u043e\u0442\u0435\u0440\u044c model = create_model() model.compile(optimizer='adam', loss=focal_loss(gamma=2.0, alpha=0.25), metrics=['accuracy'])  # \u0421\u043e\u0437\u0434\u0430\u0434\u0438\u043c \u0442\u0435\u0441\u0442\u043e\u0432\u044b\u0435 \u0434\u0430\u043d\u043d\u044b\u0435 (\u043d\u0430\u0431\u043e\u0440 \u0438\u0437 \u0441\u043b\u0443\u0447\u0430\u0439\u043d\u044b\u0445 \u0438\u0437\u043e\u0431\u0440\u0430\u0436\u0435\u043d\u0438\u0439 \u0438 \u043c\u0435\u0442\u043e\u043a) import numpy as np X_train = np.random.rand(100, 28, 28, 1) y_train = tf.keras.utils.to_categorical(np.random.randint(0, 10, 100), num_classes=10)  print(\"\u0417\u0430\u043f\u0443\u0441\u043a\u0430\u0435\u043c \u043e\u0431\u0443\u0447\u0435\u043d\u0438\u0435 \u043c\u043e\u0434\u0435\u043b\u0438 \u0441 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u044b\u043c Focal Loss...\") model.fit(X_train, y_train, epochs=3, batch_size=16)<\/code><\/pre>\n<p>\u041c\u043e\u0434\u0435\u043b\u044c \u043e\u0431\u0443\u0447\u0430\u0435\u0442\u0441\u044f \u0438 \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u044b \u0441\u0445\u043e\u0434\u044f\u0442\u0441\u044f.<\/p>\n<h4>\u041d\u044e\u0430\u043d\u0441\u044b \u0432\u044b\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u044f \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u043e\u0432<\/h4>\n<p>\u041d\u0435\u043b\u044c\u0437\u044f \u0437\u0430\u0431\u044b\u0432\u0430\u0442\u044c\u00a0\u2014 \u043b\u044e\u0431\u044b\u0435 \u043e\u043f\u0435\u0440\u0430\u0446\u0438\u0438, \u0432\u044b\u043f\u043e\u043b\u043d\u044f\u0435\u043c\u044b\u0435 \u0441\u00a0numpy, \u043b\u043e\u043c\u0430\u044e\u0442 \u0430\u0432\u0442\u043e\u043c\u0430\u0442\u0438\u0447\u0435\u0441\u043a\u043e\u0435 \u0432\u044b\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0435 \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u043e\u0432. \u041f\u0440\u0438\u043c\u0435\u0440 \u043f\u043b\u043e\u0445\u043e\u0439 \u043f\u0440\u0430\u043a\u0442\u0438\u043a\u0438:<\/p>\n<pre><code class=\"python\">import numpy as np import tensorflow as tf  def loss_with_numpy(y_true, y_pred):     # \u041f\u043b\u043e\u0445\u0430\u044f \u043f\u0440\u0430\u043a\u0442\u0438\u043a\u0430: \u043f\u0435\u0440\u0435\u0432\u043e\u0434\u0438\u043c \u0442\u0435\u043d\u0437\u043e\u0440\u044b \u0432 numpy \u0438 \u0440\u0430\u0437\u0440\u044b\u0432\u0430\u0435\u043c \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u043d\u044b\u0439 \u043f\u043e\u0442\u043e\u043a.     y_true_np = y_true.numpy()  # \u041e\u0439-\u043e\u0439, \u043e\u0448\u0438\u0431\u043a\u0430 \u0432\u043d\u0443\u0442\u0440\u0438 GradientTape!     y_pred_np = y_pred.numpy()     loss_np = np.mean((y_true_np - y_pred_np) ** 2)     return tf.constant(loss_np, dtype=tf.float32)  if __name__ == \"__main__\":     x = tf.constant([[1.0], [2.0]])     y_true = tf.constant([[1.5], [2.5]])          with tf.GradientTape() as tape:         tape.watch(x)         y_pred = x * 2         try:             loss = loss_with_numpy(y_true, y_pred)             grad = tape.gradient(loss, x)             print(\"Gradient:\", grad)         except Exception as e:             print(\"\u041e\u0448\u0438\u0431\u043a\u0430 \u043f\u0440\u0438 \u0432\u044b\u0447\u0438\u0441\u043b\u0435\u043d\u0438\u0438 \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u0430:\", e)<\/code><\/pre>\n<p>\u041e\u0441\u0442\u0430\u0432\u0430\u0439\u0442\u0435\u0441\u044c \u0432\u00a0\u043c\u0438\u0440\u0435 \u0442\u0435\u043d\u0437\u043e\u0440\u043e\u0432\u00a0\u2014 TensorFlow \u0443\u043c\u0435\u0435\u0442 \u0432\u0441\u0451, \u0447\u0442\u043e\u00a0\u043d\u0443\u0436\u043d\u043e, \u0435\u0441\u043b\u0438 \u0432\u044b \u043d\u0435\u00a0\u0440\u0435\u0448\u0438\u0442\u0435 \u043f\u043e\u0434\u043c\u0435\u0448\u0430\u0442\u044c \u0442\u0443\u0434\u0430 numpy.<\/p>\n<h3>Custom Loss Functions \u0432 PyTorch<\/h3>\n<h4>\u0420\u0435\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u044f \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0439 loss \u0447\u0435\u0440\u0435\u0437 torch.autograd.Function<\/h4>\n<p>\u041d\u0430\u0447\u043d\u0435\u043c \u0441\u00a0\u043f\u0440\u043e\u0441\u0442\u0435\u0439\u0448\u0435\u0439 \u0440\u0435\u0430\u043b\u0438\u0437\u0430\u0446\u0438\u0438 \u043a\u0430\u0441\u0442\u043e\u043c\u043d\u043e\u0439 loss\u2011\u0444\u0443\u043d\u043a\u0446\u0438\u0438, \u043a\u043e\u0442\u043e\u0440\u0430\u044f \u0441\u0447\u0438\u0442\u0430\u0435\u0442 \u043a\u0432\u0430\u0434\u0440\u0430\u0442\u0438\u0447\u043d\u0443\u044e \u043e\u0448\u0438\u0431\u043a\u0443:<\/p>\n<pre><code class=\"python\">import torch  class CustomLossFunction(torch.autograd.Function):     @staticmethod     def forward(ctx, input, target):         \"\"\"         \u041f\u0440\u044f\u043c\u043e\u0439 \u043f\u0440\u043e\u0445\u043e\u0434: \u0432\u044b\u0447\u0438\u0441\u043b\u044f\u0435\u043c MSE.         \"\"\"         ctx.save_for_backward(input, target)         loss = torch.mean((input - target) ** 2)         return loss      @staticmethod     def backward(ctx, grad_output):         \"\"\"         \u041e\u0431\u0440\u0430\u0442\u043d\u044b\u0439 \u043f\u0440\u043e\u0445\u043e\u0434: \u0430\u043a\u043a\u0443\u0440\u0430\u0442\u043d\u043e \u0441\u0447\u0438\u0442\u0430\u0435\u043c \u0433\u0440\u0430\u0434\u0438\u0435\u043d\u0442\u044b.         \"\"\"         input, target = ctx.saved_tensors         grad_input = grad_output * 2 * (input - target) \/ input.numel()         return grad_input, None  # \u0422\u0435\u0441\u0442\u043e\u0432\u044b\u0439 \u043f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f: if __name__ == \"__main__\":     x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)     y = torch.tensor([1.5, 2.5, 3.5])          loss = CustomLossFunction.apply(x, y)     print(\"Custom Loss (PyTorch):\", loss.item())          loss.backward()     print(\"Gradient (PyTorch):\", x.grad)<\/code><\/pre>\n<h4>Focal Loss \u0432 PyTorch<\/h4>\n<p>Focal Loss \u0441\u0443\u0449\u0435\u0441\u0442\u0432\u0443\u0435\u0442 \u043d\u0435\u00a0\u0442\u043e\u043b\u044c\u043a\u043e \u0432\u00a0TensorFlow. \u0412\u00a0PyTorch \u043c\u043e\u0436\u043d\u043e \u0441\u0434\u0435\u043b\u0430\u0442\u044c \u043d\u0435\u00a0\u0445\u0443\u0436\u0435:<\/p>\n<pre><code class=\"python\">import torch import torch.nn as nn import torch.nn.functional as F  class FocalLoss(nn.Module):     def __init__(self, alpha=0.25, gamma=2.0, reduction='mean'):         super(FocalLoss, self).__init__()         self.alpha = alpha         self.gamma = gamma         self.reduction = reduction      def forward(self, inputs, targets):         # \u0415\u0441\u043b\u0438 inputs \u2013 \u043b\u043e\u0433\u0438\u0442\u044b, \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0435\u043c sigmoid \u0434\u043b\u044f \u043f\u0440\u0435\u043e\u0431\u0440\u0430\u0437\u043e\u0432\u0430\u043d\u0438\u044f         BCE_loss = F.binary_cross_entropy_with_logits(inputs, targets, reduction='none')         pt = torch.exp(-BCE_loss)         focal_loss = self.alpha * (1 - pt) ** self.gamma * BCE_loss                  if self.reduction == 'mean':             return focal_loss.mean()         elif self.reduction == 'sum':             return focal_loss.sum()         else:             return focal_loss  # \u0422\u0435\u0441\u0442\u0438\u0440\u0443\u0435\u043c Focal Loss \u0432 PyTorch: if __name__ == \"__main__\":     inputs = torch.tensor([[0.2, -1.0], [1.5, 0.3]], requires_grad=True)     targets = torch.tensor([[0, 1], [1, 0]], dtype=torch.float32)          criterion = FocalLoss(alpha=0.25, gamma=2.0)     loss = criterion(inputs, targets)     print(\"Focal Loss (PyTorch):\", loss.item())          loss.backward()     print(\"Gradients (Focal Loss):\", inputs.grad)<\/code><\/pre>\n<h4>\u0420\u0430\u0431\u043e\u0442\u0430 \u0441 \u044d\u043c\u0431\u0435\u0434\u0434\u0438\u043d\u0433\u0430\u043c\u0438<\/h4>\n<p>\u0414\u043b\u044f\u00a0\u0437\u0430\u0434\u0430\u0447, \u0433\u0434\u0435 \u043d\u0443\u0436\u043d\u043e \u0441\u0440\u0430\u0432\u043d\u0438\u0432\u0430\u0442\u044c \u0441\u0445\u043e\u0436\u0435\u0441\u0442\u044c \u043e\u0431\u044a\u0435\u043a\u0442\u043e\u0432, \u043f\u043e\u0434\u043e\u0439\u0434\u0443\u0442 Contrastive \u0438 Triplet Loss. \u0420\u0435\u0430\u043b\u0438\u0437\u0443\u0435\u043c \u0438\u0445 \u0432\u00a0PyTorch.<\/p>\n<p><strong>Contrastive Loss<\/strong><\/p>\n<pre><code class=\"python\">class ContrastiveLoss(nn.Module):     def __init__(self, margin=1.0):         super(ContrastiveLoss, self).__init__()         self.margin = margin      def forward(self, output1, output2, label):         # \u0415\u0432\u043a\u043b\u0438\u0434\u043e\u0432\u0430 \u0434\u0438\u0441\u0442\u0430\u043d\u0446\u0438\u044f \u043c\u0435\u0436\u0434\u0443 \u044d\u043c\u0431\u0435\u0434\u0434\u0438\u043d\u0433\u0430\u043c\u0438         euclidean_distance = F.pairwise_distance(output1, output2)         loss_contrastive = torch.mean((1 - label) * torch.pow(euclidean_distance, 2) +                                       (label) * torch.pow(torch.clamp(self.margin - euclidean_distance, min=0.0), 2))         return loss_contrastive  # \u041f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f Contrastive Loss: if __name__ == \"__main__\":     output1 = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)     output2 = torch.tensor([[1.5, 2.5], [2.5, 3.5]], requires_grad=True)     # label: 0 \u0434\u043b\u044f \u043f\u043e\u0445\u043e\u0436\u0438\u0445 \u043f\u0430\u0440, 1 \u0434\u043b\u044f \u043d\u0435\u043f\u043e\u0445\u043e\u0436\u0438\u0445.     label = torch.tensor([0, 1], dtype=torch.float32)          criterion = ContrastiveLoss(margin=1.0)     loss = criterion(output1, output2, label)     print(\"Contrastive Loss:\", loss.item())     loss.backward()<\/code><\/pre>\n<p><strong>Triplet Loss<\/strong><\/p>\n<pre><code class=\"python\">class TripletLoss(nn.Module):     def __init__(self, margin=1.0):         super(TripletLoss, self).__init__()         self.margin = margin      def forward(self, anchor, positive, negative):         pos_distance = F.pairwise_distance(anchor, positive, p=2)         neg_distance = F.pairwise_distance(anchor, negative, p=2)         losses = torch.relu(pos_distance - neg_distance + self.margin)         return losses.mean()  # \u041f\u0440\u0438\u043c\u0435\u0440 \u0438\u0441\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u043d\u0438\u044f Triplet Loss: if __name__ == \"__main__\":     anchor = torch.tensor([[1.0, 2.0], [2.0, 3.0]], requires_grad=True)     positive = torch.tensor([[1.1, 2.1], [1.9, 2.9]], requires_grad=True)     negative = torch.tensor([[3.0, 4.0], [4.0, 5.0]], requires_grad=True)          criterion = TripletLoss(margin=1.0)     loss = criterion(anchor, positive, negative)     print(\"Triplet Loss:\", loss.item())     loss.backward()<\/code><\/pre>\n<hr\/>\n<p>\u0415\u0441\u043b\u0438 \u0432\u0430\u043c \u0445\u043e\u0447\u0435\u0442\u0441\u044f \u043f\u043e\u0434\u0435\u043b\u0438\u0442\u044c\u0441\u044f \u043e\u043f\u044b\u0442\u043e\u043c\u00a0\u2014 \u043f\u0438\u0448\u0438\u0442\u0435 \u0432\u00a0\u043a\u043e\u043c\u043c\u0435\u043d\u0442\u0430\u0440\u0438\u044f\u0445.<\/p>\n<p><em>\u0412\u0441\u0435 \u0430\u043a\u0442\u0443\u0430\u043b\u044c\u043d\u044b\u0435 \u043c\u0435\u0442\u043e\u0434\u044b \u0438 \u0438\u043d\u0441\u0442\u0440\u0443\u043c\u0435\u043d\u0442\u044b DS \u0438 ML \u043c\u043e\u0436\u043d\u043e \u043e\u0441\u0432\u043e\u0438\u0442\u044c \u043d\u0430 \u043e\u043d\u043b\u0430\u0439\u043d-\u043a\u0443\u0440\u0441\u0430\u0445 OTUS: <\/em><a href=\"https:\/\/otus.pw\/VXD7\/\"><em>\u0432 \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0435<\/em><\/a><em> \u043c\u043e\u0436\u043d\u043e \u043f\u043e\u0441\u043c\u043e\u0442\u0440\u0435\u0442\u044c \u0441\u043f\u0438\u0441\u043e\u043a \u0432\u0441\u0435\u0445 \u043f\u0440\u043e\u0433\u0440\u0430\u043c\u043c, \u0430 <\/em><a href=\"https:\/\/otus.pw\/R1pw\/\"><em>\u0432 \u043a\u0430\u043b\u0435\u043d\u0434\u0430\u0440\u0435<\/em><\/a><em> \u2014 \u0437\u0430\u043f\u0438\u0441\u0430\u0442\u044c\u0441\u044f \u043d\u0430 \u043e\u0442\u043a\u0440\u044b\u0442\u044b\u0435 \u0443\u0440\u043e\u043a\u0438.<\/em><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p><!----><!----><\/div>\n<p><!----><!----><br \/> \u0441\u0441\u044b\u043b\u043a\u0430 \u043d\u0430 \u043e\u0440\u0438\u0433\u0438\u043d\u0430\u043b \u0441\u0442\u0430\u0442\u044c\u0438 <a href=\"https:\/\/habr.com\/ru\/articles\/892462\/\"> https:\/\/habr.com\/ru\/articles\/892462\/<\/a><br \/><\/br><\/br><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[],"tags":[],"class_list":["post-452602","post","type-post","status-publish","format-standard","hentry"],"_links":{"self":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts\/452602","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=452602"}],"version-history":[{"count":0,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=\/wp\/v2\/posts\/452602\/revisions"}],"wp:attachment":[{"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=452602"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=452602"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/savepearlharbor.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=452602"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}