Interface LossFunction

  • All Known Implementing Classes:
    BinaryCrossEntropyLoss, CrossEntropyLoss, MeanSquaredErrorLoss

    public interface LossFunction
    Base interface for all loss functions. A loss function is a component of a deep learning algorithm that calculates the error as the difference between the actual (predicted) output and the desired (target) output of a neural network. The total error for a training set is usually calculated as the average of the errors for all individual input-output pairs. A higher loss value means a larger error and lower prediction accuracy.
    Author:
    Zoran Sevarac
    See Also:
    MeanSquaredErrorLoss, CrossEntropyLoss
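
    The sketch below shows one way an implementation of this interface might accumulate error, loosely in the style of MeanSquaredErrorLoss. It is illustrative only: the package name in the import, the field names, the averaging, and the way the regularization sum enters the total are assumptions, and the shipped implementations may differ.

    // Minimal sketch of a LossFunction implementation (MSE-style); not the shipped class.
    // The package in this import is assumed; adjust it to where LossFunction actually lives.
    import deepnetts.net.loss.LossFunction;

    public class SimpleMseLoss implements LossFunction {

        private float totalError;          // sum of per-pattern squared errors
        private int patternCount;          // patterns added since the last reset
        private float regularizationSum;   // accumulated regularization term

        @Override
        public float[] addPatternError(float[] predictedOutput, float[] targetOutput) {
            float[] patternError = new float[predictedOutput.length];
            float squaredSum = 0;
            for (int i = 0; i < predictedOutput.length; i++) {
                patternError[i] = predictedOutput[i] - targetOutput[i];
                squaredSum += patternError[i] * patternError[i];
            }
            totalError += squaredSum;
            patternCount++;
            return patternError; // per-pattern error vector, typically used for backpropagation
        }

        @Override
        public void addRegularizationSum(float regSum) {
            regularizationSum += regSum;
        }

        @Override
        public float getTotal() {
            // average of the per-pattern errors plus the regularization term
            return patternCount == 0 ? 0 : totalError / patternCount + regularizationSum;
        }

        @Override
        public void reset() {
            totalError = 0;
            patternCount = 0;
            regularizationSum = 0;
        }
    }
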
    • Method Summary

      Modifier and Type Method Description
      float[] addPatternError(float[] predictedOutput, float[] targetOutput)
      Calculates the error for a single pattern from the specified predicted and target outputs, adds it to the total error, and returns the pattern error.
      void addRegularizationSum(float regSum)
      Adds the specified regularization sum to the total loss.
      float getTotal()
      Returns the total error calculated by this loss function.
      void reset()
      Resets the total error and pattern counter.
      default float valueFor(NeuralNetwork nnet, javax.visrec.ml.data.DataSet<? extends MLDataItem> dataSet)
      Calculates and returns loss function value for the given neural network and data set.
    • Method Detail

      • addPatternError

        float[] addPatternError(float[] predictedOutput,
                                float[] targetOutput)
        Calculates the error for a single pattern from the specified predicted and target outputs, adds it to the total error, and returns the pattern error.
        Parameters:
        predictedOutput - predicted/actual network output vector
        targetOutput - target network output vector
        Returns:
        the error vector for the given predicted and target outputs
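
        As a usage sketch only: a caller typically invokes this method once per training pattern and uses the returned error vector to drive backpropagation, while the loss function accumulates the total internally. The helper below is hypothetical and not part of the API; it only illustrates the calling pattern.

        // Hypothetical helper: accumulates the loss over a batch of (predicted, target) pairs.
        static float batchLoss(LossFunction lossFunction, float[][] predictedOutputs, float[][] targetOutputs) {
            lossFunction.reset();
            for (int i = 0; i < predictedOutputs.length; i++) {
                float[] patternError = lossFunction.addPatternError(predictedOutputs[i], targetOutputs[i]);
                // patternError would normally be fed into the backpropagation step here
            }
            return lossFunction.getTotal();
        }
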
      • addRegularizationSum

        void addRegularizationSum(float regSum)
        Adds the specified regularization sum to the total loss.
        Parameters:
        regSum - regularization sum
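
        As a hedged illustration of where such a sum might come from, the hypothetical helper below computes an L2-style penalty over a weight array and hands it to the loss; how the regularization sum is actually produced depends on the network and the regularizer in use.

        // Hypothetical L2 regularization sum over a set of weights; lambda is the assumed strength.
        static void addL2Penalty(LossFunction lossFunction, float[] weights, float lambda) {
            float regSum = 0;
            for (float w : weights) {
                regSum += lambda * w * w;
            }
            lossFunction.addRegularizationSum(regSum);
        }
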
      • getTotal

        float getTotal()
        Returns the total error calculated by this loss function.
        Returns:
        total error calculated by this loss function
      • reset

        void reset()
        Resets the total error and pattern counter.
      • valueFor

        default float valueFor(NeuralNetwork nnet,
                               javax.visrec.ml.data.DataSet<? extends MLDataItem> dataSet)
        Calculates and returns loss function value for the given neural network and data set.
        Parameters:
        nnet - neural network for which the loss is calculated
        dataSet - data set used to calculate the loss
        Returns:
        loss function value for the given neural network and data set
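
        A brief usage sketch, assuming a trained network and a held-out data set are already available (lossFunction, trainedNet and validationSet are placeholder names): this default method is a convenient way to measure the loss outside of training, for example on a validation set.

        // Hypothetical evaluation of the loss on a validation set.
        float validationLoss = lossFunction.valueFor(trainedNet, validationSet);
        System.out.println("Validation loss: " + validationLoss);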