Optimization methods used by the training algorithm.
Interface Summary
Optimizer - Optimization technique used by the training algorithm to tune the network's weight parameters.
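As a rough illustration of that contract, the sketch below shows one way such an optimizer interface could be shaped; the type and method names (OptimizerSketch, calculateDeltaWeight) and the per-weight signature are assumptions made for this example, not the library's actual API.

// Hypothetical contract: given the gradient for a single weight, return the
// delta that should be added to that weight. The real interface may differ.
interface OptimizerSketch {
    float calculateDeltaWeight(float gradient);
}

// Plain SGD expressed against the hypothetical contract above.
class SgdSketch implements OptimizerSketch {
    private final float learningRate;

    SgdSketch(float learningRate) {
        this.learningRate = learningRate;
    }

    @Override
    public float calculateDeltaWeight(float gradient) {
        return -learningRate * gradient;  // step against the gradient
    }
}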
Class Summary
AbstractOptimizer - Skeletal implementation of the Optimizer interface to minimize the effort needed to implement specific optimizers.
AdaDeltaOptimizer - Implementation of the ADADELTA Optimizer, which is a modification of AdaGrad that uses only a limited window of previous gradients.
AdaGradOptimizer - Implementation of the ADAGRAD Optimizer, which uses the sum of squared previous gradients to adjust a global learning rate for each weight.
AdamOptimizer - Implementation of the Adam optimizer, a variation of RmsProp that includes a momentum-like factor.
LearningRateDecay - Learning rate decay (see https://www.coursera.org/learn/deep-neural-network/lecture/hjgIA/learning-rate-decay).
MomentumOptimizer - Momentum optimization adds a momentum parameter to basic Stochastic Gradient Descent, which can accelerate the training process.
RmsPropOptimizer - A variation of the AdaDelta optimizer.
SgdOptimizer - Basic Stochastic Gradient Descent optimization algorithm, which iteratively changes the weights towards the values that give the minimum error.
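The class descriptions above name the update rules only informally. The sketch below spells out the per-weight updates these names usually stand for; all hyper-parameter names (lr, momentumFactor, rho, beta1, beta2, eps) and the method shapes are assumptions for illustration, not fields or methods of the library classes.

// Illustrative per-weight update rules for the optimizers listed above.
public class OptimizerUpdateSketch {

    // SgdOptimizer: move the weight against the gradient, scaled by the learning rate.
    static float sgd(float weight, float grad, float lr) {
        return weight - lr * grad;
    }

    // MomentumOptimizer: keep a running velocity so consecutive gradients pointing
    // in the same direction accelerate the update.
    static float[] momentum(float weight, float grad, float velocity,
                            float lr, float momentumFactor) {
        velocity = momentumFactor * velocity - lr * grad;
        return new float[] { weight + velocity, velocity };
    }

    // AdaGradOptimizer: divide the learning rate by the square root of the sum of
    // all squared previous gradients, giving each weight its own effective step size.
    static float[] adaGrad(float weight, float grad, float sqGradSum,
                           float lr, float eps) {
        sqGradSum += grad * grad;
        weight -= lr / (float) (Math.sqrt(sqGradSum) + eps) * grad;
        return new float[] { weight, sqGradSum };
    }

    // RmsPropOptimizer: like AdaGrad, but with an exponentially decaying average of
    // squared gradients, i.e. a limited window instead of the full sum.
    static float[] rmsProp(float weight, float grad, float sqGradAvg,
                           float lr, float rho, float eps) {
        sqGradAvg = rho * sqGradAvg + (1 - rho) * grad * grad;
        weight -= lr / (float) (Math.sqrt(sqGradAvg) + eps) * grad;
        return new float[] { weight, sqGradAvg };
    }

    // AdaDeltaOptimizer: like RmsProp, but the global learning rate is replaced by
    // an RMS of previous weight updates, so no learning rate has to be tuned.
    static float[] adaDelta(float weight, float grad, float sqGradAvg, float sqDeltaAvg,
                            float rho, float eps) {
        sqGradAvg = rho * sqGradAvg + (1 - rho) * grad * grad;
        float delta = -(float) (Math.sqrt(sqDeltaAvg + eps) / Math.sqrt(sqGradAvg + eps)) * grad;
        sqDeltaAvg = rho * sqDeltaAvg + (1 - rho) * delta * delta;
        return new float[] { weight + delta, sqGradAvg, sqDeltaAvg };
    }

    // AdamOptimizer: RmsProp-style scaling plus a momentum-like first-moment estimate,
    // both bias-corrected for the current step t (t starts at 1).
    static float[] adam(float weight, float grad, float m, float v, int t,
                        float lr, float beta1, float beta2, float eps) {
        m = beta1 * m + (1 - beta1) * grad;
        v = beta2 * v + (1 - beta2) * grad * grad;
        float mHat = m / (1 - (float) Math.pow(beta1, t));
        float vHat = v / (1 - (float) Math.pow(beta2, t));
        weight -= lr * mHat / (float) (Math.sqrt(vHat) + eps);
        return new float[] { weight, m, v };
    }
}

Returning small arrays is only a way to keep the sketch self-contained; a real optimizer would keep the accumulators (velocity, squared-gradient averages, moment estimates) as per-weight state.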
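The LearningRateDecay entry links to a lecture on learning-rate decay rather than describing the class itself; a common epoch-based schedule from that material is sketched below, as an assumption about what such a class computes rather than its actual formula.

class LearningRateDecaySketch {
    // Hypothetical schedule: the rate shrinks as the epoch counter grows,
    // i.e. lr = initialLr / (1 + decayRate * epoch).
    static float decayedLearningRate(float initialLr, float decayRate, int epoch) {
        return initialLr / (1 + decayRate * epoch);
    }
}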
Enum Summary
OptimizerType - Supported types of optimization methods used by the back-propagation training algorithm.