All Classes and Interfaces
Class
Description
Base class for different types of layers.
Skeletal implementation of the Optimizer interface that minimizes the effort required to implement specific optimizers.
Base class that simplifies implementation of a custom normalization procedure.
Common base interface for all activation functions used in layers.
Supported types of activation functions.
Implementation of the ADADELTA Optimizer, a modification of AdaGrad that uses only a limited window of previous gradients.
Implementation of the ADAGRAD Optimizer, which uses the sum of squared previous gradients to adjust a global learning rate for each weight.
Implementation of the Adam optimizer, a variation of RmsProp that includes a momentum-like factor.
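For reference, the textbook AdaGrad update accumulates squared gradients per weight and uses them to shrink a global learning rate \eta (the Deep Netts implementation may differ in details):

    G_t = G_{t-1} + g_t^2, \qquad w_{t+1} = w_t - \frac{\eta}{\sqrt{G_t} + \epsilon}\, g_t

AdaDelta replaces the ever-growing sum G_t with a decaying window average, and Adam additionally keeps a momentum-like running average of the gradients themselves.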
just move 2(x) pix to left right up down
Backpropagation training algorithm for feed forward and convolutional neural networks.
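Backpropagation rests on the standard chain-rule recursion for each unit's error signal; this is the general textbook form, not a quote of the Deep Netts code:

    \delta_j = f'(net_j) \sum_k \delta_k w_{jk}, \qquad \frac{\partial E}{\partial w_{ij}} = \delta_j \, o_i

where \delta_k are the error signals of the following layer, o_i is the output of unit i, and f' is the derivative of the activation function.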
Cross Entropy Loss is a loss function used for binary classification tasks (two classes, with a single output that represents a probability).
Probably used in the labeler.
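The binary cross entropy mentioned above has the standard form, with true label y \in \{0, 1\} and predicted probability \hat{y}:

    L(y, \hat{y}) = -\big(y \log \hat{y} + (1 - y) \log(1 - \hat{y})\big)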
Callable int consumer.
Callable int consumer.
Callable range consumer.
Centers images on backgrounds and saves them at the target path.
Various metrics that tell us how good a classifier is.
Average values of commonly used classification metrics.
Evaluation method for binary and multi-class classifiers.
Confusion matrix contains raw classifier test results.
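As an illustration of how the common metrics derive from confusion matrix counts, here is a minimal Java sketch; the variable names and counts are hypothetical, not the Deep Netts API:

    // Hypothetical raw counts from a binary confusion matrix
    int tp = 40, fp = 10, tn = 45, fn = 5;
    double accuracy  = (tp + tn) / (double) (tp + tn + fp + fn); // fraction of all correct predictions
    double precision = tp / (double) (tp + fp); // fraction of predicted positives that are correct
    double recall    = tp / (double) (tp + fn); // fraction of actual positives that were found
    double f1        = 2 * precision * recall / (precision + recall); // harmonic mean of the two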
Convolutional layer performs the image convolution operation on the outputs of the previous layer using filters.
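Each output value is a weighted sum of a small input neighborhood with the filter kernel k plus a bias b, shown here in the cross-correlation form that most libraries implement under the name "convolution":

    y_{i,j} = \sum_{m} \sum_{n} x_{i+m,\, j+n} \, k_{m,n} + b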
Convolutional neural network is an extension of the feed forward network which can include 2D and 3D adaptive preprocessing layers (convolutional and max pooling layers), and is specialized for learning to recognize features in images.
Builder for a convolutional neural network.
Average Cross Entropy Loss function, commonly used for multi-class classification problems.
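In the standard multi-class form, averaged over N examples and C classes with one-hot targets y:

    L = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c=1}^{C} y_{n,c} \log \hat{y}_{n,c}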
CSV file format options: delimiter, header (column names), and column types.
Singleton for native CUDA handles.
Bridge to tensor on cuda/gpu device.
CUDA Tensor layout.
Data set utility methods and constants.
Decimal scale normalization for the given data set.
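Decimal scaling conventionally divides every value in a column by a power of ten chosen so that all scaled magnitudes fall below one:

    x' = x / 10^{j}, \quad \text{with the smallest integer } j \text{ such that } \max |x'| < 1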
Global configuration and settings for Deep Netts Engine.
Dedicated thread pool for Deep Netts Engine.
Augments images by shifting them a few (e.g. 2) pixels left, right, up, and down.
Utility methods for evaluating machine learning models.
Example image to train a deep learning model.
Feed forward neural network architecture, also known as Multi Layer Perceptron.
Builder of a FeedForwardNetwork instance.
Factory for FeedForwardNetwork.
File utilities for saving and loading neural networks.
Settings of a convolutional filter which is used to learn to detect pixel patterns.
Transforms outputs of the previous 3D layer into a flattened 1D tensor in the forward pass; the backward pass propagates weighted errors/deltas from the next fully connected layer.
Fully connected layer is used as a hidden layer in a neural network; it has a single row of units/nodes/neurons connected to all neurons in the previous and next layers.
The core AutoML class that performs automated model building and evaluation with the specified parameters.
Image pre-processing to be used at inference time, applied before feeding the input to the network.
Data set with images that will be used to train a convolutional neural network.
Utility methods to work with images.
Input layer in a neural network.
Splits the data set into k parts of equal size (folds), then trains the model with k-1 folds and validates with the remaining fold (see the sketch below).
Builder object for KFoldCrossValidation.
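The following sketch illustrates the fold split in plain Java; it is a generic illustration of the procedure, not the Deep Netts KFoldCrossValidation API:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class KFoldSketch {
        // Partition row indices 0..numRows-1 into k folds of (nearly) equal size.
        public static List<List<Integer>> split(int numRows, int k) {
            List<Integer> idx = new ArrayList<>();
            for (int i = 0; i < numRows; i++) idx.add(i);
            Collections.shuffle(idx); // randomize row order before splitting
            List<List<Integer>> folds = new ArrayList<>();
            for (int f = 0; f < k; f++) folds.add(new ArrayList<>());
            for (int i = 0; i < numRows; i++)
                folds.get(i % k).add(idx.get(i)); // deal rows round-robin into the folds
            return folds;
        }
    }

Each fold then serves exactly once as the validation set while the remaining k-1 folds are used for training, and the k evaluation results are averaged.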
Common base interface for all types of neural network layers.
Supported types of layers.
Leaky Rectified Linear Activation and its Derivative.
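The standard definition, where the negative-side slope \alpha is a small constant (0.01 is a common default; the value Deep Netts uses is an assumption here):

    f(x) = \max(\alpha x, \, x), \qquad f'(x) = 1 \text{ for } x > 0, \ \alpha \text{ otherwise}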
Learning rate decay technique. See: https://www.coursera.org/learn/deep-neural-network/lecture/hjgIA/learning-rate-decay
Provides methods that perform license file checks.
This exception is thrown if there is some issue with the license file (invalid, expired or not found).
Linear activation function and its derivative.
Base Interface for all loss functions.
Supported types of Loss Functions in Deep Netts engine.
Misc math utility functions.
This layer performs the max pooling operation in a convolutional neural network, which scales down the output of the previous layer by taking the maximum outputs from small predefined filter areas.
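As a concrete illustration, here is 2x2 max pooling with stride 2 on a single channel in plain Java; the filter size and array layout are assumptions for the example, not the layer's actual configuration:

    // Hypothetical helper: keep the maximum of each non-overlapping 2x2 block.
    static float[][] maxPool2x2(float[][] in) {
        int rows = in.length / 2, cols = in[0].length / 2;
        float[][] out = new float[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                out[r][c] = Math.max(
                        Math.max(in[2 * r][2 * c],     in[2 * r][2 * c + 1]),
                        Math.max(in[2 * r + 1][2 * c], in[2 * r + 1][2 * c + 1]));
        return out; // output is half the height and width of the input
    }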
Performs max normalization: rescales data by the corresponding maximum value in each column.
Mean Squared Error Loss function.
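The standard definition over N examples (some implementations include an extra factor of 1/2 for a cleaner derivative):

    E = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2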
Performs Min Max normalization on the given data set.
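The usual rescaling to the [0, 1] interval, applied per column:

    x' = \frac{x - x_{min}}{x_{max} - x_{min}}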
Single data item that will be used to train a machine learning model.
Momentum optimization adds a momentum parameter to basic Stochastic Gradient Descent, which can accelerate the training process.
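In the common formulation, a velocity term accumulates a decaying history of gradients, with momentum coefficient \mu and learning rate \eta:

    v_{t+1} = \mu v_t - \eta \nabla E(w_t), \qquad w_{t+1} = w_t + v_{t+1}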
Base interface for all network factories.
Neural network architecture types.
Base class for all neural networks in Deep Netts.
Centers images on backgrounds and saves them at the target path.
Optimization technique used by the training algorithm to tune the network's weight parameters.
Supported types of optimization methods used by back-propagation training algorithm.
Output layer of a neural network.
A single parameter with name and all possible values to try.
Parameter search space: a collection of parameters and methods for generating all possible combinations.
TODO: setMethod, throw meaningful exceptions.
A single combination of parameters, taken from the list of parameters (with all possible values).
Static utility class for finding the number of physical CPU cores.
Data pre-processing abstraction.
Random number generator singleton.
Augments images by shifting them a few (e.g. 2) pixels left, right, up, and down.
Weights randomization utility methods.
Types of supported weights randomization techniques.
A value range of type T for the parameter.
Normalize data set to specified range.
Evaluates a regressor neural network on the specified data set.
Common metrics for regression models.
Rectified Linear Activation and its Derivative.
A variation of the AdaDelta optimizer.
A measure of error for regression tasks.
Strategy for hyper-parameter search.
Basic Stochastic Gradient Descent optimization algorithm, which iteratively changes the weights towards values that give the minimum error.
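The basic update applied per training example or mini-batch:

    w_{t+1} = w_t - \eta \, \nabla E(w_t)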
Immutable class that represents Tensor shape.
Sigmoid activation function.
Output layer with softmax activation function.
Soft sign activation function.
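For reference, the standard definitions of the activation functions listed above:

    \sigma(x) = \frac{1}{1 + e^{-x}}, \quad \sigma'(x) = \sigma(x)(1 - \sigma(x)); \qquad \text{softmax: } \hat{y}_i = \frac{e^{z_i}}{\sum_j e^{z_j}}; \qquad \text{softsign: } f(x) = \frac{x}{1 + |x|}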
Performs standardization on inputs in order to get desired statistical properties of the data set (zero mean and one standard deviation).
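Standardization subtracts the column mean \mu and divides by the column standard deviation \sigma:

    x' = \frac{x - \mu}{\sigma}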
Statistical functions.
Basic data set with tabular data.
Represents a basic data set item (single row) with input tensor and
target vector in a data set.
Hyperbolic tangent activation function.
One dimensional tensor - a vector or an array.
A 2D tensor / matrix with a specified number of rows and columns.
This class represents a wrapper for a multidimensional array.
Static utility methods for tensors.
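Multidimensional tensors are commonly backed by a flat array with row-major indexing; this Java sketch shows the idea for the 2D case and is an assumption about the layout, not a quote of the Deep Netts Tensor internals:

    // Row-major access: element (row, col) of a matrix with 'cols' columns
    // lives at offset row * cols + col in the flat backing array.
    static float get(float[] values, int cols, int row, int col) {
        return values[row * cols + col];
    }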
Generic interface for deep learning training algorithm.
This interface is implemented by trainable deep learning models, in order to provide access to the training algorithm.
TrainingEvent is used to notify interested parties that a training event has happened.
Type of a training event.
The listener interface for receiving notifications about training events.
All information about the completed training including training settings, epochs, loss and evaluation metrics.
This class holds training and test data set pair.
Provides methods for getting typed properties for the given key.