Standard backpropagation algorithm, where the weights are updated after each training pattern.
This means that the weights are updated many times during a single epoch. For this reason some problems
will train very fast with this algorithm, while other, more advanced problems will not train very well.
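A minimal sketch of selecting incremental training in C; the layer sizes and the
training file "xor.data" are illustrative placeholders:

    #include "fann.h"

    int main(void)
    {
        /* Illustrative network: 2 inputs, 3 hidden neurons, 1 output. */
        struct fann *ann = fann_create_standard(3, 2, 3, 1);

        /* Update the weights after every training pattern. */
        fann_set_training_algorithm(ann, FANN_TRAIN_INCREMENTAL);
        fann_set_learning_rate(ann, 0.7f);

        /* Train for at most 1000 epochs, reporting every 100 epochs,
           stopping at a mean square error of 0.001. */
        fann_train_on_file(ann, "xor.data", 1000, 100, 0.001f);

        fann_destroy(ann);
        return 0;
    }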
Standard backpropagation algorithm, where the weights are updated after calculating the mean square error
for the whole training set. This means that the weights are only updated once per epoch.
For this reason some problems will train more slowly with this algorithm. But since the mean square
error is calculated more accurately than in incremental training, some problems will reach a better
solution with this algorithm.
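A sketch of switching to batch training, assuming a network ann created as in the
previous example:

    /* Accumulate the error over the whole training set and
       update the weights once per epoch. */
    fann_set_training_algorithm(ann, FANN_TRAIN_BATCH);
    fann_set_learning_rate(ann, 0.7f);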
A more advanced batch training algorithm which achieves good results for many problems. The RPROP
training algorithm is adaptive and therefore does not use the learning_rate. Some other parameters
can, however, be set to change the way the RPROP algorithm works, but changing them is only recommended
for users with insight into how the RPROP training algorithm works. The RPROP training algorithm
is described by [Riedmiller and Braun, 1993], but the actual learning algorithm used here is
the iRPROP- training algorithm, described by [Igel and Husken, 2000], which is a variant
of the standard RPROP training algorithm.
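A sketch of selecting RPROP and setting its advanced parameters, assuming a network
ann as above; the values shown are believed to be FANN's defaults:

    /* RPROP ignores the learning_rate; it adapts a per-weight step size. */
    fann_set_training_algorithm(ann, FANN_TRAIN_RPROP);

    /* Advanced parameters; only change these with insight into RPROP. */
    fann_set_rprop_increase_factor(ann, 1.2f); /* step growth when gradient keeps its sign */
    fann_set_rprop_decrease_factor(ann, 0.5f); /* step shrink when the gradient changes sign */
    fann_set_rprop_delta_min(ann, 0.0f);       /* smallest allowed step size */
    fann_set_rprop_delta_max(ann, 50.0f);      /* largest allowed step size */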
A more advanced batch training algorithm which achieves good results for many problems.
The quickprop training algorithm uses the learning_rate parameter along with other more advanced parameters,
but changing these advanced parameters is only recommended for users with insight into how the quickprop
training algorithm works. The quickprop training algorithm is described by [Fahlman, 1988].
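A sketch of selecting quickprop, assuming a network ann as above; the decay and mu
values are believed to be FANN's defaults:

    fann_set_training_algorithm(ann, FANN_TRAIN_QUICKPROP);
    fann_set_learning_rate(ann, 0.7f);

    /* Advanced parameters; only change these with insight into quickprop. */
    fann_set_quickprop_decay(ann, -0.0001f); /* weight decay, limits weight growth */
    fann_set_quickprop_mu(ann, 1.75f);       /* caps how much a step may grow */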
Tanh error function, usually better but can require a lower learning rate. This error function aggressively
targets outputs that differ greatly from the desired output, while putting less emphasis on outputs that
only differ slightly. This error function is not recommended for cascade training and incremental training.
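A sketch of selecting the tanh error function, assuming a network ann as above;
the lowered learning rate is an illustrative value, not a recommended setting:

    fann_set_train_error_function(ann, FANN_ERRORFUNC_TANH);
    /* The tanh error function can overshoot with a high learning rate,
       so a lower rate may be needed. */
    fann_set_learning_rate(ann, 0.35f);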
The stop criterion is the number of bits that fail. The number of bits means the number of output neurons
that differ by more than the bit fail limit (see fann_get_bit_fail_limit, fann_set_bit_fail_limit). The bits are counted
across all of the training data, so this number can be higher than the number of training patterns.
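A sketch of using bit fail as the stop criterion, assuming a network ann as above;
the bit fail limit and the training file "train.data" are placeholders:

    /* Stop when few enough output bits fail, rather than when the
       mean square error is low enough. */
    fann_set_train_stop_function(ann, FANN_STOPFUNC_BIT);
    fann_set_bit_fail_limit(ann, 0.05f); /* an output fails if it differs from
                                            its target by more than this */

    /* With FANN_STOPFUNC_BIT, the desired-error argument is read as the
       acceptable number of failing bits; 0 means every output must pass. */
    fann_train_on_file(ann, "train.data", 1000, 100, 0.0f);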