Optimizer - Maple Help

DeepLearning

 Optimizer
 create an Optimizer object

 Calling Sequence Optimizer(X)

Parameters

 X - function; optimizer name with parameters.

Description

 • The Optimizer(X) command creates an Optimizer using the specified function X with parameters.
 • An Optimizer computes gradients of a loss function and applies those gradients to variables. The implemented optimizers include classic optimization algorithms such as gradient descent and Adagrad.
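To illustrate what "computes gradients and applies gradients to variables" means, here is a minimal pure-Python sketch of the simplest such algorithm, gradient descent. This is a conceptual illustration only, not the Maple or TensorFlow API: the update rule is x ← x − learning_rate · ∇loss(x).

```python
# Illustrative sketch (not the Maple API): one gradient-descent update
# subtracts the learning rate times the gradient from each variable.

def gradient_descent_step(x, grad, learning_rate):
    """Apply one gradient-descent update to a list of variables."""
    return [xi - learning_rate * gi for xi, gi in zip(x, grad)]

# Minimize loss(x) = (x0 - 3)^2 + (x1 + 1)^2, whose gradient is
# (2*(x0 - 3), 2*(x1 + 1)); the minimum is at (3, -1).
x = [0.0, 0.0]
for _ in range(200):
    grad = [2 * (x[0] - 3), 2 * (x[1] + 1)]
    x = gradient_descent_step(x, grad, learning_rate=0.1)

print(x)  # close to [3.0, -1.0]
```

An Optimizer object bundles this update rule (or a more sophisticated one such as Adagrad) so the training loop does not have to implement it by hand.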

Supported Optimizers

 • In the Usage column of the table below, all arguments must be real-valued input parameters. The input learning_rate is mandatory for all optimizers; the remaining parameters may be omitted, in which case they revert to their default values.

 • This function is part of the DeepLearning package, so it can be used in the short form Optimizer(..) only after executing the command with(DeepLearning). However, it can always be accessed through the long form of the command by using DeepLearning[Optimizer](..).

Details

 • For more information on any of the above optimizers and the meaning of the input parameters, consult the TensorFlow Python API documentation for tf.train.
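As a rough illustration of the Adagrad rule behind tf.train.AdagradOptimizer, the following pure-Python sketch (an assumption for exposition, not the Maple or TensorFlow implementation) accumulates squared gradients per variable and scales each step by the inverse square root of the accumulator, so coordinates with large past gradients take smaller steps.

```python
import math

# Hypothetical sketch of the Adagrad update for a single scalar variable:
#   accum <- accum + grad^2
#   x     <- x - learning_rate * grad / (sqrt(accum) + epsilon)

def adagrad_step(x, accum, grad, learning_rate, epsilon=1e-8):
    """Apply one Adagrad update; returns the new variable and accumulator."""
    accum = accum + grad * grad
    x = x - learning_rate * grad / (math.sqrt(accum) + epsilon)
    return x, accum

# One step on loss(x) = (x - 5)^2 from x = 0: grad = 2*(0 - 5) = -10,
# the accumulator becomes 100, and the step is 0.1 * 10 / 10 = 0.1.
x, accum = adagrad_step(0.0, 0.0, -10.0, learning_rate=0.1)
print(x, accum)  # approximately 0.1, and exactly 100.0

# Iterating shrinks the loss; the decaying step size prevents overshoot here.
for _ in range(100):
    grad = 2 * (x - 5)
    x, accum = adagrad_step(x, accum, grad, learning_rate=0.1)
print(x)  # strictly between 0 and 5, closer to 5 than the start
```

The mandatory learning_rate parameter noted above is the scalar multiplier in this update; the other parameters (such as the initial accumulator value) are the ones that revert to defaults when omitted.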

Examples

 > $\mathrm{with}\left(\mathrm{DeepLearning}\right):$
 > $f≔\mathrm{Optimizer}\left(\mathrm{Adagrad}\left(0.01\right)\right)$
 ${f}{≔}\mathrm{DeepLearning Optimizer}$ (1)
 > $g≔\mathrm{Optimizer}\left(\mathrm{Adam}\left(0.05\right)\right)$
 ${g}{≔}\mathrm{DeepLearning Optimizer}$ (2)

Compatibility

 • The DeepLearning[Optimizer] command was introduced in Maple 2018.