BatchJacobian - Maple Help


DeepLearning[GradientTape]

  

Gradient - compute the gradient of a Tensor or Variable

Jacobian - compute the Jacobian of a Tensor or Variable

BatchJacobian - compute a batch of Jacobians of a Tensor or Variable


Calling Sequence

Parameters

Options

Description

Details

Examples

Compatibility

Calling Sequence

gt:-Gradient(y, xs, opts)

gt:-Jacobian(ys, xs, opts)

gt:-BatchJacobian(ys, xs, opts)

Parameters

gt - a GradientTape object

ys, xs - a Tensor or Variable object, or an arbitrarily nested list of these

opts - (optional) one or more options as specified below

Options

• unconnected = one of NULL, undefined, or 0

Specifies the flag value to be returned when x and y are not connected while computing the derivative of y with respect to x. The default value is NULL.
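A minimal sketch of the option, assuming a tape session like the one in the Examples section, where a hypothetical tensor y2 was computed without any reference to x (so the two are unconnected on the tape g):

```maple
# y2 does not depend on x, so the derivative is "unconnected";
# request the flag value 0 instead of the default NULL.
g:-Gradient(y2, x, unconnected = 0);
```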

Description

• gt:-Gradient(y, x) computes the gradient of y with respect to x, where x and y are Tensors or Variables, or arbitrarily nested lists of these, which are tracked by the GradientTape gt.
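A minimal sketch of Gradient, following the session conventions used in the Examples section (Enter/Exit, Watch); the comment on the result assumes the usual derivative d(x^2)/dx = 2x:

```maple
with(DeepLearning):
g := GradientTape():
Enter(g):
x := Constant([3.0], datatype=float[4]):
g:-Watch(x):                # track x so derivatives w.r.t. it are recorded
y := x^2:
grad := g:-Gradient(y, x):  # Tensor holding 2*x, i.e. [6.0]
Exit(g):
```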

• gt:-Jacobian(y, x) computes the Jacobian of y with respect to x, where x and y are Tensors or Variables, or arbitrarily nested lists of these, which are tracked by the GradientTape gt.
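For a vector-valued y of shape [m] differentiated with respect to an x of shape [n], the Jacobian has shape [m, n]. A sketch under the same session conventions as the Examples section:

```maple
with(DeepLearning):
g := GradientTape():
Enter(g):
x := Constant([1.0, 2.0, 3.0], datatype=float[4]):
g:-Watch(x):
y := x^2:                # elementwise square, shape [3]
J := g:-Jacobian(y, x):  # Jacobian of y w.r.t. x, shape [3, 3]
Exit(g):                 # here J is diagonal with entries 2*x[i]
```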

• gt:-BatchJacobian(y, x) computes a batch of Jacobians simultaneously, where x and y are Tensors or Variables, or arbitrarily nested lists of these, which are tracked by the GradientTape gt. The first dimension of x and y is treated as a batch dimension, so each slice y[i] is differentiated only with respect to the corresponding slice x[i].
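The contrast with Jacobian is the leading dimension: Jacobian differentiates every element of y with respect to every element of x, while BatchJacobian pairs them batch-wise. A sketch, with the shapes in the comments following the semantics of TensorFlow's batch_jacobian:

```maple
with(DeepLearning):
g := GradientTape():
Enter(g):
x := Constant([[1., 2.], [3., 4.]], datatype=float[4]):  # shape [2, 2]: batch of 2
g:-Watch(x):
y := x^2:
J  := g:-Jacobian(y, x):       # full Jacobian, shape [2, 2, 2, 2]
bj := g:-BatchJacobian(y, x):  # per-example Jacobians, shape [2, 2, 2]
Exit(g):
```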

Details

• The implementations of Gradient, Jacobian, and BatchJacobian use the similarly named methods (gradient, jacobian, and batch_jacobian) of tf.GradientTape in the TensorFlow Python API; consult the TensorFlow documentation for tf.GradientTape for more information.

Examples

with(DeepLearning):

g := GradientTape();

g := DeepLearning GradientTape<tensorflow.python.eager.backprop.GradientTape object at 0x7f70e56202e0>

(1)

Enter(g);

DeepLearning GradientTape<tensorflow.python.eager.backprop.GradientTape object at 0x7f70e56202e0>

(2)

x := Constant([[1., 2.], [3., 4.]], datatype=float[4]);

x := DeepLearning Tensor  Shape: [2, 2]  Data Type: float[4]

(3)

g:-Watch(x);

y := x^2;

y := DeepLearning Tensor  Shape: [2, 2]  Data Type: float[4]

(4)

batch_jacobian := g:-BatchJacobian(y, x);

batch_jacobian := DeepLearning Tensor  Shape: [2, 2]  Data Type: float[4], DeepLearning Tensor  Shape: [2, 2]  Data Type: float[4]

(5)

Exit(g);

Compatibility

• 

The DeepLearning[GradientTape][Gradient], DeepLearning[GradientTape][Jacobian], and DeepLearning[GradientTape][BatchJacobian] commands were introduced in Maple 2022.

• 

For more information on Maple 2022 changes, see Updates in Maple 2022.

See Also

DeepLearning Overview

GradientTape