BatchJacobian - Maple Help

 Gradient
 compute gradient of Tensor or Variable
 Jacobian
 compute Jacobian of Tensor or Variable
 BatchJacobian
 compute batch Jacobian of Tensor or Variable

 Calling Sequence gt:-Gradient(y, xs, opts) gt:-Jacobian(ys, xs, opts) gt:-BatchJacobian(ys, xs, opts)

Parameters

 gt - a GradientTape object
 x, xs, y, ys - a Tensor or Variable object, or an arbitrarily nested list of these
 opts - (optional) one or more options as specified below

Options

 • unconnected = one of NULL, undefined, or 0
 Specifies the value to be returned when y is not connected to x, so that no derivative of y with respect to x can be computed. The default value is NULL.

Description

 • gt:-Gradient(y,x) computes the gradient of y with respect to x, where x and y are Tensors or Variables or arbitrarily nested lists of these which are tracked by the GradientTape gt.
 • gt:-Jacobian(y,x) computes the Jacobian of y with respect to x, where x and y are Tensors or Variables or arbitrarily nested lists of these which are tracked by the GradientTape gt.
 • gt:-BatchJacobian(y,x) computes a batch of Jacobians simultaneously, one for each element along the shared first (batch) dimension of y and x, where x and y are Tensors or Variables or arbitrarily nested lists of these which are tracked by the GradientTape gt.
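The three operations differ mainly in the shape of the result: Gradient sums derivatives over the elements of y and returns a result shaped like x; Jacobian pairs every element of y with every element of x; BatchJacobian treats the first dimension as a shared batch dimension and differentiates each batch element independently. A plain-Python sketch of these shape rules (illustrative helper names, not part of the DeepLearning package; the conventions follow the tf.GradientTape methods these wrap):

```python
def gradient_shape(y_shape, x_shape):
    # Gradient sums dy/dx over the elements of y, so the result
    # is shaped like x.
    return x_shape

def jacobian_shape(y_shape, x_shape):
    # Jacobian keeps every pairing of y- and x-elements.
    return y_shape + x_shape

def batch_jacobian_shape(y_shape, x_shape):
    # BatchJacobian assumes a shared leading batch dimension and
    # differentiates each batch element independently.
    assert y_shape[0] == x_shape[0]
    return y_shape + x_shape[1:]

# For the example below, x and y both have shape [2, 2]:
print(gradient_shape([2, 2], [2, 2]))        # [2, 2]
print(jacobian_shape([2, 2], [2, 2]))        # [2, 2, 2, 2]
print(batch_jacobian_shape([2, 2], [2, 2]))  # [2, 2, 2]
```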

Details

 • The implementations of Gradient, Jacobian, and BatchJacobian use the similarly named methods from tf.GradientTape in the TensorFlow Python API. Consult the TensorFlow Python API documentation for tf.GradientTape for more information.

Examples

 > $\mathrm{with}\left(\mathrm{DeepLearning}\right):$
 > $g≔\mathrm{GradientTape}\left(\right)$
 ${g}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning GradientTape}}\\ {\mathrm{}}\end{array}\right]$ (1)
 > $\mathrm{Enter}\left(g\right)$
 $\left[\begin{array}{c}{\mathrm{DeepLearning GradientTape}}\\ {\mathrm{}}\end{array}\right]$ (2)
 > $x≔\mathrm{Constant}\left(\left[\left[1.,2.\right],\left[3.,4.\right]\right],\mathrm{datatype}={\mathrm{float}}_{4}\right)$
 ${x}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Shape: \left[2, 2\right]}}\\ {\mathrm{Data Type: float\left[4\right]}}\end{array}\right]$ (3)
 > $g:-\mathrm{Watch}\left(x\right)$
 > $y≔{x}^{2}$
 ${y}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Shape: \left[2, 2\right]}}\\ {\mathrm{Data Type: float\left[4\right]}}\end{array}\right]$ (4)
 > $\mathrm{batch_jacobian}≔g:-\mathrm{BatchJacobian}\left(y,x\right)$
 ${\mathrm{batch_jacobian}}{≔}\left[\begin{array}{c}{\mathrm{DeepLearning Tensor}}\\ {\mathrm{Shape: \left[2, 2, 2\right]}}\\ {\mathrm{Data Type: float\left[4\right]}}\end{array}\right]$ (5)
 > $\mathrm{Exit}\left(g\right)$
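The batch Jacobian in this example can be checked numerically: with y = x^2 applied elementwise, each per-row Jacobian is diagonal with entries 2*x. A finite-difference sketch in plain Python (the helper name is illustrative, not part of the DeepLearning package):

```python
def fd_batch_jacobian(f, x, eps=1e-6):
    """Finite-difference batch Jacobian of a row-wise function f.

    x is a list of rows; for each row k the result holds the matrix
    (f(row + eps*e_j)[i] - f(row)[i]) / eps, i.e. dy[k][i]/dx[k][j].
    """
    jacs = []
    for row in x:
        n = len(row)
        base = f(row)
        jac = [[] for _ in range(n)]
        for j in range(n):
            bumped = list(row)
            bumped[j] += eps
            fb = f(bumped)
            for i in range(n):
                jac[i].append((fb[i] - base[i]) / eps)
        jacs.append(jac)
    return jacs

square = lambda row: [v * v for v in row]
bj = fd_batch_jacobian(square, [[1.0, 2.0], [3.0, 4.0]])
# Diagonal entries approximate 2*x: about 2 and 4 in the first
# row's Jacobian, 6 and 8 in the second; off-diagonals are 0.
```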

Compatibility