Application Center - Maplesoft

# Classroom Tips and Techniques: Tensor Calculus with the Differential Geometry Package



Robert J. Lopez
Emeritus Professor of Mathematics and Maple Fellow
Maplesoft

Introduction

The Tensor subpackage of the DifferentialGeometry package supplants the now-deprecated tensor package in Maple.  The tensor package made essential use of the also-deprecated linalg package, so although worksheets that used these deprecated packages still work, it is imperative to move to using the new formalisms in the Tensor package.

This article is a "survivor's guide" for implementing tensor calculus in the new Tensor package. It explains the constructs in this package from the perspective of classical (i.e., indicial) tensor notation.

The DifferentialGeometry package itself contains some 34 commands, but it also has six subpackages that bring the package to a total of some 184 commands.  These subpackages, and a measure of their "size," are listed in Table 1.

| Subpackage | Number of Commands |
| --- | --- |
| Tensor | 61 |
| Tools | 21 |
| JetCalculus | |
| GroupActions | 9 |
| Library | 4 |
| LieAlgebras | 33 |

Table 1   Subpackages in the DifferentialGeometry package

The complexity of mastering the DifferentialGeometry package is further increased when subpackages such as GroupActions themselves have subpackages (MovingFrames).  A comprehensive tutorial on the complete DifferentialGeometry package would require more than a textbook, so clearly, this is not our ambition here.  Fortunately, the package itself contains two sets of useful tutorials: a comprehensive collection of Lessons, and a set of Tutorials.  The Lessons worksheets provide a systematic approach to learning the commands in the DifferentialGeometry, Tensor, LieAlgebras, and JetCalculus subpackages.  Each lesson also contains a set of exercises that range in difficulty from simple computations to programming problems.  Solutions are given.  The Tutorials present specialized applications of the DifferentialGeometry package.

Our more modest goal for this article is to show how to enter covariant and contravariant tensors, compute their covariant derivatives, obtain the equations of parallel transport and geodesics, and compute the basic tensors of general relativity.

Initializations

 >

 >

 >
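A typical initialization loads the package and its subpackages and suspends the frame-name prompt discussed below.  A minimal sketch; the Preferences keyword shown here should be verified against the package's help pages:

```maple
# Load DifferentialGeometry and the subpackages used in this article
with(DifferentialGeometry):
with(Tensor):
with(Tools):

# Suppress the frame name that would otherwise appear in every prompt
Preferences("ShowFramePrompt", false);
```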

Declaring the Frame in DifferentialGeometry

Before a vector or tensor can be entered, a frame must be declared by stating its variables and giving it a name.  For example, to declare R2 as the Cartesian space with variables x and y, execute

 >

 >
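A sketch of such a declaration, with the frame name R2 used throughout this article:

```maple
# Declare R2 as a 2-dimensional space with coordinates x and y
DGsetup([x, y], R2);
```

After DGsetup executes, the tangent vectors D_x, D_y and the differential forms dx, dy are available in the frame R2.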

The default behavior for DifferentialGeometry is that from this point onward, the prompt is modified to display the frame name for as long as that frame is the active one.  Thus, had we not suspended this default with the Preferences command in the Initializations section, the prompt following the DGsetup command would be the one shown in Table 2.

 R2 >

Table 2   The default modification of the prompt

We have elected to suspend this default behavior for three reasons.  First, interactive editing of a worksheet can create a confusing display.  If a new frame is defined in a section where one frame name already appears in all the prompts, the existing prompts do not change to show the new name.  Only a "new" prompt will display the new frame name.  The result is a worksheet with prompts showing frame names where they might not be relevant.  Such misplaced prompts have to be deleted manually if they are not to convey incorrect information.

Second, these modified prompts are persistent - they cannot be removed by any Maple command.  They have to be removed by deletion.  (The Maple interface command that modifies prompts does not cascade the change through existing prompts.  It only modifies new prompts.)

Finally, if the commands are executed in Document Blocks, and not at prompts, there will be no visible prompt to modify.  Thus, the modified prompt is a worksheet paradigm that does not carry through all of Maple usage.  For all these reasons, we will not have frame names visible in our prompts.

Anyone who has taken even a perfunctory dip into the waters of tensor calculus knows there are two words, covariant and contravariant, that must be faced.  We will not be able to enter a tensor in the Tensor package without making the distinction between these two terms.  Using the Einstein summation convention (repeated indices, one raised and one lowered, are summed), Table 3 defines contravariant and covariant vectors.

| Vector Type | Basis | Transformation Law |
| --- | --- | --- |
| Contravariant | Tangents to coordinate curves | $\bar{v}^i = \dfrac{\partial \bar{x}^i}{\partial x^j}\, v^j$ |
| Covariant | Gradients (normals to coordinate surfaces) | $\bar{w}_i = \dfrac{\partial x^j}{\partial \bar{x}^i}\, w_j$ |

Table 3   Contravariant and covariant vectors

The rightmost column of Table 3 uses the notation $v^j$ and $w_j$ for components expressed in the $x$-coordinate system, but $\bar{v}^i$ and $\bar{w}_i$ for components in the $\bar{x}$-coordinate system.  Texts also denote the new coordinate system by the use of an overbar on the component, or a prime on the left side of the variable.

The components $v^i$ and $w_i$ are the contravariant and covariant components, respectively, of the vector V.  The basis vectors $\mathbf{e}_i$ and $\mathbf{e}^i$ are reciprocal, so that $\mathbf{e}_i \cdot \mathbf{e}^j = \delta_i^j$.  Thus, an orthonormal basis is self-reciprocal.  That is why the distinction between contravariant and covariant bases does not matter in Cartesian spaces.

If $u = u(x)$ is the mapping from $x$-space to $u$-space via functions of the form $u^i = u^i(x^1, \ldots, x^n)$, then the gradient vectors $\nabla u^i$ are the rows of the Jacobian matrix $\partial u^i/\partial x^j$, where the upper index $i$ is interpreted as a row index, and the lower index $j$, as a column index.

If $x = x(u)$ is the mapping from $u$-space to $x$-space via functions of the form $x^i = x^i(u^1, \ldots, u^n)$, then the tangent vectors $\partial \mathbf{x}/\partial u^j$ are the columns of the Jacobian matrix $\partial x^i/\partial u^j$.

To facilitate the implementation of the contravariant transformation law, writing the components as a column vector $\mathbf{v}$ means the sums with the Jacobian matrix $J$ run along a row and across the columns of the matrix.  Hence, the matrix product $J\mathbf{v}$ implements the contravariant transformation.  Writing the components as the row vector $\mathbf{w}$ means the sums with the Jacobian matrix run down a column but across the rows.  Hence, the matrix product $\mathbf{w}J$ implements the covariant transformation.  This inherent distinction between tangent bases and normal bases induces the distinction between contravariant and covariant.  Using column and row vectors to express this difference is a convenient visual device in classical tensor calculus.

The Tensor as a Multilinear Object

If $V$ is a vector space with basis $\{\mathbf{e}_i\}$, then a rank-two tensor is a multilinear object on $V \otimes V$, the direct product of $V$ with itself, having doubly indexed basis objects $\mathbf{e}_i \otimes \mathbf{e}_j$.  The tensor is actually the object

$T = T^{ij}\, \mathbf{e}_i \otimes \mathbf{e}_j$

linear in both indices.  Of course, the $T^{ij}$ are the contravariant components of the tensor; and just as for vectors, there are the equivalent covariant components, $T_{ij}$.  There are even mixed tensors that transform contravariantly in one index but covariantly in another.

In actual practice, one manipulates just the components of the tensor, and almost never explicitly exhibits the basis objects.  However, in the DifferentialGeometry package, vectors and tensors require an explicit use of the basis objects.

Bases and Their Duals

The basis for R2 could be entered as

 >

 >

or could be extracted from Maple with the DGinfo command from the Tools package.

 >

 >

(To type the underscore in math (2D) mode, press the escape character (\) first.  Alternatively, enter such expressions in text (linear, 1D) mode and convert to math mode via the Context Menu.)

The reciprocal (or dual) basis is then

 >

 >

or

 >

 >
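For the frame R2 declared above, the DGinfo approach might look as follows; the keyword strings are as documented for Tools:-DGinfo:

```maple
# Query the active frame for its basis vectors and their dual one-forms
DGinfo("FrameBaseVectors");   # the tangent basis, e.g. [D_x, D_y]
DGinfo("FrameBaseForms");     # the dual basis of one-forms, e.g. [dx, dy]
```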

In actual fact, each dual basis element is considered a differential form, more in keeping with the modern approach to differential geometry.

Representing Vectors as DifferentialGeometry Objects

In the DifferentialGeometry package, the contravariant vector whose components are is given by

 >

 >

or by

 >

 >

The evalDG and the DGzip commands are two of the simpler ways to create an object whose data structure is intrinsic to the DifferentialGeometry package. When using the evalDG command, the asterisk is the explicit multiplication operator.  If this vector had been entered in math mode, the echo of the asterisk would be a centered dot.  The alternative to the asterisk would be the space.
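As an illustration with hypothetical component names a and b, the same contravariant vector can be built either way:

```maple
# evalDG: write the vector as an explicit linear combination
V := evalDG(a*D_x + b*D_y);

# DGzip: "zip" a list of coefficients onto a list of basis vectors
V := DGzip([a, b], [D_x, D_y], "plus");
```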

The covariant vector whose components are is entered as

 >

 >

or as

 >

 >

There does not seem to be a simple way to represent a vector as a column or row vector.  The Tools subpackage provides the DGinfo command with which the components of a vector can be extracted.  Its use is illustrated by

 >

 >

or by

 >

 >

Representing Tensors as DifferentialGeometry Objects

A rank-two contravariant tensor would be entered as

 >

 >

Each construct of the form D_x &t D_y corresponds to a dyadic basis element such as $\mathbf{e}_1 \otimes \mathbf{e}_2$, etc.

A rank-two covariant tensor would be entered as

 >

 >

Each construct of the form dx &t dy corresponds to a dyadic basis element such as $\mathbf{e}^1 \otimes \mathbf{e}^2$, etc.
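With hypothetical components a, b, c, e, the two rank-two tensors might be entered as follows; &t is the package's tensor-product operator:

```maple
# Rank-two contravariant tensor: components on the dyadic basis of tangent vectors
T := evalDG(a*D_x &t D_x + b*D_x &t D_y + c*D_y &t D_x + e*D_y &t D_y);

# Rank-two covariant tensor: components on the dual dyadic basis of one-forms
S := evalDG(a*dx &t dx + b*dx &t dy + c*dy &t dx + e*dy &t dy);
```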

The components of a tensor can be recovered with the DGinfo command

 >

 >

or with the convert command.

 >

 >

The components of the rank-two tensor are often represented as the entries of a matrix.  If a matrix is used for such a representation, it is possible to convert the matrix to a DifferentialGeometry tensor.

 >

 >

Contraction of Indices

Given two tensors of conformable dimensions, forming the sum of products over a repeated index is the operation called contraction of indices.  Recall that the Einstein summation convention indicates a sum on an index that appears once raised and once lowered.  This operation, implemented in the Tensor package with the ContractIndices command, is illustrated for the two tensors

 >

 >

from the previous section.  To simplify data entry, we rename the first as T and the second, as V.

 >

 >
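A sketch of one such contraction; the index-pair list tells ContractIndices which index of the first tensor is summed against which index of the second (the argument conventions should be checked against the ContractIndices help page):

```maple
# Contract the first (raised) index of T against the first (lowered) index of V
ContractIndices(T, V, [[1, 1]]);
```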

The four possible contractions that result in a rank-two tensor are given in Table 4, where their components are displayed as elements of matrices.

Table 4   Four rank-two tensors formed by contraction of T and V

Careful inspection of the four tensors in Table 4 (see especially the rightmost column) shows that they are all different.

The Metric Tensor

The geometry of a manifold is first captured in the covariant metric tensor $g_{ij}$ or its contravariant counterpart $g^{ij}$.  There is no "calculus" in tensor calculus without first obtaining this essential tensor.  Hence, it is imperative that there be efficient ways to obtain this tensor.  Several of these techniques will be illustrated for the Cartesian plane on which polar coordinates have been imposed.

Method 1 - Obtain as a Matrix and Convert to a Tensor

Define the map from polar to Cartesian coordinates, with equations of the form $x = x(r, \theta)$, $y = y(r, \theta)$, via

 >

 >

and the radius (position) vector via

 >

 >

Then, a representation of the basis vectors is given by

 >

 >

so that a matrix whose entries are the dot products $\mathbf{e}_i \cdot \mathbf{e}_j$ is

 >

 >

To convert this to the metric tensor in polar coordinates, we need to define the polar frame with

 >

 >

in which case we then have

 >

 >

The covariant form of this metric tensor, $g_{ij}$, can be obtained with

 >

 >

The matrix representing $g^{ij}$ is the inverse of the matrix representing $g_{ij}$.

 >

Method 2 - Transform the Cartesian Metric Tensor

Begin with the contravariant metric tensor on the Cartesian space

 >

 >

Define the transformation from Cartesian to polar coordinates as

 >

 >

and use it to convert the Euclidean metric to

 >

 >
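Method 2 might be sketched with the Transformation and Pullback commands, pulling the covariant Euclidean metric back along the polar-to-Cartesian map; the frame names E2 and P are illustrative, and for general tensors the Tensor:-PushPullTensor command plays the analogous role:

```maple
# Cartesian frame and its covariant Euclidean metric
DGsetup([x, y], E2):
g := evalDG(dx &t dx + dy &t dy):

# Polar frame and the polar-to-Cartesian coordinate map
DGsetup([r, theta], P):
F := Transformation(P, E2, [x = r*cos(theta), y = r*sin(theta)]):

# Pull the metric back to polar coordinates: dr &t dr + r^2 * dtheta &t dtheta
gP := Pullback(F, g);
```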

Raising and Lowering Indices

The same vector V can be given with respect to the natural basis of tangent vectors $\mathbf{e}_i$ or with respect to the reciprocal basis of gradient vectors $\mathbf{e}^i$.  Thus, one has $V = v^i \mathbf{e}_i = w_i \mathbf{e}^i$.  The conversion between contravariant and covariant components of V is effected by contraction with the metric tensor:

$w_i = g_{ij}\, v^j$

or

$v^i = g^{ij}\, w_j$

Example 1

Given the contravariant vector

 >

 >

the covariant components are given by the RaiseLowerIndices command in the Tensor package.  Thus, we have

 >

 >

where "g" denotes the covariant metric tensor $g_{ij}$.  Alternatively, given the covariant vector

 >

 >

the contravariant components are

 >

 >

where the remaining name denotes the contravariant metric tensor $g^{ij}$, computed via

 >
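Both directions of index shifting might be sketched as follows, assuming the polar metric g from above:

```maple
# Contravariant metric g^(ij) as the inverse of the covariant metric g
h := InverseMetric(g);

# Lower the index of the contravariant vector V:  w_i = g_ij v^j
w := RaiseLowerIndices(g, V, [1]);

# Raise it again with the contravariant metric:  v^i = g^(ij) w_j
RaiseLowerIndices(h, w, [1]);
```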

The Connection Coefficients

Once the metric tensor is known, the way the basis vectors change from point to point can be determined.  It turns out that the rate of change of the basis vectors can be expressed as linear combinations of these same vectors.  The coefficients of these linear combinations are called the connection coefficients, or the Christoffel symbols.  Depending on the form used for these symbols, they are called the Christoffel symbols of the first kind or the second kind.  Since there is a built-in command that provides the Christoffel symbols of the second kind, we will obtain those via

 >

 >
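A sketch, assuming the metric tensor g has been defined for the current frame; the "FirstKind" keyword is as described in the Christoffel help page:

```maple
# Christoffel symbols of the second kind for the metric g
C2 := Christoffel(g);

# Christoffel symbols of the first kind
C1 := Christoffel(g, "FirstKind");
```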

Classical texts in differential geometry use notations such as $\Gamma^a_{bc}$ or $\left\{ {a \atop bc} \right\}$ for Christoffel symbols of the second kind, and $\Gamma_{abc}$ or $[bc, a]$ for Christoffel symbols of the first kind.  Consequently, the correct interpretation of the output of the Christoffel command is captured, painstakingly and laboriously, in Table 5.

 Table 5   Christoffel symbols of the second kind for polar coordinates

The closest the Tensor package comes to articulating the Christoffel symbols is

 >

 >

where the list $[i, j, k]$ maps to $\Gamma^{j}_{ik}$.  Thus, the middle index in the list is the raised one, and the first and third are the lowered ones.

Example 2

For polar coordinates, show that the derivatives of the basis vectors can be expressed as linear combinations of the basis vectors, and that the coefficients of these linear combinations are the Christoffel symbols.

To show that derivatives of basis vectors can be expressed as linear combinations of basis vectors, begin by expressing the polar basis vectors in terms of the Cartesian unit vectors.  This is done in Table 6, where the symbols i and j are introduced explicitly and naming clashes are avoided by introducing alternate names for the basis vectors.

Table 6   Expressing the polar basis vectors in terms of i and j

Table 7, which lists the derivatives of the basis vectors, shows that the Christoffel symbols indeed are the coefficients in the linear combinations of basis vectors that express the derivatives of the basis vectors.

Table 7   Differentiation of basis vectors in polar coordinates

Table 8 provides a formula for computing Christoffel symbols of the second kind from the components of the metric tensor.

$\Gamma^a_{bc} = \dfrac{1}{2}\, g^{ad} \left( \dfrac{\partial g_{db}}{\partial x^c} + \dfrac{\partial g_{dc}}{\partial x^b} - \dfrac{\partial g_{bc}}{\partial x^d} \right)$

Table 8   Christoffel symbols in terms of the metric tensor

A practical notational shortcut is the use of a comma-subscript, as in $g_{db,c}$, for the differentiation operator $\partial/\partial x^c$.  The formula in Table 8 would be easier to write (and even remember) if this notational device were used.

 >

The Covariant Derivative

In calculus, $\nabla f \cdot \mathbf{u}$ is the directional derivative of the scalar function $f$ taken in the direction of the unit vector u.  The gradient vector arises naturally from the calculation of the derivative

$\left. \dfrac{d}{dt}\, f(\mathbf{x} + t\,\mathbf{u}) \right|_{t=0} = \nabla f \cdot \mathbf{u}$

which is how the directional derivative is defined.  Of course, this calculation extends to higher dimensions.  But more important, note how, starting with a scalar function $f$, the vector quantity $\nabla f$ must be defined, and the desired directional derivative is a dot product of this gradient vector with the direction vector.

The covariant derivative arises in much the same way, that is, from defining a directional derivative of a vector.  The new object that must be created is the covariant derivative, a rank-two tensor, and the actual rate of change of the vector in a given direction is the "dot product" of this new tensor with a vector specifying the direction.  In particular, Table 9 gives the expressions for the covariant derivative of contravariant and covariant vectors, a mixed tensor, and the metric tensor.

| Quantity | Covariant Derivative |
| --- | --- |
| Contravariant vector | $v^i_{\ ;j} = \partial_j v^i + \Gamma^i_{kj}\, v^k$ |
| Covariant vector | $w_{i;j} = \partial_j w_i - \Gamma^k_{ij}\, w_k$ |
| Mixed tensor | $T^i_{k;j} = \partial_j T^i_k + \Gamma^i_{mj}\, T^m_k - \Gamma^m_{kj}\, T^i_m$ |
| Metric tensor | $g_{ij;k} = 0$ |

Table 9   Formulas for covariant derivatives

In polar coordinates, covariant derivatives of the contravariant and covariant vectors

 >

 >

respectively, are obtained in the Tensor package with the CovariantDerivative command.  It requires as a second argument the connection coefficients, which can be given either as the Christoffel symbols (of the second kind)

 >

 >

or as the connection

 >

 >

(In the Tensor package, the Christoffel symbols are not separated.  The argument to the Connection command is the same sum of terms that the Tensor package uses to express the Christoffel symbols.)

The covariant derivative of the contravariant vector is

 >

 >

more easily read as the array

 >

 >
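The computation might be sketched as follows, with a conversion to DGArray used for the readable array form:

```maple
C := Christoffel(g):                    # connection coefficients for the metric g
gradV := CovariantDerivative(V, C);     # the rank-two tensor v^i_(;j)
convert(gradV, DGArray);                # display the components as an array
```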

The covariant derivative of the covariant vector is

 >

 >

more easily read as the array

 >

 >

The Directional Covariant Derivative

The covariant derivative is the rank-two tensor that arises when the directional derivative of the vector is defined. As such, it is the generalization of the gradient vector that arises when the directional derivative of the scalar function is defined.  Just as the directional derivative of the scalar function requires a dot product with a direction vector, so too does the directional derivative of the vector.  Thus, the directional derivative of the vector V in the direction U is given by the contraction of the covariant derivative with the direction vector.

This calculation is implemented in the Tensor package via the DirectionalCovariantDerivative command.

Example 3

The derivative of the contravariant vector

 >

 >

in the direction of the vector

 >

 >

is given by

 >

 >
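The call might be sketched as follows, with U as the direction vector; the argument order shown is an assumption to be verified against the DirectionalCovariantDerivative help page:

```maple
# Derivative of V in the direction U, using the Christoffel connection C
DirectionalCovariantDerivative(U, V, C);
```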

If we write the covariant derivative as the array

 >

 >

and write the directional covariant derivative as the column vector

 >

 >

we can more easily see that the contraction is well modeled by the product of the matrix with the direction vector.

 >

Parallelism and Geodesics

The contravariant vector $v^i$ defined along the curve $C: x^i = x^i(t)$ is said to be parallel along $C$ if

$\dfrac{\delta v^i}{\delta t} = \dfrac{d v^i}{dt} + \Gamma^i_{jk}\, v^j\, \dfrac{dx^k}{dt} = 0$

that is, if the absolute (or intrinsic) derivative along $C$ vanishes.

Definition 1   Parallelism along a curve

The ParallelTransportEquations command in the Tensor package generates the equations implied by Definition 1.

Example 4

In polar coordinates, let the curve C be given by

 >

 >

The contravariant vector

 >

 >

is parallel along C if the equations implied by

 >

 >

hold.  To access the components of this vector and write the individual equations, use

 >

 >

where the extra syntax isolates the two equations in a unique order.  Maple provides a solution for these equations and the initial conditions

 >

 >

where again, we provide extra syntax for extracting and processing items uniquely.  We are looking for some evidence that along the curve the vector remains parallel.  Since the polar plane is coincident with the Cartesian plane, we will look for this evidence in Cartesian coordinates.

We begin by writing the curve in radius (position) vector form:

 >

 >

Then, we form the vector and evaluate it along

 >

 >

Figure 1 is a graph of the curve C along with vectors that have been transported parallel to the initial vector.  Even without the graph, it's clear from the algebraic expression for the field along C that it's constant in the Euclidean sense, and hence, parallel.

Figure 1   Parallel field along the curve
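The parallel-transport computation of Example 4 might be sketched as follows; the curve, the coefficient names a(t) and b(t), and the argument order are all illustrative assumptions to be checked against the ParallelTransportEquations help page:

```maple
C := Christoffel(g):

# Hypothetical curve (r = 1, theta = t) and a vector with unknown coefficients
curve := [1, t]:
Y := DGzip([a(t), b(t)], [D_r, D_theta], "plus"):

# Generate the ODEs that a(t), b(t) must satisfy for Y to be parallel along the curve
ParallelTransportEquations(curve, Y, C, t);
```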

The contravariant vector field $v^i$ represents a parallel vector field if its covariant derivative vanishes.

Definition 2   Parallel vector field

Example 5

The covariant derivative of the vector field

 >

 >

is, in array form,

 >

 >

The vanishing of this covariant derivative requires solving a set of four partial differential equations, the general solution of which is

 >

 >

For any initial direction of the vector in polar coordinates, the equivalent vector in Cartesian coordinates is

 >

 >

Hence, we have a constant vector field, parallel in the Euclidean plane.

 >

Let $s$ be arc length along $C$, the curve defined by $x^i = x^i(s)$.  If the unit tangent vector $dx^i/ds$ is parallel along $C$, then $C$ is a geodesic.  The condition that $C$ must satisfy is

$\dfrac{d^2 x^i}{ds^2} + \Gamma^i_{jk}\, \dfrac{dx^j}{ds}\, \dfrac{dx^k}{ds} = 0$

Definition 3   Geodesics

Example 6

The equations for geodesic curves in polar coordinates are

 >

 >
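The computation might be sketched as follows, writing the curve as coordinate functions of the arc-length parameter s; the argument conventions should be checked against the GeodesicEquations help page:

```maple
# Geodesic equations for the polar-coordinate metric g
C := Christoffel(g):
GeodesicEquations([r(s), theta(s)], C, s);
```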

If we use the overdot to express differentiation with respect to the arc length $s$, we can write these equations as

$\ddot{r} - r\,\dot{\theta}^2 = 0 \qquad \ddot{\theta} + \dfrac{2}{r}\,\dot{r}\,\dot{\theta} = 0$

These equations are consistent with Definition 3, as we see by recalling the Christoffel symbols in Table 5.  From Definition 3 we have

$\ddot{x}^i + \Gamma^i_{jk}\,\dot{x}^j\,\dot{x}^k = 0$

In the first equation, $\Gamma^r_{\theta\theta} = -r$ and the remaining Christoffel symbols are zero; in the second, $\Gamma^\theta_{r\theta} = \Gamma^\theta_{\theta r} = 1/r$ and the other Christoffel symbols are zero.

The general solution of the geodesic equations in polar coordinates is

 >

 >

The first solution is a radial line segment; the second, an arbitrary line segment - both in accord with our expectations for geodesics in the plane.

 >

Curvature

The commutator formula for covariant differentiation is, in one common sign convention,

$w_{i;jk} - w_{i;kj} = R^l_{\ ijk}\, w_l$

where

$R^l_{\ ijk} = \dfrac{\partial \Gamma^l_{ik}}{\partial x^j} - \dfrac{\partial \Gamma^l_{ij}}{\partial x^k} + \Gamma^m_{ik}\,\Gamma^l_{mj} - \Gamma^m_{ij}\,\Gamma^l_{mk}$

and $w_{i;jk} = (w_{i;j})_{;k}$.  The mixed rank-four tensor $R^l_{\ ijk}$ is generally called the Riemann-Christoffel curvature tensor of the second kind.  Other names include curvature tensor, Riemann tensor, Riemann-Christoffel tensor, and mixed Riemann-Christoffel tensor.  Moreover, variations in the formula itself appear in the literature, especially when the repeated covariant differentiation is denoted by operators such as $\nabla_k \nabla_j$.  There is an inherent reversal of the lexical order with respect to the operator order.  When the reversed lettering is translated to subscript form, a minus sign is introduced into the definition.  The reader is cautioned to be most careful when comparing Maple to the literature.

The Riemann-Christoffel curvature tensor of the first kind is defined as $R_{lijk} = g_{lm}\, R^m_{\ ijk}$ and is sometimes called the covariant curvature tensor, and even the Riemann tensor.  Again, great care must be taken when reading and comparing different texts.

The Riemann curvature tensor (see how easily these phrases creep into one's writing?) measures the curvature of a space. A space in which this tensor is zero is called flat, and the Cartesian plane is flat no matter what coordinate system is imposed, as we see from

 >

 >

the curvature tensor for the plane under polar coordinates.

A more interesting surface is that of the unit sphere centered at the origin.  The metric tensor for this surface is obtained by the following calculations.

 >

 >

Consequently, the Riemann-Christoffel curvature tensor of the second kind is

 >

 >

or better yet,

 >

 >
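A sketch, assuming the metric g of the sphere from above; the DGArray conversion is used only to make the nonzero components easy to scan:

```maple
C := Christoffel(g):
R := CurvatureTensor(C);     # mixed rank-four curvature tensor
convert(R, DGArray);         # display the components for inspection
```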

From this, we construct the explicit representation in Table 10.

Table 10   Nonzero components of the curvature tensor of the second kind for the sphere

The Riemann-Christoffel curvature tensor of the first kind is

 >

 >

or better yet,

 >

 >

From this, we construct Table 11.

Table 11   Nonzero components of the curvature tensor of the first kind for the sphere

Notice that for a space of dimension $n = 2$ there is just one distinct component needed to describe the complete tensor, and this component is generally taken as $R_{1212}$.  In general, there are $n^2(n^2 - 1)/12$ distinct components of the Riemann curvature tensor of the first kind, so for $n = 3$ there are 6, but for $n = 4$ there are 20.

In the DifferentialGeometry package, the Ricci tensor is defined as a contraction of the curvature tensor.  Obtained by this definition, the tensor is

 >

 >

Obtained with the RicciTensor command, this tensor is

 >

 >

We see that the tensor is the same under either computation.

The Ricci scalar, given by

 >

 >

is defined as $R = g^{ij} R_{ij}$, a definition we can test via

 >

 >
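The built-in computation and the definitional test might be sketched as follows; the two-pair contraction list is an assumption to be checked against the ContractIndices help page:

```maple
Ric := RicciTensor(R):       # Ricci tensor from the curvature tensor R
S := RicciScalar(g, R);      # built-in Ricci scalar

# Definitional test: contract g^(ij) against R_ij to reproduce the scalar
ContractIndices(InverseMetric(g), Ric, [[1, 1], [2, 2]]);
```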

For a two-dimensional manifold the Gaussian (or total) curvature is given by $K = R_{1212}/g$, where $g$ is the determinant of the array representing the metric tensor $g_{ij}$.  Since for the sphere the single distinct component $R_{1212}$ is known, $K$ can be computed in the Tensor package via

 >

 >

Finally, we note that $G^{ij} = R^{ij} - \frac{1}{2}\, g^{ij} R$ is the Einstein tensor.  For the sphere, this tensor is

 >

 >

so that it vanishes, as we can verify by inspection after obtaining the contravariant Ricci tensor via

 >

 >

Since $R = 2$, and the array form of the mixed Ricci tensor $R^i_{\ j}$ is the identity matrix, the array form of the Einstein tensor is the zero matrix.
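The Einstein-tensor check might be sketched as follows, assuming the sphere's metric g and curvature tensor R from above:

```maple
# Einstein tensor for the sphere; expected to vanish identically
E := EinsteinTensor(g, R);
```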

Legal Notice:  Maplesoft, a division of Waterloo Maple Inc. 2009. Maplesoft and Maple are trademarks of Waterloo Maple Inc. This application may contain errors and Maplesoft is not liable for any damages resulting from the use of this material. This application is intended for non-commercial, non-profit use only. Contact Maplesoft for permission if you wish to use this application in for-profit activities.