Cartesian Tensors: An Introduction
by G. Temple

Overview

This undergraduate text provides an introduction to the theory of Cartesian tensors, defining tensors as multilinear functions of direction, and simplifying many theorems in a manner that lends unity to the subject. The author notes the importance of the analysis of the structure of tensors in terms of spectral sets of projection operators as part of the very substance of quantum theory. He therefore provides an elementary discussion of the subject, in addition to a view of isotropic tensors and spinor analysis within the confines of Euclidean space. The text concludes with an examination of tensors in orthogonal curvilinear coordinates. Numerous examples illustrate the general theory and indicate certain extensions and applications. 1960 edition.

Product Details

ISBN-13: 9780486154541
Publisher: Dover Publications
Publication date: 04/06/2012
Series: Dover Books on Mathematics
Sold by: Barnes & Noble
Format: eBook
Pages: 112
Sales rank: 938,313
File size: 8 MB

Read an Excerpt

CARTESIAN TENSORS

AN INTRODUCTION


By G. TEMPLE

Dover Publications, Inc.

Copyright © 2004 Dover Publications, Inc.
All rights reserved.
ISBN: 978-0-486-15454-1



CHAPTER 1

Vectors, Bases and Orthogonal Transformations

* * *

1.1 Introduction

Mathematicians and physicists have thoroughly exploited the daydream in which M. des Cartes first envisaged the tri-rectangular frame of reference which we have named in his honour. But a Cartesian coordinate system is generally an external frame of reference with no organic or intrinsic relation to the mathematical or physical entities to be described. It is indispensable but irrelevant. The best we can do is express our mathematical descriptions in such a way that they preserve the same form in any Cartesian frame. Such a technique makes the frame harmless although necessary.

We need therefore some mathematical structures which shall be invariant under transformations from one frame of reference to another. Such structures are tensor algebra and analysis. But before studying these structures we need to study the nature of the transformations from one frame to another. We need not worry about such a triviality as a mere change of origin, but rotational transformations deserve serious consideration. We therefore begin with rotations and the vectors which they rotate.


1.2 The geometrical theory of vectors

In the geometrical theory of vectors a vector is defined, in effect, as an ordered pair of points, say AB, although the harsh abstraction of this definition is often mitigated by the gloss that the vector is represented by an 'arrow', i.e. by the directed segment of the straight line AB.

It is at once evident that a vector so defined possesses both magnitude and direction, for the magnitude of the vector AB is naturally taken to be the distance |AB| between A and B, and, if the points A, B, C, D (in this order) form the vertices of a parallelogram, then the vectors AB and DC are naturally described as equal in magnitude and similar in direction. But whereas the numerical representation of the magnitude of a vector is obvious and direct, the numerical representation of its direction presents a problem.

The key to the solution of this problem is the fact that just as the position of a point can be specified only by reference to other points, so also the direction of a vector can be specified only by reference to other vectors. The relative direction of two vectors OA and OB is measured by the cosine of either of the angles, α or 2π - α, between the straight lines OA and OB. To give a complete description of the direction of a vector OP in three-dimensional space we need the cosines of the angles between the straight line OP and three other given straight lines OA, OB, OC, which are not coplanar.

These same 'direction cosines' will then also specify the direction of any other vector which is parallel to OP.


1.3 Bases

The simplest way of systematizing the calculus of directions from a point P is to take three mutually perpendicular unit vectors PX1, PX2, PX3 as a basis of reference. There are then two species of bases – the left handed and the right handed. These are distinguished by drawing the triangle X1X2X3 and viewing it from the point P. The vectors PX1, PX2, PX3 specify a sense of circulation around the triangle X1X2X3. If this sense is right handed as seen from P, the basis is described as right handed; if the sense of circulation is left handed as seen from P, the basis is described as left handed. Right handedness and left handedness cannot be further analysed; they can only be demonstrated, as, for example, by reference to the directions in which the tendrils of the hop and the vine curl as they grow upwards.

Henceforward we shall employ only right handed bases, in accordance with what has become a standard and universal practice.

The direction of a vector PQ, relative to the base PX1X2X3, will then be specified by the ordered set of three direction cosines (l1, l2, l3), such that lα = the cosine of either angle between PQ and PXα.

Hence

-1 ≤ lα ≤ +1 (α = 1, 2, 3).

Also, by taking the distance |PQ| to be unity, it follows from Pythagoras' Theorem that

l1² + l2² + l3² = 1. (1.3.1)

Now consider two vectors PQ and PR with direction cosines (l1, l2, l3) and (m1, m2, m3). Then, by a well-known theorem of coordinate geometry in three dimensions, if θ is either of the angles between the straight lines PQ and PR,

cos θ = l1m1 + l2m2 + l3m3. (1.3.2)
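
These two formulae are easy to check numerically. The following sketch assumes Python with NumPy, and the two sample vectors PQ and PR are arbitrary choices made for the illustration, not taken from the text:

import numpy as np

PQ = np.array([1.0, 2.0, 2.0])       # an arbitrary vector from P to Q
PR = np.array([2.0, -1.0, 2.0])      # an arbitrary vector from P to R

l = PQ / np.linalg.norm(PQ)          # direction cosines (l1, l2, l3)
m = PR / np.linalg.norm(PR)          # direction cosines (m1, m2, m3)

print(l @ l)                         # l1^2 + l2^2 + l3^2 = 1            (1.3.1)
print(l @ m)                         # cos(theta) = l1m1 + l2m2 + l3m3   (1.3.2)
print(np.degrees(np.arccos(l @ m)))  # the corresponding angle, in degrees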


1.4 The summation convention

It is convenient to introduce at once a convention due to Einstein which considerably lightens the burden of writing, or setting up in type, such expressions as

l1m1 + l2m2 + l3m3.

This convention is that any expression, such as

lαmα,

in which a suffix, such as α, is repeated, is to be interpreted as the sum of all the values which lαmα can take as α takes the values 1, 2, 3, i.e.

lαmα = l1m1 + l2m2 + l3m3.

Also any expression in which two or more suffixes α, β, ... are each repeated is to be interpreted as the sum of all the values which it can take as α, β, ... take the values 1, 2, 3, e.g.

aαβlαmβ = a11l1m1 + a12l1m2 + a13l1m3 + a21l2m1 + ... + a33l3m3.

With this 'summation convention',

lαmα = lβmβ,

so that the identity of the letter used for a repeated suffix has no significance. Hence the repeated suffixes are called 'dummy suffixes'.

Similarly any symbol in brackets with a literal suffix, such as

(lα),

is taken to mean the ordered set (l1, l2, l3).
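
As a concrete illustration of the convention, the following Python/NumPy sketch (the arrays a, l and m are arbitrary examples, not from the text) writes out the implied sums both as explicit loops and with numpy.einsum, whose subscript string plays exactly the role of the repeated suffixes:

import numpy as np

l = np.array([0.6, 0.0, 0.8])
m = np.array([0.0, 1.0, 0.0])
a = np.arange(9.0).reshape(3, 3)     # a generic array of coefficients a[alpha, beta]

# l_alpha m_alpha: the repeated suffix alpha is summed over 1, 2, 3.
print(sum(l[i] * m[i] for i in range(3)),
      np.einsum('a,a->', l, m))

# a_alphabeta l_alpha m_beta: two repeated suffixes, each summed independently.
print(sum(a[i, j] * l[i] * m[j] for i in range(3) for j in range(3)),
      np.einsum('ab,a,b->', a, l, m))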


1.5 The components of a vector

If the vector PQ has a magnitude

u = |PQ|,

and if its direction cosines relative to a base PX1X2X3 are (lα), then the components of the vector PQ in the directions (PXα) are defined to be the numbers

uα = ulα,

(which may be positive or negative). These components are the lengths of the orthogonal projections of the vector PQ on the directions (PXα).

The component of the vector PQ in the direction of any other vector PR with direction cosines (mα) is similarly defined to be

u cos θ,

where θ is either of the angles between the lines PQ and PR. Now by (1.3.2),

u cos θ = u.lαmα = uαmα.

Hence the component of the vector [??] in the direction (mα) is a linear function of the direction cosines (mα).

Example: Any vector uα can be expressed in the form

uα = (uβlβ)lα + (uβmβ)mα + (uβnβ)nα,

where the directions (lα), (mα), (nα) form a base.

This elementary theorem enables us to give a rigorous interpretation to the familiar description of a vector as a 'directed magnitude'. Just as an ordinary (scalar) function f(x) is completely specified by its numerical value for each assigned value of the argument x, so a vector u is completely specified by the numerical value of its component for each assigned direction (mα). With any vector u there is therefore associated a function of direction, say u(m1, m2, m3), which determines its component in the direction (mα). This by itself is not sufficient to characterize a vector, for the same statement is true of 'pseudo-vectors' such as moments of inertia. What really characterizes a true vector is that the associated function of direction u(mα) is real, single-valued and linear in the direction cosines (mα), so that

u = u(mα) = uαmα. (1.5.1)

The advantages of associating a vector u with the linear function of direction, u(mα) = uαmα, as will appear later, are that the 'transformation laws' appear as an immediate deduction, and that the whole theory of tensors can be developed as a theory of multilinear functions of several directions.
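
The resolution uα = (uβlβ)lα + (uβmβ)mα + (uβnβ)nα of the example above can be verified numerically; in this Python/NumPy sketch the orthonormal triad and the vector u are arbitrary choices made for the illustration:

import numpy as np

# Rows of a proper orthogonal matrix: a right handed orthonormal triad.
l = np.array([0.36, 0.48, -0.80])
m = np.array([-0.80, 0.60, 0.00])
n = np.array([0.48, 0.64, 0.60])

u = np.array([3.0, -1.0, 2.0])                      # an arbitrary vector
u_again = (u @ l) * l + (u @ m) * m + (u @ n) * n   # (u.l)l + (u.m)m + (u.n)n
print(np.allclose(u, u_again))                      # True: the resolution recovers u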


1.6 Transformations of base

Since the direction and components of a vector are specified by reference to an arbitrary base, it is necessary to determine the changes produced in the direction and components by a change of base.

We therefore consider two bases (PXα) and (PYα) . The base (PYα) is completely specified in terms of the base (PXα) by the nine numbers Tαβ, where

Tαβ = the cosine of either angle between PYα and PXβ.

These numbers can be arranged in a square matrix, in which Tαβ is the element in row α and column β.

It will be obvious that these nine direction cosines are far from being independent, and furthermore that, in general,

Tαβ ≠ Tβα.

Similarly the base (PXα) is completely specified in terms of the base (PYα) by the nine numbers T'αβ, where T'αβ = the cosine of the angle between PXα and PYβ.

Obviously

T'αβ = Tβα,

so that the matrix T', formed with the elements T'αβ, is the transpose of the matrix T, formed with the elements Tαβ.

Now let the direction of a vector PQ be specified by the direction cosines (lα) relative to the base (PXα) and by the direction cosines (mα) relative to the base (PYα). Then

mα = the component of the unit vector along PQ in the direction PYα = Tαβlβ,

by the preceding definitions. Hence

mα = Tαβlβ. (1.6.1)

Similarly

lα = T'αβmβ = mβTβα. (1.6.2)

These formulae express the relation between directions relative to the two bases (PXα) and (PYα).
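
A numerical sketch of the transformation law, assuming Python with NumPy; the matrix T below is an illustrative proper orthogonal matrix, not one taken from the text:

import numpy as np

# T[alpha, beta]: rows are the direction cosines of PY_alpha relative to (PX_beta).
T = np.array([[0.36, 0.48, -0.80],
              [-0.80, 0.60, 0.00],
              [0.48, 0.64, 0.60]])

l = np.array([1.0, 2.0, 2.0]) / 3.0   # direction cosines in the base (PX_alpha)
m = T @ l                             # m_alpha = T_alphabeta l_beta      (1.6.1)
print(np.allclose(l, T.T @ m))        # l_alpha = T'_alphabeta m_beta     (1.6.2)
print(m @ m)                          # the transformed direction is still a unit direction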


1.7 Properties of the transformation matrix T

The nine direction cosines Tαβ are in fact connected by six independent relations which are easily derived as follows:

By using different dummy suffixes we can express mαmα or m1² + m2² + m3² in the form

(Tαβlβ)(Tαγlγ) = TαβTαγlβlγ = lβlβ,

for any set of direction cosines (lβ).

If we equate the coefficients of lβlγ on each side of this identity, we find that

TαβTαγ = 0 if β ≠ γ or 1 if β = γ (1.7.1)

(no summation with respect to β being implied in the latter case).

Similarly we find that

TβαTγα = 0 if β ≠ γ or 1 if β = γ. (1.7.2)

These properties (1.7.1) and (1.7.2) of the transformation matrix T can be written and remembered more easily by introducing the unit matrix U, whose elements are the Kronecker symbols δαβ, defined as

δαβ = 0 if α ≠ β or 1 if α = β. (1.7.3)

Thus

TαβTαγ = δβγ, TβαTγα = δβγ.

It is also advantageous to write products of matrices so that the dummy suffixes are adjacent. Thus (1.7.1) and (1.7.2) can be written as

T'βαTαγ = δβγ, TβαT'αγ = δβγ, (1.7.4)

or even more compactly still as

T'T = U, TT' = U. (1.7.5)

The corresponding transformation law for direction cosines may also be abbreviated to

m = Tl, l = T'm, m = lT', l = mT. (1.7.6)

The transformation matrix T is also subject to a further condition when we restrict ourselves to transformations from one right handed base (PXα) to another (PYα) , viz. that the determinant of the matrix T, det T, is equal to +1. All we can note here is that, by the theory of determinants, it follows from (1.7.4) or (1.7.5) that

(det T')(det T) = (det T)² = 1, so that det T = ±1. (1.7.7)

In fact it can be shown (see example 3, p. 13) that det T = 1 if (PXα) and (PYα) are both right handed or both left handed, but that det T = - 1 if one is right handed and the other is left handed.
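
The relations (1.7.4)-(1.7.7) can be confirmed numerically. This Python/NumPy sketch uses an illustrative rotation matrix together with the reflexion matrix of example (4) of section 1.9 below; both particular matrices are choices made for the illustration:

import numpy as np

T = np.array([[0.36, 0.48, -0.80],          # an illustrative rotation (proper)
              [-0.80, 0.60, 0.00],
              [0.48, 0.64, 0.60]])
lam = np.array([2.0, -1.0, 2.0]) / 3.0      # unit normal of a reflecting plane
S = np.eye(3) - 2.0 * np.outer(lam, lam)    # reflexion: T_alphabeta = delta_alphabeta - 2 lam_alpha lam_beta

for M in (T, S):
    print(np.allclose(M.T @ M, np.eye(3)),    # T'T = U   (1.7.5)
          np.allclose(M @ M.T, np.eye(3)),    # TT' = U
          round(float(np.linalg.det(M)), 6))  # +1 for the rotation, -1 for the reflexion (1.7.7)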


1.8 The orthogonal group

In this section we will not restrict ourselves to right handed bases. It will still be true, however, that, under a transformation from a basis PXα to another basis PYα, any vector remains unchanged in length. If (uα) and (vα) are the components of the same vector PQ in these two bases, and if S is the transformation matrix, then

vβ = Sβαuα and vβvβ = uαuα.

Such length-preserving transformations are called 'orthogonal transformations'.

Now let us carry out a second transformation from the basis PYα to the basis PZα. Let the transformation matrix be T and let (wα) be the components of the vector PQ in the third basis PZα. Then

wγ = Tγβvβ, and wγwγ = vβvβ.

It is clear that the direct transformation from PXα to PZα must also be an orthogonal transformation, since

wγwγ = uαuα

In this transformation

wγ = TγβSβαuα = Uγαuα, say.

Hence the matrix of this transformation, U, is the product of the matrix S by the matrix T.

Any two orthogonal transformations, S and T, therefore possess a 'product' TS. The product TS is in general different from the product ST, so that multiplication is not commutative.

We can also invert any orthogonal transformation, for

uα = S'αβvβ,

where S' is the transpose of S. Hence any orthogonal transformation S possesses an inverse S⁻¹, which for such transformations is merely the transpose S' of S.

Now any set of transformations Tn which includes the inverse Tn⁻¹ of any transformation Tn, and the product TkTj of any two transformations Tj, Tk, is called a group of transformations. Hence the orthogonal transformations form a group, called the 'orthogonal group'.

We have already proved (1.7.7) that, if T is the matrix of any orthogonal transformation, then

det T = ±1.

The orthogonal transformations therefore fall into two classes, (1) the 'proper' orthogonal transformations, for which

det T = + 1,

and (2) the 'improper' orthogonal transformations, for which

det T = - 1.

The proper orthogonal transformations themselves form a group, for if

det S = 1 and det T = 1, then det(TS) = (det T).(det S) = 1.

This group is a subgroup of the complete orthogonal group. The improper orthogonal transformations do not form a group.
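
A short Python/NumPy sketch of these group properties; generating orthogonal matrices by QR factorisation of random matrices is a convenience of the sketch, not a construction from the text:

import numpy as np

def random_orthogonal(seed):
    # QR factorisation of a random 3 x 3 matrix yields an orthogonal factor Q.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return q

S, T = random_orthogonal(1), random_orthogonal(2)
P = T @ S                                     # the product transformation TS
print(np.allclose(P.T @ P, np.eye(3)))        # TS is again orthogonal
print(np.isclose(np.linalg.det(P), np.linalg.det(T) * np.linalg.det(S)))
print(np.allclose(np.linalg.inv(S), S.T))     # the inverse is the transpose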


1.9 Examples

(1) A rotation about PX3 through the angle α has the transformation matrix

T = |  cos α   sin α   0 |
    | -sin α   cos α   0 |
    |    0       0     1 |

and det T = + 1.

(2) A reflexion in the plane x2 cos α = x1 sin α has the transformation matrix

T = | cos 2α    sin 2α   0 |
    | sin 2α   -cos 2α   0 |
    |   0         0      1 |

with determinant det T = - 1.

(3) The transformation matrix for a rotation is most easily obtained by the use of vector algebra.

Let the rotation be through an angle ω about an axis PΩ of unit length and direction cosines (λα). Let the point A with coordinates (lα) be carried to the point B with coordinates (mα). Let N be the foot of the perpendiculars from A and B on to PΩ.

Draw AC in the plane of ANB and perpendicular to NA to meet NB produced in C. Then

PB = PN + NB = PN + cos ω.NA + sin ω.(λ × PA), where PN has components (lβλβ)λα and NA = PA - PN.

Hence m1 = (1 - cos ω)(lαλα)λ1 + cos ω.l1 + sin ω.(λ2l3 - λ3l2), with two similar equations, i.e.

T = | (1 - cos ω)λ1² + cos ω         (1 - cos ω)λ1λ2 - λ3 sin ω     (1 - cos ω)λ1λ3 + λ2 sin ω |
    | (1 - cos ω)λ2λ1 + λ3 sin ω     (1 - cos ω)λ2² + cos ω         (1 - cos ω)λ2λ3 - λ1 sin ω |
    | (1 - cos ω)λ3λ1 - λ2 sin ω     (1 - cos ω)λ3λ2 + λ1 sin ω     (1 - cos ω)λ3² + cos ω     |

The determinant of T is a continuous function of ω, which reduces to +1 as ω → 0. Hence det T is always equal to +1. (A numerical check of this matrix, and of the matrices of examples (4)-(6), is sketched after example (10) below.)

(4) A reflexion in the plane λαxα = 0, whose normal has direction cosines (λα), has the transformation matrix

Tαβ = δαβ - 2λαλβ and det T = - 1.

(5) A 'half-turn', i.e. a rotation through 180° about an axis with direction cosines (λα), has the transformation matrix

Tαβ = 2λαλβ - δαβ, with det T = + 1.

(6) A reflexion in the origin has the transformation matrix Tαβ = - δαβ; and the product of three reflexions in three mutually orthogonal planes is a reflexion in their common point of intersection.

(7) Show that in the matrix T of any proper orthogonal transformation, any element Tαβ is equal to its cofactor. Hence show that

det (Tαβ - δαβ) = 0.

(8) Deduce from the preceding result that there exists a vector λα such that

Tαβλβ = λα,

in any proper orthogonal transformation. (The vector λα lies along the axis of the rotation specified by T.)

(9) By taking a basis with PY3 along this vector (λα), show that any proper orthogonal transformation is a rotation, and therefore preserves the chirality of any base (i.e. a right handed base is transformed into a right handed base, etc.).

(10) By considering reflexions in the planes

x2 cos α = x1 sin α, x2 cos β = x1 sin β,

show that any rotation is the product of two reflexions.
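
The matrices of examples (3)-(6) lend themselves to a quick numerical check. This Python/NumPy sketch assembles the rotation matrix of example (3) from the formula for mα given above and builds the other matrices directly; the particular axis, angle and direction cosines are arbitrary choices made for the illustration:

import numpy as np

def rotation_matrix(axis, omega):
    # T for a rotation through omega about the unit axis (lambda_alpha), assembled from
    # m_alpha = (1 - cos w)(l.lam)lam_alpha + cos w.l_alpha + sin w.(lam x l)_alpha.
    lam = np.asarray(axis, dtype=float)
    lam = lam / np.linalg.norm(lam)
    K = np.array([[0.0, -lam[2], lam[1]],
                  [lam[2], 0.0, -lam[0]],
                  [-lam[1], lam[0], 0.0]])    # K @ l gives the vector product lam x l
    return ((1 - np.cos(omega)) * np.outer(lam, lam)
            + np.cos(omega) * np.eye(3)
            + np.sin(omega) * K)

lam = np.array([2.0, -1.0, 2.0]) / 3.0        # unit direction cosines (lambda_alpha)

matrices = [("rotation, example (3)", rotation_matrix(lam, 0.7)),
            ("reflexion, example (4)", np.eye(3) - 2.0 * np.outer(lam, lam)),
            ("half-turn, example (5)", 2.0 * np.outer(lam, lam) - np.eye(3)),
            ("inversion, example (6)", -np.eye(3))]

for name, M in matrices:
    print(name, np.allclose(M.T @ M, np.eye(3)), round(float(np.linalg.det(M)), 6))
# expected determinants: +1, -1, +1, -1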

CHAPTER 2

The Definition of a Tensor

* * *

2.1 Introduction

The purpose of this chapter is to introduce the algebraical definition of a tensor as a multilinear function of direction. This definition is the simplest form of the abstract definition of a tensor adopted by Bourbaki and it gives a new unity to the whole subject of tensor algebra and analysis by suggesting simple and direct proofs of many fundamental theorems.

We commence with some specific simple examples of multilinear functions of direction before giving the formal definition of a tensor.


2.2 Geometrical examples of multilinear functions of direction

Two elementary formulae of coordinate geometry provide simple examples of multilinear functions of direction of ranks two and three, i.e. functions of two or three directions respectively. (A vector will be regarded as a tensor of rank one.)

The cosine of either angle between the two directions (lα) and (mα) is

cos θ = lαmα.

If the right hand side of this equation is written as

δαβlαmβ,

where δαβ is the Kronecker symbol introduced in equation (1.7.3), it is evident that cos θ is a bilinear function of (lα) and (mα) with coefficients δαβ. We can therefore regard the numbers δαβ as the components of a tensor U of the second rank. Since the numbers δαβ are the elements of the unit matrix in three dimensions, U is called the 'unit tensor'.

The volume of the parallelepiped with unit edges in the directions (lα), (mα) and (nα) is

V = | l1  l2  l3 |
    | m1  m2  m3 |
    | n1  n2  n3 |

if the directions (lα), (mα), (nα) form a right handed triad. Now we can write this determinant in the form

V = εαβγlαmβ nγ,

where the symbol εαβγ is defined as follows:

ε123 = ε231 = ε312 = + 1, ε321 = ε213 = ε132 = - 1,

and εαβγ = 0 if any two suffixes are the same.

In other words

εαβγ = +1 or -1 according as (α, β, γ) is an even or odd permutation of (1, 2, 3), and εαβγ = 0 otherwise.

It is clear that the volume V is a trilinear function of (lα), (mα) and (nα) with coefficients εαβγ. We can therefore regard the numbers εαβγ as the components of a tensor A of the third rank. A is called the 'alternating tensor'.
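
The alternating symbol and the volume formula can be checked directly; in this Python/NumPy sketch the three directions are random unit vectors chosen for the illustration:

import numpy as np

eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0     # epsilon_123 = epsilon_231 = epsilon_312 = +1
eps[2, 1, 0] = eps[1, 0, 2] = eps[0, 2, 1] = -1.0    # epsilon_321 = epsilon_213 = epsilon_132 = -1

rng = np.random.default_rng(0)
l, m, n = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 3)))

V = np.einsum('abc,a,b,c->', eps, l, m, n)                 # V = epsilon_alphabetagamma l_alpha m_beta n_gamma
print(np.isclose(V, np.linalg.det(np.array([l, m, n]))))   # agrees with the determinant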

The tensors U and A possess the remarkable property that their components δαβ and εαβγ have the same numerical values for all bases. They are in fact the only tensors of the second and third rank respectively with this property as will be proved later (Chap. VI).

These symbols, δαβ and εαβγ, are of considerable importance in tensor theory. Familiarity with their properties can be gained by verifying the following identities of which the most important is number (2).

Exercises: (1) If (mαnα) = cos θ, (nαlα) = cos φ, (lαmα) = cos ψ, then the volume

V² = 1 - cos²θ - cos²φ - cos²ψ + 2 cos θ cos φ cos ψ.

(This is the tensor equivalent of the vector identity,

(l · m × n)² = | l·l  l·m  l·n |
               | m·l  m·m  m·n |
               | n·l  n·m  n·n | .)
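
The identity of exercise (1), in the form shown above, can also be checked numerically with Python/NumPy; the three unit directions are again random choices made for the illustration:

import numpy as np

rng = np.random.default_rng(1)
l, m, n = (v / np.linalg.norm(v) for v in rng.standard_normal((3, 3)))

V = np.linalg.det(np.array([l, m, n]))       # volume of the unit-edged parallelepiped
ct, cp, cs = m @ n, n @ l, l @ m             # cos(theta), cos(phi), cos(psi)
print(np.isclose(V**2, 1 - ct**2 - cp**2 - cs**2 + 2 * ct * cp * cs))   # True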


2.3 Examples of multilinear functions of direction in rigid dynamics

It is almost trivial to observe that displacements, velocities, accelerations and forces are instances of vectors, i.e. of tensors of the first rank. Two important dynamical tensors of the second and third ranks respectively are the inertia tensor and the tensor which gives the moment of a force about a skew axis.

We consider a distribution of particles with a typical particle of mass m at the point (xα). The moment of inertia about an axis PQ (lα) is, in the notation of the figure,

I = Σ m{xβxβ - (xβlβ)²}.
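
The bilinear character of the moment of inertia can be illustrated numerically. In this Python/NumPy sketch the particle masses and coordinates are invented for the illustration, and the inertia-tensor formula Iαβ = Σ m(xγxγδαβ - xαxβ) is the standard one rather than a formula quoted from the excerpt:

import numpy as np

masses = np.array([1.0, 2.0, 3.0])
coords = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 1.0],
                   [1.0, 2.0, 0.0]])         # coordinates (x_alpha) of each particle

# Inertia tensor: I_alphabeta = sum of m (x_gamma x_gamma delta_alphabeta - x_alpha x_beta)
I = sum(mp * ((x @ x) * np.eye(3) - np.outer(x, x)) for mp, x in zip(masses, coords))

l = np.array([1.0, 2.0, 2.0]) / 3.0          # direction cosines (l_alpha) of the axis
moment = sum(mp * (x @ x - (x @ l)**2) for mp, x in zip(masses, coords))
print(np.isclose(moment, l @ I @ l))         # the moment of inertia is I_alphabeta l_alpha l_beta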


(Continues...)

Excerpted from CARTESIAN TENSORS by G. TEMPLE. Copyright © 2004 Dover Publications, Inc. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.
Excerpts are provided by Dial-A-Book Inc. solely for the personal use of visitors to this web site.

Table of Contents

Preface
I. Vectors, Bases and Orthogonal Transformations
II. The Definition of a Tensor
III. The Algebra of Tensors
IV. The Calculus of Tensors
V. The Structure of Tensors
VI. Isotropic Tensors
VII. Spinors
VIII. Tensors in Orthogonal Curvilinear Coordinates
Index