Applied Matrix Algebra in the Statistical Sciences

by Alexander Basilevsky

Overview

This comprehensive text covers both applied and theoretical branches of matrix algebra in the statistical sciences. It also provides a bridge between linear algebra and statistical models. Appropriate for advanced undergraduate and graduate students, the self-contained treatment also constitutes a handy reference for researchers. The only mathematical background necessary is a sound knowledge of high school mathematics and a first course in statistics.
Consisting of two interrelated parts, this volume begins with the basic structure of vectors and vector spaces. The latter part emphasizes the diverse properties of matrices and their associated linear transformations, and how these, in turn, depend upon results derived from linear vector spaces. An overview of introductory concepts leads to more advanced topics such as latent roots and vectors, generalized inverses, and nonnegative matrices. Each chapter concludes with a section on real-world statistical applications, plus exercises that offer concrete examples of the applications of matrix algebra.

Product Details

ISBN-13: 9780486153377
Publisher: Dover Publications
Publication date: 12/21/2012
Series: Dover Books on Mathematics
Sold by: Barnes & Noble
Format: eBook
Pages: 416
File size: 23 MB
Note: This product may take a few minutes to download.

Read an Excerpt

Applied Matrix Algebra in the Statistical Sciences


By Alexander Basilevsky

Dover Publications, Inc.

Copyright © 1983 Alexander Basilevsky
All rights reserved.
ISBN: 978-0-486-15337-7



CHAPTER 1

Vectors


1.1 Introduction

In applied quantitative work matrices arise for two main reasons: to manipulate data arranged in tables and to solve systems of equations. A real matrix A is defined as an n × k rectangular array

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1k} \\ a_{21} & a_{22} & \cdots & a_{2k} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nk} \end{bmatrix}$$


where the real numbers aij (i = 1, 2, ..., n; j = 1, 2, ..., k) that comprise the elements of A have either known or unknown values. When n = 3 and k = 2, we have the 3 × 2 matrix

$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$$

and when the elements are known we may have, for example,

$$A = \begin{bmatrix} 2 & 0 \\ \cdot & \cdot \\ \cdot & 8 \end{bmatrix}$$


where a11 = 2, a12 = 0, ..., a32 = 8. The subscripts i and j are convenient index numbers that indicate the row and column of aij, respectively. For the special case when n = k = 1, matrix A reduces to a single number, referred to as a scalar. When n = 1 (k = 1) we obtain a row (column) array, or a vector, which can be viewed as a particular type of matrix. Alternatively, a matrix can be considered as a set of vectors, which in turn consist of real scalar numbers. Each view has its own particular merit, but for the sake of exposition it is useful to first consider properties of vectors and to then extend these properties to matrices. Geometrically, a vector is represented as a point in a Cartesian system of coordinate axes and is frequently depicted by a straight arrow (Figure 1.1); however, it is important to keep in mind that a vector is in fact a point, and not a straight line.
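The row-and-column indexing is easy to mirror in code. The following Python sketch (an illustration added here, not part of the book; the values for a21, a22, and a31 are hypothetical stand-ins, since the excerpt elides them) stores a 3 × 2 matrix as a list of rows and maps the 1-based subscripts aij onto Python's 0-based indices:

```python
# Illustrative sketch: a 3 x 2 matrix A stored as a list of rows.
A = [
    [2, 0],   # row 1: a11 = 2, a12 = 0 (as in the text)
    [5, 1],   # row 2: hypothetical values for a21, a22
    [7, 8],   # row 3: hypothetical a31; a32 = 8 (as in the text)
]

n = len(A)      # number of rows (here 3)
k = len(A[0])   # number of columns (here 2)

def element(A, i, j):
    """Return a_ij using the book's 1-based row/column subscripts."""
    return A[i - 1][j - 1]

print(element(A, 1, 1))  # 2
print(element(A, 3, 2))  # 8
```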


1.2 Vector Operations

Although geometric representations of vectors can be intuitive aids and will be used frequently in the following chapters, they are less helpful for defining the basic properties of vectors, which is best achieved by algebra.

Let a set of n numbers ai (i = 1, 2, ..., n) be represented in the linear array (a1, a2, ..., an) where, in general, interchanging any two (or more) numbers results in a different set. The set (a2, a1, ..., an), for example, is not the same as (a1, a2, ..., an), unless a1 = a2. For this reason a vector is said to be ordered. Such ordered sets of numbers are generally referred to as vectors, and the scalars ai are known as the components of a vector. The components are measured with respect to the zero vector 0 = (0, 0, ..., 0), which serves as the origin point. Not all vector systems employ zero as the origin, but we confine our treatment to those systems that do. The total number of components n is known as the dimension of the vector. A more precise definition of the notion of dimensionality is given in Chapter 2.

A vector obeys the following algebraic rules.


1.2.1 Vector Equality

Let

A = (a1, a2, ..., an), B = (b1, b2, ..., bn)


denote any two n-dimensional vectors. Then vectors A and B are said to be equal if and only if ai = bi for all i = 1,2, ..., n. Two equal vectors are written as A = B, and the equality therefore holds only if the corresponding elements of A and B are equal. Note that two vectors can be equal only when they contain the same number of components.


Example 1.1. The two vectors

A = (3,8,1), B = (3,8,1)

are equal.
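A componentwise equality test is a direct transcription of this definition. The Python sketch below (illustrative, not from the book) also rejects vectors of different dimension, in line with the note above:

```python
# Equality of ordered n-tuples: same dimension and a_i = b_i for every i.
def vectors_equal(A, B):
    return len(A) == len(B) and all(a == b for a, b in zip(A, B))

print(vectors_equal((3, 8, 1), (3, 8, 1)))  # True  (Example 1.1)
print(vectors_equal((3, 8, 1), (8, 3, 1)))  # False (vectors are ordered)
print(vectors_equal((3, 8), (3, 8, 1)))     # False (different dimensions)
```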


1.2.2 Addition

Consider any three n-dimensional vectors

A = (a1, a2, ..., an), B = (b1, b2, ..., bn), C = (c1, c2, ..., cn)


The addition of two vectors,

A = B + C,


is defined as

$$A = (b_1 + c_1, b_2 + c_2, \ldots, b_n + c_n).$$


The vector A is thus defined in terms of sums of corresponding elements of B and C. Note that in order to be conformable for addition, B and C must again contain the same number of components.
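In code, the definition amounts to summing corresponding components, after a conformability check. A minimal Python sketch (illustrative, not from the book):

```python
# Componentwise vector addition; B and C must be conformable.
def vector_add(B, C):
    if len(B) != len(C):
        raise ValueError("vectors must have the same dimension")
    return tuple(b + c for b, c in zip(B, C))

A = vector_add((1, 2, 3), (4, 5, 6))
print(A)  # (5, 7, 9)
```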

Vector addition obeys the following axioms.

1. The commutative law for addition,

B + C = C + B.


2. The associative law for addition,

A + (B + C) = (A + B) + C.


3. There exists a zero (null) vector 0 = (0, 0, ..., 0) such that

A + 0 = A.


4. For any vector A there exists a negative vector -A = (-a1, -a2, ..., -an) such that

A + (-A) = 0.


It is straightforward to show that the negative vector of -A is the vector A.


Theorem 1.1. The negative vector of -A is the vector A.


PROOF: Let A+ be the negative vector of A, and A* the negative vector of -A. We will now prove that A* = A. By rule 4 we have

-A + A* = A + A+ = 0.


Adding A to both sides of the equation yields

A + (-A) + A* = A + A+ + A


or

0 + A* = A + 0 (rule 4).


Thus

A* = A (rule 3).
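As a quick numeric sanity check of the theorem (an illustration added here, not part of the proof), negating a vector twice returns the original vector:

```python
# Negating twice recovers the original vector: -(-A) = A.
def negate(A):
    return tuple(-a for a in A)

A = (4, -0.5, 3)
print(negate(negate(A)) == A)  # True
```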


1.2.3 Scalar Multiplication

If A = (a1, a2, ..., an) is any n-dimensional vector and k any scalar, then the scalar product

kA = k (a1, a2, ..., an) = (ka1, ka2, ..., kan)


is a uniquely determined vector that obeys the following laws.

1. The commutative law:

kA = Ak.

2. The associative law:

k1(k2A) = (k1k2)A,


where k1 and k2 are any two scalars.

3. The following products hold for the scalars 0 and 1:

0A = 0, 1A = A, (-1)A = -A.


Rule 4 of Section 1.2.2 can therefore be expressed in the alternative form

A + (-1)A = A + (-A) = A - A = 0,


which effectively establishes vector subtraction.
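The scalar-multiplication laws, and subtraction via A + (-1)B, can likewise be checked componentwise. A brief Python sketch (illustrative, not from the book):

```python
# Scalar multiplication, and subtraction defined as A + (-1)B.
def scalar_mul(k, A):
    return tuple(k * a for a in A)

def vector_sub(A, B):
    return tuple(a + b for a, b in zip(A, scalar_mul(-1, B)))

A = (3, -1, 0)
print(scalar_mul(0, A))  # (0, 0, 0)   since 0A = 0
print(scalar_mul(1, A))  # (3, -1, 0)  since 1A = A
print(vector_sub(A, A))  # (0, 0, 0)   since A - A = 0
```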


1.2.4 Distributive Laws

Combining vector addition and scalar multiplication, we obtain the following two distributive laws:

1. (k1 + k2)A = k1A + k2A,

2. k(A + B) = kA + kB,


where k, k1, and k2 are scalars and A and B are any two n-component vectors.
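Both laws are easy to confirm numerically; the following Python sketch (an illustration added here, with arbitrarily chosen scalars and vectors) checks them componentwise:

```python
# Numeric check of the two distributive laws.
def scalar_mul(k, A):
    return tuple(k * a for a in A)

def vector_add(A, B):
    return tuple(a + b for a, b in zip(A, B))

k, k1, k2 = 3, 2, -5
A, B = (1, 4, -7), (3, -1, 0)

# 1. (k1 + k2)A = k1A + k2A
print(scalar_mul(k1 + k2, A) == vector_add(scalar_mul(k1, A), scalar_mul(k2, A)))  # True
# 2. k(A + B) = kA + kB
print(scalar_mul(k, vector_add(A, B)) == vector_add(scalar_mul(k, A), scalar_mul(k, B)))  # True
```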

The following two examples provide an illustration of the above rules.

Example 1.2. Find the sum and difference of the vectors A = (3, -1, 0) and B = (1, 4, -7).


SOLUTION: We have

A + B = (3 + 1, -1 + 4, 0 - 7) = (4, 3, -7)

and

A - B = (3 - 1, -1 -4, 0 + 7) = (2, -5, 7).


Example 1.3. Find, for the vector A = (4, -1/2, 3), the negative vector -A.


SOLUTION:

-A = (-1)A = -1(4, -1/2, 3) = (-4, 1/2, -3)


Thus A + (-A) = A - A = 0.


1.3 Coordinates of a Vector

Since the basic vector operations can be defined in terms of components, it is natural to extend this method to cover other properties of vectors. This can be achieved by defining a system of coordinate axes that provide numerical scales along which all possible values of the components of a vector are measured. It is most common to use orthogonal Cartesian coordinate systems, although such practice is not essential. Indeed, in the following chapters we shall have occasion to refer to oblique (nonorthogonal) systems.

With coordinate axes every component of a vector can be uniquely associated with a point on an axis. The axes therefore serve as a reference system in terms of which concepts such as dimension, length, and linear dependency can be defined. Consider the two-dimensional parallel vectors V and V*, as in Figure 1.1. The two vectors are of equal length; components a1, b1, and x1 are measured along the horizontal axis, and a2, b2 and x2 along the vertical axis. A vector can originate and terminate at any point, but it is convenient to standardize the origin to coincide with the zero vector (0, 0) = 0. We then have

b1 = a1 - x1,  b2 = a2 - x2,  (1.1)


and setting x1 = x2 = 0, we have V = V*. Displacing a vector to a new parallel position (or, equivalently, shifting the vertical and horizontal axes in a parallel fashion) leaves that vector unchanged, and in such a system equal vectors possess equal magnitudes and direction. A parallel displacement of a vector (coordinate axes) is known as a translation.

Although the requirement that vectors originate at the zero point simplifies a coordinate system, this is achieved at a cost; it now becomes meaningless to speak of parallel vectors. However, an equivalent concept is that of collinearity. Two vectors are said to be collinear if they lie on the same straight line. Collinearity (and multicollinearity) will be dealt with more fully when we consider the linear dependence of vectors. For the moment we note that collinear vectors need not point in the same direction, since their terminal points can be separated by an angle of 180°. Thus if vector V1 is collinear with V2, then -V1 is also collinear with V2, since even though V1 and -V1 point in opposite directions, they nevertheless lie on the same straight line.
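A simple computational test of collinearity (illustrative, not from the book: it looks for a single scalar k with V1 = kV2, allowing negative k) might read:

```python
# Two vectors are collinear when one is a scalar multiple of the other;
# the scalar may be negative, so collinear vectors can point in opposite
# directions.
def collinear(V1, V2, tol=1e-12):
    if len(V1) != len(V2):
        return False
    for v1, v2 in zip(V1, V2):
        if abs(v2) > tol:
            k = v1 / v2  # candidate scalar fixed by one nonzero component
            break
    else:
        return True  # V2 is the zero vector; treated here as collinear
    return all(abs(v1 - k * v2) <= tol for v1, v2 in zip(V1, V2))

V1 = (2.0, -4.0)
print(collinear(V1, (1.0, -2.0)))  # True
print(collinear(V1, (-2.0, 4.0)))  # True  (-V1 is collinear with V1)
print(collinear(V1, (1.0, 1.0)))   # False
```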

Once the coordinates of a vector are defined in terms of orthogonal axes, the basic vector operations of Section 1.2 can be given convenient geometric representation. For example, vector addition corresponds to constructing the diagonal vector of a parallelogram (Figure 1.2). Also, vector components can themselves be defined in terms of vector addition, since for an orthogonal three-dimensional system any vector Y = (y1, y2, y3) can be written as

Y = (y1, 0, 0) + (0, y2, 0) + (0, 0, y3) = (y1, y2, y3)  (1.2)


(see Figure 1.3). More generally, the coordinate numbers of any n-dimensional vector can be easily visualized as a set of n component vectors that make up the vector Y.
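Equation (1.2) also translates directly into code. The sketch below (illustrative, not from the book) splits a vector into its n component vectors and confirms that they sum back to the original:

```python
# Decompose Y into component vectors, each with one nonzero coordinate.
def component_vectors(Y):
    n = len(Y)
    return [tuple(Y[i] if i == j else 0 for j in range(n)) for i in range(n)]

def vector_sum(vectors):
    return tuple(map(sum, zip(*vectors)))

Y = (1, -2, 3)
parts = component_vectors(Y)
print(parts)                   # [(1, 0, 0), (0, -2, 0), (0, 0, 3)]
print(vector_sum(parts) == Y)  # True
```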


(Continues...)

Excerpted from Applied Matrix Algebra in the Statistical Sciences by Alexander Basilevsky. Copyright © 1983 Alexander Basilevsky. Excerpted by permission of Dover Publications, Inc.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.

Table of Contents

1. Vectors
2. Vector Spaces
3. Matrices and Systems of Linear Equations
4. Matrices of Special Type
5. Latent Roots and Latent Vectors
6. Generalized Matrix Inverses
7. Nonnegative and Diagonally Dominant Matrices
References
Index