MATHEMATICAL PRELIMINARIES

Pál Rózsa, in Applied Dimensional Analysis and Modeling (Second Edition), 2007

Example 1-8

Given column vectors $\mathbf{a} = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}$ and $\mathbf{b} = \begin{bmatrix} 1 \\ 4 \\ 3 \end{bmatrix}$, we wish to determine $\mathbf{a}^T\mathbf{b}$ and $\mathbf{a}\mathbf{b}^T$. Thus,

$$\mathbf{a}^T\mathbf{b} = \begin{bmatrix} 2 & 1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 4 \\ 3 \end{bmatrix} = (2)(1) + (1)(4) + (3)(3) = 15$$

and

$$\mathbf{a}\mathbf{b}^T = \begin{bmatrix} 2 \\ 1 \\ 3 \end{bmatrix}\begin{bmatrix} 1 & 4 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 8 & 6 \\ 1 & 4 & 3 \\ 3 & 12 & 9 \end{bmatrix}$$

Utilizing now the outer products of vectors [see (1-12)]: if, in A·B, matrix A is partitioned into its columns and B into its rows, then the product obtained is the sum of the outer products, or dyads, formed by the columns of A and the rows of B. Conversely, a sum of dyads can always be written as the product of two matrices, where the first factor consists of the columns and the second factor of the rows of the dyads. It follows that the problem of factoring a given matrix is equivalent to that of decomposing the matrix into dyads (i.e., outer products).
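To make the dyad decomposition concrete, here is a minimal MATLAB sketch (the matrices are arbitrary illustrative values, not taken from the text) verifying that A·B equals the sum of the dyads formed from the columns of A and the rows of B:

A = [2 1; 1 4; 3 3];              % a 3x2 matrix
B = [1 4 3; 2 0 1];               % a 2x3 matrix
S = zeros(3, 3);
for k = 1:2
    S = S + A(:,k) * B(k,:);      % k-th dyad: column of A times row of B
end
disp(norm(S - A*B))               % 0 up to round-off: A*B equals the sum of dyads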

URL: https://www.sciencedirect.com/science/article/pii/B9780123706201500071

Vectors and Matrices

Stormy Attaway, in MATLAB (Fifth Edition), 2019

2.1.2 Creating Column Vectors

One way to create a column vector is to explicitly put the values in square brackets, separated by semicolons (rather than commas or spaces):

>> c = [1; 2; 3; 4]

c =

1

2

3

4

There is no direct way to use the colon operator to get a column vector. However, any row vector created using any method can be transposed to result in a column vector. In general, the transpose of a matrix is a new matrix in which the rows and columns are interchanged. For vectors, transposing a row vector results in a column vector, and transposing a column vector results in a row vector. In MATLAB, the apostrophe (or single quote) is built-in as the transpose operator.

>> r = 1:3;

>> c = r'

c =

1

2

3

URL: https://www.sciencedirect.com/science/article/pii/B9780128154793000027

Vectors in MATLAB®

Munther Gdeisat, Francis Lilley, in Matlab by Example, 2013

3.1.8.1 Method 1: Creating Complex Vectors Manually

3.1.8.1.1 Creating Complex Row Vectors Manually

To create the complex row vector x=[2+i2, 3+i4, 5+i6], type at the Command Prompt

>> x = [2+2i, 3+4i, 5+6i];

To display the contents of x, type at the MATLAB Command Prompt

>> x

MATLAB responds with

x =
   2.0000 + 2.0000i   3.0000 + 4.0000i   5.0000 + 6.0000i

3.1.8.1.2 Creating Complex Column Vectors Manually

To create the complex column vector

$$\mathbf{y} = \begin{bmatrix} 4 + i3 \\ 9 + i4 \\ 7 + i5 \\ 12 + i11 \end{bmatrix},$$

type at the MATLAB Command Prompt

>> y = [4+3i; 9+4i; 7+5i; 12+11i];

This creates a column vector variable with the name y. The first element in this vector is 4+i3. The second element is 9+i4, the third element is 7+i5, and so on.

To display the contents of y, type at the MATLAB Command Prompt

>> y

MATLAB responds with

y =
    4.0000 + 3.0000i
    9.0000 + 4.0000i
    7.0000 + 5.0000i
   12.0000 +11.0000i

To get more information about the column vector y, type at the MATLAB Command Prompt

>> whos y

MATLAB responds with

  Name      Size      Bytes  Class
  y         4x1          64  double array (complex)

Grand total is 4 elements using 64 bytes

MATLAB informs you that y is a complex vector with a size of four rows and one column.

Remember that MATLAB is matrix-based software. The column vector y is considered to be a 4×1 matrix.

3.1.8.1.3 Transpose Operation for Complex Vectors

Applying the transpose operation to a complex vector not only changes rows to columns and vice versa, but also conjugates the vector's elements. Remember, the conjugate operation changes the sign of the imaginary parts in the complex vector.

To transpose the row vector x = [2+i, 3−i2, 5+i3], type at the MATLAB Command Prompt

>> x = [2+i, 3-2i, 5+3i];
>> z = x';

where (') refers to the transpose operation.

To display the contents of the column vector z, type at the MATLAB Command Prompt

>> z

MATLAB responds with

z =
   2.0000 - 1.0000i
   3.0000 + 2.0000i
   5.0000 - 3.0000i

Exercise 191

Explain the operation of the MATLAB commands

y = [4-3i; 9+4i; 7-5i; 12+11i];
z = y.';

Applying the command y.' changes rows to columns and columns to rows only; it does not conjugate the elements of y.
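As a quick illustration of the difference between the two operators, here is a minimal sketch (the vector is an arbitrary illustrative value):

>> x = [2+i, 3-2i];
>> x'     % conjugate transpose: the column vector [2-1i; 3+2i]
>> x.'    % plain transpose: the column vector [2+1i; 3-2i]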

URL: https://www.sciencedirect.com/science/article/pii/B9780124052123000037

Direct algorithms of decompositions of matrices by non-orthogonal transformations

Ong U. Routh, in Matrix Algorithms in MATLAB, 2016

2.1 Gauss Elimination Matrix

One central theme of all matrix computations is to introduce zeros into a given matrix. The following three problems are found in many matrix computation algorithms.

1. Given a column vector u of length m, we want to find a matrix U which satisfies the condition, Eq. (2.1),

(2.1) $$U\begin{bmatrix} u_1 \\ \vdots \\ u_k \\ u_{k+1} \\ \vdots \\ u_m \end{bmatrix} = \begin{bmatrix} u_1 \\ \vdots \\ u_k \\ 0 \\ \vdots \\ 0 \end{bmatrix}.$$

2. Given a row vector v of length n, we want to find a matrix V which satisfies the condition, Eq. (2.2),

(2.2) $$(v_1, \ldots, v_k, v_{k+1}, \ldots, v_n)\,V = (v_1, \ldots, v_k, 0, \ldots, 0).$$

3. Given a column vector u and a row vector v from a column and row of a matrix, we want to find two matrices U and V which satisfy the condition, Eq. (2.3),

(2.3) $$U\begin{bmatrix} & u_1 & \\ v_1, \ldots, & w_j, & \ldots, v_l,\ v_{l+1}, \ldots, v_n \\ & \vdots & \\ & u_k & \\ & u_{k+1} & \\ & \vdots & \\ & u_m & \end{bmatrix} V = \begin{bmatrix} & u_1 & \\ v_1, \ldots, & w_j, & \ldots, v_l,\ 0, \ldots, 0 \\ & \vdots & \\ & u_k & \\ & 0 & \\ & \vdots & \\ & 0 & \end{bmatrix}.$$

Problem 2 can be recast as problem 1, because $(vV)^T = V^Tv^T$, where $v^T$ is a column vector. For all three problems, U and V always exist and are non-unique. To make U and V unique, we can put extra requirements on them. For example, we may require them to have a simple form such as $I + wI(k,:)$ (Gauss elimination), $I - \alpha hh^T$ (Householder reflection), or $G(k,l,\theta)$ (Givens rotation). In problem 3, we may require $V = U^{-1}$ or $V = U^T$.

Many matrix algorithms can be thought of as clever, repeated applications of the zeroing algorithms outlined above. The challenges are to keep the zeros introduced in earlier steps and to use only well-conditioned U and V.

LU decomposition and all the other decomposition algorithms of this chapter are built upon Gauss elimination. We denote by G the zeroing matrix in Gauss elimination. G has the following form:

(2.4a) $$G = I + gI(k,:) = \left[I(:,1), \ldots, I(:,k-1),\ g + I(:,k),\ I(:,k+1), \ldots, I(:,n)\right],$$

(2.4b) $$w = Gv = v + v(k)\,g.$$

In order for G to eliminate v as in Eq. (2.4), g has to adopt the following values:

(2.5) $$g = \left[0, \ldots, 0, 0,\ -\frac{v(k+1)}{v(k)}, \ldots, -\frac{v(n)}{v(k)}\right]^T.$$

Because $g(k) = 0$, it can be verified that $(I + gI(k,:))(I - gI(k,:)) = I$. Therefore,

(2.6) $$G^{-1} = I - gI(k,:) = \left[I(:,1), \ldots, I(:,k-1),\ I(:,k) - g,\ I(:,k+1), \ldots, I(:,n)\right].$$

$v(k)$ in Eq. (2.5) is called the pivot. Eqs. (2.4), (2.5), and (2.6) fail when $v(k) = 0$. When $v(k)$ is small compared to the other components of v, g will have components whose absolute values are larger than 1, a condition that should be avoided in matrix computations. We can show how $\mathrm{cond}(G)$ is determined by g. For this purpose, we calculate the eigenvalues of $G^TG$,

(2.7) $$G^TGx - \lambda x = \left(I + gI(k,:) + I(:,k)\,g^T + g^Tg\;I(:,k)I(k,:)\right)x - \lambda x = (1-\lambda)x + x_k\,g + \left(g^Tx + x_k\,g^Tg\right)I(:,k) = \begin{bmatrix} (1-\lambda)\begin{bmatrix} x_1 \\ \vdots \\ x_{k-1} \end{bmatrix} \\ (1-\lambda)x_k + (g^Tg)x_k + g^Tx \\ (1-\lambda)\begin{bmatrix} x_{k+1} \\ \vdots \\ x_n \end{bmatrix} + x_k\begin{bmatrix} g_{k+1} \\ \vdots \\ g_n \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}.$$

There are $n-2$ eigenpairs with $\lambda = 1$ and x satisfying $x_k = 0$ and $g^Tx = 0$. The remaining two eigenpairs can be chosen to satisfy $\lambda \neq 1$ and $x_k \neq 0$. From Eq. (2.7), we can obtain the two eigenvalues as follows:

(2.8a) $$\lambda_1 = \frac{1}{2}\left[(g^Tg + 2) - \sqrt{(g^Tg + 2)^2 - 2^2}\right] \le 1,$$

(2.8b) $$\lambda_n = \frac{1}{2}\left[(g^Tg + 2) + \sqrt{(g^Tg + 2)^2 - 2^2}\right] \ge 1.$$

By the definition of the 2-norm condition number of a matrix, we can calculate the 2-norm condition number of G as follows:

(2.9) $$\kappa_2(G) = \frac{\sigma_n(G)}{\sigma_1(G)} = \sqrt{\frac{\lambda_n(G^TG)}{\lambda_1(G^TG)}} = \frac{1}{2}\left[(g^Tg + 2) + \sqrt{(g^Tg + 2)^2 - 2^2}\right] = \lambda_n.$$

To reduce $\kappa_2(G)$, we can permute v so that $|v_k| = \max_{k \le j \le n} |v_j|$. Then $|g_j| \le 1$ for $k+1 \le j \le n$. With such a permutation, we can guarantee that the 2-norm condition number of the Gauss elimination matrix occurring in all cases is bounded by

(2.10) $$1 \le \kappa_2(G) \le \frac{1}{2}\left[(n-k+2) + \sqrt{(n-k+2)^2 - 2^2}\right], \quad 1 \le k \le n-1.$$

Here is the detail of the MATLAB function for Gauss elimination. It should be noted that the zeroing of a matrix using Gauss elimination is usually performed inline.
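The book's listing for gauss_ is not reproduced in this excerpt. As a stand-in, here is a minimal sketch of such a zeroing helper, following the interface described below (arguments x, k, z, and format); the body illustrates Eqs. (2.4) and (2.5) and is an assumption, not the author's code:

function g = gauss_sketch(x, k, z, format)
% GAUSS_SKETCH  Illustrative Gauss-elimination zeroing helper (not the book's gauss_).
% Returns the Gauss vector g of Eq. (2.5), or the Gauss matrix
% G = I + g*I(k,:) of Eq. (2.4a) when format is 'long'.
if nargin < 3, z = 1; end                 % z=1: Gauss; z=2: Gauss-Jordan
if nargin < 4, format = 'short'; end
x = x(:);  n = numel(x);
g = zeros(n, 1);
g(k+1:n) = -x(k+1:n)/x(k);                % Eq. (2.5): zero the entries below row k
if z == 2
    g(1:k-1) = -x(1:k-1)/x(k);            % Gauss-Jordan: zero above row k as well
end
if strcmp(format, 'long')
    G = eye(n);  G(:,k) = G(:,k) + g;     % Eq. (2.4a): G = I + g*I(k,:)
    g = G;
end
end

For example, gauss_sketch([3;6;9], 1) returns the Gauss vector [0; -2; -3], and with format='long' the resulting G satisfies G*[3;6;9] = [3; 0; 0].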

The usage of the MATLAB function gauss_ can be illustrated with a few examples.

The following two examples show the effect of permutation on the conditioning of the Gauss elimination matrix.

The effect of the second input argument (k) can be understood by comparing the previous two examples with the following two. k sets the position of the elimination. The third input argument z controls whether the elimination is applied only below row k (the default, z=1) or both below and above row k (z=2). z=1 corresponds to Gauss elimination; z=2 corresponds to Gauss-Jordan elimination.

The fourth input argument (format) determines whether the return value g is the Gauss vector (the default, format='short') or the Gauss matrix (format='long').
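Since the example listings are not reproduced in this excerpt, here is a hypothetical illustration, using the gauss_sketch helper above, of the effect of permutation on conditioning (arbitrary illustrative values):

v = [0.01; 1; 1];                         % a tiny pivot sits in row 1
G1 = gauss_sketch(v, 1, 1, 'long');
p = [2; 1; 3];                            % permute the largest entry into row 1
G2 = gauss_sketch(v(p), 1, 1, 'long');
disp(cond(G1))                            % about 2.0e4: badly conditioned
disp(cond(G2))                            % about 2.6: within the bound of Eq. (2.10)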

URL: https://www.sciencedirect.com/science/article/pii/B9780128038048000088

Getting Familiar with Audio Signals

Theodoros Giannakopoulos, Aggelos Pikrakis, in Introduction to Audio Analysis, 2014

2.3 Mono and Stereo Audio Signals

In MATLAB, a column vector represents a single-channel (monophonic, MONO) audio signal. Similarly, a matrix with two columns represents a two-channel (stereophonic, STEREO) signal, where the first column is the left channel and the second column is the right channel. The following code creates a STEREO signal. The left channel contains a 250 Hz tone (cosine signal) and the right channel a 450 Hz tone. Figure 2.2 provides a separate plot of each channel over time.
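The chapter's own listing is not reproduced in this excerpt; a minimal sketch matching the description (the sampling frequency and duration are assumed values) is:

fs = 16000;                                % assumed sampling frequency, Hz
t = (0:1/fs:1-1/fs)';                      % 1 s of time stamps, a column vector
x = [cos(2*pi*250*t), cos(2*pi*450*t)];    % left: 250 Hz tone, right: 450 Hz tone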

Figure 2.2. A STEREO audio signal.

Note: When x has 2 columns, i.e., when the corresponding signal is in STEREO mode, we usually want to convert it to a monophonic version before we actually start processing it. A simple way to achieve this conversion is to average the two signal channels: x = mean(x,2);

URL: https://www.sciencedirect.com/science/article/pii/B9780080993881000029

Matrix Inversion

Michael Parker, in Digital Signal Processing 101 (Second Edition), 2017

13.5 Gram–Schmidt Method

The process is to decompose A into Q · R.

A is composed of column vectors $\{a_1, a_2, a_3, \ldots, a_n\}$. These vectors are independent, or else the matrix is singular and no inverse exists. Independent means that the n vectors can be used to define any point in n-dimensional space. However, unlike the axis vectors (x, y, and z in three-dimensional space), independent vectors do not have to be orthogonal (at 90-degree angles to one another).

The next step is to create orthonormal vectors $\{q_1, q_2, q_3, \ldots, q_n\}$ from the $a_i$ vectors. The first one is easy: simply take $a_1$ and normalize it (make its magnitude equal to 1).

$$q_1 = u_1/\mathrm{norm}(u_1) \quad\text{and}\quad u_1 = a_1,$$

where $\mathrm{norm}(u_1) = \sqrt{u_{1,1}^2 + u_{2,1}^2 + u_{3,1}^2 + \cdots + u_{m,1}^2}$ and $u_1$ is an m-length column vector. Dividing a vector by its norm (which is a scalar) normalizes that vector, giving it a length of 1.

We now need to introduce the concept of projection. The projection of a vector is taken with respect to a reference vector: it is the component of the vector that is collinear with, or in the same direction as, the reference vector. The remaining portion of the vector is orthogonal to the reference vector. The length of the projection is described by a scalar, since the direction of the projection is, by definition, the reference vector's direction (Fig. 13.9).

Figure 13.9. Examples of projection.

$\mathrm{Dot}\langle v_1, v_2\rangle$ is the dot product of $v_1$ and $v_2$ as defined in Fig. 13.2. The dot product produces a scalar result.

$$\mathrm{proj}_u(a) = \frac{\mathrm{Dot}\langle u, a\rangle}{\mathrm{Dot}\langle u, u\rangle}\,u,$$

where the scalar factor $\mathrm{Dot}\langle u, a\rangle/\mathrm{Dot}\langle u, u\rangle$ scales the reference vector u.
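A quick numerical check of the projection formula in MATLAB (arbitrary illustrative vectors):

u = [1; 2; 0];  a = [3; 1; 4];
p = (dot(u, a) / dot(u, u)) * u;    % the component of a along u
disp(dot(a - p, u))                 % 0: the remainder is orthogonal to u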

Reference vectors do not need to be orthogonal, just independent. Any vector in n-space can be expressed as a combination of the n reference vectors. The $q_i$ vectors are both independent and orthogonal. All of the $a_i$ vectors can be expressed as linear combinations of the $q_i$ vectors, as defined by the values in the R matrix. By defining $q_1$ to be collinear with, or in the same direction as, $a_1$, $a_1$ is defined only by the scale factor $r_{1,1}$. The vectors $a_1$ and $a_2$ define a two-dimensional plane. The vector $q_2$ lies in this plane and is orthogonal to $q_1$. Therefore, $a_2$ can be defined as a linear combination of $q_1$ and $q_2$, with scale factors $r_{1,2}$ and $r_{2,2}$.

Using this, the $q_i$ vectors and the values of the R matrix can be computed.

$$\begin{aligned} q_1 &= u_1/\mathrm{norm}(u_1) \quad\text{and}\quad u_1 = a_1\\ q_2 &= u_2/\mathrm{norm}(u_2) \quad\text{and}\quad u_2 = a_2 - \mathrm{proj}_{u_1}(a_2)\\ q_3 &= u_3/\mathrm{norm}(u_3) \quad\text{and}\quad u_3 = a_3 - \mathrm{proj}_{u_1}(a_3) - \mathrm{proj}_{u_2}(a_3)\\ q_4 &= u_4/\mathrm{norm}(u_4) \quad\text{and}\quad u_4 = a_4 - \mathrm{proj}_{u_1}(a_4) - \mathrm{proj}_{u_2}(a_4) - \mathrm{proj}_{u_3}(a_4)\\ &\;\;\vdots\\ q_n &= u_n/\mathrm{norm}(u_n) \quad\text{and}\quad u_n = a_n - \mathrm{proj}_{u_1}(a_n) - \mathrm{proj}_{u_2}(a_n) - \cdots - \mathrm{proj}_{u_{n-1}}(a_n) \end{aligned}$$

$Q = [q_1, q_2, q_3, \ldots, q_n]$, which is a matrix composed of orthonormal column vectors.

The upper triangular matrix R holds the scalar coefficients of the projections:

$$R = \begin{bmatrix} \mathrm{Dot}\langle q_1,a_1\rangle & \mathrm{Dot}\langle q_1,a_2\rangle & \mathrm{Dot}\langle q_1,a_3\rangle & \cdots & \mathrm{Dot}\langle q_1,a_n\rangle \\ 0 & \mathrm{Dot}\langle q_2,a_2\rangle & \mathrm{Dot}\langle q_2,a_3\rangle & \cdots & \mathrm{Dot}\langle q_2,a_n\rangle \\ 0 & 0 & \mathrm{Dot}\langle q_3,a_3\rangle & \cdots & \mathrm{Dot}\langle q_3,a_n\rangle \\ 0 & 0 & 0 & \ddots & \mathrm{Dot}\langle q_4,a_n\rangle \\ \vdots & \vdots & \vdots & & \vdots \\ 0 & 0 & 0 & \cdots & \mathrm{Dot}\langle q_n,a_n\rangle \end{bmatrix}$$

Now, A = Q·R.

This can be verified by performing the matrix multiplication. For example, multiplying the first row of Q by the first column of R gives

$$a_{1,1} = q_{1,1}\,r_{1,1}\ (\text{since the other entries of the first column of } R \text{ are } 0) = \frac{a_{1,1}}{\mathrm{norm}(a_1)}\,\mathrm{Dot}\langle q_1, a_1\rangle = \frac{a_{1,1}}{\mathrm{norm}(a_1)}\,\mathrm{norm}(a_1) = a_{1,1},$$

using $\mathrm{Dot}\langle q_1, a_1\rangle = \mathrm{Dot}\langle a_1, a_1\rangle/\mathrm{norm}(a_1) = \mathrm{norm}(a_1)$.
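The procedure translates directly into MATLAB. The following minimal sketch (the matrix is an arbitrary illustrative value) subtracts projections onto the already-normalized $q_j$, which is equivalent to projecting onto the $u_j$ since $q_j = u_j/\mathrm{norm}(u_j)$:

A = [1 1 0; 1 0 1; 0 1 1];          % arbitrary matrix with independent columns
[m, n] = size(A);
Q = zeros(m, n);  R = zeros(n, n);
for i = 1:n
    u = A(:,i);
    for j = 1:i-1
        R(j,i) = Q(:,j)' * A(:,i);  % Dot<q_j, a_i>
        u = u - R(j,i) * Q(:,j);    % remove the component along q_j
    end
    R(i,i) = norm(u);
    Q(:,i) = u / R(i,i);            % normalize to get q_i
end
disp(norm(A - Q*R))                 % ~0, confirming A = Q*R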

URL: https://www.sciencedirect.com/science/article/pii/B9780128114537000135

Differential geometry of surfaces in E3

M. Farrashkhalvat, J.P. Miles, in Basic Structured Grid Generation, 2003

Exercise 18.

Show by direct matrix multiplication using eqns (3.143) and (3.147) that

where N stands for the column vector of cartesian components of the surface normal vector and $I_3$ is the 3×3 unit matrix, and hence that $\frac{1}{a}J^T$ is not a right inverse for C.

Now consider a surface vector field $\mathbf{V}(u^1, u^2)$, defined at all points of the surface and having the property that it is everywhere tangential to the surface. Then the divergence of V is given by an expression analogous to eqn (1.134):

(3.153) $$\nabla\cdot\mathbf{V} = \mathbf{a}^\alpha\cdot\frac{\partial\mathbf{V}}{\partial u^\alpha} = \mathbf{a}^1\cdot\frac{\partial\mathbf{V}}{\partial u^1} + \mathbf{a}^2\cdot\frac{\partial\mathbf{V}}{\partial u^2} = \frac{1}{\sqrt{a}}\left[(\mathbf{a}_2\times\mathbf{N})\cdot\frac{\partial\mathbf{V}}{\partial u^1} + (\mathbf{N}\times\mathbf{a}_1)\cdot\frac{\partial\mathbf{V}}{\partial u^2}\right]$$

in terms of the background cartesian components $V_i$ of V. (Here the index i is summed from 1 to 3, while α is summed from 1 to 2.) These expressions are non-conservative. A conservative form follows by using eqn (3.144):

$$\nabla\cdot\mathbf{V} = \frac{1}{\sqrt{a}}\left[\frac{\partial}{\partial u^1}\big((\mathbf{a}_2\times\mathbf{N})\cdot\mathbf{V}\big) + \frac{\partial}{\partial u^2}\big((\mathbf{N}\times\mathbf{a}_1)\cdot\mathbf{V}\big)\right] - 2\kappa_m\,\mathbf{N}\cdot\mathbf{V},$$

but now the last term vanishes because V is a tangential vector. Hence we have

(3.155) $$\nabla\cdot\mathbf{V} = \frac{1}{\sqrt{a}}\left[\frac{\partial}{\partial u^1}\big((\mathbf{a}_2\times\mathbf{N})\cdot\mathbf{V}\big) + \frac{\partial}{\partial u^2}\big((\mathbf{N}\times\mathbf{a}_1)\cdot\mathbf{V}\big)\right]$$

For any surface scalar field $\varphi(u^1, u^2)$, the gradient $\nabla\varphi$ must be a surface vector field, according to eqn (3.139). Thus we can combine the results of eqns (3.141) and (3.156) to give a formula for the Laplacian $\nabla^2\varphi = \nabla\cdot\nabla\varphi$:

(3.157) $$\nabla^2\varphi = \frac{1}{\sqrt{a}}\,\frac{\partial}{\partial u^\alpha}\left(\frac{1}{\sqrt{a}}\,C_{i\alpha}C_{i\beta}\,\frac{\partial\varphi}{\partial u^\beta}\right).$$

In fact, as the Laplacian operator associated with a particular surface, this second-order differential operator is known as the Beltrami operator of the surface and is given the special notation $\Delta_B$. Since the 2×2 matrix represented by $C_{i\alpha}C_{i\beta}$, using eqn (3.142), is

(3.158) $$C^TC = a\begin{pmatrix}(\mathbf{a}^1)^T \\ (\mathbf{a}^2)^T\end{pmatrix}\begin{pmatrix}\mathbf{a}^1 & \mathbf{a}^2\end{pmatrix} = a\begin{pmatrix}a^{11} & a^{12} \\ a^{12} & a^{22}\end{pmatrix},$$

we have

(3.159) $$C_{i\alpha}C_{i\beta} = a\,a^{\alpha\beta}.$$

To make this identity appear consistent, we should have written the index α as a superscript in the definition of C in eqn (3.141). However, a consistent form for the Beltrami operator is now given, from eqn (3.157), by

(3.160) $$\Delta_B\varphi = \frac{1}{\sqrt{a}}\,\frac{\partial}{\partial u^\alpha}\left(\sqrt{a}\,a^{\alpha\beta}\,\frac{\partial\varphi}{\partial u^\beta}\right),$$

which has precisely the same form as eqn (1.147). This can be written in terms of the covariant metric tensor as

(3.161) $$\Delta_B\varphi = \frac{1}{\sqrt{a}}\left[\frac{\partial}{\partial u^1}\left(\frac{a_{22}\dfrac{\partial\varphi}{\partial u^1} - a_{12}\dfrac{\partial\varphi}{\partial u^2}}{\sqrt{a}}\right) + \frac{\partial}{\partial u^2}\left(\frac{a_{11}\dfrac{\partial\varphi}{\partial u^2} - a_{12}\dfrac{\partial\varphi}{\partial u^1}}{\sqrt{a}}\right)\right],$$

which leads to the identities

$$\Delta_B u = \frac{1}{\sqrt{a}}\left\{\frac{\partial}{\partial u}\left(\frac{a_{22}}{\sqrt{a}}\right) - \frac{\partial}{\partial v}\left(\frac{a_{12}}{\sqrt{a}}\right)\right\}, \qquad \Delta_B v = \frac{1}{\sqrt{a}}\left\{\frac{\partial}{\partial v}\left(\frac{a_{11}}{\sqrt{a}}\right) - \frac{\partial}{\partial u}\left(\frac{a_{12}}{\sqrt{a}}\right)\right\},$$

writing $u^\alpha$ as (u, v).
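As a quick check of eqn (3.161), consider the unit sphere parametrized by $(u, v) = (\theta, \phi)$, a standard example not taken from the text, for which $a_{11} = 1$, $a_{12} = 0$, $a_{22} = \sin^2\theta$, and $\sqrt{a} = \sin\theta$. Substituting into (3.161) gives

$$\Delta_B f = \frac{1}{\sin\theta}\left[\frac{\partial}{\partial\theta}\left(\sin\theta\,\frac{\partial f}{\partial\theta}\right) + \frac{\partial}{\partial\phi}\left(\frac{1}{\sin\theta}\,\frac{\partial f}{\partial\phi}\right)\right],$$

which is the familiar Laplacian on the unit sphere.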

Using eqns (3.56) and (3.61), we also have

(3.162) $$\Delta_B\varphi = \frac{\partial}{\partial u^\alpha}\left(a^{\alpha\beta}\frac{\partial\varphi}{\partial u^\beta}\right) + \Gamma^\gamma_{\gamma\alpha}\,a^{\alpha\beta}\frac{\partial\varphi}{\partial u^\beta} = a^{\alpha\beta}\frac{\partial^2\varphi}{\partial u^\alpha\partial u^\beta} - \left(a^{\delta\alpha}\Gamma^\beta_{\delta\alpha} + a^{\delta\beta}\Gamma^\alpha_{\delta\alpha}\right)\frac{\partial\varphi}{\partial u^\beta} + \Gamma^\gamma_{\gamma\alpha}\,a^{\alpha\beta}\frac{\partial\varphi}{\partial u^\beta} = a^{\alpha\beta}\frac{\partial^2\varphi}{\partial u^\alpha\partial u^\beta} - a^{\delta\alpha}\Gamma^\beta_{\delta\alpha}\frac{\partial\varphi}{\partial u^\beta} = a^{\alpha\beta}\left(\frac{\partial^2\varphi}{\partial u^\alpha\partial u^\beta} - \Gamma^\delta_{\alpha\beta}\frac{\partial\varphi}{\partial u^\delta}\right)$$

after some manipulation of indices.

URL: https://www.sciencedirect.com/science/article/pii/B9780750650588500034

Geometrical Transformations

Adrian Biran, in Geometry for Naval Architects, 2019

Abstract

We define a point by the column vector of its coordinates. A geometric figure is defined by concatenating points. An affine transformation of a point, or figure, is performed by multiplying the matrix of coordinates on the left by a transformation matrix and adding a vector of translation. Transformations like translation, rotation, and reflection do not change distances or angles; therefore, they are called isometric transformations or, shortly, isometries. Other transformations, such as shearing and scaling, are not isometric. To perform all transformations by multiplication we use homogeneous coordinates. Let us call the usual coordinates Euclidean. Given a point with Euclidean coordinates x, y, z, its homogeneous coordinates are

$$\begin{bmatrix} X \\ Y \\ Z \\ w \end{bmatrix} = \begin{bmatrix} wx \\ wy \\ wz \\ w \end{bmatrix}$$

A common assumption is w = 1. For w = 0 we have a point at infinity, or ideal point. All lines with the same direction have the same point at infinity. When several transformations must be performed on the same point or figure, the order in which they are carried out may influence the result. Instead of performing a sequence of transformations, it is possible to multiply the corresponding transformation matrices and apply the resulting combined matrix directly.
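A minimal MATLAB sketch of this order dependence, using 2-D homogeneous coordinates with w = 1 (the rotation angle and translation are arbitrary illustrative values):

th = pi/4;                                            % rotation angle
R = [cos(th) -sin(th) 0; sin(th) cos(th) 0; 0 0 1];   % rotation about the origin
T = [1 0 3; 0 1 1; 0 0 1];                            % translation by (3, 1)
P = [1; 0; 1];                                        % the point (1, 0) with w = 1
disp((T*R)*P)                                         % rotate, then translate
disp((R*T)*P)                                         % translate, then rotate: a different point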

The use of homogeneous coordinates also unifies the application of projection matrices, among them the matrices for perspective projection, that is, central projection.

A point can be defined as the linear combination of other points, for example

$$P = \lambda P_1 + \mu P_2$$

If λ + μ = 1, then P is an affine combination. The affine combination of two points lies on the line defined by the two points, and the affine combination of three points lies in the plane defined by the three points. Moreover, to apply a transformation to an affine combination we may either calculate the affine combination and then transform it, or transform the points and then calculate their affine combination. Some splines used in computer graphics are affine combinations of given control points.
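A small MATLAB sketch of this commutation property, with arbitrary illustrative points and an arbitrary affine map in homogeneous coordinates:

P1 = [1; 2; 1];  P2 = [4; 0; 1];        % two points with w = 1
lam = 0.3;  mu = 0.7;                   % lam + mu = 1, an affine combination
M = [2 0 1; 0 2 -1; 0 0 1];             % an affine transformation matrix
disp(M*(lam*P1 + mu*P2))                % transform the combination
disp(lam*(M*P1) + mu*(M*P2))            % combine the transforms: the same point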

URL: https://www.sciencedirect.com/science/article/pii/B9780081003282000195

Systems Modeling Principles

Donald W. Boyd, in Systems Analysis and Modeling, 2001

2.5 CLASSIFICATION OF MODEL VARIABLES

Model variables are designated by the column vector $X = (X_1, X_2, \ldots, X_n)^T$. Each variable carries the same single subscript for each of the N Δt's. When a particular value of t is to be referenced in the text, a superscript is used: $X_j^t$. Three functions performed by variables during the modeling process form a basis for classification. Variables may be observed to do the following:

1. Comprise the most basic model component
2. Establish data categories
3. Fulfill roles

2.5.1 By Component

A system is composed of subsystems and variables. Therefore, a systems model must take into account three component levels: the system itself, subsystems, and variables. How many subsystems to include is determined by circumscription. The following rules of thumb are helpful in identifying system components:

1. Look for logical or naturally occurring divisions of function by subsystem.
2. Define the component that performs a functional role in the subsystem.
3. Determine if the component is connected in any way to other parts of the system.
4. Define any relationship between the system or any of its other components that affects the function of the component.

Variables comprise the most basic model component and may be further classified into naturally occurring levels and rates [19]. A systems model must provide a simple mathematical structure for applying one or more rates, over successive Δt increments of time, that accurately replicates changes in the level of a system (subsystem).

2.5.1.1 Levels

In Mtm modeling, rates become integrated into levels within the balance equation. An initial level relative to Δt, plus the change in level due to the net of input/output rates during Δt, results in a new terminal level relative to Δt. Thus, a systems model emulates continuity, with the selection of Δt providing the degree of resolution. System states are defined by levels at discrete points in time. Changes in state occur as the system evolves over a series of Δt's. Given a current level $X_I$, the system will evolve to the level $X_T$ during a finite increment of time, Δt. $X_I$ and $X_T$ form an initial-terminal, or IT, pair and are illustrated in Figure 2.4.

Figure 2.4. System Levels

2.5.1.2 Rates

By defining levels as paired variables, rate of change is simply the difference between two levels separated by Δt. The system level evolves from $X_I$ to $X_T$ in response to single or net rates of change applied as system input and/or output. For example, a single input rate, $X_j$, is represented by the macro form $X_j = (X_T - X_I)/\Delta t$. In general, the value of $X_j$ is unique to each Δt and varies with time, as illustrated in Figure 2.5. Specific rates are defined by dynamic forms.

Figure 2.5. Variable Rates

2.5.1.3 Interaction of Levels and Rates

Rates can produce changes in levels: $X_T = X_I + X_j\,\Delta t$. Levels can produce changes in rates: $X_j = c_1 X_I + c_2 X_T$. Thus, an initial level, $X_I$; a rate, $X_j$; and a terminal level, $X_T$, comprise a triad of variables.
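A minimal MATLAB sketch of this level-rate interaction over successive Δt increments (the initial level and the rate series are arbitrary illustrative values):

dt = 1;  N = 5;
X = zeros(N+1, 1);  X(1) = 100;        % initial level
r = [10 -5 3 0 7];                     % net input/output rate for each dt
for t = 1:N
    X(t+1) = X(t) + r(t)*dt;           % X_T = X_I + X_j*dt; X_T becomes the next X_I
end
disp(X')                               % levels at the discrete points in time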

2.5.2 By Data Category

Each variable of an Mtm systems model defines a data category, either primary or secondary. Any variable whose values are established by historical data is a primary variable, and its data are called primary data. Very seldom do all variables qualify as primary. Yet despite deficiencies in primary data, an operational systems model must be supported by a complete data base, complete in the sense that each data category contains data for the periods of interest. Consequently, the first application of a model is to complete the data base by solving for secondary data. These solution variables are referred to as secondary variables. Their tentative values form the basis for the first of a series of validation tests of the model.

2.5.3 By Role

Each level and rate variable fulfills a role in the model. External data series are processed by the model via exogenous variables. Primary data are always exogenous, but primary variables can assume endogenous roles. The dynamic response of the model is generated via its endogenous (that is, solution) variables. Various tests and applications of the model utilize role inversion, accomplished by interchanging pairs of variables in terms of their endogenous and exogenous roles.

URL: https://www.sciencedirect.com/science/article/pii/B9780121218515500022

Vectors and Matrices

Stormy Attaway, in Matlab (Fourth Edition), 2017

Abstract

The chapter introduces the creation of row vectors, column vectors, and matrices. Methods of indexing into elements and subsets of arrays are demonstrated. Functions and operators that find and modify the dimensions of arrays are shown. The use of vectors and matrices as function arguments, and functions that are written specifically to perform common operations on vectors and matrices, such as sums and products, are covered. Logical vectors and other concepts useful in vectorizing code are emphasized in this chapter. Array operations and matrix operations (such as matrix multiplication) are explained.

URL: https://www.sciencedirect.com/science/article/pii/B9780128045251000027