Linear dependence and linear independence of systems of vectors. Decomposition of a vector in a basis.

The concepts of linear dependence and independence of vectors are very important in the study of vector algebra, since the concepts of the dimension and the basis of a space rest on them. In this article we give the definitions, consider the properties of linear dependence and independence, obtain an algorithm for studying a system of vectors for linear dependence, and analyze the solutions of examples in detail.


Definition of linear dependence and linear independence of a system of vectors.

Consider a set of $p$ $n$-dimensional vectors and denote them $a_1, a_2, \dots, a_p$. Form a linear combination of these vectors with arbitrary numbers $\lambda_1, \lambda_2, \dots, \lambda_p$ (real or complex): $\lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p$. Starting from the definition of the operations on $n$-dimensional vectors, as well as the properties of vector addition and of multiplication of a vector by a number, one can assert that this linear combination is itself some $n$-dimensional vector $b$, that is, $b = \lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p$.

Thus we arrive at the definition of the linear dependence of a system of vectors.

Definition.

If the linear combination $\lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p$ can equal the zero vector while at least one of the numbers $\lambda_1, \lambda_2, \dots, \lambda_p$ is different from zero, then the system of vectors $a_1, a_2, \dots, a_p$ is called linearly dependent.

Definition.

If the linear combination $\lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p$ equals the zero vector only when all the numbers $\lambda_1, \lambda_2, \dots, \lambda_p$ are zero, then the system of vectors $a_1, a_2, \dots, a_p$ is called linearly independent.
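These definitions can be tested mechanically: the coefficients $\lambda_1, \dots, \lambda_p$ of a zero linear combination form the null space of the matrix whose columns are the vectors. A minimal sketch, assuming NumPy and SciPy are available (the vectors are illustrative, not taken from the source):

```python
# Look for a nontrivial zero combination lambda_1*a1 + lambda_2*a2 + lambda_3*a3 = 0
# by computing the null space of the matrix whose columns are the vectors.
import numpy as np
from scipy.linalg import null_space

A = np.column_stack(([1, 0], [0, 1], [1, 1]))  # a3 = a1 + a2
ns = null_space(A)
print(ns.shape[1] > 0)  # True: a nontrivial zero combination exists (dependent)
print(ns[:, 0])         # proportional to (1, 1, -1): a1 + a2 - a3 = 0
```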

Properties of linear dependence and independence.

Based on these definitions, we now formulate and prove the properties of linear dependence and linear independence of a system of vectors.

    If several vectors are added to a linearly dependent system of vectors, the resulting system is linearly dependent.

    Proof.

    Since the system of vectors $a_1, a_2, \dots, a_p$ is linearly dependent, the equality $\lambda_1 a_1 + \lambda_2 a_2 + \dots + \lambda_p a_p = 0$ is possible with at least one nonzero number among $\lambda_1, \lambda_2, \dots, \lambda_p$. Let $\lambda_1 \ne 0$.

    Add $s$ more vectors $a_{p+1}, \dots, a_{p+s}$ to the original system, obtaining the system $a_1, \dots, a_p, a_{p+1}, \dots, a_{p+s}$. Since $\lambda_1 a_1 + \dots + \lambda_p a_p = 0$, the linear combination of the vectors of this system of the form

    $\lambda_1 a_1 + \dots + \lambda_p a_p + 0 \cdot a_{p+1} + \dots + 0 \cdot a_{p+s}$

    is the zero vector, while $\lambda_1 \ne 0$. Consequently, the resulting system of vectors is linearly dependent.

    If several vectors are removed from a linearly independent system of vectors, the resulting system is linearly independent.

    Proof.

    Suppose the resulting system is linearly dependent. Adding the removed vectors back to this system, we obtain the original system of vectors. By hypothesis it is linearly independent, yet by the previous property it would have to be linearly dependent. We have arrived at a contradiction, so our assumption is wrong.

    If a system of vectors contains at least one zero vector, then the system is linearly dependent.

    Proof.

    Let the vector $a_k$ of this system of vectors be zero. Suppose the original system of vectors is linearly independent. Then the vector equality $\lambda_1 a_1 + \dots + \lambda_p a_p = 0$ is possible only when $\lambda_1 = \lambda_2 = \dots = \lambda_p = 0$. However, if we take any $\lambda_k$ different from zero and set the remaining coefficients to zero, the equality still holds, since $\lambda_k \cdot 0 = 0$. Consequently, our assumption is wrong, and the original system of vectors is linearly dependent.

    If a system of vectors is linearly dependent, then at least one of its vectors is linearly expressed through the others. If a system of vectors is linearly independent, then none of its vectors can be linearly expressed through the others.

    Proof.

    First, we prove the first statement.

    Let the system of vectors $a_1, a_2, \dots, a_p$ be linearly dependent; then there is at least one number $\lambda_k$ different from zero and the equality $\lambda_1 a_1 + \dots + \lambda_p a_p = 0$ holds. This equality can be solved for $a_k$, since $\lambda_k \ne 0$:

    $a_k = -\dfrac{\lambda_1}{\lambda_k} a_1 - \dots - \dfrac{\lambda_{k-1}}{\lambda_k} a_{k-1} - \dfrac{\lambda_{k+1}}{\lambda_k} a_{k+1} - \dots - \dfrac{\lambda_p}{\lambda_k} a_p.$

    Consequently, the vector $a_k$ is linearly expressed through the remaining vectors of the system, which is what had to be proved.

    Now we prove the second statement.

    Since the system of vectors $a_1, a_2, \dots, a_p$ is linearly independent, the equality $\lambda_1 a_1 + \dots + \lambda_p a_p = 0$ is possible only for $\lambda_1 = \lambda_2 = \dots = \lambda_p = 0$.

    Suppose some vector of the system is linearly expressed through the others. Let this vector be $a_k$; then $a_k = \mu_1 a_1 + \dots + \mu_{k-1} a_{k-1} + \mu_{k+1} a_{k+1} + \dots + \mu_p a_p$. This equality can be rewritten as $\mu_1 a_1 + \dots + \mu_{k-1} a_{k-1} - a_k + \mu_{k+1} a_{k+1} + \dots + \mu_p a_p = 0$; its left-hand side is a linear combination of the vectors of the system in which the coefficient of $a_k$ is different from zero, which indicates the linear dependence of the original system of vectors. We have arrived at a contradiction, so the property is proved.

From the last two properties an important statement follows:
if a system of vectors contains the vectors $a$ and $\lambda a$, where $\lambda$ is an arbitrary number, then it is linearly dependent.

Studying a system of vectors for linear dependence.

Let us pose the problem: we need to establish the linear dependence or the linear independence of a system of vectors.

A natural question: "How do we solve it?"

Something useful from a practical point of view can be extracted from the definitions and properties of linear dependence and independence given above. These definitions and properties allow us to establish the linear dependence of a system of vectors immediately in the following cases: when the system contains a zero vector (third property), and when the system contains two proportional vectors (corollary of the last two properties).

But what about the other cases, which are the majority?

Let us explain.

Recall the formulation of the theorem on the rank of a matrix, which we presented in an earlier article.

Theorem.

Let $r$ be the rank of a matrix $A$ of order $p$ by $n$, $r \le \min(p, n)$. Let $M$ be a basis minor of the matrix $A$. All rows (all columns) of the matrix $A$ that do not participate in forming the basis minor $M$ are linearly expressed through the rows (columns) of the matrix generating the basis minor $M$.

Now let us explain the connection between the theorem on the rank of a matrix and the study of a system of vectors for linear dependence.

Compose a matrix $A$ whose rows are the vectors of the system under study.

What would the linear independence of the system of vectors mean?

From the fourth property of the linear independence of a system of vectors, we know that none of the vectors of the system can be expressed through the others. In other words, no row of the matrix $A$ is linearly expressed through the other rows; therefore, the linear independence of the system of vectors is equivalent to the condition $\operatorname{Rank}(A) = p$.

What would the linear dependence of the system of vectors mean?

Everything is very simple: at least one row of the matrix $A$ is linearly expressed through the others; therefore, the linear dependence of the system of vectors is equivalent to the condition $\operatorname{Rank}(A) < p$.

Thus, the problem of studying a system of vectors for linear dependence reduces to the problem of finding the rank of the matrix composed of the vectors of this system.

It should be noted that for $p > n$ the system of vectors is necessarily linearly dependent.

Comment: when composing the matrix, the vectors of the system may be taken not as rows but as columns.
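In computational terms the rank criterion looks as follows; a sketch assuming NumPy, with illustrative sample vectors:

```python
# A system of p vectors, written as the rows of a matrix A, is linearly
# dependent exactly when Rank(A) < p.
import numpy as np

def is_linearly_dependent(vectors):
    A = np.array(vectors, dtype=float)  # rows of A are the vectors
    p = A.shape[0]                      # number of vectors in the system
    return np.linalg.matrix_rank(A) < p

print(is_linearly_dependent([[1, 0, 2], [0, 1, 1], [1, 1, 3]]))  # True: row3 = row1 + row2
print(is_linearly_dependent([[1, 0, 0], [0, 1, 0]]))             # False
```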

Algorithm for studying a system of vectors for linear dependence.

The algorithm follows directly from the reasoning above: compose the matrix $A$ from the vectors of the system; find $\operatorname{Rank}(A)$; if it equals the number of vectors $p$, the system is linearly independent, and if it is less than $p$, the system is linearly dependent. We will analyze the algorithm using examples.

Examples of studying a system of vectors for linear dependence.

Example.

A system of vectors is given. Study it for linear dependence.

Solution.

Since the system contains a zero vector, the original system of vectors is linearly dependent by virtue of the third property.

Answer:

The system of vectors is linearly dependent.

Example.

Study the system of vectors for linear dependence.

Solution.

It is not difficult to notice that the coordinates of the vector $c$ are equal to the corresponding coordinates of another vector of the system multiplied by 3, that is, $c = 3b$. Therefore, the original system of vectors is linearly dependent.
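The concrete coordinates survive in the source only as this proportionality pattern, so a hypothetical pair with the same relation $c = 3b$ can illustrate the rank check:

```python
# Hypothetical coordinates with the pattern c = 3*b (the original numbers
# were lost in extraction).
import numpy as np

b = np.array([1.0, 2.0, -1.0])
c = 3 * b
A = np.vstack([b, c])
print(np.linalg.matrix_rank(A))  # 1 < 2: the system {b, c} is linearly dependent
```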

Theorem 1 (on the linear independence of orthogonal vectors). Let $x_1, \dots, x_n$ be a system of pairwise orthogonal nonzero vectors. Then this system is linearly independent.

Form a linear combination $\sum \lambda_i x_i = 0$ and consider the scalar product $(x_j, \sum \lambda_i x_i) = \lambda_j \|x_j\|^2 = 0$; but $\|x_j\|^2 \ne 0$, hence $\lambda_j = 0$ for every $j$.
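A quick numeric confirmation, assuming NumPy: for pairwise orthogonal nonzero rows the Gram matrix is diagonal with positive entries, hence non-degenerate (the vectors are illustrative):

```python
import numpy as np

X = np.array([[1, 1, 0], [1, -1, 0], [0, 0, 2]], dtype=float)  # pairwise orthogonal rows
G = X @ X.T                                                    # Gram matrix, here diag(2, 2, 4)
print(np.linalg.det(G) != 0, np.linalg.matrix_rank(X))         # True 3: independent
```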

Definition 1. A system of vectors $e_1, e_2, \dots$ with $(e_i, e_j) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol, is called orthonormal (ONS).

Definition 2. For an arbitrary element $x$ of an arbitrary infinite-dimensional Euclidean space and an arbitrary orthonormal system of elements $e_1, e_2, \dots$, the Fourier series of the element $x$ with respect to the system is the formally composed infinite sum (series) $\sum_i \lambda_i e_i$, in which the real numbers $\lambda_i$ are called the Fourier coefficients of the element $x$ with respect to the system, $\lambda_i = (x, e_i)$.

Comment. (Naturally the question arises about the convergence of this series. To study this question, fix an arbitrary number $n$ and find out what distinguishes the $n$-th partial sum of the Fourier series from any other linear combination of the first $n$ elements of the orthonormal system.)

Theorem 2. For any fixed number $n$, among all sums of the form $\sum_{i=1}^{n} c_i e_i$, the smallest deviation from the element $x$ in the norm of the given Euclidean space is attained by the $n$-th partial sum of the Fourier series of the element, $\sum_{i=1}^{n} \lambda_i e_i$.

Taking into account the orthonormality of the system and the definition of the Fourier coefficients, we can write

$\Big\| x - \sum_{i=1}^{n} c_i e_i \Big\|^2 = (x, x) - 2\sum_{i=1}^{n} c_i (x, e_i) + \sum_{i=1}^{n} c_i^2 = \|x\|^2 - \sum_{i=1}^{n} \lambda_i^2 + \sum_{i=1}^{n} (c_i - \lambda_i)^2.$

The minimum of this expression is attained at $c_i = \lambda_i$, since then the always non-negative last sum on the right-hand side vanishes, and the remaining terms do not depend on the $c_i$.

Example. Consider the trigonometric system

$\dfrac{1}{\sqrt{2\pi}},\ \dfrac{\cos x}{\sqrt{\pi}},\ \dfrac{\sin x}{\sqrt{\pi}},\ \dfrac{\cos 2x}{\sqrt{\pi}},\ \dfrac{\sin 2x}{\sqrt{\pi}},\ \dots$

in the space of all functions $f(x)$ integrable in the Riemann sense on the segment $[-\pi, \pi]$. It is easy to verify that this is an ONS, and then the Fourier series of $f(x)$ has the form $\sum_i \lambda_i e_i$ with $\lambda_i = (f, e_i)$.

Comment. (The trigonometric Fourier series is usually written in the form $\dfrac{a_0}{2} + \sum_{k=1}^{\infty} (a_k \cos kx + b_k \sin kx)$; then $a_k = \dfrac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos kx \, dx$ and $b_k = \dfrac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin kx \, dx$.)
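For a concrete function the Fourier coefficients can be computed by numerical quadrature; a sketch assuming NumPy and SciPy:

```python
# Trigonometric Fourier coefficients a_k, b_k of f on [-pi, pi].
import numpy as np
from scipy.integrate import quad

def fourier_coeffs(f, n):
    a0 = quad(f, -np.pi, np.pi)[0] / np.pi
    a = [quad(lambda x, k=k: f(x) * np.cos(k * x), -np.pi, np.pi)[0] / np.pi
         for k in range(1, n + 1)]
    b = [quad(lambda x, k=k: f(x) * np.sin(k * x), -np.pi, np.pi)[0] / np.pi
         for k in range(1, n + 1)]
    return a0, a, b

# For the odd function f(x) = x all a_k vanish and b_k = 2*(-1)**(k+1)/k.
a0, a, b = fourier_coeffs(lambda x: x, 3)
print(np.round(b, 4))  # approximately [2. -1. 0.6667]
```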

An arbitrary ONS in an infinite-dimensional Euclidean space, without additional assumptions, is, generally speaking, not a basis of this space. At an intuitive level, without giving strict definitions, we describe the essence of the matter. In an arbitrary infinite-dimensional Euclidean space $E$, consider an ONS $e_1, e_2, \dots$, where $(e_i, e_j) = \delta_{ij}$ is the Kronecker symbol. Let $M$ be a subspace of the Euclidean space, and $K = M^{\perp}$ the subspace orthogonal to $M$, such that the Euclidean space is $E = M \oplus M^{\perp}$. The projection of a vector $x \in E$ onto the subspace $M$ is the vector $\hat{x} = \sum_k \alpha_k e_k \in M$.

We look for those values of the decomposition coefficients $\alpha_k$ at which the residual (the squared deviation) $h^2 = \|x - \hat{x}\|^2$ is minimal:

$h^2 = \|x - \hat{x}\|^2 = \big(x - \sum_k \alpha_k e_k,\ x - \sum_k \alpha_k e_k\big) = (x, x) - 2\sum_k \alpha_k (x, e_k) + \big(\sum_k \alpha_k e_k, \sum_k \alpha_k e_k\big) = \|x\|^2 - 2\sum_k \alpha_k (x, e_k) + \sum_k \alpha_k^2 = \|x\|^2 + \sum_k \big(\alpha_k - (x, e_k)\big)^2 - \sum_k (x, e_k)^2.$

It is clear that this expression takes its minimum value at $\alpha_k = (x, e_k)$, the choice $\alpha_k = 0$ being trivial. Then $\rho_{\min} = \|x\|^2 - \sum_k \alpha_k^2 \ge 0$. From here we obtain Bessel's inequality $\sum_k \alpha_k^2 \le \|x\|^2$. When $\rho = 0$, the orthonormal system of vectors (ONS) is called a complete orthonormal system in the sense of Steklov (PONS). From here one can obtain the Steklov–Parseval equality $\sum_k \alpha_k^2 = \|x\|^2$, the "Pythagorean theorem" for infinite-dimensional Euclidean spaces complete in the sense of Steklov. One would now have to prove that, in order for every vector of the space to be representable in a unique way as a Fourier series converging to it, it is necessary and sufficient that the Steklov–Parseval equality hold. Does the system of vectors form an ONB? Consider the partial sum of the series $\hat{x}_n = \sum_{k=1}^{n} \alpha_k e_k$; then $\|x - \hat{x}_n\|^2 = \sum_{k=n+1}^{\infty} \alpha_k^2 \to 0$ as the tail of a convergent series. Thus the system of vectors is a PONS and forms an ONB.
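Bessel's inequality can be observed numerically; a sketch for the illustrative choice $f(x) = x$ on $[-\pi, \pi]$, whose trigonometric coefficients are $b_k = 2(-1)^{k+1}/k$:

```python
# The partial sums of b_k^2 never exceed (1/pi) * integral of f^2 = 2*pi^2/3
# (Bessel) and approach it as n grows (Steklov-Parseval in the limit).
import numpy as np

norm_sq = 2 * np.pi**2 / 3
for n in (1, 10, 100, 1000):
    partial = sum((2 * (-1)**(k + 1) / k) ** 2 for k in range(1, n + 1))
    print(n, partial <= norm_sq, round(norm_sq - partial, 6))
```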

Example. The trigonometric system above, in the space of all functions $f(x)$ integrable in the Riemann sense on the segment $[-\pi, \pi]$, is a PONS and forms an ONB.

Below we give several criteria of linear dependence and, accordingly, of linear independence of systems of vectors.

Theorem. (A necessary and sufficient condition for the linear dependence of vectors.)

A system of vectors is linearly dependent if and only if one of the vectors of the system is linearly expressed through the other vectors of this system.

Proof. Necessity. Let the system $a_1, \dots, a_n$ be linearly dependent. Then, by definition, it represents the zero vector nontrivially, i.e., there is a nontrivial linear combination of this system of vectors equal to the zero vector:

$\alpha_1 a_1 + \alpha_2 a_2 + \dots + \alpha_n a_n = 0,$

where at least one of the coefficients of this linear combination is not equal to zero. Let $\alpha_k \ne 0$, $k \in \{1, 2, \dots, n\}$.

Divide both sides of the previous equality by this nonzero coefficient (i.e., multiply by $1/\alpha_k$):

$a_k = \beta_1 a_1 + \dots + \beta_{k-1} a_{k-1} + \beta_{k+1} a_{k+1} + \dots + \beta_n a_n,$

where $\beta_i = -\dfrac{\alpha_i}{\alpha_k}$ for $i \ne k$,

i.e., one of the vectors of the system is linearly expressed through the other vectors of this system, Q.E.D.

Sufficiency. Let one of the vectors of the system be linearly expressed through the other vectors:

$a_k = \beta_1 a_1 + \dots + \beta_{k-1} a_{k-1} + \beta_{k+1} a_{k+1} + \dots + \beta_n a_n.$

Move the vector $a_k$ to the right-hand side of this equality:

$\beta_1 a_1 + \dots + \beta_{k-1} a_{k-1} - a_k + \beta_{k+1} a_{k+1} + \dots + \beta_n a_n = 0.$

Since the coefficient of the vector $a_k$ equals $-1 \ne 0$, we have a nontrivial representation of zero by the system of vectors, which means that this system of vectors is linearly dependent, Q.E.D.

The theorem is proved.

Corollary.

1. A system of vectors of a vector space is linearly independent if and only if none of its vectors is linearly expressed through the other vectors of this system.

2. A system of vectors containing a zero vector or two equal vectors is linearly dependent.

Proof.

1) Necessity. Let the system be linearly independent. Suppose the contrary: that there is a vector of the system linearly expressed through the other vectors. Then by the theorem the system is linearly dependent, and we arrive at a contradiction.

Sufficiency. Let none of the vectors of the system be expressed through the others. Suppose the contrary: let the system be linearly dependent; then it follows from the theorem that there is a vector of the system linearly expressed through the other vectors of this system, and we again arrive at a contradiction.

2a) Let the system contain a zero vector. For definiteness, suppose $a_1 = 0$. Then the equality

$a_1 = 0 \cdot a_2 + 0 \cdot a_3 + \dots + 0 \cdot a_n$

is obvious, i.e., one of the vectors of the system is linearly expressed through the other vectors of this system. It follows from the theorem that such a system of vectors is linearly dependent, Q.E.D.

Note that this fact can also be proved directly from the definition of a linearly dependent system of vectors.

Since $a_1 = 0$, the following equality is obvious:

$1 \cdot a_1 + 0 \cdot a_2 + \dots + 0 \cdot a_n = 0.$

This is a nontrivial representation of the zero vector, which means the system is linearly dependent.

2b) Let the system contain two equal vectors: $a_i = a_j$ for $i \ne j$; for definiteness, $a_1 = a_2$. Then the equality

$a_1 = 1 \cdot a_2 + 0 \cdot a_3 + \dots + 0 \cdot a_n$

is obvious, i.e., the first vector is linearly expressed through the remaining vectors of the same system. It follows from the theorem that this system is linearly dependent, Q.E.D.

Similarly to the previous case, this statement can also be proved directly from the definition of a linearly dependent system: such a system represents the zero vector nontrivially,

$1 \cdot a_1 + (-1) \cdot a_2 + 0 \cdot a_3 + \dots + 0 \cdot a_n = 0,$

whence follows the linear dependence of the system.

The theorem is proved.

Corollary. A system consisting of a single vector is linearly independent if and only if this vector is nonzero.

Definition. A set $W$ is called a linear space, and its elements vectors, if:

* a law (+) is given by which any two elements $x, y$ of $W$ are assigned an element of $W$ called their sum and denoted $x + y$;

* a law (multiplication by a number $\alpha$) is given by which any element $x$ of $W$ and any number $\alpha$ are assigned an element of $W$ called the product of $x$ by $\alpha$ and denoted $\alpha x$;

* the following requirements (or axioms) are fulfilled:

A1. $x + y = y + x$; A2. $(x + y) + z = x + (y + z)$; A3. there exists a zero vector $0$ such that $x + 0 = x$ for every $x$; A4. for every $x$ there exists an opposite vector $-x$ such that $x + (-x) = 0$; A5. $1 \cdot x = x$; A6. $\alpha(\beta x) = (\alpha\beta)x$; A7. $(\alpha + \beta)x = \alpha x + \beta x$; A8. $\alpha(x + y) = \alpha x + \alpha y$.

Corollary C1. The zero vector is unique. (Suppose there are two zero vectors $0_1$ and $0_2$. By A3: $0_2 + 0_1 = 0_2$ and $0_1 + 0_2 = 0_1$. By A1: $0_1 + 0_2 = 0_2 + 0_1 \Rightarrow 0_1 = 0_2$.)

C2. For every vector $x$ the opposite vector is unique. (A4.)

C3. $0 \cdot x = 0$ for every vector $x$. (A7.)

C4. $\alpha \cdot 0 = 0$ for any number $\alpha$. (A6, C3.)

C5. $(-1) \cdot x$ is the vector opposite to $x$, i.e., $(-1)x = -x$. (A5, A6.)

C6. In $W$ subtraction is defined: the vector $x$ is called the difference of the vectors $b$ and $a$ if $x + a = b$, and it is denoted $x = b - a$.

A number $n$ is called the dimension of a linear space $L$ if in $L$ there exists a system of $n$ linearly independent vectors, while any system of $n + 1$ vectors is linearly dependent. Then $\dim L = n$, and the space $L$ is called $n$-dimensional.

An ordered collection of $n$ linearly independent vectors of an $n$-dimensional space is called a basis.

Theorem. Every vector $x$ can be represented in a unique way as a linear combination of the basis vectors.

Let $e_1, \dots, e_n$ (1) be a basis of an $n$-dimensional linear space $V$, i.e., a collection of linearly independent vectors. The collection $x, e_1, \dots, e_n$ is linearly dependent, because it contains $n + 1$ vectors.

That is, there exist numbers $\alpha_0, \alpha_1, \dots, \alpha_n$, not all equal to zero simultaneously, such that $\alpha_0 x + \alpha_1 e_1 + \dots + \alpha_n e_n = 0$, and $\alpha_0 \ne 0$ (otherwise (1) would be linearly dependent).

Then $x = \xi_1 e_1 + \dots + \xi_n e_n$ (*), where $\xi_i = -\alpha_i/\alpha_0$, is the decomposition of the vector $x$ in the basis (1).

This expression is unique, for if there were another expression $x = \xi'_1 e_1 + \dots + \xi'_n e_n$ (**),

then, subtracting equality (**) from (*),

we would obtain $(\xi_1 - \xi'_1)e_1 + \dots + (\xi_n - \xi'_n)e_n = 0$.

Since $e_1, \dots, e_n$ are linearly independent, $\xi_i = \xi'_i$ for $i = 1, \dots, n$. Q.E.D.
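Numerically, finding the decomposition of $x$ in a basis amounts to solving a linear system; a sketch assuming NumPy, with an illustrative basis of $\mathbb{R}^2$:

```python
# The coordinates xi of x in a basis e1, e2 are the unique solution of
# E @ xi = x, where the columns of E are the basis vectors.
import numpy as np

E = np.column_stack(([1.0, 0.0], [1.0, 1.0]))  # e1 = (1, 0), e2 = (1, 1)
x = np.array([3.0, 2.0])
xi = np.linalg.solve(E, x)                     # unique, since det(E) != 0
print(xi)                                      # [1. 2.]: x = 1*e1 + 2*e2
```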

Theorem. If $e_1, \dots, e_n$ are linearly independent vectors of a space $V$ and every vector $x$ from $V$ can be represented through them, then these vectors form a basis of $V$.

Proof: (1) $e_1, \dots, e_n$ is linearly independent, so it remains to show that any system $a_1, \dots, a_m$ with $m > n$ is linearly dependent. By hypothesis each vector is expressed through (1): $a_j = \sum_{i=1}^{n} c_{ij} e_i$, $j = 1, \dots, m$. Consider the matrix $C = (c_{ij})$: its rank is at most $n$, so among its columns there are no more than $n$ linearly independent ones; but $m > n$, so the $m$ columns are linearly dependent, i.e., there are numbers $\lambda_1, \dots, \lambda_m$, not all zero, with $\sum_{j=1}^{m} \lambda_j c_{ij} = 0$ for $i = 1, \dots, n$.

Then $\sum_j \lambda_j a_j = \sum_i \big( \sum_j \lambda_j c_{ij} \big) e_i = 0$, i.e., the vectors $a_1, \dots, a_m$ are linearly dependent.

Thus the space $V$ is $n$-dimensional and (1) is its basis.

Definition. A subset $L$ of a linear space $V$ is called a linear subspace of this space if, with respect to the operations (+) and (multiplication by a number $\alpha$) given in $V$, the set $L$ is itself a linear space.

Theorem. A set $L$ of vectors of a space $V$ is a linear subspace of this space if and only if (1) the sum of any vectors from $L$ belongs to $L$ and (2) the product of any vector from $L$ by any number belongs to $L$.

(Sufficiency.) Let (1) and (2) be fulfilled; to see that $L$ is a subspace of $V$, it remains to prove that all the axioms of a linear space hold.

The existence of the opposite vector: $-x = (-1)x \in L$ by (2), and $-x + x = 0$; the zero vector: $0 = 0 \cdot x \in L$ by (2) and C3. The identities $\alpha(x + y) = \alpha x + \alpha y$, $(\alpha + \beta)x = \alpha x + \beta x$ and the remaining axioms follow from their validity in $V$.

(Necessity.) Let $L$ be a linear subspace of this space; then (1) and (2) are fulfilled by virtue of the definition of a linear space.

Definition. The collection of all possible linear combinations of some elements $(x_j)$ of a linear space is called the linear span of these elements.

Theorem. The set of all linear combinations of vectors of $V$ with real coefficients is a linear subspace of $V$ (the linear span of a given system of vectors of a linear space is a linear subspace of this space).

Definition. A nonempty subset $L$ of vectors of a linear space $V$ is called a linear subspace if:

a) the sum of any vectors from $L$ belongs to $L$;

b) the product of each vector from $L$ by any number belongs to $L$.

The sum of two subspaces $L_1$ and $L_2$ of a linear space $L$ is again a subspace of $L$.

1) Let $y_1, y_2 \in L_1 + L_2$, i.e., $y_1 = x_1 + x_2$ and $y_2 = x'_1 + x'_2$, where $x_1, x'_1 \in L_1$ and $x_2, x'_2 \in L_2$. Then $y_1 + y_2 = (x_1 + x_2) + (x'_1 + x'_2) = (x_1 + x'_1) + (x_2 + x'_2)$, where $x_1 + x'_1 \in L_1$ and $x_2 + x'_2 \in L_2$, so the first condition of a linear subspace is fulfilled.

2) $\alpha y_1 = \alpha x_1 + \alpha x_2$, where $\alpha x_1 \in L_1$ and $\alpha x_2 \in L_2$. Thus $(y_1 + y_2) \in (L_1 + L_2)$ and $\alpha y_1 \in (L_1 + L_2)$: both conditions are fulfilled, so $L_1 + L_2$ is a linear subspace.

The intersection of two subspaces $L_1$ and $L_2$ of a linear space $L$ is also a subspace of this space.

Consider two arbitrary vectors $x, y$ belonging to the intersection of the subspaces and two arbitrary numbers $a, b$.

By the definition of the intersection of sets: $x, y \in L_1$ and $x, y \in L_2$.

By the definition of a subspace of a linear space: $ax + by \in L_1$ and $ax + by \in L_2$.

Since the vector $ax + by$ belongs both to the set $L_1$ and to the set $L_2$, it belongs, by definition, to the intersection of these sets. Thus $ax + by \in L_1 \cap L_2$, and $L_1 \cap L_2$ is a linear subspace.
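The same closure reasoning can be tested computationally; a sketch assuming NumPy and SciPy, where a basis of the intersection of two column spans is read off from the null space of a concatenated matrix (the subspaces are illustrative):

```python
# x lies in span(A) and span(B) exactly when x = A@u = B@v for some (u, v)
# in the null space of [A | -B].
import numpy as np
from scipy.linalg import null_space

A = np.array([[1, 0], [0, 1], [0, 0]], dtype=float)  # L1: the xy-plane
B = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)  # L2: the yz-plane
N = null_space(np.hstack([A, -B]))
for col in N.T:
    print(A @ col[:2])  # each vector is proportional to (0, 1, 0): the y-axis
```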

Definition. The space $V$ is the direct sum of its subspaces $L_1$ and $L_2$ (written $V = L_1 \oplus L_2$) if a) every vector $x \in V$ decomposes as $x = x_1 + x_2$ with $x_1 \in L_1$, $x_2 \in L_2$, and b) this decomposition is unique.

We show that b) is equivalent to b'): the subspaces $L_1$ and $L_2$ intersect only in the zero vector.

When b) holds, b') holds: suppose there exists $z \in L_1 \cap L_2$, $z \ne 0$. Then, along with $x = x_1 + x_2$, we would have the different decomposition $x = (x_1 + z) + (x_2 - z)$, a contradiction.

Conversely, when b') holds, b) holds: if $x = x_1 + x_2 = x'_1 + x'_2$, then $x_1 - x'_1 = x'_2 - x_2$ belongs to both $L_1$ and $L_2$, hence equals the zero vector, and the decomposition is unique.

Theorem. For (*) $V = L_1 \oplus L_2$ it is necessary and sufficient that the union of bases of $L_1$ and $L_2$ constitute a basis of the space $V$.

(Necessity.) Let (*) hold and let the vectors $e_1, \dots, e_k$ and $f_1, \dots, f_m$ be bases of the subspaces $L_1$ and $L_2$. Every $x \in V$ has a decomposition $x = x_1 + x_2$ with $x_1 \in L_1$ and $x_2 \in L_2$, so $x$ is expanded in the combined system $e_1, \dots, e_k, f_1, \dots, f_m$. To assert that it is a basis, it remains to prove its linear independence. Suppose a linear combination of all of them equals $0$; since $0 = 0 + 0$, by the uniqueness of the decomposition of $0$ both partial combinations vanish, and by the linear independence of each basis all the coefficients vanish. Hence $e_1, \dots, e_k, f_1, \dots, f_m$ is a basis.

(Sufficiency.) Let $e_1, \dots, e_k, f_1, \dots, f_m$ form a basis of $V$, so each $x$ has a unique decomposition (**) in this basis. Grouping its terms gives $x = x_1 + x_2$ with $x_1 \in L_1$, $x_2 \in L_2$, so at least one decomposition (*) exists; and by the uniqueness of (**), the decomposition (*) is unique.

Comment. The dimension of a direct sum is equal to the sum of the dimensions of the subspaces.
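A small illustration of the comment, assuming NumPy and an obvious splitting of $\mathbb{R}^3$:

```python
# L1 = span{(1,0,0), (0,1,0)} and L2 = span{(0,0,1)} intersect only in the
# zero vector; the union of their bases has rank 2 + 1 = 3, so it is a basis.
import numpy as np

combined = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1]], dtype=float)
print(np.linalg.matrix_rank(combined))  # 3 = dim L1 + dim L2
```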

Any non-degenerate square matrix can serve as the transition matrix from one basis to another.

Suppose there are two bases, $e = (e_1, \dots, e_n)$ and $e' = (e'_1, \dots, e'_n)$, of an $n$-dimensional linear space $V$.

(1) $e' = eA$, where the entries of the rows $e$ and $e'$ are not numbers but vectors; nevertheless, certain operations on numerical matrices extend to such rows.

The matrix $A$ is non-degenerate, because otherwise the vectors $e'_1, \dots, e'_n$ would be linearly dependent.

Conversely, if $\det A \ne 0$, then the columns of $A$ are linearly independent, so the vectors $e'_i = \sum_k a_{ki} e_k$ form a basis.

The coordinates $\xi$ and $\xi'$ of a vector in the two bases are connected by the relation $\xi = A\xi'$, where $a_{ki}$ are the elements of the transition matrix.

Indeed, let the decomposition of the elements of the "new" basis in the "old" one be known: $e'_i = \sum_{k=1}^{n} a_{ki} e_k$.

Then the equalities $x = \sum_{k=1}^{n} \xi_k e_k = \sum_{i=1}^{n} \xi'_i e'_i = \sum_{i=1}^{n} \xi'_i \sum_{k=1}^{n} a_{ki} e_k = \sum_{k=1}^{n} \Big( \sum_{i=1}^{n} a_{ki} \xi'_i \Big) e_k$ hold.

But if two linear combinations of linearly independent elements are equal, their coefficients coincide, so $\xi_k = \sum_{i=1}^{n} a_{ki} \xi'_i$, i.e., $\xi = A\xi'$.
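A sketch of the coordinate transformation, assuming NumPy and an illustrative non-degenerate transition matrix:

```python
# With e' = e A (the i-th column of A holds the coordinates of e'_i in the
# old basis), coordinates transform as xi_old = A @ xi_new.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])         # det(A) = 1 != 0
xi_new = np.array([2.0, 3.0])      # coordinates of x in the new basis
xi_old = A @ xi_new                # coordinates of the same x in the old basis
print(xi_old)                      # [5. 3.]
print(np.linalg.solve(A, xi_old))  # recovers xi_new: [2. 3.]
```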

Basic linear dependence theorem

If a linearly independent system of vectors (*) $x_1, \dots, x_n$ is linearly expressed through a system (**) $y_1, \dots, y_m$, then $n \le m$.

We prove this by induction on $m$.

$m = 1$: then every vector of (*) has the form $x_i = \alpha_i y_1$. If $n > 1$, the system (*) would contain two proportional vectors and would be linearly dependent, which is impossible; hence $n = 1 \le m$.

Suppose the statement is true for $m = k - 1$.

We prove it for $m = k$. Express each vector of (*) through (**): $x_i = a_{i1} y_1 + \dots + a_{ik} y_k$. It may turn out that 1) all $a_{ik} = 0$, i.e., the vectors of (1) = (*) are linear combinations of the vectors of (2) $y_1, \dots, y_{k-1}$ alone. The system (1) is linearly independent, because it is the linearly independent system (*), and it is expressed through the $k - 1$ vectors of system (2); by the induction assumption we obtain $n \le k - 1 < k$. 2) Otherwise some $a_{nk} \ne 0$; replacing each $x_i$, $i < n$, by $x_i - \dfrac{a_{ik}}{a_{nk}} x_n$ yields a system (1) of $n - 1$ vectors that is linearly independent, because it is built from the linearly independent system (*), and is expressed through $y_1, \dots, y_{k-1}$ alone. By the induction assumption $n - 1 \le k - 1$, i.e., $n \le k$.
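An illustration of the theorem, assuming NumPy: three vectors expressed through two must be linearly dependent, since $n = 3 > m = 2$ (the vectors are illustrative):

```python
import numpy as np

y1 = np.array([1.0, 0.0, 2.0])
y2 = np.array([0.0, 1.0, 1.0])
X = np.vstack([y1 + y2, 2 * y1 - y2, 3 * y2])  # three combinations of y1, y2
print(np.linalg.matrix_rank(X) < 3)            # True: the system is dependent
```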

 

