Initial approximations. Numerical methods: solving nonlinear equations

Let the root of the equation f(x) = 0 be separated on the segment [a, b], and let the first and second derivatives f′(x) and f″(x) be continuous and of constant sign for x ∈ [a, b].

Suppose that in the course of root refinement an approximation x_n to the root has been obtained (or chosen). Assume that the next approximation is produced by a correction h_n that leads to the exact value of the root:

x = x_n + h_n. (1.2.3-6)

Treating h_n as a small quantity, we represent f(x_n + h_n) as a Taylor series limited to the linear terms:

f(x_n + h_n) ≈ f(x_n) + h_n f′(x_n). (1.2.3-7)

Considering that f(x) = f(x_n + h_n) = 0, we obtain f(x_n) + h_n f′(x_n) ≈ 0.

Hence h_n ≈ −f(x_n) / f′(x_n). Substituting this value of h_n into (1.2.3-6) and taking the result as a new approximation in place of the exact root x, we obtain

x_{n+1} = x_n − f(x_n) / f′(x_n). (1.2.3-8)

Formula (1.2.3-8) yields a sequence of approximations x_1, x_2, x_3, …, which under certain conditions converges to the exact value of the root x, i.e. x_n → x as n → ∞.

The geometric interpretation of Newton's method is the following
(Fig. 1.2.3-6). Take the right end of the segment, b, as the initial approximation x_0, and at the corresponding point B_0 on the graph of the function y = f(x) construct the tangent. The point of intersection of the tangent with the abscissa axis is taken as the new, more accurate approximation x_1. Repeating this procedure many times yields a sequence of approximations x_0, x_1, x_2, …, which tends to the exact value of the root x.

The working formula of Newton's method (1.2.3-8) can also be obtained from the geometric construction. In the right triangle x_0 B_0 x_1 the leg
x_0 x_1 = x_0 B_0 / tan α. Since the point B_0 lies on the graph of the function f(x), and the hypotenuse is formed by the tangent to the graph of f(x) at the point B_0, we get

tan α = f′(x_0), x_0 B_0 = |f(x_0)|, (1.2.3-9)

x_1 = x_0 − f(x_0) / f′(x_0). (1.2.3-10)

This formula coincides with (1.2.3-8) for the n-th approximation.

From Fig. 1.2.3-6 it can be seen that choosing the point a as the initial approximation can lead to the next approximation x_1 falling outside the segment on which the root x is separated. In this case the convergence of the process is not guaranteed. In the general case, the initial approximation is selected according to the following rule: one should take as the initial approximation a point x_0 ∈ [a, b] at which f(x_0) · f″(x_0) > 0, that is, at which the signs of the function and of its second derivative coincide.

The conditions for the convergence of Newton's method are formulated in the following theorem.

If the root of the equation f(x) = 0 is separated on the segment [a, b], and f′(x) and f″(x) are nonzero and preserve their signs for x ∈ [a, b], then, if a point x_0 ∈ [a, b] such that f(x_0) · f″(x_0) > 0 is chosen as the initial approximation, the root of the equation f(x) = 0 can be computed to any degree of accuracy.

The error of Newton's method is estimated by the following expression:

|x − x_n| ≤ M_2 · (x_n − x_{n−1})² / (2 m_1), (1.2.3-11)

where m_1 is the smallest value of |f′(x)| for x ∈ [a, b], and M_2 is the greatest value of |f″(x)| for x ∈ [a, b].

The computation process stops when |x_n − x_{n−1}| ≤ ε,

where ε is the specified accuracy.

In addition, the following expressions can serve as the condition for achieving the given accuracy when refining the root by Newton's method:

|x_n − x_{n−1}| ≤ ε or |f(x_n)| ≤ ε.

The scheme of the Newton method algorithm is shown in Fig. 1.2.3-7.

In the algorithm, the left-hand side of the original equation, f(x), and its derivative f′(x) are implemented as separate program modules.

Fig. 1.2.3-7. Newton method algorithm scheme

Example 1.2.3-3. Using Newton's method, refine the roots of the equation x − ln(x + 2) = 0, given that the roots of this equation are separated on the segments x_1 ∈ [−1.9; −1.1] and x_2 ∈ [−0.9; 2].

The first derivative f′(x) = 1 − 1/(x + 2) preserves its sign on each of the segments:

f′(x) < 0 for x ∈ [−1.9; −1.1],

f′(x) > 0 for x ∈ [−0.9; 2].

The second derivative f″(x) = 1/(x + 2)² > 0 for any x.

Thus, the convergence conditions are satisfied. Since f″(x) > 0 on the entire domain of definition, to refine the root x_1 we choose x_0 = −1.9 as the initial approximation (since f(−1.9) · f″(−1.9) > 0).

Continuing the calculations by formula (1.2.3-8), we obtain the following sequence of the first four approximations: −1.9; −1.8552; −1.8421; −1.8414. The value of the function f(x) at the point x = −1.8414 is f(−1.8414) = −0.00003.

To refine the root x_2 ∈ [−0.9; 2], we choose x_0 = 2 as the initial approximation (f(2) · f″(2) > 0). Starting from x_0 = 2, we obtain the sequence of approximations: 2.0; 1.1817; 1.1462; 1.1461. The value of the function f(x) at the point x = 1.1461 is f(1.1461) = −0.00006.

Newton's method has a high rate of convergence; however, at each step it requires calculating not only the value of the function but also the value of its derivative.

Chord method

The geometric interpretation of the chord method is the following
(Fig. 1.2.3-8).

Draw a chord (a straight line) through the points A and B. The next approximation x_1 is the abscissa of the point of intersection of the chord with the 0x axis. Construct the equation of this straight line:

(y − f(a)) / (f(b) − f(a)) = (x − a) / (b − a).

Setting y = 0, we find the value x = x_1 (the next approximation):

x_1 = a − f(a) · (b − a) / (f(b) − f(a)).

Repeating the computation gives the next approximation to the root, x_2:

x_2 = x_1 − f(x_1) · (b − x_1) / (f(b) − f(x_1)).

In this case (Fig. 1.2.3-8) the working formula of the chord method is

x_{n+1} = x_n − f(x_n) · (b − x_n) / (f(b) − f(x_n)).

This formula is valid when the point b is taken as the fixed point and the point a as the initial approximation.

Now consider the other case (Fig. 1.2.3-9), when the fixed end of the segment is a.

The equation of the straight line for this case is

(y − f(x_0)) / (f(a) − f(x_0)) = (x − x_0) / (a − x_0).

The next approximation x_1 at y = 0 is

x_1 = x_0 − f(x_0) · (x_0 − a) / (f(x_0) − f(a)).

Then the recurrence formula of the chord method for this case is

x_{n+1} = x_n − f(x_n) · (x_n − a) / (f(x_n) − f(a)).

It should be noted that the fixed point in the chord method is the end of the segment for which the condition f(x) · f″(x) > 0 holds.

Thus, if the end a is taken as the fixed point, then the initial approximation is x_0 = b, and vice versa.

Sufficient conditions ensuring the computation of the root of the equation f(x) = 0 by the chord formula are the same as for the tangent method (Newton's method), only the fixed point is chosen by the rule used for the initial approximation in Newton's method. The chord method is a modification of Newton's method. The difference is that the next approximation in Newton's method is the point of intersection of the tangent with the 0x axis, while in the chord method it is the point of intersection of the chord with the 0x axis; the approximations converge to the root from different sides.

The error of the chord method is estimated by the expression

|x − x_n| ≤ (M_1 − m_1) / m_1 · |x_n − x_{n−1}|, (1.2.3-15)

where m_1 and M_1 are the smallest and the greatest values of |f′(x)| for x ∈ [a, b].

The condition for terminating the iteration process in the chord method is

(M_1 − m_1) / m_1 · |x_n − x_{n−1}| ≤ ε. (1.2.3-16)

If M_1 < 2m_1, then the simpler formula |x_n − x_{n−1}| ≤ ε can be used to estimate the error of the method.

Example 1.2.3-4. Refine the root of the equation e^x − 3x = 0 separated on the segment [0; 1] to an accuracy of 10⁻⁴.

Check the convergence conditions: f′(x) = e^x − 3 < 0 and f″(x) = e^x > 0 preserve their signs for x ∈ [0; 1].

Consequently, a = 0 should be selected as the fixed point, and x_0 = 1 taken as the initial approximation, since f(0) = 1 > 0 and f(0) · f″(0) > 0.

In the minimization problem, a good choice of the initial approximation of the function's parameters is of paramount importance. Of course, it is impossible to devise a general rule that would be satisfactory in all cases, that is, for all possible nonlinear functions; each time one has to look for one's own solution. Below is a set of methods for finding coarse initial approximations, which in practice can serve as a starting point for finding satisfactory approximations in a particular problem.

9.6.1. Search on a grid. This method is particularly effective when the number of properly nonlinear parameters is small. Often the functions are arranged so that, when the values of some parameters (which we call properly nonlinear) are fixed, the remaining parameters become linear.

Having defined lower and upper bounds for the nonlinear parameters, one can, with some step, carry out an exhaustive search on the resulting grid of values of these properly nonlinear parameters and identify the linear regression that leads to the minimum sum of squares.

As an example, consider a function with a single properly nonlinear parameter whose admissible range is assumed known. Let h be the step for this parameter. Compute the linear regressions on the grid of its values and find for each of them the minimum sum of squares. The smallest of these corresponds to the optimal initial approximation. In principle, the step h, on which the "coarseness" of the grid depends, may be varied, so that by decreasing h the parameter values can be found to any accuracy.

9.6.2. Transforming the model.

Sometimes, by some transformation, the model can be reduced to a linear one, or the number of properly nonlinear parameters can be reduced (see Section 6.2.3). Let us show how this can be achieved using the example of the logistic curve.

Applying the inverse transformation to the corresponding regression equation, we arrive at a new function whose number of linear parameters has increased from one to two. The estimate for the remaining nonlinear parameter in the new model can be found, for example, by the previous method.

It is appropriate to make the following remark about transformations of regression models. It should be borne in mind that an error which entered the initial equation additively will, generally speaking, no longer be additive after the transformation.

Expanding the transformation in a Taylor series and neglecting the higher-order term, one obtains an approximate expression for the transformed error. This approximate equality can be taken as the basis for analyzing the problem with the transformed model.

9.6.3. Splitting the sample into subsamples.

To find the initial approximation, the whole sample can be split into as many subsamples (of approximately equal sizes) as there are unknown parameters. For each subsample, find the averages of y and of x. Equating the regression function at the average x of each subsample to the corresponding average y gives a system of nonlinear equations with respect to the parameters.

The solution of this system will serve as the initial approximation of the parameters. Obviously, for this method to "work", the system of nonlinear equations must be fairly easy to solve, for example analytically.

9.6.4. Expansion in a Taylor series in the independent variable.

Iterative minimization of the sum of squares is based on expanding the regression function in a Taylor series to linear terms in the parameters. To find a coarse initial approximation, it is sometimes useful to approximate the regression function by expanding it in a Taylor series in the independent variable instead. For simplicity we consider the one-dimensional case. Expanding around the mean of x and relabeling the coefficients, we arrive at a linear model.

Let the least-squares estimates of the parameters of this linear regression be found. As initial approximations we take the solution of the nonlinear system of equations expressing the original parameters through these estimated coefficients.

Newton's (tangent) method for root finding

This is an iterative method invented by Isaac Newton around 1664. The method is sometimes called the Newton-Raphson method, since Raphson invented the same algorithm several years after Newton, but his article was published much earlier.

The problem is as follows. An equation is given:

f(x) = 0.

It is required to solve this equation, more precisely, to find one of its roots (it is assumed that a root exists). It is assumed that f(x) is continuous and differentiable on the segment under consideration.

Algorithm

The input of the algorithm, besides the function f(x), is the initial approximation: some x_0 from which the algorithm starts.

Suppose x_i has already been computed; x_{i+1} is computed as follows. Draw the tangent to the graph of the function at the point x = x_i, and find the point of intersection of this tangent with the abscissa axis. Set x_{i+1} to the point found, and repeat the whole process from the beginning.

It is not difficult to obtain the following formula:

x_{i+1} = x_i − f(x_i) / f′(x_i).

It is intuitively clear that if the function f(x) is "good" (smooth) and x_i is close enough to the root, then x_{i+1} will be even closer to the desired root.

The rate of convergence is quadratic, which, loosely speaking, means that the number of correct digits in the approximate value doubles with each iteration.

Application: computing the square root

Consider Newton's method on the example of computing the square root of a number n.

If we substitute f(x) = x² − n, then after simplifying the expression we get:

x_{i+1} = (x_i + n / x_i) / 2.

The first typical variant of the problem is when a fractional number n is given and its root must be computed with some accuracy:

    double n;
    cin >> n;
    const double EPS = 1E-15;
    double x = 1;
    for (;;) {
        double nx = (x + n / x) / 2;
        if (fabs(x - nx) < EPS) break;
        x = nx;
    }
    printf("%.15lf", x);

Another common variant of the problem is when the integer root must be computed (i.e. the greatest x such that x² ≤ n). Here the stop condition of the algorithm has to be changed slightly, because x may start "jumping" around the answer. Therefore we add the condition: if the value decreased at the previous step but tries to increase at the current step, the algorithm must stop.

    int n;
    cin >> n;
    int x = 1;
    bool decreased = false;
    for (;;) {
        int nx = (x + n / x) >> 1;
        if (x == nx || (nx > x && decreased)) break;
        decreased = nx < x;
        x = nx;
    }
    cout << x;

Finally, we give a third variant, for the case of long arithmetic. Since the number n can be quite large, it makes sense to pay attention to the initial approximation. Obviously, the closer it is to the root, the faster the result is achieved. A fairly simple and effective choice is to take as the initial approximation the number 2^(bits/2), where bits is the number of bits in the number n. Here is Java code demonstrating this variant:

    BigInteger n; // input data
    BigInteger a = BigInteger.ONE.shiftLeft(n.bitLength() / 2);
    boolean p_dec = false;
    for (;;) {
        BigInteger b = n.divide(a).add(a).shiftRight(1);
        if (a.compareTo(b) == 0 || (a.compareTo(b) < 0 && p_dec)) break;
        p_dec = a.compareTo(b) > 0;
        a = b;
    }

For example, for a sufficiently large number this variant of the code runs in a matter of milliseconds, while if the improved choice of the initial approximation is removed (simply starting from 1), it runs noticeably longer.

 

