Physical quantities. The International System of Units (SI)

A physical quantity is a characteristic of physical objects or phenomena of the material world that is common, in a qualitative sense, to many objects or phenomena, but individual, in a quantitative sense, for each of them. For example, mass is a physical quantity: it is a general qualitative characteristic of physical objects, but quantitatively it takes its own individual value for each object.

The value of a physical quantity is its estimate, expressed as the product of an abstract number and the unit adopted for that quantity. For example, in the expression for atmospheric air pressure p = 95.2 kPa, 95.2 is the abstract number representing the numerical value of the pressure, and kPa is the unit of pressure adopted in this case.

A unit of a physical quantity is a physical quantity fixed in size and taken as the basis for the quantitative evaluation of specific physical quantities. For example, the meter and the centimeter are used as units of length.

One of the most important characteristics of a physical quantity is its dimension. The dimension of a physical quantity reflects its relationship to the quantities adopted as base quantities in the system of quantities under consideration.

The system of quantities defined by the International System of Units (SI), which has been adopted in Russia, contains the seven base quantities presented in Table 1.1.

There are also two supplementary SI units, the radian and the steradian, whose characteristics are presented in Table 1.2.

From the base and supplementary SI units, 18 derived SI units with special, mandatory names are formed. Sixteen of them are named after scientists; the remaining two are the lux and the lumen (see Table 1.3).

Special unit names may be used in forming other derived units. Derived units that do not have special mandatory names include those of area, volume, speed, acceleration, density, momentum, moment of force, etc.

Along with SI units, their decimal multiples and submultiples may also be used. Table 1.4 presents the names, designations and factors of the corresponding prefixes, known as SI prefixes.

The choice of a particular decimal multiple or submultiple is determined primarily by its convenience in practice. As a rule, multiples and submultiples are chosen so that the numerical value of the quantity falls in the range from 0.1 to 1000. For example, instead of 4,000,000 Pa it is better to write 4 MPa.
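This rule of thumb can be sketched in code; a minimal illustration using the prefix factors of Table 1.4 (the function name and table layout are illustrative, not from the text):

```python
# Prefix factors from Table 1.4 (exa down to atto), used to rescale a value
# so that its numeric part falls in the range [0.1, 1000).
PREFIXES = [
    (1e18, "E"), (1e15, "P"), (1e12, "T"), (1e9, "G"), (1e6, "M"),
    (1e3, "k"), (1e0, ""), (1e-3, "m"), (1e-6, "u"), (1e-9, "n"),
    (1e-12, "p"), (1e-15, "f"), (1e-18, "a"),
]

def with_prefix(value, unit):
    """Rewrite `value unit` using the prefix that puts the number in [0.1, 1000)."""
    for factor, symbol in PREFIXES:
        scaled = value / factor
        if 0.1 <= abs(scaled) < 1000:
            return f"{scaled:g} {symbol}{unit}"
    return f"{value:g} {unit}"  # outside the table's range: leave unscaled

print(with_prefix(4_000_000, "Pa"))  # 4 MPa
print(with_prefix(2500, "m"))        # 2.5 km
```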

Table 1.1. Basic SI units

Length: dimension L; recommended symbol l. Unit: meter (international designation m; Russian designation м). The meter is equal to the distance traveled in vacuum by a plane electromagnetic wave in 1/299 792 458 of a second. Recommended multiples and submultiples: km, cm, mm, µm, nm.

Mass: dimension M; recommended symbol m. Unit: kilogram (kg; кг). The kilogram is equal to the mass of the international prototype of the kilogram. Recommended multiples and submultiples: Mg, g, mg, µg.

Time: dimension T; recommended symbol t. Unit: second (s; с). The second is equal to 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom. Recommended multiples and submultiples: ks, ms, µs, ns.

Electric current: dimension I; recommended symbol I. Unit: ampere (A; А). The ampere is equal to the strength of a constant current which, flowing through two parallel conductors of infinite length and negligibly small circular cross-section placed 1 m apart in vacuum, would produce a force of 2·10⁻⁷ N on each 1 m section of conductor. Recommended multiples and submultiples: kA, mA, µA, nA, pA.

Thermodynamic temperature: dimension Θ; recommended symbol T. Unit: kelvin* (K; К). The kelvin is equal to 1/273.16 of the thermodynamic temperature of the triple point of water. Recommended multiples and submultiples: MK, kK, mK, µK.

Amount of substance: dimension N; recommended symbols n, ν. Unit: mole (mol; моль). The mole is equal to the amount of substance of a system containing as many structural elements as there are atoms in 0.012 kg of carbon-12. Recommended multiples and submultiples: kmol, mmol, µmol.

Luminous intensity: dimension J; recommended symbol J. Unit: candela (cd; кд). The candela is equal to the luminous intensity, in a given direction, of a source emitting monochromatic radiation of frequency 540·10¹² Hz whose radiant intensity in that direction is 1/683 W/sr.

* In addition to the Kelvin temperature (symbol T), the Celsius temperature (symbol t), defined by the expression t = T - 273.15 K, may also be used. Kelvin temperature is expressed in kelvins, Celsius temperature in degrees Celsius (°C). A temperature interval or difference on the Kelvin scale is expressed only in kelvins; on the Celsius scale it may be expressed either in kelvins or in degrees Celsius.
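The footnote's relation between the two temperature scales is simple enough to state in code (a sketch; the function names are illustrative):

```python
# t = T - 273.15: conversion between Kelvin temperature T and Celsius temperature t.
def kelvin_to_celsius(T):
    return T - 273.15

def celsius_to_kelvin(t):
    return t + 273.15

# The triple point of water, 273.16 K, corresponds to 0.01 degrees Celsius:
print(kelvin_to_celsius(273.16))
# A temperature *interval* has the same numeric value in kelvins and in Celsius:
print(celsius_to_kelvin(25.0) - celsius_to_kelvin(20.0))
```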

Table 1.2. Supplementary SI units

Plane angle: dimension 1; recommended symbols α, β, γ, θ, ν, φ; defining equation α = s/r. Unit: radian (rad; рад). The radian is equal to the angle between two radii of a circle that cut off an arc whose length is equal to the radius. Recommended submultiples: mrad, µrad.

Solid angle: dimension 1; recommended symbols ω, Ω; defining equation Ω = S/r². Unit: steradian (sr; ср). The steradian is equal to the solid angle with its vertex at the center of a sphere that cuts out, on the surface of the sphere, an area equal to that of a square with side equal to the radius of the sphere.

Table 1.3. Derived SI units with special names

Frequency: dimension T⁻¹; unit hertz (Hz; Гц).
Force, weight: dimension LMT⁻²; unit newton (N; Н).
Pressure, mechanical stress, modulus of elasticity: dimension L⁻¹MT⁻²; unit pascal (Pa; Па).
Energy, work, quantity of heat: dimension L²MT⁻²; unit joule (J; Дж).
Power, energy flux: dimension L²MT⁻³; unit watt (W; Вт).
Electric charge (quantity of electricity): dimension TI; unit coulomb (C; Кл).
Electric voltage, electric potential, electric potential difference, electromotive force: dimension L²MT⁻³I⁻¹; unit volt (V; В).
Electrical capacitance: dimension L⁻²M⁻¹T⁴I²; unit farad (F; Ф).
Electrical resistance: dimension L²MT⁻³I⁻²; unit ohm (Ω; Ом).
Electrical conductance: dimension L⁻²M⁻¹T³I²; unit siemens (S; См).
Magnetic flux: dimension L²MT⁻²I⁻¹; unit weber (Wb; Вб).
Magnetic flux density (magnetic induction): dimension MT⁻²I⁻¹; unit tesla (T; Тл).
Inductance, mutual inductance: dimension L²MT⁻²I⁻²; unit henry (H; Гн).
Luminous flux: dimension J; unit lumen (lm; лм).
Illuminance: dimension L⁻²J; unit lux (lx; лк).
Activity of a nuclide in a radioactive source: dimension T⁻¹; unit becquerel (Bq; Бк).
Absorbed dose of radiation, kerma: dimension L²T⁻²; unit gray (Gy; Гр).
Equivalent dose of radiation: dimension L²T⁻²; unit sievert (Sv; Зв).

Table 1.4. Names and designations of SI prefixes for forming decimal multiples and submultiples, and their factors

Prefix (international designation; Russian designation): factor

exa (E; Э): 10¹⁸
peta (P; П): 10¹⁵
tera (T; Т): 10¹²
giga (G; Г): 10⁹
mega (M; М): 10⁶
kilo (k; к): 10³
hecto* (h; г): 10²
deca* (da; да): 10¹
deci* (d; д): 10⁻¹
centi* (c; с): 10⁻²
milli (m; м): 10⁻³
micro (µ; мк): 10⁻⁶
nano (n; н): 10⁻⁹
pico (p; п): 10⁻¹²
femto (f; ф): 10⁻¹⁵
atto (a; а): 10⁻¹⁸

* The prefixes "hecto", "deca", "deci" and "centi" may be used only with units that are already in wide use, for example: decimeter, centimeter, deciliter, hectoliter.

MATHEMATICAL OPERATIONS WITH APPROXIMATE NUMBERS

Measurements, as well as many mathematical operations, yield approximate values of the desired quantities. It is therefore necessary to consider a number of rules for calculating with approximate values; these rules reduce the amount of computational work and eliminate additional errors. Quantities such as logarithms, various physical constants and measurement results all have approximate values.

As is known, any number is written using the digits 1, 2, ..., 9, 0; of these, 1, 2, ..., 9 are considered significant. Zero may be a significant digit when it stands in the middle or at the end of a number, or an insignificant digit when it stands at the left of a decimal fraction and merely indicates the place value of the remaining digits.

When writing down an approximate number, it should be kept in mind that its digits may be correct, doubtful or incorrect. A digit is correct if the absolute error of the number is less than one unit of that digit's place (all digits to its left will then also be correct). The digit to the right of the last correct digit is called doubtful, and the digits to the right of the doubtful one are incorrect. Incorrect digits must be discarded, not only in the result but also in the source data, by rounding the number. When the error of a number is not indicated, its absolute error is assumed to equal half a unit of the place of its last digit. The place of the most significant digit of the error indicates the place of the doubtful digit in the number. Only correct and doubtful digits may be used as significant figures; if the error of the number is not indicated, then all of its digits are significant.

The following basic rule for writing approximate numbers should be applied (in accordance with ST SEV 543-77): an approximate number should be written with as many significant digits as guarantee the correctness of its last significant digit. For example:

1) writing the number 4.6 means that only the digits of the units and tenths are correct (the true value of the number may be 4.64, 4.62 or 4.56);

2) writing the number 4.60 means that the hundredths of the number are also correct (the true value may be 4.604, 4.602 or 4.596);

3) writing the number 493 means that all three digits are correct; if the last digit 3 cannot be vouched for, the number should be written as 4.9·10²;

4) when expressing the density of mercury, 13.6 g/cm³, in SI units (kg/m³), one should write 13.6·10³ kg/m³ and not 13,600 kg/m³, which would imply that five significant digits are correct, whereas the original number provides only three correct significant digits.

The results of experiments are recorded using significant figures only. The decimal point is placed immediately after the first non-zero digit, and the number is multiplied by ten raised to the appropriate power. Zeros at the beginning or end of a number are usually not written. For example, the numbers 0.00435 and 234,000 are written as 4.35·10⁻³ and 2.34·10⁵. This notation simplifies calculations, especially when working with logarithms.
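Python's exponent formatting produces exactly this normalized notation; a small sketch (the helper name is illustrative):

```python
def to_significant(x, sig):
    """Write x in scientific notation with `sig` significant digits."""
    return f"{x:.{sig - 1}e}"

print(to_significant(0.00435, 3))  # 4.35e-03
print(to_significant(234000, 3))   # 2.34e+05
```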

Rounding a number (in accordance with ST SEV 543-77) is the discarding of its significant digits to the right of a certain place, with a possible change to the digit in that place.

The last retained digit is not changed if:

1) the first discarded digit, counting from left to right, is less than 5;

2) the first discarded digit is equal to 5 but was obtained as a result of a previous rounding up.

The last retained digit is increased by one if:

1) the first discarded digit is greater than 5;

2) the first discarded digit, counting from left to right, is equal to 5 (in the absence of previous roundings or in the presence of a previous rounding down).

Rounding should be done immediately to the desired number of significant figures, rather than in stages, which can lead to errors.
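The rules above can be sketched as rounding to a given number of significant figures. The history-dependent treatment of a discarded 5 cannot be reproduced without tracking earlier roundings, so this sketch uses round-half-even for bare ties (my assumption, not stated in the text):

```python
from decimal import Decimal, ROUND_HALF_EVEN

def round_significant(x, sig):
    """Round x to `sig` significant figures in a single step."""
    d = Decimal(str(x))
    if d == 0:
        return 0.0
    # Position of the last retained digit relative to the decimal point:
    exponent = d.adjusted() - (sig - 1)
    q = Decimal(1).scaleb(exponent)
    return float(d.quantize(q, rounding=ROUND_HALF_EVEN))

print(round_significant(4.649, 2))  # 4.6  (first discarded digit 4 < 5)
print(round_significant(4.96, 2))   # 5.0  (first discarded digit 6 > 5)
print(round_significant(12345, 3))  # 12300.0, rounded at once, not in stages
```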

GENERAL CHARACTERISTICS AND CLASSIFICATION OF SCIENTIFIC EXPERIMENTS

Each experiment is a combination of three components: the phenomenon under study (process, object), conditions and means of conducting the experiment. The experiment is carried out in several stages:

1) subject-substantive study of the process under study and its mathematical description based on available a priori information, analysis and determination of the conditions and means of conducting the experiment;

2) creation of conditions for conducting the experiment and functioning of the object under study in the desired mode, ensuring the most effective observation of it;

3) collection, registration and mathematical processing of experimental data, presentation of processing results in the required form;

4) use of experimental results, for example, correction of the physical model of the phenomenon or object, use of the model for prediction, control or optimization, etc.

Depending on the type of object (phenomenon) under study, several classes of experiments are distinguished: physical, engineering, medical, biological, economic, sociological, etc. The general questions of conducting physical and engineering experiments, in which natural or artificial physical objects (devices) and the processes occurring in them are studied, have been developed most thoroughly. In conducting them, the researcher can repeat measurements of physical quantities under similar conditions, set the desired values of the input variables, vary them over a wide range, and fix or eliminate the influence of factors whose effect is not currently under study.

Experiments can be classified according to the following criteria:

1) the degree of proximity of the object used in the experiment to the object in relation to which it is planned to obtain new information (full-scale, bench or test site, model, computational experiments);

2) objectives – research, testing (control), management (optimization, tuning);

3) the degree of influence on the experimental conditions (passive and active experiments);

4) the degree of human participation (experiments using automatic, automated and non-automated means of conducting an experiment).

The result of an experiment in a broad sense is a theoretical understanding of experimental data and the establishment of laws and cause-and-effect relationships that make it possible to predict the course of phenomena of interest to the researcher and to select conditions under which it is possible to achieve the required or most favorable course. In a narrower sense, the result of an experiment is often understood as a mathematical model that establishes formal functional or probabilistic connections between various variables, processes or phenomena.

GENERAL INFORMATION ABOUT THE EXPERIMENTAL TOOLS

The initial information for constructing a mathematical model of the phenomenon under study is obtained using experimental facilities: a set of measuring instruments of various types (measuring devices, transducers and accessories), information transmission channels, and auxiliary devices that ensure the conditions for conducting the experiment. Depending on the goals of the experiment, a distinction is sometimes made between measuring information (research) systems, measuring monitoring (inspection, testing) systems and measuring control (management, optimization) systems, which differ both in the composition of the equipment and in the complexity of processing the experimental data. The composition of the measuring instruments is largely determined by the mathematical model of the object being described.

Due to the increasing complexity of experimental research, modern measuring systems include computing tools of various classes (computers, programmable microcalculators). These tools perform both the tasks of collecting and mathematical processing of experimental information, and the tasks of controlling the progress of the experiment and automating the functioning of the measuring system. The effectiveness of using computing tools when conducting experiments is manifested in the following main areas:

1) reducing the time for preparing and conducting an experiment as a result of accelerating the collection and processing of information;

2) increasing the accuracy and reliability of experimental results based on the use of more complex and efficient algorithms for processing measurement signals, increasing the volume of experimental data used;

3) reduction in the number of researchers and the emergence of the possibility of creating automatic systems;

4) strengthening control over the progress of the experiment and increasing the possibilities for its optimization.

Thus, modern means of conducting experiments are, as a rule, measuring and computing systems (MCS) or complexes equipped with advanced computing tools. When justifying the structure and composition of an MCS, the following main tasks must be solved:

1) determine the composition of the MCS hardware (measuring instruments, auxiliary equipment);

2) select the type of computer included in the MCS;

3) establish communication channels between the computer, the devices included in the MCS hardware, and the information consumer;

4) develop the MCS software.

2. PLANNING THE EXPERIMENT AND STATISTICAL PROCESSING OF EXPERIMENTAL DATA

BASIC CONCEPTS AND DEFINITIONS

Most studies are carried out to establish, experimentally, functional or statistical relationships between several quantities, or to solve extremal problems. The classical method of setting up an experiment fixes all the variable factors at accepted levels except one, whose values are varied in a certain way over its domain of definition. This is the basis of the one-factor experiment (often called passive). In a one-factor experiment, varying one factor while stabilizing all others at selected levels, one finds the dependence of the quantity under study on that factor alone. Performing a large number of one-factor experiments in the study of a multifactor system yields partial dependencies, presented in many graphs of an illustrative nature. The partial dependencies found in this way cannot be combined into a single general one. In a one-factor (passive) experiment, statistical methods are applied after the experiments are finished, when the data have already been obtained.

The use of a single-factor experiment for a comprehensive study of a multifactorial process requires a very large number of experiments. In some cases, their implementation requires significant time, during which the influence of uncontrolled factors on the experimental results can change significantly. For this reason, the data from a large number of experiments are incomparable. It follows that the results of single-factor experiments obtained in the study of multifactor systems are often of little use for practical use. In addition, when solving extreme problems, the data from a significant number of experiments turn out to be unnecessary, since they were obtained for a region far from the optimum. To study multifactor systems, the most appropriate is the use of statistical methods of experiment planning.

Experimental planning is understood as the process of determining the number and conditions of conducting experiments necessary and sufficient to solve a given problem with the required accuracy.

Experimental planning is a branch of mathematical statistics. It covers statistical methods for experimental design. These methods make it possible in many cases to obtain models of multifactor processes with a minimum number of experiments.

The effectiveness of using statistical methods of experimental planning in the study of technological processes is explained by the fact that many important characteristics of these processes are random variables, the distributions of which closely follow the normal law.

Characteristic features of the experimental planning process are the desire to minimize the number of experiments; simultaneous variation of all studied factors according to special rules - algorithms; the use of mathematical apparatus that formalizes many of the researcher’s actions; choosing a strategy that allows you to make informed decisions after each series of experiments.

When planning an experiment, statistical methods are used at all stages of the study: first of all before the experiments, in developing the experimental design; during the experiment, in processing the results; and after the experiment, in making decisions on further actions. Such an experiment is called active, and it presupposes experiment planning.

The main advantages of an active experiment are related to the fact that it allows:

1) minimize the total number of experiments;

2) choose clear, logically sound procedures that are consistently performed by the experimenter when conducting the study;

3) use a mathematical apparatus that formalizes many of the experimenter’s actions;

4) simultaneously vary all variables and optimally use the factor space;

5) organize the experiment in such a way that many of the initial premises of regression analysis are met;

6) obtain mathematical models that have better properties in some sense compared to models built from passive experiment;

7) randomize the experimental conditions, i.e. turn numerous interfering factors into random variables;

8) evaluate the element of uncertainty associated with the experiment, which makes it possible to compare the results obtained by different researchers.

Most often, an active experiment is set up to solve one of two main problems. The first is called the extremal problem: finding the process conditions that ensure the optimal value of the selected parameter. The hallmark of an extremal problem is the requirement to find the extremum of some function. Experiments performed to solve optimization problems are called extremal.

The second problem is called the interpolation problem. It consists of constructing an interpolation formula to predict the values of the parameter under study, which depends on a number of factors.

To solve an extremal or interpolation problem, it is necessary to have a mathematical model of the object under study. A model of the object is obtained using experimental results.

When studying a multifactor process, setting up all possible experiments to obtain a mathematical model is associated with the enormous complexity of the experiment, since the number of all possible experiments is very large. The task of planning an experiment is to establish the minimum required number of experiments and the conditions for their conduct, to select methods for mathematical processing of the results, and to make decisions.
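The "very large" number of possible experiments is easy to quantify: a full factorial design with k factors, each at p levels, requires p to the power k runs (an illustrative calculation, not from the text):

```python
def full_factorial_runs(levels, factors):
    """Number of runs needed to try every combination of factor levels."""
    return levels ** factors

print(full_factorial_runs(2, 3))   # 8: quite feasible
print(full_factorial_runs(5, 10))  # 9765625: impractical without planning
```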

MAIN STAGES AND MODES OF STATISTICAL PROCESSING OF EXPERIMENTAL DATA

2. Drawing up an experimental plan, in particular, determining the values ​​of independent variables, selecting test signals, estimating the volume of observations. Preliminary justification and selection of methods and algorithms for statistical processing of experimental data.

3. Conducting direct experimental research, collecting experimental data, recording it and entering it into a computer.

4. Preliminary statistical processing of data, intended, first of all, to check the fulfillment of the prerequisites underlying the selected statistical method for constructing a stochastic model of the research object, and, if necessary, to correct the a priori model and change the decision on the choice of processing algorithm.

5. Drawing up a detailed plan for further statistical analysis of experimental data.

6. Statistical processing of experimental data (secondary, complete, final processing), aimed at constructing a model of the research object, and statistical analysis of its quality. Sometimes at the same stage, problems of using the constructed model are also solved, for example: object parameters are optimized.

7. Formal, logical and meaningful interpretation of the results of experiments, making a decision to continue or complete the experiment, summing up the results of the study.

Statistical processing of experimental data can be carried out in two main modes.

In the first mode, the full set of experimental data is first collected and recorded, and only then processed. This type of processing is called off-line processing, a posteriori processing, or processing of a sample of full (fixed) size. Its advantage is that the entire arsenal of statistical methods can be brought to bear on the data, and hence the experimental information can be extracted from them most completely. However, the promptness of such processing may not satisfy the consumer, and controlling the progress of the experiment is practically impossible.

In the second mode, observations are processed in parallel with their acquisition. This type of processing is called on-line processing, processing of a sample of growing size, or sequential data processing. In this mode it becomes possible to analyze the results of an experiment rapidly and to control its progress promptly.
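Sequential processing is usually implemented with recurrences that update statistics as each observation arrives. A sketch using Welford's running mean and variance (my choice of algorithm; the text does not prescribe one):

```python
class RunningStats:
    """On-line mean and sample variance, updated one observation at a time."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self._m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self._m2 += delta * (x - self.mean)

    def variance(self):
        return self._m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for x in [9.8, 10.1, 10.0, 9.9, 10.2]:  # observations arriving one by one
    stats.update(x)
print(stats.mean, stats.variance())
```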

GENERAL INFORMATION ABOUT BASIC STATISTICAL METHODS

When solving problems of processing experimental data, methods are used based on two main components of the apparatus of mathematical statistics: the theory of statistical estimation of unknown parameters used in describing the experimental model, and the theory of testing statistical hypotheses about the parameters or nature of the analyzed model.

1. Correlation analysis. Its essence is to determine the closeness of the (usually linear) relationship between two or more random variables. These random variables may be input, independent variables; the set may also include the resulting (dependent) variable. In the latter case, correlation analysis makes it possible to select the factors or regressors (in a regression model) that have the most significant influence on the resulting characteristic. The selected quantities are used for further analysis, in particular in regression analysis. Correlation analysis can reveal previously unknown cause-and-effect relationships between variables; it should be borne in mind, however, that the presence of a correlation between variables is only a necessary, not a sufficient, condition for the presence of a causal relationship.

Correlation analysis is used at the stage of preliminary processing of experimental data.
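A minimal sketch of the quantity this stage computes, the Pearson correlation coefficient (pure Python; the data are illustrative):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]  # nearly linear in x
print(pearson_r(x, y))         # close to +1: a strong linear relationship
```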

2. Analysis of variance. This method is intended for processing experimental data that depends on qualitative factors, and for assessing the significance of the influence of these factors on the results of observations.

Its essence consists in decomposing the variance of the resulting variable into independent components, each of which characterizes the influence of a particular factor on that variable. Comparing these components makes it possible to assess the significance of the factors' influence.
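The decomposition can be sketched for a one-factor layout: the total sum of squares splits into a between-group part (the factor's contribution) and a within-group remainder (illustrative code and data, not from the text):

```python
def sum_of_squares_decomposition(groups):
    """Split the total SS into between-group and within-group components."""
    all_obs = [x for g in groups for x in g]
    grand_mean = sum(all_obs) / len(all_obs)
    ss_total = sum((x - grand_mean) ** 2 for x in all_obs)
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    return ss_total, ss_between, ss_total - ss_between

# Observations at three levels of a qualitative factor:
groups = [[10.0, 10.2, 9.8], [11.0, 11.1, 10.9], [9.0, 9.1, 8.9]]
total, between, within = sum_of_squares_decomposition(groups)
print(between / total)  # share of variance attributable to the factor
```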

3. Regression analysis. Regression analysis methods make it possible to establish the structure and parameters of a model connecting quantitative resultant and factor variables, and to assess the degree of its consistency with experimental data. This type of statistical analysis allows you to solve the main problem of the experiment if the observed and resulting variables are quantitative, and in this sense it is fundamental when processing this type of experimental data.
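For a single quantitative factor, the simplest regression model is a straight line fitted by least squares; a minimal sketch (the data are illustrative):

```python
def fit_line(x, y):
    """Least-squares estimates b0, b1 for the model y = b0 + b1*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) \
        / sum((a - mx) ** 2 for a in x)
    b0 = my - b1 * mx
    return b0, b1

x = [0.0, 1.0, 2.0, 3.0]
y = [1.1, 2.9, 5.1, 6.9]  # roughly y = 1 + 2x with small random errors
b0, b1 = fit_line(x, y)
print(b0, b1)
```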

4. Factor analysis. Its essence is that the "external" factors used in the model, which are strongly interrelated, are replaced by other, fewer "internal" factors that are difficult or impossible to measure but that determine the behavior of the "external" factors and thereby the behavior of the resulting variable. Factor analysis makes it possible to put forward hypotheses about the structure of the relationships between variables without specifying this structure in advance and without any prior information about it; the structure is determined from the results of observations. The resulting hypotheses can be tested in further experiments. The task of factor analysis is to find a simple structure that reflects and reproduces the real, existing dependencies fairly accurately.

4. MAIN TASKS OF PRE-PROCESSING EXPERIMENTAL DATA

The ultimate goal of preliminary processing of experimental data is to put forward hypotheses about the class and structure of the mathematical model of the phenomenon under study, determine the composition and volume of additional measurements, and select possible methods for subsequent statistical processing. To do this, it is necessary to solve some particular problems, among which the following can be distinguished:

1. Analysis, rejection and restoration of anomalous (erroneous) or missing measurements, since experimental information is usually heterogeneous in quality.

2. Experimental verification of the laws of distribution of the obtained data, assessment of the parameters and numerical characteristics of the observed random variables or processes. The choice of methods for subsequent processing aimed at constructing and checking the adequacy of a mathematical model for the phenomenon under study significantly depends on the law of distribution of observed quantities.

3. Compression and grouping of initial information with a large volume of experimental data. In this case, the features of their distribution laws, which were identified at the previous stage of processing, must be taken into account.

4. Combining several groups of measurements, possibly obtained at different times or under different conditions, for joint processing.

5. Identification of statistical relationships and mutual influence of various measured factors and resulting variables, successive measurements of the same quantities. Solving this problem allows you to select those variables that have the strongest impact on the resulting characteristic. The selected factors are used for further processing, in particular, using regression analysis methods. Analysis of correlations makes it possible to put forward hypotheses about the structure of the relationship between variables and, ultimately, about the structure of the phenomenon model.

Pre-processing is characterized by an iterative solution of the main problems, when they repeatedly return to the solution of a particular problem after obtaining the results at the subsequent stage of processing.

1. CLASSIFICATION OF MEASUREMENT ERRORS.

Measurement is the experimental determination of the value of a physical quantity using special technical means. Measurements may be direct, when the desired value is found directly from the experimental data, or indirect, when the desired quantity is determined from a known relationship between it and quantities subjected to direct measurement. The value of a quantity found by measurement is called the measurement result.

The imperfection of measuring instruments and of the human senses, and often the nature of the measured quantity itself, mean that the results of any measurement are obtained with a certain limited accuracy: the experiment gives not the true value of the measured quantity but only an approximation to it. By the actual value of a physical quantity we mean a value found experimentally that is so close to the true value that it can be used in its place for the given purpose.

The accuracy of a measurement is determined by the closeness of its result to the true value of the measured quantity. The accuracy of the instrument is determined by the degree of approximation of its readings to the true value of the desired quantity, and the accuracy of the method is determined by the physical phenomenon on which it is based.

Measurement errors are characterized by the deviation of the measurement results from the true value of the measured quantity. Like the true value itself, the measurement error is usually unknown. Therefore, one of the main tasks of the statistical processing of experimental results is to estimate the true value of the measured quantity from the experimental data obtained. In other words, after measuring the desired quantity repeatedly and obtaining a series of results, each containing some unknown error, the task is to compute an approximate value of the quantity with the smallest possible error.

Measurement errors are divided into gross errors (blunders), systematic errors and random errors.

Gross errors. Gross errors arise from a violation of the basic measurement conditions or from an oversight by the experimenter. If a gross error is detected, the measurement result should be discarded immediately and the measurement repeated. An external sign of a result containing a gross error is that it differs sharply in magnitude from the other results. Some criteria for excluding gross errors are based on this difference in magnitude (they will be discussed later); however, the most reliable and effective way is to reject incorrect results directly during the measurement process itself.
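One simple magnitude-based criterion can be sketched as follows: flag an observation that lies more than k sample standard deviations from the mean of the remaining observations (a "3-sigma" style rule chosen for illustration; the specific criteria the text promises are discussed later):

```python
import math

def flag_suspects(data, k=3.0):
    """Indices of observations more than k std. devs. from the others' mean."""
    flagged = []
    for i, x in enumerate(data):
        rest = data[:i] + data[i + 1:]
        m = sum(rest) / len(rest)
        s = math.sqrt(sum((r - m) ** 2 for r in rest) / (len(rest) - 1))
        if s > 0 and abs(x - m) > k * s:
            flagged.append(i)
    return flagged

data = [10.1, 9.9, 10.0, 10.2, 9.8, 14.0]  # the last reading looks like a blunder
print(flag_suspects(data))  # [5]
```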

Systematic errors. A systematic error is one that remains constant or varies in a regular way on repeated measurements of the same quantity. Systematic errors arise from incorrect adjustment of instruments, inaccuracy of the measurement method, some omission by the experimenter, or the use of inaccurate data in calculations.

Systematic errors also arise when performing complex measurements. The experimenter may not be aware of them, although they can be very large. Therefore, in such cases it is necessary to carefully analyze the measurement methodology. Such errors can be detected, in particular, by measuring the desired quantity using another method. The coincidence of measurement results by both methods serves as a certain guarantee of the absence of systematic errors.

When making measurements, every effort must be made to eliminate systematic errors, since they can be large enough to grossly distort the results. Identified systematic errors are eliminated by introducing corrections.

Random errors. A random error is the component of the measurement error that varies randomly, i.e., the error that remains after all identified systematic and gross errors have been eliminated. Random errors are caused by a large number of factors, both objective and subjective, that cannot be isolated and accounted for individually. Since the causes of random errors differ from experiment to experiment and cannot be accounted for, such errors cannot be excluded; one can only estimate their magnitude. Using the methods of probability theory, their influence on the estimate of the true value of the measured quantity can be taken into account, with an error significantly smaller than the errors of the individual measurements.

Therefore, when the random error is greater than the instrument error, the same measurement must be repeated many times to reduce it. This makes it possible to bring the random error down to a value comparable with the instrument error. If the random error is already smaller than the instrument error, there is no point in reducing it further.

In addition, errors are divided into absolute, relative, and instrumental. An absolute error is an error expressed in the units of the measured quantity. A relative error is the ratio of the absolute error to the true value of the measured quantity. The component of the measurement error that depends on the errors of the measuring instruments used is called the instrumental measurement error.


2. ERRORS IN DIRECT EQUAL-PRECISION MEASUREMENTS. LAW OF NORMAL DISTRIBUTION.

Direct measurements are measurements in which the value of the quantity under study is found directly from experimental data, for example, by taking readings from an instrument that measures the desired quantity. To find the random error, the measurement must be carried out several times. The results of such measurements have errors of similar magnitude and are called equally accurate (equal-precision) measurements.

Suppose that n equally accurate measurements of a quantity X have given a series of values: x_1, x_2, …, x_n. As shown in error theory, the value closest to the true value X_0 is the arithmetic mean

x̄ = (x_1 + x_2 + … + x_n)/n = (1/n) Σ x_i . (2.1)

The arithmetic mean is regarded only as the most probable value of the measured quantity. The results of individual measurements generally differ from the true value X_0. The absolute error of the i-th measurement is

Δx_i = X_0 − x_i

and can take positive and negative values with equal probability. Summing the errors of all n measurements gives

Σ Δx_i = Σ (X_0 − x_i) = n X_0 − Σ x_i ,

whence

X_0 = (1/n) Σ x_i + (1/n) Σ Δx_i = x̄ + (1/n) Σ Δx_i . (2.2)

In this expression the second term on the right-hand side vanishes for large n, since every positive error can be paired with an equal negative one. Then X_0 = x̄. With a limited number of measurements there is only the approximate equality X_0 ≈ x̄; the mean x̄ can therefore be called the real value.
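As a simple illustration, the arithmetic mean (2.1) can be computed directly; the readings below are hypothetical, not taken from the text:

```python
# A minimal sketch of Eq. (2.1): the arithmetic mean as the most
# probable value of the measured quantity. The readings are hypothetical.
samples = [12.3, 12.5, 12.4, 12.6, 12.2]

n = len(samples)
mean = sum(samples) / n           # x̄ = (1/n) Σ x_i

print(round(mean, 6))             # 12.4
```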

In all practical cases the value X_0 is unknown; there is only a certain probability that X_0 lies in some interval near x̄, and this interval must be determined for the corresponding probability. As an estimate of the absolute error of an individual measurement, Δx_i = x̄ − x_i is used.

It determines the accuracy of a given measurement.

For a series of measurements, the mean arithmetic error is defined as

h = (1/n) Σ |Δx_i| = (1/n) Σ |x̄ − x_i| .

It defines the limits within which more than half of the measurements lie. Hence X_0 falls, with fairly high probability, within the interval from x̄ − h to x̄ + h. The results of measurements of the quantity X are then written in the form:

x = x̄ ± h .

The smaller the interval in which the true value X_0 lies, the more accurately the quantity X is measured.

The absolute error Δx of a measurement result does not by itself determine the accuracy of the measurement. Let, for example, the accuracy of a certain ammeter be 0.1 A, and let current measurements in two electrical circuits give the values 32 ± 0.1 A and 0.2 ± 0.1 A. Although the absolute measurement error is the same, the measurement accuracy differs: in the first case the measurement is quite accurate, while in the second it allows one to judge only the order of magnitude. Therefore, when assessing the quality of a measurement, the error must be compared with the measured value itself, which gives a clearer idea of the measurement accuracy. For this purpose the concept of relative error is introduced:

δx = Δx / x̄ . (2.3)

The relative error is usually expressed as a percentage.
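A short sketch of how the mean arithmetic error and the relative error (2.3) could be computed for a hypothetical series of readings:

```python
# Hypothetical readings; h is the mean arithmetic error,
# delta the relative error of Eq. (2.3).
samples = [12.3, 12.5, 12.4, 12.6, 12.2]
n = len(samples)
mean = sum(samples) / n                       # x̄, Eq. (2.1)

h = sum(abs(mean - x) for x in samples) / n   # mean arithmetic error
delta = h / mean                              # relative error, Eq. (2.3)

print(round(h, 3), round(100 * delta, 2))     # 0.12 0.97  (per cent)
```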

Since in most cases measured quantities have dimensions, absolute errors are dimensional while relative errors are dimensionless; the latter can therefore be used to compare the accuracy of measurements of different quantities. Finally, the experiment should be designed so that the relative error remains constant over the entire measurement range.

It should be noted that for correct and carefully performed measurements the mean arithmetic error of the result is close to the error of the measuring device.

If the quantity X is measured many times, the frequency of occurrence of each value x_i can be presented as a graph in the form of a stepped curve, a histogram (see Fig. 1), where y is the number of readings falling in each interval Δx_i = x_{i+1} − x_i. As the number of measurements increases and the interval Δx_i decreases, the histogram turns into a continuous curve characterizing the probability density of finding a value in the interval Δx_i.


By the distribution of a random variable we understand the set of all possible values of the random variable together with their corresponding probabilities. A distribution law of a random variable is any correspondence between the possible values of the random variable and their probabilities. The most general form of the distribution law is the distribution function F(x).

Then the function p(x) = F′(x) is the probability density function, or differential distribution function. The graph of a probability density function is called the distribution curve.

The function p(x) has the property that the product p(x)dx is the probability that an individual, randomly chosen value of the measured quantity falls in the interval (x, x + dx).

In the general case this probability can be described by various distribution laws (normal (Gaussian), Poisson, Bernoulli, binomial, negative binomial, geometric, hypergeometric, discrete uniform, negative exponential). Most often, however, the probability of a value x_i occurring in the interval (x, x + dx) in physical experiments is described by the normal distribution law, Gauss's law (see Fig. 2):

p(x) = (1/(σ √(2π))) exp(−(x − x̄)² / (2σ²)) , (2.4)

where σ² is the variance of the general population. The general population is the entire set of possible measurement values x_i (or possible error values Δx_i).

The widespread use of Gauss's law in error theory is explained by the following reasons:

1) with a large number of measurements, errors of equal absolute value occur equally often;

2) errors that are small in absolute value occur more often than large ones, i.e., the greater the absolute value of an error, the less probable it is;

3) measurement errors take a continuous series of values.

These conditions are never met strictly. Experiments have confirmed, however, that in the region where errors are not very large, the normal distribution law agrees well with experimental data. The normal law makes it possible to find the probability of an error taking a particular value.

The Gaussian distribution is characterized by two parameters: the mean value of the random variable and the variance σ². The mean value is the abscissa x = x̄ of the axis of symmetry of the distribution curve, and the variance shows how quickly the probability of an error falls with an increase in its absolute value. The curve has a maximum at x = x̄; the mean value is therefore the most probable value of the quantity X. The variance is determined by the half-width of the distribution curve, i.e., by the distance from the axis of symmetry to the inflection points of the curve. It is the mean square of the deviations of individual measurement results from their arithmetic mean, taken over the entire distribution. If measurements of a physical quantity yield only the constant value x = x̄, then σ² = 0; but if the values of the random variable X are not all equal to x̄, the variance is nonzero and positive. The variance thus serves as a measure of the fluctuation of the values of a random variable.

The measure of the scatter of individual measurement results about the mean value must be expressed in the same units as the measured quantity itself. For this reason the quantity

σ = √(σ²)

is called the mean square (standard) error.

It is the most important characteristic of the measurement results and remains constant when the experimental conditions remain unchanged.

The value of σ determines the shape of the distribution curve.

Since when σ changes the area under the curve remains constant (equal to unity) while the curve changes shape, a decrease in σ stretches the distribution curve upward near its maximum at x = x̄ and compresses it horizontally.

As σ increases, the value of the function p(x) decreases and the distribution curve stretches along the x axis (see Fig. 2).

For the normal distribution law, the mean square error of an individual measurement is

σ = √( (1/n) Σ (x_i − x̄)² ) (for n → ∞), (2.5)

and the mean square error of the mean value is

σ_x̄ = σ / √n . (2.6)

The mean square error characterizes measurement errors more accurately than the arithmetic mean error, since it is obtained quite strictly from the law of distribution of random error values. In addition, its direct connection with dispersion, the calculation of which is facilitated by a number of theorems, makes the mean square error a very convenient parameter.

Along with the dimensional error σ, the dimensionless relative error δσ = σ/x̄ is also used; like δx, it is expressed either as a fraction of unity or as a percentage. The final measurement result is written as x = x̄ ± σ_x̄.

In practice, however, it is impossible to take very many measurements, so the normal distribution cannot be constructed exactly enough to determine the true value X_0. In this case a good approximation to the true value is x̄, and a fairly accurate estimate of the measurement error is the sample variance, which follows from the normal distribution law but refers to a finite number of measurements. The name comes from the fact that out of the entire set of values x_i (the general population) only a finite number n of values is selected (measured), called a sample. The sample is characterized by the sample mean and the sample variance.

Then the sample mean square error of an individual measurement (the empirical standard) is

S_n = √( Σ (x_i − x̄)² / (n − 1) ) , (2.8)

and the sample mean square error of a series of measurements is

S_x̄ = S_n / √n = √( Σ (x_i − x̄)² / (n (n − 1)) ) . (2.9)

Expression (2.9) shows that the mean square error can be made arbitrarily small by increasing the number of measurements. For n > 10, however, a noticeable change in S_x̄ is achieved only with a very considerable number of additional measurements, so further increasing their number is inexpedient. Moreover, systematic errors can never be completely eliminated, and once the random error is smaller than the residual systematic error, a further increase in the number of experiments is likewise pointless.
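Formulas (2.8) and (2.9) can be sketched in code as follows; the sample values are hypothetical:

```python
import math

# Hypothetical equal-precision readings of one quantity.
samples = [12.3, 12.5, 12.4, 12.6, 12.2]
n = len(samples)
mean = sum(samples) / n

# Sample mean square error of an individual measurement, Eq. (2.8)
s_n = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))

# Sample mean square error of the series (of the mean), Eq. (2.9)
s_mean = s_n / math.sqrt(n)

print(round(s_n, 4), round(s_mean, 4))   # 0.1581 0.0707
```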

Thus the problem of finding the approximate value of a physical quantity and its error has been solved. It remains to determine the reliability of the found real value. The reliability of a measurement is understood as the probability that the true value falls within a given confidence interval. The interval (x̄ − ε, x̄ + ε), which contains the true value X_0 with a given probability, is called the confidence interval. Suppose the probability that the measurement result x̄ differs from the true value X_0 by less than ε is 1 − α, i.e.

p(–e<X 0 <+ e) = 1 – a. (2.10)

In error theory, ε is usually expressed as ε = t σ_x̄. Therefore

p (– <X 0 <+ ) = Ф(t), (2.11)

where Φ(t) is the probability integral (Laplace function), i.e., the normal distribution function:

Φ(t) = (2/√(2π)) ∫₀^t exp(−z²/2) dz , (2.12) where t = ε/σ_x̄ .

Thus, to characterize the true value it is necessary to know both the error and the reliability. If the confidence interval is increased, the confidence that the true value X_0 falls within it increases. A high degree of reliability is necessary for critical measurements; in that case a large confidence interval must be chosen, or the measurements must be carried out with greater accuracy (i.e., σ_x̄ must be reduced), which can be done, for example, by repeating the measurements many times.

By confidence probability we mean the probability that the true value of the measured quantity falls within the given confidence interval. The confidence interval characterizes the accuracy of the measurement for a given sample, and the confidence probability characterizes its reliability.

In the vast majority of experimental problems a confidence level of 0.90–0.95 is used; higher reliability is rarely required. For t = 1, formulas (2.10)–(2.12) give 1 − α = Φ(t) = 0.683, i.e., more than 68% of measurements lie in the interval (x̄ − σ_x̄, x̄ + σ_x̄). For t = 2, 1 − α = 0.955, and for t = 3, 1 − α = 0.997; the latter means that almost all measured values lie in the interval (x̄ − 3σ_x̄, x̄ + 3σ_x̄). This example shows that the interval does indeed contain the majority of the measured values, i.e., the parameter α can serve as a good characteristic of the measurement accuracy.
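The quoted values 0.683, 0.955 and 0.997 can be checked numerically: for the standard normal law the probability integral of (2.12) reduces to the error function, Φ(t) = erf(t/√2).

```python
import math

def Phi(t):
    # Probability integral (Laplace function) as in Eq. (2.12);
    # for the standard normal law Φ(t) = erf(t / √2).
    return math.erf(t / math.sqrt(2))

for t in (1, 2, 3):
    print(t, round(Phi(t), 4))
# 1 0.6827
# 2 0.9545
# 3 0.9973
```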

Until now it was assumed that the number of measurements, though finite, is quite large. In reality it is almost always small; in engineering and in scientific research the results of two or three measurements are often used. In such a situation the quantities σ and S_n can at best determine only the order of magnitude of the variance. There is a correct method for determining the probability that the desired value lies in a given confidence interval, based on the Student distribution (proposed in 1908 by the English mathematician W. S. Gosset). Denote by Δx the interval by which the arithmetic mean may deviate from the true value X_0, i.e., Δx = X_0 − x̄. In other words, we want to determine the value

t_α = Δx / S_x̄ = Δx √n / S_n ,

where S_n is determined by formula (2.8). This value obeys the Student distribution. The Student distribution is characterized by the fact that it does not depend on the parameters X_0 and σ of the normal general population; for a small number of measurements (n < 20) it allows the error Δx = x̄ − X_0 to be estimated at a given confidence probability α, or, for a given Δx, the reliability of the measurements to be found. The distribution depends only on the variable t_α and on the number of degrees of freedom l = n − 1.


The Student distribution is valid for n ≥ 2 and is symmetric about t_α = 0 (see Fig. 3). As the number of measurements grows, the t_α-distribution tends to the normal distribution (in practice, for n > 20).

The confidence probability for a given measurement result error is obtained from the expression

p (–<X 0 <+) = 1 – a. (2.14)

Here the value t_α plays a role analogous to the coefficient t in formula (2.11). The quantity t_α is called the Student coefficient; its values are given in reference tables. Using relation (2.14) and the reference data, the inverse problem can also be solved: for a given reliability α, determine the permissible error of the measurement result.

The Student distribution also shows that, with probability arbitrarily close to certainty, for sufficiently large n the arithmetic mean differs arbitrarily little from the true value X_0.
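A sketch of a confidence-interval calculation using a Student coefficient; the value t_α = 2.776 is the standard reference-table entry for confidence probability 0.95 and l = n − 1 = 4 degrees of freedom, and the readings are hypothetical:

```python
import math

samples = [12.3, 12.5, 12.4, 12.6, 12.2]   # hypothetical readings
n = len(samples)
mean = sum(samples) / n
s_n = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
s_mean = s_n / math.sqrt(n)                # Eq. (2.9)

# Student coefficient from reference tables for confidence
# probability 0.95 and l = n - 1 = 4 degrees of freedom.
t_alpha = 2.776

eps = t_alpha * s_mean                     # half-width per Eq. (2.14)
print(round(mean, 2), "+/-", round(eps, 2))   # 12.4 +/- 0.2
```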

So far the distribution law of the random error was assumed known. Often, however, practical problems do not require knowledge of the distribution law; it suffices to study some numerical characteristics of the random variable, for example the mean value and the variance. Calculating the variance then makes it possible to estimate the confidence probability even when the error distribution law is unknown or differs from normal.

If only a single measurement is made, the accuracy of the measurement of the physical quantity (provided it is performed carefully) is characterized by the accuracy of the measuring device.

3. ERRORS OF INDIRECT MEASUREMENTS

Often in an experiment the desired quantity u(x_i) cannot be measured directly; only the quantities x_i can be measured.

For example, to measure the density ρ, the mass m and the volume V are usually measured, and the density is calculated from the formula ρ = m/V.

The quantities x_i contain, as usual, random errors, i.e., instead of x_i one observes x_i′ = x_i ± Δx_i. As before, we assume that the x_i are distributed according to the normal law.

1. Let u = f(x) be a function of one variable. In this case the absolute error is

Δu = |df/dx| Δx . (3.1)

The relative error of the result of the indirect measurement is

δu = Δu/u = |d(ln f)/dx| Δx . (3.2)

2. Let u = f(x, y) be a function of two variables. Then the absolute error is

Δu = √( (∂f/∂x)² Δx² + (∂f/∂y)² Δy² ) , (3.3)

and the relative error is

δu = Δu/u . (3.4)

3. Let u = f(x, y, z, …) be a function of several variables. Then, by analogy, the absolute error is

Δu = √( Σ (∂f/∂x_i)² Δx_i² ) , (3.5)

and the relative error is δu = Δu/u, where the errors Δx_i are determined according to formula (2.9).
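The density example ρ = m/V from the beginning of this section can be worked through with formula (3.3); the numerical values of m, V and their errors are hypothetical:

```python
import math

# Indirect measurement ρ = m/V; errors propagated via Eq. (3.3).
# Hypothetical inputs: m = 100.0 ± 0.5 g, V = 40.0 ± 0.4 cm³.
m, dm = 100.0, 0.5
V, dV = 40.0, 0.4

rho = m / V
# ∂ρ/∂m = 1/V,  ∂ρ/∂V = -m/V²
d_rho = math.sqrt((dm / V) ** 2 + (m * dV / V ** 2) ** 2)
delta_rho = d_rho / rho                  # relative error, Eq. (3.4)

print(round(rho, 3), round(d_rho, 4), round(100 * delta_rho, 2))
# 2.5 0.028 1.12
```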

Table 2 provides formulas for determining the errors of indirect measurements for some commonly used functions.

Table 2

Function u | Absolute error Δu                               | Relative error δu
e^x        | e^x Δx                                          | Δx
ln x       | Δx / x                                          | Δx / (x ln x)
sin x      | |cos x| Δx                                      | |ctg x| Δx
cos x      | |sin x| Δx                                      | |tg x| Δx
tg x       | Δx / cos² x                                     | 2Δx / |sin 2x|
ctg x      | Δx / sin² x                                     | 2Δx / |sin 2x|
x^y        | √( (y x^(y−1) Δx)² + (x^y ln x · Δy)² )        | √( (y Δx/x)² + (ln x · Δy)² )
x y        | √( (y Δx)² + (x Δy)² )                          | √( (Δx/x)² + (Δy/y)² )
x / y      | √( (Δx/y)² + (x Δy/y²)² )                       | √( (Δx/x)² + (Δy/y)² )

4. CHECKING THE NORMALITY OF DISTRIBUTION

All of the above confidence estimates, both of mean values and of variances, are based on the hypothesis that the random measurement errors follow the normal distribution law, and they can therefore be used only as long as the experimental results do not contradict this hypothesis.

If the results of an experiment raise doubts about the normality of the distribution law, then to resolve the question of the suitability or unsuitability of the normal distribution law, it is necessary to make a sufficiently large number of measurements and apply one of the methods described below.

Checking by the mean absolute deviation (MAD). This technique can be used for samples that are not too large (n < 120). The MAD is calculated from the formula:

MAD = (1/n) Σ |x_i − x̄| . (4.1)

For a sample with an approximately normal distribution law the following inequality must hold:

| MAD / S_n − 0.7979 | < 0.4 / √n . (4.2)

If inequality (4.2) is satisfied, the hypothesis of a normal distribution is confirmed.
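A minimal sketch of the MAD check (4.1)-(4.2) on a hypothetical sample:

```python
import math

samples = [12.3, 12.5, 12.4, 12.6, 12.2]   # hypothetical sample
n = len(samples)
mean = sum(samples) / n
s_n = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))

mad = sum(abs(x - mean) for x in samples) / n   # Eq. (4.1)

# Criterion (4.2): for a normal sample MAD/S_n ≈ 0.7979
lhs = abs(mad / s_n - 0.7979)
rhs = 0.4 / math.sqrt(n)
print(lhs < rhs)   # True -> normality is not rejected
```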

Checking by the χ² ("chi-square") goodness-of-fit criterion (Pearson's test). The criterion is based on comparing the empirical frequencies with the theoretical frequencies expected under the hypothesis of a normal distribution. After gross and systematic errors have been eliminated, the measurement results are grouped into intervals so that the intervals cover the whole axis and each interval contains a sufficient amount of data (at least five results). For each interval (x_{i−1}, x_i) the number m_i of measurement results falling within it is counted. Then the probability p_i of falling into this interval under the normal probability distribution law is calculated:

p_i = Φ((x_i − x̄)/S_n) − Φ((x_{i−1} − x̄)/S_n) , (4.3)

where Φ is the normal distribution function, and the statistic

χ² = Σ (m_i − n p_i)² / (n p_i) , (4.4)

where l is the number of intervals and n is the total number of measurement results (n = m_1 + m_2 + … + m_l).

If the value calculated from formula (4.4) turns out to be greater than the critical tabulated value of χ² for the chosen confidence level P and the number of degrees of freedom k = l − 3, then with reliability P it may be concluded that the probability distribution of the random errors in the series of measurements under consideration differs from normal. Otherwise there are no sufficient grounds for such a conclusion.
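A sketch of the χ² statistic (4.3)-(4.4); the bin edges, the counts, and the estimates x̄ and S_n are all hypothetical, and the normal distribution function is expressed through the error function:

```python
import math

def Phi(z):
    # Normal distribution function via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

# Hypothetical grouped data: interval edges and observed counts m_i.
edges  = [-math.inf, -1.0, 0.0, 1.0, math.inf]
counts = [14, 36, 34, 16]
n = sum(counts)

mean, s = 0.0, 1.0   # hypothetical estimates of x̄ and S_n

chi2 = 0.0
for i, m_i in enumerate(counts):
    # Probability of the interval under the normal law, Eq. (4.3)
    p_i = Phi((edges[i + 1] - mean) / s) - Phi((edges[i] - mean) / s)
    chi2 += (m_i - n * p_i) ** 2 / (n * p_i)    # Eq. (4.4)

print(round(chi2, 3))
```

The resulting χ² would then be compared with the tabulated critical value for k = l − 3 degrees of freedom.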

Checking by the indicators of asymmetry and kurtosis. This method gives an approximate estimate. The asymmetry indicator A and the kurtosis indicator E are determined from the formulas:

A = Σ (x_i − x̄)³ / (n S_n³) , (4.5)

E = Σ (x_i − x̄)⁴ / (n S_n⁴) − 3 . (4.6)

If the distribution is normal, both of these indicators should be small. Their smallness is usually judged by comparison with their mean square errors, which are calculated as:

S_A = √( 6 (n − 1) / ((n + 1)(n + 3)) ) , (4.7)

S_E = √( 24 n (n − 2)(n − 3) / ((n − 1)² (n + 3)(n + 5)) ) . (4.8)
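The indicators (4.5)-(4.8) for a hypothetical sample might be computed as follows (here the empirical standard S_n of (2.8) is used in the denominators):

```python
import math

samples = [12.3, 12.5, 12.4, 12.6, 12.2]   # hypothetical sample
n = len(samples)
mean = sum(samples) / n
s_n = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))

# Asymmetry and kurtosis indicators, Eqs. (4.5) and (4.6)
A = sum((x - mean) ** 3 for x in samples) / (n * s_n ** 3)
E = sum((x - mean) ** 4 for x in samples) / (n * s_n ** 4) - 3.0

# Their mean square errors, Eqs. (4.7) and (4.8)
S_A = math.sqrt(6 * (n - 1) / ((n + 1) * (n + 3)))
S_E = math.sqrt(24 * n * (n - 2) * (n - 3)
                / ((n - 1) ** 2 * (n + 3) * (n + 5)))

print(round(A, 3), round(E, 3), round(S_A, 3), round(S_E, 3))
```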

5. METHODS FOR ELIMINATING GROSS ERRORS

When a measurement result is obtained that differs sharply from all other results, the suspicion arises that a gross error has been made. In this case it must be checked immediately whether the basic measurement conditions were violated. If such a check was not made in time, the advisability of rejecting the sharply different value is decided by comparing it with the other measurement results. Different criteria apply depending on whether or not the mean square error σ_i of the measurements is known (it is assumed that all measurements are made with the same accuracy and independently of each other).

Elimination method for known σ_i. First the coefficient t is determined from the formula

t = |x* − x̄| / σ , (5.1)

where x* is the outlying value (the suspected gross error) and x̄ is determined by formula (2.1) without taking the suspected error x* into account.

Next, a significance level α is set; errors whose probability of occurrence is less than α are excluded. Usually one of three significance levels is used: the 5% level (errors whose probability of occurrence is less than 0.05 are excluded), the 1% level (correspondingly less than 0.01), and the 0.1% level (correspondingly less than 0.001).

At the chosen significance level α, the outlying value x* is considered a gross error and is excluded from further processing of the measurement results if the coefficient t calculated from formula (5.1) satisfies the condition 1 − Φ(t) < α.
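A sketch of the rejection criterion (5.1) together with the condition 1 − Φ(t) < α; the series, the suspect value x* and the value of σ are all hypothetical:

```python
import math

def Phi(t):
    # Probability integral as in Eq. (2.12)
    return math.erf(t / math.sqrt(2))

# Hypothetical series with one suspicious reading x*;
# σ is assumed known here.
samples = [12.3, 12.5, 12.4, 12.6, 12.2]
x_star = 13.8
sigma = 0.2

mean = sum(samples) / len(samples)   # x̄ without the suspect value, Eq. (2.1)
t = abs(x_star - mean) / sigma       # Eq. (5.1)

alpha = 0.01                         # 1% significance level
print(1 - Phi(t) < alpha)   # True -> x* is rejected as a gross error
```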

Elimination method for unknown σ_i.

If the mean square error of an individual measurement σ_i is not known in advance, it is estimated approximately from the measurement results using formula (2.8). The same algorithm is then applied as for known σ_i, with the only difference that in formula (5.1) the value S_n calculated by formula (2.8) is used instead of σ_i.

Three sigma rule.

Since the choice of the reliability of a confidence estimate allows some arbitrariness, the three sigma rule has become widespread in the processing of experimental results: the deviation of the true value of the measured quantity from the arithmetic mean of the measurement results does not exceed three times the mean square error of this mean.

Thus the three sigma rule represents the confidence estimate

x̄ − 3σ_x̄ < X_0 < x̄ + 3σ_x̄

in the case of a known value of σ, or the confidence estimate

x̄ − 3S_x̄ < X_0 < x̄ + 3S_x̄

in the case of an unknown σ.

The first of these estimates has a reliability of Φ(3) = 0.9973, regardless of the number of measurements.

The reliability of the second estimate depends significantly on the number of measurements n .

The dependence of the reliability P on the number of measurements n for estimating the gross error in the case of an unknown σ is given in Table 3.

Table 3

n  5     6     7     8     9     10    14    20    30    50    150   ∞
P  0.960 0.970 0.976 0.980 0.983 0.985 0.990 0.993 0.995 0.996 0.997 0.9973

6. PRESENTATION OF MEASUREMENT RESULTS

The measurement results can be presented in the form of graphs and tables. The latter method is the simplest. In some cases research results can only be presented in the form of a table. But a table does not give a clear picture of the dependence of one physical quantity on another, so in many cases a graph is constructed. It can be used to find the dependence of one quantity on another quickly, i.e., an analytical formula relating the quantities x and y is found from the measured data. Such formulas are called empirical. The accuracy with which the function y(x) can be read from a graph is determined by the quality of the graph. Consequently, when great accuracy is not required, graphs are more convenient than tables: they take up less space, readings can be made from them faster, and when they are constructed, outliers in the course of the function due to random measurement errors are smoothed out. If particularly high accuracy is required, the experimental results are better presented in the form of a table, and intermediate values are found using interpolation formulas.

Mathematical processing of the measurement results does not aim to reveal the true nature of the functional relationship between the variables; it only makes it possible to describe the results of the experiment by the simplest formula, allowing interpolation and the application of the methods of mathematical analysis to the observed data.

Graphic method. Most often, a rectangular coordinate system is used to construct graphs. To make construction easier, you can use graph paper. In this case, distance readings on graphs should be done only by divisions on paper, and not using a ruler, since the length of divisions can be different vertically and horizontally. First you need to select reasonable scales along the axes so that the measurement accuracy corresponds to the accuracy of the reading on the graph and the graph is not stretched or compressed along one of the axes, as this leads to an increase in the reading error.

Next, the points representing the measurement results are plotted on the graph. To distinguish different series of results, different symbols are used: circles, triangles, crosses, etc. Since in most cases the errors in the function values are greater than the errors in the argument, only the error of the function is plotted, as a segment whose length equals twice the error on the given scale. The experimental point lies in the middle of this segment, which is terminated by dashes at both ends. After this, a smooth curve is drawn so that it passes as close as possible to all experimental points, with approximately equal numbers of points on either side of the curve. The curve should (as a rule) lie within the measurement errors; the smaller these errors, the better the curve coincides with the experimental points. It is better to draw a smooth curve outside the error limits than to allow a break in the curve near a single point. If one or several points lie far from the curve, this often indicates a gross error in calculation or measurement. Curves on graphs are most often drawn with the aid of French curves (templates).

You should not take too many points when constructing a graph of a smooth dependence; only for curves with maxima and minima is it necessary to plot points more densely in the region of the extrema.

When constructing graphs, a technique called the alignment method or the stretched string method is often used. It is based on the geometric selection of a straight line “by eye”.

If this technique fails, then in many cases a curve can be transformed into a straight line by using one of the functional scales or grids. Logarithmic or semi-logarithmic grids are used most often. This technique is also useful when a section of the curve needs to be stretched or compressed. Thus, a logarithmic scale is convenient for depicting a quantity that varies by several orders of magnitude within the limits of the measurements. The method is recommended for finding approximate values of the coefficients in empirical formulas or for measurements with low data accuracy. On a logarithmic grid a straight line depicts a power dependence of the type y = A x^B, and on a semilogarithmic grid an exponential dependence of the type y = A e^(Bx). The coefficient B_0 (the intercept of the straightened line) may be zero in some cases. Note, however, that with a linear scale all values on the graph are measured with the same absolute accuracy, whereas with a logarithmic scale they are measured with the same relative accuracy.

It should also be noted that from the limited portion of the curve available (especially if not all points lie on the curve) it is often difficult to judge what type of function should be used for the approximation. The experimental points are therefore transferred to one or another coordinate grid, the grid on which the data lie closest to a straight line is identified, and the empirical formula is chosen accordingly.

Selection of empirical formulas. Although there is no general method for selecting the best empirical formula for arbitrary measurement results, it is still possible to find an empirical relationship that reflects the desired dependence most accurately. Complete agreement between the experimental data and the sought formula should not be pursued, since the interpolation polynomial or other approximating formula would then reproduce all the measurement errors, and its coefficients would have no physical meaning. Therefore, if the theoretical dependence is not known, one chooses the formula that matches the measured values better and contains fewer parameters. To determine a suitable formula, the experimental data are plotted graphically and compared with various curves plotted from known formulas on the same scale. By changing the parameters of a formula, the appearance of its curve can be changed to some extent. In the comparison it is necessary to take into account the extrema present, the behavior of the function for different values of the argument, and the convexity or concavity of the curve in different sections. Once a formula has been selected, its parameter values are determined so that the difference between the curve and the experimental data does not exceed the measurement errors.

In practice, linear, exponential and power dependencies are most often used.
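As an illustration of straightening on a logarithmic grid, a power dependence y = A x^B can be fitted by least squares after taking logarithms; the data below are generated from a hypothetical exact law, so the coefficients are recovered exactly:

```python
import math

# Hypothetical data generated by the power law y = 2 x^1.5 (no noise),
# illustrating straightening on a log-log grid: ln y = ln A + B ln x.
xs = [1.0, 2.0, 4.0, 8.0]
ys = [2.0 * x ** 1.5 for x in xs]

lx = [math.log(x) for x in xs]
ly = [math.log(y) for y in ys]
n = len(xs)

mx = sum(lx) / n
my = sum(ly) / n
# Least-squares slope and intercept of the straightened line
B = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
     / sum((u - mx) ** 2 for u in lx))
A = math.exp(my - B * mx)

print(round(A, 6), round(B, 6))   # 2.0 1.5
```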

7. SOME TASKS OF ANALYSIS OF EXPERIMENTAL DATA

Interpolation. By interpolation we understand, first, finding the values of a function for intermediate values of the argument absent from the table and, second, replacing a function with an interpolating polynomial when its analytical expression is unknown and the function must be subjected to certain mathematical operations. The simplest interpolation methods are linear and graphical. Linear interpolation can be used when the dependence y(x) is expressed by a straight line or by a curve so close to a straight line that such interpolation leads to no gross errors. In some cases linear interpolation is possible even for a complex dependence y(x), if it is applied over such a small change of the argument that the relationship between the variables may be considered linear without noticeable error. In graphical interpolation of an unknown function y(x), the function is replaced by an approximate graphical representation (based on experimental points or tabulated data), from which the values of y for any x within the measurement range are read off. However, accurate graphical construction of complex curves, for example curves with sharp extrema, is sometimes very difficult, so graphical interpolation is of limited use.
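Linear interpolation between tabulated points can be sketched as follows; the table of values is hypothetical:

```python
# A minimal sketch of linear interpolation between tabulated points
# (x_i, y_i), usable when y(x) is close to linear on each interval.
def lin_interp(x, xs, ys):
    # xs must be sorted in increasing order
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside the tabulated range")

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 2.0, 4.0, 6.0]        # hypothetical table of y(x) = 2x
print(lin_interp(1.5, xs, ys))   # 3.0
```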

Thus, in many cases neither linear nor graphical interpolation can be applied. For this reason interpolating functions were found that make it possible to calculate the values of y with sufficient accuracy for any functional dependence y(x), provided that it is continuous. The interpolating function has the form

y = B₀ + B₁·x + B₂·x² + … + Bₙ·xⁿ, (7.1)

where B₀, B₁, …, Bₙ are coefficients to be determined. Since polynomial (7.1) is represented by a curve of parabolic type, such interpolation is called parabolic.

The coefficients of the interpolating polynomial are found by solving the system of (n + 1) linear equations obtained by substituting the known values yᵢ and xᵢ into equation (7.1).

Interpolation is simplest when the intervals between the values of the argument are constant, i.e.

xᵢ₊₁ − xᵢ = h = const, (7.2)

where h is a constant called the step. In the general case

xᵢ = x₀ + i·h. (7.3)

When using interpolation formulas one has to deal with differences of the values of y and with differences of those differences, i.e. with differences of the function y(x) of various orders. Differences of any order are calculated using the formula

Δⁿyᵢ = Δⁿ⁻¹yᵢ₊₁ − Δⁿ⁻¹yᵢ. (7.4)

For example,

Δy₀ = y₁ − y₀, Δ²y₀ = Δy₁ − Δy₀, Δ³y₀ = Δ²y₁ − Δ²y₀.

When calculating differences it is convenient to arrange them in the form of a table (see Table 4), in each column of which the differences between the corresponding minuend and subtrahend are written, i.e. a table of diagonal type is compiled. Differences are usually written in units of the last digit.

Table 4

Differences of the function y(x)

x      y      Δy      Δ²y      Δ³y      Δ⁴y
x₀     y₀
              Δy₀
x₁     y₁             Δ²y₀
              Δy₁              Δ³y₀
x₂     y₂             Δ²y₁              Δ⁴y₀
              Δy₂              Δ³y₁
x₃     y₃             Δ²y₂
              Δy₃
x₄     y₄

Since the function y(x) is expressed by polynomial (7.1) of degree n in x, the differences are also polynomials, whose degree decreases by one on passing to the next difference. The n-th difference of a polynomial of degree n is a constant, i.e. it contains x to the zero power, and all differences of higher order are equal to zero. This determines the degree of the interpolating polynomial.
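The construction of such a difference table, and the check that the n-th differences of a degree-n polynomial are constant, is easy to mechanize. A minimal Python sketch (the function name is ours):

```python
def difference_table(y):
    """Return the columns [y, Δy, Δ²y, ...] of forward differences of tabulated values y."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        # each new column holds the differences of the previous one
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

# y = x² sampled at x = 0..4: second differences are constant, third are zero.
cols = difference_table([0, 1, 4, 9, 16])
print(cols[1], cols[2], cols[3])
```

Printing the columns side by side reproduces the diagonal layout of Table 4.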

By transforming function (7.1), we can obtain Newton's first interpolation formula:

y = y₀ + q·Δy₀ + q(q − 1)/2!·Δ²y₀ + … + q(q − 1)…(q − n + 1)/n!·Δⁿy₀, where q = (x − x₀)/h. (7.5)

It is used to find the values of y for any x within the measurement range. Formula (7.5) can be presented in a slightly different form:

y = y₀ + (x − x₀)/h·Δy₀ + (x − x₀)(x − x₁)/(2!·h²)·Δ²y₀ + … (7.6)

The last two formulas are sometimes called Newton's formulas for forward interpolation. They include the differences running diagonally downward, and they are convenient to use at the beginning of a table of experimental data, where there are enough differences.
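Newton's forward formula (7.5) lends itself directly to computation. A Python sketch, assuming equally spaced xs (the function name is ours, not from the source):

```python
def newton_forward(xs, ys, x):
    """Newton's first (forward) interpolation formula for equally spaced xs."""
    h = xs[1] - xs[0]
    q = (x - xs[0]) / h
    diffs = list(ys)       # will be reduced to the leading differences Δ^k y0
    result = diffs[0]
    coef = 1.0             # accumulates q(q-1)...(q-k+1)/k!
    for k in range(1, len(ys)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        coef *= (q - (k - 1)) / k
        result += coef * diffs[0]
    return result

# y = x² tabulated at x = 0, 1, 2, 3; interpolating at x = 1.5 gives 2.25 exactly.
print(newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5))
```

Because the data are exactly a second-degree polynomial, the third differences vanish and the result is exact, in line with the remark above on the degree of the interpolating polynomial.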

Newton's second interpolation formula, derived from the same equation (7.1), is as follows:

y = yₙ + q·Δyₙ₋₁ + q(q + 1)/2!·Δ²yₙ₋₂ + … + q(q + 1)…(q + n − 1)/n!·Δⁿy₀, where q = (x − xₙ)/h. (7.7)

This formula (7.7) is usually called Newton's interpolation formula for backward interpolation. It is used to determine the values of y at the end of the table.

Now let's consider interpolation for unequally spaced values ​​of the argument.

Suppose, as before, that the function y(x) is given by a series of values xᵢ and yᵢ, but the intervals between successive xᵢ are not equal. Newton's formulas given above cannot be used, since they contain the constant step h. In problems of this kind one has to calculate the divided differences:

[xᵢ; xᵢ₊₁] = (yᵢ₊₁ − yᵢ)/(xᵢ₊₁ − xᵢ); [xᵢ; xᵢ₊₁; xᵢ₊₂] = ([xᵢ₊₁; xᵢ₊₂] − [xᵢ; xᵢ₊₁])/(xᵢ₊₂ − xᵢ); etc. (7.8)

Differences of higher orders are calculated similarly. As in the case of equally spaced argument values, if f(x) is a polynomial of degree n, then the divided differences of order n are constant and those of higher order are equal to zero. In simple cases, tables of divided differences have a form similar to the difference tables for equally spaced values of the argument.

In addition to the Newton interpolation formulas considered above, the Lagrange interpolation formula is often used:

y = y₀·[(x − x₁)(x − x₂)…(x − xₙ)] / [(x₀ − x₁)(x₀ − x₂)…(x₀ − xₙ)] + y₁·[(x − x₀)(x − x₂)…(x − xₙ)] / [(x₁ − x₀)(x₁ − x₂)…(x₁ − xₙ)] + … + yₙ·[(x − x₀)(x − x₁)…(x − xₙ₋₁)] / [(xₙ − x₀)(xₙ − x₁)…(xₙ − xₙ₋₁)]. (7.9)

In this formula each term is a polynomial of degree n, and all the terms are of equal standing. Therefore none of them may be neglected until the calculations are finished.
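Formula (7.9) translates almost verbatim into code, and, unlike the Newton formulas above, it works for unequally spaced points. A minimal Python sketch (names ours):

```python
def lagrange(xs, ys, x):
    """Lagrange interpolation polynomial; valid for unequally spaced xs."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                # basis polynomial factor (x - xj)/(xi - xj)
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# y = x² tabulated at the unequally spaced points x = 0, 1, 3; value at x = 2:
print(lagrange([0, 1, 3], [0, 1, 9], 2.0))
```

Note that every term of the sum is computed; none may be dropped mid-calculation, exactly as the text warns.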

Reverse interpolation. In practice it is sometimes necessary to find the value of the argument that corresponds to a given value of the function. In this case the inverse function is interpolated; it should be borne in mind that the differences of the inverse function are not constant, so the interpolation must be carried out for unequally spaced values of the argument, i.e. using formula (7.8) or (7.9).

Extrapolation. Extrapolation is the calculation of values of a function y outside the range of argument values x in which the measurements were made. If the analytical expression of the desired function is unknown, extrapolation must be carried out very carefully, since the behavior of the function y(x) outside the measurement interval is not known. Extrapolation is permissible if the course of the curve is smooth and there is no reason to expect sudden changes in the process under study. However, it must be confined to narrow limits, for example within one step h; at more distant points incorrect values of y may be obtained. The same formulas are used for extrapolation as for interpolation: Newton's first formula is used when extrapolating backward, Newton's second formula when extrapolating forward, and Lagrange's formula in both cases. It should also be borne in mind that extrapolation leads to larger errors than interpolation.

Numerical integration.

Trapezoid formula. The trapezoidal formula is usually used when the values of the function have been measured at equally spaced values of the argument, i.e. with a constant step. By the trapezoidal rule, as an approximate value of the integral

I = ∫ₐᵇ y(x) dx (7.10)

one takes the value

I ≈ h·(y₀/2 + y₁ + y₂ + … + yₙ₋₁ + yₙ/2), (7.11)

Fig. 7.1. Comparison of numerical integration methods

The geometric interpretation of the trapezoidal formula (see Fig. 7.1) is as follows: the area of the curvilinear trapezoid is replaced by the sum of the areas of rectilinear trapezoids. The total error in calculating the integral by the trapezoidal formula is estimated as the sum of two errors: the truncation error caused by replacing the curvilinear trapezoid with rectilinear ones, and the rounding error caused by errors in measuring the values of the function. The truncation error for the trapezoidal formula is

R ≤ (b − a)·h²·M₂/12, where M₂ = max|y″(x)| on [a, b]. (7.12)
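The trapezoidal rule (7.11) for equally spaced samples can be sketched as follows (Python, function name ours):

```python
def trapezoid(y, h):
    """Trapezoidal rule for equally spaced samples y with step h:
    h*(y0/2 + y1 + ... + y_{n-1} + yn/2)."""
    return h * (0.5 * y[0] + sum(y[1:-1]) + 0.5 * y[-1])

# ∫₀¹ x dx = 0.5: the rule is exact for a linear integrand.
print(trapezoid([0.0, 0.5, 1.0], 0.5))
```

Consistent with error estimate (7.12), the rule is exact whenever y″(x) = 0, i.e. for linear integrands.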

Rectangle formulas. The rectangle formulas, like the trapezoidal formula, are used for equally spaced values of the argument. The approximate integral sum is determined by one of the formulas

I ≈ h·(y₀ + y₁ + … + yₙ₋₁), (7.13)

I ≈ h·(y₁ + y₂ + … + yₙ). (7.14)

The geometric interpretation of the rectangle formulas is given in Fig. 7.1. The error of formulas (7.13) and (7.14) is estimated by the inequality

R ≤ (b − a)·h·M₁/2, where M₁ = max|y′(x)| on [a, b]. (7.15)
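The left- and right-endpoint rectangle formulas (7.13) and (7.14) in the same style (names ours):

```python
def rect_left(y, h):
    """Left-endpoint rectangle rule: h*(y0 + y1 + ... + y_{n-1})."""
    return h * sum(y[:-1])

def rect_right(y, h):
    """Right-endpoint rectangle rule: h*(y1 + y2 + ... + yn)."""
    return h * sum(y[1:])

# For the increasing integrand y = x on [0, 1] the two rules bracket the true value 0.5.
print(rect_left([0.0, 0.5, 1.0], 0.5), rect_right([0.0, 0.5, 1.0], 0.5))
```

Their average is exactly the trapezoidal estimate, which explains the smaller error of formula (7.11).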

Simpson's formula. The integral is approximately determined by the formula

I ≈ h/3·(y₀ + 4y₁ + 2y₂ + 4y₃ + … + 2yₙ₋₂ + 4yₙ₋₁ + yₙ), (7.16)

where n is an even number. The error of Simpson's formula is estimated by the inequality

R ≤ (b − a)·h⁴·M₄/180, where M₄ = max|y⁽⁴⁾(x)| on [a, b]. (7.17)

Simpson's formula gives exact results when the integrand is a polynomial of degree no higher than three.
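Simpson's formula (7.16) as a Python sketch (name ours; the number of subintervals n must be even):

```python
def simpson(y, h):
    """Simpson's rule for equally spaced samples y with step h; len(y) must be odd."""
    n = len(y) - 1
    if n % 2:
        raise ValueError("number of subintervals must be even")
    # weights pattern 1, 4, 2, 4, ..., 4, 1
    s = y[0] + y[-1] + 4 * sum(y[1:-1:2]) + 2 * sum(y[2:-1:2])
    return h * s / 3

# ∫₀¹ x³ dx = 0.25: exact with only two subintervals, since the integrand is cubic.
print(simpson([0.0, 0.125, 1.0], 0.5))
```

The cubic example illustrates the closing remark: the fourth derivative of x³ vanishes, so estimate (7.17) gives zero truncation error.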

Numerical integration of differential equations. Consider the first-order ordinary differential equation y′ = f(x, y) with the initial condition y = y₀ at x = x₀. It is required to find its approximate solution y = y(x) on the segment [x₀, x_k].

Fig. 7.2. Geometric interpretation of Euler's method

To do this, the segment is divided into n equal parts of length h = (x_k − x₀)/n. The approximate values y₁, y₂, …, yₙ of the function y(x) at the division points x₁, x₂, …, xₙ = x_k are found by various methods.

Euler's broken-line method. Given the value y₀ = y(x₀), the remaining values yᵢ ≈ y(xᵢ) are calculated successively by the formula

yᵢ₊₁ = yᵢ + h·f(xᵢ, yᵢ), (7.18)

where i = 0, 1, …, n − 1.
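Euler's method (7.18) as a short Python sketch (names ours):

```python
def euler(f, x0, y0, xk, n):
    """Euler's broken-line method for y' = f(x, y) on [x0, xk] with n steps."""
    h = (xk - x0) / n
    x, y = x0, y0
    ys = [y0]
    for _ in range(n):
        y += h * f(x, y)   # step along the tangent at (x, y)
        x += h
        ys.append(y)
    return ys

# y' = y, y(0) = 1 on [0, 1] with two steps: the crude broken line gives 2.25
# against the exact e ≈ 2.718, illustrating the method's low accuracy.
print(euler(lambda x, y: y, 0.0, 1.0, 1.0, 2)[-1])
```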

Graphically, Euler's method is presented in Fig. 7.2: the graph of the solution y = y(x) is represented approximately by a broken line (hence the name of the method).

Runge-Kutta method. This method provides higher accuracy than Euler's method. The sought values yᵢ are calculated successively by the formula

yᵢ₊₁ = yᵢ + (k₁ + 2k₂ + 2k₃ + k₄)/6, (7.19)

where k₁ = h·f(xᵢ, yᵢ), k₂ = h·f(xᵢ + h/2, yᵢ + k₁/2), k₃ = h·f(xᵢ + h/2, yᵢ + k₂/2), k₄ = h·f(xᵢ + h, yᵢ + k₃).
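The classical fourth-order Runge-Kutta step (7.19) in the same style (names ours):

```python
def rk4(f, x0, y0, xk, n):
    """Classical fourth-order Runge-Kutta method for y' = f(x, y) on [x0, xk]."""
    h = (xk - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# y' = y, y(0) = 1: with only ten steps the result agrees with e ≈ 2.71828
# to about six figures, far better than Euler's method on the same grid.
print(rk4(lambda x, y: y, 0.0, 1.0, 1.0, 10))
```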

REVIEW OF SCIENTIFIC LITERATURE

A literature review is an essential part of any research report. The review should present the state of the question fully and systematically, allow an objective assessment of the scientific and technical level of the work, support a correct choice of the ways and means of achieving the goal, and permit an evaluation both of the effectiveness of these means and of the work as a whole. The subject of analysis in the review should be new ideas and problems, possible approaches to solving them, the results of previous studies, and economic data. Conflicting information contained in different literature sources must be analyzed and evaluated with particular care.

From the analysis of the literature it should become clear what in the given narrow field is known quite reliably, what is doubtful or controversial, what the priority and key tasks within the given technical problem are, and where and how to look for their solutions.


Research always has a narrow, specific goal. The review concludes with a justification of the choice of the goal and method; the review should prepare this decision, and from this follow its plan and the selection of material. The review considers only those narrow issues that can directly affect the solution of the problem, but covers them so completely as to embrace almost all the modern literature on the question.

ORGANIZATION OF REFERENCE AND INFORMATION ACTIVITIES

In our country, information activities are based on the principle of centralized processing of scientific documents, which makes it possible to achieve full coverage of information sources at the lowest cost and to summarize and systematize them in the most qualified manner. As a result of such processing, various forms of information publications are prepared. These include:

1) abstract journals (RJ) – the main information publication, containing mainly abstracts (sometimes annotations and bibliographic descriptions) of the sources of greatest interest to science and practice. Abstract journals, which give notice of newly published scientific and technical literature, allow retrospective searches, overcome language barriers, and make it possible to monitor achievements in related fields of science and technology;

2) signal information bulletins(SI), which include bibliographic descriptions of literature published in a certain field of knowledge and are essentially bibliographic indexes. Their main task is to promptly inform about all the latest scientific and technical literature, since this information appears much earlier than in abstract journals;

3) express information– information publications containing extended abstracts of articles, descriptions of inventions and other publications and allowing you not to refer to the original source. The purpose of express information is to quickly and fairly fully familiarize specialists with the latest achievements of science and technology;

4) analytical reviews– information publications that give an idea of ​​the state and development trends of a certain area (section, problem) of science and technology;

5) abstract reviews– pursuing the same purpose as analytical reviews, and at the same time being more descriptive in nature. The authors of abstract reviews do not provide their own assessment of the information contained in them;

6) printed bibliography cards, i.e. a complete bibliographic description of the source of information. They are among the signal publications and perform the functions of notifying about new publications and the possibilities of creating catalogs and card files necessary for every specialist and researcher;

7) annotated printed bibliography cards ;

8) bibliographic indexes .

Most of these publications are also distributed by individual subscription. Detailed information about them can be found in the “Catalogues of publications of scientific and technical information bodies” published annually.

In the 1950s and 1960s, many countries increasingly sought to create a single universal system of units that could become international. Among the general requirements for the basic and derived units, the requirement of coherence of such a system of units was put forward.

In 1954, the X General Conference on Weights and Measures established six basic units for international relations: meter, kilogram, second, ampere, kelvin, candela.

In 1960, the XI General Conference on Weights and Measures approved the International System of Units, abbreviated SI (from the initial letters of the French name Système International d'Unités).

As a result of modifications adopted by the General Conferences on Weights and Measures in 1967, 1971 and 1979, the system currently includes seven basic units (Table 3.3.1).

Table 3.3.1

Basic and additional units of physical quantities of the SI system

Quantity                                                Unit
Name                       Dimension   Designation      Name        Designation (Russian / international)

Basic units:
Length                     L           l                meter       м / m
Mass                       M           m                kilogram    кг / kg
Time                       T           t                second      с / s
Electric current           I           I                ampere      А / A
Thermodynamic temperature  Θ           T                kelvin      К / K
Amount of substance        N           n, ν             mole        моль / mol
Luminous intensity         J           J                candela     кд / cd

Additional units:
Plane angle                –           –                radian      рад / rad
Solid angle                –           –                steradian   ср / sr

The SI system of units has been in force on the territory of our country since January 1, 1982, in accordance with GOST 8.417-81. The SI system is a logical development of the preceding systems of units CGS, MKGSS, etc.

Definition and content of SI basic units.

In accordance with the decisions of the General Conference on Weights and Measures (GCPM), adopted in different years, the following definitions of the basic SI units are currently in effect.

Unit of length – meter – the length of the path traveled by light in vacuum in 1/299,792,458 of a second (decision of the XVII CGPM, 1983).

Unit of mass – kilogram – the mass equal to the mass of the international prototype of the kilogram (decision of the I CGPM, 1889).

Unit of time – second – the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom, unperturbed by external fields (decision of the XIII CGPM, 1967).

Unit of electric current – ampere – the strength of a constant current which, when passing through two parallel conductors of infinite length and negligible circular cross-section, placed 1 m apart in vacuum, would produce between these conductors a force equal to 2·10⁻⁷ N per meter of length (approved by the IX CGPM in 1948).

Thermodynamic temperature unit – kelvin (until 1967 called the degree Kelvin) – 1/273.16 of the thermodynamic temperature of the triple point of water. Expression of thermodynamic temperature in degrees Celsius is also allowed (resolution of the XIII CGPM, 1967).

Unit of amount of substance – mole – the amount of substance of a system containing as many structural elements as there are atoms in 0.012 kg of the carbon-12 nuclide (resolution of the XIV CGPM, 1971).

Luminous intensity unit – candela – the luminous intensity, in a given direction, of a source emitting monochromatic radiation with a frequency of 540·10¹² Hz, whose radiant intensity in this direction is 1/683 W/sr (resolution of the XVI CGPM, 1979).

Lecture 4.

Ensuring uniformity of measurements

Unity of measurements

When carrying out measurements it is necessary to ensure their uniformity. Uniformity of measurements is understood as a characteristic of the quality of measurements meaning that their results are expressed in legal units, that the sizes of those units, within established limits, are equal to the sizes of the reproduced quantities, and that the errors of the measurement results are known with a given probability and do not exceed the established limits.

The concept of "uniformity of measurements" is quite broad. It covers the most important tasks of metrology: unification of the units of physical quantities, development of systems for reproducing quantities and transferring their sizes to working measuring instruments with established accuracy, and a number of other questions. Uniformity of measurements must be ensured at whatever accuracy science and technology require. The activities of state and departmental metrological services, carried out in accordance with established rules, requirements and standards, are aimed at achieving and maintaining uniformity of measurements at the proper level.

At the state level, activities to ensure the uniformity of measurements are regulated by the standards of the State System for Ensuring the Uniformity of Measurements (GSI) or regulatory documents of metrological service bodies.

The State System for Ensuring the Uniformity of Measurements (GSI) is a set of interconnected rules, regulations, requirements and norms established by standards that determine the organization and methodology of carrying out work to assess and ensure measurement accuracy.

The legal basis for ensuring the uniformity of measurements is legal metrology, a set of state laws (the Law of the Russian Federation "On Ensuring the Uniformity of Measurements"), acts, and normative-technical documents of various levels regulating metrological rules, requirements and norms.

The technical basis of the GSI comprises:

1. The system (set) of state standards of units and scales of physical quantities is the country’s reference base.

2. A system for transferring the sizes of units and scales of physical quantities from the state standards to all measuring instruments, using reference standards and other means of verification.

3. A system for the development, launch into production and release into circulation of working measuring instruments, providing research, development, determination with the required accuracy of the characteristics of products, technological processes and other objects.

4. A system of state testing of measuring instruments (type approval of measuring instruments) intended for serial or mass production or for import from abroad in batches.

5. System of state and departmental metrological certification, verification and calibration of measuring instruments.

6. System of reference materials for the composition and properties of substances and materials, System of standard reference data on physical constants and properties of substances and materials.

The variety of individual units (force, for example, could be expressed in kilograms, pounds, etc.) and of systems of units created great difficulties in the worldwide exchange of scientific and economic achievements. Therefore, as early as the 19th century the need arose to create a unified international system that would include the units of measurement used in all branches of physics. However, agreement to introduce such a system was reached only in 1960.

The International System of Units is a correctly constructed and interconnected set of units of physical quantities. It was adopted in October 1960 at the XI General Conference on Weights and Measures. The abbreviated name of the system is SI (Système International).

In the USSR, GOST 9867-61 was introduced in 1961, establishing the preferred use of this system in all areas of science, technology and teaching. The standard currently in force is GOST 8.417-81 "GSI. Units of physical quantities", which establishes the units of physical quantities used in the USSR, their names, designations and rules of application. It was developed in full accordance with the SI system and ST SEV 1052-78.

The SI system consists of seven basic units, two additional units and a number of derived units. In addition to SI units, the use of submultiple and multiple units is allowed, obtained by multiplying the original values by 10ⁿ, where n = 18, 15, 12, …, −12, −15, −18. The names of multiple and submultiple units are formed by adding the corresponding decimal prefixes:

exa (E) = 10¹⁸; peta (P) = 10¹⁵; tera (T) = 10¹²; giga (G) = 10⁹; mega (M) = 10⁶; kilo (k) = 10³;

milli (m) = 10⁻³; micro (μ) = 10⁻⁶; nano (n) = 10⁻⁹; pico (p) = 10⁻¹²; femto (f) = 10⁻¹⁵; atto (a) = 10⁻¹⁸.
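For illustration, these prefixes are easy to tabulate in code. The sketch below (Python; the dictionary and function names are ours) converts a prefixed value to the base unit, as in the example of atmospheric pressure 95.2 kPa = 95,200 Pa:

```python
# Decimal SI prefixes from the list above (symbol -> factor); "u" stands in for μ.
SI_PREFIXES = {
    "E": 1e18, "P": 1e15, "T": 1e12, "G": 1e9, "M": 1e6, "k": 1e3,
    "m": 1e-3, "u": 1e-6, "n": 1e-9, "p": 1e-12, "f": 1e-15, "a": 1e-18,
}

def to_base(value, prefix):
    """Convert a prefixed value to the base unit, e.g. 95.2 kPa -> 95,200 Pa."""
    return value * SI_PREFIXES[prefix]

print(to_base(95.2, "k"))
```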

GOST 8.417-81 allows the use, in addition to the specified units, of a number of non-systemic units, as well as units temporarily approved for use until the relevant international decisions are adopted.

The first group includes: ton, day, hour, minute, year, liter, light year, volt-ampere.

The second group includes: nautical mile, carat, knot, rpm.

1.4.4 Basic units of SI.

Unit of length – meter (m)

A meter is equal to 1,650,763.73 wavelengths in vacuum of the radiation corresponding to the transition between the 2p₁₀ and 5d₅ levels of the krypton-86 atom.

The International Bureau of Weights and Measures and large national metrology laboratories have created installations for reproducing the meter in light wavelengths.

The unit of mass is kilogram (kg).

Mass is a measure of the inertia of bodies and their gravitational properties. A kilogram is equal to the mass of the international prototype of the kilogram.

The state primary standard of the SI kilogram is intended for reproduction, storage and transfer of the unit of mass to working standards.

The standard includes:

    A copy of the international prototype of the kilogram – platinum-iridium prototype No. 12, a weight in the form of a cylinder 39 mm in diameter and height.

    Equal-arm prismatic balances No. 1 for 1 kg with remote control, made by Ruphert (1895), and No. 2, manufactured at VNIIM in 1966.

Once every 10 years, the state standard is compared with a copy standard. Over 90 years, the mass of the state standard has increased by 0.02 mg due to dust, adsorption and corrosion.

At present mass is the only base quantity whose unit is defined through a material standard. This definition has a number of disadvantages: the mass of the standard changes over time, and the standard is not reproducible. Research is underway to express the unit of mass through natural constants, for example through the mass of the proton. It is also planned to develop a standard based on a certain number of Si-28 silicon atoms. To solve this problem, the accuracy of measuring Avogadro's number must first be increased.

The unit of time is second (s).

Time is one of the central concepts of our worldview, one of the most important factors in the life and activities of people. It is measured using stable periodic processes - the annual rotation of the Earth around the Sun, daily - the rotation of the Earth around its axis, and various oscillatory processes. The definition of the unit of time, the second, has changed several times in accordance with the development of science and the requirements for measurement accuracy. The current definition is:

A second is equal to 9192631770 periods of radiation corresponding to the transition between two hyperfine levels of the ground state of the cesium 133 atom.

Currently a reference standard of time, frequency and length has been created and is used by the time and frequency service. Radio signals make it possible to transmit the unit of time, so it is widely available. The error of the standard second is 1·10⁻¹⁹ s.

The unit of electric current is ampere (A)

An ampere is equal to the strength of an invariable current which, when passing through two parallel straight conductors of infinite length and negligibly small cross-sectional area, located in vacuum at a distance of 1 meter from each other, would produce on each meter-long section of conductor an interaction force equal to 2·10⁻⁷ N.

The error of the ampere standard is 4·10⁻⁶ A. This unit is reproduced by means of a so-called current balance, which is accepted as the ampere standard. It is planned to use the volt as the basic electrical unit instead, since its reproduction error is 5·10⁻⁸ V.

Unit of thermodynamic temperature – Kelvin (K)

Temperature is a value that characterizes the degree of heating of a body.

Since Galileo's invention of the thermometer, temperature measurement has been based on the use of some thermometric substance that changes its volume or pressure with a change in temperature.

All known temperature scales (Fahrenheit, Celsius, Kelvin) are based on some reference points to which different numerical values ​​are assigned.

Kelvin and, independently of him, Mendeleev expressed the idea of constructing a temperature scale based on a single reference point, taken to be the "triple point of water", i.e. the point of equilibrium of water in the solid, liquid and gaseous phases. It can currently be reproduced in special vessels with an error of no more than 0.0001 degree Celsius. The lower limit of the temperature interval is absolute zero. If this interval is divided into 273.16 parts, the resulting unit of measurement is called the kelvin.

Kelvin is 1/273.16 part of the thermodynamic temperature of the triple point of water.

The symbol T denotes temperature expressed in kelvins, and t temperature in degrees Celsius. The conversion is made by the formula T = t + 273.15. A degree Celsius is equal to one kelvin (both units are approved for use).
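The conversion between the two scales is a one-line formula; a minimal sketch (function names ours):

```python
def celsius_to_kelvin(t):
    """T [K] = t [°C] + 273.15."""
    return t + 273.15

def kelvin_to_celsius(T):
    """t [°C] = T [K] - 273.15."""
    return T - 273.15

# 0 °C corresponds to 273.15 K; absolute zero is -273.15 °C.
print(celsius_to_kelvin(0.0), kelvin_to_celsius(0.0))
```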

The unit of luminous intensity is candela (cd)

Luminous intensity is a quantity that characterizes the glow of a source in a certain direction, equal to the ratio of the luminous flux to the small solid angle in which it propagates.

The candela is equal to the luminous intensity, in a given direction, of a source emitting monochromatic radiation with a frequency of 540·10¹² Hz, whose radiant intensity in that direction is 1/683 W/sr (watt per steradian).

The error in reproducing the unit with the standard is 1·10⁻³ cd.

The unit of quantity of a substance is the mole.

A mole is equal to the amount of substance of a system containing as many structural elements as there are atoms in 0.012 kg of carbon-12.

When using a mole, the structural elements must be specified and can be atoms, molecules, ions, electrons, or specified groups of particles.

Additional SI units

The international system includes two additional units, for measuring plane and solid angles. They cannot be basic units, since they are dimensionless quantities: assigning an independent dimension to an angle would require changing the equations of mechanics related to rotational and curvilinear motion. However, they are not derived units either, since they do not depend on the choice of the basic units. Therefore these units are included in the SI as additional ones, necessary for forming certain derived units such as angular velocity and angular acceleration.

The unit of plane angle is radian (rad)

A radian is equal to the angle between two radii of a circle, the length of the arc between which is equal to the radius.

The state primary standard of the radian consists of a 36-sided prism and a reference goniometric autocollimation installation with a reading-device division value of 0.01″. The unit of plane angle is reproduced by the calibration method, based on the fact that the sum of all central angles of a polyhedral prism is equal to 2π rad.

The unit of solid angle is steradian (sr)

The steradian is equal to the solid angle with its vertex at the center of the sphere, cutting out on the surface of the sphere an area equal to the area of ​​a square with a side equal to the radius of the sphere.

The solid angle is measured by determining the plane angle at the apex of the cone. A solid angle of 1 sr corresponds to a plane angle of 65°32′. For conversion the formula

Ω = 2π(1 − cos(α/2))

is used, where Ω is the solid angle in steradians and α is the plane angle at the apex in degrees.

A solid angle of π corresponds to a plane angle of 120°, and a solid angle of 2π to a plane angle of 180°.
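The conversion formula can be checked against these reference values with a short sketch (Python; the function name is ours):

```python
import math

def solid_angle(alpha_deg):
    """Solid angle Ω (in sr) of a cone with apex plane angle alpha (in degrees):
    Ω = 2π(1 − cos(α/2))."""
    return 2 * math.pi * (1 - math.cos(math.radians(alpha_deg) / 2))

# A plane angle of 120° gives π sr, and 180° (a hemisphere) gives 2π sr.
print(solid_angle(120.0), solid_angle(180.0))
```

Solving Ω = 1 sr for α reproduces the quoted plane angle of about 65°32′.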

Usually angles are measured in degrees - this is more convenient.

Advantages of SI

    It is universal, that is, it covers all measurement areas. With its implementation, you can abandon all other unit systems.

    It is coherent, that is, a system in which the derived units of all quantities are obtained using equations with numerical coefficients equal to the dimensionless unit (the system is coherent and consistent).

    The units in the system are unified: instead of a number of units of energy and work (kilogram-force meter, erg, calorie, kilowatt-hour, electron-volt, etc.) there is a single unit for measuring work and all types of energy, the joule.

    There is a clear distinction between units of mass and force (kg and N).

Disadvantages of SI

    Not all units have a size convenient for practical use: the pressure unit Pa is a very small value; unit of electrical capacitance F is a very large value.

    Inconvenience of measuring angles in radians (degrees are easier to perceive)

    Many derived quantities do not yet have their own names.

Thus, the adoption of SI is the next and very important step in the development of metrology, a step forward in improving systems of units of physical quantities.

In principle, one can imagine any large number of different systems of units, but only a few are widely used. All over the world, the metric system is used for scientific and technical measurements and in most countries in industry and everyday life.

Basic units.

In the system of units, for each measured physical quantity there must be a corresponding unit of measurement. Thus, a separate unit of measurement is needed for length, area, volume, speed, etc., and each such unit can be determined by choosing one or another standard. But the system of units turns out to be much more convenient if in it only a few units are selected as basic ones, and the rest are determined through the basic ones. So, if the unit of length is a meter, the standard of which is stored in the State Metrological Service, then the unit of area can be considered a square meter, the unit of volume is a cubic meter, the unit of speed is a meter per second, etc.

The convenience of such a system of units (especially for scientists and engineers, who deal with measurements much more often than other people) is that the mathematical relationships between the basic and derived units of the system turn out to be simpler. In this case, a unit of speed is a unit of distance (length) per unit of time, a unit of acceleration is a unit of change in speed per unit of time, a unit of force is a unit of acceleration per unit of mass, etc. In mathematical notation it looks like this: v = l/t, a = v/t, F = ma = ml/t². The presented formulas show the "dimension" of the quantities under consideration, establishing relationships between units. (Similar formulas allow you to determine units for quantities such as pressure or electric current.) Such relationships are of a general nature and are valid regardless of what units (meter, foot or arshin) the length is measured in and what units are chosen for other quantities.

In technology, the basic unit of measurement of mechanical quantities is usually taken not as a unit of mass, but as a unit of force. Thus, if in the system most commonly used in physical research, a metal cylinder is taken as a standard of mass, then in a technical system it is considered as a standard of force that balances the force of gravity acting on it. But since the force of gravity is not the same at different points on the Earth's surface, location specification is necessary to accurately implement the standard. Historically, the location was sea level at a latitude of 45°. Currently, such a standard is defined as the force necessary to give the specified cylinder a certain acceleration. True, in technology, measurements are usually not carried out with such high accuracy that it is necessary to take care of variations in gravity (if we are not talking about the calibration of measuring instruments).

There is a lot of confusion surrounding the concepts of mass, force and weight. The fact is that there are units of all these three quantities that have the same names. Mass is an inertial characteristic of a body, showing how difficult it is to remove it from a state of rest or uniform and linear motion by an external force. A unit of force is a force that, acting on a unit of mass, changes its speed by one unit of speed per unit of time.

All bodies attract each other. Thus, any body near the Earth is attracted to it. In other words, the Earth creates the force of gravity acting on the body. This force is called its weight. The force of weight, as stated above, is not the same at different points on the surface of the Earth and at different altitudes above sea level due to differences in gravitational attraction and in the manifestation of the Earth's rotation. However, the total mass of a given amount of substance is unchanged; it is the same both in interstellar space and at any point on Earth.

Precise experiments have shown that the force of gravity acting on different bodies (i.e. their weight) is proportional to their mass. Consequently, masses can be compared on a balance, and masses that turn out to be equal in one place will be equal in any other place (if the comparison is carried out in a vacuum to exclude the influence of displaced air). If a body is weighed on a spring scale, which balances the force of gravity against the force of a stretched spring, the result of the weight measurement depends on the place where it is taken. Spring scales must therefore be adjusted at each new location so that they indicate the mass correctly. The simplicity of the weighing procedure itself was the reason why the force of gravity acting on the standard mass was adopted as an independent unit of measurement in technology.
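The distinction can be made concrete with a small sketch (our own illustration; the values of g are standard approximate reference figures, not from the source): the same 1 kg mass pulls on a spring scale with a measurably different force at the equator and at the poles, while a beam-balance comparison against a standard mass is unaffected.

```python
# A spring scale measures force (newtons); a balance compares masses.
# Approximate local gravitational accelerations (reference values):
g_equator = 9.780   # m/s^2
g_pole    = 9.832   # m/s^2

def spring_scale_force(mass_kg, g_local):
    """Force exerted on a spring scale by a mass at local gravity, in N."""
    return mass_kg * g_local

m = 1.0  # kg
w_equator = spring_scale_force(m, g_equator)
w_pole    = spring_scale_force(m, g_pole)

# The readings differ by about 0.5%, although the mass is unchanged.
print(w_equator, w_pole)
```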

Metric system of units.

The metric system is the general name for the international decimal system of units, the basic units of which are the meter and the kilogram. Although there are some differences in details, the elements of the system are the same throughout the world.

Story.

The metric system grew out of regulations adopted by the French National Assembly in 1791 and 1795 defining the meter as one ten-millionth of the quarter of the earth's meridian from the North Pole to the equator.

By a decree issued on July 4, 1837, the metric system was declared mandatory for use in all commercial transactions in France. It gradually displaced local and national systems in other European countries and was legally sanctioned in the UK and USA. An agreement signed on May 20, 1875 by seventeen countries created an international organization charged with preserving and improving the metric system.

It is clear that by defining the meter as a ten-millionth part of a quarter of the earth's meridian, the creators of the metric system sought to achieve invariance and accurate reproducibility of the system. They took the gram as a unit of mass, defining it as the mass of one millionth of a cubic meter of water at its maximum density. Since it would not be very convenient to carry out geodetic measurements of a quarter of the earth's meridian with each sale of a meter of cloth or to balance a basket of potatoes at the market with the appropriate amount of water, metal standards were created that reproduced these ideal definitions with extreme accuracy.

It soon became clear that metal length standards could be compared with each other, introducing much less error than when comparing any such standard with a quarter of the earth's meridian. In addition, it became clear that the accuracy of comparing metal mass standards with each other is much higher than the accuracy of comparing any such standard with the mass of the corresponding volume of water.

In this regard, the International Commission on the Metre in 1872 decided to accept the "archival" meter kept in Paris "as it is" as the standard of length. Similarly, the members of the Commission accepted the archival platinum-iridium kilogram as the standard of mass, "considering that the simple relationship established by the creators of the metric system between the unit of weight and the unit of volume is represented by the existing kilogram with an accuracy sufficient for ordinary applications in industry and commerce, and the exact sciences need not a simple numerical relationship of this kind, but an extremely perfect definition of this relationship." In 1875, many countries of the world signed the Metre Convention, which established a procedure for coordinating metrological standards for the world scientific community through the International Bureau of Weights and Measures and the General Conference on Weights and Measures.

The new international organization immediately began developing international standards for length and mass and transmitting copies of them to all participating countries.

Standards of length and mass, international prototypes.

The international prototypes of the standards of length and mass, the meter and the kilogram, were deposited with the International Bureau of Weights and Measures, located in Sèvres, a suburb of Paris. The meter standard was a ruler made of a platinum alloy with 10% iridium, whose cross-section was given a special X-shape to increase bending rigidity with a minimum volume of metal. In the groove of this ruler there was a longitudinal flat surface, and the meter was defined as the distance between the centers of two strokes applied across the ruler at its ends, at a standard temperature of 0°C. The international prototype of the kilogram was a cylinder made of the same platinum-iridium alloy as the standard meter, with a height and diameter of about 3.9 cm. The weight of this standard mass, equal to 1 kg at sea level at a latitude of 45°, is sometimes called the kilogram-force. Thus it can be used either as a standard of mass for an absolute system of units, or as a standard of force for a technical system of units in which one of the base units is the unit of force.

The international prototypes were selected from a large batch of identical standards produced at the same time. The other standards of this batch were transferred to the participating countries as national prototypes (state primary standards), which are periodically returned to the International Bureau for comparison with the international standards. Comparisons made at various times since then show no deviations (from the international standards) beyond the limits of measurement accuracy.

International SI system.

The metric system was very favorably received by scientists of the 19th century, partly because it was proposed as an international system of units, partly because its units were theoretically assumed to be independently reproducible, and also because of its simplicity. Scientists set about developing new units for the various physical quantities they dealt with, based on elementary laws of physics and linking these units to the metric units of length and mass. Metric units increasingly took hold in the various European countries, in which many unrelated units for different quantities had previously been in use.

Although all countries that adopted the metric system of units had nearly the same standards for metric units, various discrepancies in derived units arose between different countries and different disciplines. In the field of electricity and magnetism, two separate systems of derived units emerged: electrostatic, based on the force with which two electric charges act on each other, and electromagnetic, based on the force of interaction between two hypothetical magnetic poles.

The situation became even more complicated with the advent of the so-called practical electrical units, introduced in the mid-19th century by the British Association for the Advancement of Science to meet the demands of rapidly developing wire telegraph technology. These practical units coincide with the units of neither of the two systems mentioned above, but differ from the units of the electromagnetic system only by factors equal to whole powers of ten.

Thus, for such common electrical quantities as voltage, current and resistance there were several options for accepted units of measurement, and each scientist, engineer and teacher had to decide for himself which of these options was best to use. With the development of electrical engineering in the second half of the 19th and the first half of the 20th century, the practical units were used more and more widely and eventually came to dominate the field.

To eliminate this confusion, at the beginning of the 20th century a proposal was put forward to combine the practical electrical units with the corresponding mechanical units based on the metric units of length and mass, and to build a coherent system. In 1960, the XI General Conference on Weights and Measures adopted a unified International System of Units (SI), defined the base units of this system and prescribed the use of certain derived units, "without prejudice to others that may be added in the future." Thus, for the first time in history, an international coherent system of units was adopted by international agreement. It is now accepted as the legal system of units of measurement by most countries of the world.

The International System of Units (SI) is a coherent system that provides one and only one unit of measurement for any physical quantity, such as length, time or force. Some of the units are given special names (an example is the unit of pressure, the pascal), while the names of others are derived from the units from which they are formed (for example, the unit of speed, the meter per second). The base units, together with the two supplementary geometric ones, are presented in Table 1. The derived units for which special names are adopted are given in Table 2. Of all the derived mechanical units, the most important are the unit of force, the newton, the unit of energy, the joule, and the unit of power, the watt. The newton is defined as the force that imparts an acceleration of one meter per second squared to a mass of one kilogram. The joule is equal to the work done when the point of application of a force of one newton moves a distance of one meter in the direction of the force. The watt is the power at which one joule of work is done in one second. Electrical and other derived units will be discussed below. The official definitions of the base and supplementary units are as follows.
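The chain of definitions newton, joule, watt can be illustrated numerically. A minimal sketch (our own, with arbitrary illustrative values):

```python
# F = m*a (newtons), W = F*d (joules), P = W/t (watts),
# following the definitions of the derived mechanical units.

mass = 2.0        # kg
accel = 3.0       # m/s^2
force = mass * accel        # 6.0 N

distance = 4.0    # m, along the direction of the force
work = force * distance     # 24.0 J

time = 8.0        # s
power = work / time         # 3.0 W

print(force, work, power)
```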

A meter is the length of the path traveled by light in a vacuum in 1/299,792,458 of a second. This definition was adopted in October 1983.

A kilogram is equal to the mass of the international prototype of the kilogram.

A second is the duration of 9,192,631,770 periods of radiation oscillations corresponding to transitions between two levels of the hyperfine structure of the ground state of the cesium-133 atom.

Kelvin is equal to 1/273.16 of the thermodynamic temperature of the triple point of water.

A mole is equal to the amount of a substance that contains as many structural elements as there are atoms in 0.012 kg of the carbon-12 isotope.

A radian is a plane angle between two radii of a circle, the length of the arc between which is equal to the radius.

The steradian is equal to the solid angle with its vertex at the center of a sphere that cuts out on the sphere's surface an area equal to the area of a square with a side equal to the radius of the sphere.
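The two angular units reduce to simple ratios: a plane angle in radians is arc length over radius, and a solid angle in steradians is intercepted area over radius squared. A small sketch (function names are our own) checks both definitions, including the fact that a full sphere subtends 4π steradians:

```python
import math

def plane_angle(arc_length, radius):
    """Plane angle in radians: arc length divided by radius."""
    return arc_length / radius

def solid_angle(area, radius):
    """Solid angle in steradians: intercepted area divided by radius squared."""
    return area / radius**2

r = 2.0
# An arc equal in length to the radius subtends exactly 1 radian:
one_radian = plane_angle(r, r)

# The whole sphere (area 4*pi*r^2) subtends 4*pi steradians:
full_sphere = solid_angle(4 * math.pi * r**2, r)

print(one_radian, full_sphere)
```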

To form decimal multiples and submultiples, a set of prefixes and factors is prescribed, listed in Table 3.

Table 3. Prefixes and multipliers of the international system of units

Multiples:     exa = 10¹⁸, peta = 10¹⁵, tera = 10¹², giga = 10⁹, mega = 10⁶, kilo = 10³, hecto = 10², deka = 10¹

Submultiples:  deci = 10⁻¹, centi = 10⁻², milli = 10⁻³, micro = 10⁻⁶, nano = 10⁻⁹, pico = 10⁻¹², femto = 10⁻¹⁵, atto = 10⁻¹⁸

Thus, a kilometer (km) is 1000 m, and a millimeter is 0.001 m. (These prefixes apply to all units, such as kilowatts, milliamps, etc.)
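Prefix arithmetic is just multiplication by the appropriate power of ten. A small lookup-table sketch (our own helper; the factors are the standard SI values) converts a value expressed with one prefix into another:

```python
# Standard SI prefix factors (an empty string means the unprefixed unit).
PREFIX = {
    "exa": 1e18, "peta": 1e15, "tera": 1e12, "giga": 1e9,
    "mega": 1e6, "kilo": 1e3, "hecto": 1e2, "deka": 1e1,
    "": 1.0,
    "deci": 1e-1, "centi": 1e-2, "milli": 1e-3, "micro": 1e-6,
    "nano": 1e-9, "pico": 1e-12, "femto": 1e-15, "atto": 1e-18,
}

def convert(value, from_prefix, to_prefix):
    """Re-express a value given with one prefix in terms of another."""
    return value * PREFIX[from_prefix] / PREFIX[to_prefix]

print(convert(1, "kilo", "milli"))   # 1 km expressed in mm: 10^6
print(convert(555, "nano", ""))      # 555 nm expressed in m
```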

It was originally intended that one of the base units should be the gram, and this is reflected in the names of the units of mass, but nowadays the base unit is the kilogram. Instead of the name megagram, the word "ton" is used. In physics disciplines, for example in measuring the wavelength of visible or infrared light, a millionth of a meter (the micrometer) is often used. In spectroscopy, wavelengths are often expressed in angstroms (Å); an angstrom is equal to one tenth of a nanometer, i.e. 10⁻¹⁰ m. For radiation of shorter wavelengths, such as X-rays, scientific publications allow the use of the picometer and the x-unit (1 x-unit = 10⁻¹³ m). A volume equal to 1000 cubic centimeters (one cubic decimeter) is called a liter (L).

Mass, length and time.

All SI base units, except the kilogram, are currently defined in terms of physical constants or phenomena that are considered immutable and reproducible with high accuracy. As for the kilogram, no way has yet been found to realize it with the degree of reproducibility achieved in the procedures for comparing various mass standards with the international prototype of the kilogram. Such a comparison can be carried out by weighing on a spring balance whose error does not exceed 1×10⁻⁸. Standards of multiple and submultiple units of the kilogram are established by combined weighing on balances.

Since the meter is defined in terms of the speed of light, it can be reproduced independently in any well-equipped laboratory. Thus, by the interference method, the line and end measures of length used in workshops and laboratories can be checked by direct comparison with the wavelength of light. The error of such methods under optimal conditions does not exceed one part in a billion (1×10⁻⁹). With the development of laser technology, such measurements have become much simpler, and their range has expanded significantly.

Likewise, the second, according to its modern definition, can be independently realized in a properly equipped laboratory with an atomic-beam apparatus. The atoms of the beam are excited by a high-frequency oscillator tuned to the atomic frequency, and an electronic circuit measures time by counting the periods of oscillation in the oscillator circuit. Such measurements can be carried out with an accuracy of the order of 1×10⁻¹², much higher than was possible with the previous definitions of the second based on the rotation of the Earth and its revolution around the Sun. Time and its reciprocal, frequency, are unique in that their standards can be transmitted by radio. Thanks to this, anyone with suitable radio receiving equipment can receive signals of exact time and reference frequency practically indistinguishable in accuracy from those transmitted over the air.
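Realizing the second amounts to counting periods of the caesium transition: the count per second is exact by definition. A minimal sketch (our own helper names):

```python
# By definition, one second is 9 192 631 770 periods of the caesium-133
# hyperfine transition radiation.
CS_PERIODS_PER_SECOND = 9_192_631_770

def elapsed_seconds(period_count):
    """Time elapsed after counting the given number of caesium periods."""
    return period_count / CS_PERIODS_PER_SECOND

print(elapsed_seconds(9_192_631_770))       # exactly one second
print(elapsed_seconds(9_192_631_770 * 60))  # one minute
```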

Mechanics.

Temperature and heat.

Mechanical units alone do not suffice for solving all scientific and technical problems; other relationships must be brought in. Although the work done in moving a mass against a force, and the kinetic energy of a moving mass, are equivalent in nature to the thermal energy of a substance, it is more convenient to treat temperature and heat as separate quantities, independent of the mechanical ones.

Thermodynamic temperature scale.

The unit of thermodynamic temperature, the kelvin (K), is defined by the triple point of water, i.e. the temperature at which water is in equilibrium with ice and vapor. This temperature is taken to be 273.16 K, which fixes the thermodynamic temperature scale. This scale, proposed by Kelvin, is based on the second law of thermodynamics. If there are two heat reservoirs of constant temperature and a reversible heat engine transferring heat from one of them to the other in accordance with the Carnot cycle, then the ratio of the thermodynamic temperatures of the two reservoirs is given by T2/T1 = –Q2/Q1, where Q2 and Q1 are the amounts of heat transferred to each of the reservoirs (the minus sign indicates that heat is taken from one of the reservoirs). Thus, if the temperature of the warmer reservoir is 273.16 K and the heat taken from it is twice the heat transferred to the other reservoir, then the temperature of the second reservoir is 136.58 K. If the temperature of the second reservoir is 0 K, then no heat is transferred to it at all, since all the gas energy has been converted into mechanical energy in the adiabatic expansion stage of the cycle. This temperature is called absolute zero. The thermodynamic temperature commonly used in scientific research coincides with the temperature in the equation of state of an ideal gas, PV = RT, where P is the pressure, V the volume and R the gas constant. The equation shows that for an ideal gas the product of volume and pressure is proportional to temperature. This law is not exactly satisfied by any real gas, but if corrections are made for the non-ideality of the gas, the expansion of gases allows the thermodynamic temperature scale to be reproduced.
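The worked example in the paragraph above can be checked numerically. A sketch with illustrative numbers (R is the standard molar gas constant; the volume is our own choice):

```python
# Carnot relation: T2/T1 = -Q2/Q1 for a reversible engine.
T1 = 273.16     # K, the warmer reservoir (triple point of water)
Q1 = 1000.0     # J taken from the warm reservoir
Q2 = -500.0     # J delivered to the cold reservoir (half of Q1, sign: heat leaves)

T2 = T1 * (-Q2 / Q1)
print(T2)       # 136.58 K, as in the worked example in the text

# Ideal-gas equation of state PV = RT for one mole:
R = 8.314       # J/(mol*K), molar gas constant
V = 0.0224      # m^3, roughly the molar volume near 0 degrees C
P = R * T2 / V  # pressure in pascals at that volume and temperature
print(round(P))
```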

International temperature scale.

In accordance with the definition outlined above, temperature can be measured with very high accuracy (up to approximately 0.003 K near the triple point) by gas thermometry. A platinum resistance thermometer and a gas reservoir are placed in a thermally insulated chamber. When the chamber is heated, the electrical resistance of the thermometer increases and the gas pressure in the reservoir rises (in accordance with the equation of state); when it is cooled, the opposite is observed. By measuring resistance and pressure simultaneously, one can calibrate the thermometer against the gas pressure, which is proportional to temperature. The thermometer is then placed in a thermostat in which liquid water is maintained in equilibrium with its solid and vapor phases. By measuring its electrical resistance at this temperature, the thermodynamic scale is obtained, since the temperature of the triple point is assigned the value 273.16 K.

There are two international temperature scales, Kelvin (K) and Celsius (°C). Temperature on the Celsius scale is obtained from temperature on the Kelvin scale by subtracting 273.15 from the latter.
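The two scales differ only by a fixed offset. A sketch of the conversion (helper names are our own):

```python
# Celsius = Kelvin - 273.15, and vice versa.

def kelvin_to_celsius(t_k):
    return t_k - 273.15

def celsius_to_kelvin(t_c):
    return t_c + 273.15

print(kelvin_to_celsius(273.16))   # triple point of water, ~0.01 C
print(celsius_to_kelvin(100.0))    # boiling point of water, 373.15 K
```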

Accurate temperature measurements by gas thermometry require much labor and time. Therefore, the International Practical Temperature Scale (IPTS) was introduced in 1968. Using this scale, thermometers of different types can be calibrated in the laboratory. The scale was established by means of a platinum resistance thermometer, a thermocouple and a radiation pyrometer, used in the temperature intervals between certain pairs of constant reference points (fixed points of temperature). The IPTS was intended to correspond to the thermodynamic scale as closely as possible, but, as it turned out later, its deviations were quite significant.

Fahrenheit temperature scale.

The Fahrenheit temperature scale, which is widely used in combination with the British technical system of units, as well as in non-scientific measurements in many countries, is usually defined by two constant reference points: the melting point of ice (32°F) and the boiling point of water (212°F) at normal (atmospheric) pressure. Therefore, to obtain the Celsius temperature from the Fahrenheit temperature, one subtracts 32 from the latter and multiplies the result by 5/9.
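The conversion described above, written out as a pair of helper functions (names are our own):

```python
# Celsius = (Fahrenheit - 32) * 5/9, and the inverse.

def fahrenheit_to_celsius(t_f):
    return (t_f - 32) * 5 / 9

def celsius_to_fahrenheit(t_c):
    return t_c * 9 / 5 + 32

print(fahrenheit_to_celsius(32))    # melting point of ice: 0.0
print(fahrenheit_to_celsius(212))   # boiling point of water: 100.0
print(celsius_to_fahrenheit(37))    # 37 C, normal body temperature
```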

Units of heat.

Since heat is a form of energy, it can be measured in joules, and this metric unit has been adopted by international agreement. But because the amount of heat was once determined by the change in temperature of a certain amount of water, a unit called the calorie became widespread; it is equal to the amount of heat required to raise the temperature of one gram of water by 1°C. Because the heat capacity of water depends on temperature, the value of the calorie had to be refined, and at least two different calories appeared: the "thermochemical" (4.1840 J) and the "steam" (4.1868 J). The "calorie" used in dietetics is actually a kilocalorie (1000 calories). The calorie is not an SI unit and has fallen into disuse in most fields of science and technology.
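The two calorie values and the dietetic kilocalorie relate to the joule by fixed factors; a sketch (our own helper, using the two values named above):

```python
# The two calorie definitions from the text, in joules:
CAL_THERMOCHEMICAL = 4.1840
CAL_STEAM = 4.1868

def kcal_to_joules(kcal, cal_in_joules=CAL_THERMOCHEMICAL):
    """Convert dietetic 'calories' (kilocalories) to joules."""
    return kcal * 1000 * cal_in_joules

print(kcal_to_joules(1))                 # one food "calorie" ~ 4184 J
print(CAL_STEAM - CAL_THERMOCHEMICAL)    # the small gap between the two calories
```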

Electricity and magnetism.

All commonly accepted electrical and magnetic units of measurement are based on the metric system. In accordance with their modern definitions, they are all derived units, obtained by certain physical formulas from the metric units of length, mass and time. Since most electrical and magnetic quantities are not so easy to measure using the standards mentioned above, it was found more convenient to establish, by appropriate experiments, derived standards for some of these quantities and to measure the others with such standards.

SI units.

Below is a list of SI electrical and magnetic units.

The ampere, the unit of electric current, is one of the seven SI base units. The ampere is the strength of a constant current which, passing through two parallel straight conductors of infinite length and negligibly small circular cross-section, placed in a vacuum 1 m apart, would produce on each 1 m section of conductor an interaction force of 2×10⁻⁷ N.

Volt, a unit of potential difference and electromotive force. The volt is the electric voltage across a section of an electric circuit that, at a direct current of 1 A, dissipates a power of 1 W.

Coulomb, a unit of quantity of electricity (electric charge). The coulomb is the amount of electricity passing through the cross-section of a conductor in 1 s at a constant current of 1 A.

Farad, a unit of electrical capacitance. The farad is the capacitance of a capacitor whose plates, when charged with 1 C, acquire an electric voltage of 1 V.

Henry, a unit of inductance. The henry is the inductance of a circuit in which a self-induction emf of 1 V arises when the current in the circuit changes uniformly by 1 A in 1 s.

Weber, a unit of magnetic flux. The weber is the magnetic flux whose decrease to zero causes a charge of 1 C to flow in a connected circuit with a resistance of 1 Ohm.

Tesla, a unit of magnetic induction. The tesla is the magnetic induction of a uniform magnetic field in which the magnetic flux through a flat area of 1 m², perpendicular to the lines of induction, is 1 Wb.
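The definitions above form a consistent web of relations. A numeric consistency sketch (our own, with illustrative unit values):

```python
# Each line mirrors one of the unit definitions listed above.

current = 1.0              # A
time = 1.0                 # s
charge = current * time    # coulombs: Q = I*t  (definition of the coulomb)

power = 1.0                # W
voltage = power / current          # volts: V = P/I  (definition of the volt)

capacitance = charge / voltage     # farads: C = Q/V  (definition of the farad)

flux = 1.0                 # Wb
resistance = 1.0           # Ohm
discharge = flux / resistance      # coulombs: the weber definition, Q = Phi/R

area = 1.0                 # m^2
induction = flux / area            # tesla: B = Phi/A  (definition of the tesla)

print(charge, voltage, capacitance, discharge, induction)
```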

Practical standards.

Light and illumination.

Units of luminous intensity and illuminance cannot be defined on the basis of mechanical units alone. The energy flux in a light wave can be expressed in W/m², and the field strength of the light wave in V/m, as in the case of radio waves. But the perception of illumination is a psychophysical phenomenon in which not only the intensity of the light source matters, but also the sensitivity of the human eye to the spectral distribution of that intensity.

By international agreement, the unit of luminous intensity is the candela (formerly called the candle), equal to the luminous intensity, in a given direction, of a source emitting monochromatic radiation of frequency 540×10¹² Hz (λ = 555 nm) whose radiant intensity in that direction is 1/683 W/sr. This roughly corresponds to the luminous intensity of a spermaceti candle, which once served as the standard.

If the luminous intensity of a source is one candela in all directions, its total luminous flux is 4π lumens. Thus, if this source is placed at the center of a sphere with a radius of 1 m, the illuminance of the inner surface of the sphere is one lumen per square meter, i.e. one lux.
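The worked example above in code form (a minimal sketch; variable names are our own):

```python
import math

intensity = 1.0                       # cd, uniform in all directions
total_flux = 4 * math.pi * intensity  # lumens over the full sphere (4*pi sr)

radius = 1.0                          # m
sphere_area = 4 * math.pi * radius**2 # m^2, inner surface of the sphere

illuminance = total_flux / sphere_area  # lux = lumens per square meter

print(total_flux)    # ~12.566 lm
print(illuminance)   # 1.0 lx, as in the text
```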

X-ray and gamma radiation, radioactivity.

The roentgen (R) is an obsolete unit of exposure dose of X-ray, gamma and photon radiation, equal to the amount of radiation that, taking into account secondary electron radiation, forms in 0.001293 g of air ions carrying a charge of one CGS unit of electricity of each sign. The SI unit of absorbed radiation dose is the gray (Gy), equal to 1 J/kg. The standard for absorbed radiation dose is a setup with ionization chambers that measure the ionization produced by the radiation.
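The roentgen definition above can be restated in SI units with a back-of-envelope calculation (a hedged sketch; the CGS-to-coulomb conversion factor is a standard reference value, not from the source):

```python
# One roentgen: one CGS (esu) unit of charge per 0.001293 g of air.
ESU_CHARGE_IN_COULOMBS = 3.33564e-10   # 1 CGS unit of charge, in C
AIR_MASS_KG = 0.001293e-3              # 0.001293 g of air, in kg

roentgen_si = ESU_CHARGE_IN_COULOMBS / AIR_MASS_KG   # C/kg
print(roentgen_si)   # ~2.58e-4 C/kg, the commonly quoted SI equivalent
```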