**§3 Random Processes**

1. General random process

[Definition of a random process] For each t ∈ T (T a fixed set of real numbers), let x(t) be a random variable. The family of such random variables {x(t), t ∈ T} is called a random (stochastic) process. The result of one experiment on a random process is a function defined on T, called a realization (sample path) of the process. When the range T of the parameter t is a set of integers,

$$x(t),\qquad t=0,\pm 1,\pm 2,\dots$$

is called a random sequence.

When T contains only one or finitely many elements, {x(t), t ∈ T} reduces to the random variable or random vector studied in probability theory.

[Family of finite-dimensional distribution functions] Let {x(t), t ∈ T} be a random process. For any positive integer n and any t₁, t₂, ⋯, t_n ∈ T, define

$$F_{t_1,\dots,t_n}(x_1,\dots,x_n)=P\{x(t_1)\le x_1,\dots,x(t_n)\le x_n\}.$$

The collection of all such functions is called the family of finite-dimensional distribution functions of the random process. It characterizes not only the statistical regularity of each random variable x(t) but also the dependence among the variables x(t), and thus completely describes the statistical regularity of the random process.

[Statistical parameters of a random process] Let {x(t), t ∈ T} be a complex-valued random process (meaning that its real and imaginary parts are real random processes). Its main statistical parameters are:

1° Mean function. For each t ∈ T, the mathematical expectation (mean) of the random variable x(t),

$$m(t)=E\,x(t)=\int_{-\infty}^{\infty}x\,\mathrm{d}F_t(x),$$

is called the mean function of the random process, where F_t(x) is the distribution function of x(t).

2° Covariance function and variance function. For any s, t ∈ T,

$$R(s,t)=E\big[(x(s)-m(s))\,\overline{(x(t)-m(t))}\big]$$

is called the covariance function (or correlation function) of the random process, where m(t) is the mean function and the bar denotes complex conjugation.

In particular, when s = t,

$$D(t)=R(t,t)=E\,|x(t)-m(t)|^{2}$$

is called the variance function of the random process.

3° Higher-order moments. For any positive integer n and non-negative integers m₁, m₂, ⋯, m_n,

$$E\big[x(t_1)^{m_1}x(t_2)^{m_2}\cdots x(t_n)^{m_n}\big]$$

is called a moment of order m = m₁ + m₂ + ⋯ + m_n of x(t) at t₁, t₂, ⋯, t_n.

[Mean-square continuity of random processes] Let {x(t), t ∈ T} be a random process and t₀ ∈ T. If

$$\lim_{t\to t_0}E\,|x(t)-x(t_0)|^{2}=0,$$

that is,

$$\operatorname*{l.i.m.}_{t\to t_0}x(t)=x(t_0),$$

then x(t) is said to be mean-square continuous at t = t₀, where l.i.m. denotes the limit in mean square. If x(t) is mean-square continuous at every t ∈ T, then x(t) is mean-square continuous on T.

For a random process {x(t), t ∈ T} with covariance function R(s, t), the following three propositions are equivalent:

1° {x(t), t ∈ T} is mean-square continuous on T;

2° R(s, t) (s, t ∈ T) is continuous in s, t;

3° R(s, t) (s, t ∈ T) is continuous in s, t on the diagonal s = t.

Special types of random processes are described below.

[Independent random process] If for any positive integer n and any t₁, t₂, ⋯, t_n ∈ T,

$$F_{t_1,\dots,t_n}(x_1,\dots,x_n)=\prod_{j=1}^{n}F_{t_j}(x_j),$$

then {x(t), t ∈ T} is called an independent random process.

[Normal process] If for any positive integer n and any t₁, t₂, ⋯, t_n ∈ T the joint distribution of (x(t₁), ⋯, x(t_n)) is an n-dimensional normal distribution, i.e. its characteristic function has the form

$$\varphi(u_1,\dots,u_n)=\exp\Big\{i\sum_{j=1}^{n}m_j u_j-\frac12\sum_{j,k=1}^{n}R_{jk}u_j u_k\Big\},$$

then {x(t), t ∈ T} is called a normal (or Gaussian) process, where m_j = m(t_j) and R_{jk} = R(t_j, t_k) are the values of the mean function and the covariance function.

[Markov process] If for any n = 1, 2, ⋯ and any t₀ < t₁ < ⋯ < t_n in T,

$$P\{x(t_n)\le x_n \mid x(t_0)=x_0,\,x(t_1)=x_1,\dots,x(t_{n-1})=x_{n-1}\}=P\{x(t_n)\le x_n \mid x(t_{n-1})=x_{n-1}\}$$

holds for all x₀, x₁, ⋯, x_n, then {x(t), t ∈ T} is called a Markov process.

[Time-homogeneous Markov process] Let {x(t), t ∈ T} be a Markov process. If for any t₁ ∈ T, t₂ ∈ T (t₁ < t₂) the conditional distribution

$$F(t_1,x;t_2,y)=P\{x(t_2)\le y \mid x(t_1)=x\}$$

depends only on t₂ − t₁, x and y, then {x(t), t ∈ T} is called a time-homogeneous Markov process.

[Random process with independent increments] If for any positive integer n and any t₀ < t₁ < ⋯ < t_n (t_i ∈ T), the random variables

$$x(t_1)-x(t_0),\;x(t_2)-x(t_1),\;\dots,\;x(t_n)-x(t_{n-1})$$

are mutually independent, then {x(t), t ∈ T} is called a random process with independent increments.

[Random process with stationary increments] If for any t₁, t₂ ∈ T and any h (with t₁ + h, t₂ + h ∈ T), the random variables

$$x(t_2+h)-x(t_1+h)\qquad\text{and}\qquad x(t_2)-x(t_1)$$

have the same probability distribution, then {x(t), t ∈ T} is called a random process with stationary increments.

[Poisson process] Let {x(t), 0 ≤ t < ∞} be a random process with stationary independent increments taking non-negative integer values. If for every t (0 ≤ t < ∞) the relation

$$P\{x(t)=k\}=\frac{(\lambda t)^{k}}{k!}e^{-\lambda t}\qquad(k=0,1,2,\dots)$$

holds, where λ > 0 is a constant, then {x(t), 0 ≤ t < ∞} is called a Poisson process.
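The Poisson distribution of x(t) can be checked by simulation: the waiting times between successive jumps of a Poisson process are independent exponential(λ) variables, so counting how many exponential inter-arrival times fit into [0, t] samples x(t). The sketch below uses only the standard library; the values of λ, t and the path count are arbitrary illustrative choices.

```python
import math
import random

random.seed(42)

LAM = 1.5        # intensity λ (illustrative value)
T = 2.0          # observation time
N_PATHS = 20000

def poisson_count(lam: float, t: float) -> int:
    """Number of arrivals in [0, t]: sum exponential(λ) inter-arrival
    times until they exceed t."""
    total, count = 0.0, 0
    while True:
        total += random.expovariate(lam)
        if total > t:
            return count
        count += 1

counts = [poisson_count(LAM, T) for _ in range(N_PATHS)]
sample_mean = sum(counts) / N_PATHS
p0_hat = counts.count(0) / N_PATHS

# For a Poisson process: E x(T) = λT and P{x(T)=0} = e^{-λT}
print(sample_mean, p0_hat)
```

With λ = 1.5 and t = 2 the sample mean should be close to λt = 3 and the empirical probability of zero arrivals close to e^{−3} ≈ 0.05.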

[Wiener process] If the random process {x(t), 0 ≤ t < ∞} satisfies P(x(0) = 0) = 1, has stationary independent increments, and the random variable x(t) (t > 0) has the distribution density

$$f_t(x)=\frac{1}{\sigma\sqrt{2\pi t}}\exp\Big\{-\frac{x^{2}}{2\sigma^{2}t}\Big\}$$

(σ > 0 a constant), then {x(t), 0 ≤ t < ∞} is called a Wiener process or Brownian-motion process.
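Because the increments are independent with x(t+Δt) − x(t) distributed N(0, σ²Δt), a path of the Wiener process can be simulated as a cumulative sum of Gaussian steps. A minimal sketch (step size, horizon and path count are illustrative assumptions):

```python
import math
import random

random.seed(7)

SIGMA = 1.0
DT = 0.01
N_STEPS = 100        # simulate on [0, 1]
N_PATHS = 10000

def wiener_endpoint() -> float:
    """x(1) obtained as the sum of N(0, σ² dt) increments, x(0) = 0."""
    x = 0.0
    for _ in range(N_STEPS):
        x += random.gauss(0.0, SIGMA * math.sqrt(DT))
    return x

ends = [wiener_endpoint() for _ in range(N_PATHS)]
mean_end = sum(ends) / N_PATHS
var_end = sum(e * e for e in ends) / N_PATHS

# From the density above: E x(1) = 0 and Var x(1) = σ² · 1
print(mean_end, var_end)
```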

[Stationary process] If for any n = 1, 2, ⋯, any t₁, t₂, ⋯, t_n ∈ T and any h such that t_m + h ∈ T (m = 1, ⋯, n), the equation

$$F_{t_1+h,\dots,t_n+h}(x_1,\dots,x_n)=F_{t_1,\dots,t_n}(x_1,\dots,x_n)$$

holds, then {x(t), t ∈ T} is called a stationary process (a stationary process in the narrow sense).

2. Markov process

1. Transition probability

[States and state transition probabilities] Consider a sequence of random trials in which the possible results of each trial are the pairwise mutually exclusive events E₁, E₂, ⋯, exactly one of which occurs. These events E_i are called the states of the system. If the system is in state E_i at some trial, the conditional probability that it passes to state E_j at the next trial is called the transition probability from E_i to E_j.

[Absence of aftereffect and time homogeneity of a process] Absence of aftereffect. If, when the state of the system at time t₀ is known, the probability law of the states reached by the system after time t₀ does not depend on the states the system was in before time t₀, the process is said to be without aftereffect.

Time homogeneity. If the transition probability p_ij(τ) of the system from state E_i at time t to state E_j at time t + τ,

$$p_{ij}(\tau)=P\{x(t+\tau)=E_j \mid x(t)=E_i\},$$

does not depend on t, the process is said to be time-homogeneous.

2. Markov chain

[Markov chain] A Markov chain is a Markov process in which both time and state are discrete.

1° Suppose that in a sequence of random trials the possible states of the system are E₀, E₁, ⋯. The sequence is called a Markov chain if for any two positive integers k, m and any integers 0 ≤ j₁ < j₂ < ⋯ < j_l < m, the equation

$$P\{x_{m+k}=E_j \mid x_m=E_i,\,x_{j_l}=E_{i_l},\dots,x_{j_1}=E_{i_1}\}=P\{x_{m+k}=E_j \mid x_m=E_i\}$$

always holds (x_m = E_i denotes the event "state E_i occurs at the m-th trial").

2° Equivalently, in terms of a sequence of random variables: let {x_n} (n = 1, 2, ⋯) be a sequence of integer-valued random variables. If for any positive integers k, m and any integers 0 ≤ j₁ < j₂ < ⋯ < j_l < m,

$$P\{x_{m+k}=j \mid x_m=i,\,x_{j_l}=i_l,\dots,x_{j_1}=i_1\}=P\{x_{m+k}=j \mid x_m=i\}$$

holds (whenever the conditional probabilities are defined), then {x_n} is called a Markov chain.

It is usually convenient to take the set of states {x_i} = {1, 2, ⋯}.

The sequence of random trials described by a Markov chain can be understood intuitively as follows: to predict the state of the "future", only the known state of the "present" is needed; the states of the "past" have no influence. This property is called absence of aftereffect, and Markov processes are exactly the random processes without aftereffect.

[Transition probability matrix of a Markov chain] Suppose the probability that the system passes from state E_i to state E_j in one transition is p_ij, independent of the time of the transition:

$$p_{ij}=P\{x_{m+1}=E_j \mid x_m=E_i\}\qquad\Big(p_{ij}\ge 0,\;\sum_{j}p_{ij}=1\Big).$$

The transition probabilities p_ij can be arranged into the transition probability matrix

$$P=\begin{pmatrix}p_{00}&p_{01}&p_{02}&\cdots\\ p_{10}&p_{11}&p_{12}&\cdots\\ p_{20}&p_{21}&p_{22}&\cdots\\ \vdots&\vdots&\vdots&\ddots\end{pmatrix}.$$

This is a matrix of non-negative elements whose rows each sum to 1; it is called the one-step transition probability matrix of the Markov chain.

The n-step transition probability p_ij^(n), the probability that the system starting from state E_i reaches state E_j after n transitions, is

$$p_{ij}^{(n)}=P\{x_{m+n}=E_j \mid x_m=E_i\}.$$

The n-step transition probability matrix of the Markov chain is defined accordingly:

$$P^{(n)}=\big(p_{ij}^{(n)}\big).$$

From the absence of aftereffect we obtain

$$p_{ij}^{(n)}=\sum_{k}p_{ik}^{(l)}\,p_{kj}^{(n-l)}\qquad(0<l<n),$$

which is called the Chapman–Kolmogorov equation. From the Chapman–Kolmogorov equation it can be derived that

$$P^{(n)}=P^{\,n},$$

i.e. the n-step transition matrix is the n-th power of the one-step transition matrix.
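The relation P^(n) = Pⁿ can be verified numerically: computing P⁴ directly as a matrix power must agree with the Chapman–Kolmogorov factorization P · P³, and every row of the result must still sum to 1. A minimal sketch (the 3-state matrix is an arbitrary illustrative choice):

```python
# One-step transition matrix of a hypothetical 3-state chain (rows sum to 1).
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]

def matmul(a, b):
    """Product of two square matrices given as nested lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(p, n):
    """n-th power of a square matrix (n >= 1) by repeated multiplication."""
    result = p
    for _ in range(n - 1):
        result = matmul(result, p)
    return result

P4 = matpow(P, 4)
# Chapman–Kolmogorov with l = 1: p_ij^(4) = Σ_k p_ik^(1) p_kj^(3)
P4_ck = matmul(P, matpow(P, 3))

row_sums = [sum(row) for row in P4]   # each row remains a distribution
```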

[Closed sets and classification of states] Consider a time-homogeneous Markov chain with state space E = (E₀, E₁, E₂, ⋯). If there is a positive integer n such that p_ik^(n) > 0, the state E_k is said to be reachable from E_i. A subset C of E is called a closed set if no state outside C can be reached from any state in C. If a single-point set {E_k} is a closed set, then E_k is called an absorbing state.

Denote by f_ij^(n) the conditional probability that "the system is in state E_i and first reaches state E_j at the n-th transition":

$$f_{ij}^{(n)}=P\{x_n=E_j,\;x_m\ne E_j\;(m=1,\dots,n-1)\mid x_0=E_i\};$$

then

$$p_{ij}^{(n)}=\sum_{l=1}^{n}f_{ij}^{(l)}\,p_{jj}^{(n-l)}.$$

Write

$$f_{ij}=\sum_{n=1}^{\infty}f_{ij}^{(n)}.$$

This is the conditional probability that "the system finally reaches state E_j after finitely many transitions, given that it starts in state E_i".

If f_jj = 1, then

$$\mu_j=\sum_{n=1}^{\infty}n\,f_{jj}^{(n)}$$

is called the mean recurrence time of E_j.

The states are classified as follows:

1° If f_jj = 1, then E_j is called a recurrent state; if f_jj < 1, then E_j is called a non-recurrent (transient) state.

2° Let E_j be a recurrent state. If μ_j < ∞, E_j is called positive recurrent; if μ_j = ∞, E_j is called null recurrent.

3° If the positive integers n with p_jj^(n) > 0 have greatest common divisor t, then when t > 1, E_j is called periodic with period t; when t = 1, E_j is called aperiodic.

4° E_j is said to be ergodic if it is positive recurrent and aperiodic.

Criteria for the classification of states:

1° A necessary and sufficient condition for E_j to be recurrent is $\sum_{n=1}^{\infty}p_{jj}^{(n)}=\infty$.

2° If E_j is a recurrent state with period t, then $\lim_{n\to\infty}p_{jj}^{(nt)}=t/\mu_j$.

3° If E_j is ergodic, then $\lim_{n\to\infty}p_{jj}^{(n)}=1/\mu_j$.

4° If E_j is non-recurrent or null recurrent, then $\lim_{n\to\infty}p_{jj}^{(n)}=0$.

[Decomposition theorem for Markov chains] The state space of any Markov chain can be decomposed into the union of the following disjoint subsets D, C₁, C₂, ⋯, where

1° each C_j is an irreducible closed set consisting of recurrent states;

2° the states within a given C_j are of the same type (all positive recurrent or all null recurrent, with the same period);

3° D consists of all non-recurrent states (states in C_j may be reached from states in D, but not conversely).

[Ergodicity theorems for Markov chains] For the different types of states there are the following ergodicity theorems:

1° If E_k is a non-recurrent or null recurrent state, then for every j

$$\lim_{n\to\infty}p_{jk}^{(n)}=0.$$

2° If E_k is a positive recurrent state with period t, then for each r (1 ≤ r ≤ t)

$$\lim_{n\to\infty}p_{jk}^{(nt+r)}=f_{jk}(r)\,\frac{t}{\mu_k},$$

where

$$f_{jk}(r)=\sum_{n=0}^{\infty}f_{jk}^{(nt+r)}$$

represents the probability that, starting from E_j, the system arrives at E_k for the first time at a step congruent to r (mod t).

3° For an irreducible aperiodic Markov chain, the limits

$$p_j=\lim_{n\to\infty}p_{ij}^{(n)}$$

exist and are independent of i, and only the following two cases occur:

(i) all p_j > 0 (p_j being the limiting probability of finding the system in state E_j); then (p₀, p₁, ⋯) is the unique solution of

$$p_j=\sum_{i}p_i\,p_{ij},\qquad \sum_{j}p_j=1\qquad(j=0,1,\dots);$$

(ii) all p_j are equal to zero, in which case the chain has no such limiting (stationary) distribution.
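For a finite irreducible aperiodic chain, case (i) holds and the limiting distribution can be found by iterating the distribution forward, since the rows of Pⁿ all converge to (p₀, p₁, ⋯). A minimal sketch with an illustrative 3-state matrix; the invariance p_j = Σ_i p_i p_ij is then checked directly:

```python
# Illustrative irreducible aperiodic 3-state chain (rows sum to 1).
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
]

def step(dist, p):
    """One transition of a distribution: (dist · P)_j = Σ_i dist_i p_ij."""
    n = len(p)
    return [sum(dist[i] * p[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]        # start deterministically in state 0
for _ in range(200):           # iterate dist ← dist · P until convergence
    dist = step(dist, P)

# Invariance check: dist · P should equal dist, components sum to 1
dist_next = step(dist, P)
residual = max(abs(a - b) for a, b in zip(dist, dist_next))
total = sum(dist)
print(dist)
```

Because the starting state is forgotten in the limit, repeating the iteration from a different initial distribution yields the same limit.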

3. Markov process with continuous time and discrete state

Only time-homogeneous Markov processes are considered here.

[Chapman–Kolmogorov equation] Let p_ij(t) denote the probability that the system, being in state E_i at some moment, is in state E_j a time t later:

$$p_{ij}(t)=P\{x(s+t)=E_j \mid x(s)=E_i\},\qquad p_{ij}(t)\ge 0,\quad \sum_{j}p_{ij}(t)=1.$$

For t > 0, τ > 0 the Chapman–Kolmogorov equation

$$p_{ij}(t+\tau)=\sum_{k}p_{ik}(t)\,p_{kj}(\tau)$$

holds; it is the basis of the study of Markov processes.

[Ergodicity theorem] For any time-continuous Markov process with finitely many states (E₁, ⋯, E_n), if there is a t₀ > 0 such that p_ij(t₀) > 0 for all i, j, then the limits

$$p_j=\lim_{t\to\infty}p_{ij}(t)\qquad(1\le i,j\le n)$$

exist and are independent of i.

[Kolmogorov's forward and backward equations] A Markov process with only finitely many states is called stochastically continuous if it satisfies

$$\lim_{t\to 0^{+}}p_{ij}(t)=\delta_{ij}=\begin{cases}1,& i=j\\ 0,& i\ne j.\end{cases}$$

For stochastically continuous Markov processes with finitely many states, Kolmogorov's forward and backward equations hold:

$$p'_{ij}(t)=\sum_{k}p_{ik}(t)\,q_{kj}\qquad\text{(forward equation)}$$

$$p'_{ij}(t)=\sum_{k}q_{ik}\,p_{kj}(t)\qquad\text{(backward equation)}$$

where

$$q_{ij}=\lim_{t\to 0^{+}}\frac{p_{ij}(t)-\delta_{ij}}{t}$$

are the transition intensities.
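The forward equation can be checked on the simplest non-trivial example, a two-state process with intensity matrix Q = [[−a, a], [b, −b]] (an illustrative choice, not from the text): for this Q the forward system has the closed-form solution p₀₀(t) = b/(a+b) + a/(a+b)·e^(−(a+b)t). The sketch below integrates P′(t) = P(t)Q by Euler's method from P(0) = I and compares with the closed form:

```python
import math

a, b = 2.0, 3.0      # illustrative transition intensities
T = 1.0
N = 100000
dt = T / N

# Euler integration of the forward equation P'(t) = P(t) Q, P(0) = I,
# where Q = [[-a, a], [b, -b]].
p00, p01, p10, p11 = 1.0, 0.0, 0.0, 1.0
for _ in range(N):
    d00 = -a * p00 + b * p01
    d01 = a * p00 - b * p01
    d10 = -a * p10 + b * p11
    d11 = a * p10 - b * p11
    p00 += d00 * dt; p01 += d01 * dt
    p10 += d10 * dt; p11 += d11 * dt

# Closed-form solution of the forward system for this Q
exact_p00 = b / (a + b) + a / (a + b) * math.exp(-(a + b) * T)
print(p00, exact_p00)
```

Note that each row of Q sums to zero, which is exactly what keeps the rows of P(t) summing to 1 during the integration.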

4. Diffusion process

[Definition of a diffusion process] A state-continuous Markov process {x(t), 0 ≤ t < ∞} is called a diffusion process if its conditional distribution function F(t, x; τ, y) satisfies, for every ε > 0 and uniformly in x, the following three relations as τ → t:

(i) $\displaystyle\lim_{\tau\to t}\frac{1}{\tau-t}\int_{|y-x|>\varepsilon}\mathrm{d}_y F(t,x;\tau,y)=0$

(ii) $\displaystyle\lim_{\tau\to t}\frac{1}{\tau-t}\int_{|y-x|\le\varepsilon}(y-x)\,\mathrm{d}_y F(t,x;\tau,y)=a(t,x)$

(iii) $\displaystyle\lim_{\tau\to t}\frac{1}{\tau-t}\int_{|y-x|\le\varepsilon}(y-x)^{2}\,\mathrm{d}_y F(t,x;\tau,y)=b(t,x)$

The functions a(t, x) and b(t, x) are called the drift and diffusion coefficients.

[Kolmogorov's first equation] If the partial derivatives

$$\frac{\partial F}{\partial t},\qquad \frac{\partial F}{\partial x},\qquad \frac{\partial^{2} F}{\partial x^{2}}$$

of the conditional distribution function F(t, x; τ, y) of the diffusion process exist and are continuous for any t, x, y and τ (τ > t), then F(t, x; τ, y) satisfies Kolmogorov's first equation

$$\frac{\partial F}{\partial t}=-a(t,x)\frac{\partial F}{\partial x}-\frac{b(t,x)}{2}\,\frac{\partial^{2} F}{\partial x^{2}}.$$

[Kolmogorov's second equation] If the conditional distribution function F(t, x; τ, y) of the diffusion process has a distribution density f(t, x; τ, y), and the partial derivatives

$$\frac{\partial f}{\partial \tau},\qquad \frac{\partial}{\partial y}\big[a(\tau,y)f\big],\qquad \frac{\partial^{2}}{\partial y^{2}}\big[b(\tau,y)f\big]$$

exist and are continuous, then f(t, x; τ, y) satisfies Kolmogorov's second equation (the Fokker–Planck equation)

$$\frac{\partial f}{\partial \tau}=-\frac{\partial}{\partial y}\big[a(\tau,y)f\big]+\frac12\,\frac{\partial^{2}}{\partial y^{2}}\big[b(\tau,y)f\big].$$

3. Stationary Stochastic Process

[Weakly stationary process] If the random process {x(t), t ∈ T} satisfies

$$E\,|x(t)|^{2}<\infty,\qquad E\,x(t)\equiv m\ (\text{a constant}),\qquad R(s,t)=R(s-t),$$

i.e. its second moments are finite, its mean function is constant, and its covariance function depends only on the difference s − t, then it is called a weakly stationary process (or a stationary process in the wide sense).

A wide-sense stationary process is not necessarily strictly stationary; conversely, a strictly stationary process is not necessarily wide-sense stationary, but if the second moments of a strictly stationary process exist, then it is also wide-sense stationary.

For a normal process, wide-sense stationarity and strict stationarity coincide.

In theoretical studies it is often more convenient to consider complex-valued random processes. A complex-valued random variable ξ is one of the form ξ = η + iζ, where η and ζ are real random variables; a complex-valued random process is x(t) = η(t) + iζ(t), where η(t) and ζ(t) are real-valued random processes.

The mean (or mathematical expectation) of a complex-valued random variable ξ = η + iζ is defined as

$$E\xi=E\eta+iE\zeta.$$

The correlation moment of two complex-valued random variables ξ₁, ξ₂ is defined as

$$R_{12}=E\big[(\xi_1-E\xi_1)\,\overline{(\xi_2-E\xi_2)}\big],$$

where the bar denotes the complex conjugate.

Wide-sense stationarity of a complex-valued random process {x(t), t ∈ T} means that it satisfies

$$E\,|x(t)|^{2}<\infty,\qquad E\,x(t)\equiv m,\qquad R(s,t)=E\big[(x(s)-m)\,\overline{(x(t)-m)}\big]=R(s-t).$$

In what follows, all processes considered are complex-valued wide-sense stationary processes.

[Spectral decomposition of the correlation function] If R(τ) is the correlation function of a mean-square continuous stationary process {x(t), −∞ < t < ∞}, then

$$R(\tau)=\int_{-\infty}^{\infty}e^{i\lambda\tau}\,\mathrm{d}F(\lambda),$$

where F(λ) is a bounded non-decreasing function satisfying F(−∞) = 0, F(+∞) = R(0); it is called the spectral function of the stationary process (in engineering, the spectrum).

If F(λ) is absolutely continuous, its derivative f(λ) = F′(λ) is called the spectral density, and then

$$R(\tau)=\int_{-\infty}^{\infty}e^{i\lambda\tau}f(\lambda)\,\mathrm{d}\lambda.$$
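The transform relation between R(τ) and f(λ) can be illustrated with the classical pair R(τ) = e^(−|τ|), f(λ) = 1/(π(1+λ²)) (an assumed textbook example, not from the passage above). Since this f is even, the integral reduces to a cosine transform, which the sketch below approximates by a midpoint Riemann sum:

```python
import math

def spectral_to_correlation(tau: float, lam_max: float = 200.0,
                            n: int = 400000) -> float:
    """Approximate R(τ) = ∫ e^{iλτ} f(λ) dλ for f(λ) = 1/(π(1+λ²)).
    f is even, so the imaginary part vanishes and only cos remains."""
    dlam = 2 * lam_max / n
    total = 0.0
    for k in range(n):
        lam = -lam_max + (k + 0.5) * dlam
        total += math.cos(lam * tau) / (math.pi * (1.0 + lam * lam)) * dlam
    return total

r1 = spectral_to_correlation(1.0)
print(r1, math.exp(-1.0))   # the numerical value should approximate e^{-1}
```

The truncation at λ_max = 200 contributes an error of at most 2/(200π) ≈ 0.003, so the agreement with e^(−1) ≈ 0.368 is close but not exact.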

When {x(t), −∞ < t < ∞} is a real-valued stationary process, the correlation function R(τ) can be expressed as

$$R(\tau)=\int_{0}^{\infty}\cos\lambda\tau\,\mathrm{d}F_1(\lambda),$$

or (when a spectral density exists)

$$R(\tau)=\int_{0}^{\infty}\cos\lambda\tau\,f_1(\lambda)\,\mathrm{d}\lambda,$$

where F₁(λ) = 2F(λ) + c (c a constant) and f₁(λ) = F₁′(λ).

In particular, for a complex-valued stationary sequence {x_n, n = 0, ±1, ⋯}, the correlation function has the spectral decomposition

$$R(k)=\int_{-\pi}^{\pi}e^{ik\lambda}\,\mathrm{d}F(\lambda)\qquad(k=0,\pm 1,\dots),$$

where the spectral function F(λ) satisfies

$$F(-\pi)=0,\qquad F(\pi)=R(0).$$

[Ergodicity theorem]

1° If {x(t), −∞ < t < ∞} is a mean-square continuous stationary process with mean m and covariance function R(τ), then

$$\operatorname*{l.i.m.}_{T\to\infty}\frac{1}{T}\int_{0}^{T}x(t)\,\mathrm{d}t=m$$

if and only if

$$\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\Big(1-\frac{\tau}{T}\Big)R(\tau)\,\mathrm{d}\tau=0.$$

2° If {x_n, n = 0, ±1, ⋯} is a stationary sequence with mean m and correlation function R(k), then

$$\operatorname*{l.i.m.}_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}x_n=m$$

if and only if

$$\lim_{N\to\infty}\frac{1}{N}\sum_{k=0}^{N-1}\Big(1-\frac{k}{N}\Big)R(k)=0.$$

3° If {x(t), −∞ < t < ∞} is a mean-square continuous stationary process with zero mean, and for fixed τ > 0 the process y(t) = x(t+τ)·x̄(t) is also mean-square continuous and stationary, with correlation function R_τ(·), then

$$\operatorname*{l.i.m.}_{T\to\infty}\frac{1}{T}\int_{0}^{T}x(t+\tau)\,\overline{x(t)}\,\mathrm{d}t=R(\tau)$$

if and only if

$$\lim_{T\to\infty}\frac{1}{T}\int_{0}^{T}\Big(1-\frac{s}{T}\Big)\big[R_\tau(s)-|R(\tau)|^{2}\big]\,\mathrm{d}s=0.$$

4° If {x_n, n = 0, ±1, ⋯} is a stationary sequence with zero mean, and for a fixed integer k the sequence y_n = x_{n+k}·x̄_n is also stationary, with correlation function R_k(·), then

$$\operatorname*{l.i.m.}_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N}x_{n+k}\,\overline{x_n}=R(k)$$

if and only if

$$\lim_{N\to\infty}\frac{1}{N}\sum_{m=0}^{N-1}\Big(1-\frac{m}{N}\Big)\big[R_k(m)-|R(k)|^{2}\big]=0.$$

The ergodicity theorems show that for a stationary process satisfying the conditions of the theorem (in practice they are usually satisfied), averages over the sample space (such as the mean and correlation moments) may be replaced by averages over time. More concretely, the mean and correlation function of the process can be determined from a single realization observed over a sufficiently long time. This is exactly why the ergodicity theorems are of practical importance.
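The replacement of an ensemble average by a time average can be demonstrated on a simple stationary sequence. The sketch below uses a hypothetical Gaussian AR(1) recursion x_n = φx_{n−1} + ε_n (all parameter values are illustrative); its correlation function decays geometrically, so the mean-ergodic condition above is satisfied, and the time average over one long realization approximates the ensemble mean:

```python
import random

random.seed(123)

PHI = 0.5      # AR(1) coefficient, |PHI| < 1 for stationarity
M = 2.0        # ensemble mean of the shifted sequence
N = 200000     # length of the single observed realization

x = 0.0
# discard a burn-in so the sequence is (approximately) stationary
for _ in range(1000):
    x = PHI * x + random.gauss(0.0, 1.0)

time_sum = 0.0
for _ in range(N):
    x = PHI * x + random.gauss(0.0, 1.0)
    time_sum += x + M          # observed sequence has mean M

time_average = time_sum / N
print(time_average)            # close to the ensemble mean M = 2.0
```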

[Spectral expansion of a stationary process] If {x(t), −∞ < t < ∞} is a mean-square continuous stationary process with zero mean, then

$$x(t)=\int_{-\infty}^{\infty}e^{i\lambda t}\,\mathrm{d}Z(\lambda),$$

where the random function Z(λ) satisfies:

(i) EZ(λ) = 0;

(ii) when the intervals [λ₁, λ₂] and [λ₃, λ₄] do not overlap,

$$E\big[(Z(\lambda_2)-Z(\lambda_1))\,\overline{(Z(\lambda_4)-Z(\lambda_3))}\big]=0$$

(i.e. Z(λ) is a process with orthogonal increments);

(iii) $E\,|Z(\lambda_2)-Z(\lambda_1)|^{2}=F(\lambda_2)-F(\lambda_1)$ (F(λ) is the spectral function).

Z(λ) is called the random spectral function of x(t), and the integral expression above is called the spectral expansion of x(t).

In particular, if x(t) is a real-valued mean-square continuous stationary process, then

$$x(t)=\int_{0}^{\infty}\cos\lambda t\,\mathrm{d}Z_1(\lambda)+\int_{0}^{\infty}\sin\lambda t\,\mathrm{d}Z_2(\lambda),$$

where Z₁(λ) and Z₂(λ) satisfy:

(i) EZ₁(λ) = EZ₂(λ) = 0;

(ii) when the intervals [λ₁, λ₂] and [λ₃, λ₄] do not overlap,

$$E\big[(Z_j(\lambda_2)-Z_j(\lambda_1))(Z_k(\lambda_4)-Z_k(\lambda_3))\big]=0\qquad(j,k=1,2);$$

(iii) $E\,|Z_1(\lambda_2)-Z_1(\lambda_1)|^{2}=E\,|Z_2(\lambda_2)-Z_2(\lambda_1)|^{2}=F_1(\lambda_2)-F_1(\lambda_1)$

(F₁(λ) is the spectral function of the real representation).

If {x_n, n = 0, ±1, ⋯} is a stationary sequence with zero mean, then

$$x_n=\int_{-\pi}^{\pi}e^{in\lambda}\,\mathrm{d}Z(\lambda),$$

where the random spectral function Z(λ) is defined for −π ≤ λ ≤ π and satisfies properties (i)–(iii) analogous to those of the random spectral function of a mean-square continuous stationary process.
