
K. SZAJOWSKI (Wrocław)

A TWO-DISORDER DETECTION PROBLEM

Abstract. Suppose that the process $X=\{X_n,\ n\in\mathbb{N}\}$ is observed sequentially. There are two random moments of time $\theta_1$ and $\theta_2$, independent of $X$, and $X$ is a Markov process given $\theta_1$ and $\theta_2$. The transition probabilities of $X$ change for the first time at time $\theta_1$ and for the second time at time $\theta_2$. Our objective is to find a strategy which immediately detects the distribution changes with maximal probability based on observation of $X$. The corresponding problem of double optimal stopping is constructed. The optimal strategy is found and the corresponding maximal probability is calculated.

1. Introduction. Suppose that a process $X=\{X_n,\ n\in\mathbb{N}\}$ ($\mathbb{N}=\{0,1,2,\ldots\}$) is observed sequentially. The process is obtained from three Markov processes by switching between them at two random moments of time, $\theta_1$ and $\theta_2$. Our objective is to detect these moments immediately, based on observation of $X$.

This type of problem arises in quality control. A machine which produces some items may change its parameters, causing the quality of the items to change. Production can then be divided into three grades. Assuming that at the beginning of the production process the quality is highest, from some time $\theta_1$ on the products should be classified into a lower grade, and beginning with $\theta_2$ into the lowest grade. We want to detect the moments of these changes.

Shiryaev (1978) considered the disorder problem for independent random variables with one disorder, where the mean distance between the disorder time and the moment of its detection was minimized. The probability maximizing approach to the problem was used by Bojdecki (1979), who found the stopping time which is in a given neighbourhood of the moment of disorder with maximal probability. The problem with two disorders was considered by Yoshida (1983) and Szajowski (1992). Yoshida (1983) solved the problem of optimal stopping for the observation of a Markov process $X$ so as to maximize the probability that the distance between $\theta_i$, $i=1,2$, and the moment of its detection does not exceed a given number (for each disorder independently). He constructed a strategy which stops the process between the first and the second disorder with maximal probability. References to other papers treating variations of the disorder problem can be found in Szajowski (1992).

1991 Mathematics Subject Classification: Primary 60G40; Secondary 62L15.

Key words and phrases: disorder problem, sequential detection, multiple optimal stopping.

In the present paper the probability maximizing approach to optimal stopping developed by Bojdecki (1979) is extended to solve a double stopping problem (see Haggstrom (1967), Nikolaev (1979)) arising in the quickest detection of two disorders. In Section 2 the problem is formulated in a rigorous manner. Section 3 contains the reduction of the problem to an optimal stopping problem for a doubly indexed stochastic sequence. The main result is given in Section 4.

2. The double disorder detection problem. Let $X=\{X_n,\ n\in\mathbb{N}\}$, defined on $(\Omega,\mathcal{F},P)$, be a potentially observable sequence of r.v.'s with values in $(E,\mathcal{B})$, where $E$ is a subset of the real line. Assume that the epochs of the distributional changes are $\mathbb{N}$-valued, $\mathcal{F}$-measurable r.v.'s $\theta_1$ and $\theta_2$, independent of $X$ and having the distribution

(1) $P(\theta_1=j)=p_1^{j-1}q_1,\qquad P(\theta_2=k\mid\theta_1=j)=p_2^{k-j-1}q_2,$ where $j=1,2,\ldots$, $k=j+1,j+2,\ldots$ and $p_i+q_i=1$, $i=1,2$.

Suppose that on $(\Omega,\mathcal{F},P)$ Markov processes $X^i=\{(X_n^i,\mathcal{F}_n,P_x^i)\}$, $i=1,2,3$, are defined, and we have

(2) $X_n=\begin{cases} X_n^1 & \text{if } n<\theta_1,\\ X_n^2 & \text{if } \theta_1\le n<\theta_2,\\ X_n^3 & \text{if } n\ge\theta_2.\end{cases}$

The measures $P_x^i$, $i=1,2,3$, are absolutely continuous with respect to some fixed measure $P_x$ and satisfy the following relations: $P_x^i(dy)=f_x^i(y)\,P(x,dy)$, where $f_x^i(\cdot)\ne f_x^j(\cdot)$ for $i\ne j$ and $f_x^{i+1}(y)/f_x^i(y)<\infty$, $i=1,2$, for every $x,y\in E$. The distribution of $\theta_i$, $i=1,2$, is given by (1), and the measures $P_x^i$, $i=1,2,3$, $x\in E$, are known. We observe the process $(X_n,\mathcal{F}_n,P_x)$, $n=0,1,2,\ldots$, $x\in E$, which is a Markov process given $\theta_1$ and $\theta_2$, defined by (2) with $\mathcal{F}_n=\sigma(X_0,X_1,\ldots,X_n)$. On the basis of the distribution of $\theta_1$, $\theta_2$ and the measures $P_x^i$, $i=1,2,3$, $x\in E$, we can calculate the finite-dimensional distributions of the observed process.

Let $\mathcal{S}$ denote the set of all stopping times with respect to the filtration $(\mathcal{F}_n)$, $n=0,1,\ldots$, and let $\mathcal{T}=\{(\tau,\sigma):\tau<\sigma,\ \tau,\sigma\in\mathcal{S}\}$. Let us determine a pair of stopping times $(\tau^*,\sigma^*)\in\mathcal{T}$ such that for every $x\in E$,

(3) $P_x(\tau^*<\sigma^*<\infty,\ |\theta_1-\tau^*|\le d_1,\ |\theta_2-\sigma^*|\le d_2) = \sup_{(\tau,\sigma)\in\mathcal{T}} P_x(\tau<\sigma<\infty,\ |\theta_1-\tau|\le d_1,\ |\theta_2-\sigma|\le d_2).$

This problem will be denoted by $D_{d_1d_2}$.
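To make the model concrete, here is a minimal simulation sketch of (1) and (2). The three regimes are taken, purely for illustration, to be i.i.d. Gaussian sequences with different means; the paper only requires Markov processes with known transition densities $f_x^i$, so the `mu` parameter and the normal distribution are assumptions of this sketch, not part of the model.

```python
import numpy as np

def simulate_two_disorder(n, p1, p2, mu=(0.0, 1.0, 2.0), seed=None):
    """Draw theta1, theta2 as in (1) and a path of X as in (2).

    theta1 is geometric with success probability q1 = 1 - p1, and
    theta2 - theta1 is an independent geometric with success
    probability q2 = 1 - p2. The regimes X^1, X^2, X^3 are modelled
    here as i.i.d. N(mu_i, 1) sequences (an illustrative assumption).
    """
    rng = np.random.default_rng(seed)
    theta1 = rng.geometric(1.0 - p1)           # P(theta1 = j) = p1^(j-1) q1
    theta2 = theta1 + rng.geometric(1.0 - p2)  # P(theta2 = k | theta1 = j) = p2^(k-j-1) q2
    t = np.arange(n)
    regime = np.where(t < theta1, 0, np.where(t < theta2, 1, 2))
    x = rng.normal(np.asarray(mu)[regime], 1.0)
    return x, theta1, theta2
```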

3. Reduction of the double “disorder problem” to double optimal stopping of a Markov process. A compound stopping variable is a pair $(\tau,\sigma)$ of stopping times such that $\tau<\sigma$ a.e. Define $\mathcal{T}_m=\{(\tau,\sigma)\in\mathcal{T}:\tau\ge m\}$, $\mathcal{T}_{mn}=\{(\tau,\sigma)\in\mathcal{T}:\tau=m,\ \sigma\ge n\}$ and $\mathcal{S}_m=\{\tau\in\mathcal{S}:\tau\ge m\}$.

Set $\mathcal{F}_{mn}=\mathcal{F}_n$, $m,n\in\mathbb{N}$, $m\le n$. We define a two-parameter stochastic sequence $\xi(x)=\{\xi_{mn}(x),\ m,n\in\mathbb{N},\ m<n,\ x\in E\}$ by

$\xi_{mn}(x)=P_x(|\theta_1-m|\le d_1,\ |\theta_2-n|\le d_2\mid\mathcal{F}_{mn}).$

For every $m,n\in\mathbb{N}$ with $m<n$, we can consider the optimal stopping problem of $\xi(x)$ on $\mathcal{T}_{mn}$. A compound stopping variable $(\tau^*,\sigma^*)$ is said to be optimal in $\mathcal{T}_m$ (or $\mathcal{T}_{mn}$) if $E_x\xi_{\tau^*\sigma^*}=\sup_{(\tau,\sigma)\in\mathcal{T}_m}E_x\xi_{\tau\sigma}$ (or $E_x\xi_{\tau^*\sigma^*}=\sup_{(\tau,\sigma)\in\mathcal{T}_{mn}}E_x\xi_{\tau\sigma}$). Define

(4) $\eta_{mn}(x)=\operatorname*{ess\,sup}_{(\tau,\sigma)\in\mathcal{T}_{mn}}E(\xi_{\tau\sigma}\mid\mathcal{F}_{mn}),$

(5) $\eta_m=E_x(\eta_{m,m+1}\mid\mathcal{F}_m).$

If we put $\xi_{m\infty}=0$, then

$\eta_{mn}=\operatorname*{ess\,sup}_{(\tau,\sigma)\in\mathcal{T}_{mn}}P_x(|\theta_1-\tau|\le d_1,\ |\theta_2-\sigma|\le d_2\mid\mathcal{F}_{mn}).$

From the theory of optimal stopping for doubly indexed processes (cf. Haggstrom (1967), Nikolaev (1981)), the sequence $\eta_{mn}$ satisfies

$\eta_{mn}=\max\{\xi_{mn},\ E(\eta_{m,n+1}\mid\mathcal{F}_{mn})\}.$

Moreover, if $\sigma_m^*=\inf\{n\ge m:\eta_{mn}=\xi_{mn}\}$, then $(m,\sigma_m^*)$ is optimal in $\mathcal{T}_{mn}$ and $\eta_{mn}=E_x(\xi_{m\sigma_m^*}\mid\mathcal{F}_{mn})$ a.e.

Lemma 1. The stopping time $\sigma_m^*$ is optimal for every stopping problem (4).

Proof. It suffices to prove that $\lim_{n\to\infty}\xi_{mn}=0$ (Lemma 4.10 of Chow, Robbins & Siegmund (1971); cf. also Bojdecki (1979), Bojdecki (1982)).

For $m,n,k\in\mathbb{N}$ with $n\ge k>m$ and every $x\in E$ we have

$E_x(\mathbb{I}_{\{|\theta_1-m|\le d_1,\,|\theta_2-n|\le d_2\}}\mid\mathcal{F}_{mn})=\xi_{mn}(x) \le E_x\Bigl(\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-m|\le d_1,\,|\theta_2-j|\le d_2\}}\Bigm|\mathcal{F}_n\Bigr),$


where $\mathbb{I}_A$ is the characteristic function of the set $A$. By Lévy's theorem,

$\limsup_{n\to\infty}\xi_{mn}(x)\le E_x\Bigl(\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-m|\le d_1,\,|\theta_2-j|\le d_2\}}\Bigm|\mathcal{F}_\infty\Bigr),$

where $\mathcal{F}_\infty=\sigma\bigl(\bigcup_{n=1}^\infty\mathcal{F}_n\bigr)$.

We have $\lim_{k\to\infty}\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-m|\le d_1,\,|\theta_2-j|\le d_2\}}=0$ a.e., and by the dominated convergence theorem,

$\lim_{k\to\infty}E_x\Bigl(\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-m|\le d_1,\,|\theta_2-j|\le d_2\}}\Bigm|\mathcal{F}_\infty\Bigr)=0.$

As the next step the optimal stopping problem for $\eta_m$ should be solved. Define

(6) $V_m=\operatorname*{ess\,sup}_{\tau\in\mathcal{S}_m}E_x(\eta_\tau\mid\mathcal{F}_m).$

Then $V_m=\max\{\eta_m,\ E_x(V_{m+1}\mid\mathcal{F}_m)\}$ a.e., and we define $\tau_n^*=\inf\{k\ge n:V_k=\eta_k\}$.

Lemma 2. The strategy $\tau_0^*$ is the optimal strategy of the first stop.

Proof. To show that $\tau_0^*$ is the optimal first stop strategy we prove that $P_x(\tau_0^*<\infty)=1$. We argue in the usual manner, i.e. we show that $\lim_{m\to\infty}\eta_m(x)=0$.

We have, for $m\ge k$,

$\eta_m=E_x(\xi_{m\sigma_m^*}\mid\mathcal{F}_m) = E_x\bigl(E_x(\mathbb{I}_{\{|\theta_1-m|\le d_1,\,|\theta_2-\sigma_m^*|\le d_2\}}\mid\mathcal{F}_{\sigma_m^*})\bigm|\mathcal{F}_m\bigr) = E_x(\mathbb{I}_{\{|\theta_1-m|\le d_1,\,|\theta_2-\sigma_m^*|\le d_2\}}\mid\mathcal{F}_m) \le E_x\Bigl(\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-j|\le d_1,\,|\theta_2-\sigma_j^*|\le d_2\}}\Bigm|\mathcal{F}_m\Bigr).$

Similarly to the proof of Lemma 1 we have

$\limsup_{m\to\infty}\eta_m(x)\le E_x\Bigl(\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-j|\le d_1,\,|\theta_2-\sigma_j^*|\le d_2\}}\Bigm|\mathcal{F}_\infty\Bigr).$

Since

$\lim_{k\to\infty}\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-j|\le d_1,\,|\theta_2-\sigma_j^*|\le d_2\}} \le \limsup_{j\to\infty}\mathbb{I}_{\{|\theta_1-j|\le d_1\}}=0,$

it follows that

$\lim_{m\to\infty}\eta_m(x)\le\lim_{k\to\infty}E_x\Bigl(\sup_{j\ge k}\mathbb{I}_{\{|\theta_1-j|\le d_1,\,|\theta_2-\sigma_j^*|\le d_2\}}\Bigm|\mathcal{F}_\infty\Bigr)=0.$

Lemmas 1 and 2 describe the method of solving the “disorder problem” formulated in Section 2.

4. Immediate detection of the first and second disorder. For the sake of simplicity we restrict ourselves to the case $d_1=d_2=0$. It will be easily seen how to generalize the solution to $D_{d_1d_2}$ for $d_1>0$ or $d_2>0$.

First we construct multi-dimensional Markov chains such that $\xi_{mn}$ and $\eta_m$ are functions of their states. Set (cf. Yoshida (1983), Szajowski (1992))

$\Pi_n^1(x)=P_x(\theta_1>n\mid\mathcal{F}_n),\qquad \Pi_n^2(x)=P_x(\theta_2>n\mid\mathcal{F}_n),$
$\Pi_{mn}(x)=P_x(\theta_1=m,\ \theta_2>n\mid\mathcal{F}_{mn})\quad\text{for } m,n=1,2,\ldots,\ m<n,$
$H(t,u,\alpha,\beta)=\alpha p_1f_t^1(u)+[p_2(\beta-\alpha)+q_1\alpha]f_t^2(u)+[1-\beta+q_2(\beta-\alpha)]f_t^3(u),$
$\Pi^1(t,u,\alpha,\beta)=p_1\alpha f_t^1(u)\,(H(t,u,\alpha,\beta))^{-1},$
$\Pi^2(t,u,\alpha,\beta)=\{p_1\alpha f_t^1(u)+[\alpha q_1+(\beta-\alpha)p_2]f_t^2(u)\}\,(H(t,u,\alpha,\beta))^{-1},$
$\Pi(t,u,\alpha,\beta,\gamma)=p_2\gamma f_t^2(u)\,(H(t,u,\alpha,\beta))^{-1}.$
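In computational terms, $H$ is the one-step predictive density of the next observation and $\Pi^1$, $\Pi^2$, $\Pi$ are the Bayesian posterior updates of Lemma 3 below. The following sketch transcribes them directly, assuming the transition densities $f_t^i(u)$ are supplied as Python callables `f1`, `f2`, `f3` (names introduced here purely for illustration).

```python
def H(t, u, a, b, p1, p2, f1, f2, f3):
    """Predictive density H(t, u, alpha, beta) of the next observation."""
    q1, q2 = 1.0 - p1, 1.0 - p2
    return (a * p1 * f1(t, u)
            + (p2 * (b - a) + q1 * a) * f2(t, u)
            + (1.0 - b + q2 * (b - a)) * f3(t, u))

def update(t, u, a, b, g, p1, p2, f1, f2, f3):
    """One step of (7)-(9): map (Pi_n^1, Pi_n^2, Pi_mn) = (a, b, g) to
    (Pi_{n+1}^1, Pi_{n+1}^2, Pi_{m,n+1}) given X_n = t, X_{n+1} = u."""
    q1 = 1.0 - p1
    h = H(t, u, a, b, p1, p2, f1, f2, f3)
    a_new = p1 * a * f1(t, u) / h
    b_new = (p1 * a * f1(t, u) + (a * q1 + (b - a) * p2) * f2(t, u)) / h
    g_new = p2 * g * f2(t, u) / h
    return a_new, b_new, g_new
```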

The following auxiliary results will be needed in the proof of the main theorem.

Lemma 3. For each $x\in E$ and $m,n=1,2,\ldots$ with $m<n$, and each Borel function $u:\mathbb{R}\to\mathbb{R}$,

(7) $\Pi_{n+1}^1(x)=\Pi^1(X_n,X_{n+1},\Pi_n^1(x),\Pi_n^2(x)),$

(8) $\Pi_{n+1}^2(x)=\Pi^2(X_n,X_{n+1},\Pi_n^1(x),\Pi_n^2(x)),$

(9) $\Pi_{m,n+1}(x)=\Pi(X_n,X_{n+1},\Pi_n^1(x),\Pi_n^2(x),\Pi_{mn}(x)),$

with the boundary conditions $\Pi_0^1(x)=\Pi_0^2(x)=1$ and

$\Pi_{mm}(x)=\dfrac{q_1f_{X_{m-1}}^2(X_m)}{p_1f_{X_{m-1}}^1(X_m)}\,\Pi_m^1(x),$

and

(10) $E_x(u(X_{n+1})\mid\mathcal{F}_n)=\int_E u(y)\,H(X_n,y,\Pi_n^1(x),\Pi_n^2(x))\,P_{X_n}(dy).$

Proof. Formulas (7), (8) and (10) are proved in Yoshida (1983) and Szajowski (1992). Formula (9) follows from the Bayes formula:

$P_x(\theta_1=j,\ \theta_2=k\mid\mathcal{F}_n) = \begin{cases} P_x(\theta_1=j,\theta_2=k)\prod_{s=1}^{n}f_{x_{s-1}}^1(x_s)\,(S_n(x_0,x_1,\ldots,x_n))^{-1} & \text{if } j>n,\\[4pt] P_x(\theta_1=j,\theta_2=k)\prod_{s=1}^{j-1}f_{x_{s-1}}^1(x_s)\prod_{t=j}^{n}f_{x_{t-1}}^2(x_t)\,(S_n(x_0,x_1,\ldots,x_n))^{-1} & \text{if } j\le n<k,\\[4pt] P_x(\theta_1=j,\theta_2=k)\prod_{s=1}^{j-1}f_{x_{s-1}}^1(x_s)\prod_{t=j}^{k-1}f_{x_{t-1}}^2(x_t)\prod_{u=k}^{n}f_{x_{u-1}}^3(x_u)\,(S_n(x_0,x_1,\ldots,x_n))^{-1} & \text{if } k\le n,\end{cases}$

where

$S_n(x_0,x_1,\ldots,x_n) = \sum_{j=1}^{n-1}\sum_{k=j+1}^{n}\Bigl\{p_1^{j-1}q_1p_2^{k-j-1}q_2\prod_{s=1}^{j-1}f_{x_{s-1}}^1(x_s)\prod_{t=j}^{k-1}f_{x_{t-1}}^2(x_t)\prod_{u=k}^{n}f_{x_{u-1}}^3(x_u)\Bigr\} + \sum_{j=1}^{n}\Bigl\{p_1^{j-1}q_1p_2^{n-j}\prod_{s=1}^{j-1}f_{x_{s-1}}^1(x_s)\prod_{t=j}^{n}f_{x_{t-1}}^2(x_t)\Bigr\} + p_1^n\prod_{s=1}^{n}f_{x_{s-1}}^1(x_s).$

We have

$\Pi_{m,n+1}(x)=P_x(\theta_1=m,\ \theta_2>n+1\mid\mathcal{F}_{n+1}) = p_2f_{X_n}^2(X_{n+1})\,\Pi_{mn}(x)\,S_n(x_0,x_1,\ldots,x_n)\,(S_{n+1}(x_0,x_1,\ldots,x_{n+1}))^{-1}$

and

$S_{n+1}(x_0,x_1,\ldots,x_{n+1})=H(X_n,X_{n+1},\Pi_n^1(x),\Pi_n^2(x))\,S_n(x_0,x_1,\ldots,x_n).$

Hence

$\Pi_{m,n+1}(x)=\frac{p_2f_{X_n}^2(X_{n+1})\,\Pi_{mn}(x)}{H(X_n,X_{n+1},\Pi_n^1(x),\Pi_n^2(x))}.$

By the above we have

$\xi_{mn}(x)=P_x(\theta_1=m,\ \theta_2=n\mid\mathcal{F}_{mn}) = p_1^{m-1}q_1p_2^{n-m-1}q_2\prod_{s=1}^{m-1}f_{x_{s-1}}^1(x_s)\prod_{t=m}^{n-1}f_{x_{t-1}}^2(x_t)\,f_{X_{n-1}}^3(X_n)\,(S_n(x_0,x_1,\ldots,x_n))^{-1} = \frac{q_2}{p_2}\,\Pi_{mn}(x)\,\frac{f_{X_{n-1}}^3(X_n)}{f_{X_{n-1}}^2(X_n)}.$

We can observe that $(X_n,X_{n+1},\Pi_n^1(x),\Pi_n^2(x),\Pi_{mn}(x))$ for $n=m+1,m+2,\ldots$ is a function of $(X_{n-1},X_n,\Pi_{n-1}^1(x),\Pi_{n-1}^2(x),\Pi_{m,n-1}(x))$ and $X_{n+1}$. Moreover, the conditional distribution of $X_{n+1}$ given $\mathcal{F}_n$ (cf. (10)) depends on $X_n$, $\Pi_n^1(x)$ and $\Pi_n^2(x)$ only. These facts imply that $\{(X_n,X_{n+1},\Pi_n^1(x),\Pi_n^2(x),\Pi_{mn}(x))\}_{n=m+1}^\infty$ forms a homogeneous Markov process (see Chapter 2.15 of Shiryaev (1978)). This allows us to reduce problem (4), for each $m$, to the optimal stopping problem for the Markov process $Z_m(x)=\{(X_{n-1},X_n,\Pi_n^1(x),\Pi_n^2(x),\Pi_{mn}(x)),\ m,n\in\mathbb{N},\ m<n,\ x\in E\}$ with the reward function

$h(t,u,\alpha,\beta,\gamma)=\frac{q_2}{p_2}\,\gamma\,\frac{f_t^3(u)}{f_t^2(u)}.$


Lemma 4. The solution of the optimal stopping problem (4) for $m=1,2,\ldots$ has the form

(11) $\sigma_m^*=\inf\Bigl\{n>m:\dfrac{f_{X_{n-1}}^3(X_n)}{f_{X_{n-1}}^2(X_n)}\ge R^*(X_n)\Bigr\},$

where $R^*(t)=p_2\int_E r^*(t,s)f_t^2(s)\,P_t(ds)$. Here $r^*=\lim_{n\to\infty}r_n$, where $r_0(t,u)=f_t^3(u)/f_t^2(u)$ and

(12) $r_{n+1}(t,u)=\max\Bigl\{\dfrac{f_t^3(u)}{f_t^2(u)},\ p_2\int_E r_n(u,s)f_u^2(s)\,P_u(ds)\Bigr\}.$

The function $r^*(t,u)$ satisfies the equation

(13) $r^*(t,u)=\max\Bigl\{\dfrac{f_t^3(u)}{f_t^2(u)},\ p_2\int_E r^*(u,s)f_u^2(s)\,P_u(ds)\Bigr\}.$

The value of the problem is

(14) $\eta_m=E_x(\eta_{m,m+1}\mid\mathcal{F}_m)=\frac{q_2}{p_2}\,\frac{q_1}{p_1}\,\Pi_m^1(x)\,R^*(X_m).$
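Numerically, (12) is a fixed-point iteration that can be run on a grid. The sketch below is one possible discretization, with a finite grid over $E$ and quadrature weights `w` standing in for $P_u(ds)$; the grid, the weights and the function names are assumptions of this illustration, not part of the paper.

```python
import numpy as np

def solve_r_star(f2, f3, p2, grid, w, n_iter=500, tol=1e-10):
    """Iterate recursion (12) to its limit r* and return the threshold
    R*(t) = p2 * int_E r*(t, s) f_t^2(s) P_t(ds) of rule (11)."""
    ratio = np.array([[f3(t, u) / f2(t, u) for u in grid] for t in grid])
    f2mat = np.array([[f2(u, s) for s in grid] for u in grid])
    r = ratio.copy()                        # r_0(t, u) = f_t^3(u) / f_t^2(u)
    for _ in range(n_iter):
        # continuation term: p2 * int_E r(u, s) f_u^2(s) P_u(ds)
        cont = p2 * (r * f2mat) @ w
        r_new = np.maximum(ratio, cont[np.newaxis, :])
        if np.max(np.abs(r_new - r)) < tol:
            break
        r = r_new
    R_star = p2 * (r * f2mat) @ w           # R* evaluated on the grid
    return r, R_star
```

With `R_star` in hand, the second alarm (11) is raised at the first $n>m$ for which the likelihood ratio $f_{X_{n-1}}^3(X_n)/f_{X_{n-1}}^2(X_n)$ reaches $R^*(X_n)$.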

Proof. For any Borel function $u:E\times E\times[0,1]^3\to[0,1]$ define two operators

$T_xu(t,s,\alpha,\beta,\gamma)=E_x(u(X_n,X_{n+1},\Pi_{n+1}^1(x),\Pi_{n+1}^2(x),\Pi_{m,n+1}(x))\mid X_{n-1}=t,\ X_n=s,\ \Pi_n^1(x)=\alpha,\ \Pi_n^2(x)=\beta,\ \Pi_{mn}(x)=\gamma)$

and

$Q_xu(t,s,\alpha,\beta,\gamma)=\max\{u(t,s,\alpha,\beta,\gamma),\ T_xu(t,s,\alpha,\beta,\gamma)\}.$

By the well-known theorem from the theory of optimal stopping (see Shiryaev (1978), Ch. 2, and Nikolaev (1981)) we conclude that the solution of (4) is the Markov time

$\sigma_m^*=\inf\{n>m:h(X_{n-1},X_n,\Pi_n^1(x),\Pi_n^2(x),\Pi_{mn}(x)) = h^*(X_{n-1},X_n,\Pi_n^1(x),\Pi_n^2(x),\Pi_{mn}(x))\},$

where $h^*=\lim_{k\to\infty}Q_x^kh(t,u,\alpha,\beta,\gamma)$. Then

$T_xh(t,u,\alpha,\beta,\gamma)=E_x\Bigl(\frac{q_2}{p_2}\,\Pi_{m,n+1}(x)\,\frac{f_{X_n}^3(X_{n+1})}{f_{X_n}^2(X_{n+1})}\Bigm|X_{n-1}=t,\ X_n=u,\ \Pi_n^1(x)=\alpha,\ \Pi_n^2(x)=\beta,\ \Pi_{mn}(x)=\gamma\Bigr)$
$=\frac{q_2}{p_2}\,\gamma p_2\,E\Bigl(\frac{f_u^2(X_{n+1})}{H(u,X_{n+1},\alpha,\beta)}\cdot\frac{f_u^3(X_{n+1})}{f_u^2(X_{n+1})}\Bigm|\mathcal{F}_n\Bigr)$
$\overset{(10)}{=}q_2\gamma\int_E\frac{f_u^3(s)}{H(u,s,\alpha,\beta)}\,H(u,s,\alpha,\beta)\,P_u(ds)=q_2\gamma$

and

(15) $Q_xh(t,u,\alpha,\beta,\gamma)=\frac{q_2}{p_2}\,\gamma\max\Bigl\{\frac{f_t^3(u)}{f_t^2(u)},\ p_2\Bigr\}.$

Define $r_n(t,u)$ as in the statement of the lemma. We show that

(16) $Q_x^lh(t,u,\alpha,\beta,\gamma)=\frac{q_2}{p_2}\,\gamma\,r_l(t,u)$

for $l=1,2,\ldots$ By (15) we have $Q_xh=(q_2/p_2)\gamma r_1$. Assume (16) for $l\le k$.

By (10) we have

$T_xQ_x^kh(t,u,\alpha,\beta,\gamma)=E_x\Bigl(\frac{q_2}{p_2}\,\Pi_{m,n+1}(x)\,r_k(X_n,X_{n+1})\Bigm|X_{n-1}=t,\ X_n=u,\ \Pi_n^1(x)=\alpha,\ \Pi_n^2(x)=\beta,\ \Pi_{mn}(x)=\gamma\Bigr) = \frac{q_2}{p_2}\,\gamma p_2\int_E r_k(u,s)f_u^2(s)\,P_u(ds).$

It is easy to show (see Shiryaev (1978)) that

$Q_x^{k+1}h=\max\{h,\ T_xQ_x^kh\}\quad\text{for }k=1,2,\ldots$

Hence we get $Q_x^{k+1}h=(q_2/p_2)\gamma r_{k+1}$ and (16) is proved for $l=1,2,\ldots$ This gives

$h^*(t,u,\alpha,\beta,\gamma)=\frac{q_2}{p_2}\,\gamma\lim_{k\to\infty}r_k(t,u)=\frac{q_2}{p_2}\,\gamma\,r^*(t,u)$

and

$\eta_{mn}=\operatorname*{ess\,sup}_{(\tau,\sigma)\in\mathcal{T}_{mn}}E_x(\xi_{\tau\sigma}\mid\mathcal{F}_{mn}) = h^*(X_{n-1},X_n,\Pi_n^1(x),\Pi_n^2(x),\Pi_{mn}(x)).$

We have

$T_xh^*(t,u,\alpha,\beta,\gamma)=\frac{q_2}{p_2}\,\gamma p_2\int_E r^*(u,s)f_u^2(s)\,P_u(ds)=\frac{q_2}{p_2}\,\gamma R^*(u),$

and $\sigma_m^*$ has the form (11). By (5) and (10) we obtain

(17) $\eta_m(x)=f(X_{m-1},X_m,\Pi_m^1(x),\Pi_m^2(x))=E(\eta_{m,m+1}\mid\mathcal{F}_m)$
(18) $=E\bigl((q_2/p_2)\,\Pi_{m,m+1}\,r^*(X_m,X_{m+1})\bigm|\mathcal{F}_m\bigr)$
(19) $=\frac{q_2}{p_2}\,\Pi_{mm}\int_E r^*(X_m,s)f_{X_m}^2(s)\,P_{X_m}(ds).$

By Lemmas 3 and 4 the optimal stopping problem (6) has been transformed into the optimal stopping problem for the homogeneous Markov process

$W=\{(X_{m-1},X_m,\Pi_m^1(x),\Pi_m^2(x)),\ m\in\mathbb{N},\ x\in E\}$

with the reward function

$f(t,u,\alpha,\beta)=\frac{q_1q_2}{p_1p_2}\cdot\frac{f_t^2(u)}{f_t^1(u)}\,\alpha R^*(u).$

Lemma 5. The solution of the optimal stopping problem (6) for $n=1,2,\ldots$ has the form

(20) $\tau_n^*=\inf\{k\ge n:(X_{k-1},X_k,\Pi_k^1(x),\Pi_k^2(x))\in B^*\},$

where $B^*=\bigl\{(t,u,\alpha,\beta):f_t^2(u)/f_t^1(u)\ge p_1\int_E v^*(u,s)f_u^1(s)\,P_u(ds)\bigr\}$. Here $v^*(t,u)=\lim_{n\to\infty}v_n(t,u)$, where $v_0(t,u)=R^*(u)$ and

(21) $v_{n+1}(t,u)=\max\Bigl\{\dfrac{f_t^2(u)}{f_t^1(u)},\ p_1\int_E v_n(u,s)f_u^1(s)\,P_u(ds)\Bigr\}.$

The function $v^*(t,u)$ satisfies the equation

(22) $v^*(t,u)=\max\Bigl\{\dfrac{f_t^2(u)}{f_t^1(u)},\ p_1\int_E v^*(u,s)f_u^1(s)\,P_u(ds)\Bigr\}.$

The value of the problem is $V_n=\frac{q_1q_2}{p_1p_2}\,\Pi_n^1(x)\,v^*(X_{n-1},X_n)$.
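The recursion (21) has the same fixed-point structure as (12), so the same grid discretization applies. In the hedged sketch below (same assumed grid and quadrature weights as in the previous listing), `R_star` is the grid vector produced by `solve_r_star`, used as the initial condition $v_0(t,u)=R^*(u)$.

```python
import numpy as np

def solve_v_star(f1, f2, R_star, p1, grid, w, n_iter=500, tol=1e-10):
    """Iterate recursion (21) to its limit v* and return, together with
    v*, the threshold p1 * int_E v*(u, s) f_u^1(s) P_u(ds) defining B*."""
    ratio = np.array([[f2(t, u) / f1(t, u) for u in grid] for t in grid])
    f1mat = np.array([[f1(u, s) for s in grid] for u in grid])
    v = np.tile(R_star, (len(grid), 1))     # v_0(t, u) = R*(u)
    for _ in range(n_iter):
        cont = p1 * (v * f1mat) @ w         # p1 * int_E v(u, s) f_u^1(s) P_u(ds)
        v_new = np.maximum(ratio, cont[np.newaxis, :])
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v, p1 * (v * f1mat) @ w
```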

Proof. For any Borel function $u:E\times E\times[0,1]^2\to[0,1]$ define two operators

$T_xu(t,s,\alpha,\beta)=E_x(u(X_n,X_{n+1},\Pi_{n+1}^1(x),\Pi_{n+1}^2(x))\mid X_{n-1}=t,\ X_n=s,\ \Pi_n^1(x)=\alpha,\ \Pi_n^2(x)=\beta)$

and

$Q_xu(t,s,\alpha,\beta)=\max\{u(t,s,\alpha,\beta),\ T_xu(t,s,\alpha,\beta)\}.$

As in the proof of Lemma 4 we conclude that the solution of (6) is the Markov time

$\tau_m^*=\inf\{n>m:f(X_{n-1},X_n,\Pi_n^1(x),\Pi_n^2(x))=f^*(X_{n-1},X_n,\Pi_n^1(x),\Pi_n^2(x))\},$

where $f^*=\lim_{k\to\infty}Q_x^kf(t,u,\alpha,\beta)$. We have

$T_xf(t,u,\alpha,\beta)=E_x\Bigl(\frac{q_1q_2}{p_1p_2}\,\Pi_{n+1}^1(x)\,\frac{f_{X_n}^2(X_{n+1})}{f_{X_n}^1(X_{n+1})}\,R^*(X_{n+1})\Bigm|X_{n-1}=t,\ X_n=u,\ \Pi_n^1(x)=\alpha,\ \Pi_n^2(x)=\beta\Bigr)$
$=\frac{q_1q_2}{p_1p_2}\,\alpha p_1\,E\Bigl(\frac{f_u^1(X_{n+1})}{H(u,X_{n+1},\alpha,\beta)}\cdot\frac{f_u^2(X_{n+1})}{f_u^1(X_{n+1})}\,R^*(X_{n+1})\Bigm|\mathcal{F}_n\Bigr)$
$\overset{(10)}{=}\frac{q_1q_2}{p_1p_2}\,\alpha p_1\int_E\frac{f_u^2(s)}{H(u,s,\alpha,\beta)}\,H(u,s,\alpha,\beta)\,R^*(s)\,P_u(ds)$
$=\frac{q_1q_2}{p_1p_2}\,\alpha p_1\int_E R^*(s)\,f_u^2(s)\,P_u(ds)$

and

(23) $Q_xf(t,u,\alpha,\beta)=\frac{q_1q_2}{p_1p_2}\,\alpha\max\Bigl\{\frac{f_t^2(u)}{f_t^1(u)},\ p_1\int_E R^*(s)f_u^2(s)\,P_u(ds)\Bigr\}=\frac{q_1q_2}{p_1p_2}\,\alpha\,v_1(t,u).$

We show that

(24) $Q_x^lf(t,u,\alpha,\beta)=\frac{q_1q_2}{p_1p_2}\,\alpha\,v_l(t,u)$

for $l=1,2,\ldots$, where $v_l(t,u)$ is defined as in the statement of the lemma.

By (23) we have $Q_xf=\frac{q_1q_2}{p_1p_2}\alpha v_1$. Assume (24) for $l\le k$. By (10) we have

$T_xQ_x^kf(t,u,\alpha,\beta)=E_x\Bigl(\frac{q_1q_2}{p_1p_2}\,\Pi_{n+1}^1(x)\,v_k(X_n,X_{n+1})\Bigm|X_{n-1}=t,\ X_n=u,\ \Pi_n^1(x)=\alpha,\ \Pi_n^2(x)=\beta\Bigr) = \frac{q_1q_2}{p_1p_2}\,\alpha p_1\int_E v_k(u,s)f_u^1(s)\,P_u(ds).$

Hence $Q_x^{k+1}f=\frac{q_1q_2}{p_1p_2}\alpha v_{k+1}$ and (24) is proved for $l=1,2,\ldots$ This gives

$f^*(t,u,\alpha,\beta)=\frac{q_1q_2}{p_1p_2}\,\alpha\lim_{k\to\infty}v_k(t,u)=\frac{q_1q_2}{p_1p_2}\,\alpha\,v^*(t,u)$

and

$V_m=\frac{q_1q_2}{p_1p_2}\,\Pi_m^1\,v^*(X_{m-1},X_m).$

We have

$T_xf^*(t,u,\alpha,\beta)=\frac{q_1q_2}{p_1p_2}\,\alpha p_1\int_E v^*(u,s)f_u^1(s)\,P_u(ds).$

It follows that $\tau_n^*$ has the form (20). The value of the problems (6) and (3) is

$v_0(x)=E_x(V_1\mid\mathcal{F}_0)=\frac{q_1q_2}{p_1p_2}\,E_x\bigl(\Pi_1^1(x)\,v^*(x,X_1)\bigr) = \frac{q_1q_2}{p_2}\int_E v^*(x,s)f_x^1(s)\,P_x(ds).$

By Lemmas 4 and 5 the solution of the problem $D_{00}$ can be formulated as follows.

Theorem 4.1. The compound stopping time $(\tau^*,\sigma_{\tau^*}^*)$, where $\sigma_m^*$ is given by (11) and $\tau^*=\tau_0^*$ is given by (20), is a solution of the problem $D_{00}$. The value of the problem is

$P_x(\tau^*<\sigma^*<\infty,\ \theta_1=\tau^*,\ \theta_2=\sigma_{\tau^*}^*)=\frac{q_1q_2}{p_2}\int_E v^*(x,s)f_x^1(s)\,P_x(ds).$
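Put together, the optimal strategy is a pair of one-sided likelihood-ratio rules. The sketch below runs them on an observed path; `thr1` is assumed to interpolate the threshold vector returned by `solve_v_star` and `R_star` the one returned by `solve_r_star` (the interpolation step, like the grid itself, is an assumption of this illustration). Note that in this reconstruction neither rule needs the posteriors $\Pi_n^1$, $\Pi_n^2$ online; the prior parameters enter only through the precomputed thresholds.

```python
def detect(xs, f1, f2, f3, thr1, R_star):
    """Apply Theorem 4.1 to a path xs: tau* is the first k with
    f^2/f^1 >= thr1(X_k), i.e. rule (20), and sigma* is the first
    n > tau* with f^3/f^2 >= R*(X_n), i.e. rule (11).
    thr1 and R_star are callables E -> R; returns None if no alarm fires."""
    tau = next((k for k in range(1, len(xs))
                if f2(xs[k-1], xs[k]) / f1(xs[k-1], xs[k]) >= thr1(xs[k])),
               None)
    if tau is None:
        return None, None
    sigma = next((n for n in range(tau + 1, len(xs))
                  if f3(xs[n-1], xs[n]) / f2(xs[n-1], xs[n]) >= R_star(xs[n])),
                 None)
    return tau, sigma
```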

Remark 1. The problem can be extended to optimal detection of more than two successive disorders. The distribution of $\theta_1$, $\theta_2$ may also be more general. General a priori distributions of the disorder moments lead to more complicated formulae, since the corresponding Markov chains are not homogeneous.

References

T. Bojdecki (1979), Probability maximizing approach to optimal stopping and its application to a disorder problem, Stochastics 3, 61–71.

T. Bojdecki (1982), Probability maximizing method in problems of sequential analysis, Mat. Stos. 21, 5–37 (in Polish).

Y. Chow, H. Robbins and D. Siegmund (1971), Great Expectations: The Theory of Optimal Stopping, Houghton Mifflin, Boston.

G. Haggstrom (1967), Optimal sequential procedures when more than one stop is required, Ann. Math. Statist. 38, 1618–1626.

M. Nikolaev (1979), Generalized sequential procedures, Lit. Mat. Sb. 19, 35–44 (in Russian).

M. Nikolaev (1981), On an optimality criterion for a generalized sequential procedure, ibid. 21, 75–82 (in Russian).

A. Shiryaev (1978), Optimal Stopping Rules, Springer, New York.

K. Szajowski (1992), Optimal on-line detection of outside observations, J. Statist. Plann. Inference 30, 413–422.

M. Yoshida (1983), Probability maximizing approach for a quickest detection problem with complicated Markov chain, J. Inform. Optim. Sci. 4, 127–145.

Krzysztof Szajowski, Institute of Mathematics, Technical University of Wrocław, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland; E-mail: szajow@im.pwr.wroc.pl

Received on 15.3.1996
