
Examples of probability mass functions (discrete variables)


(1)

Introduction to theory of probability and statistics

Lecture 6.

Examples of probability mass functions (discrete variables)

prof. dr hab. inż. Katarzyna Zakrzewska, Katedra Elektroniki, AGH

e-mail: zak@agh.edu.pl

(2)

Outline:

Definitions of mean and variance for discrete variables

Discrete uniform distribution

Binomial (Bernoulli) distribution

Geometric distribution

Poisson distribution

(3)

MEAN AND VARIANCE OF A DISCRETE RANDOM VARIABLE

Mean and variance are two measures that do not uniquely identify a probability distribution. Below you can find two different distributions that have the same mean and variance.
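The slide's own figure with the two distributions is not reproduced in this text. As an illustration only, here is a minimal Python sketch with two hypothetical discrete distributions (not the ones from the slide) that share the same mean and variance:

```python
# Two *different* discrete distributions with the same mean and variance.
# A and B below are hypothetical examples, not the slide's own figure.

def mean(dist):
    """Expected value of a discrete distribution given as {value: probability}."""
    return sum(x * p for x, p in dist.items())

def variance(dist):
    """Variance V(X) = E[(X - mu)^2]."""
    mu = mean(dist)
    return sum((x - mu) ** 2 * p for x, p in dist.items())

A = {-2: 0.5, 2: 0.5}               # two-point distribution
B = {-4: 0.125, 0: 0.75, 4: 0.125}  # three-point distribution

print(mean(A), variance(A))  # 0.0 4.0
print(mean(B), variance(B))  # 0.0 4.0
```

Both print mean 0 and variance 4, even though the distributions are clearly different.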

(4)

MEAN AND VARIANCE OF A DISCRETE RANDOM VARIABLE

The variance of a random variable X can be considered to be the expected value of a specific function of X, namely h(X) = (X − μ)²:

V(X) = E[(X − μ)²] = Σx (x − μ)² f(x)

In general, the expected value of any function h(X) of a discrete random variable is defined in a similar manner:

E[h(X)] = Σx h(x) f(x)
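The rule E[h(X)] = Σx h(x) f(x) can be sketched directly; the pmf below is an assumed example, not taken from the lecture:

```python
# Variance as the expected value of h(X) = (X - mu)^2,
# using the general rule E[h(X)] = sum over x of h(x) f(x).

def expect(h, pmf):
    """E[h(X)] for a discrete pmf given as {value: probability}."""
    return sum(h(x) * p for x, p in pmf.items())

pmf = {0: 0.25, 1: 0.5, 2: 0.25}  # example pmf (assumed for illustration)

mu = expect(lambda x: x, pmf)               # E(X)
var = expect(lambda x: (x - mu) ** 2, pmf)  # V(X) = E[(X - mu)^2]

print(mu, var)  # 1.0 0.5
```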

(5)

Discrete uniform distribution

The simplest discrete random variable is one that assumes only a finite number of possible values, each with equal probability.

A random variable X that assumes each of the values x1, x2, …, xn with equal probability 1/n, is frequently of interest.

(6)

Discrete uniform distribution

Suppose the range of the discrete random variable X is the set of consecutive integers a, a+1, a+2, …, b, for a ≤ b.

The range of X contains b − a + 1 values, each with probability 1/(b − a + 1).

By definition, the mean value equals:

μ = E(X) = (b + a)/2

and the variance equals:

σ² = V(X) = ((b − a + 1)² − 1)/12
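The closed forms for the discrete uniform distribution can be checked by direct enumeration; a minimal sketch (a = 3, b = 9 is an arbitrary assumed example), using exact rational arithmetic:

```python
from fractions import Fraction

# Discrete uniform on the integers a..b: each value has probability 1/(b-a+1).
# Check E(X) = (a+b)/2 and V(X) = ((b-a+1)^2 - 1)/12 by enumeration.

def uniform_pmf(a, b):
    n = b - a + 1
    return {x: Fraction(1, n) for x in range(a, b + 1)}

a, b = 3, 9  # assumed example range
pmf = uniform_pmf(a, b)

mu = sum(x * p for x, p in pmf.items())
var = sum((x - mu) ** 2 * p for x, p in pmf.items())

assert mu == Fraction(a + b, 2)
assert var == Fraction((b - a + 1) ** 2 - 1, 12)
print(mu, var)  # 6 4
```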

(7)

Examples of probability distributions – discrete variables

Two-point (zero-one) distribution, e.g. a coin toss: head = failure (x = 0), tail = success (x = 1), p – probability of success. Its distribution:

xi: 0, 1
pi: 1 − p, p

Binomial (Bernoulli) distribution:

P(X = k) = (n choose k) p^k (1 − p)^(n−k),  k = 0, 1, …, n

where 0 < p < 1 and k is the number of successes when sampling n times with replacement, X ∈ {0, 1, …, n}.

For n = 1 this reduces to the two-point distribution.

(8)

Examples of probability distributions – discrete variables

(9)

Binomial distribution

(10)

Binomial distribution - assumptions

Random experiment consists of n Bernoulli trials :

1. Each trial is independent of others.

2. Each trial can have only two results: "success" and "failure" (binary!).

3. Probability of success p is constant.

The probability pk of the event that the random variable X equals k successes in n trials is:

P(X = k) = (n choose k) p^k (1 − p)^(n−k),  k = 0, 1, …, n
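The binomial pmf above can be sketched in a few lines of Python; the parameters n = 4, p = 0.5 are an assumed example, and `math.comb` supplies the binomial coefficient:

```python
from math import comb

# Binomial pmf from the formula above: P(X = k) = C(n, k) p^k (1-p)^(n-k).

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 4, 0.5  # assumed example parameters
probs = [binom_pmf(k, n, p) for k in range(n + 1)]

print(probs)      # [0.0625, 0.25, 0.375, 0.25, 0.0625]
print(sum(probs)) # 1.0 (the pmf sums to one)
```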

(11)

Pascal’s triangle

The symbol

(n choose k) = n! / (k! (n − k)!)

is the binomial coefficient, e.g. (0 choose 0) = 1; (1 choose 0) = (1 choose 1) = 1; (2 choose 0) = 1, (2 choose 1) = 2, (2 choose 2) = 1.

Newton’s binomial:

(a + b)^n = Σ(k = 0 to n) (n choose k) a^k b^(n−k)

(12)

Pascal’s triangle – each entry is the sum of the two entries directly above it:

n = 0:  1
n = 1:  1 1
n = 2:  1 2 1
n = 3:  1 3 3 1
n = 4:  1 4 6 4 1
n = 5:  1 5 10 10 5 1
n = 6:  1 6 15 20 15 6 1
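The triangle and its addition rule can be reproduced with `math.comb`; a minimal sketch:

```python
from math import comb

# Build Pascal's triangle rows n = 0..6 and check the addition rule:
# each entry is the sum of the two entries directly above it.

rows = [[comb(n, k) for k in range(n + 1)] for n in range(7)]
for row in rows:
    print(row)

for n in range(1, 7):
    for k in range(1, n):
        assert rows[n][k] == rows[n - 1][k - 1] + rows[n - 1][k]

# Newton's binomial with a = b = 1: each row sums to (1 + 1)^n = 2^n.
assert all(sum(rows[n]) == 2 ** n for n in range(7))
```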

(13)

Bernoulli distribution

Example 6.1

The probability that the daily use of water in a company will not exceed a certain level is p = 3/4. We monitor the use of water for 6 days.

Calculate the probability that the daily use of water will not exceed the set limit on 0, 1, 2, …, 6 of the monitored days, respectively.

Data:

p = 3/4,  q = 1 − p = 1/4,  N = 6,  k = 0, 1, …, 6

(14)

P(k) = (6 choose k) (3/4)^k (1/4)^(6−k):

P(k=0) = (6 choose 0) (3/4)^0 (1/4)^6
P(k=1) = (6 choose 1) (3/4)^1 (1/4)^5
P(k=2) = (6 choose 2) (3/4)^2 (1/4)^4
P(k=3) = (6 choose 3) (3/4)^3 (1/4)^3
P(k=4) = (6 choose 4) (3/4)^4 (1/4)^2
P(k=5) = (6 choose 5) (3/4)^5 (1/4)^1
P(k=6) = (6 choose 6) (3/4)^6 (1/4)^0

Bernoulli distribution

(15)

P(k=0) = (1/4)^6 = 1/4096 ≈ 0.00024
P(k=1) = 6 · (3/4) (1/4)^5 = 18/4096 ≈ 0.004
P(k=2) = 15 · (3/4)^2 (1/4)^4 = 135/4096 ≈ 0.033
P(k=3) = 20 · (3/4)^3 (1/4)^3 = 540/4096 ≈ 0.132
P(k=4) = 15 · (3/4)^4 (1/4)^2 = 1215/4096 ≈ 0.297
P(k=5) = 6 · (3/4)^5 (1/4) = 1458/4096 ≈ 0.356
P(k=6) = (3/4)^6 = 729/4096 ≈ 0.178

Bernoulli distribution
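The Example 6.1 probabilities can be reproduced in exact arithmetic, so the fractions with denominator 4096 come out exactly; a minimal sketch:

```python
from math import comb
from fractions import Fraction

# Example 6.1: P(k) = C(6, k) (3/4)^k (1/4)^(6-k) for k = 0..6.

p = Fraction(3, 4)
P = {k: comb(6, k) * p**k * (1 - p)**(6 - k) for k in range(7)}

for k, pk in P.items():
    print(k, pk, float(pk))

assert P[5] == Fraction(1458, 4096)  # the maximum, ~0.356
assert max(P, key=P.get) == 5
assert sum(P.values()) == 1
```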

(16)

[Bar plot of P(k) versus k for k = 0, …, 6, with values 0.00024, 0.004, 0.033, 0.132, 0.297, 0.356, 0.178.]

Maximum for k = 5

Bernoulli distribution

(17)

Bernoulli distribution

(18)

Expected value:

E(X) = μ = np

Variance:

V(X) = σ² = np(1 − p)

Bernoulli distribution

(19)

Errors in transmission

Example 6.2

A digital information-transfer channel is prone to single-bit errors. Assume that the probability of a single-bit error is p = 0.1.

Consecutive transmission errors are independent. Let X denote the random variable whose value equals the number of bits in error in a sequence of 4 bits.

E – bit in error, O – no error

OEOE corresponds to X = 2; for EEOO, X = 2 as well (order does not matter)

(20)

Example 6.2 continued

For X = 2 we get the following outcomes:

{EEOO, EOEO, EOOE, OEEO, OEOE, OOEE}

What is the probability P(X = 2), i.e. that exactly two bits will be sent in error?

The trials are independent, thus

P(EEOO) = P(E)P(E)P(O)P(O) = (0.1)^2 (0.9)^2 = 0.0081

The events are mutually exclusive and have the same probability, hence

P(X = 2) = 6 P(EEOO) = 6 (0.1)^2 (0.9)^2 = 6 (0.0081) = 0.0486

Errors in transmission
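The counting argument above can be checked by brute force: enumerate all 4-bit outcomes and keep those with exactly two errors. A minimal sketch:

```python
from itertools import product

# Example 6.2: enumerate all 4-symbol outcomes (E = error, prob 0.1;
# O = no error, prob 0.9) and sum the probability of those with two errors,
# confirming P(X = 2) = 6 * (0.1)^2 * (0.9)^2 = 0.0486.

p = 0.1
total = 0.0
sequences = []
for outcome in product("EO", repeat=4):
    if outcome.count("E") == 2:
        sequences.append("".join(outcome))
        total += p**2 * (1 - p)**2

print(sequences)        # the 6 sequences: EEOO, EOEO, EOOE, OEEO, OEOE, OOEE
print(round(total, 4))  # 0.0486
```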

(21)

Example 6.2 continued

Therefore, P(X = 2) = 6 (0.1)^2 (0.9)^2 is given by the Bernoulli distribution:

(4 choose 2) = 4! / (2! (4 − 2)!) = 6

P(X = x) = (4 choose x) p^x (1 − p)^(4−x),  x = 0, 1, 2, 3, 4,  p = 0.1

P(X = 0) = 0.6561, P(X = 1) = 0.2916, P(X = 2) = 0.0486, P(X = 3) = 0.0036, P(X = 4) = 0.0001

Errors in transmission

(22)

Errors in transmission – calculation of mean and variance

P(X = 0) = 0.6561, P(X = 1) = 0.2916, P(X = 2) = 0.0486, P(X = 3) = 0.0036, P(X = 4) = 0.0001

Mean:

E(X) = Σx x P(X = x) = 0 · 0.6561 + 1 · 0.2916 + 2 · 0.0486 + 3 · 0.0036 + 4 · 0.0001 = 0.4 = np

Variance:

V(X) = Σx (x − μ)^2 P(X = x) = 0.36 = np(1 − p)
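The mean and variance for Example 6.2 can be computed both from the pmf and from the closed forms E(X) = np and V(X) = np(1 − p); a minimal sketch:

```python
from math import comb

# X = number of bit errors in n = 4 bits, p = 0.1 (Example 6.2).

n, p = 4, 0.1
pmf = {x: comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)}

mu = sum(x * px for x, px in pmf.items())            # E(X) from the pmf
var = sum((x - mu) ** 2 * px for x, px in pmf.items())  # V(X) from the pmf

print(round(mu, 4), round(var, 4))  # 0.4 0.36
assert abs(mu - n * p) < 1e-12            # matches E(X) = np
assert abs(var - n * p * (1 - p)) < 1e-12 # matches V(X) = np(1 - p)
```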

(23)

Geometric distribution

A geometric random variable X counts the number of trials until the first success, so P(X = x) = (1 − p)^(x−1) p for x = 1, 2, …. The height of the line at x is (1 − p) times the height of the line at x − 1; that is, the probabilities decrease in a geometric progression. The distribution acquires its name from this result.
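The geometric-progression property can be checked directly from the pmf; a minimal sketch (p = 0.3 is an assumed example):

```python
# Geometric pmf P(X = x) = (1-p)^(x-1) p: consecutive probabilities
# decrease by the constant factor (1 - p).

def geom_pmf(x, p):
    return (1 - p) ** (x - 1) * p

p = 0.3  # assumed example parameter
for x in range(2, 8):
    ratio = geom_pmf(x, p) / geom_pmf(x - 1, p)
    assert abs(ratio - (1 - p)) < 1e-12

print("ratio P(x)/P(x-1) =", 1 - p)
```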

(24)

Geometric distribution

Lack of memory property (the system will not wear out): A geometric random variable has been defined as the number of trials until the first success. However, because the trials are independent, the count of the number of trials until the next success can be started at any trial without changing the probability distribution of the random variable.

Example 6.3 In the transmission of bits, if 100 bits are transmitted, the probability that the first error, after bit 100, occurs on bit 106 is the probability that the next six outcomes are OOOOOE, and can be calculated as

P(X = 6) = (1 − p)^5 p = (0.9)^5 (0.1) ≈ 0.059,  p = 0.1

This result is identical to the probability that the initial error occurs on bit 6.
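The lack-of-memory property of Example 6.3 can be verified numerically: conditioning on the first 100 trials being failures gives the same probability as starting over. A minimal sketch:

```python
# Lack of memory of the geometric distribution, with p = 0.1 as in Example 6.3:
# P(first error on bit 106 | no error in bits 1..100) equals P(X = 6).

p = 0.1

def geom_pmf(x, p):
    return (1 - p) ** (x - 1) * p

# P(X > 100) = (1-p)^100 (the first 100 trials are all failures).
conditional = geom_pmf(106, p) / (1 - p) ** 100

print(round(conditional, 5))  # 0.05905
assert abs(conditional - geom_pmf(6, p)) < 1e-12
```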

(25)

Poisson’s distribution

We introduce a parameter λ = pn (so that E(X) = λ). Substituting p = λ/n into the binomial pmf:

P(X = x) = (n choose x) p^x (1 − p)^(n−x) = (n choose x) (λ/n)^x (1 − λ/n)^(n−x)

Let us assume that n increases while p decreases, but λ=pn remains constant. Bernoulli distribution changes to Poisson’s distribution.

lim(n→∞) P(X = x) = (λ^x / x!) e^(−λ),  x = 0, 1, 2, …

Consider the transmission of n bits over a digital communication channel. Let the random variable X equal the number of bits in error. When the probability p that a bit is in error is constant and the transmissions are independent, X has a binomial distribution.
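The limit can be checked numerically by holding λ = np fixed while n grows; a minimal sketch (λ = 1 and the values of n are assumed for illustration):

```python
from math import comb, exp, factorial

# Binomial -> Poisson: with lambda = n*p held constant, the binomial pmf
# approaches e^(-lam) * lam^x / x! as n grows.

def binom_pmf(x, n, p):
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, lam):
    return exp(-lam) * lam**x / factorial(x)

lam = 1.0
for n in (10, 100, 10000):
    p = lam / n
    print(n, [round(binom_pmf(x, n, p), 4) for x in range(4)])
print("Poisson", [round(poisson_pmf(x, lam), 4) for x in range(4)])
```

As n increases, the printed binomial rows converge to the Poisson row.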

(26)

It is one of the rare cases where the expected value equals the variance:

E(X) = V(X) = λ

Why? For the binomial distribution E(X) = np and V(X) = np(1 − p). In the limit n → ∞, p → 0 with np = λ kept constant:

V(X) = lim(n→∞) np(1 − p) = lim(p→0) (λ − λp) = λ

Poisson’s distribution

(27)

Poisson’s distribution

(28)

Poisson’s distribution

(29)

Poisson’s distribution

(30)

[Plot of p(X) versus x for x = 0 to 25, showing Poisson distributions with λ = 1, λ = 5 and λ = 10.]

Comparison for x = 0, 1, …, 6:

x:                           0      1      2      3      4      5      6
Bernoulli n = 50, p = 0.02:  0.364  0.372  0.186  0.061  0.014  0.003  0.000
Poisson λ = 1:               0.368  0.368  0.184  0.061  0.015  0.003  0.001

(x is an integer, x ≥ 0, with no upper bound.) For big n the Bernoulli (binomial) distribution resembles Poisson’s distribution.

Poisson’s distribution
