Thinking (as I often do to understand probability) about coin flipping, I'm looking for someone to explain how, for a coin with probability p of flipping heads, we can investigate some of its probabilistic properties. I've tried to make this as general as possible, so we restrict p to 0 < p < 1.
I've found the expected number of heads in n flips to be np and the variance of the number of heads to be p(n−p). If these are wrong, I'd appreciate a correction, though intuitively the former at least seems right.
Suppose then we have Y heads in total. If we look at the flips individually, say we define a function Xi that takes value 1 if the ith flip is heads and 0 if it's tails, how can we determine E[Xi|Y] (which I imagine, by symmetry, we can rewrite as E[X1|Y]), and how can we also determine E[Y|Xi]?
Can we also find the expected number of flips before the first head?
I'm quite interested in seeing where these answers come from, so any help would be really useful. Thanks, MM.
EDIT 1
Variance for first case is np(1−p) rather than p(n−p).
Answer
You're dealing with the binomial distribution. Wikipedia has means, variances and more for all the widely used distributions. Your mean is correct, but the variance is a bit off; it's p(1−p)n.
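As a sanity check (not part of the original answer), here is a quick Monte Carlo sketch; the choices n = 10, p = 0.3 and the trial count are arbitrary:

```python
import random

def simulate_binomial(n, p, trials=200_000, seed=0):
    """Estimate the mean and variance of the number of heads in n flips."""
    rng = random.Random(seed)
    # Each trial: count heads among n independent flips with P(heads) = p.
    counts = [sum(rng.random() < p for _ in range(n)) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return mean, var

mean, var = simulate_binomial(n=10, p=0.3)
# mean should be close to np = 3.0, var close to np(1-p) = 2.1
```

With 200,000 trials the estimates typically land within a few hundredths of the exact values.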
By linearity of expectation, the expected values of all the Xi given Y=y must add up to y, so E[Xi|Y=y]=y/n.
The expected value of Y given Xi=x is just x plus the expected value of the remaining n−1 flips, which is (n−1)p, so E[Y|Xi=x]=x+(n−1)p.
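Both conditional expectations can be checked empirically by tallying outcomes; this sketch (my addition, with n = 10 and p = 0.3 chosen arbitrarily) conditions on Y and on X1:

```python
import random

def conditional_checks(n=10, p=0.3, trials=300_000, seed=1):
    """Estimate E[X1 | Y = y] and E[Y | X1 = x] by binning simulated flips."""
    rng = random.Random(seed)
    sum_x1_given_y, cnt_y = {}, {}
    sum_y_given_x1, cnt_x1 = {0: 0, 1: 0}, {0: 0, 1: 0}
    for _ in range(trials):
        flips = [rng.random() < p for _ in range(n)]
        y = sum(flips)           # total number of heads
        x1 = int(flips[0])       # outcome of the first flip
        sum_x1_given_y[y] = sum_x1_given_y.get(y, 0) + x1
        cnt_y[y] = cnt_y.get(y, 0) + 1
        sum_y_given_x1[x1] += y
        cnt_x1[x1] += 1
    e_x1_given_y = {y: s / cnt_y[y] for y, s in sum_x1_given_y.items()}
    e_y_given_x1 = {x: sum_y_given_x1[x] / cnt_x1[x] for x in (0, 1)}
    return e_x1_given_y, e_y_given_x1

e_x1_given_y, e_y_given_x1 = conditional_checks()
# e_x1_given_y[3] should be close to 3/10 = 0.3
# e_y_given_x1[1] should be close to 1 + 9 * 0.3 = 3.7
# e_y_given_x1[0] should be close to 0 + 9 * 0.3 = 2.7
```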
[Edit in response to the comment:]
The probability for the first heads to occur on the k-th flip is given by the geometric distribution (1−p)^(k−1) p. It has the finite mean 1/p. However, there's nothing strange in general about a value being finite with probability 1 yet having infinite expected value. This is the case for instance for (1−p)^(−k) (where k is again the number of flips until the first occurrence of heads).
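The geometric mean 1/p is easy to verify by simulation; this sketch (my addition, with p = 0.25 chosen arbitrarily) counts flips up to and including the first head:

```python
import random

def flips_until_first_head(p, rng):
    """Flip until a head appears; return the count including the head itself."""
    k = 1
    while rng.random() >= p:  # each failed flip has probability 1 - p
        k += 1
    return k

def estimate_geometric_mean(p=0.25, trials=200_000, seed=2):
    rng = random.Random(seed)
    return sum(flips_until_first_head(p, rng) for _ in range(trials)) / trials

# estimate should be close to 1/p = 4.0
```

Note the divergence claim can be seen analytically: each term of E[(1−p)^(−k)] equals (1−p)^(−k) · (1−p)^(k−1) p = p/(1−p), so the series diverges.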