Tuesday, 3 March 2015

probability - Proving two random variables are uncorrelated but not independent



I'm trying to find two random variables that are not independent, i.e. there exist $a$ and $b$ such that
$$P(X=a,\, Y=b) \neq P(X=a)\,P(Y=b),$$
but for which
$$E[XY] = E[X]\,E[Y].$$



I'm trying to understand some of the examples of this that I have seen, but they almost always define two random variables $X$ and $Y$ either in terms of each other, or in terms of a third random variable. This seems to be at odds with the formal definition of a random variable that I am accustomed to. That is, if we have a probability space $(\Omega, P)$, then a random variable $X$ is a mapping $X : \Omega \to \mathbb{R}$ (or some other space, but let's use $\mathbb{R}$ for simplicity).




So to say something like: $\Omega = \{-1, 0, 1\}$, $P$ is uniform, $X(a) = a$ for all $a \in \Omega$, and $Y = X^2$, doesn't seem like an example, since $Y$ does not formally meet the definition of a random variable.



I'm also unsure of what values I would even sum over in computing $E[XY]$ for this case. Any help would be great.


Answer



Your example is fine. $Y$ is a random variable, because $Y$ is a mapping from the probability space to $\mathbb{R}$ given by $Y(\omega) = X^2(\omega) = \omega^2$ for $\omega \in \Omega$. Thus most of the examples you have seen hold.
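As for what to sum over: on a finite probability space, $E[XY] = \sum_{\omega \in \Omega} X(\omega)Y(\omega)\,P(\{\omega\})$. A short check of your example, where each point of $\Omega = \{-1, 0, 1\}$ has probability $1/3$:
$$E[X] = \tfrac{1}{3}(-1) + \tfrac{1}{3}(0) + \tfrac{1}{3}(1) = 0,$$
$$E[XY] = \tfrac{1}{3}(-1)(1) + \tfrac{1}{3}(0)(0) + \tfrac{1}{3}(1)(1) = 0,$$
so $E[XY] = 0 = E[X]\,E[Y]$ and the variables are uncorrelated. They are not independent, since for instance $P(X=1,\, Y=0) = 0$ while $P(X=1)\,P(Y=0) = \tfrac{1}{3} \cdot \tfrac{1}{3}$.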


