Tuesday, 24 November 2015

probability - Showing $\int_0^\infty (1 - F_X(x))\,dx = E(X)$ in both discrete and continuous cases



Ok, according to some notes I have, the following is true for a random variable $X$ that can only take on non-negative values, i.e. $P(X < 0) = 0$:



$$\begin{aligned}
\int_0^\infty \big(1 - F_X(x)\big)\,dx &= \int_0^\infty P(X > x)\,dx \\
&= \int_0^\infty \int_x^\infty f_X(y)\,dy\,dx \\
&= \int_0^\infty \int_0^y dx\, f_X(y)\,dy \\
&= \int_0^\infty y\, f_X(y)\,dy = E(X)
\end{aligned}$$



I'm not seeing the steps here clearly. The first line is obvious, and the second makes sense to me, since the probability of a random variable exceeding a given value is just the density integrated from that value to infinity.



Where I'm lost is why
$$\int_0^\infty \int_x^\infty f_X(y)\,dy\,dx = \int_0^\infty \int_0^y dx\, f_X(y)\,dy.$$



Also, doesn't the last line equal $E(Y)$ and not $E(X)$?



How would we extend this to the discrete case, where the pmf is defined only for values of X in the non-negative integers?




Thank you


Answer



The region of integration for the double integral is $x, y \ge 0$ and $y \ge x$. If you express this integral by first integrating with respect to $y$, then the region of integration for $y$ is $[x, \infty)$. However, if you exchange the order of integration and first integrate with respect to $x$, then the region of integration for $x$ is $[0, y]$. The reason why you get $E(X)$ and not something like $E(Y)$ is that $y$ is just a dummy variable of integration, whereas $X$ is the actual random variable that defines $f_X$.
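One way to spell out that exchange of order is with an indicator function; a sketch of the same region argument:

$$\begin{aligned}
\int_0^\infty \int_x^\infty f_X(y)\,dy\,dx
&= \int_0^\infty \int_0^\infty \mathbf{1}\{y \ge x\}\, f_X(y)\,dy\,dx \\
&= \int_0^\infty f_X(y) \left( \int_0^\infty \mathbf{1}\{x \le y\}\,dx \right) dy \\
&= \int_0^\infty y\, f_X(y)\,dy = E(X),
\end{aligned}$$

where swapping the two integrals is justified by Tonelli's theorem, since the integrand is non-negative.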


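For the discrete case asked about in the question, the same region idea works with sums in place of integrals; a sketch, assuming $X$ takes values in the non-negative integers:

$$\sum_{n=0}^\infty P(X > n)
= \sum_{n=0}^\infty \sum_{k=n+1}^\infty P(X = k)
= \sum_{k=1}^\infty \sum_{n=0}^{k-1} P(X = k)
= \sum_{k=1}^\infty k\, P(X = k) = E(X).$$

Here the region is the set of integer pairs with $0 \le n < k$, and exchanging the order of summation plays the role of exchanging the order of integration. Since $P(X > x) = P(X > n)$ for $x \in [n, n+1)$ when $X$ is integer-valued, the left-hand sum equals $\int_0^\infty (1 - F_X(x))\,dx$ as before.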
