When studying probability I have always had this doubt. Suppose we have a sample space $S$; then an event is a subset of $S$, that is, an element of $\mathcal{P}(S)$. The axioms for a probability function state that it is a function $P : \mathcal{P}(S) \to \mathbb{R}$ such that
- If $A\in \mathcal{P}(S)$ then $P(A) \geq 0$
- $P(S) = 1$
- If $\{A_i\}$ is a countable sequence of pairwise disjoint elements of $\mathcal{P}(S)$, then $P\left(\bigcup_{i} A_i\right) = \sum_{i} P(A_i)$
From that it is possible to deduce many things like $P(\emptyset) = 0$, a formula for probability of the union, and so on.
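For example, the first of these facts follows by applying countable additivity to the sequence $S, \emptyset, \emptyset, \dots$, whose members are pairwise disjoint:

$$1 = P(S) = P(S \cup \emptyset \cup \emptyset \cup \cdots) = P(S) + \sum_{i=1}^{\infty} P(\emptyset),$$

which forces $P(\emptyset) = 0$, since an infinite sum of a fixed nonnegative number can only converge if that number is zero.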
Now, nothing says how one should define this function $P$. Indeed, it could be anything, provided it satisfies the axioms. Sometimes one can assume that every singleton $\{\omega\}\subset S$ has equal probability. In that case, if $S$ is finite with $n$ elements, we have
$$P(S) = P\left(\bigcup_{i=1}^n \{\omega_i\}\right) = \sum_{i=1}^n P(\{\omega_i\}) = np = 1$$
so that the probability of each singleton is $p = P(\{\omega_i\}) = 1/n$.
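As a sanity check, this uniform assignment is easy to sketch in code. The sample space below (a six-sided die) is just an illustrative choice, not anything from the discussion above; exact rational arithmetic makes the axioms verifiable without floating-point error.

```python
from fractions import Fraction

# Illustrative finite sample space: outcomes of a six-sided die.
S = {1, 2, 3, 4, 5, 6}
n = len(S)

# Under the equal-probability assumption, each singleton gets p = 1/n.
p = Fraction(1, n)

def P(event):
    """Probability of an event (a subset of S) under the uniform assignment."""
    return sum(p for outcome in event if outcome in S)

print(P(S))           # 1      -- the axiom P(S) = 1 holds
print(P({2, 4, 6}))   # 1/2    -- additivity over the three disjoint singletons
```

Finite additivity holds automatically here, because `P` is literally defined as a sum over the singletons making up the event.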
Apart from this situation (the assumption of equal probabilities on a finite $S$), I don't know how one builds a probability function $P$, that is, how one defines the probability of each event.
How is that done? Is it something done arbitrarily, based on observation of a particular situation, with the only constraint being that the axioms are obeyed?
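To make the question concrete: any nonnegative weights on the singletons that sum to $1$ do define a valid $P$ on a finite $S$, whether or not they are equal. The biased die below is a made-up example of such a non-uniform assignment, not a claim about where the weights should come from.

```python
from fractions import Fraction

# Hypothetical non-uniform weights w(ω) ≥ 0 with Σ w(ω) = 1:
# a die biased heavily toward 6.
weights = {1: Fraction(1, 12), 2: Fraction(1, 12), 3: Fraction(1, 12),
           4: Fraction(1, 12), 5: Fraction(1, 12), 6: Fraction(7, 12)}

def P_biased(event):
    """P(A) = Σ_{ω ∈ A} w(ω); valid because the weights are ≥ 0 and sum to 1."""
    return sum(weights[outcome] for outcome in event)

# The axioms hold by construction:
assert P_biased(weights.keys()) == 1            # P(S) = 1
assert P_biased({1, 2}) == P_biased({1}) + P_biased({2})  # additivity on disjoint events
```

So the axioms constrain the assignment but do not single one out; which weights to use (uniform, frequency-based, or otherwise) is exactly the modeling choice the question is about.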