31. CDF of a probability mass function (PMF); CDF of a continuous probability density function; CDF of a distribution with both a continuous component and a discrete part.
32. Right- and left-continuity: right-continuous functions; left-continuous functions.
37. Example: discrete CDF (Poisson). $P(X \le k)$ is the CDF evaluated at $k$, where $\lambda > 0$ is the expected number of occurrences in a given time interval. The CDF of the event that $k$ customers arrive at a bank, given that on average $\lambda = 1$, $4$, or $10$ customers arrive per minute, is shown below:
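The Poisson CDF above can be evaluated directly from its defining series, $P(X \le k) = e^{-\lambda} \sum_{i=0}^{k} \lambda^i / i!$. A minimal sketch (in Python for illustration; the code elsewhere in these notes is MATLAB):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam): e^{-lam} * sum of lam^i / i! for i = 0..k."""
    return math.exp(-lam) * sum(lam**i / math.factorial(i) for i in range(k + 1))

# CDF of the number of customers arriving in one minute, for the three rates above
for lam in (1, 4, 10):
    print(lam, [round(poisson_cdf(k, lam), 3) for k in range(6)])
```

Note how the CDF rises quickly for $\lambda = 1$ but stays near zero over small $k$ for $\lambda = 10$, since the mass shifts to larger counts.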
\subsection{Generalities on random variables}

Let $(\Omega,\sigma_\Omega,P_\Omega)$ be a probability space and $(X,\sigma_X)$ be a measurable space. A \emph{random variable}\index{random variable} $X$ is a $(\sigma_\Omega\text{-}\sigma_X)$-measurable%\footnote{A mapping $x$ is said to be $\sigma_\Omega$-$\sigma_X$ \emph{measurable} if and only if for all $A \in \sigma_X$, $x^{-1}(A) \in \sigma_\Omega$.}
mapping
\begin{equation}
  X:\Omega \rightarrow D; \quad \omega \mapsto X(\omega).
  \label{eq:randomvar}
\end{equation}
This mapping can be used to generate a probability measure on $(X,\sigma_X)$ such that the probability space $(X,\sigma_X,P_X)$ is a mathematical description of the experiment, just as the original probability space $(\Omega,\sigma_\Omega,P_\Omega)$ is. The induced measure is given by $P_X = P_\Omega \circ X^{-1}$, which means that an event $F \in \sigma_X$ has probability
\begin{align}
  P_X(F) & = P_\Omega \circ X^{-1}(F) \\
         & = P_\Omega\left(X^{-1}(F)\right) \\
         & = P_\Omega\left\{\omega : X(\omega) \in F\right\}
  \label{eq:PXA_PomeA}
\end{align}
for $\omega \in \Omega$. The benefit of the mapping \eqref{eq:randomvar} arises when $(X,\sigma_X)$ is a well-characterized measurable space on which mathematical tools such as Riemann integration are well defined. One of the most commonly used measurable spaces is $(\mathbbm{R},\mathscr{B})$, where $\mathscr{B}$ is the Borel $\sigma$-algebra on $\mathbbm{R}$; in this case $X$ is called a \emph{numerical random variable}. For example, according to the Kolmogorov axioms the probability measure is additive; therefore, depending on whether the random variable is discrete or continuous, it follows that, for $F \in \sigma_X$,
\begin{alignat}{2}
  P_X(F) & := \sum_{\omega \in X^{-1}(F)} P_\Omega(\{\omega\}) && \qquad \text{(discrete case)} \\
         & = \int_{X^{-1}(F)} \dd P_\Omega(\omega) && \qquad \text{(general case)}
\end{alignat}
In the general case, $P_X(F)$ can also be written in terms of the expectation operator as
\begin{equation}
  P_X(F) = \int_F \dd P_X(x) = \int_X I\left[x \in F\right] \dd P_X(x) = E_X\left[I\left[x \in F\right]\right]
\end{equation}
where $x \in X$ and $I$ stands for the \emph{indicator function}, defined by
\begin{equation}
  I[\cdot] =
  \begin{cases}
    1 & \text{if } \cdot \text{ is true} \\
    0 & \text{if } \cdot \text{ is false}
  \end{cases}
\end{equation}
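The identity $P_X(F) = E_X\left[I\left[x \in F\right]\right]$ is what justifies Monte Carlo estimation of probabilities: the sample mean of the indicator approximates the expectation. A quick numerical check (sketched in Python for illustration, with a uniform variable on $[0,1]$ and the hypothetical event $F = [0.2, 0.5]$, so that $P_X(F) = 0.3$):

```python
import random

random.seed(0)

# Indicator of the event F = [0.2, 0.5] for a uniform random variable on [0, 1]
def indicator(x, lo=0.2, hi=0.5):
    return 1.0 if lo <= x <= hi else 0.0

# E_X[I[x in F]] approximated by a sample mean over n draws;
# the exact value is P_X(F) = 0.5 - 0.2 = 0.3
n = 100_000
estimate = sum(indicator(random.random()) for _ in range(n)) / n
print(estimate)  # close to 0.3
```

The standard error of the estimate is $\sqrt{P_X(F)(1-P_X(F))/n} \approx 0.0014$ here, so the sample mean lands very near $0.3$.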
% Empirical density of IQ scores (normal, mean 100, std 15): histograms with
% an increasing number of classes, compared against the true pdf
x   = 20:180;
fiq = normpdf(x,100,15);        % true density
n   = 1e5;
iq  = normrnd(100,15,n,1);      % n samples
figure;
i = 0;
for N = [10 25 40 60 100]
    [nn,xout] = hist(iq,N);
    nn = nn./sum((xout(2)-xout(1))*nn);   % normalize so the bars integrate to 1
    i = i+1;
    subplot(2,3,i);
    bar(xout,nn,'hist');
    hold on;
    plot(x,fiq,'r','LineWidth',1);
    title(sprintf('classes = %d',N),'FontSize',18);
end
subplot(2,3,6);
plot(x,fiq,'r','LineWidth',3);
title('Probability density function');
Optimality property

The median is also the central point that minimizes the average of the absolute deviations; in the example above this average is $(1 + 0 + 0 + 0 + 1 + 7)/6 = 1.5$ when deviations are taken about the median, versus $1.944$ about the mean. In the language of probability theory, the value of $c$ that minimizes $E\left(\left|X-c\right|\right)$ is a median of the probability distribution of the random variable $X$. Note, however, that $c$ is not always unique, and is therefore not well defined in general.

An inequality relating means and medians

For continuous probability distributions, the difference between the median and the mean is less than or equal to one standard deviation.
See: http://en.wikipedia.org/wiki/An_inequality_on_location_and_scale_parameters
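This optimality property is easy to verify numerically. A short sketch (in Python for illustration) using a sample consistent with the figures quoted above, $\{1, 2, 2, 2, 3, 9\}$ — an assumption, since the original sample itself is not shown — whose median is $2$ and whose mean is $19/6 \approx 3.167$:

```python
# Hypothetical sample consistent with the deviations (1,0,0,0,1,7) quoted above
data = [1, 2, 2, 2, 3, 9]

def mean_abs_dev(data, c):
    """Average absolute deviation of the sample about the point c."""
    return sum(abs(x - c) for x in data) / len(data)

median = sorted(data)[len(data) // 2 - 1]  # both middle values equal 2 here
mean = sum(data) / len(data)

print(mean_abs_dev(data, median))  # 1.5
print(mean_abs_dev(data, mean))    # about 1.944
```

Sweeping $c$ over a grid confirms that no point does better than the median, illustrating that the minimizer of $E(|X-c|)$ is a median.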