// $Id: Statistics.doxygen 1487 2008-09-10 08:41:36Z jari $
//
// Copyright (C) 2005 Peter Johansson
// Copyright (C) 2006 Jari Häkkinen, Peter Johansson, Markus Ringnér
// Copyright (C) 2007 Jari Häkkinen, Peter Johansson
// Copyright (C) 2008 Peter Johansson
//
// This file is part of the yat library, http://dev.thep.lu.se/yat
//
// The yat library is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License as
// published by the Free Software Foundation; either version 3 of the
// License, or (at your option) any later version.
//
// The yat library is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
// General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with yat. If not, see <http://www.gnu.org/licenses/>.
/**
\page weighted_statistics Weighted Statistics
\section Introduction
There are several different reasons why a statistical analysis may
need to adjust for weighting. In the literature these reasons are
mainly divided into two groups.
The first group is when some of the measurements are known to be more
precise than others. The more precise a measurement is, the larger
the weight it is given. The simplest case is when the weights are given
before the measurements and can be treated as deterministic. It
becomes more complicated when the weights cannot be determined until
afterwards, and even more complicated if the weights depend on the
value of the observable.
The second group of situations is when calculating averages over one
distribution while sampling from another distribution. To compensate for
this discrepancy, weights are introduced into the analysis. A simple
example: we are interviewing people, but for economical
reasons we choose to interview more people from the city than from the
countryside. When summarizing the statistics, the answers from the city
are given a smaller weight. In this example we choose the
proportions of people from the countryside and people from the city being
interviewed. Hence, we can determine the weights beforehand and consider
them to be deterministic. In other situations the proportions are not
deterministic, but rather a result of the sampling, and the weights
must be treated as stochastic; only in rare situations can the weights
be treated as independent of the observable.
Since there are various origins for a weight occurring in a statistical
analysis, there are various ways to treat the weights, and in general
the analysis should be tailored to treat the weights correctly. We
have not chosen one single situation for our implementations, so see the
specific function documentation for what assumptions are made. The
following, however, hold for all implementations:
- Setting all weights to unity yields the same result as the
non-weighted version.
- Rescaling the weights does not change any function.
- Setting a weight to zero is equivalent to removing the data point.
An important case is when weights are binary (either 1 or 0). Then the
weighted version gives the same result as running the non-weighted
version on only the data points with non-zero weight. Hence, missing
values can be treated properly by using binary weights with the
weighted version.
\section AveragerWeighted
\subsection Mean
In every situation the weight is designed so that the weighted mean
is calculated as \f$ m=\frac{\sum w_ix_i}{\sum w_i} \f$, which obviously
fulfills the conditions above.
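As a minimal standalone sketch of this formula (the function name and
signature are ours for illustration, not the yat interface):
\code
#include <cassert>
#include <cstddef>
#include <vector>

// Weighted mean m = sum(w_i*x_i) / sum(w_i).
double weighted_mean(const std::vector<double>& x,
                     const std::vector<double>& w)
{
  assert(x.size() == w.size());
  double wx = 0.0;   // accumulates sum of w_i*x_i
  double wsum = 0.0; // accumulates sum of w_i
  for (std::size_t i = 0; i < x.size(); ++i) {
    wx += w[i] * x[i];
    wsum += w[i];
  }
  return wx / wsum;
}
\endcode
Setting all weights to unity returns the ordinary mean, rescaling the
weights leaves the result unchanged, and a zero weight removes the
data point, as required by the conditions above.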
In the case of varying measurement errors, it can be motivated that
the weight should be \f$ w_i = 1/\sigma_i^2 \f$. We assume the measurement
errors to be Gaussian, so the likelihood of our measurements is
\f$ L(m)=\prod
(2\pi\sigma_i^2)^{-1/2}e^{-\frac{(x_i-m)^2}{2\sigma_i^2}} \f$. We
maximize the likelihood by taking the derivative of its logarithm with
respect to \f$ m \f$, \f$ \frac{d\ln L(m)}{dm}=\sum
\frac{x_i-m}{\sigma_i^2} \f$. Setting this derivative to zero, the
Maximum Likelihood method yields the estimator
\f$ m=\frac{\sum x_i/\sigma_i^2}{\sum 1/\sigma_i^2} \f$, i.e., the
weighted mean with \f$ w_i=1/\sigma_i^2 \f$.
\subsection Variance
In the case of varying variance, there is no point in estimating a
single variance, since it differs for each data point.
Instead we look at the case when we want to estimate the variance over
\f$f\f$ but are sampling from \f$ f' \f$. For the mean of an observable \f$ O \f$ we
have \f$ \widehat O=\sum\frac{f}{f'}O_i=\frac{\sum w_iO_i}{\sum
w_i} \f$. Hence, an estimator of the variance of \f$ X \f$ is
\f$
s^2 = \langle x^2\rangle-\langle x\rangle^2=
\f$
\f$
= \frac{\sum w_ix_i^2}{\sum w_i}-\frac{(\sum w_ix_i)^2}{(\sum w_i)^2}=
\f$
\f$
= \frac{\sum w_i(x_i^2-m^2)}{\sum w_i}=
\f$
\f$
= \frac{\sum w_i(x_i^2-2mx_i+m^2)}{\sum w_i}=
\f$
\f$
= \frac{\sum w_i(x_i-m)^2}{\sum w_i}
\f$
This estimator is invariant under rescaling of the weights, and a
weight equal to zero is equivalent to removing the data point. With
all weights equal to unity we get \f$ s^2=\frac{\sum
(x_i-m)^2}{N} \f$, which is the same as returned from Averager. Hence,
this estimator is slightly biased, but still very efficient.
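A corresponding sketch of this estimator (again illustrative, not the
yat interface):
\code
#include <cassert>
#include <cstddef>
#include <vector>

// Weighted variance s^2 = sum(w_i*(x_i-m)^2) / sum(w_i),
// where m is the weighted mean defined above.
double weighted_variance(const std::vector<double>& x,
                         const std::vector<double>& w)
{
  assert(x.size() == w.size());
  double wx = 0.0, wsum = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    wx += w[i] * x[i];
    wsum += w[i];
  }
  const double m = wx / wsum;
  double ss = 0.0; // accumulates sum of w_i*(x_i-m)^2
  for (std::size_t i = 0; i < x.size(); ++i)
    ss += w[i] * (x[i] - m) * (x[i] - m);
  return ss / wsum;
}
\endcode
With unit weights this returns \f$\frac{\sum (x_i-m)^2}{N}\f$, the
slightly biased estimator discussed above.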
\subsection standard_error Standard Error
The squared standard error is equal to the expected squared error of
the estimate of \f$m\f$. The squared error consists of two parts, the
variance of the estimator and the squared bias:
\f$
\langle (m-\mu)^2\rangle=\langle (m-\langle m\rangle+\langle m\rangle-\mu)^2\rangle=
\f$
\f$
\langle (m-\langle m\rangle)^2\rangle+(\langle m\rangle-\mu)^2
\f$.
In the case when weights are included in analysis due to varying
measurement errors and the weights can be treated as deterministic, we
have
\f$
Var(m)=\frac{\sum w_i^2\sigma_i^2}{\left(\sum w_i\right)^2}=
\f$
\f$
\frac{\sum w_i^2\frac{\sigma_0^2}{w_i}}{\left(\sum w_i\right)^2}=
\f$
\f$
\frac{\sigma_0^2}{\sum w_i},
\f$
where we need to estimate \f$ \sigma_0^2 \f$. Again we have the likelihood
\f$
L(\sigma_0^2)=\prod\frac{1}{\sqrt{2\pi\sigma_0^2/w_i}}\exp{\left(-\frac{w_i(x_i-m)^2}{2\sigma_0^2}\right)}
\f$
and taking the derivative with respect to
\f$\sigma_0^2\f$,
\f$
\frac{d\ln L}{d\sigma_0^2}=
\f$
\f$
\sum -\frac{1}{2\sigma_0^2}+\frac{w_i(x_i-m)^2}{2\sigma_0^4}
\f$
which, when set to zero,
yields the estimator \f$ \sigma_0^2=\frac{1}{N}\sum w_i(x_i-m)^2 \f$. This
estimator does not properly ignore data points with zero weight: such a
point contributes nothing to the sum but is still counted in \f$N\f$,
so setting a weight to zero is not equivalent to removing the data
point. Therefore, we modify the expression as follows
\f$\sigma_0^2=\frac{\sum w_i^2}{\left(\sum
w_i\right)^2}\sum w_i(x_i-m)^2\f$ and we get the following estimator of
the variance of the mean \f$\frac{\sum w_i^2}{\left(\sum
w_i\right)^3}\sum w_i(x_i-m)^2\f$. This estimator fulfills the conditions
above: adding a point with weight zero does not change it, rescaling
the weights does not change it, and setting all weights to unity yields
the same expression as in the non-weighted case.
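A sketch of this estimator of the variance of the mean (illustrative
names, not the yat interface):
\code
#include <cstddef>
#include <vector>

// Estimated variance of the weighted mean,
// Var(m) = sum(w_i^2) / (sum(w_i))^3 * sum(w_i*(x_i-m)^2).
// With unit weights this reduces to sum((x_i-m)^2) / N^2.
double variance_of_mean(const std::vector<double>& x,
                        const std::vector<double>& w)
{
  double wx = 0.0, wsum = 0.0, w2sum = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    wx += w[i] * x[i];
    wsum += w[i];
    w2sum += w[i] * w[i];
  }
  const double m = wx / wsum;
  double ss = 0.0; // sum of w_i*(x_i-m)^2
  for (std::size_t i = 0; i < x.size(); ++i)
    ss += w[i] * (x[i] - m) * (x[i] - m);
  return w2sum / (wsum * wsum * wsum) * ss;
}
\endcode
The standard error itself is the square root of this quantity.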
In cases where it is not a good approximation to treat the weights as
deterministic, there are two ways to get a better estimate. The first
is to linearize the expression \f$\left\langle\frac{\sum
w_ix_i}{\sum w_i}\right\rangle\f$. The second method, for more
complicated situations, is to estimate the standard error using a
bootstrap method, as sketched below.
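A minimal bootstrap sketch, under the assumption that data points are
resampled in (value, weight) pairs (weighted_mean is the helper defined
earlier; names are again illustrative):
\code
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Bootstrap the standard error of the weighted mean: resample
// (x_i, w_i) pairs with replacement, recompute the weighted mean for
// each replicate, and return the standard deviation of the replicate
// means.
double bootstrap_standard_error(const std::vector<double>& x,
                                const std::vector<double>& w,
                                std::size_t n_replicates = 1000)
{
  std::mt19937 rng(17); // fixed seed for reproducibility
  std::uniform_int_distribution<std::size_t> pick(0, x.size() - 1);
  std::vector<double> means;
  means.reserve(n_replicates);
  for (std::size_t r = 0; r < n_replicates; ++r) {
    std::vector<double> xb, wb;
    for (std::size_t i = 0; i < x.size(); ++i) {
      const std::size_t j = pick(rng);
      xb.push_back(x[j]);
      wb.push_back(w[j]);
    }
    means.push_back(weighted_mean(xb, wb));
  }
  double m = 0.0;
  for (double v : means)
    m += v;
  m /= means.size();
  double ss = 0.0;
  for (double v : means)
    ss += (v - m) * (v - m);
  return std::sqrt(ss / means.size());
}
\endcode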
\section AveragerPairWeighted
Here data points come in pairs (x,y). We are sampling from \f$f'_{XY}\f$
but want to measure from \f$f_{XY}\f$. To compensate for this discrepancy,
averages of \f$g(x,y)\f$ are taken as \f$\sum \frac{f}{f'}g(x,y)\f$. Even
though \f$X\f$ and \f$Y\f$ are not independent \f$(f_{XY}\neq f_Xf_Y)\f$, we
assume that we can factorize the ratio and get \f$\frac{\sum
w_xw_yg(x,y)}{\sum w_xw_y}\f$.
\subsection Covariance
Following the variance calculations for AveragerWeighted we have
\f$Cov=\frac{\sum w_xw_y(x-m_x)(y-m_y)}{\sum w_xw_y}\f$, where
\f$m_x=\frac{\sum w_xw_yx}{\sum w_xw_y}\f$.
\subsection Correlation
As the mean is estimated as
\f$
m_x=\frac{\sum w_xw_yx}{\sum w_xw_y}
\f$,
the variance is estimated as
\f$
\sigma_x^2=\frac{\sum w_xw_y(x-m_x)^2}{\sum w_xw_y}
\f$.
As in the non-weighted case, we define the correlation to be the ratio
between the covariance and the geometric mean of the variances, as
sketched in code below:
\f$
\frac{\sum w_xw_y(x-m_x)(y-m_y)}{\sqrt{\sum w_xw_y(x-m_x)^2\sum
w_xw_y(y-m_y)^2}}
\f$.
This expression fulfills the following:
- With N equal weights, the expression reduces to the non-weighted expression.
- Adding a data pair in which one weight is zero is equivalent
to ignoring the pair.
- The correlation is equal to unity if and only if \f$x\f$ is equal to
\f$y\f$. Otherwise the correlation is between -1 and 1.
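Assembling the pieces of this subsection (product weights, weighted
means, covariance, and variances) into a standalone sketch (illustrative
names, not the yat interface):
\code
#include <cmath>
#include <cstddef>
#include <vector>

// Weighted Pearson correlation for paired data. The common
// normalization sum(wx*wy) cancels in the ratio, so unnormalized
// weighted sums suffice.
double weighted_correlation(const std::vector<double>& x,
                            const std::vector<double>& y,
                            const std::vector<double>& wx,
                            const std::vector<double>& wy)
{
  double wsum = 0.0, mx = 0.0, my = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    const double wi = wx[i] * wy[i]; // product weight
    wsum += wi;
    mx += wi * x[i];
    my += wi * y[i];
  }
  mx /= wsum;
  my /= wsum;
  double cov = 0.0, vx = 0.0, vy = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    const double wi = wx[i] * wy[i];
    cov += wi * (x[i] - mx) * (y[i] - my);
    vx += wi * (x[i] - mx) * (x[i] - mx);
    vy += wi * (y[i] - my) * (y[i] - my);
  }
  return cov / std::sqrt(vx * vy);
}
\endcode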
\section Score
\subsection Pearson
\f$\frac{\sum w(x-m_x)(y-m_y)}{\sqrt{\sum w(x-m_x)^2\sum w(y-m_y)^2}}\f$.
See AveragerPairWeighted correlation.
\subsection ROC
An interpretation of the area under the ROC curve is as follows: if we
take one sample from class \f$+\f$ and one sample from class \f$-\f$, it is
the probability that the sample from class \f$+\f$ has the greater
value. The ROC curve area is calculated as the weighted ratio of pairs
fulfilling this,
\f$
\frac{\sum_{\{i,j\}:x^-_i<x^+_j}w^-_iw^+_j}{\sum_{i,j}w^-_iw^+_j}.
\f$
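A direct sketch of this ratio (quadratic in the number of samples;
names are illustrative, not the yat interface), assuming ties are not
counted:
\code
#include <cstddef>
#include <vector>

// Weighted ROC curve area: the weighted fraction of
// (negative, positive) pairs in which the positive sample has the
// greater value.
double weighted_roc_area(const std::vector<double>& neg,
                         const std::vector<double>& wneg,
                         const std::vector<double>& pos,
                         const std::vector<double>& wpos)
{
  double hit = 0.0, total = 0.0;
  for (std::size_t i = 0; i < neg.size(); ++i) {
    for (std::size_t j = 0; j < pos.size(); ++j) {
      const double wij = wneg[i] * wpos[j];
      if (neg[i] < pos[j])
        hit += wij; // pair fulfills x-_i < x+_j
      total += wij;
    }
  }
  return hit / total;
}
\endcode
With unit weights this reduces to the usual fraction of correctly
ordered pairs.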
\section Kernel
\subsection polynomial_kernel Polynomial Kernel
The polynomial kernel of degree \f$N\f$ is defined as
\f$(1+\langle x,y\rangle)^N\f$, where
\f$\langle x,y\rangle\f$ is the linear kernel (usual scalar product). For the weighted
case we define the linear kernel to be
\f$\langle x,y\rangle=\frac{\sum {w_xw_yxy}}{\sum{w_xw_y}}\f$ and the
polynomial kernel can be calculated as before,
\f$(1+\langle x,y\rangle)^N\f$.
\subsection gaussian_kernel Gaussian Kernel
We define the weighted Gaussian kernel as \f$\exp\left(-N\frac{\sum
w_xw_y(x-y)^2}{\sum w_xw_y}\right)\f$.
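Standalone sketches of these weighted kernels (function names are ours
for illustration, not the yat interface):
\code
#include <cmath>
#include <cstddef>
#include <vector>

// Weighted linear kernel <x,y> = sum(wx*wy*x*y) / sum(wx*wy).
double linear_kernel(const std::vector<double>& x,
                     const std::vector<double>& y,
                     const std::vector<double>& wx,
                     const std::vector<double>& wy)
{
  double num = 0.0, den = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    const double wi = wx[i] * wy[i];
    num += wi * x[i] * y[i];
    den += wi;
  }
  return num / den;
}

// Weighted polynomial kernel of degree N: (1 + <x,y>)^N.
double polynomial_kernel(const std::vector<double>& x,
                         const std::vector<double>& y,
                         const std::vector<double>& wx,
                         const std::vector<double>& wy,
                         unsigned N)
{
  return std::pow(1.0 + linear_kernel(x, y, wx, wy),
                  static_cast<double>(N));
}

// Weighted Gaussian kernel exp(-N*sum(wx*wy*(x-y)^2)/sum(wx*wy)),
// where N is the number of elements; with unit weights it reduces
// to exp(-sum((x-y)^2)).
double gaussian_kernel(const std::vector<double>& x,
                       const std::vector<double>& y,
                       const std::vector<double>& wx,
                       const std::vector<double>& wy)
{
  double num = 0.0, den = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    const double wi = wx[i] * wy[i];
    num += wi * (x[i] - y[i]) * (x[i] - y[i]);
    den += wi;
  }
  return std::exp(-static_cast<double>(x.size()) * num / den);
}
\endcode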
\section Regression
\subsection Naive
\subsection Linear
We have the model
\f$
y_i=\alpha+\beta (x_i-m_x)+\epsilon_i,
\f$
where \f$\epsilon_i\f$ is the noise. The variance of the noise is
inversely proportional to the weight,
\f$Var(\epsilon_i)=\frac{\sigma^2}{w_i}\f$. In order to determine the
model parameters, we minimize the weighted sum of quadratic errors
\f$
Q_0 = \sum w_i\epsilon_i^2.
\f$
Taking the derivative with respect to \f$\alpha\f$ and \f$\beta\f$ yields two conditions
\f$
\frac{\partial Q_0}{\partial \alpha} = -2 \sum w_i(y_i - \alpha -
\beta (x_i-m_x))=0
\f$
and
\f$ \frac{\partial Q_0}{\partial \beta} = -2 \sum
w_i(x_i-m_x)(y_i-\alpha-\beta(x_i-m_x))=0
\f$
or equivalently
\f$
\alpha = \frac{\sum w_iy_i}{\sum w_i}=m_y
\f$
and
\f$ \beta=\frac{\sum w_i(x_i-m_x)(y_i-m_y)}{\sum
w_i(x_i-m_x)^2}=\frac{Cov(x,y)}{Var(x)}
\f$
Note that with all weights equal we recover the unweighted
case. Furthermore, we calculate the variance of the estimators of
\f$\alpha\f$ and \f$\beta\f$.
\f$
\textrm{Var}(\alpha )=\frac{\sum w_i^2\frac{\sigma^2}{w_i}}{(\sum w_i)^2}=
\frac{\sigma^2}{\sum w_i}
\f$
and
\f$
\textrm{Var}(\beta )= \frac{\sum w_i^2(x_i-m_x)^2\frac{\sigma^2}{w_i}}
{(\sum w_i(x_i-m_x)^2)^2}=
\frac{\sigma^2}{\sum w_i(x_i-m_x)^2}
\f$
Finally, we estimate the level of noise, \f$\sigma^2\f$. Inspired by the
unweighted estimation
\f$
s^2=\frac{\sum (y_i-\alpha-\beta (x_i-m_x))^2}{n-2}
\f$
we suggest the following estimator
\f$ s^2=\frac{\sum w_i(y_i-\alpha-\beta (x_i-m_x))^2}{\sum
w_i-2\frac{\sum w_i^2}{\sum w_i}} \f$
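A sketch gathering these estimators (struct and function names are ours
for illustration, not the yat regression interface):
\code
#include <cstddef>
#include <vector>

struct WeightedLinearFit {
  double alpha;     // intercept at x = m_x (the weighted mean of y)
  double beta;      // slope, Cov(x,y)/Var(x)
  double var_alpha; // Var(alpha) = s2 / sum(w_i)
  double var_beta;  // Var(beta)  = s2 / sum(w_i*(x_i-m_x)^2)
  double s2;        // estimated noise level sigma^2
};

// Weighted least-squares fit of y = alpha + beta*(x - m_x) + epsilon
// with Var(epsilon_i) = sigma^2/w_i, following the derivation above.
WeightedLinearFit weighted_linear_fit(const std::vector<double>& x,
                                      const std::vector<double>& y,
                                      const std::vector<double>& w)
{
  double wsum = 0.0, w2sum = 0.0, mx = 0.0, my = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    wsum += w[i];
    w2sum += w[i] * w[i];
    mx += w[i] * x[i];
    my += w[i] * y[i];
  }
  mx /= wsum;
  my /= wsum;
  double sxx = 0.0, sxy = 0.0;
  for (std::size_t i = 0; i < x.size(); ++i) {
    sxx += w[i] * (x[i] - mx) * (x[i] - mx);
    sxy += w[i] * (x[i] - mx) * (y[i] - my);
  }
  WeightedLinearFit fit;
  fit.alpha = my;
  fit.beta = sxy / sxx;
  double q = 0.0; // weighted residual sum of squares
  for (std::size_t i = 0; i < x.size(); ++i) {
    const double e = y[i] - fit.alpha - fit.beta * (x[i] - mx);
    q += w[i] * e * e;
  }
  // effective degrees of freedom: sum(w_i) - 2*sum(w_i^2)/sum(w_i)
  fit.s2 = q / (wsum - 2.0 * w2sum / wsum);
  fit.var_alpha = fit.s2 / wsum;
  fit.var_beta = fit.s2 / sxx;
  return fit;
}
\endcode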
*/