\documentclass[12pt]{article}

% $Id: Statistics.tex 744 2007-02-10 20:16:11Z peter $
%
% Copyright (C) The authors contributing to this file.
%
% This file is part of the yat library, http://lev.thep.lu.se/trac/yat
%
% The yat library is free software; you can redistribute it and/or
% modify it under the terms of the GNU General Public License as
% published by the Free Software Foundation; either version 2 of the
% License, or (at your option) any later version.
%
% The yat library is distributed in the hope that it will be useful,
% but WITHOUT ANY WARRANTY; without even the implied warranty of
% MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
% General Public License for more details.
%
% You should have received a copy of the GNU General Public License
% along with this program; if not, write to the Free Software
% Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
% 02111-1307, USA.


\flushbottom
\footskip 54pt
\headheight 0pt
\headsep 0pt
\oddsidemargin 0pt
\parindent 0pt
\parskip 2ex
\textheight 230mm
\textwidth 165mm
\topmargin 0pt

\renewcommand{\baselinestretch} {1.0}
\renewcommand{\textfraction} {0.1}
\renewcommand{\topfraction} {1.0}
\renewcommand{\bottomfraction} {1.0}
\renewcommand{\floatpagefraction} {1.0}

\renewcommand{\d}{{\mathrm{d}}}
\newcommand{\nd}{$^{\mathrm{nd}}$}
\newcommand{\eg}{{\it {e.g.}}}
\newcommand{\ie}{{\it {i.e., }}}
\newcommand{\etal}{{\it {et al.}}}
\newcommand{\eref}[1]{Eq.~(\ref{e:#1})}
\newcommand{\fref}[1]{Fig.~\ref{f:#1}}
\newcommand{\ovr}[2]{\left(\begin{array}{c} #1 \\ #2 \end{array}\right)}

\begin{document}

\large
{\bf Weighted Statistics}
\normalsize

\tableofcontents
\clearpage
\section{Introduction}
There are several different reasons why a statistical analysis needs
to adjust for weighting. In the literature, the reasons are mainly
divided into two groups.

The first group is when some of the measurements are known to be more
precise than others. The more precise a measurement is, the larger
weight it is given. The simplest case is when the weights are given
before the measurements and can be treated as deterministic. It
becomes more complicated when the weights cannot be determined until
afterwards, and even more complicated if the weights depend on the
value of the observable.

The second group of situations is when calculating averages over one
distribution while sampling from another distribution. To compensate
for this discrepancy, weights are introduced into the analysis. A
simple example may be that we are interviewing people, but for
economic reasons we choose to interview more people from the city
than from the countryside. When summarizing the statistics, the
answers from the city are given a smaller weight. In this example we
choose the proportions of people from the countryside and people
from the city being interviewed. Hence, we can determine the weights
beforehand and consider them to be deterministic. In other
situations the proportions are not deterministic, but rather a
result of the sampling, and the weights must be treated as
stochastic; only in rare situations can the weights be treated as
independent of the observable.

Since there are various origins for a weight occurring in a
statistical analysis, there are various ways to treat the weights,
and in general the analysis should be tailored to treat the weights
correctly. We have not chosen one situation for our implementations,
so see the specific function documentation for what assumptions are
made. However, the following holds for all implementations:
\begin{itemize}
\item Setting all weights to unity yields the same result as the
non-weighted version.
\item Rescaling the weights does not change any result.
\item Setting a weight to zero is equivalent to removing the data point.
\end{itemize}
An important case is when the weights are binary (either 1 or 0).
Then the weighted version gives the same result as applying the
non-weighted version to the data points with non-zero weight. Hence,
using binary weights with the weighted version is a proper way to
treat missing values.

\section{AveragerWeighted}

\subsection{Mean}

In every situation, the weight is designed such that the weighted
mean is calculated as $m=\frac{\sum w_ix_i}{\sum w_i}$, which
obviously fulfills the conditions above.

In the case of varying measurement errors, it can be motivated that
the weight should be $w_i = 1/\sigma_i^2$. We assume the measurement
errors to be Gaussian, so the likelihood of our measurements is
$L(m)=\prod
(2\pi\sigma_i^2)^{-1/2}e^{-\frac{(x_i-m)^2}{2\sigma_i^2}}$. We
maximize the likelihood by taking the derivative of the logarithm of
the likelihood with respect to $m$, $\frac{d\ln L(m)}{dm}=\sum
\frac{x_i-m}{\sigma_i^2}$. Hence, the Maximum Likelihood method
yields the estimator $m=\frac{\sum x_i/\sigma_i^2}{\sum
1/\sigma_i^2}$, \ie the weighted mean with $w_i=1/\sigma_i^2$.
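
To make the formula concrete, a minimal Python sketch of the
weighted mean follows (hypothetical helper names, not the yat API;
plain lists stand in for the library's vector types):
\begin{verbatim}
def weighted_mean(x, w):
    """Weighted mean m = sum(w_i * x_i) / sum(w_i)."""
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

# Unit weights reproduce the plain mean; a zero weight drops a point.
print(weighted_mean([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]))  # 2.0
print(weighted_mean([1.0, 2.0, 3.0], [1.0, 1.0, 0.0]))  # 1.5
\end{verbatim}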

\subsection{Variance}
In the case of varying measurement errors, there is no point in
estimating a single variance, since it differs from data point to
data point.

Instead we look at the case when we want to estimate the variance
over $f$ but are sampling from $f'$. For the mean of an observable
$O$ we have $\widehat O=\sum\frac{f}{f'}O_i=\frac{\sum w_iO_i}{\sum
w_i}$. Hence, an estimator of the variance of $X$ is
\begin{eqnarray}
\sigma^2=<X^2>-<X>^2=
\\\frac{\sum w_ix_i^2}{\sum w_i}-\frac{(\sum w_ix_i)^2}{(\sum w_i)^2}=
\\\frac{\sum w_i(x_i^2-m^2)}{\sum w_i}=
\\\frac{\sum w_i(x_i^2-2mx_i+m^2)}{\sum w_i}=
\\\frac{\sum w_i(x_i-m)^2}{\sum w_i}
\end{eqnarray}
This estimator is invariant under a rescaling of the weights, and a
weight equal to zero is equivalent to removing the data point. With
all weights equal to unity we get $\sigma^2=\frac{\sum
(x_i-m)^2}{N}$, which is the same as returned from Averager. Hence,
this estimator is slightly biased, but still very efficient.
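
A corresponding sketch of the variance estimator, building on the
hypothetical \texttt{weighted\_mean} above:
\begin{verbatim}
def weighted_variance(x, w):
    """sigma^2 = sum(w_i * (x_i - m)^2) / sum(w_i)."""
    m = weighted_mean(x, w)
    return sum(wi * (xi - m)**2 for wi, xi in zip(w, x)) / sum(w)
\end{verbatim}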

\subsection{Standard Error}
The standard error squared is equal to the expected squared error of
the estimate of $m$. The squared error consists of two parts, the
variance of the estimator and the squared bias:
$<(m-\mu)^2>=<(m-<m>+<m>-\mu)^2>=<(m-<m>)^2>+(<m>-\mu)^2$. In the
case when weights are included in the analysis due to varying
measurement errors and the weights can be treated as deterministic,
we have
\begin{equation}
Var(m)=\frac{\sum w_i^2\sigma_i^2}{\left(\sum w_i\right)^2}=
\frac{\sum w_i^2\frac{\sigma_0^2}{w_i}}{\left(\sum w_i\right)^2}=
\frac{\sigma_0^2}{\sum w_i},
\end{equation}
where we need to estimate $\sigma_0^2$. Again we have the likelihood
$L(\sigma_0^2)=\prod\frac{1}{\sqrt{2\pi\sigma_0^2/w_i}}e^{-\frac{w_i(x_i-m)^2}{2\sigma_0^2}}$,
and taking the derivative with respect to $\sigma_0^2$, $\frac{d\ln
L}{d\sigma_0^2}=\sum
-\frac{1}{2\sigma_0^2}+\frac{w_i(x_i-m)^2}{2\sigma_0^4}$, yields the
estimator $\sigma_0^2=\frac{1}{N}\sum w_i(x_i-m)^2$. This estimator
does not ignore weights equal to zero: a data point with zero weight
adds nothing to the sum, but still increments $N$. Therefore, we
modify the expression as follows: $\sigma_0^2=\frac{\sum
w_i^2}{\left(\sum w_i\right)^2}\sum w_i(x_i-m)^2$, and we get the
following estimator of the variance of the mean:
$Var(m)=\frac{\sum w_i^2}{\left(\sum w_i\right)^3}\sum
w_i(x_i-m)^2$. This estimator fulfills the conditions above: adding
a zero weight does not change it, rescaling the weights does not
change it, and setting all weights to unity yields the same
expression as in the non-weighted case.
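
A sketch of this estimator of the variance of the mean (again
hypothetical names, not the yat API):
\begin{verbatim}
def weighted_sem2(x, w):
    """Variance of the weighted mean:
    sum(w_i^2) / sum(w_i)^3 * sum(w_i * (x_i - m)^2)."""
    m = weighted_mean(x, w)
    s = sum(w)
    return (sum(wi**2 for wi in w) / s**3 *
            sum(wi * (xi - m)**2 for wi, xi in zip(w, x)))
\end{verbatim}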

In cases where it is not a good approximation to treat the weights
as deterministic, there are two ways to obtain a better estimate.
The first is to linearize the expression $\left<\frac{\sum
w_ix_i}{\sum w_i}\right>$. The second, for more complicated
situations, is to estimate the standard error using a bootstrap
method.

\section{AveragerPairWeighted}
Here data points come in pairs $(x,y)$. We are sampling from
$f'_{XY}$ but want to measure over $f_{XY}$. To compensate for this
discrepancy, averages of $g(x,y)$ are taken as $\sum
\frac{f}{f'}g(x,y)$. Even though $X$ and $Y$ are not independent
($f_{XY}\neq f_Xf_Y$), we assume that we can factorize the ratio and
get $\frac{\sum w_xw_yg(x,y)}{\sum w_xw_y}$.

\subsection{Covariance}
Following the variance calculations for AveragerWeighted, we have
$Cov=\frac{\sum w_xw_y(x-m_x)(y-m_y)}{\sum w_xw_y}$, where
$m_x=\frac{\sum w_xw_yx}{\sum w_xw_y}$.

\subsection{Correlation}

As the mean is estimated as $m_x=\frac{\sum w_xw_yx}{\sum w_xw_y}$,
the variance is estimated as $\sigma_x^2=\frac{\sum
w_xw_y(x-m_x)^2}{\sum w_xw_y}$. As in the non-weighted case, we
define the correlation to be the ratio between the covariance and
the geometric mean of the variances

$\frac{\sum w_xw_y(x-m_x)(y-m_y)}{\sqrt{\sum w_xw_y(x-m_x)^2\sum
w_xw_y(y-m_y)^2}}$.

This expression fulfills the following:
\begin{itemize}
\item Setting all weights to unity reduces the expression to the
non-weighted expression.
\item Adding a data pair in which one weight is zero is equivalent
to ignoring the data pair.
\item The correlation is equal to unity if and only if $x-m_x$ is
proportional to $y-m_y$ with a positive factor; otherwise the
correlation is between -1 and 1.
\end{itemize}
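
A sketch of the weighted correlation; the normalization $\sum
w_xw_y$ cancels between the covariance and the variances, so it is
omitted:
\begin{verbatim}
import math

def weighted_correlation(x, wx, y, wy):
    """Weighted correlation with pair weights w_i = wx_i * wy_i."""
    w = [a * b for a, b in zip(wx, wy)]
    s = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / s
    my = sum(wi * yi for wi, yi in zip(w, y)) / s
    cov = sum(wi * (xi - mx) * (yi - my)
              for wi, xi, yi in zip(w, x, y))
    vx = sum(wi * (xi - mx)**2 for wi, xi in zip(w, x))
    vy = sum(wi * (yi - my)**2 for wi, yi in zip(w, y))
    return cov / math.sqrt(vx * vy)
\end{verbatim}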

\section{Score}

\subsection{Pearson}

$\frac{\sum w(x-m_x)(y-m_y)}{\sqrt{\sum w(x-m_x)^2\sum w(y-m_y)^2}}$.

See AveragerPairWeighted correlation.

\subsection{ROC}

The ROC curve area can be interpreted as the probability that, if we
take one sample from class $+$ and one sample from class $-$, the
sample from class $+$ has the greater value. The ROC curve area
calculates the fraction of pairs fulfilling this:

\begin{equation}
\frac{\sum_{\{i,j\}:x^-_i<x^+_j}1}{\sum_{i,j}1}.
\end{equation}

A geometrical interpretation is to have a number of squares, where
each square corresponds to a pair of samples. The ROC curve follows
the border between the pairs in which the sample from class $+$ has
the greater value and the pairs in which this is not fulfilled. The
ROC curve area is the area of the squares in which the class $+$
sample has the greater value, and a natural extension is to weight
each pair with its two weights; consequently the weighted ROC curve
area becomes

\begin{equation}
\frac{\sum_{\{i,j\}:x^-_i<x^+_j}w^-_iw^+_j}{\sum_{i,j}w^-_iw^+_j}
\end{equation}

This expression is invariant under a rescaling of the weights.
Adding a data value with weight zero adds nothing to the expression,
and having all weights equal to unity yields the non-weighted ROC
curve area.
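
A sketch of the weighted ROC curve area; note that the denominator
factorizes as $\sum_{i,j}w^-_iw^+_j=\sum_iw^-_i\sum_jw^+_j$:
\begin{verbatim}
def weighted_roc_area(pos, w_pos, neg, w_neg):
    """Weighted fraction of pairs with x-_i < x+_j."""
    num = sum(wn * wp
              for xn, wn in zip(neg, w_neg)
              for xp, wp in zip(pos, w_pos)
              if xn < xp)
    return num / (sum(w_neg) * sum(w_pos))
\end{verbatim}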

\subsection{tScore}

Assume that $x$ and $y$ originate from the same distribution
$N(\mu,\sigma_i^2)$, where $\sigma_i^2=\frac{\sigma_0^2}{w_i}$. We
then estimate $\sigma_0^2$ as
\begin{equation}
\frac{\sum w_x(x-m_x)^2+\sum w_y(y-m_y)^2}
{\frac{\left(\sum w_x\right)^2}{\sum w_x^2}+
\frac{\left(\sum w_y\right)^2}{\sum w_y^2}-2}
\end{equation}
The variance of the difference of the means becomes
\begin{eqnarray}
Var(m_x)+Var(m_y)=\\\frac{\sum w_i^2Var(x_i)}{\left(\sum
w_i\right)^2}+\frac{\sum w_i^2Var(y_i)}{\left(\sum w_i\right)^2}=
\frac{\sigma_0^2}{\sum w_x}+\frac{\sigma_0^2}{\sum w_y},
\end{eqnarray}
so the estimated squared error of the difference of the means is
\begin{equation}
\frac{\sum w_x(x-m_x)^2+\sum w_y(y-m_y)^2}
{\frac{\left(\sum w_x\right)^2}{\sum w_x^2}+
\frac{\left(\sum w_y\right)^2}{\sum w_y^2}-2}
\left(\frac{1}{\sum w_x}+\frac{1}{\sum w_y}\right),
\end{equation}
and the t-score is the difference of the means divided by the
square root of this expression,
$t=\frac{m_x-m_y}{\sqrt{Var(m_x)+Var(m_y)}}$.

For $w_i=w$, the squared error reduces to
\begin{equation}
\frac{w\sum (x-m_x)^2+w\sum (y-m_y)^2}
{n_x+n_y-2}
\left(\frac{1}{wn_x}+\frac{1}{wn_y}\right),
\end{equation}
\ie the familiar expression from the non-weighted case.
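
A sketch of the weighted t-score following the expressions above
(the hypothetical \texttt{weighted\_mean} as before):
\begin{verbatim}
import math

def weighted_tscore(x, wx, y, wy):
    mx, my = weighted_mean(x, wx), weighted_mean(y, wy)
    sx, sy = sum(wx), sum(wy)
    # Effective degrees of freedom: (sum w)^2 / sum(w^2) per group.
    dof = (sx**2 / sum(w**2 for w in wx) +
           sy**2 / sum(w**2 for w in wy) - 2)
    s0_2 = (sum(w * (xi - mx)**2 for w, xi in zip(wx, x)) +
            sum(w * (yi - my)**2 for w, yi in zip(wy, y))) / dof
    return (mx - my) / math.sqrt(s0_2 * (1 / sx + 1 / sy))
\end{verbatim}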

\subsection{FoldChange}
Fold-change is simply the difference between the weighted means of
the two groups: $\frac{\sum w_xx}{\sum w_x}-\frac{\sum w_yy}{\sum
w_y}$.

\subsection{WilcoxonFoldChange}
We take all sample pairs (one from class $+$ and one from class $-$)
and calculate the weighted median of the differences.
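
A sketch, assuming the lower weighted median (the precise definition
of the weighted median is left open above):
\begin{verbatim}
def wilcoxon_fold_change(pos, w_pos, neg, w_neg):
    """Weighted median of all differences x+ - x-, each pair
    weighted by w+ * w-."""
    diffs = sorted((xp - xn, wp * wn)
                   for xp, wp in zip(pos, w_pos)
                   for xn, wn in zip(neg, w_neg))
    half = sum(w for _, w in diffs) / 2.0
    acc = 0.0
    for d, w in diffs:
        acc += w
        if acc >= half:
            return d
\end{verbatim}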

\section{Kernel}
\subsection{Polynomial Kernel}
The polynomial kernel of degree $N$ is defined as $(1+<x,y>)^N$,
where $<x,y>$ is the linear kernel (the usual scalar product). For
the weighted case we define the linear kernel to be $<x,y>=\sum
{w_xw_yxy}$, and the polynomial kernel can be calculated as before,
$(1+<x,y>)^N$. Is this kernel a proper kernel (\ie always positive
semi-definite)? Yes: $<x,y>$ is obviously a proper kernel, as it is
a scalar product. Adding a positive constant to a kernel yields
another kernel, so $1+<x,y>$ is still a proper kernel. Then
$(1+<x,y>)^N$ is also a proper kernel, because taking a proper
kernel to the $N$th power yields a new proper kernel (see any good
book on SVM).
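
A sketch of the weighted polynomial kernel (\texttt{N} is the
degree):
\begin{verbatim}
def poly_kernel(x, wx, y, wy, N=2):
    """(1 + <x,y>)^N with <x,y> = sum(wx_i * wy_i * x_i * y_i)."""
    lin = sum(a * b * xi * yi
              for a, b, xi, yi in zip(wx, wy, x, y))
    return (1.0 + lin)**N
\end{verbatim}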

\subsection{Gaussian Kernel}
We define the weighted Gaussian kernel as $\exp\left(-\frac{\sum
w_xw_y(x-y)^2}{\sum w_xw_y}\right)$, which fulfills the conditions
listed in the introduction.

Is this kernel a proper kernel? Yes: following the proof for the
non-weighted kernel, we see that $K=\exp\left(-\frac{\sum
w_xw_yx^2}{\sum w_xw_y}\right)\exp\left(-\frac{\sum w_xw_yy^2}{\sum
w_xw_y}\right)\exp\left(\frac{2\sum w_xw_yxy}{\sum w_xw_y}\right)$,
which is a product of proper kernels: the first two factors form a
proper kernel, because together they are of the form $f(x)f(y)$, and
$\exp\left(\frac{2\sum w_xw_yxy}{\sum w_xw_y}\right)$ is a proper
kernel, because its series expansion is a sum of powers of the
linear kernel with positive coefficients. As the product of two
kernels is also a kernel, the Gaussian kernel is a proper kernel.
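
A sketch of the weighted Gaussian kernel:
\begin{verbatim}
import math

def gaussian_kernel(x, wx, y, wy):
    """exp(-sum(wx*wy*(x-y)^2) / sum(wx*wy))."""
    w = [a * b for a, b in zip(wx, wy)]
    num = sum(wi * (xi - yi)**2 for wi, xi, yi in zip(w, x, y))
    return math.exp(-num / sum(w))
\end{verbatim}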

\section{Distance}

\section{Regression}
\subsection{Naive}
\subsection{Linear}
We have the model

\begin{equation}
y_i=\alpha+\beta (x_i-m_x)+\epsilon_i,
\end{equation}

where $\epsilon_i$ is the noise. The variance of the noise is
inversely proportional to the weight,
$Var(\epsilon_i)=\frac{\sigma^2}{w_i}$. In order to determine the
model parameters, we minimize the weighted sum of quadratic errors,

\begin{equation}
Q_0 = \sum w_i\epsilon_i^2.
\end{equation}

Taking the derivative with respect to $\alpha$ and $\beta$ yields
two conditions

\begin{equation}
\frac{\partial Q_0}{\partial \alpha} = -2 \sum w_i(y_i - \alpha -
\beta (x_i-m_x))=0
\end{equation}

and

\begin{equation} \frac{\partial Q_0}{\partial \beta} = -2 \sum
w_i(x_i-m_x)(y_i-\alpha-\beta(x_i-m_x))=0
\end{equation}

or equivalently

\begin{equation}
\alpha = \frac{\sum w_iy_i}{\sum w_i}=m_y
\end{equation}

and

\begin{equation} \beta=\frac{\sum w_i(x_i-m_x)(y_i-m_y)}{\sum
w_i(x_i-m_x)^2}=\frac{Cov(x,y)}{Var(x)}
\end{equation}

Note that with all weights equal we recover the unweighted case.
Furthermore, we calculate the variance of the estimators of $\alpha$
and $\beta$,

\begin{equation}
\textrm{Var}(\alpha )=\frac{\sum w_i^2\frac{\sigma^2}{w_i}}{(\sum w_i)^2}=
\frac{\sigma^2}{\sum w_i}
\end{equation}

and
\begin{equation}
\textrm{Var}(\beta )= \frac{\sum w_i^2(x_i-m_x)^2\frac{\sigma^2}{w_i}}
{(\sum w_i(x_i-m_x)^2)^2}=
\frac{\sigma^2}{\sum w_i(x_i-m_x)^2}
\end{equation}

Finally, we estimate the level of noise, $\sigma^2$. Inspired by the
unweighted estimator

\begin{equation}
s^2=\frac{\sum (y_i-\alpha-\beta (x_i-m_x))^2}{n-2}
\end{equation}

we suggest the following estimator

\begin{equation} s^2=\frac{\sum w_i(y_i-\alpha-\beta (x_i-m_x))^2}{\sum
w_i-2\frac{\sum w_i^2}{\sum w_i}} \end{equation}
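
A sketch collecting the estimators of $\alpha$, $\beta$, and $s^2$
(hypothetical names, not the yat API):
\begin{verbatim}
def weighted_linear_fit(x, y, w):
    """Weighted least squares for y = alpha + beta * (x - m_x)."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    alpha = sum(wi * yi for wi, yi in zip(w, y)) / sw  # = m_y
    beta = (sum(wi * (xi - mx) * (yi - alpha)
                for wi, xi, yi in zip(w, x, y)) /
            sum(wi * (xi - mx)**2 for wi, xi in zip(w, x)))
    res = sum(wi * (yi - alpha - beta * (xi - mx))**2
              for wi, xi, yi in zip(w, x, y))
    s2 = res / (sw - 2 * sum(wi**2 for wi in w) / sw)
    return alpha, beta, s2
\end{verbatim}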

\section{Outlook}
\subsection{Hierarchical clustering}
A hierarchical clustering consists of two steps: finding the two
closest data points, and merging these two data points into a new
data point and calculating the new distances from this point to all
other points.

For the first step we need a distance matrix, and if we use
Euclidean distances the natural modification of the expression
would be

\begin{equation}
d(x,y)=\frac{\sum w_i^xw_i^y(x_i-y_i)^2}{\sum w_i^xw_i^y}
\end{equation}

For the second step, inspired by average linkage, we suggest

\begin{equation}
d(xy,z)=\frac{\sum w_i^xw_i^z(x_i-z_i)^2+\sum
w_i^yw_i^z(y_i-z_i)^2}{\sum w_i^xw_i^z+\sum w_i^yw_i^z}
\end{equation}

to be the distance between the new merged point $xy$ and $z$, and we
also calculate new weights for this point: $w^{xy}_i=w^x_i+w^y_i$.
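
A sketch of the two distance expressions (hypothetical names; the
merged weights $w^{xy}_i=w^x_i+w^y_i$ would be formed when merging):
\begin{verbatim}
def wdist(x, wx, y, wy):
    """d(x,y) = sum(wx_i*wy_i*(x_i-y_i)^2) / sum(wx_i*wy_i)."""
    num = sum(a * b * (xi - yi)**2
              for a, b, xi, yi in zip(wx, wy, x, y))
    return num / sum(a * b for a, b in zip(wx, wy))

def merged_dist(x, wx, y, wy, z, wz):
    """Distance from the merged point xy to z."""
    nxz = sum(a * c * (xi - zi)**2
              for a, c, xi, zi in zip(wx, wz, x, z))
    nyz = sum(b * c * (yi - zi)**2
              for b, c, yi, zi in zip(wy, wz, y, z))
    dxz = sum(a * c for a, c in zip(wx, wz))
    dyz = sum(b * c for b, c in zip(wy, wz))
    return (nxz + nyz) / (dxz + dyz)
\end{verbatim}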

\end{document}