Timestamp: Mar 5, 2008, 3:30:58 AM
Location: trunk/yat/classifier
Files: 3 edited
trunk/yat/classifier/NBC.h
r1184 → r1200

@@ -64,5 +64,5 @@
 
 ///
-/// \brief Train the %classifier using training data and targets.
+/// \brief Train the NBC using training data and targets.
 ///
 /// For each class mean and variance are estimated for each
@@ -76,5 +76,5 @@
 
 ///
-/// \brief Train the %classifier using weighted training data and
+/// \brief Train the NBC using weighted training data and
 /// targets.
 ///
@@ -121,8 +121,9 @@
 
 \f$ P_j = \frac{1}{Z} \exp\left(-N\frac{\sum
-{w_i(x_i-\mu_i)^2}/(2\sigma_i^2)}{\sum w_i}\right)\f$,
-where \f$ \mu_i \f$ and \f$ \sigma_i^2 \f$ are the estimated
-mean and variance, respectively. Z is chosen such that
-total probability equals unity, \f$ \sum P_j = 1 \f$.
+{w_i(x_i-\mu_i)^2}/(2\sigma_i^2)}{\sum w_i}\right)
+\prod_i\frac{1}{\sqrt{2\pi\sigma_i^2}}\f$, where \f$ \mu_i \f$
+and \f$ \sigma_i^2 \f$ are the estimated mean and variance,
+respectively. Z is chosen such that total probability equals
+unity, \f$ \sum P_j = 1 \f$.
 
 \note If parameters could not be estimated during training, due
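The formula corrected in the last hunk is the class posterior of a weighted naive Bayes classifier with Gaussian likelihoods, now including the missing normalisation product. A minimal standalone sketch of the unnormalised per-class quantity (not the yat NBC API; the function name and signature are hypothetical):

#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical sketch, not the yat NBC API: unnormalised class
// likelihood following the corrected docs,
//   P_j ~ exp(-N * sum_i(w_i (x_i - mu_i)^2 / (2 sigma_i^2)) / sum_i w_i)
//         * prod_i 1 / sqrt(2 pi sigma_i^2)
double class_likelihood(const std::vector<double>& x,      // sample
                        const std::vector<double>& w,      // weights
                        const std::vector<double>& mu,     // class means
                        const std::vector<double>& sigma2) // class variances
{
  const double pi = 3.14159265358979323846;
  const std::size_t N = x.size();
  double wsum = 0.0;  // sum of weights
  double quad = 0.0;  // weighted quadratic term
  double norm = 1.0;  // product of Gaussian normalisation factors
  for (std::size_t i = 0; i < N; ++i) {
    wsum += w[i];
    quad += w[i] * (x[i] - mu[i]) * (x[i] - mu[i]) / (2.0 * sigma2[i]);
    norm *= 1.0 / std::sqrt(2.0 * pi * sigma2[i]);
  }
  return std::exp(-static_cast<double>(N) * quad / wsum) * norm;
}

Z is then obtained by summing this quantity over all classes, so that \f$ \sum_j P_j = 1 \f$ as the documentation states.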
trunk/yat/classifier/SVM.cc
r1177 → r1200

@@ -153,4 +153,5 @@
 }
 
+/*
 double SVM::predict(const DataLookup1D& x) const
 {
@@ -170,4 +171,5 @@
   return margin_*(y+bias_);
 }
+*/
 
 int SVM::target(size_t i) const
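With these 1D predict() overloads commented out, prediction goes through the matrix-based interface that remains declared in SVM.h. A hedged caller-side sketch (include paths are inferred from the repository layout, KernelLookup construction is elided, and utility::Matrix being default-constructible is an assumption):

#include "yat/classifier/KernelLookup.h"
#include "yat/classifier/SVM.h"
#include "yat/utility/Matrix.h"

using namespace theplu::yat;

// Hypothetical usage sketch: batch prediction through the
// KernelLookup-based interface that remains after this change.
void predict_all(const classifier::SVM& svm,
                 const classifier::KernelLookup& test_kernel)
{
  utility::Matrix result;            // assumed default-constructible
  svm.predict(test_kernel, result);  // matches the SVM.h declaration
  // Per the SVM.h docs: row 0 holds the score for binary target
  // true, row 1 the complement (one minus row 0).
}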
trunk/yat/classifier/SVM.h
r1175 → r1200

@@ -46,14 +46,7 @@
 class KernelLookup;
 
-///
-/// @brief Support Vector Machine
-///
-///
-///
-/// Class for SVM using Keerthi's second modification of Platt's
-/// Sequential Minimal Optimization. The SVM uses all data given for
-/// training. If validation or testing is wanted this should be
-/// taken care of outside (in the kernel).
-///
+/**
+   \brief Support Vector Machine
+*/
 class SVM
 {
@@ -66,20 +59,23 @@
 
 /**
-   Copy constructor.
+   \brief Copy constructor.
 */
 SVM(const SVM&);
 
 ///
-/// Destructor
+/// \brief Destructor
 ///
 virtual ~SVM();
 
-///
-/// Same as copy constructor.
-///
+/**
+   \brief Create an untrained copy of SVM.
+
+   \returns A dynamically allocated SVM, which has to be deleted
+   by the caller to avoid memory leaks.
+*/
 SVM* make_classifier(void) const;
 
 ///
-/// @return \f$ \alpha \f$
+/// @return alpha parameters
 ///
 const utility::Vector& alpha(void) const;
@@ -89,10 +85,10 @@
 /// large C means the training will be focused on getting samples
 /// correctly classified, with risk for overfitting and poor
-/// generalisation. A too small C will result in a training in which
-/// misclassifications are not penalized. C is weighted with
-/// respect to the size , so \f$ n_+C_+ = n_-C_- \f$, meaning a
-/// misclassificaion of the smaller group is penalized
+/// generalisation. A too small C will result in a training, in
+/// which misclassifications are not penalized. C is weighted with
+/// respect to the size such that \f$ n_+C_+ = n_-C_- \f$, meaning
+/// a misclassificaion of the smaller group is penalized
 /// harder. This balance is equivalent to the one occuring for
-/// regression with regularisation, or ANN-training with a
+/// %regression with regularisation, or ANN-training with a
 /// weight-decay term. Default is C set to infinity.
 ///
@@ -117,5 +113,5 @@
 + bias \f$, where \f$ t \f$ is the target.
 
-   @return output
+   @return output of training samples
 */
 const theplu::yat::utility::Vector& output(void) const;
@@ -125,5 +121,5 @@
 is calculated as the output times the margin, i.e., geometric
 distance from decision hyperplane: \f$ \frac{ \sum \alpha_j
-t_j K_{ij} + bias}{ w} \f$ The output has 2 rows. The first row
+t_j K_{ij} + bias}{|w|} \f$ The output has 2 rows. The first row
 is for binary target true, and the second is for binary target
 false. The second row is superfluous as it is the first row
@@ -137,4 +133,5 @@
 void predict(const KernelLookup& input, utility::Matrix& predict) const;
 
+/*
 ///
 /// @return output times margin (i.e. geometric distance from
@@ -148,4 +145,5 @@
 ///
 double predict(const DataLookupWeighted1D& input) const;
+*/
 
 ///
@@ -181,5 +179,9 @@
 decreased.
 
-\throw if maximal number of epoch is reach.
+Class for SVM using Keerthi's second modification of Platt's
+Sequential Minimal Optimization. The SVM uses all data given for
+training.
+
+\throw std::runtime_error if maximal number of epoch is reach.
 */
 void train(const KernelLookup& kernel, const Target& target);
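The predict() documentation fixed above divides the SVM output by \f$ |w| \f$ to turn it into a geometric distance from the decision hyperplane. A minimal standalone sketch of the two formulas as written in the docs (not the yat implementation; all names are hypothetical):

#include <cstddef>
#include <vector>

// Hypothetical sketch of the SVM.h formulas, not the yat code:
//   output_i = sum_j alpha_j t_j K_ij + bias
//   distance = output_i / |w|
double svm_output(const std::vector<double>& alpha,  // trained alphas
                  const std::vector<int>& t,         // targets, +1 or -1
                  const std::vector<double>& k_row,  // kernel row K_i
                  double bias)
{
  double sum = bias;
  for (std::size_t j = 0; j < alpha.size(); ++j)
    sum += alpha[j] * t[j] * k_row[j];
  return sum;
}

double geometric_distance(double output, double w_norm)
{
  return output / w_norm;  // the corrected |w| denominator
}

The C-weighting described in the same header, \f$ n_+C_+ = n_-C_- \f$, simply scales the penalty so that misclassifying a sample from the smaller class costs proportionally more.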