OpenCV Face Detection Self-Study (6): How opencv_traincascade Trains the Strong and Weak Classifiers

In (3) I summarized the code for the training preparation that opencv_traincascade does when using LBP features. Now it is time to grit my teeth and look at the training itself. Since I could hardly find any posts online covering this part (why is that?), most of what follows is my own guesswork; I will come back and refine it later.


Picking up where the last post left off, the code that trains one strong classifier is:


bool CvCascadeBoost::train( const CvFeatureEvaluator* _featureEvaluator,//contains the sum/tilted images, feature positions, etc.
                           int _numSamples,
                           int _precalcValBufSize, int _precalcIdxBufSize,
                           const CvCascadeBoostParams& _params )
{
    CV_Assert( !data );
    clear();
    data = new CvCascadeBoostTrainData( _featureEvaluator, _numSamples,
                                        _precalcValBufSize, _precalcIdxBufSize, _params );//for now all I know is that this ends up calling preCalculate to compute all the feature values
    CvMemStorage *storage = cvCreateMemStorage();
    weak = cvCreateSeq( 0, sizeof(CvSeq), sizeof(CvBoostTree*), storage );
    storage = 0;

    set_params( _params );
    if ( (_params.boost_type == LOGIT) || (_params.boost_type == GENTLE) )
        data->do_responses_copy();

    update_weights( 0 );

    cout << "+----+---------+---------+" << endl;
    cout << "|  N |    HR   |    FA   |" << endl;
    cout << "+----+---------+---------+" << endl;

    do
    {
        CvCascadeBoostTree* tree = new CvCascadeBoostTree;
        if( !tree->train( data, subsample_mask, this ) )//this should be where one weak classifier (the tree) gets trained
        {
            delete tree;
            break;
        }
        cvSeqPush( weak, &tree );//append the weak classifier to the strong classifier
        update_weights( tree );//the weight-update step of AdaBoost
        trim_weights();
        if( cvCountNonZero(subsample_mask) == 0 )
            break;
    }
    while( !isErrDesired() && (weak->total < params.weak_count) );

    data->is_classifier = true;
    data->free_train_data();
    return true;
}
This breaks down into three main steps:

  calling preCalculate to compute every feature value for every sample

      training one weak classifier

      updating every sample's weight according to the AdaBoost procedure (a simplified sketch of this update follows below)
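As a rough mental model of what update_weights( tree ) does for the default GAB (Gentle AdaBoost) boost type — a simplified sketch under that assumption, with a made-up function name, not the actual OpenCV implementation — every sample weight is multiplied by exp(-y_i * f(x_i)) and the weights are renormalized, so the samples the new tree gets wrong become heavier:

#include <cmath>
#include <vector>

// Simplified sketch (my assumption of what update_weights( tree ) amounts to
// for the GAB boost type); responses[i] is the label y_i in {-1, +1},
// preds[i] is the weak classifier's output f(x_i) for sample i.
static void updateWeightsSketch( const std::vector<float>& responses,
                                 const std::vector<float>& preds,
                                 std::vector<double>& weights )
{
    double sumW = 0;
    for( size_t i = 0; i < weights.size(); i++ )
    {
        weights[i] *= std::exp( -responses[i] * preds[i] ); // wrong answers get heavier
        sumW += weights[i];
    }
    for( size_t i = 0; i < weights.size(); i++ )
        weights[i] /= sumW;                                 // renormalize to sum to 1
}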

  
The main thing to look at is how one weak classifier is trained. When I summarized the structure of the output XML file in (4), I found that every node contains 8 very large numbers, and later in the predict function I saw that these 8 numbers are the node's subset. When the XML file is used, the feature value computed for that node's feature (for LBP, an integer in [0, 255]) selects one bit among the 8 x 32 = 256 bits of subset; if that bit is set the node outputs one value, otherwise the other. Note that with Haar-like features a node stores a threshold instead of a subset, but after the comparison everything works the same. Also, a Haar-like weak classifier is basically one feature with one threshold, whereas here many bits of the subset can be set, so my understanding is that the subset groups many categories (LBP bins) of a single feature rather than a plain threshold. A small sketch of this prediction-time lookup follows below.
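To make the lookup concrete, here is a minimal sketch of how I read the prediction step (evalLbpStump is a made-up name, not an OpenCV function; the bit trick is the same one used when the subset is built further down):

// Hypothetical helper illustrating the subset lookup at prediction time.
// lbpCode is the LBP value of this node's feature, an integer in [0, 255];
// subset is the node's 8 x 32 = 256-bit mask read from the XML file.
static float evalLbpStump( int lbpCode, const int subset[8],
                           float leftValue, float rightValue )
{
    bool goLeft = ( subset[lbpCode >> 5] & (1 << (lbpCode & 31)) ) != 0;
    return goLeft ? leftValue : rightValue;
}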

So how are these subsets computed? I traced through the program and summarize it below. (It uses a lot of data structures from OpenCV's ml module, which I still feel I have not gotten the hang of.)

In boost.cpp:

bool
CvBoostTree::train( CvDTreeTrainData* _train_data,
                    const CvMat* _subsample_idx, CvBoost* _ensemble )
{
    clear();
    ensemble = _ensemble;
    data = _train_data;
    data->shared = true;
    return do_train( _subsample_idx );//hand off to CvDTree to train the weak classifier
}

Then in tree.cpp:

bool CvDTree::do_train( const CvMat* _subsample_idx )
{
    bool result = false;

    CV_FUNCNAME( "CvDTree::do_train" );

    __BEGIN__;

    root = data->subsample_data( _subsample_idx );

    CV_CALL( try_split_node(root));
	//this is where the root's value and its children all get computed; split->subset, which holds the classifier's subset (or threshold), is computed here as well.

    if( root->split )
    {
        CV_Assert( root->left );
        CV_Assert( root->right );

        if( data->params.cv_folds > 0 )
            CV_CALL( prune_cv() );

        if( !data->shared )
            data->free_train_data();

        result = true;
    }

    __END__;

    return result;
}

This mainly involves:

    building the root node (to look into later)

    splitting the root: try_split_node(root) is where the numbers I want to understand get computed. The OpenCV documentation has a passage that seems to describe this: "The tree is built recursively, starting from the root node. All the training data (feature vectors and responses) is used to split the root node. At each node the optimum decision rule (i.e. the best split) is found based on some criterion (in machine learning, a 'purity' criterion is used for classification, and the sum of squared errors is used for regression)."

    prune_cv(), which I guess is pruning (to look into later)

Next, the root-splitting part.
In tree.cpp:

void CvDTree::try_split_node( CvDTreeNode* node )
{
    CvDTreeSplit* best_split = 0;
    int i, n = node->sample_count, vi;
    bool can_split = true;
    double quality_scale;

    calc_node_value( node );//compute node->value

    if( node->sample_count <= data->params.min_sample_count ||
        node->depth >= data->params.max_depth )
        can_split = false;

    if( can_split && data->is_classifier )
    {
        // check if we have a "pure" node,
        // we assume that cls_count is filled by calc_node_value()
        int* cls_count = data->counts->data.i;
        int nz = 0, m = data->get_num_classes();
        for( i = 0; i < m; i++ )
            nz += cls_count[i] != 0;
        if( nz == 1 ) // there is only one class
            can_split = false;
    }
    else if( can_split )
    {
        if( sqrt(node->node_risk)/n < data->params.regression_accuracy )
            can_split = false;
    }

    if( can_split )
    {
        best_split = find_best_split(node);//probably where split->subset[] gets computed
        // TODO: check the split quality ...
        node->split = best_split;
    }
    if( !can_split || !best_split )
    {
        data->free_node_data(node);
        return;
    }

    quality_scale = calc_node_dir( node );
    if( data->params.use_surrogates )
    {
        // find all the surrogate splits
        // and sort them by their similarity to the primary one
        for( vi = 0; vi < data->var_count; vi++ )
        {
            CvDTreeSplit* split;
            int ci = data->get_var_type(vi);

            if( vi == best_split->var_idx )
                continue;

            if( ci >= 0 )
                split = find_surrogate_split_cat( node, vi );
            else
                split = find_surrogate_split_ord( node, vi );

            if( split )
            {
                // insert the split
                CvDTreeSplit* prev_split = node->split;
                split->quality = (float)(split->quality*quality_scale);

                while( prev_split->next &&
                       prev_split->next->quality > split->quality )
                    prev_split = prev_split->next;
                split->next = prev_split->next;
                prev_split->next = split;
            }
        }
    }
    split_node_data( node );
    try_split_node( node->left );///////
    try_split_node( node->right );//////
	/*
	The algorithm then recursively splits the left and right children. It may stop splitting a node further (i.e. stop splitting the node further) in the following cases:
	the tree depth has reached the specified maximum;
	the number of training samples at the node is below the specified minimum, i.e. there is no statistically representative set left to split further;
	all samples at the node belong to the same class (or, for regression, the variation is already very small);
	the best split found gives no noticeable improvement over a random choice.
	*/
}

The computation I am after happens in best_split = find_best_split(node).

CvDTreeSplit* CvDTree::find_best_split( CvDTreeNode* node )
{
    DTreeBestSplitFinder finder( this, node );

    cv::parallel_reduce(cv::BlockedRange(0, data->var_count), finder);//invokes DTreeBestSplitFinder::operator() defined in this same cpp file

    CvDTreeSplit *bestSplit = 0;
    if( finder.bestSplit->quality > 0 )
    {
        bestSplit = data->new_split_cat( 0, -1.0f );
        memcpy( bestSplit, finder.bestSplit, finder.splitSize );
    }

    return bestSplit;
}


void DTreeBestSplitFinder::operator()(const BlockedRange& range)
{
    int vi, vi1 = range.begin(), vi2 = range.end();
    int n = node->sample_count;
    CvDTreeTrainData* data = tree->get_data();
    AutoBuffer<uchar> inn_buf(2*n*(sizeof(int) + sizeof(float)));

    for( vi = vi1; vi < vi2; vi++ )//[0, 8464) in total, i.e. the number of features; this is the outer loop over each feature
    {
        CvDTreeSplit *res;
        int ci = data->get_var_type(vi);
        if( node->get_num_valid(vi) <= 1 )
            continue;

        if( data->is_classifier )
        {
            if( ci >= 0 )
                res = tree->find_split_cat_class( node, vi, bestSplit->quality, split, (uchar*)inn_buf );
            else
                res = tree->find_split_ord_class( node, vi, bestSplit->quality, split, (uchar*)inn_buf );
        }
        else
        {
            if( ci >= 0 )
                res = tree->find_split_cat_reg( node, vi, bestSplit->quality, split, (uchar*)inn_buf );//computes the split; for LBP this calls CvDTreeSplit* CvBoostTree::find_split_cat_reg
            else
                res = tree->find_split_ord_reg( node, vi, bestSplit->quality, split, (uchar*)inn_buf );
        }

        if( res && bestSplit->quality < split->quality )
            memcpy( (CvDTreeSplit*)bestSplit, (CvDTreeSplit*)split, splitSize );//keep the better split, i.e. pick the best feature so far
    }
}

LBP features are categorical, and since the data->is_classifier branch is not taken here the boosted trees are grown as regression trees, so the call that actually runs is res = tree->find_split_cat_reg( node, vi, bestSplit->quality, split, (uchar*)inn_buf ).

In boost.cpp:
CvDTreeSplit*
CvBoostTree::find_split_cat_reg( CvDTreeNode* node, int vi, float init_quality, CvDTreeSplit* _split, uchar* _ext_buf )//"cat" apparently stands for categorical, "reg" for regression
{
    const double* weights = ensemble->get_subtree_weights()->data.db;
    int ci = data->get_var_type(vi);
    int n = node->sample_count;//number of positive and negative samples
    int mi = data->cat_count->data.i[ci];
    int base_size = (2*mi+3)*sizeof(double) + mi*sizeof(double*);
    cv::AutoBuffer<uchar> inn_buf(base_size);
    if( !_ext_buf )
        inn_buf.allocate(base_size + n*(2*sizeof(int) + sizeof(float)));
    uchar* base_buf = (uchar*)inn_buf;
    uchar* ext_buf = _ext_buf ? _ext_buf : base_buf + base_size;

    int* cat_labels_buf = (int*)ext_buf;
    const int* cat_labels = data->get_cat_var_data(node, vi, cat_labels_buf);//returns a 1 x n array with every sample's value for the vi-th feature
    float* responses_buf = (float*)(cat_labels_buf + n);
    int* sample_indices_buf = (int*)(responses_buf + n);
    const float* responses = data->get_ord_responses(node, responses_buf, sample_indices_buf);//the responses are the class labels {+1, -1}

    double* sum = (double*)cv::alignPtr(base_buf,sizeof(double)) + 1;
    double* counts = sum + mi + 1;
    double** sum_ptr = (double**)(counts + mi);
    double L = 0, R = 0, best_val = init_quality, lsum = 0, rsum = 0;
    int i, best_subset = -1, subset_i;

    for( i = -1; i < mi; i++ )//mi == 256: LBP has 256 categories in total
        sum[i] = counts[i] = 0;

    // calculate sum response and weight of each category of the input var
    for( i = 0; i < n; i++ )//over all positive and negative samples
    {
        int idx = ((cat_labels[i] == 65535) && data->is_buf_16u) ? -1 : cat_labels[i];//for LBP this idx is always in [0, 255]
        double w = weights[i];
        double s = sum[idx] + responses[i]*w;//sum of the weights multiplied by (+1) or (-1)
        double nc = counts[idx] + w;//plain sum of the weights
        sum[idx] = s;
        counts[idx] = nc;
    }

    // calculate average response in each category
    for( i = 0; i < mi; i++ )//over LBP's 256 categories
    {
        R += counts[i];
        rsum += sum[i];
        sum[i] = fabs(counts[i]) > DBL_EPSILON ? sum[i]/counts[i] : 0;//average response within the category; if the category is not very discriminative this value will be small (in the extreme case of half positive and half negative samples it is exactly 0)
        sum_ptr[i] = sum + i;
    }

    icvSortDblPtr( sum_ptr, mi, 0 );//sort the 256 averages in ascending order; note that sum_ptr stores the addresses of the sum[i] entries

    // revert back to unnormalized sums
    // (there should be a very little loss in accuracy)
    for( i = 0; i < mi; i++ )
        sum[i] *= counts[i];

    for( subset_i = 0; subset_i < mi-1; subset_i++ )
    {
        int idx = (int)(sum_ptr[subset_i] - sum);//the subset_i-th element after sorting was the idx-th element before sorting
        double ni = counts[idx];

        if( ni > FLT_EPSILON )
        {
            double s = sum[idx];
            lsum += s; L += ni;
            rsum -= s; R -= ni;

            if( L > FLT_EPSILON && R > FLT_EPSILON )
            {
                double val = (lsum*lsum*R + rsum*rsum*L)/(L*R);//assigned to quality below, and used again in the outer per-feature loop
                if( best_val < val )
                {
                    best_val = val;
                    best_subset = subset_i;
                }
            }
        }
    }

    CvDTreeSplit* split = 0;
    if( best_subset >= 0 )
    {
        split = _split ? _split : data->new_split_cat( 0, -1.0f);
        split->var_idx = vi;//which feature this split uses
        split->quality = (float)best_val;//compared in the outer per-feature loop to choose the best feature
        memset( split->subset, 0, (data->max_c_count + 31)/32 * sizeof(int));
        for( i = 0; i <= best_subset; i++ )
        {
            int idx = (int)(sum_ptr[i] - sum);
            split->subset[idx >> 5] |= 1 << (idx & 31);
			//here the 256 bins are packed into the subset words (8 x 32 = 256); each 32-bit word records whether each of its 32 categories belongs to the split,
			//but my question is: why is subset declared with size 2? That does not look nearly big enough.
        }
    }
    return split;
}

So this is where split->subset[] finally gets written. (As for the size-2 question above: new_split_cat appears to allocate each split with room for (data->max_c_count + 31)/32 ints rather than just the two declared in the struct, which is consistent with the memset over exactly that many ints right before the loop, so the declared subset[2] seems to be only a minimum.)
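A toy illustration of that bit packing, with made-up numbers and 8 categories instead of 256 (not taken from a real training run): suppose the three categories with the lowest average responses are 5, 2 and 7 and best_subset stops after the third of them; those three bins then form one side of the split and their bits get set with the same trick as above.

// Toy example of how split->subset gets packed (hypothetical values).
#include <cstdio>

int main()
{
    int subset[(8 + 31) / 32] = { 0 };       // one 32-bit word is enough for 8 categories
    int leftCats[] = { 5, 2, 7 };            // categories assigned to one side of the split
    for( int k = 0; k < 3; k++ )
    {
        int idx = leftCats[k];
        subset[idx >> 5] |= 1 << (idx & 31); // same bit trick as split->subset[idx >> 5] |= ...
    }
    printf( "subset[0] = 0x%X\n", subset[0] ); // prints 0xA4, i.e. (1<<2)|(1<<5)|(1<<7)
    return 0;
}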

My feeling is that sum[i] = fabs(counts[i]) > DBL_EPSILON ? sum[i]/counts[i] : 0; corresponds to Eq. 4 in [1], but I do not see where double val = (lsum*lsum*R + rsum*rsum*L)/(L*R); comes from; the corresponding formula in [1] should be Eq. 2.
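One way to make sense of that expression (my own derivation, not something taken from [1] or the OpenCV docs): it is the standard regression-tree variance-reduction criterion in disguise. First,

\[
\frac{l_{sum}^2 R + r_{sum}^2 L}{L R} \;=\; \frac{l_{sum}^2}{L} + \frac{r_{sum}^2}{R},
\]

and with responses $y_i \in \{-1,+1\}$, per-sample weights $w_i$, left/right weight totals $L$ and $R$, and weighted response sums $l_{sum}$ and $r_{sum}$,

\[
\sum_{i \in \mathrm{left}} w_i\Big(y_i - \frac{l_{sum}}{L}\Big)^2
+ \sum_{i \in \mathrm{right}} w_i\Big(y_i - \frac{r_{sum}}{R}\Big)^2
\;=\; \sum_i w_i y_i^2 - \Big(\frac{l_{sum}^2}{L} + \frac{r_{sum}^2}{R}\Big).
\]

Since $\sum_i w_i y_i^2$ does not depend on where the split is placed, maximizing val is equivalent to minimizing the weighted squared error of the two-leaf regression stump. That would explain why it does not match Eq. 2 of [1]: the boosted trees here are grown as regression trees, not with the classification criterion.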


Besides subset, the XML file also stores threshold and output values. The threshold works much like it does in haartraining; the output values (i.e. left->value and right->value) come from calc_node_value( CvDTreeNode* n ) in boost.cpp. The main code is:

{
        // in case of regression tree:
        //  * node value is 1/n*sum_i(Y_i), where Y_i is i-th response,
        //    n is the number of samples in the node.
        //  * node risk is the sum of squared errors: sum_i((Y_i - <node_value>)^2)
        double sum = 0, sum2 = 0, iw;
        float* values_buf = (float*)(labels_buf + n);
        int* sample_indices_buf = (int*)(values_buf + n);
        const float* values = data->get_ord_responses(node, values_buf, sample_indices_buf);

        for( i = 0; i < n; i++ )
        {
            int idx = labels[i];
            double w = weights[idx]/*priors[values[i] > 0]*/;
            double t = values[i];//{-1, +1}
            rcw[0] += w;
            subtree_weights[i] = w;
            sum += t*w;
            sum2 += t*t*w;
        }

        iw = 1./rcw[0];
        node->value = sum*iw;
        node->node_risk = sum2 - (sum*iw)*sum;

        // renormalize the risk, as in try_split_node the unweighted formula
        // sqrt(risk)/n is used, rather than sqrt(risk)/sum(weights_i)
        node->node_risk *= n*iw*n*iw;
    }

Which mathematical formula is this piece of code based on, and where does it come from? If anyone knows, please let me know.
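My own reading of that code (an educated guess, not a confirmed reference): with weights $w_i$ and responses $y_i \in \{-1,+1\}$, the snippet computes

\[
\mathrm{value} = \frac{\sum_i w_i y_i}{\sum_i w_i},
\qquad
\mathrm{node\_risk} = \sum_i w_i y_i^2 - \frac{\big(\sum_i w_i y_i\big)^2}{\sum_i w_i}
= \sum_i w_i\big(y_i - \mathrm{value}\big)^2,
\]

i.e. the node value is the weighted mean of the responses and the risk is the weighted sum of squared errors about that mean (matching the English comment at the top of the snippet). If I am reading it right, this weighted mean $E_w[y \mid x]$ is exactly the leaf value of a regression stump in Gentle AdaBoost (cf. Friedman, Hastie and Tibshirani, "Additive Logistic Regression: A Statistical View of Boosting"), which would be where the formula comes from; corrections welcome.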



[1] L. Zhang et al., "Face Detection Based on Multi-Block LBP Representation".