Modifying the OpenCV Source: Editing the SURF Detector via hpp Files (with Support for Breakpoints and Debugging)

Contents

0. Prerequisites

1. Why modify the source code

The goal

A solution for modifying the source code

The exploration process

Pitfall notes

Modifying the macro definition

Error 1:

Error 2:

Final result

Addendum: adding more code logic

Summary

References

0. Prerequisites

First, OpenCV and OpenCV contrib need to be installed.

If you have not done that yet, see my previous OpenCV post:

Detailed tutorial: Installing Visual Studio + OpenCV + OpenCV contrib on Windows (树叶的叶and开心的开's CSDN blog)

1.修改源码的目的

下面说一下为什么要修改源代码。事实上,科学研究的许多改进都是在以前的研究和修改的基础上进行的。这时候如果能在理解别人代码的基础上进行修改,就可以大大减少无用的造轮子。以下是我自己的亲身经历:

研究生做毕业设计时候,我负责一个图像拼接项目,然后OpenCV跑着跑着就卡死了……

一步步调试后发现并没有卡住,而是因为一步图像拼接(光束法调整,涉及到大矩阵乘法)耗时太长……

然后就调试,发现……!@#¥%……&*,点进去是点进去了,但是只有HPP文件!!!

根本看不出具体的逻辑。

就像下图的情况一样(代码就是上一步的代码):

首先点击SURF::create(windows下按ctrl键然后鼠标点击)

[screenshot]

This jumps to create in the HPP file:

[screenshot]

And that's it; you cannot click any deeper into the hpp file, and you cannot see the full logic of the SURF detector...

What I really could not figure out at the time: OpenCV ships with its source code, yet there was no way to debug or modify it... maddening.

In short, the source needed modifying. But how?

PS: the stitching project used the source of OpenCV's high-level image stitching algorithms; I won't go into that here.

PS: later I roughly understood why the source cannot be traced into: once the OpenCV sources have been generated and built by CMake and VS, they exist as binary artifacts such as lib packages and dll files. When we ran the code in the previous post, VS was actually linking against those generated lib and dll files, not the source. My C++ fundamentals are only average, so if this description is off, corrections are welcome.

Another PS: only much later, after getting deep into Java, did I learn about techniques such as AOP... And with Java + IDEA you can step through code via decompilation. You cannot modify the source that way (the debugging works through decompiled class files, which cannot be edited), but it is still far better than this painful OpenCV situation; worst case, you copy a java file and rename it.

The goal

In grad school, modifying the source served real research needs. This time the goal is simpler:

Print the individual steps of the SURF feature detection algorithm.

Why pick the SURF detector for this test? A few reasons:

1. We already got the SURF detector running earlier, so we have some familiarity with it;

2. SURF lives in the OpenCV contrib repository, so its call chain is longer than that of code in the main OpenCV repository; managing to modify the SURF code is more challenging and more satisfying.

3. One important point: whatever approach we use must support single-step debugging with breakpoints.

PS: we cannot print every step; getting some of the intermediate steps out via cout already counts as success (after all, there is no real project requirement to meet this time).

Some of the existing source-modification methods on CSDN only support a modify-compile-run cycle: once the modified code has a bug, you must recompile, and you cannot set breakpoints, so the cost of tracking down errors is extremely high.

Here, once more, is the SURF test code used earlier:

OpenCVTest.cpp

#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/xfeatures2d/nonfree.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc.hpp"
#include <opencv2/imgproc/types_c.h>
#include<opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
using namespace cv::xfeatures2d;
 
int main()
{
	//【0】Change the console text color
	system("color 1F");
 
 
	//【1】Load the source images
	Mat srcImage1 = imread("E:/images/3.jpg", 1);
	Mat srcImage2 = imread("E:/images/4.jpg", 1);
	Mat copysrcImage1 = srcImage1.clone();
	Mat copysrcImage2 = srcImage2.clone();
 
	if (!srcImage1.data || !srcImage2.data)
	{
		printf("读取图片错误,请确定目录下是否有imread函数指定的图片存在~! \n"); return false;
	}
 
	//【2】Detect keypoints with the SURF detector
	int minHessian = 100;//Hessian threshold used by the SURF algorithm
 
	Ptr<SURF> detector = SURF::create(minHessian);//create a SurfFeatureDetector (SURF) feature-detector object  
	// Ptr<SURF> detector = cv::xfeatures2d::SURF::create(400);
 
	vector<KeyPoint> keypoints_object, keypoints_scene;//the vector template class holds a dynamic array of any type
 
	//【3】Call detect to find the SURF keypoints and store them in vector containers
	detector->detect(srcImage1, keypoints_object);
	detector->detect(srcImage2, keypoints_scene);
 
	//【4】Compute the descriptors (feature vectors)
	Ptr<SURF> extractor = SURF::create();
 
	Mat descriptors_object, descriptors_scene;
	extractor->compute(srcImage1, keypoints_object, descriptors_object);
	extractor->compute(srcImage2, keypoints_scene, descriptors_scene);
 
	//【5】Match the descriptors with the FLANN matcher
	FlannBasedMatcher matcher;
	vector< DMatch > matches;
	matcher.match(descriptors_object, descriptors_scene, matches);
	double max_dist = 0; double min_dist = 100;//minimum and maximum distances
 
	//【6】Compute the maximum and minimum distances between keypoints
	for (int i = 0; i < descriptors_object.rows; i++)
	{
		double dist = matches[i].distance;
		if (dist < min_dist) min_dist = dist;
		if (dist > max_dist) max_dist = dist;
	}
 
	printf(">Max dist 最大距离 : %f \n", max_dist);
	printf(">Min dist 最小距离 : %f \n", min_dist);
 
	//【7】Keep only the matches whose distance is less than 3*min_dist
	std::vector< DMatch > good_matches;
	for (int i = 0; i < descriptors_object.rows; i++)
	{
		if (matches[i].distance < 3 * min_dist)
		{
			good_matches.push_back(matches[i]);
		}
	}
 
	//Draw the matched keypoints
	Mat img_matches;
	drawMatches(srcImage1, keypoints_object, srcImage2, keypoints_scene,
		good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
		vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
 
	//Define two local variables
	vector<Point2f> obj;
	vector<Point2f> scene;
 
	//Extract the keypoints from the successful matches
	for (unsigned int i = 0; i < good_matches.size(); i++)
	{
		obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
		scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
	}
	vector<unsigned char> listpoints;
 
	//Mat H = findHomography( obj, scene, CV_RANSAC );//compute the perspective transform 
	Mat H = findHomography(obj, scene, RANSAC, 3, listpoints);//compute the perspective transform 
 
 
	std::vector< DMatch > goodgood_matches;
	for (int i = 0; i < listpoints.size(); i++)
	{
		if ((int)listpoints[i])
		{
 
			goodgood_matches.push_back(good_matches[i]);
 
 
			cout << (int)listpoints[i] << endl;
		}
 
	}
	cout << "listpoints大小:" << listpoints.size() << endl;
	cout << "goodgood_matches大小:" << goodgood_matches.size() << endl;
	cout << "good_matches大小:" << good_matches.size() << endl;
	Mat Homgimg_matches;
	drawMatches(copysrcImage1, keypoints_object, copysrcImage2, keypoints_scene,
		goodgood_matches, Homgimg_matches, Scalar::all(-1), Scalar::all(-1),
		vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
 
	imshow("去除误匹配点后;", Homgimg_matches);
 
 
	//Get the corners from the query image
	vector<Point2f> obj_corners(4);
	obj_corners[0] = cvPoint(0, 0); obj_corners[1] = cvPoint(srcImage1.cols, 0);
	obj_corners[2] = cvPoint(srcImage1.cols, srcImage1.rows); obj_corners[3] = cvPoint(0, srcImage1.rows);
	vector<Point2f> scene_corners(4);
 
	//Apply the perspective transform
	perspectiveTransform(obj_corners, scene_corners, H);
 
	//Draw straight lines between the corners
	line(img_matches, scene_corners[0] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[1] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
	line(img_matches, scene_corners[1] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[2] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
	line(img_matches, scene_corners[2] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[3] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
	line(img_matches, scene_corners[3] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[0] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
 
	//Show the final result
	imshow("Result", img_matches);
 
	waitKey(0);
	return 0;
}

A solution for modifying the source code

This is how I thought about the solution at the time.

The initial idea was simply: find the header and cpp files in the source, copy them into the project, and modify them there.

While browsing the source (OpenCV 3.4.3 at the time), I noticed that the headers are basically all hpp files. I did not know much about hpp files then (honestly, I still don't), so I looked up some material and learned that in C++, an hpp file may contain both the declaration and the implementation of a function; for details see "C++ hpp文件 – kaizen – 博客园".
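As a tiny illustration of that point (a hypothetical header, not taken from OpenCV): a function can be declared and defined in the same hpp file, and any cpp that includes the header can call it directly.

// greet.hpp -- hypothetical example: declaration and definition live in one header
#ifndef GREET_HPP
#define GREET_HPP

#include <iostream>
#include <string>

// 'inline' prevents multiple-definition link errors when several
// translation units include this header.
inline void greet(const std::string& name)
{
    std::cout << "Hello, " << name << "!" << std::endl;
}

#endif // GREET_HPP

// usage from any cpp:  #include "greet.hpp"   then   greet("SURF");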


My C++ fundamentals are not great, and I moved to Java after starting work, so I never really studied C++ in depth.

Having just suffered through installing OpenCV, and now suffering through modifying its source, I read that article and suddenly had a flash of insight:

Why not just copy the HPP file out? The hpp files inside OpenCV only seem to act as headers, but I could copy the function definitions into one, couldn't I?

Let's begin.

The exploration process

First, we know that SURF is declared in the nonfree.hpp file.

[screenshot]

Find nonfree.hpp under the build path from before:

[screenshot]

Found it; make a copy and rename it myNonfree.hpp:

[screenshot]

The hpp file contains only declarations, no implementations. In the end we will certainly have to find the concrete implementation of the surf functions, but no rush; first let's look at the code here:

[screenshot]

We can see that SURF inherits from the Feature2D class;

Ctrl-click "Feature2D" and we land in the features2d.hpp file:

[screenshot]

In features2d.hpp we can see some of Feature2D's function declarations, such as detect, create, and so on; all of these are meant to be implemented by subclasses:

[screenshot]

In this features2d.hpp we can also see many feature-point detection algorithms, for example SIFT:

[screenshot]

And the ORB feature detector (PS: the Orb Union in Gundam SEED shares the name; Gundam fans rejoice, haha):

[screenshot]

There are other feature detectors as well; I won't go through them all.

Sadly, though, we still have not found the implementation of the SURF algorithm.

A first guess: since there is a nonfree.hpp, shouldn't there be a nonfree.cpp?

There isn't. Don't bother looking (don't ask how I know).

So what next?

A further guess: the concrete SURF implementation must be somewhere in the opencv-contrib sources downloaded earlier; if even the sources did not contain the SURF implementation, what would CMake have had to compile? So we go back to the opencv contrib tree downloaded at the start and search it, and inside xfeatures2d we find a surf.cpp file. Intuition says the SURF function implementations should be in there.

[screenshot]

Open it with Notepad++ and scroll to the end. Here we take the key step: we find SURF's create method, and the nearby "#ifdef OPENCV_ENABLE_NONFREE" tells us that this is the create method of the non-free SURF:

[screenshot]

This code says that if OPENCV_ENABLE_NONFREE is enabled, it runs:

return makePtr<SURF_Impl>(_threshold, _nOctaves, _nOctaveLayers, _extended, _upright);

otherwise (OPENCV_ENABLE_NONFREE not enabled) it raises a CV_Error.
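Putting that together, the relevant part of surf.cpp is structured roughly like this (a paraphrased sketch; the same structure reappears, renamed, at the end of the mysurf.hpp listing below):

// Structure of create in surf.cpp (paraphrased sketch)
#ifdef OPENCV_ENABLE_NONFREE
Ptr<SURF> SURF::create(double _threshold, int _nOctaves, int _nOctaveLayers,
                       bool _extended, bool _upright)
{
    return makePtr<SURF_Impl>(_threshold, _nOctaves, _nOctaveLayers, _extended, _upright);
}
#else
Ptr<SURF> SURF::create(double, int, int, bool, bool)
{
    CV_Error(Error::StsNotImplemented,
        "This algorithm is patented and is excluded in this configuration; "
        "Set OPENCV_ENABLE_NONFREE CMake option and rebuild the library");
}
#endif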

Next, let's look for code related to SURF_Impl in the files.

Notepad++'s Find in Files feature can search every file under a folder. A global search for "SURF_Impl" turns up the following:

[screenshot]

We can see that SURF_Impl is defined in surf.hpp, and that SURF_Impl is an implementation of SURF.

Open surf.hpp in Notepad++:

[screenshot]

We can see that some of the virtual methods declared on the SURF class are implemented right here (note: ... while searching I kept assuming there had to be some more elaborate implementation elsewhere; exasperating).

Back in the features2d.hpp file, let's look at detect in Feature2D:

[screenshot]

As you can see, Feature2D's detect function is itself implemented in terms of detectAndCompute.

Then search features2d.cpp for the implementation of Feature2D's detectAndCompute.

[screenshot]

It is an empty implementation; subclasses are expected to provide it.
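For reference, the two functions in the 3.4-era sources look roughly like this (paraphrased from the OpenCV code, so treat it as a sketch): detect simply forwards to detectAndCompute with no descriptor output, and the base-class detectAndCompute only raises a not-implemented error:

// features2d.cpp (paraphrased sketch)
void Feature2D::detect(InputArray image, std::vector<KeyPoint>& keypoints, InputArray mask)
{
    if (image.empty())
    {
        keypoints.clear();
        return;
    }
    detectAndCompute(image, mask, keypoints, noArray(), false);
}

void Feature2D::detectAndCompute(InputArray, InputArray,
                                 std::vector<KeyPoint>&, OutputArray, bool)
{
    CV_Error(Error::StsNotImplemented, "");   // the "empty" base implementation
}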

Back in surf.cpp, search again and find detectAndCompute:

[screenshot]

In fact this is not implemented by the SURF class itself; it is delegated to the SURF_Impl class.

At this point we can be fairly confident of the following (see the miniature sketch after this list):

1. The SURF class is a subclass of Feature2D;

2. Many of SURF's functions (detectAndCompute, for instance) are implemented by SURF's subclass SURF_Impl;

3. The concrete logic of SURF_Impl all lives in surf.cpp, while the declaration of SURF_Impl lives in surf.hpp.
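In miniature, the pattern looks like this (hypothetical names, plain C++, not OpenCV code; just the shape of the relationship):

#include <iostream>
#include <memory>

class Feature2DLike {                        // stands in for cv::Feature2D
public:
    virtual ~Feature2DLike() = default;
    virtual void detectAndCompute() = 0;     // "empty" here; concrete subclasses implement it
    void detect() { detectAndCompute(); }    // detect delegates, as in features2d
};

class SurfLike : public Feature2DLike {      // stands in for SURF (declared in nonfree.hpp)
public:
    static std::unique_ptr<SurfLike> create();
};

class SurfImplLike : public SurfLike {       // stands in for SURF_Impl (surf.hpp / surf.cpp)
public:
    void detectAndCompute() override { std::cout << "detecting keypoints...\n"; }
};

std::unique_ptr<SurfLike> SurfLike::create() { return std::make_unique<SurfImplLike>(); }

int main()
{
    SurfLike::create()->detect();   // prints "detecting keypoints..."
    return 0;
}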

So here is what we should do:

1. Copy nonfree.hpp into the project and rename it myNonfree.hpp (already done above; copy it now if you haven't);

2. Copy surf.hpp into your own directory and rename it mysurf.hpp;

3. Copy the function implementations from surf.cpp into mysurf.hpp as well, so that the single mysurf.hpp carries both the declarations and the definitions (implementations); when copying, take care to keep everything inside the same namespace;

4. To avoid clashing with the existing SURF, change every "SURF" in mysurf.hpp and myNonfree.hpp to "MYSURF";

5. To verify the effect of the modification, print some of the steps from mysurf.hpp with cout (you can see these cout calls in the listing below).
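Once the renaming is done, the test program only needs to include the new header and swap the type name. A minimal sketch, assuming mysurf.hpp and myNonfree.hpp sit next to OpenCVTest.cpp and the test images exist as before:

#include "mysurf.hpp"   // our copy; it pulls in myNonfree.hpp itself
#include <iostream>

using namespace cv;
using namespace cv::xfeatures2d;

int main()
{
    Mat img = imread("E:/images/3.jpg", 1);
    Ptr<MYSURF> detector = MYSURF::create(100);   // was: Ptr<SURF> detector = SURF::create(100);
    std::vector<KeyPoint> keypoints;
    detector->detect(img, keypoints);             // now steps into our editable mysurf.hpp
    std::cout << "keypoints: " << keypoints.size() << std::endl;
    return 0;
}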


The code for mysurf.hpp and myNonfree.hpp follows. You could of course just take it as-is, but I still recommend walking through the process yourself; otherwise the SURF detector will be the only piece of OpenCV source you ever know how to modify:

mysurf.hpp

// see LICENSE.txt in the OpenCV root directory //

// The include-guard macro here must be changed, otherwise you will get errors

#ifndef __OPENCV_XFEATURES2D_MY_SURF_HPP__
#define __OPENCV_XFEATURES2D_MY_SURF_HPP__

// Honestly I was not sure which dependencies are needed here, so I simply copied over all the includes from the original OpenCVTest.
#include "myNonfree.hpp"
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/xfeatures2d/nonfree.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc.hpp"
#include <opencv2/imgproc/types_c.h>
#include<opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;

namespace cv
{
    namespace xfeatures2d
    {

        //! Speeded up robust features, port from CUDA module.
        // MYSURF //
        /*!
         MYSURF implementation.

         The class implements SURF algorithm by H. Bay et al.
         */
        class MYSURF_Impl : public MYSURF
        {
        public:
            //! the full constructor taking all the necessary parameters
            explicit CV_WRAP MYSURF_Impl(double hessianThreshold,
                int nOctaves = 4, int nOctaveLayers = 2,
                bool extended = true, bool upright = false);

            //! returns the descriptor size in float's (64 or 128)
            CV_WRAP int descriptorSize() const CV_OVERRIDE;

            //! returns the descriptor type
            CV_WRAP int descriptorType() const CV_OVERRIDE;

            //! returns the descriptor type
            CV_WRAP int defaultNorm() const CV_OVERRIDE;

            void set(int, double);
            double get(int) const;

            //! finds the keypoints and computes their descriptors.
            // Optionally it can compute descriptors for the user-provided keypoints
            void detectAndCompute(InputArray img, InputArray mask,
                CV_OUT std::vector<KeyPoint>& keypoints,
                OutputArray descriptors,
                bool useProvidedKeypoints = false) CV_OVERRIDE;

            void setHessianThreshold(double hessianThreshold_) CV_OVERRIDE { hessianThreshold = hessianThreshold_; }
            double getHessianThreshold() const CV_OVERRIDE { return hessianThreshold; }

            void setNOctaves(int nOctaves_) CV_OVERRIDE { nOctaves = nOctaves_; }
            int getNOctaves() const CV_OVERRIDE { return nOctaves; }

            void setNOctaveLayers(int nOctaveLayers_) CV_OVERRIDE { nOctaveLayers = nOctaveLayers_; }
            int getNOctaveLayers() const CV_OVERRIDE { return nOctaveLayers; }

            void setExtended(bool extended_) CV_OVERRIDE { extended = extended_; }
            bool getExtended() const CV_OVERRIDE { return extended; }

            void setUpright(bool upright_) CV_OVERRIDE { upright = upright_; }
            bool getUpright() const CV_OVERRIDE { return upright; }

            double hessianThreshold;
            int nOctaves;
            int nOctaveLayers;
            bool extended;
            bool upright;
        };

#ifdef HAVE_OPENCL
        class MYSURF_OCL
        {
        public:
            enum KeypointLayout
            {
                X_ROW = 0,
                Y_ROW,
                LAPLACIAN_ROW,
                OCTAVE_ROW,
                SIZE_ROW,
                ANGLE_ROW,
                HESSIAN_ROW,
                ROWS_COUNT
            };

            //! the full constructor taking all the necessary parameters
            MYSURF_OCL();

            bool init(const MYSURF_Impl* params);

            //! returns the descriptor size in float's (64 or 128)
            int descriptorSize() const { return params->extended ? 128 : 64; }

            void uploadKeypoints(const std::vector<KeyPoint>& keypoints, UMat& keypointsGPU);
            void downloadKeypoints(const UMat& keypointsGPU, std::vector<KeyPoint>& keypoints);

            //! finds the keypoints using fast hessian detector used in SURF
            //! supports CV_8UC1 images
            //! keypoints will have nFeature cols and 6 rows
            //! keypoints.ptr<float>(X_ROW)[i] will contain x coordinate of i'th feature
            //! keypoints.ptr<float>(Y_ROW)[i] will contain y coordinate of i'th feature
            //! keypoints.ptr<float>(LAPLACIAN_ROW)[i] will contain laplacian sign of i'th feature
            //! keypoints.ptr<float>(OCTAVE_ROW)[i] will contain octave of i'th feature
            //! keypoints.ptr<float>(SIZE_ROW)[i] will contain size of i'th feature
            //! keypoints.ptr<float>(ANGLE_ROW)[i] will contain orientation of i'th feature
            //! keypoints.ptr<float>(HESSIAN_ROW)[i] will contain response of i'th feature
            bool detect(InputArray img, InputArray mask, UMat& keypoints);
            //! finds the keypoints and computes their descriptors.
            //! Optionally it can compute descriptors for the user-provided keypoints and recompute keypoints direction
            bool detectAndCompute(InputArray img, InputArray mask, UMat& keypoints,
                OutputArray descriptors, bool useProvidedKeypoints = false);

        protected:
            bool setImage(InputArray img, InputArray mask);

            // kernel callers declarations
            bool calcLayerDetAndTrace(int octave, int layer_rows);

            bool findMaximaInLayer(int counterOffset, int octave, int layer_rows, int layer_cols);

            bool interpolateKeypoint(int maxCounter, UMat& keypoints, int octave, int layer_rows, int maxFeatures);

            bool calcOrientation(UMat& keypoints);

            bool setUpRight(UMat& keypoints);

            bool computeDescriptors(const UMat& keypoints, OutputArray descriptors);

            bool detectKeypoints(UMat& keypoints);

            const MYSURF_Impl* params;

            //! max keypoints = min(keypointsRatio * img.size().area(), 65535)
            UMat sum, intBuffer;
            UMat det, trace;
            UMat maxPosBuffer;

            int img_cols, img_rows;

            int maxCandidates;
            int maxFeatures;

            UMat img, counters;

            // texture buffers
            ocl::Image2D imgTex, sumTex;
            bool haveImageSupport;
            String kerOpts;

            int status;
        };
#endif // HAVE_OPENCL

        /*
        template<typename _Tp> void copyVectorToUMat(const std::vector<_Tp>& v, UMat& um)
        {
            if(v.empty())
                um.release();
            else
                Mat(1, (int)(v.size()*sizeof(v[0])), CV_8U, (void*)&v[0]).copyTo(um);
        }

        template<typename _Tp> void copyUMatToVector(const UMat& um, std::vector<_Tp>& v)
        {
            if(um.empty())
                v.clear();
            else
            {
                size_t sz = um.total()*um.elemSize();
                CV_Assert(um.isContinuous() && (sz % sizeof(_Tp) == 0));
                v.resize(sz/sizeof(_Tp));
                Mat m(um.size(), um.type(), &v[0]);
                um.copyTo(m);
            }
        }*/


        /*
        *-------------------- From here on, we simply copy and paste the code from surf.cpp --------------------
        *
        */


#ifdef OPENCV_ENABLE_NONFREE // This macro is presumably picked up from the prebuilt libs; I'll dig into the technical details when I get time.

        static const int   SURF_ORI_SEARCH_INC = 5;
        static const float SURF_ORI_SIGMA = 2.5f;
        static const float SURF_DESC_SIGMA = 3.3f;

        // Wavelet size at first layer of first octave.
        static const int SURF_HAAR_SIZE0 = 9;

        // Wavelet size increment between layers. This should be an even number,
        // such that the wavelet sizes in an octave are either all even or all odd.
        // This ensures that when looking for the neighbours of a sample, the layers
        // above and below are aligned correctly.
        static const int SURF_HAAR_SIZE_INC = 6;


        struct SurfHF
        {
            int p0, p1, p2, p3;
            float w;

            SurfHF() : p0(0), p1(0), p2(0), p3(0), w(0) {}
        };

        inline float calcHaarPattern(const int* origin, const SurfHF* f, int n)
        {
            double d = 0;
            for (int k = 0; k < n; k++)
                d += (origin[f[k].p0] + origin[f[k].p3] - origin[f[k].p1] - origin[f[k].p2]) * f[k].w;
            return (float)d;
        }

        static void
            resizeHaarPattern(const int src[][5], SurfHF* dst, int n, int oldSize, int newSize, int widthStep)
        {
            float ratio = (float)newSize / oldSize;
            for (int k = 0; k < n; k++)
            {
                int dx1 = cvRound(ratio * src[k][0]);
                int dy1 = cvRound(ratio * src[k][1]);
                int dx2 = cvRound(ratio * src[k][2]);
                int dy2 = cvRound(ratio * src[k][3]);
                dst[k].p0 = dy1 * widthStep + dx1;
                dst[k].p1 = dy2 * widthStep + dx1;
                dst[k].p2 = dy1 * widthStep + dx2;
                dst[k].p3 = dy2 * widthStep + dx2;
                dst[k].w = src[k][4] / ((float)(dx2 - dx1) * (dy2 - dy1));
            }
        }

        /*
         * Calculate the determinant and trace of the Hessian for a layer of the
         * scale-space pyramid
         */
        static void calcLayerDetAndTrace(const Mat& sum, int size, int sampleStep,
            Mat& det, Mat& trace)
        {
            const int NX = 3, NY = 3, NXY = 4;
            const int dx_s[NX][5] = { {0, 2, 3, 7, 1}, {3, 2, 6, 7, -2}, {6, 2, 9, 7, 1} };
            const int dy_s[NY][5] = { {2, 0, 7, 3, 1}, {2, 3, 7, 6, -2}, {2, 6, 7, 9, 1} };
            const int dxy_s[NXY][5] = { {1, 1, 4, 4, 1}, {5, 1, 8, 4, -1}, {1, 5, 4, 8, -1}, {5, 5, 8, 8, 1} };

            SurfHF Dx[NX], Dy[NY], Dxy[NXY];

            if (size > sum.rows - 1 || size > sum.cols - 1)
                return;

            resizeHaarPattern(dx_s, Dx, NX, 9, size, sum.cols);
            resizeHaarPattern(dy_s, Dy, NY, 9, size, sum.cols);
            resizeHaarPattern(dxy_s, Dxy, NXY, 9, size, sum.cols);

            /* The integral image 'sum' is one pixel bigger than the source image */
            int samples_i = 1 + (sum.rows - 1 - size) / sampleStep;
            int samples_j = 1 + (sum.cols - 1 - size) / sampleStep;

            /* Ignore pixels where some of the kernel is outside the image */
            int margin = (size / 2) / sampleStep;

            for (int i = 0; i < samples_i; i++)
            {
                const int* sum_ptr = sum.ptr<int>(i * sampleStep);
                float* det_ptr = &det.at<float>(i + margin, margin);
                float* trace_ptr = &trace.at<float>(i + margin, margin);
                for (int j = 0; j < samples_j; j++)
                {
                    float dx = calcHaarPattern(sum_ptr, Dx, 3);
                    float dy = calcHaarPattern(sum_ptr, Dy, 3);
                    float dxy = calcHaarPattern(sum_ptr, Dxy, 4);
                    sum_ptr += sampleStep;
                    det_ptr[j] = dx * dy - 0.81f * dxy * dxy;
                    trace_ptr[j] = dx + dy;
                }
            }
        }


        /*
         * Maxima location interpolation as described in "Invariant Features from
         * Interest Point Groups" by Matthew Brown and David Lowe. This is performed by
         * fitting a 3D quadratic to a set of neighbouring samples.
         *
         * The gradient vector and Hessian matrix at the initial keypoint location are
         * approximated using central differences. The linear system Ax = b is then
         * solved, where A is the Hessian, b is the negative gradient, and x is the
         * offset of the interpolated maxima coordinates from the initial estimate.
         * This is equivalent to an iteration of Newton's optimisation algorithm.
         *
         * N9 contains the samples in the 3x3x3 neighbourhood of the maxima
         * dx is the sampling step in x
         * dy is the sampling step in y
         * ds is the sampling step in size
         * point contains the keypoint coordinates and scale to be modified
         *
         * Return value is 1 if interpolation was successful, 0 on failure.
         */
        static int
            interpolateKeypoint(float N9[3][9], int dx, int dy, int ds, KeyPoint& kpt)
        {
            Vec3f b(-(N9[1][5] - N9[1][3]) / 2,  // Negative 1st deriv with respect to x
                -(N9[1][7] - N9[1][1]) / 2,  // Negative 1st deriv with respect to y
                -(N9[2][4] - N9[0][4]) / 2); // Negative 1st deriv with respect to s

            Matx33f A(
                N9[1][3] - 2 * N9[1][4] + N9[1][5],            // 2nd deriv x, x
                (N9[1][8] - N9[1][6] - N9[1][2] + N9[1][0]) / 4, // 2nd deriv x, y
                (N9[2][5] - N9[2][3] - N9[0][5] + N9[0][3]) / 4, // 2nd deriv x, s
                (N9[1][8] - N9[1][6] - N9[1][2] + N9[1][0]) / 4, // 2nd deriv x, y
                N9[1][1] - 2 * N9[1][4] + N9[1][7],            // 2nd deriv y, y
                (N9[2][7] - N9[2][1] - N9[0][7] + N9[0][1]) / 4, // 2nd deriv y, s
                (N9[2][5] - N9[2][3] - N9[0][5] + N9[0][3]) / 4, // 2nd deriv x, s
                (N9[2][7] - N9[2][1] - N9[0][7] + N9[0][1]) / 4, // 2nd deriv y, s
                N9[0][4] - 2 * N9[1][4] + N9[2][4]);           // 2nd deriv s, s

            Vec3f x = A.solve(b, DECOMP_LU);

            bool ok = (x[0] != 0 || x[1] != 0 || x[2] != 0) &&
                std::abs(x[0]) <= 1 && std::abs(x[1]) <= 1 && std::abs(x[2]) <= 1;

            if (ok)
            {
                kpt.pt.x += x[0] * dx;
                kpt.pt.y += x[1] * dy;
                kpt.size = (float)cvRound(kpt.size + x[2] * ds);
            }
            return ok;
        }

        // Multi-threaded construction of the scale-space pyramid
        struct SURFBuildInvoker : ParallelLoopBody
        {
            SURFBuildInvoker(const Mat& _sum, const std::vector<int>& _sizes,
                const std::vector<int>& _sampleSteps,
                std::vector<Mat>& _dets, std::vector<Mat>& _traces)
            {
                sum = &_sum;
                sizes = &_sizes;
                sampleSteps = &_sampleSteps;
                dets = &_dets;
                traces = &_traces;
            }

            void operator()(const Range& range) const CV_OVERRIDE
            {
                for (int i = range.start; i < range.end; i++)
                    calcLayerDetAndTrace(*sum, (*sizes)[i], (*sampleSteps)[i], (*dets)[i], (*traces)[i]);
            }

            const Mat* sum;
            const std::vector<int>* sizes;
            const std::vector<int>* sampleSteps;
            std::vector<Mat>* dets;
            std::vector<Mat>* traces;
        };

        // Multi-threaded search of the scale-space pyramid for keypoints
        struct SURFFindInvoker : ParallelLoopBody
        {
            SURFFindInvoker(const Mat& _sum, const Mat& _mask_sum,
                const std::vector<Mat>& _dets, const std::vector<Mat>& _traces,
                const std::vector<int>& _sizes, const std::vector<int>& _sampleSteps,
                const std::vector<int>& _middleIndices, std::vector<KeyPoint>& _keypoints,
                int _nOctaveLayers, float _hessianThreshold)
            {
                sum = &_sum;
                mask_sum = &_mask_sum;
                dets = &_dets;
                traces = &_traces;
                sizes = &_sizes;
                sampleSteps = &_sampleSteps;
                middleIndices = &_middleIndices;
                keypoints = &_keypoints;
                nOctaveLayers = _nOctaveLayers;
                hessianThreshold = _hessianThreshold;
            }

            static void findMaximaInLayer(const Mat& sum, const Mat& mask_sum,
                const std::vector<Mat>& dets, const std::vector<Mat>& traces,
                const std::vector<int>& sizes, std::vector<KeyPoint>& keypoints,
                int octave, int layer, float hessianThreshold, int sampleStep);

            void operator()(const Range& range) const CV_OVERRIDE
            {
                for (int i = range.start; i < range.end; i++)
                {
                    int layer = (*middleIndices)[i];
                    int octave = i / nOctaveLayers;
                    findMaximaInLayer(*sum, *mask_sum, *dets, *traces, *sizes,
                        *keypoints, octave, layer, hessianThreshold,
                        (*sampleSteps)[layer]);
                }
            }

            const Mat* sum;
            const Mat* mask_sum;
            const std::vector<Mat>* dets;
            const std::vector<Mat>* traces;
            const std::vector<int>* sizes;
            const std::vector<int>* sampleSteps;
            const std::vector<int>* middleIndices;
            std::vector<KeyPoint>* keypoints;
            int nOctaveLayers;
            float hessianThreshold;

            static Mutex findMaximaInLayer_m;
        };

        Mutex SURFFindInvoker::findMaximaInLayer_m;


        /*
         * Find the maxima in the determinant of the Hessian in a layer of the
         * scale-space pyramid
         */
        void SURFFindInvoker::findMaximaInLayer(const Mat& sum, const Mat& mask_sum,
            const std::vector<Mat>& dets, const std::vector<Mat>& traces,
            const std::vector<int>& sizes, std::vector<KeyPoint>& keypoints,
            int octave, int layer, float hessianThreshold, int sampleStep)
        {
            // Wavelet Data
            const int NM = 1;
            const int dm[NM][5] = { {0, 0, 9, 9, 1} };
            SurfHF Dm;

            int size = sizes[layer];

            // The integral image 'sum' is one pixel bigger than the source image
            int layer_rows = (sum.rows - 1) / sampleStep;
            int layer_cols = (sum.cols - 1) / sampleStep;

            // Ignore pixels without a 3x3x3 neighbourhood in the layer above
            int margin = (sizes[layer + 1] / 2) / sampleStep + 1;

            if (!mask_sum.empty())
                resizeHaarPattern(dm, &Dm, NM, 9, size, mask_sum.cols);

            int step = (int)(dets[layer].step / dets[layer].elemSize());

            for (int i = margin; i < layer_rows - margin; i++)
            {
                const float* det_ptr = dets[layer].ptr<float>(i);
                const float* trace_ptr = traces[layer].ptr<float>(i);
                for (int j = margin; j < layer_cols - margin; j++)
                {
                    float val0 = det_ptr[j];
                    if (val0 > hessianThreshold)
                    {
                        /* Coordinates for the start of the wavelet in the sum image. There
                           is some integer division involved, so don't try to simplify this
                           (cancel out sampleStep) without checking the result is the same */
                        int sum_i = sampleStep * (i - (size / 2) / sampleStep);
                        int sum_j = sampleStep * (j - (size / 2) / sampleStep);

                        /* The 3x3x3 neighbouring samples around the maxima.
                           The maxima is included at N9[1][4] */

                        const float* det1 = &dets[layer - 1].at<float>(i, j);
                        const float* det2 = &dets[layer].at<float>(i, j);
                        const float* det3 = &dets[layer + 1].at<float>(i, j);
                        float N9[3][9] = { { det1[-step - 1], det1[-step], det1[-step + 1],
                                             det1[-1]  , det1[0] , det1[1],
                                             det1[step - 1] , det1[step] , det1[step + 1]  },
                                           { det2[-step - 1], det2[-step], det2[-step + 1],
                                             det2[-1]  , det2[0] , det2[1],
                                             det2[step - 1] , det2[step] , det2[step + 1]  },
                                           { det3[-step - 1], det3[-step], det3[-step + 1],
                                             det3[-1]  , det3[0] , det3[1],
                                             det3[step - 1] , det3[step] , det3[step + 1]  } };

                        /* Check the mask - why not just check the mask at the center of the wavelet? */
                        if (!mask_sum.empty())
                        {
                            const int* mask_ptr = &mask_sum.at<int>(sum_i, sum_j);
                            float mval = calcHaarPattern(mask_ptr, &Dm, 1);
                            if (mval < 0.5)
                                continue;
                        }

                        /* Non-maxima suppression. val0 is at N9[1][4]*/
                        if (val0 > N9[0][0] && val0 > N9[0][1] && val0 > N9[0][2] &&
                            val0 > N9[0][3] && val0 > N9[0][4] && val0 > N9[0][5] &&
                            val0 > N9[0][6] && val0 > N9[0][7] && val0 > N9[0][8] &&
                            val0 > N9[1][0] && val0 > N9[1][1] && val0 > N9[1][2] &&
                            val0 > N9[1][3] && val0 > N9[1][5] &&
                            val0 > N9[1][6] && val0 > N9[1][7] && val0 > N9[1][8] &&
                            val0 > N9[2][0] && val0 > N9[2][1] && val0 > N9[2][2] &&
                            val0 > N9[2][3] && val0 > N9[2][4] && val0 > N9[2][5] &&
                            val0 > N9[2][6] && val0 > N9[2][7] && val0 > N9[2][8])
                        {
                            /* Calculate the wavelet center coordinates for the maxima */
                            float center_i = sum_i + (size - 1) * 0.5f;
                            float center_j = sum_j + (size - 1) * 0.5f;

                            KeyPoint kpt(center_j, center_i, (float)sizes[layer],
                                -1, val0, octave, (trace_ptr[j] > 0) - (trace_ptr[j] < 0));

                            /* Interpolate maxima location within the 3x3x3 neighbourhood  */
                            int ds = size - sizes[layer - 1];
                            int interp_ok = interpolateKeypoint(N9, sampleStep, sampleStep, ds, kpt);

                            /* Sometimes the interpolation step gives a negative size etc. */
                            if (interp_ok)
                            {
                                /*printf( "KeyPoint %f %f %d\n", point.pt.x, point.pt.y, point.size );*/
                                cv::AutoLock lock(findMaximaInLayer_m);
                                keypoints.push_back(kpt);
                            }
                        }
                    }
                }
            }
        }

        struct KeypointGreater
        {
            inline bool operator()(const KeyPoint& kp1, const KeyPoint& kp2) const
            {
                if (kp1.response > kp2.response) return true;
                if (kp1.response < kp2.response) return false;
                if (kp1.size > kp2.size) return true;
                if (kp1.size < kp2.size) return false;
                if (kp1.octave > kp2.octave) return true;
                if (kp1.octave < kp2.octave) return false;
                if (kp1.pt.y < kp2.pt.y) return false;
                if (kp1.pt.y > kp2.pt.y) return true;
                return kp1.pt.x < kp2.pt.x;
            }
        };


        static void fastHessianDetector(const Mat& sum, const Mat& mask_sum, std::vector<KeyPoint>& keypoints,
            int nOctaves, int nOctaveLayers, float hessianThreshold)
        {   
            cout << "开始快速赫塞检测算法(fastHessianDetector):" << endl;
            /* Sampling step along image x and y axes at first octave. This is doubled
               for each additional octave. WARNING: Increasing this improves speed,
               however keypoint extraction becomes unreliable. */
            cout << "开始在x轴和y轴进行采样" << endl;
            const int SAMPLE_STEP0 = 1;

            int nTotalLayers = (nOctaveLayers + 2) * nOctaves;
            int nMiddleLayers = nOctaveLayers * nOctaves;

            std::vector<Mat> dets(nTotalLayers);
            std::vector<Mat> traces(nTotalLayers);
            std::vector<int> sizes(nTotalLayers);
            std::vector<int> sampleSteps(nTotalLayers);
            std::vector<int> middleIndices(nMiddleLayers);

            keypoints.clear();

            // Allocate space and calculate properties of each layer
            cout << "分配空间并计算每一层的属性" << endl;
            int index = 0, middleIndex = 0, step = SAMPLE_STEP0;

            for (int octave = 0; octave < nOctaves; octave++)
            {
                for (int layer = 0; layer < nOctaveLayers + 2; layer++)
                {
                    /* The integral image sum is one pixel bigger than the source image*/
                    dets[index].create((sum.rows - 1) / step, (sum.cols - 1) / step, CV_32F);
                    traces[index].create((sum.rows - 1) / step, (sum.cols - 1) / step, CV_32F);
                    sizes[index] = (SURF_HAAR_SIZE0 + SURF_HAAR_SIZE_INC * layer) << octave;
                    sampleSteps[index] = step;

                    if (0 < layer && layer <= nOctaveLayers)
                        middleIndices[middleIndex++] = index;
                    index++;
                }
                step *= 2;
            }

            // Calculate hessian determinant and trace samples in each layer
            cout << "计算每个层中的hessian行列式和跟踪样本(这边好像就是开始建造多层图像金字塔了?)。" << endl;
            parallel_for_(Range(0, nTotalLayers),
                SURFBuildInvoker(sum, sizes, sampleSteps, dets, traces));

            // Find maxima in the determinant of the hessian
            cout << "求黑森矩阵行列式的极大值。" << endl; 
            parallel_for_(Range(0, nMiddleLayers),
                SURFFindInvoker(sum, mask_sum, dets, traces, sizes,
                    sampleSteps, middleIndices, keypoints,
                    nOctaveLayers, hessianThreshold));

            std::sort(keypoints.begin(), keypoints.end(), KeypointGreater());
        }


        struct SURFInvoker : ParallelLoopBody
        {
            enum { ORI_RADIUS = 6, ORI_WIN = 60, PATCH_SZ = 20 };

            SURFInvoker(const Mat& _img, const Mat& _sum,
                std::vector<KeyPoint>& _keypoints, Mat& _descriptors,
                bool _extended, bool _upright)
            {
                keypoints = &_keypoints;
                descriptors = &_descriptors;
                img = &_img;
                sum = &_sum;
                extended = _extended;
                upright = _upright;

                // Simple bound for number of grid points in circle of radius ORI_RADIUS
                const int nOriSampleBound = (2 * ORI_RADIUS + 1) * (2 * ORI_RADIUS + 1);

                // Allocate arrays
                apt.resize(nOriSampleBound);
                aptw.resize(nOriSampleBound);
                DW.resize(PATCH_SZ * PATCH_SZ);

                /* Coordinates and weights of samples used to calculate orientation */
                
                Mat G_ori = cv::getGaussianKernel(2 * ORI_RADIUS + 1, SURF_ORI_SIGMA, CV_32F);
                nOriSamples = 0;
                for (int i = -ORI_RADIUS; i <= ORI_RADIUS; i++)
                {
                    for (int j = -ORI_RADIUS; j <= ORI_RADIUS; j++)
                    {
                        if (i * i + j * j <= ORI_RADIUS * ORI_RADIUS)
                        {
                            apt[nOriSamples] = Point(i, j);
                            aptw[nOriSamples++] = G_ori.at<float>(i + ORI_RADIUS, 0) * G_ori.at<float>(j + ORI_RADIUS, 0);
                        }
                    }
                }
                CV_Assert(nOriSamples <= nOriSampleBound);

                /* Gaussian used to weight descriptor samples */
                cout << "高斯分布用于加权描述符样本。" << endl;
                Mat G_desc = cv::getGaussianKernel(PATCH_SZ, SURF_DESC_SIGMA, CV_32F);
                for (int i = 0; i < PATCH_SZ; i++)
                {
                    for (int j = 0; j < PATCH_SZ; j++)
                        DW[i * PATCH_SZ + j] = G_desc.at<float>(i, 0) * G_desc.at<float>(j, 0);
                }
            }

            void operator()(const Range& range) const CV_OVERRIDE
            {
                /* X and Y gradient wavelet data */
                const int NX = 2, NY = 2;
                const int dx_s[NX][5] = { {0, 0, 2, 4, -1}, {2, 0, 4, 4, 1} };
                const int dy_s[NY][5] = { {0, 0, 4, 2, 1}, {0, 2, 4, 4, -1} };

                // Optimisation is better using nOriSampleBound than nOriSamples for
                // array lengths.  Maybe because it is a constant known at compile time
                const int nOriSampleBound = (2 * ORI_RADIUS + 1) * (2 * ORI_RADIUS + 1);

                float X[nOriSampleBound], Y[nOriSampleBound], angle[nOriSampleBound];
                uchar PATCH[PATCH_SZ + 1][PATCH_SZ + 1];
                float DX[PATCH_SZ][PATCH_SZ], DY[PATCH_SZ][PATCH_SZ];
                Mat _patch(PATCH_SZ + 1, PATCH_SZ + 1, CV_8U, PATCH);

                int dsize = extended ? 128 : 64;

                int k, k1 = range.start, k2 = range.end;
                float maxSize = 0;
                for (k = k1; k < k2; k++)
                {
                    maxSize = std::max(maxSize, (*keypoints)[k].size);
                }
                int imaxSize = std::max(cvCeil((PATCH_SZ + 1) * maxSize * 1.2f / 9.0f), 1);
                cv::AutoBuffer<uchar> winbuf(imaxSize * imaxSize);

                for (k = k1; k < k2; k++)
                {
                    int i, j, kk, nangle;
                    float* vec;
                    SurfHF dx_t[NX], dy_t[NY];
                    KeyPoint& kp = (*keypoints)[k];
                    float size = kp.size;
                    Point2f center = kp.pt;
                    /* The sampling intervals and wavelet sized for selecting an orientation
                     and building the keypoint descriptor are defined relative to 's' */
                    float s = size * 1.2f / 9.0f;
                    /* To find the dominant orientation, the gradients in x and y are
                     sampled in a circle of radius 6s using wavelets of size 4s.
                     We ensure the gradient wavelet size is even to ensure the
                     wavelet pattern is balanced and symmetric around its center */
                    int grad_wav_size = 2 * cvRound(2 * s);
                    if (sum->rows < grad_wav_size || sum->cols < grad_wav_size)
                    {
                        /* when grad_wav_size is too big,
                         * the sampling of gradient will be meaningless
                         * mark keypoint for deletion. */
                        kp.size = -1;
                        continue;
                    }

                    float descriptor_dir = 360.f - 90.f;
                    if (upright == 0)
                    {
                        resizeHaarPattern(dx_s, dx_t, NX, 4, grad_wav_size, sum->cols);
                        resizeHaarPattern(dy_s, dy_t, NY, 4, grad_wav_size, sum->cols);
                        for (kk = 0, nangle = 0; kk < nOriSamples; kk++)
                        {
                            int x = cvRound(center.x + apt[kk].x * s - (float)(grad_wav_size - 1) / 2);
                            int y = cvRound(center.y + apt[kk].y * s - (float)(grad_wav_size - 1) / 2);
                            if (y < 0 || y >= sum->rows - grad_wav_size ||
                                x < 0 || x >= sum->cols - grad_wav_size)
                                continue;
                            const int* ptr = &sum->at<int>(y, x);
                            float vx = calcHaarPattern(ptr, dx_t, 2);
                            float vy = calcHaarPattern(ptr, dy_t, 2);
                            X[nangle] = vx * aptw[kk];
                            Y[nangle] = vy * aptw[kk];
                            nangle++;
                        }
                        if (nangle == 0)
                        {
                            // No gradient could be sampled because the keypoint is too
                            // near too one or more of the sides of the image. As we
                            // therefore cannot find a dominant direction, we skip this
                            // keypoint and mark it for later deletion from the sequence.
                            kp.size = -1;
                            continue;
                        }

                        phase(Mat(1, nangle, CV_32F, X), Mat(1, nangle, CV_32F, Y), Mat(1, nangle, CV_32F, angle), true);

                        float bestx = 0, besty = 0, descriptor_mod = 0;
                        for (i = 0; i < 360; i += SURF_ORI_SEARCH_INC)
                        {
                            float sumx = 0, sumy = 0, temp_mod;
                            for (j = 0; j < nangle; j++)
                            {
                                int d = std::abs(cvRound(angle[j]) - i);
                                if (d < ORI_WIN / 2 || d > 360 - ORI_WIN / 2)
                                {
                                    sumx += X[j];
                                    sumy += Y[j];
                                }
                            }
                            temp_mod = sumx * sumx + sumy * sumy;
                            if (temp_mod > descriptor_mod)
                            {
                                descriptor_mod = temp_mod;
                                bestx = sumx;
                                besty = sumy;
                            }
                        }
                        descriptor_dir = fastAtan2(-besty, bestx);
                    }
                    kp.angle = descriptor_dir;
                    if (!descriptors || !descriptors->data)
                        continue;

                    /* Extract a window of pixels around the keypoint of size 20s */
                    int win_size = (int)((PATCH_SZ + 1) * s);
                    CV_Assert(imaxSize >= win_size);
                    Mat win(win_size, win_size, CV_8U, winbuf.data());

                    if (!upright)
                    {
                        descriptor_dir *= (float)(CV_PI / 180);
                        float sin_dir = -std::sin(descriptor_dir);
                        float cos_dir = std::cos(descriptor_dir);

                        /* Subpixel interpolation version (slower). Subpixel not required since
                        the pixels will all get averaged when we scale down to 20 pixels */
                        /*
                        float w[] = { cos_dir, sin_dir, center.x,
                        -sin_dir, cos_dir , center.y };
                        CvMat W = cvMat(2, 3, CV_32F, w);
                        cvGetQuadrangleSubPix( img, &win, &W );
                        */

                        float win_offset = -(float)(win_size - 1) / 2;
                        float start_x = center.x + win_offset * cos_dir + win_offset * sin_dir;
                        float start_y = center.y - win_offset * sin_dir + win_offset * cos_dir;
                        uchar* WIN = win.data;
#if 0
                        // Nearest neighbour version (faster)
                        for (i = 0; i < win_size; i++, start_x += sin_dir, start_y += cos_dir)
                        {
                            float pixel_x = start_x;
                            float pixel_y = start_y;
                            for (j = 0; j < win_size; j++, pixel_x += cos_dir, pixel_y -= sin_dir)
                            {
                                int x = std::min(std::max(cvRound(pixel_x), 0), img->cols - 1);
                                int y = std::min(std::max(cvRound(pixel_y), 0), img->rows - 1);
                                WIN[i * win_size + j] = img->at<uchar>(y, x);
                            }
                        }
#else
                        int ncols1 = img->cols - 1, nrows1 = img->rows - 1;
                        size_t imgstep = img->step;
                        for (i = 0; i < win_size; i++, start_x += sin_dir, start_y += cos_dir)
                        {
                            double pixel_x = start_x;
                            double pixel_y = start_y;
                            for (j = 0; j < win_size; j++, pixel_x += cos_dir, pixel_y -= sin_dir)
                            {
                                int ix = cvFloor(pixel_x), iy = cvFloor(pixel_y);
                                if ((unsigned)ix < (unsigned)ncols1 &&
                                    (unsigned)iy < (unsigned)nrows1)
                                {
                                    float a = (float)(pixel_x - ix), b = (float)(pixel_y - iy);
                                    const uchar* imgptr = &img->at<uchar>(iy, ix);
                                    WIN[i * win_size + j] = (uchar)
                                        cvRound(imgptr[0] * (1.f - a) * (1.f - b) +
                                            imgptr[1] * a * (1.f - b) +
                                            imgptr[imgstep] * (1.f - a) * b +
                                            imgptr[imgstep + 1] * a * b);
                                }
                                else
                                {
                                    int x = std::min(std::max(cvRound(pixel_x), 0), ncols1);
                                    int y = std::min(std::max(cvRound(pixel_y), 0), nrows1);
                                    WIN[i * win_size + j] = img->at<uchar>(y, x);
                                }
                            }
                        }
#endif
                    }
                    else
                    {
                        // extract rect - slightly optimized version of the code above
                        // TODO: find faster code, as this is simply an extract rect operation,
                        //       e.g. by using cvGetSubRect, problem is the border processing
                        // descriptor_dir == 90 grad
                        // sin_dir == 1
                        // cos_dir == 0

                        float win_offset = -(float)(win_size - 1) / 2;
                        int start_x = cvRound(center.x + win_offset);
                        int start_y = cvRound(center.y - win_offset);
                        uchar* WIN = win.data;
                        for (i = 0; i < win_size; i++, start_x++)
                        {
                            int pixel_x = start_x;
                            int pixel_y = start_y;
                            for (j = 0; j < win_size; j++, pixel_y--)
                            {
                                int x = MAX(pixel_x, 0);
                                int y = MAX(pixel_y, 0);
                                x = MIN(x, img->cols - 1);
                                y = MIN(y, img->rows - 1);
                                WIN[i * win_size + j] = img->at<uchar>(y, x);
                            }
                        }
                    }
                    // Scale the window to size PATCH_SZ so each pixel's size is s. This
                    // makes calculating the gradients with wavelets of size 2s easy
                    resize(win, _patch, _patch.size(), 0, 0, INTER_AREA);

                    // Calculate gradients in x and y with wavelets of size 2s
                    for (i = 0; i < PATCH_SZ; i++)
                        for (j = 0; j < PATCH_SZ; j++)
                        {
                            float dw = DW[i * PATCH_SZ + j];
                            float vx = (PATCH[i][j + 1] - PATCH[i][j] + PATCH[i + 1][j + 1] - PATCH[i + 1][j]) * dw;
                            float vy = (PATCH[i + 1][j] - PATCH[i][j] + PATCH[i + 1][j + 1] - PATCH[i][j + 1]) * dw;
                            DX[i][j] = vx;
                            DY[i][j] = vy;
                        }

                    // Construct the descriptor
                    cout << "开始构建描述子。" << endl;
                    vec = descriptors->ptr<float>(k);
                    for (kk = 0; kk < dsize; kk++)
                        vec[kk] = 0;
                    double square_mag = 0;
                    if (extended)
                    {
                        // 128-bin descriptor
                        for (i = 0; i < 4; i++)
                            for (j = 0; j < 4; j++)
                            {
                                for (int y = i * 5; y < i * 5 + 5; y++)
                                {
                                    for (int x = j * 5; x < j * 5 + 5; x++)
                                    {
                                        float tx = DX[y][x], ty = DY[y][x];
                                        if (ty >= 0)
                                        {
                                            vec[0] += tx;
                                            vec[1] += (float)fabs(tx);
                                        }
                                        else {
                                            vec[2] += tx;
                                            vec[3] += (float)fabs(tx);
                                        }
                                        if (tx >= 0)
                                        {
                                            vec[4] += ty;
                                            vec[5] += (float)fabs(ty);
                                        }
                                        else {
                                            vec[6] += ty;
                                            vec[7] += (float)fabs(ty);
                                        }
                                    }
                                }
                                for (kk = 0; kk < 8; kk++)
                                    square_mag += vec[kk] * vec[kk];
                                vec += 8;
                            }
                    }
                    else
                    {
                        // 64-bin descriptor
                        for (i = 0; i < 4; i++)
                            for (j = 0; j < 4; j++)
                            {
                                for (int y = i * 5; y < i * 5 + 5; y++)
                                {
                                    for (int x = j * 5; x < j * 5 + 5; x++)
                                    {
                                        float tx = DX[y][x], ty = DY[y][x];
                                        vec[0] += tx; vec[1] += ty;
                                        vec[2] += (float)fabs(tx); vec[3] += (float)fabs(ty);
                                    }
                                }
                                for (kk = 0; kk < 4; kk++)
                                    square_mag += vec[kk] * vec[kk];
                                vec += 4;
                            }
                    }

                    // unit vector is essential for contrast invariance
                    vec = descriptors->ptr<float>(k);
                    float scale = (float)(1. / (std::sqrt(square_mag) + FLT_EPSILON));
                    for (kk = 0; kk < dsize; kk++)
                        vec[kk] *= scale;
                }
            }

            // Parameters
            const Mat* img;
            const Mat* sum;
            std::vector<KeyPoint>* keypoints;
            Mat* descriptors;
            bool extended;
            bool upright;

            // Pre-calculated values
            int nOriSamples;
            std::vector<Point> apt;
            std::vector<float> aptw;
            std::vector<float> DW;
        };


        MYSURF_Impl::MYSURF_Impl(double _threshold, int _nOctaves, int _nOctaveLayers, bool _extended, bool _upright)
        {
            hessianThreshold = _threshold;
            extended = _extended;
            upright = _upright;
            nOctaves = _nOctaves;
            nOctaveLayers = _nOctaveLayers;
        }

        int MYSURF_Impl::descriptorSize() const { return extended ? 128 : 64; }
        int MYSURF_Impl::descriptorType() const { return CV_32F; }
        int MYSURF_Impl::defaultNorm() const { return NORM_L2; }


        void MYSURF_Impl::detectAndCompute(InputArray _img, InputArray _mask,
            CV_OUT std::vector<KeyPoint>& keypoints,
            OutputArray _descriptors,
            bool useProvidedKeypoints)
        {
            int imgtype = _img.type(), imgcn = CV_MAT_CN(imgtype);
            bool doDescriptors = _descriptors.needed();

            CV_Assert(!_img.empty() && CV_MAT_DEPTH(imgtype) == CV_8U && (imgcn == 1 || imgcn == 3 || imgcn == 4));
            CV_Assert(_descriptors.needed() || !useProvidedKeypoints);

#ifdef HAVE_OPENCL
            if (ocl::useOpenCL() && _img.isUMat())
            {
                MYSURF_OCL ocl_surf;
                UMat gpu_kpt;
                bool ok = ocl_surf.init(this);

                if (ok)
                {
                    if (!_descriptors.needed())
                    {
                        ok = ocl_surf.detect(_img, _mask, gpu_kpt);
                    }
                    else
                    {
                        if (useProvidedKeypoints)
                            ocl_surf.uploadKeypoints(keypoints, gpu_kpt);
                        ok = ocl_surf.detectAndCompute(_img, _mask, gpu_kpt, _descriptors, useProvidedKeypoints);
                    }
                }
                if (ok)
                {
                    if (!useProvidedKeypoints)
                        ocl_surf.downloadKeypoints(gpu_kpt, keypoints);
                    return;
                }
            }
#endif // HAVE_OPENCL

            Mat img = _img.getMat(), mask = _mask.getMat(), mask1, sum, msum;

            if (imgcn > 1)
                cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);

            CV_Assert(mask.empty() || (mask.type() == CV_8U && mask.size() == img.size()));
            CV_Assert(hessianThreshold >= 0);
            CV_Assert(nOctaves > 0);
            CV_Assert(nOctaveLayers > 0);

            integral(img, sum, CV_32S);

            // Compute keypoints only if we are not asked for evaluating the descriptors at some given locations:
            if (!useProvidedKeypoints)
            {
                if (!mask.empty())
                {
                    cv::min(mask, 1, mask1);
                    integral(mask1, msum, CV_32S);
                }
                fastHessianDetector(sum, msum, keypoints, nOctaves, nOctaveLayers, (float)hessianThreshold);
                if (!mask.empty())
                {
                    for (size_t i = 0; i < keypoints.size(); )
                    {
                        Point pt(keypoints[i].pt);
                        if (mask.at<uchar>(pt.y, pt.x) == 0)
                        {
                            keypoints.erase(keypoints.begin() + i);
                            continue; // keep "i"
                        }
                        i++;
                    }
                }
            }

            int i, j, N = (int)keypoints.size();
            if (N > 0)
            {
                Mat descriptors;
                bool _1d = false;
                int dcols = extended ? 128 : 64;
                size_t dsize = dcols * sizeof(float);

                if (doDescriptors)
                {
                    _1d = _descriptors.kind() == _InputArray::STD_VECTOR && _descriptors.type() == CV_32F;
                    if (_1d)
                    {
                        _descriptors.create(N * dcols, 1, CV_32F);
                        descriptors = _descriptors.getMat().reshape(1, N);
                    }
                    else
                    {
                        _descriptors.create(N, dcols, CV_32F);
                        descriptors = _descriptors.getMat();
                    }
                }

                // we call SURFInvoker in any case, even if we do not need descriptors,
                // since it computes orientation of each feature.
                parallel_for_(Range(0, N), SURFInvoker(img, sum, keypoints, descriptors, extended, upright));

                // remove keypoints that were marked for deletion
                for (i = j = 0; i < N; i++)
                {
                    if (keypoints[i].size > 0)
                    {
                        if (i > j)
                        {
                            keypoints[j] = keypoints[i];
                            if (doDescriptors)
                                memcpy(descriptors.ptr(j), descriptors.ptr(i), dsize);
                        }
                        j++;
                    }
                }
                if (N > j)
                {
                    N = j;
                    keypoints.resize(N);
                    if (doDescriptors)
                    {
                        Mat d = descriptors.rowRange(0, N);
                        if (_1d)
                            d = d.reshape(1, N * dcols);
                        d.copyTo(_descriptors);
                    }
                }
            }
        }

        Ptr<MYSURF> MYSURF::create(double _threshold, int _nOctaves, int _nOctaveLayers, bool _extended, bool _upright)
        {
            return makePtr<MYSURF_Impl>(_threshold, _nOctaves, _nOctaveLayers, _extended, _upright);
        }


#else // ! #ifdef OPENCV_ENABLE_NONFREE
        Ptr<MYSURF> MYSURF::create(double, int, int, bool, bool)
        {
            CV_Error(Error::StsNotImplemented,
                "This algorithm is patented and is excluded in this configuration; "
                "Set OPENCV_ENABLE_NONFREE CMake option and rebuild the library");
        }
#endif


    }
}

#endif

myNonfree.hpp:

/*M///
//
//  IMPORTANT: READ BEFORE DOWNLOADING, COPYING, INSTALLING OR USING.
//
//  By downloading, copying, installing or using the software you agree to this license.
//  If you do not agree to this license, do not download, install,
//  copy or use the software.
//
//
//                           License Agreement
//                For Open Source Computer Vision Library
//
// Copyright (C) 2000-2008, Intel Corporation, all rights reserved.
// Copyright (C) 2009, Willow Garage Inc., all rights reserved.
// Third party copyrights are property of their respective owners.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
//   * Redistribution's of source code must retain the above copyright notice,
//     this list of conditions and the following disclaimer.
//
//   * Redistribution's in binary form must reproduce the above copyright notice,
//     this list of conditions and the following disclaimer in the documentation
//     and/or other materials provided with the distribution.
//
//   * The name of the copyright holders may not be used to endorse or promote products
//     derived from this software without specific prior written permission.
//
// This software is provided by the copyright holders and contributors "as is" and
// any express or implied warranties, including, but not limited to, the implied
// warranties of merchantability and fitness for a particular purpose are disclaimed.
// In no event shall the Intel Corporation or contributors be liable for any direct,
// indirect, incidental, special, exemplary, or consequential damages
// (including, but not limited to, procurement of substitute goods or services;
// loss of use, data, or profits; or business interruption) however caused
// and on any theory of liability, whether in contract, strict liability,
// or tort (including negligence or otherwise) arising in any way out of
// the use of this software, even if advised of the possibility of such damage.
//
//M*/

// Every custom HPP file you add needs its own include-guard macro here, or the guards will clash.
#ifndef __OPENCV_XFEATURES2D_MY_FEATURES_2D_HPP__
#define __OPENCV_XFEATURES2D_MY_FEATURES_2D_HPP__

#include "opencv2/features2d.hpp"

namespace cv
{
    namespace xfeatures2d
    {

        /** @brief Class for extracting Speeded Up Robust Features from an image @cite Bay06 .

        The algorithm parameters:
        -   member int extended
            -   0 means that the basic descriptors (64 elements each) shall be computed
            -   1 means that the extended descriptors (128 elements each) shall be computed
        -   member int upright
            -   0 means that detector computes orientation of each feature.
            -   1 means that the orientation is not computed (which is much, much faster). For example,
        if you match images from a stereo pair, or do image stitching, the matched features
        likely have very similar angles, and you can speed up feature extraction by setting
        upright=1.
        -   member double hessianThreshold
        Threshold for the keypoint detector. Only features whose hessian is larger than
        hessianThreshold are retained by the detector. Therefore, the larger the value, the fewer
        keypoints you will get. A good default value could be from 300 to 500, depending on the
        image contrast.
        -   member int nOctaves
        The number of gaussian pyramid octaves that the detector uses. It is set to 4 by default.
        If you want to detect very large features, use a larger value. If you want only small
        features, decrease it.
        -   member int nOctaveLayers
        The number of images within each octave of a gaussian pyramid. It is set to 3 by default
        (see the create() defaults below).
        @note
           -   An example using the SURF feature detector can be found at
                opencv_source_code/samples/cpp/generic_descriptor_match.cpp
            -   Another example using the SURF feature detector, extractor and matcher can be found at
                opencv_source_code/samples/cpp/matcher_simple.cpp
         */
        class CV_EXPORTS_W MYSURF : public Feature2D
        {
        public:
            /**
            @param hessianThreshold Threshold for hessian keypoint detector used in SURF.
            @param nOctaves Number of pyramid octaves the keypoint detector will use.
            @param nOctaveLayers Number of octave layers within each octave.
            @param extended Extended descriptor flag (true - use extended 128-element descriptors; false - use
            64-element descriptors).
            @param upright Up-right or rotated features flag (true - do not compute orientation of features;
            false - compute orientation).
             */
            CV_WRAP static Ptr<MYSURF> create(double hessianThreshold = 100,
                int nOctaves = 4, int nOctaveLayers = 3,
                bool extended = false, bool upright = false);

            CV_WRAP virtual void setHessianThreshold(double hessianThreshold) = 0;
            CV_WRAP virtual double getHessianThreshold() const = 0;

            CV_WRAP virtual void setNOctaves(int nOctaves) = 0;
            CV_WRAP virtual int getNOctaves() const = 0;

            CV_WRAP virtual void setNOctaveLayers(int nOctaveLayers) = 0;
            CV_WRAP virtual int getNOctaveLayers() const = 0;

            CV_WRAP virtual void setExtended(bool extended) = 0;
            CV_WRAP virtual bool getExtended() const = 0;

            CV_WRAP virtual void setUpright(bool upright) = 0;
            CV_WRAP virtual bool getUpright() const = 0;
        };

        typedef MYSURF MySurfFeatureDetector;
        typedef MYSURF MySurfDescriptorExtractor;



    }
} /* namespace cv */

#endif

In addition, OpenCVTest.cpp must include our custom hpp files, and every "SURF" in it must be changed to "MYSURF" accordingly. The modified OpenCVTest.cpp is as follows:

OpenCVTest.cpp:

// OpenCVTest.cpp : This file contains the "main" function. Program execution begins and ends here.
//

#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/xfeatures2d/nonfree.hpp"
// 这里添加我们自定义的myNonfree.hpp文件
#include "myNonfree.hpp"
#include "mysurf.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc.hpp"
#include <opencv2/imgproc/types_c.h>
#include<opencv2/imgproc/imgproc.hpp>
#include <iostream>
using namespace cv;
using namespace std;
using namespace cv::xfeatures2d;

int main()
{
	//【0】Change the console font color
	system("color 1F");


	//【1】Load the source images
	Mat srcImage1 = imread("E:/images/3.jpg", 1);
	Mat srcImage2 = imread("E:/images/4.jpg", 1);
	Mat copysrcImage1 = srcImage1.clone();
	Mat copysrcImage2 = srcImage2.clone();

	if (!srcImage1.data || !srcImage2.data)
	{
		printf("读取图片错误,请确定目录下是否有imread函数指定的图片存在~! \n"); return false;
	}

	//【2】Detect keypoints with the SURF operator
	int minHessian = 100; // hessian threshold used by the SURF algorithm

	//Ptr<SURF> detector = SURF::create(minHessian); // the original SurfFeatureDetector (SURF) object
	//Ptr<SURF> detector = cv::xfeatures2d::SURF::create(400);
	Ptr<MYSURF> detector = MYSURF::create(minHessian); // use our custom MYSURF class instead

	vector<KeyPoint> keypoints_object, keypoints_scene; // dynamic arrays of keypoints

	//【3】Call detect to find the SURF keypoints and store them in the vectors
	detector->detect(srcImage1, keypoints_object);
	detector->detect(srcImage2, keypoints_scene);

	//【4】Compute the descriptors (feature vectors)

	//Ptr<SURF> extractor = SURF::create();
	Ptr<MYSURF> extractor = MYSURF::create(); // use our custom MYSURF class

	Mat descriptors_object, descriptors_scene;
	extractor->compute(srcImage1, keypoints_object, descriptors_object);
	extractor->compute(srcImage2, keypoints_scene, descriptors_scene);

	//【5】Match the descriptors with the FLANN matcher
	FlannBasedMatcher matcher;
	vector< DMatch > matches;
	matcher.match(descriptors_object, descriptors_scene, matches);
	double max_dist = 0; double min_dist = 100; // max and min match distances

	//【6】Compute the max and min distances between matched keypoints
	for (int i = 0; i < descriptors_object.rows; i++)
	{
		double dist = matches[i].distance;
		if (dist < min_dist) min_dist = dist;
		if (dist > max_dist) max_dist = dist;
	}

	printf(">Max dist 最大距离 : %f \n", max_dist);
	printf(">Min dist 最小距离 : %f \n", min_dist);

	//【7】Keep only the matches whose distance is less than 3*min_dist
	std::vector< DMatch > good_matches;
	for (int i = 0; i < descriptors_object.rows; i++)
	{
		if (matches[i].distance < 3 * min_dist)
		{
			good_matches.push_back(matches[i]);
		}
	}

	// Draw the matched keypoints
	Mat img_matches;
	drawMatches(srcImage1, keypoints_object, srcImage2, keypoints_scene,
		good_matches, img_matches, Scalar::all(-1), Scalar::all(-1),
		vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

	// Two local containers for the matched point coordinates
	vector<Point2f> obj;
	vector<Point2f> scene;

	// Collect the point coordinates from the good matches
	for (unsigned int i = 0; i < good_matches.size(); i++)
	{
		obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
		scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
	}
	vector<unsigned char> listpoints;

	//Mat H = findHomography( obj, scene, CV_RANSAC ); // compute the perspective transform (old C API constant)
	Mat H = findHomography(obj, scene, RANSAC, 3, listpoints); // compute the perspective transform; the RANSAC inlier mask goes into listpoints


	std::vector< DMatch > goodgood_matches;
	for (int i = 0; i < (int)listpoints.size(); i++)
	{
		if ((int)listpoints[i])
		{
			goodgood_matches.push_back(good_matches[i]);
			cout << (int)listpoints[i] << endl;
		}
	}
	cout << "listpoints大小:" << listpoints.size() << endl;
	cout << "goodgood_matches大小:" << goodgood_matches.size() << endl;
	cout << "good_matches大小:" << good_matches.size() << endl;
	Mat Homgimg_matches;
	drawMatches(copysrcImage1, keypoints_object, copysrcImage2, keypoints_scene,
		goodgood_matches, Homgimg_matches, Scalar::all(-1), Scalar::all(-1),
		vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);

	imshow("去除误匹配点后;", Homgimg_matches);


	// Get the corner points of the query image
	vector<Point2f> obj_corners(4);
	obj_corners[0] = Point2f(0, 0); obj_corners[1] = Point2f((float)srcImage1.cols, 0);
	obj_corners[2] = Point2f((float)srcImage1.cols, (float)srcImage1.rows); obj_corners[3] = Point2f(0, (float)srcImage1.rows);
	vector<Point2f> scene_corners(4);

	// Apply the perspective transform
	perspectiveTransform(obj_corners, scene_corners, H);

	// Draw lines between the transformed corners
	line(img_matches, scene_corners[0] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[1] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
	line(img_matches, scene_corners[1] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[2] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
	line(img_matches, scene_corners[2] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[3] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);
	line(img_matches, scene_corners[3] + Point2f(static_cast<float>(srcImage1.cols), 0), scene_corners[0] + Point2f(static_cast<float>(srcImage1.cols), 0), Scalar(255, 0, 123), 4);

	// Show the final result
	imshow("效果图", img_matches);

	waitKey(0);
	return 0;
}


The project's file layout looks like this:

[Screenshot: project file layout]
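From the quoted #include lines above we know the two custom headers live next to the test file; the layout is roughly (the directory name is just this tutorial's project name):

OpenCVTest/
    OpenCVTest.cpp      -- the test program shown above
    myNonfree.hpp       -- the copied MYSURF declaration
    mysurf.hpp          -- the copied implementation, where the couts and breakpoints go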
Pitfall notes

Update the include-guard macros

Remember to change the include-guard macro at the top of every hpp file you copy; if you skip this step the guards collide and the compiler reports conflicts.

In short, make the changes shown below.

[Screenshots: the renamed include guards in the copied hpp files]
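A minimal sketch of the change (the mysurf.hpp macro name below is an example of my own; any name unique to your project works):

// myNonfree.hpp: rename the guard copied from the original header
// so the two files can coexist
#ifndef __OPENCV_XFEATURES2D_MY_FEATURES_2D_HPP__
#define __OPENCV_XFEATURES2D_MY_FEATURES_2D_HPP__
// ... declarations ...
#endif

// mysurf.hpp: likewise gets its own unique guard (example name)
#ifndef __OPENCV_XFEATURES2D_MY_SURF_HPP__
#define __OPENCV_XFEATURES2D_MY_SURF_HPP__
// ... implementation ...
#endif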

Error 1:

[Screenshot: compiler error message]

The fix: the modified hpp files had not been copied into the project directory (they had been put somewhere else). The likely reason this matters: quoted #include directives are searched relative to the including file's directory first, so the headers must sit next to OpenCVTest.cpp.

Error 2:

[Screenshot: compiler error message]

Don't panic: in mysurf.hpp, copy in all the #include lines from OpenCVTest.cpp. (Strictly speaking only the subset that mysurf.hpp actually depends on is needed, but copying them all is the fastest way to make the error go away.)
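After the fix, the top of mysurf.hpp looks roughly like this (the exact subset actually required may be smaller):

// mysurf.hpp: headers the copied SURF implementation depends on,
// taken wholesale from OpenCVTest.cpp
#include "opencv2/core/core.hpp"
#include "opencv2/features2d/features2d.hpp"
#include "opencv2/imgproc.hpp"
#include "opencv2/xfeatures2d.hpp"
#include "myNonfree.hpp" // the MYSURF declaration implemented below
#include <iostream>      // for the couts we add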

After that, the project runs.

Final run results

In the console you can see that the cout lines we added by hand to mysurf.hpp print successfully!

[Screenshot: console output showing the added cout lines]
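For reference, a print like the one below is all it takes; the placement and wording here are my own example, not the exact line from the screenshot. Inside MYSURF_Impl::detectAndCompute in mysurf.hpp:

            // right after the integral image is built
            integral(img, sum, CV_32S);
            std::cout << "integral image computed: " << sum.cols << " x " << sum.rows << std::endl;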

Success!

Admittedly the "开始构建描述符" ("start building descriptors") message prints rather too often, and because this step is accelerated with multiple threads the line breaks come out interleaved and irregular, but it proves we have successfully modified genuinely low-level source code.

[Screenshot: interleaved console output from the parallel descriptor step]
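If the interleaving bothers you, one way to serialize the prints coming from the worker threads is a shared mutex. This is a sketch of my own, not something in OpenCV's code:

// at the top of mysurf.hpp
#include <mutex>
static std::mutex printMutex;

// inside SURFInvoker::operator(), around each cout
{
    std::lock_guard<std::mutex> lock(printMutex);
    std::cout << "开始构建描述符" << std::endl;
}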

Compared with the other OpenCV source-modification methods on CSDN, our approach can be debugged and stepped through. See the screenshot below: after setting a breakpoint, execution stops right inside our mysurf.hpp file:

[Screenshot: debugger stopped at a breakpoint inside mysurf.hpp]

If you later want to optimize things for research or work, just edit the hpp file (though in an industrial setting, do follow your company's conventions, haha).

Supplement: adding new logic

A major advantage of this approach: to change the logic you edit the hpp file directly, with no need to rebuild the OpenCV library. Because the implementation now lives in a header compiled into our own project, the change takes effect as soon as you run again.

Here is a demonstration. We add one cout line to mysurf.hpp (a line the code did not have before), at the following location:

[Screenshot: the newly added cout line in mysurf.hpp]
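For example, a line like the following could go right after fastHessianDetector returns in detectAndCompute (hypothetical wording; the line in the screenshot may differ):

                // report how many keypoints the fast-Hessian detector found
                std::cout << "fastHessianDetector found " << keypoints.size() << " keypoints" << std::endl;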

Then we simply click Run:

[Screenshot: running the project in Visual Studio]

Console output:

[Screenshot: console output including the newly added line]

As you can see, the change takes effect directly. This is because the program compiles the hpp file itself, rather than calling into the prebuilt lib/dll binaries.

Summary

To wrap up, here are the key points of modifying OpenCV source code this way:

0. First be clear about which part of the source you want to modify, so your search through the code is targeted;

1. Locate the corresponding hpp file (you can find this from the built code);

2. Locate the corresponding cpp file (this has to be dug out of the source code downloaded earlier);

3. Copy the relevant function implementations from the cpp file into the hpp file, so that a single hpp file carries both the declarations and the definitions. While copying, mind the namespaces, and rename the classes and functions (SURF becomes MYSURF) to avoid conflicts; see the sketch after this list;

4. Fix the #include lines in the hpp file (some of its dependencies are otherwise missing);

5. Run the test to confirm the modification took effect.
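As referenced in step 3, here is a minimal skeleton of the merged header under the naming used in this article. It works because the header is included from a single cpp file; including it from several would break the one-definition rule for the out-of-class definitions:

// mysurf.hpp: declaration and definition merged into one header
#ifndef __OPENCV_XFEATURES2D_MY_SURF_HPP__   // renamed include guard (example name)
#define __OPENCV_XFEATURES2D_MY_SURF_HPP__

#include "opencv2/features2d.hpp"
#include "myNonfree.hpp" // the MYSURF declaration
#include <iostream>

namespace cv {
namespace xfeatures2d {

    // implementation copied from surf.cpp, with SURF_Impl renamed to MYSURF_Impl;
    // this is where the couts and breakpoints go
    class MYSURF_Impl : public MYSURF
    {
        // ... members and methods copied from surf.cpp ...
    };

    // factory copied from surf.cpp, with SURF renamed to MYSURF, e.g.:
    // Ptr<MYSURF> MYSURF::create(...) { return makePtr<MYSURF_Impl>(...); }

}} // namespace cv::xfeatures2d

#endif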

Before this article there were already some source-modification write-ups on CSDN, but they all edit the downloaded source and then rebuild it with cmake. That approach does modify the source in the end, but during development you cannot debug it in Visual Studio; every new requirement or bug means editing the source again and recompiling again, which is enormously costly in time and effort.

The method in this article supports modifying, debugging, and stepping through OpenCV source code, and should be a useful reference for researchers and engineers working on image projects.

(The next time a grad student tells their advisor "the OpenCV source cannot be modified, I cannot go on TAT", the advisor can slap this article straight in their face, haha.)

