OpenCV rotation (Rodrigues) and translation vectors for positioning a 3D object in Unity3D

Time: 2022-09-10 18:39:59

I'm using the "OpenCV for Unity3d" asset (the same OpenCV package as the Java one, but translated to C# for Unity3d) to create an Augmented Reality application for my MSc thesis (Computer Science).

So far, I'm able to detect an object in video frames using the ORB feature detector, and I can also find the 3D-to-2D relation using OpenCV's solvePnP method (I did the camera calibration as well). From that method I'm getting the translation and rotation vectors. The problem occurs at the augmentation stage, where I have to show a 3D model as a virtual object and update its position and rotation at each frame. OpenCV returns a Rodrigues rotation matrix, but Unity3d works with quaternion rotations, so I'm updating the object's position and rotation incorrectly, and I can't figure out how to implement the conversion formula (from Rodrigues to quaternion).

Getting the rvec and tvec:

    Mat rvec = new Mat();
    Mat tvec = new Mat();
    Mat rotationMatrix = new Mat ();

    // Estimate the pose from the 3D-2D point correspondences
    Calib3d.solvePnP (object_world_corners, scene_flat_corners, CalibrationMatrix, DistortionCoefficientsMatrix, rvec, tvec);
    // Convert the 3x1 Rodrigues rotation vector to a 3x3 rotation matrix
    Calib3d.Rodrigues (rvec, rotationMatrix);

Updating the position of the virtual object:

    Vector3 objPosition = new Vector3 (); 
    objPosition.x = (model.transform.position.x + (float)tvec.get (0, 0)[0]);
    objPosition.y = (model.transform.position.y + (float)tvec.get (1, 0)[0]);
    objPosition.z = (model.transform.position.z - (float)tvec.get (2, 0)[0]);
    model.transform.position = objPosition;

I have a minus sign for the Z axis because when you convert from OpenCV's coordinate system to Unity3d's you must invert the Z axis (I checked the coordinate systems myself).

Unity3d's Coordinate System (Green is Y, Red is X and Blue is Z):

[image: Unity3d's coordinate system]

OpenCV's Coordinate System:

[image: OpenCV's coordinate system]

In addition, I did the same thing for the rotation matrix, and I updated the virtual object's rotation.

P.S. I found a similar question, but the person who asked it did not post the solution clearly.

Thanks!

1 solution

#1

You have your 3x3 rotation matrix right after cv::solvePnP (once you convert the rvec with cv::Rodrigues). That matrix, since it is a rotation, is both orthogonal and normalized. Thus, the columns of that matrix are, in order from left to right:

  1. Right vector (on the X axis);
  2. Up vector (on the Y axis);
  3. Forward vector (on the Z axis).

OpenCV uses a right-handed coordinate system. Sitting on the camera and looking along the optical axis, the X axis goes right, the Y axis goes down, and the Z axis goes forward.

You pass the forward vector F = (fx, fy, fz) and the up vector U = (ux, uy, uz) to Unity. These are the third and second columns respectively. No need to normalize; they are already normalized.
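
For example, with the rotationMatrix produced by Calib3d.Rodrigues in the question's code, the two columns could be read out like this (a minimal sketch, assuming a 3x3 double-precision Mat and using OpenCV for Unity's Mat.get, the same accessor the question uses on tvec):

// third column = forward vector, second column = up vector
Vector3 f = new Vector3 (
    (float)rotationMatrix.get (0, 2)[0],
    (float)rotationMatrix.get (1, 2)[0],
    (float)rotationMatrix.get (2, 2)[0]);
Vector3 u = new Vector3 (
    (float)rotationMatrix.get (0, 1)[0],
    (float)rotationMatrix.get (1, 1)[0],
    (float)rotationMatrix.get (2, 1)[0]);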

In Unity, you build your quaternion like this:

Vector3 f; // from OpenCV
Vector3 u; // from OpenCV

// notice that the Y coordinates are inverted here, to pass from OpenCV's right-handed coordinate system to Unity's left-handed one
Quaternion rot = Quaternion.LookRotation(new Vector3(f.x, -f.y, f.z), new Vector3(u.x, -u.y, u.z));

And that is pretty much it. Hope this helps!

EDITED FOR POSITION-RELATED COMMENTS

NOTE: The Z axis in OpenCV is the camera's optical axis, which passes through the image near the center but, in general, not exactly at the center. Among your calibration parameters there are the Cx and Cy parameters. Combined, these give the 2D offset in image space from the image center to where the Z axis passes through the image. That shift must be taken into account to map 3D content exactly over the 2D background.
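
(If your calibration stores the principal point the way OpenCV's camera matrix does, i.e. as absolute pixel coordinates cx and cy measured from the image origin, then the shift from the image center used below would be cx - width/2 and cy - height/2.)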

To get proper positioning in Unity:

// STEP 1 : fetch position from OpenCV + basic transformation
Vector3 pos; // from OpenCV
pos = new Vector3(pos.x, -pos.y, pos.z); // right-handed coordinate system (OpenCV) to left-handed one (Unity)

// STEP 2 : set virtual camera's frustum (Unity) to match physical camera's parameters
Vector2 fparams; // from OpenCV (calibration parameters Fx and Fy = focal lengths in pixels)
Vector2 resolution; // image resolution from OpenCV
float vfov = 2.0f * Mathf.Atan(0.5f * resolution.y / fparams.y) * Mathf.Rad2Deg; // virtual camera (pinhole type) vertical field of view
Camera cam; // TODO get reference one way or another
cam.fieldOfView = vfov;
cam.aspect = resolution.x / resolution.y; // you could set a viewport rect with proper aspect as well... I would prefer the viewport approach

// STEP 3 : shift position to compensate for physical camera's optical axis not going exactly through image center
Vector2 cparams; // from OpenCV (calibration parameters Cx and Cy = optical center shifts from image center in pixels)
Vector3 imageCenter = new Vector3(0.5f, 0.5f, pos.z); // in viewport coordinates
Vector3 opticalCenter = new Vector3(0.5f + cparams.x / resolution.x, 0.5f + cparams.y / resolution.y, pos.z); // in viewport coordinates
pos += cam.ViewportToWorldPoint(imageCenter) - cam.ViewportToWorldPoint(opticalCenter); // position is set as if physical camera's optical axis went exactly through image center
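
Putting the two parts together, the converted pose could then be applied to the virtual object once per frame, for example (a sketch; model, rot and pos are the objects from the snippets above):

// apply the converted rotation and position to the virtual object
model.transform.position = pos;
model.transform.rotation = rot;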

Put the images retrieved from the physical camera right in front of the virtual camera, centered on its forward axis (scaled to fit the frustum), and you have proper 3D positions mapped over the 2D background!
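
One way to set up that background is to place a textured quad at some distance in front of the virtual camera and scale it to fill the frustum exactly. A minimal sketch, assuming quad is a Unity Quad primitive (1x1 units) showing the camera image, and cam and vfov are the ones from STEP 2 above:

float d = 10.0f; // arbitrary distance in front of the camera (put it beyond your 3D content)
float h = 2.0f * d * Mathf.Tan(0.5f * vfov * Mathf.Deg2Rad); // frustum height at distance d
quad.transform.position = cam.transform.position + cam.transform.forward * d;
quad.transform.rotation = cam.transform.rotation; // quad's front face points back at the camera
quad.transform.localScale = new Vector3(h * cam.aspect, h, 1.0f); // fill the frustum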
