Perception (0-1.1)

The perception modules run in the context of the Cognition process. They detect features in the image that was just taken by the camera. The modules can be separated into four categories: the modules of the perception infrastructure provide representations that deal with the perspective of the image taken, provide the image in different formats, and provide representations that limit the image area that is of interest for further processing steps. Based on these representations, the remaining modules detect features that are useful for self-localization, detect the ball, and detect obstacles. All information provided by the perception modules is relative to the robot's position.

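To make the robot-relative convention concrete, here is a minimal C++ sketch of what such a representation could look like. The struct and its members are illustrative assumptions, not B-Human's actual class layout:

    #include <cmath>

    // Hypothetical percept representation (illustrative, not B-Human's API).
    // All coordinates are relative to the robot: x points forward, y points
    // left, both in millimeters.
    struct BallPercept
    {
      bool wasSeen = false;  // whether the ball was detected in the current image
      float relativeX = 0.f; // forward offset of the ball from the robot [mm]
      float relativeY = 0.f; // leftward offset of the ball from the robot [mm]

      // Distance from the robot to the ball, derived from the relative offsets.
      float distance() const { return std::hypot(relativeX, relativeY); }

      // Bearing to the ball in radians (0 = straight ahead, positive = left).
      float angle() const { return std::atan2(relativeY, relativeX); }
    };

Because every percept is stored this way, downstream modules such as self-localization can combine them with the robot's pose estimate without caring which camera or image-processing step produced them.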

4.1 Perception Infrastructure

4.1.1 Using Both Cameras

The NAO robot is equipped with two video cameras that are mounted in the head of the robot. The first camera is installed in the middle of the forehead and the second one approx. 4 cm below. The lower camera is tilted by 39.7° with respect to the upper camera, and both cameras have a vertical opening angle of 47.64°. Because of that, the overlapping parts of the images (only 47.64° − 39.7° ≈ 8° vertically) are too small for stereo vision. It is also impossible to get images from both cameras at exactly the same time, as they are not synchronized on a hardware level. This is why we analyze only one image at a time and do not stitch the images together. To be able to analyze the images from both the upper and the lower camera in real time without losing any images, the Cognition process runs at 60 Hz. Since the NAO is currently not able to provide images from both cameras at their maximum resolution, we use a smaller resolution for the lower camera. During normal play, the lower camera sees only a very small portion of the field, which is directly in front of the robot's feet. Therefore, objects in the lower image are close to the robot and rather big. We take advantage of this fact and run the lower camera at half the resolution of the upper camera, thereby saving a lot of computation time. Both cameras deliver their images in the YUV422 format. The upper camera provides 640 × 480 pixels, while the lower camera only provides 320 × 240 pixels. As the perception of features in the images relies either on color classes (e.g. for region building) or on the luminance values of the image pixels (e.g. for computing edges in the image), the YUV422 images are converted to the "extracted and color-classified" ECImage. The ECImage consists of two images: the gray-scaled image obtained from the Y channel of the camera image and a so-called "colored" image mapping each image pixel to a color class. Cognition modules processing an image need to know from which camera it comes. For this reason, we implemented the representation CameraInfo, which contains this information as well as the resolution of the current image.
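
The conversion described above can be sketched as follows. This is a simplified illustration, assuming a YUV422 buffer laid out as Y0 U Y1 V (two pixels per four bytes) and a hypothetical precomputed lookup table that classifies by chroma only; the actual implementation differs in detail:

    #include <cstdint>
    #include <vector>

    enum class ColorClass : uint8_t { none, white, green, black };

    // The ECImage as described in the text: a gray-scaled image (the Y channel)
    // plus a "colored" image assigning one color class to each pixel.
    struct ECImage
    {
      std::vector<uint8_t> grayscaled;
      std::vector<ColorClass> colored;
    };

    // Sketch of the YUV422 -> ECImage conversion. uvTable is a hypothetical
    // classifier indexed by the quantized U and V values of a pixel pair.
    ECImage convertYUV422(const uint8_t* yuv, int width, int height,
                          const ColorClass uvTable[64][64])
    {
      ECImage out;
      out.grayscaled.resize(width * height);
      out.colored.resize(width * height);
      for(int i = 0; i < width * height; i += 2)
      {
        const uint8_t y0 = yuv[2 * i];     // luminance of the first pixel
        const uint8_t u  = yuv[2 * i + 1]; // shared chroma of the pixel pair
        const uint8_t y1 = yuv[2 * i + 2]; // luminance of the second pixel
        const uint8_t v  = yuv[2 * i + 3];
        out.grayscaled[i]     = y0;        // the gray-scaled image is simply the Y channel
        out.grayscaled[i + 1] = y1;
        const ColorClass c = uvTable[u >> 2][v >> 2]; // classify by chroma (simplification)
        out.colored[i]     = c;            // both pixels of the pair share one U/V sample
        out.colored[i + 1] = c;
      }
      return out;
    }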

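A minimal version of the CameraInfo representation mentioned above might look like the following; only the name and purpose of the representation come from the text, the member names are assumptions:

    // Sketch of the CameraInfo representation: which camera produced the
    // current image and at which resolution (member names are illustrative).
    struct CameraInfo
    {
      enum Camera { upper, lower } camera = upper;
      int width = 640;  // 640 x 480 for the upper camera,
      int height = 480; // 320 x 240 for the lower camera
    };

    // Example: a module can branch on the source camera, e.g. to scale
    // pixel-size thresholds for the half-resolution lower image.
    inline float resolutionScale(const CameraInfo& info)
    {
      return info.camera == CameraInfo::upper ? 1.f : 0.5f;
    }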
