Numerical Analysis

Interpolation

Nearest-Neighbor Interpolation

\[\mathbf{I}_\mathrm{dst}(i, j) = \mathbf{I}_\mathrm{src}(u^{\prime}, v^{\prime}) = \mathbf{I}_\mathrm{src}(u, v)\]

with

\[(i, j) \Rightarrow (u^{\prime}, v^{\prime}) \Rightarrow \begin{cases} u = \text{round}(u^{\prime}) \\ v = \text{round}(v^{\prime}) \end{cases}\]
case INTERPOLATION_NEAREST: {
    // Round the fractional source coordinates to the nearest integer pixel.
    const int lx = static_cast<int>(std::round(x));
    const int ly = static_cast<int>(std::round(y));
    result = (*this)(ly, lx);  // (row, col) indexing
}

Bilinear Interpolation

\[\mathbf{I}_\mathrm{dst}(i, j) = \mathbf{I}_\mathrm{src}(u^{\prime}, v^{\prime}) = \mathbf{I}_\mathrm{src}(u+\alpha, v+\beta)\]

with

\[(i, j) \Rightarrow (u^{\prime}, v^{\prime}) \Rightarrow \begin{cases} u = \text{floor}(u^{\prime}) = \lfloor u^{\prime} \rfloor \\ v = \text{floor}(v^{\prime}) = \lfloor v^{\prime} \rfloor \end{cases} \Rightarrow \begin{cases} \alpha = u^{\prime} - u \\ \beta = v^{\prime} - v \end{cases}\]

\[\begin{aligned} f_{00} &= f(u, v) \\ f_{01} &= f(u, v + 1) \\ f_{10} &= f(u + 1, v) \\ f_{11} &= f(u + 1, v + 1) \end{aligned}\]

(1) Linear interpolation along the v direction

\[\begin{cases} f(u, v+\beta) = (f_{01} - f_{00}) \beta + f_{00} \\ f(u+1, v+\beta) = (f_{11} - f_{10}) \beta + f_{10} \end{cases}\]

(2) Linear interpolation along the u direction

\[f(u+\alpha, v+\beta) = [f(u+1, v+\beta) - f(u, v+\beta)]\alpha + f(u, v+\beta)\]

(3) Final result

\[f(u+\alpha, v+\beta) = (1-\alpha)(1-\beta) f_{00} + (1-\alpha) \beta f_{01} + \alpha (1-\beta) f_{10} + \alpha \beta f_{11}\]
case INTERPOLATION_BILINEAR: {
    // Top-left integer corner of the cell containing (x, y).
    const int lx = static_cast<int>(std::floor(x));
    const int ly = static_cast<int>(std::floor(y));

    // The four neighboring pixels: f00 = f(u, v), f01 = f(u, v+1), etc.
    const _T f00 = (*this)(ly, lx);
    const _T f01 = (*this)(ly + 1, lx);
    const _T f10 = (*this)(ly, lx + 1);
    const _T f11 = (*this)(ly + 1, lx + 1);

    // Fractional offsets within the cell.
    const double alpha = x - lx;
    const double beta  = y - ly;

    // Weighted average of the four neighbors.
    result =
      (1 - beta) * ((1 - alpha) * f00 + alpha * f10) +
      beta * ((1 - alpha) * f01 + alpha * f11);
}

Camera Calibration and Rectification

[TOC]

Overview


What Calibration Does

Camera calibration is the process of estimating intrinsic and/or extrinsic parameters.

  • Intrinsic parameters deal with the camera’s internal characteristics, such as its focal length, skew, distortion, and image center.
  • Extrinsic parameters describe the camera’s position and orientation in the world.

When To Calibrate

Your camera might be out of calibration if you observe the following symptoms:

  • Reduced depth density on objects in the operating range (you might still get some depth).
  • Flat surfaces look “wobbly”, i.e. there is more deviation from flatness than usual.
  • Measured physical distances to objects are not within the expected range.

How To Calibrate

Calibration Placement

  • from boofcv

  • from ros

Calibration Best Practices

  • Choose the right size calibration target
  • Perform calibration at the approximate working distance (WD) of your final application
  • Use good lighting
    • Normal lighting conditions range from general office lighting (around 200 lux) to brighter lighting with an additional floor lamp (around 1000 lux)
    • The lighting on the chart should be generally uniform, without hotspots
    • Use diffuse lighting; a spotlight will make the calibration target much more difficult to detect
  • Collect images from different areas and tilts
  • Have enough observations (at least 6 observations)
  • Consider using CharuCo boards
  • Calibration is only as accurate as the calibration target used
  • Proper mounting of calibration target and camera
    • The target needs to be mounted on a flat surface. Any warping will decrease calibration accuracy. An ideal surface is rigid and smooth.
  • Remove bad observations

OpenCV

Basically, you need to take snapshots of these patterns with your camera and let OpenCV find them.
Each found pattern results in a new equation.
To solve the equations you need at least a predetermined number of pattern snapshots to form a well-posed equation system.
This number is higher for the chessboard pattern and lower for the circle ones.
For example, in theory the chessboard pattern requires at least two snapshots.
However, in practice there is a good amount of noise present in the input images, so for good results you will probably need at least 10 good snapshots of the pattern in different positions.
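
A minimal sketch of this workflow in OpenCV C++ might look like the following (assuming grayscale input images and a 9×6 inner-corner chessboard; the calibrate wrapper and its arguments are illustrative, not an OpenCV API):

#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Hypothetical helper: collect corners from each view, then run cv::calibrateCamera.
cv::Mat calibrate(const std::vector<cv::Mat>& images, cv::Size board = cv::Size(9, 6)) {
    std::vector<std::vector<cv::Point3f>> object_points;
    std::vector<std::vector<cv::Point2f>> image_points;

    // Ideal chessboard corners on the z = 0 plane (unit square size).
    std::vector<cv::Point3f> grid;
    for (int r = 0; r < board.height; ++r)
        for (int c = 0; c < board.width; ++c)
            grid.emplace_back(static_cast<float>(c), static_cast<float>(r), 0.f);

    for (const cv::Mat& img : images) {
        std::vector<cv::Point2f> corners;
        if (!cv::findChessboardCorners(img, board, corners)) continue;  // skip bad views
        cv::cornerSubPix(img, corners, cv::Size(11, 11), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
        object_points.push_back(grid);
        image_points.push_back(corners);
    }

    cv::Mat K, dist;
    std::vector<cv::Mat> rvecs, tvecs;
    // Returns the overall RMS reprojection error in pixels.
    double rms = cv::calibrateCamera(object_points, image_points, images[0].size(),
                                     K, dist, rvecs, tvecs);
    (void)rms;
    return K;  // intrinsic matrix; dist holds the distortion coefficients
}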

Calibration Patterns

Pattern size

As a rule of thumb, the calibration target should cover at least half of the available pixel area when observed frontally.

Pattern type

Checkerboard or Chessboard

  • Their corners are simple to detect and “mostly” invariant to lens distortion
  • It’s hard to get right next to the image border, but you can get close

Square Grids

  • Allow you to get right up next to the image border
  • It’s more complex for a library developer to write a good high-precision, unbiased corner detector

Circle Grids

  • Circle Hexagonal Grid
    • works well for regular camera lenses but is typically less accurate than chessboard or square grids because their features can’t be measured directly
    • Tangent points are invariant under perspective distortion
    • Sometimes a library will use the center point, but this is ill advised because it’s not invariant under perspective distortion
    • Errors introduced by lens distortion are less significant when the circles are small inside the image, but under heavy lens distortion these are a poor choice
  • Circle Regular Grid
    • have essentially the same pros/cons as circle hexagonal but don’t have the same circle density

ChArUco: Chessboard + ArUco

ArUco vs Chessboard

  • ArUco markers and boards
    • fast detection and their versatility
    • the accuracy of their corner positions is not too high, even after applying subpixel refinement
  • Chessboard patterns
    • the corners of chessboard patterns can be refined more accurately since each corner is surrounded by two black squares
    • finding a chessboard pattern is not as versatile as finding an ArUco board

Calibration with ChArUco Boards and ArUco Boards

As stated, calibration can be done using either marker corners or ChArUco corners. However, it is highly recommended to use the ChArUco-corner approach, since the corners it provides are much more accurate than marker corners. Calibration with a standard ArUco board should only be employed in scenarios where a ChArUco board cannot be used due to some restriction.

AprilTag Grid

Calibration Quality Check

Quick Check

  • Check if straight edges are straight.
  • Point the camera at a flat surface such as a wall about 1 to 2 meters away (3 to 6 feet) and avoid black surfaces. Visually inspect the depth image of the wall. A lot of black dots or holes in the image is an indication that the camera is out of calibration.
  • For stereo images you can check whether rectification is correct by clicking on an easily recognizable feature and seeing if it lies on the same y-coordinate in the other image.

Accuracy Check

This procedure should be used to check the accuracy of the camera.

  • Reprojection error statistic (see the sketch after this list)
    • Qualitatively speaking, a good calibration yields about ±1 px reprojection error
    • Calibrate the camera with the ATAN model and make sure you have a very low reprojection error (~0.1 px) (from SVO)
    • For a well-made target and a decent camera, the reprojection error is typically around 0.1 pixels
    • Typically, an epipolar error below 0.25 pixel is considered acceptable, and below 0.1 excellent (from ROS StereoCalibration)
  • Expect accuracy within 2% at 2 meters
    • Place the camera parallel to a flat wall and exactly two meters (2000 mm) away. Once the camera is in position, use the Intel® RealSense™ Viewer or the Depth Quality Tool to measure the absolute distance. For a flat surface at a distance of 2 meters, the measured distance should be accurate to within 2% or better (i.e., within 40 mm of 2000 mm). If it is not within this range, the camera needs to be calibrated.
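
A short sketch of how such a reprojection-error statistic can be computed from cv::calibrateCamera outputs (the function name and arguments are illustrative):

#include <opencv2/calib3d.hpp>
#include <cmath>
#include <vector>

// RMS reprojection error over all views, in pixels.
double rms_reprojection_error(
    const std::vector<std::vector<cv::Point3f>>& object_points,
    const std::vector<std::vector<cv::Point2f>>& image_points,
    const std::vector<cv::Mat>& rvecs, const std::vector<cv::Mat>& tvecs,
    const cv::Mat& K, const cv::Mat& dist) {
  double sum_sq = 0.0;
  size_t n = 0;
  for (size_t i = 0; i < object_points.size(); ++i) {
    std::vector<cv::Point2f> projected;
    cv::projectPoints(object_points[i], rvecs[i], tvecs[i], K, dist, projected);
    for (size_t j = 0; j < projected.size(); ++j) {
      const cv::Point2f d = projected[j] - image_points[i][j];
      sum_sq += d.x * d.x + d.y * d.y;
      ++n;
    }
  }
  return std::sqrt(sum_sq / n);  // ~0.1 px indicates a very good calibration
}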

Rectification

Stereo Rectification

Stereo rectification is the process of distorting two images such that both their epipoles are at infinity, typically along the x-axis. When this happens the epipolar lines are all parallel to each other simplifying the problem of finding feature correspondences to searching along the image axis. Many stereo algorithms require images to be rectified first.
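
In OpenCV this maps onto cv::stereoRectify plus cv::initUndistortRectifyMap/cv::remap. A sketch, assuming the intrinsics (K1, D1, K2, D2) and the extrinsic pair (R, T) come from a prior cv::stereoCalibrate run:

#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

void rectify_pair(const cv::Mat& left, const cv::Mat& right,
                  const cv::Mat& K1, const cv::Mat& D1,
                  const cv::Mat& K2, const cv::Mat& D2,
                  const cv::Mat& R, const cv::Mat& T,
                  cv::Mat& left_rect, cv::Mat& right_rect) {
  cv::Mat R1, R2, P1, P2, Q;
  cv::stereoRectify(K1, D1, K2, D2, left.size(), R, T, R1, R2, P1, P2, Q);

  cv::Mat map1x, map1y, map2x, map2y;
  cv::initUndistortRectifyMap(K1, D1, R1, P1, left.size(), CV_32FC1, map1x, map1y);
  cv::initUndistortRectifyMap(K2, D2, R2, P2, right.size(), CV_32FC1, map2x, map2y);

  // After remapping, corresponding features lie on the same image row,
  // so stereo matching reduces to a 1-D search along x.
  cv::remap(left, left_rect, map1x, map1y, cv::INTER_LINEAR);
  cv::remap(right, right_rect, map2x, map2y, cv::INTER_LINEAR);
}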

Calibration & Rectification Utils

OpenCV

camera_calibration (ROS Wiki)

  • http://wiki.ros.org/camera_calibration

Supported camera model: pinhole camera model, which is standard in OpenCV and ROS

Matlab

Caltech’s Camera Calibration Toolbox for Matlab (by Jean-Yves Bouguet)

  • http://www.vision.caltech.edu/bouguetj/calib_doc/

Omnidirectional Calibration Toolbox (by Christopher Mei)

  • The toolbox has been successfully used to calibrate hyperbolic, parabolic, folded mirror, spherical and wide-angle sensors.
  • It is a combination of the unified projection model of Geyer and Barreto and a radial distortion function. This model makes it possible to take into account the distortion introduced by telecentric lenses (for parabolic mirrors) and gives greater flexibility (spherical mirrors can be calibrated).

OCamCalib: Omnidirectional Camera Calibration Toolbox for Matlab (by Davide Scaramuzza)

  • https://sites.google.com/site/scarabotix/ocamcalib-toolbox
  • Omnidirectional Camera Calibration Toolbox for Matlab (for Windows, MacOS & Linux)
  • For catadioptric and fisheye cameras up to 195 degrees

Improved OcamCalib

Camera Calibration Toolbox for Generic Lenses (by Juho Kannala)

  • http://www.ee.oulu.fi/~jkannala/calibration/calibration.html

This is a camera calibration toolbox for Matlab which can be used for calibrating several different kinds of central cameras. A central camera is a camera which has a single effective viewpoint. The toolbox has been successfully used for both conventional and omnidirectional cameras such as fish-eye lens cameras and catadioptric cameras.

Calibr

Stereo Camera Calibrator App

  • https://www.mathworks.com/help/vision/ug/stereo-camera-calibrator-app.html

SWARD Camera Calibration Toolbox

  • http://swardtoolbox.github.io/

Matlab code for Super-Wide-Angle-lens Radial Distortion correction just using a single image of a checkerboard

CamOdoCal

  • https://github.com/hengli/camodocal

Automatic Intrinsic and Extrinsic Calibration of a Rig with Multiple Generic Cameras and Odometry.

This C++ library supports the following tasks:

  • Intrinsic calibration of a generic camera.
  • Extrinsic self-calibration of a multi-camera rig for which odometry data is provided.
  • Extrinsic infrastructure-based calibration of a multi-camera rig for which a map generated from task 2 is provided.

The intrinsic calibration process computes the parameters for one of the following three camera models:

  • Pinhole camera model
  • Unified projection model (C. Mei, and P. Rives, Single View Point Omnidirectional Camera Calibration from Planar Grids, ICRA 2007)
  • Equidistant fish-eye model (J. Kannala, and S. Brandt, A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses, PAMI 2006)

By default, the unified projection model is used since this model approximates a wide range of cameras from normal cameras to catadioptric cameras. Note that in our equidistant fish-eye model, we use 8 parameters: k2, k3, k4, k5, mu, mv, u0, v0. k1 is set to 1.
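
As a hedged sketch of how these eight parameters enter the Kannala-Brandt equidistant projection (my reading of the model; the struct and function below are illustrative, not CamOdoCal’s API):

#include <cmath>

struct EquidistantParams { double k2, k3, k4, k5, mu, mv, u0, v0; };  // k1 fixed to 1

// Projects a 3D point (X, Y, Z) in the camera frame to pixel (u, v):
// r(theta) = theta + k2*theta^3 + k3*theta^5 + k4*theta^7 + k5*theta^9.
void project_equidistant(const EquidistantParams& p,
                         double X, double Y, double Z,
                         double& u, double& v) {
  const double theta = std::atan2(std::sqrt(X * X + Y * Y), Z);  // angle from optical axis
  const double psi = std::atan2(Y, X);                           // azimuth
  const double t2 = theta * theta;
  const double r = theta * (1.0 + t2 * (p.k2 + t2 * (p.k3 + t2 * (p.k4 + t2 * p.k5))));
  u = p.mu * r * std::cos(psi) + p.u0;
  v = p.mv * r * std::sin(psi) + p.v0;
}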

GML C++ Camera Calibration Toolbox

The GML Camera Calibration Toolbox is a free, functionally complete tool for camera calibration. You can easily compute intrinsic and extrinsic camera parameters after calibrating.

EasyCal Toolbox

  • The EasyCal Toolbox can be used to calibrate a large cluster of cameras easily, eliminating the need to click tediously through multiple images.

Image Rectification Utils


Robot Calibration

Spatial

Hand-Eye Calibration

IMU-Cam Calibration

Kalibr

Kalibr is a toolbox that solves the following calibration problems:

  • Multiple camera calibration
  • Camera-IMU calibration
  • Rolling Shutter Camera calibration

My Blog: Kalibr Camera-IMU Calibration (Summary)

TUM Related:

InerVis Toolbox for Matlab

  • http://home.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html

IMU CAM calibration, Inertial Measurement Unit and Camera Calibration Toolbox

vicalib

Lidar-Cam Calibration

Temporal

Time Synchronization

  • chrony: is a versatile implementation of the Network Time Protocol (NTP). It can synchronise the system clock with NTP servers, reference clocks (e.g. GPS receiver), and manual input using wristwatch and keyboard. It can also operate as an NTPv4 (RFC 5905) server and peer to provide a time service to other computers in the network.
  • ROS Clock
    rosparam set use_sim_time true  # (or set in launch file if you use one)
    rosbag play <your bag> --clock
    
  • TICSync is an extremely efficient algorithm for learning the mapping between distributed clocks, which typically achieves better than millisecond accuracy within just a few seconds (a simplified sketch of such a clock mapping appears after this list).
  • ethz-asl/cuckoo_time_translator
  • leggedrobotics/hardware_time_sync: Guidelines on how to hardware synchronize the time of multiple sensors, e.g., IMU, cameras, etc.
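
TICSync itself uses a robust convex-hull algorithm; purely as a simplified illustration of learning a linear clock mapping device_time ≈ skew · host_time + offset, a least-squares fit over timestamp pairs might look like:

#include <utility>
#include <vector>

struct ClockMap { double skew, offset; };

// Fit device_time = skew * host_time + offset by ordinary least squares.
ClockMap fit_clock_map(const std::vector<std::pair<double, double>>& samples) {
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  const double n = static_cast<double>(samples.size());
  for (const auto& s : samples) {  // s = (host_time, device_time)
    sx += s.first;
    sy += s.second;
    sxx += s.first * s.first;
    sxy += s.first * s.second;
  }
  const double skew = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const double offset = (sy - skew * sx) / n;
  return {skew, offset};
}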

Robot Autonomous Localization and Navigation: Overview

Overview

The four core capabilities of autonomous robot movement:

  • Environment mapping
  • Autonomous navigation (indoor localization, intelligent obstacle avoidance, path planning, etc.)
  • Intelligent following
  • Autonomous return-to-dock charging

SLAM & Autonomous Navigation

Broadly speaking, autonomous navigation comprises two parts: local navigation and global navigation.

  • Local navigation acquires the current environment in real time through sensors such as vision, radar, and ultrasound, extracts features from the fused data, and, after processing by intelligent algorithms, determines the currently traversable region and tracks multiple targets.
  • Global navigation mainly uses the global positioning data provided by GPS for global path planning, realizing route navigation across the full electronic map.

SLAM ≠ autonomous localization and navigation; SLAM by itself does not solve the motion problem. After SLAM is completed, the robot additionally needs a goal-point navigation capability: plan a path from point A to point B, then drive the robot along it.

As its name suggests, SLAM mainly solves the robot’s map construction and real-time localization problems, whereas autonomous navigation must let an intelligent mobile robot interact with its environment autonomously, in particular move autonomously from point to point, which requires more supporting technology.

Robot autonomous localization and navigation = SLAM + path planning + motion control

Localization

The robot must first know where it is in the map before it can plan any subsequent path.

Mapping

Map construction is likewise a prerequisite for autonomous navigation. The map both helps the robot localize itself in real time with its onboard sensors, and serves path planning when the robot later navigates.

Path Planning

Motion planning is a very broad concept: the motion of a robotic arm, the flight of an aircraft, the sweeping of a vacuum robot, and the movement of a mobile robot all fall within its scope. Motion planning mainly divides into global planning and local planning.

  • Global planning: the topmost planning logic. Using the robot’s pre-recorded environment map together with its current pose and the position of the task goal, it finds the quickest path to the goal on the map.

  • Local planning: when the environment changes, or the path produced by the upper layer is impractical for the robot to follow (for example, the robot cannot execute a turn of the required radius along the planned path), local path planning makes fine adjustments. Unlike global planning, the local planner may not know where the robot is ultimately headed, but it is particularly good at getting the robot around the obstacles directly in front of it.

Motion planning also covers two situations: static maps and dynamic maps.

  • The A* (A-Star) algorithm is the most effective direct search method for finding the shortest path in a static road network, and an effective algorithm for many other search problems. The closer the algorithm’s distance estimate is to the true value, the faster the final search. A* can also be used for dynamic path planning, except that the route must be re-planned whenever the environment changes. (A minimal grid-based A* sketch follows this list.)

  • The D* algorithm, by contrast, is a dynamic heuristic path-search algorithm: it requires no prior knowledge of the environment, letting the robot move freely in unfamiliar surroundings and cope with rapidly changing ones. D*’s greatest advantage is that the map need not be explored in advance; like a human, the robot can act even in an unknown environment, and the path is adjusted continuously as the robot explores.
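
A minimal grid-based A* sketch (4-connected grid, Manhattan heuristic; all names are illustrative):

#include <cmath>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int x, y; double f; };  // f = g + h (cost so far + heuristic)
struct Cmp { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

// grid: 0 = free, 1 = obstacle. Returns path cost, or -1 if the goal is unreachable.
double astar(const std::vector<std::vector<int>>& grid,
             int sx, int sy, int gx, int gy) {
  const int H = static_cast<int>(grid.size());
  const int W = static_cast<int>(grid[0].size());
  std::vector<std::vector<double>> g(H, std::vector<double>(W, 1e18));
  auto h = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };

  std::priority_queue<Node, std::vector<Node>, Cmp> open;
  g[sy][sx] = 0.0;
  open.push({sx, sy, static_cast<double>(h(sx, sy))});

  const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
  while (!open.empty()) {
    const Node cur = open.top(); open.pop();
    if (cur.x == gx && cur.y == gy) return g[cur.y][cur.x];
    for (int k = 0; k < 4; ++k) {
      const int nx = cur.x + dx[k], ny = cur.y + dy[k];
      if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
      const double ng = g[cur.y][cur.x] + 1.0;
      if (ng < g[ny][nx]) {  // found a cheaper path to (nx, ny)
        g[ny][nx] = ng;
        open.push({nx, ny, ng + h(nx, ny)});
      }
    }
  }
  return -1.0;  // open set exhausted
}

The closer the heuristic h is to the true remaining cost, the fewer nodes A* expands, which is exactly the speed relationship noted in the A* bullet above.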

Space Coverage

A robot vacuum needs functionality different from that of other robots on the market: for example, for back-and-forth, boustrophedon-style sweeping, how does it clean effectively without covering the same spot twice? How can it understand concepts such as rooms, doors, and corridors the way a person does?

For these questions academia has long had a dedicated research topic called space coverage, and many algorithms and theories have been proposed. Among the best known are Morse decompositions, which a vacuum robot uses to partition the space and then sweep it region by region.

So what the vacuum must realize is not the fastest route from A to B: to clean the home as thoroughly as possible, it should cover as much as possible of the whole region between A and B. That is what implements the vacuum’s actual “sweeping” function.

Motion Control


IMU Data Simulation: Derivation and Implementation

[TOC]

IMU Measurement Model (Discrete Time)

The IMU measurement model is:

\[\begin{aligned} {\omega}_m &= \omega + b_{gd} + n_{gd} \\ a_m &= a + b_{ad} + n_{ad} \end{aligned}\]

where the discrete-time biases evolve as random walks:

\[\begin{aligned} b_{gd}[k] &= b_{gd}[k-1] + \sigma_{bgd} \cdot w[k] \\ &= b_{gd}[k-1] + \sigma_{bg} \sqrt{\Delta t} \cdot w[k] \\ b_{ad}[k] &= b_{ad}[k-1] + \sigma_{bad} \cdot w[k] \\ &= b_{ad}[k-1] + \sigma_{ba} \sqrt{\Delta t} \cdot w[k] \end{aligned}\]

The discrete-time white noise is:

\[\begin{aligned} n_{gd} &= \sigma_{gd} \cdot w[k] \\ &= \sigma_{g} \frac{1}{\sqrt{\Delta t}} \cdot w[k] \\ n_{ad} &= \sigma_{ad} \cdot w[k] \\ &= \sigma_{a} \frac{1}{\sqrt{\Delta t}} \cdot w[k] \end{aligned}\]

where $w[k] \sim \mathcal{N}(0,1)$ and $\Delta t$ is the sampling interval.

Implementation

Based on Dr. He Yijia’s code (HeYijia/vio_data_simulation):

std::random_device rd;
std::default_random_engine generator_(rd());
std::normal_distribution<double> noise(0.0, 1.0);

// Additive white noise, scaled by 1/sqrt(dt): n_gd = sigma_g * w[k] / sqrt(dt)
Eigen::Vector3d noise_gyro(noise(generator_), noise(generator_), noise(generator_));
Eigen::Matrix3d gyro_sqrt_cov = param_.gyro_noise_sigma * Eigen::Matrix3d::Identity();
data.imu_gyro = data.imu_gyro + gyro_sqrt_cov * noise_gyro / sqrt(param_.imu_timestep) + gyro_bias_;

Eigen::Vector3d noise_acc(noise(generator_), noise(generator_), noise(generator_));
Eigen::Matrix3d acc_sqrt_cov = param_.acc_noise_sigma * Eigen::Matrix3d::Identity();
data.imu_acc = data.imu_acc + acc_sqrt_cov * noise_acc / sqrt(param_.imu_timestep) + acc_bias_;

// gyro_bias random walk, scaled by sqrt(dt): b_gd[k] = b_gd[k-1] + sigma_bg * sqrt(dt) * w[k]
Eigen::Vector3d noise_gyro_bias(noise(generator_), noise(generator_), noise(generator_));
gyro_bias_ += param_.gyro_bias_sigma * sqrt(param_.imu_timestep) * noise_gyro_bias;
data.imu_gyro_bias = gyro_bias_;

// acc_bias random walk, scaled by sqrt(dt): b_ad[k] = b_ad[k-1] + sigma_ba * sqrt(dt) * w[k]
Eigen::Vector3d noise_acc_bias(noise(generator_), noise(generator_), noise(generator_));
acc_bias_ += param_.acc_bias_sigma * sqrt(param_.imu_timestep) * noise_acc_bias;
data.imu_acc_bias = acc_bias_;
