Measurement method and verification of performance parameters of indirect time-of-flight depth camera

LIU Jiaxin, GUAN Xin, CAO Zhongxiang, HUA Baocheng, LI Lin

Citation: LIU Jiaxin, GUAN Xin, CAO Zhongxiang, et al. Measurement method and verification of performance parameters of indirect time-of-flight depth camera[J]. Journal of Applied Optics, 2025, 46(2): 395-403. DOI: 10.5768/JAO202546.0203005


Funds: National Natural Science Foundation of China (52275083); Beijing Nova Program

    Author biography: LIU Jiaxin (1999—), female, master's degree candidate, mainly engaged in research on visual navigation sensors. E-mail: liu_jia_xin@foxmail.com

  • CLC number: TN215; TP391

Measurement method and verification of performance parameters of indirect time-of-flight depth camera

  • Abstract:

    Indirect time-of-flight (iTOF) cameras have broad application prospects in three-dimensional environmental perception. According to the imaging principle of an iTOF camera, its imaging quality is closely related to the exposure time: when the exposure time is so long that the camera operates in its nonlinear region, additional bias is introduced into the resolved depth information, degrading measurement accuracy. To further improve the application accuracy of time-of-flight cameras, a method for measuring their performance parameters is proposed based on the optical imaging mechanism of the time-of-flight camera. The global system gain and other performance parameters of the time-of-flight depth camera are obtained indirectly through experiments and calculations; substituting them into the optical imaging model of the camera yields the curves relating distance to the camera's output gray value, and the necessity of measuring these performance parameters is verified experimentally. The relative error between the model and the experimental results is within 20%, with an average relative error of 0.16%. These performance parameters and the simulation model are used to guide the selection of integration time when iTOF cameras are used in different scenarios, which effectively avoids the loss of distance-measurement accuracy caused by nonlinear errors introduced through improper use.

  • Microlenses and microlens arrays are the inevitable trend in the miniaturization and chip-level integration of optical lenses, and are widely used in optical communication, beam shaping, and optical detection. The main methods currently used to test microlenses are probe measurement [1-2] and interferometry [3]. Contact measurement with a probe makes it difficult to locate the lens center, requires repeated measurements, and easily damages the optical surface [4-5]; non-contact interferometric measurement offers high accuracy, but the setup is complex, the environmental requirements are demanding, the instruments are expensive [6-7], and the testing efficiency is low. The Hartmann sensor [8] is compact and accurate, and dynamic effects can be removed by averaging over multiple samples, so it is now widely used.

    Several researchers have studied the use of Hartmann sensors to measure lens performance. TIAN Ailing et al. at Xi'an Technological University [9] measured a lens with a 3 mm aperture, reconstructed the wavefront with Zernike polynomials, and obtained the wavefront and aberration parameters of the lens under test; WANG Jing et al. at the Changchun Institute of Optics, Fine Mechanics and Physics [10] proposed a Shack-Hartmann scanning and stitching method for testing the surface figure of flat mirrors; DAI Xunyi at the Institute of Optics and Electronics in Chengdu [11] proposed a scanning Hartmann method for evaluating the image quality of large-aperture telescopes. All of these studies addressed large-aperture optics; no mature technique yet exists for measuring small lenses with a Hartmann sensor. Building on this previous work, this paper redesigns the test scheme, applies the method to small-aperture microlenses for the first time, and analyzes its feasibility. The measurement scheme is studied in detail, the sources of error and the means of suppressing them are analyzed, and the optical performance of a convex lens with an aperture of 200 μm is measured experimentally.

    The optical layout of the overall test system is shown in Figure 1; it consists of a measurement path and a reference path. The reference path comprises the emitting laser, beam expander, beam-splitting prism, and a flat mirror; the measurement path comprises the emitting laser, beam expander, beam-splitting prism, standard (focusing) lens, lens under test, and a flat mirror. The two flat mirrors must have consistent surface-figure deviations, and the surface-figure deviation of the standard lens, which is characterized with an interferometer, either has no effect on the measurement accuracy or is removed as a systematic error.

    Figure 1. Optical path diagram of overall scheme for test system

    In Figure 1, the beam from the 632.8 nm laser (1) is expanded and collimated by the beam-expander group (2). After passing through the beam-splitting prism (3), the transmitted light falls on the standard flat mirror (7), returns along its original path, is reflected by the prism (3), and reaches the Hartmann sensor (8), which records the wave aberration of the reference path. The beam reflected by the prism (3) is focused by the focusing lens (4) onto the small lens under test (5), the focal point of the focusing lens (4) coinciding with that of the lens under test (5). The collimated beam emerging from the lens under test (5) is reflected by the standard flat mirror (6), returns along its original path, is transmitted by the prism (3), and reaches the Hartmann sensor, which records the wave aberration of the path containing the lens under test. Subtracting the first (reference) wavefront from the second (measurement) wavefront yields the wave aberration of the lens under test.
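    Written out, the subtraction performed by the two measurements above is simply the difference of the two recorded wavefronts, in which the errors of the elements common to both paths cancel:

$$ W_{\text{lens}}(x,y) = W_{\text{meas}}(x,y) - W_{\text{ref}}(x,y) $$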

    According to the practical requirements of optical-communication coupling, the wavefront of a lens with a 200 μm aperture must be measured with a root-mean-square (RMS) error ≤ 1/16 λ. After analyzing the main parameters of the Hartmann sensor, the detection system was designed and built with a reference wavelength of 632.8 nm; the centroid algorithm must reach an accuracy of 0.02 pixel, and the wavefront measurement must reach an RMS of 1/20 λ.

    The errors of the test system come mainly from two sources: the detection error of the Hartmann sensor itself, consisting of the centroid-extraction error ${\sigma _1}$ and the wavefront-reconstruction error ${\sigma _2}$, and the error ${\sigma _3}$ caused by aperture diffraction at the microlens. All three contributions must be analyzed to guarantee the accuracy of the microlens wavefront measurement.

    When the Hartmann sensor measures the microlens wavefront, the CCD records laser spot images [12]. Because of aberrations, each spot is an irregular blur circle, so the spot-centroid extraction algorithm must be studied. Given the characteristics of the spots, a weighted gray-level centroid algorithm is adopted to extract the spot centroids reliably [13]. It is computed as

    $$ {x_{\text c}} = \frac{{\displaystyle\mathop \sum \nolimits_{{{i}} = 1}^M \displaystyle\mathop \sum \nolimits_{{{j}} = 1}^N {x_{{{i}},{{j}}}}I_{{{i}},{{j}}}^w}}{{\displaystyle\mathop \sum \nolimits_{{{i}} = 1}^M \displaystyle\mathop \sum \nolimits_{{{j}} = 1}^N I_{{{i}},{{j}}}^w}} $$ (1)
    $$ {y_{\text c}} = \frac{{\displaystyle\mathop \sum \nolimits_{{{i}} = 1}^M \displaystyle\mathop \sum \nolimits_{{{j}} = 1}^N {y_{{{i}},{{j}}}}I_{{{i}},{{j}}}^w}}{{\displaystyle\mathop \sum \nolimits_{{{i}} = 1}^M \displaystyle\mathop \sum \nolimits_{{{j}} = 1}^N I_{{{i}},{{j}}}^w}} $$ (2)

    where M×N is the CCD resolution; $ x_{i,j}, y_{i,j} $ are the $x$ and $y$ coordinates of pixel $\left( {i,j} \right)$; $I_{i,j}^w$ is the gray value of pixel $\left( {i,j} \right)$ raised to the power $w$, with $w$ the gray-value exponent; and $ x_{\mathrm{c}}, y_{\mathrm{c}} $ are the $x$ and $y$ coordinates of the spot centroid.
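    A minimal NumPy sketch of Eqs. (1) and (2) makes the weighting explicit; the function name and the assumption that the background has already been suppressed by thresholding (discussed next) are illustrative rather than part of the original text.

```python
import numpy as np

def weighted_centroid(spot, w=2):
    """Weighted gray-level centroid of a spot image, Eqs. (1)-(2).

    spot : 2-D array of pixel gray values (background assumed already removed).
    w    : exponent applied to the gray values (the paper selects w = 2).
    """
    iw = spot.astype(float) ** w                 # I_{i,j}^w
    total = iw.sum()
    if total == 0:
        raise ValueError("empty spot: no signal above background")
    rows, cols = np.indices(spot.shape)          # pixel row (y) and column (x) indices
    xc = (cols * iw).sum() / total               # Eq. (1)
    yc = (rows * iw).sum() / total               # Eq. (2)
    return xc, yc
```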

    The image captured by the CCD is a noisy gray-level image, so before extracting the spot centroids the image must be pre-processed to reduce the influence of background noise [14]. Several groups of centroid data were extracted from the same spot using a global threshold, an adaptive Gaussian threshold, and an adaptive mean threshold, and the mean value for each method was obtained by least squares. Each method was then used to extract the centroids of the distributed spots 20 times; subtracting the corresponding mean gives the residual errors, and the standard deviation of the centroid extraction was computed with the Bessel formula, as shown in Figure 2. Centroid extraction is most accurate when $w$ is between 2 and 3; to reduce computation, $w$ is usually taken as an integer. With an exponent of 2, the centroid accuracy improves greatly over the conventional centroid method ($w{\text{ = }}1$), while increasing the exponent from 2 to 3 changes the accuracy little, so $w = 2$ is sufficient. Adaptive mean thresholding with an exponent of 2 gives the highest centroid-extraction accuracy, namely ${\sigma _1}{\text{ = }}0.018$ pixel.

    Figure 2. Error diagram of spot centroid extraction

    After the distorted incident wavefront is segmented by the microlens array, an irregular spot array forms on the focal plane. The offset of each spot centroid $ (x_{\mathrm{c}},y_{\mathrm{c}}) $ from its ideal position $ ({x}_{0},{y}_{0}) $, divided by the microlens focal length $f'$, gives the average wavefront slope within the corresponding sub-aperture [15], as in Eq. (3):

    $$ \left\{ \begin{array}{l}G_x=\dfrac{\Delta x}{f'}=\dfrac{x_{\text c}-x_0}{f'} \\ G_y=\dfrac{\Delta y}{f'}=\dfrac{y_{\text c}-y_0}{f'} \end{array} \right. $$ (3)

    where $ G_x $ and $ G_y $ are the slopes in the $x$ and $y$ directions, $ (x_{\mathrm{c}},y_{\mathrm{c}}) $ is the computed spot centroid, $({x_0},{y_0})$ is the ideal centroid position, and $f'$ is the focal length of the microlens array.
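    A short sketch of Eq. (3) follows; the assumption that centroid offsets have already been converted from pixels to the same length unit as the focal length is added here for illustration and is not stated in the original text.

```python
import numpy as np

def wavefront_slopes(centroids, ideal, focal_length):
    """Average wavefront slopes per sub-aperture, Eq. (3).

    centroids, ideal : (M, 2) arrays of measured and ideal spot positions,
                       already expressed in the same length unit as focal_length
                       (the centroids of Eqs. (1)-(2) are in pixels, so a
                       pixel-pitch conversion is assumed to have been applied).
    focal_length     : microlens focal length (1.3 mm for the array used here).
    Returns Gx, Gy as (M,) arrays of dimensionless slopes.
    """
    delta = np.asarray(centroids, float) - np.asarray(ideal, float)
    return delta[:, 0] / focal_length, delta[:, 1] / focal_length
```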

    Each term of the orthogonal Zernike polynomials has a clear physical meaning and corresponds one-to-one with a primary aberration, which makes them a good guide for further analysis of the other aberrations of the lens [16]; the wavefront is therefore reconstructed with orthogonal Zernike polynomials. First, the centroid offsets of the sub-apertures on the CCD image plane of the Hartmann sensor are computed to obtain the wavefront-slope information, and the wavefront is then fitted with Zernike polynomials [17], as in Eq. (4):

    $$ W = \mathop \sum \limits_{k = 1}^K {a_k}{Z_k}\left( {x,y} \right) $$ (4)

    where K is the number of Zernike terms, ${a_k}$ is the k-th Zernike coefficient, and ${Z_k}\left( {x,y} \right)$ is the k-th Zernike polynomial. Taking the partial derivatives of Eq. (4) with respect to $x$ and $y$ gives the wavefront slopes in the two directions:

    $$ \left\{ \begin{array}{l} {G_x} = \displaystyle\sum \limits_{k = 1}^K {a_k}\dfrac{{\partial {{\text{Z}}_k}\left( {x,y} \right)}}{{\partial x}} \\ {G_y} = \displaystyle \sum \limits_{k = 1}^K {a_k}\dfrac{{\partial {{\text{Z}}_k}\left( {x,y} \right)}}{{\partial y}} \end{array} \right. $$ (5)

    If the Hartmann wavefront sensor uses M sub-apertures, 2×M measurements are obtained; half of them are slopes in the $x$ direction and the other half are slopes in the $y$ direction. In matrix form, the slope equations can be written as

    $$ \left[\begin{array}{*{20}{c}}\dfrac{\partial Z_{11}(x,y)}{\partial x} & \dfrac{\partial Z_{12}(x,y)}{\partial x} & \cdots & \dfrac{\partial Z_{1K}(x,y)}{\partial x} \\ \dfrac{\partial Z_{21}(x,y)}{\partial x} & \dfrac{\partial Z_{22}(x,y)}{\partial x} & \cdots & \dfrac{\partial Z_{2K}(x,y)}{\partial x} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial Z_{M1}(x,y)}{\partial x} & \dfrac{\partial Z_{M2}(x,y)}{\partial x} & \cdots & \dfrac{\partial Z_{MK}(x,y)}{\partial x} \\ \dfrac{\partial Z_{11}(x,y)}{\partial y} & \dfrac{\partial Z_{12}(x,y)}{\partial y} & \cdots & \dfrac{\partial Z_{1K}(x,y)}{\partial y} \\ \dfrac{\partial Z_{21}(x,y)}{\partial y} & \dfrac{\partial Z_{22}(x,y)}{\partial y} & \cdots & \dfrac{\partial Z_{2K}(x,y)}{\partial y} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial Z_{M1}(x,y)}{\partial y} & \dfrac{\partial Z_{M2}(x,y)}{\partial y} & \cdots & \dfrac{\partial Z_{MK}(x,y)}{\partial y}\end{array}\right]\left[\begin{array}{*{20}{c}}\begin{array}{*{20}{c}}\boldsymbol{a}_1 \\ \boldsymbol{a}_2 \\ \vdots \\ \boldsymbol{a}_K\end{array} \\ \end{array}\right]=\left[\begin{array}{*{20}{c}}G_{X1} \\ G_{X2} \\ \vdots \\ \begin{array}{*{20}{c}}\begin{array}{*{20}{c}}G_{XM} \\ G_{Y1}\end{array} \\ \begin{array}{*{20}{c}}G_{Y2} \\ \vdots \\ G_{YM}\end{array}\end{array}\end{array}\right] $$ (6)

    Equation (6) can be written compactly as

    $$ {\boldsymbol{B}} = {\boldsymbol{Z}}'{\boldsymbol{a}} $$ (7)

    where ${\boldsymbol{B}}$ is the slope vector and ${\boldsymbol{a}}$ is the column vector of Zernike fitting coefficients. The coefficient vector ${\boldsymbol{a}}$ is then obtained from the pseudo-inverse of ${\boldsymbol{Z}}'$:

    $$ {\boldsymbol{a}} = ({\boldsymbol{Z}}'^{ {\mathrm{T}}}{\boldsymbol{Z}}')^{ - 1}{\boldsymbol{Z}}'^{ {\mathrm{T}}}{\boldsymbol{B}} $$ (8)

    Substituting the resulting coefficients into Eq. (4) recovers the wavefront; a least-squares sketch of this solution is given below. If the Zernike order is too low, higher-order aberrations are misinterpreted as lower-order ones and the reconstruction accuracy is poor; if the order is too high, lower-order aberrations are misinterpreted as higher-order ones and the accuracy also degrades. A known wavefront was generated in MATLAB from a set of random Zernike coefficients and reconstructed with 12, 15, 18, 21, 28, and 36 Zernike terms; the relationship between reconstruction error and Zernike order is listed in Table 1.
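    The following sketch assembles the design matrix of Eq. (6) and solves Eq. (8); `numpy.linalg.lstsq` is used instead of forming the explicit pseudo-inverse, which is numerically equivalent but more stable. Generating the Zernike derivative matrices is left to the caller, and all names are illustrative.

```python
import numpy as np

def fit_zernike_coeffs(dzdx, dzdy, gx, gy):
    """Least-squares Zernike coefficient fit, Eqs. (6)-(8).

    dzdx, dzdy : (M, K) matrices of the x- and y-derivatives of the K Zernike
                 polynomials evaluated at the M sub-aperture centres.
    gx, gy     : (M,) measured slope vectors from Eq. (3).
    Returns the (K,) coefficient vector a of Eq. (4).
    """
    z_prime = np.vstack([dzdx, dzdy])      # 2M x K matrix Z' of Eq. (6)
    b = np.concatenate([gx, gy])           # 2M slope vector B
    a, *_ = np.linalg.lstsq(z_prime, b, rcond=None)   # pseudo-inverse solution, Eq. (8)
    return a

# Reconstructed wavefront at points (x, y): W = zernike_basis(x, y) @ a   (Eq. (4))
```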

    Table 1. Reconstruction errors for different orders of Zernike polynomials

| Polynomial order | 12 | 15 | 18 | 21 | 28 | 36 |
| --- | --- | --- | --- | --- | --- | --- |
| RMS/λ | 0.062 0 | 0.051 | 0.050 3 | 0.048 0 | 0.049 2 | 0.047 0 |
| PV/λ | 0.097 8 | 0.082 | 0.083 0 | 0.081 4 | 0.079 6 | 0.076 5 |

    The wavefront under test is segmented by the microlens array, so the number of sampling points also affects the fitting accuracy. The microlenses are densely packed into an array with an area of 7 mm × 5 mm. When the microlens sub-aperture is reduced, the number of array elements, and hence of sampling points, increases and the measurement accuracy improves, but the dynamic range of the Hartmann sensor decreases; when the sub-aperture is enlarged, the number of sampling points decreases, the accuracy drops, and the dynamic range increases.

    The size and the number of sub-apertures therefore constrain each other, and small-aperture, low-curvature lenslets are difficult to manufacture. Taking these factors into account, a Hartmann microlens array with a sub-aperture size of 160 μm and a focal length of 1.3 mm was selected. The performance of the Hartmann sensor is further judged by the Fresnel number and by the number of pixels covered by the Airy disk: the Fresnel number of the system is 6.7, so the sensor has high sensitivity and crosstalk between focal spots is negligible; the Airy disk diameter is 13 μm, covering 7 pixel × 7 pixel, which meets the accuracy requirement of the centroid calculation. Fitting with 36 Zernike terms while increasing the number of sampling points, the wavefront measured by the Hartmann sensor covers 15, 20, 25, 27, and 31 sub-lenses across its diameter; the effect of the number of sampling points on the fitting accuracy is listed in Table 2. For a fixed polynomial order, increasing the number of sampling points improves the fitting accuracy to some extent, but it is not the decisive factor: once the number of sampling points is large enough, the fitting accuracy levels off. Finally, 36 Zernike terms and 585 sampling points were used for the fit, giving an RMS fitting error of 0.047 λ, i.e. ${\sigma _2} = 0.047\lambda $.

    Table 2. Influence of number of sampling points on fitting errors

| Number of sampling points | 170 | 276 | 431 | 585 | 679 |
| --- | --- | --- | --- | --- | --- |
| RMS/λ | 0.081 | 0.062 0 | 0.051 3 | 0.047 0 | 0.046 8 |
| PV/λ | 0.102 | 0.096 8 | 0.083 3 | 0.076 5 | 0.076 6 |
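    As a consistency check on the quoted spot size, the standard diffraction-limited estimate with the sub-aperture and focal length given above yields a value that rounds to the quoted 13 μm (the number of pixels covered additionally depends on the CCD pixel pitch, which is not stated here):

$$ d_{\text{Airy}} = \frac{2.44\,\lambda f'}{D} = \frac{2.44 \times 0.632\ 8\ {\text{μm}} \times 1.3\ {\text{mm}}}{160\ {\text{μm}}} \approx 12.5\ {\text{μm}} $$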

    From Eqs. (1) and (2), the centroid-extraction accuracy depends on the intensity distribution of the spot, and the microlens under test diffracts the light noticeably in the measurement path. After diffraction at the microlens, the spot intensity on the focal plane is Gaussian-like with secondary diffraction peaks, which shifts the spot centroid and introduces wavefront-reconstruction errors. The phase measured by the Hartmann sensor therefore includes the phase change caused by microlens diffraction; the centroid offsets are shown in Figure 3. Taking the 200 μm microlens under test as an example and illuminating it with ideal collimated light at λ = 632.8 nm, the centroid positions on the focal plane are shown in Figure 3(a), where red marks the ideal diffraction-free spot centroid and blue the diffracted one. To analyze the influence of diffraction on the centroid offset, microlenses with apertures of 20 μm, 50 μm, 80 μm, 100 μm, and 200 μm were analyzed; the results in Figure 3(b) show that the smaller the lens aperture, the stronger the diffraction and the larger the centroid offset.

    Figure 3. Centroid offset diagrams

    Taking the 200 μm lens as an example, its theoretically computed wavefront was reconstructed with 36 Zernike terms, as shown in Figure 4. Figure 4(a) shows the reconstruction without diffraction, with a wavefront PV of 2.526 39 λ and an RMS of 0.706 λ; Figure 4(b) shows the reconstruction with diffraction, with a PV of 2.564 79 λ and an RMS of 0.713 97 λ. The wavefront error introduced by diffraction is $ \sigma_3=0.007\ 9\lambda $.
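    The quoted value is consistent with the difference between the two RMS figures above (offered only as a consistency check; the text does not state how $\sigma_3$ is defined):

$$ \sigma_3 \approx \left(0.713\ 97 - 0.706\right)\lambda \approx 0.007\ 9\,\lambda $$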

    Figure 4. Wavefront restoration diagrams

    The total error is given by the combination formula:

    $$ \sigma = \sqrt {\sigma _1^2 + \sigma _2^2 + \sigma _3^2} = 0.048\lambda < \frac{\lambda }{20} $$ (9)

    where ${\sigma}$ is the total error. With this error allocation, the overall measurement error is 0.048 λ, slightly better than the required 1/20 λ, so the condition for high-accuracy measurement is satisfied.
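    Assuming the wavefront-equivalent contribution of $\sigma_1$ (quoted above in pixels) is negligible compared with $\sigma_2$, the quoted total follows from the two wavefront terms alone:

$$ \sigma \approx \sqrt{0.047^2 + 0.007\ 9^2}\,\lambda \approx 0.047\ 7\,\lambda \approx 0.048\,\lambda $$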

    Following the wavefront-aberration measurement scheme of Figure 1, the test setup shown in Figure 5 was built. Before measuring, the focal points of the two lenses of the collimating group (2) must coincide, and the positions of all elements are adjusted so that their optical axes are aligned; the wavefront aberration of all optical elements except the lens under test is then measured. In the test step, the focusing lens and the lens under test are inserted and the standard flat mirror (7) is removed or blocked, and the wavefront aberration of the complete optical train is measured; subtracting the two results gives the relative measurement of the wavefront aberration of the lens under test. Because the aperture of the lens under test (5) is small, direct measurement would put only a few sampling points on the Hartmann sensor (8) and degrade the accuracy, so the beam is expanded through the focusing lens (4) to increase the number of sampling points and improve accuracy. Whether the focal points of the focusing lens (4) and the lens under test (5) coincide is judged by observing the spot size on the Hartmann sensor (8). Because even a slight tilt of a micrometre-scale lens changes the spot shape considerably, the tilt in each direction and the defocus can be simulated with software such as Zemax and then corrected accordingly. In addition, the standard flat mirror (6) should be placed as close as possible to the lens under test (5); a spacing of about 3 mm gives the best results.

    Figure 5. Test experiment site

    The recorded spot images are shown in Figure 6: Figure 6(a) shows the spot image from the reference path and Figure 6(b) that from the measurement path. The wavefront-slope information from the Hartmann sensor was fitted with Zernike polynomials; the first 36 Zernike coefficients of the lens under test are listed in Table 3. The wavefront aberration fitted from these coefficients is shown in Figure 7; its RMS is 69 nm and its PV is 266.8 nm.

    Figure 6. Experimental spot diagrams

    Table 3. Zernike coefficients of lens under test

| Zernike term | Z1 | Z2 | Z3 | Z4 | Z5 | Z6 | Z7 | Z8 | Z9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Coefficient | −0.006 | −0.108 9 | −0.070 6 | 0.010 9 | 0.062 2 | 0.008 2 | 0.015 4 | 0.013 4 | −0.001 |

| Zernike term | Z10 | Z11 | Z12 | Z13 | Z14 | Z15 | Z16 | Z17 | Z18 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Coefficient | −0.009 4 | −0.002 1 | −0.002 2 | −0.010 6 | 0.005 | −0.002 5 | −0.001 8 | 0.001 8 | 0.002 |

| Zernike term | Z19 | Z20 | Z21 | Z22 | Z23 | Z24 | Z25 | Z26 | Z27 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Coefficient | 0.000 2 | −0.002 8 | −0.000 2 | 0.000 6 | 0.007 | 0.009 | 0.001 5 | −0.001 | 0.001 3 |

| Zernike term | Z28 | Z29 | Z30 | Z31 | Z32 | Z33 | Z34 | Z35 | Z36 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Coefficient | 0.000 4 | −0.001 | 0.001 6 | −0.000 2 | 0.000 3 | 0.000 5 | 0.000 7 | 0.000 2 | −0.001 7 |

    Figure 7. Wavefront aberration diagram
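    The reported RMS can be roughly cross-checked from the Table 3 coefficients, assuming they are orthonormally normalized and expressed in micrometres (consistent with Table 4) and that piston and tilt (Z1 to Z3) are excluded; the small difference from the reported 69 nm is plausibly rounding of the tabulated values.

```python
import numpy as np

# Zernike coefficients Z4-Z36 of the lens under test, copied from Table 3
# (assumed units: micrometres; piston Z1 and tilts Z2, Z3 excluded).
a4_36 = np.array([
     0.0109,  0.0622,  0.0082,  0.0154,  0.0134, -0.001,  -0.0094, -0.0021,
    -0.0022, -0.0106,  0.005,  -0.0025, -0.0018,  0.0018,  0.002,   0.0002,
    -0.0028, -0.0002,  0.0006,  0.007,   0.009,   0.0015, -0.001,   0.0013,
     0.0004, -0.001,   0.0016, -0.0002,  0.0003,  0.0005,  0.0007,  0.0002,
    -0.0017,
])

# For orthonormal Zernike polynomials the wavefront RMS is the root sum of
# squares of the coefficients.
rms_um = np.sqrt(np.sum(a4_36 ** 2))
print(f"RMS ≈ {rms_um * 1000:.0f} nm")   # ≈ 70 nm, close to the reported 69 nm
```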

    The coefficient of each Zernike term represents a different aberration of the lens, as listed in Table 4. Figure 8 shows the fitted primary aberrations of the lens, demonstrating that the optical performance of a microlens can indeed be characterized with a Hartmann sensor.

    Table 4. Aberrations of microlens under test

| Aberration | Defocus | 0° astigmatism | 45° astigmatism | X-direction coma and tilt | Y-direction coma and tilt | Spherical aberration and defocus | 0° trefoil |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Value/μm | 0.010 9 | 0.062 2 | 0.008 2 | 0.015 4 | 0.013 4 | −0.001 | −0.009 4 |

    Figure 8. Primary aberration diagrams of lens

    This paper has investigated the effectiveness of using a Hartmann sensor to measure the optical performance of microlenses and analyzed the main error sources of the method together with measures for reducing them. For a convex lens with a 200 μm aperture, the measured wavefront error is 69 nm RMS and 266.8 nm PV, slightly inferior to the interferometer results (43 nm RMS, 157 nm PV). The tests show that, by choosing a suitable focusing lens and increasing the number of sampling points, pre-processing the images with adaptive mean thresholding, and computing the centroids with the weighted gray-level centroid algorithm, a Hartmann sensor can effectively measure the optical performance of microlenses. The accuracy can be further improved by increasing the number of sampling points and the accuracy of centroid extraction and wavefront reconstruction. With improved experimental conditions and mass production, a measurement accuracy comparable to that of an interferometer is expected, enabling high-accuracy testing of micro-optical elements.

    As microlenses and microlens arrays become widely used, this method can effectively characterize their optical performance and provide quality assurance for microlens production, and therefore has broad application value.

  • Figure 1. Schematic diagram of iTOF camera
    Figure 2. Comparison of depth data between unsaturated and oversaturated gray values
    Figure 3. Gain calculation curves of ordinary visible light camera[27]
    Figure 4. Linear responsivity curves of ordinary visible light camera[27]
    Figure 5. Schematic diagram of test device
    Figure 6. Four-phase linear responsivity curves
    Figure 7. Fitting calculation curves of linear responsivity of P0 phase
    Figure 8. System gain curves for four phases
    Figure 9. Gain fitting curves of P0 phase
    Figure 10. Simulation curves of distances and average gray values of four phases
    Figure 11. Comparison of simulation depth values with depth true values
    Figure 12. Measurement results at different exposure times
    Figure 13. Comparison of gray value curves between experiment and simulation
    Figure 14. Relative errors between experiment and simulation

    Table 1. Average gray values of dark field with different integration times and phases

| Unit | Dark background of each phase | 50 μs | 250 μs | 950 μs |
| --- | --- | --- | --- | --- |
| Tap A | $ \mu_{y_0,\text{dark}}/\mathrm{DN} $ | 186.85 | 186.77 | 186.84 |
|  | $ \mu_{y_{90},\text{dark}}/\mathrm{DN} $ | 186.76 | 186.72 | 186.77 |
|  | $ \mu_{y_{180},\text{dark}}/\mathrm{DN} $ | 186.76 | 186.83 | 186.76 |
|  | $ \mu_{y_{270},\text{dark}}/\mathrm{DN} $ | 186.73 | 186.78 | 186.82 |
| Tap B | $ \mu_{y_0,\text{dark}}/\mathrm{DN} $ | 186.54 | 186.49 | 186.57 |
|  | $ \mu_{y_{90},\text{dark}}/\mathrm{DN} $ | 186.57 | 186.56 | 186.53 |
|  | $ \mu_{y_{180},\text{dark}}/\mathrm{DN} $ | 186.58 | 186.61 | 186.64 |
|  | $ \mu_{y_{270},\text{dark}}/\mathrm{DN} $ | 186.62 | 186.55 | 186.56 |

    Table 2. Comparison of four-phase linear responsivity

| Unit | Phase | Responsivity/(DN·μs−1) |
| --- | --- | --- |
| Tap A | P0 | 1.866 |
|  | P180 | 1.997 |
|  | P90 | 1.995 |
|  | P270 | 2.000 |
| Tap B | P0 | 1.910 |
|  | P180 | 1.907 |
|  | P90 | 1.912 |
|  | P270 | 1.906 |

    Table 3. Comparison of four-phase system gain

| Unit | Phase | System gain/(DN·e−1) |
| --- | --- | --- |
| Tap A | P0 | 0.003 414 |
|  | P180 | 0.003 410 |
|  | P90 | 0.003 336 |
|  | P270 | 0.003 267 |
| Tap B | P0 | 0.005 159 |
|  | P180 | 0.005 319 |
|  | P90 | 0.005 166 |
|  | P270 | 0.005 335 |
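    The stated purpose of these parameters is to predict the camera's mean output gray value as a function of integration time and so choose an integration time that keeps the sensor out of its nonlinear, near-saturation region. Below is a minimal sketch of that use under assumptions not given in this section: a purely linear dark-level-plus-responsivity model, an illustrative 12-bit saturation level, an assumed modulation frequency, and the conventional four-phase continuous-wave demodulation formula in one common sign convention. Only the dark-level and responsivity defaults are taken from Tables 1 and 2.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def predict_gray(t_int_us, mu_dark=186.8, responsivity=1.9, full_scale=4095):
    """Predicted mean gray value versus integration time (linear model).

    mu_dark      : dark-level gray value in DN (about 186.8 DN, Table 1).
    responsivity : linear responsivity in DN/us (about 1.9 DN/us, Table 2,
                   valid only for the illumination used in that measurement).
    full_scale   : saturation level in DN (illustrative 12-bit value).
    Predictions approaching full_scale warn that the chosen integration
    time would push the camera into its nonlinear region.
    """
    return np.minimum(mu_dark + responsivity * np.asarray(t_int_us), full_scale)

def four_phase_depth(q0, q90, q180, q270, f_mod_hz=20e6):
    """Conventional four-phase CW demodulation; f_mod_hz is an assumed
    modulation frequency, not a value given in this section."""
    phase = np.arctan2(q270 - q90, q0 - q180) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod_hz)

# Example: predicted mean gray values for the three integration times of Table 1
print(predict_gray([50, 250, 950]))
```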
  • [1] ZHANG Zonghua, LIU Wei, LIU Guodong, et al. Overview of the development and application of 3D vision measurement technology[J]. Journal of Image and Graphics, 2021, 26(6): 1483-1502. doi: 10.11834/jig.200841
    [2] STYTZ M R, FRIEDER G, FRIEDER O. Three-dimensional medical imaging[J]. ACM Computing Surveys, 1991, 23(4): 421-499.
    [3] FISHMAN E K, MAGID D, NEY D R, et al. Three-dimensional imaging[J]. Radiology, 1991, 181(2): 321-337.
    [4] CHIABRANDO F, CHIABRANDO R, PIATTI D, et al. Sensors for 3D imaging: metric evaluation and calibration of a CCD/CMOS time-of-flight camera[J]. Sensors, 2009, 9(12): 10080-10096.
    [5] WEI Jiali, QU Huidong, WANG Yongxian, et al. Research review of 3D cameras based on time-of-flight method[J]. Infrared Technology, 2021, 43(1): 60-67.
    [6] CHEN X B, XI J T, JIN Y, et al. Accurate calibration for a camera-projector measurement system based on structured light projection[J]. Optics and Lasers in Engineering, 2009, 47(3/4): 310-319.
    [7] PATEL D K, BACHANI P A, SHAH N R. Distance measurement system using binocular stereo vision approach[J]. International Journal of Engineering Research & Technology, 2013, 2(12): 2461-2464.
    [8] FOIX S, ALENYA G, TORRAS C. Lock-in time-of-flight (ToF) cameras: a survey[J]. IEEE Sensors Journal, 2011, 11(9): 1917-1926.
    [9] RICE K, MOIGNE J L E, JAIN P. Analyzing range maps data for future space robotics applications[C]//Proceedings of the 2nd IEEE International Conference on Space Mission Challenges for Information Technology. New York: IEEE, 2006: 357.
    [10] HOU Fei. Research on 3D target point cloud extraction and reconstruction based on flight time[D]. Beijing: University of Chinese Academy of Sciences (National Center for Space Science, Chinese Academy of Sciences), 2019.
    [11] KOHOUTEK T. Analysis and processing the 3D-range-image-data for robot monitoring[J]. Geodesy and Cartography, 2008, 34(3): 92-96.
    [12] KUEHNLE J U, XUE Z, STOTZ M, et al. Grasping in depth maps of time-of-flight cameras[C]//2008 International Workshop on Robotic and Sensors Environments. Ottawa: IEEE, 2008: 132-137.
    [13] OGGIER T, LEHMANN M, KAUFMANN R, et al. An all-solid-state optical range camera for 3D real-time imaging with sub-centimeter depth resolution (SwissRanger)[C]//Optical Design and Engineering. St. Etienne: SPIE, 2004, 5249: 534-545.
    [14] GOW R D, RENSHAW D, FINDLATER K, et al. A comprehensive tool for modeling CMOS image-sensor-noise performance[J]. IEEE Transactions on Electron Devices, 2007, 54(6): 1321-1329.
    [15] BELHEDI A, BARTOLI A, BOURGEOIS S, et al. Noise modelling in time-of-flight sensors with application to depth noise removal and uncertainty estimation in three-dimensional measurement[J]. IET Computer Vision, 2015, 9(6): 967-977. doi: 10.1049/iet-cvi.2014.0135
    [16] FALIE D, BUZULOIU V. Noise characteristics of 3D time-of-flight cameras[C]//2007 International Symposium on Signals, Circuits and Systems. Iasi: IEEE, 2007: 229-232.
    [17] PLAUE M. Theoretical and experimental error analysis of continuous-wave time-of-flight range cameras[J]. Optical Engineering, 2009, 48(1): 1-16.
    [18] SEITEL A, DOS SANTOS T R, MERSMANN S, et al. Adaptive bilateral filter for image denoising and its application to in-vitro time-of-flight data[C]//Medical Imaging 2011: Visualization, Image-Guided Procedures, and Modeling. Lake Buena Vista: SPIE, 2011: 636-643.
    [19] PLAUE M. Denoising of continuous-wave time-of-flight depth images using confidence measures[J]. Optical Engineering, 2009, 48(7): 077003. doi: 10.1117/1.3159869
    [20] MOSER B, BAUER F, ELBAU P, et al. Denoising techniques for raw 3D data of TOF cameras based on clustering and wavelets[C]//Three-Dimensional Image Capture and Applications 2008. San Jose: SPIE, 2008: 68050E.
    [21] EDELER T, OHLIGER K, HUSSMANN S, et al. Time-of-flight depth image denoising using prior noise information[C]//IEEE 10th International Conference on Signal Processing Proceedings. Beijing: IEEE, 2010: 119-122.
    [22] SCHÖNER H. Image processing for three-dimensional scans generated by time-of-flight range cameras[J]. Journal of Electronic Imaging, 2012, 21(2): 023012. doi: 10.1117/1.JEI.21.2.023012
    [23] LENZEN F, SCHÄFER H, GARBE C. Denoising time-of-flight data with adaptive total variation[M]//Advances in Visual Computing. Berlin: Springer Berlin Heidelberg, 2011: 337-346.
    [24] LANGE R, SEITZ P. Solid-state time-of-flight range camera[J]. IEEE Journal of Quantum Electronics, 2001, 37(3): 390-397. doi: 10.1109/3.910448
    [25] HORAUD R, HANSARD M, EVANGELIDIS G, et al. An overview of depth cameras and range scanners based on time-of-flight technologies[J]. Machine Vision and Applications, 2016, 27(7): 1005-1020.
    [26] LU Chunqing, SONG Yuzhi, WU Yanpeng, et al. 3D information acquisition and error analysis based on TOF computational imaging[J]. Infrared and Laser Engineering, 2018, 47(10): 160-166.
    [27] European Machine Vision Association. EMVA standard 1288: standard for characterization of image sensors and cameras[EB/OL]. (2016-12-30)[2023-01-30]. https://www.emva.org/wp-content/uploads/EMVA1288-3.1a.pdf.
    [28] JANESICK J, KLAASEN K, ELLIOTT T. CCD charge collection efficiency and the photon transfer technique[C]//Solid-State Imaging Arrays. San Diego: SPIE, 1985, 570: 7-19.
    [29] JANESICK J R, KLAASEN K P, ELLIOTT T. Charge-coupled-device charge-collection efficiency and the photon-transfer technique[J]. Optical Engineering, 1987, 26(10): 972-980.
    [30] WIDENHORN R, BLOUKE M M, WEBER A, et al. Temperature dependence of dark current in a CCD[C]//Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications III. San Jose: SPIE, 2002, 4669: 193-201.
    [31] GARCIA J E, DYER A G, GREENTREE A D, et al. Linearisation of RGB camera responses for quantitative image analysis of visible and UV photography: a comparison of two techniques[J]. PLoS One, 2013, 8(11): e79534. doi: 10.1371/journal.pone.0079534
Publication history
  • Received: 2024-01-28
  • Revised: 2024-05-12
  • Published online: 2025-03-12
  • Issue date: 2025-03-14
