LUO Shiqi, LIU Jingzhou, CHEN Xuanhui, XIE Xiangsheng. Speckle autocorrelation resolution measurement and correction[J]. Journal of Applied Optics, 2023, 44(2): 398-405. DOI: 10.5768/JAO202344.0203006

Speckle autocorrelation resolution measurement and correction

More Information
  • Received Date: April 14, 2022
  • Revised Date: June 27, 2022
  • Available Online: July 04, 2022
  • Speckle correlation is the basis of speckle-based optical measurement and imaging-recovery technologies, and it determines the resolution of the optical system. At present, the theoretical description of speckle size (granularity or resolution) is not accurate enough and lacks experimental verification. The factors influencing the autocorrelation size of a speckle pattern were explored and compared with an objective lens of the same numerical aperture, revealing the scattering-lens properties of a thin scattering medium. Multiple sets of measurements of speckle autocorrelation and lens focal-spot size show that the apodization function affects the resolution and that the Abbe criterion needs to be corrected according to the specific optical path, which provides a useful reference for speckle-based measurement and imaging technology.

  • With the rapid development of aviation technology and the upgrading of communication technology, unmanned aerial vehicles (UAVs) have been widely used in fire fighting[1], agricultural monitoring[2] and other fields. UAVs fly at low altitude with uncertain trajectories and high flexibility[3], which poses a threat to public security and privacy when they are used by criminals. It is therefore necessary to take countermeasures against UAVs, and UAV target detection is the key to interfering with and striking them.

    Common UAV target detection methods include classical moving-target detection based on the optical flow method and the frame difference method[4]. Since the AlexNet network was proposed, deep learning has gradually been applied to object detection[5]. Although two-stage algorithms such as R-CNN (region with convolutional neural network features)[6], Fast R-CNN[7], Faster R-CNN[8] and SPP-Net (spatial pyramid pooling network)[9] improve detection accuracy significantly compared with traditional algorithms, they cannot meet the real-time requirements of engineering. Among one-stage algorithms, the SSD (single-shot multibox detector) algorithm[10] adopts multi-scale feature maps combined with an anchor mechanism to improve detection accuracy as much as possible while maintaining speed. For small-target detection, WANG Ruoxiao et al. reduced the channels of VGG16 to achieve real-time UAV detection on an embedded platform[11]. LIN T Y et al. proposed RetinaNet, which uses the focal loss to overcome the class imbalance between foreground and background[12]. RAZA M A et al. proposed BV-RNet, which effectively detects small-scale targets by extracting dense features and optimizing predefined anchor points[13]. SUN Han et al. proposed TIB-Net, a lightweight detection network for UAVs[14]. In view of the lack of texture and shape features of infrared UAVs, DING Lianghui et al. enhanced the high-resolution network layer and adopted an adaptive pipeline filter (APF) based on temporal correlation and motion information to correct the results[15]. FANG H et al. transformed infrared small-UAV target detection into a nonlinear mapping from infrared image space to residual image space and obtained better detection performance in complex backgrounds[16]. The YOLO (you only look once) algorithm uses fully convolutional processing for target discrimination and candidate-box prediction[17], giving high detection accuracy and fast detection speed. HU Y et al. used feature maps of 4 scales to predict bounding boxes in YOLOv3 to obtain more texture and contour information, increasing the mAP by about 4.16%[18]. LI Zhipeng et al. used a super-resolution algorithm to reconstruct high-resolution UAV images and used YOLOv3 to realize effective detection of low-slow-small UAVs[19].

    The lack of semantic information in small UAV target imaging reduces detection accuracy, and the memory and computing power of embedded platforms are limited, so existing methods cannot meet the real-time requirements of UAV detection tasks; there is a lack of high-precision, real-time detection algorithms for small UAV targets. To address these problems, this paper improves the mAP (mean average precision) by 6.2% and the FPS (frames per second) by 22 frame/s over YOLOv4 through model improvement and pruning, and achieves 85.6% mAP and nearly 60 frame/s detection performance with half-precision deployment on an embedded platform. Experiments verify the effectiveness of this method for high-precision, real-time detection of low-slow-small UAV targets.

    The YOLOv4 algorithm was proposed in 2020. Compared with YOLOv3, it is optimized in the backbone network, multi-scale fusion, activation function, loss function and other aspects[20]; its structure is shown in Figure 1. The backbone borrows the skip-connection idea of CSPNet[21] and forms CSPDarkNet53 on the basis of DarkNet53 (see the residual part in Fig. 1), which enhances the feature extraction ability of the network and speeds up training. The neck uses the SPP structure (see the SPP structure diagram in Fig. 1) to enlarge the receptive field, and then PANet is used to fuse feature maps of different scales and sizes. Through repeated feature extraction, the network's ability to extract features of objects of different sizes is effectively enhanced. In the position loss, CIoU (complete intersection over union) is used to comprehensively evaluate the overlap area, aspect ratio, distance between center positions and other factors between the ground-truth box and the predicted box. The Mish activation function is used to avoid gradient saturation.

    Figure  1.  Network structure diagram of YOLOv4
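    As an illustration of the CIoU term, the following is a minimal NumPy sketch (not the authors' implementation; the [cx, cy, w, h] box format and the small epsilon terms are assumptions) that combines the overlap, center-distance and aspect-ratio factors described above:

    import numpy as np

    def ciou(box_pred, box_gt):
        """CIoU between two boxes given as [cx, cy, w, h] (assumed format)."""
        # Corner coordinates
        px1, py1 = box_pred[0] - box_pred[2] / 2, box_pred[1] - box_pred[3] / 2
        px2, py2 = box_pred[0] + box_pred[2] / 2, box_pred[1] + box_pred[3] / 2
        gx1, gy1 = box_gt[0] - box_gt[2] / 2, box_gt[1] - box_gt[3] / 2
        gx2, gy2 = box_gt[0] + box_gt[2] / 2, box_gt[1] + box_gt[3] / 2

        # Overlap area and IoU
        iw = max(0.0, min(px2, gx2) - max(px1, gx1))
        ih = max(0.0, min(py2, gy2) - max(py1, gy1))
        inter = iw * ih
        union = box_pred[2] * box_pred[3] + box_gt[2] * box_gt[3] - inter
        iou = inter / (union + 1e-9)

        # Squared center distance over squared diagonal of the smallest enclosing box
        cw = max(px2, gx2) - min(px1, gx1)
        ch = max(py2, gy2) - min(py1, gy1)
        rho2 = (box_pred[0] - box_gt[0]) ** 2 + (box_pred[1] - box_gt[1]) ** 2
        c2 = cw ** 2 + ch ** 2 + 1e-9

        # Aspect-ratio consistency term
        v = (4 / np.pi ** 2) * (np.arctan(box_gt[2] / (box_gt[3] + 1e-9))
                                - np.arctan(box_pred[2] / (box_pred[3] + 1e-9))) ** 2
        alpha = v / (1 - iou + v + 1e-9)

        return iou - rho2 / c2 - alpha * v  # CIoU; the loss is typically 1 - CIoU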

    Since YOLOv4 performs well in conventional target detection and already includes some optimizations for small targets, this paper improves the YOLOv4 algorithm according to the characteristics of low-slow-small UAV targets.

    There are still some problems when YOLOv4 is applied to the detection of low-slow-small UAV targets: the feature maps extracted by YOLOv4 contain few small-target features; the deep feature extraction network makes UAV features easy to lose; and the anchors adopted in YOLOv4 generalize poorly to small targets[22]. This paper therefore improves YOLOv4 in terms of network structure, small-target enhancement and candidate-box adjustment.

    As shown in Figure 2, this work improves the feature fusion part of YOLOv4 by up-sampling the shallow feature map and concatenating it with the shallow UAV feature map, adding an output branch with a scale of 104×104 pixels. Figure 3 shows the feature maps output by the neck and head of the improved YOLOv4. More details of the UAV are captured at the newly added scale, which is conducive to improving UAV detection accuracy. The improved network makes full use of both low-level and high-level information and detects smaller object scales through the new detection layer.

    Figure  2.  Comparison of YOLOv4 network structure before and after improvement
    Figure  3.  Feature maps of improved YOLOv4
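    A rough PyTorch sketch of the added branch is given below (channel counts, module names and the single UAV class are assumptions for illustration, not the exact network definition): the 52×52 neck feature map is up-sampled by a factor of 2 and concatenated with the 104×104 backbone feature map before a new detection head.

    import torch
    import torch.nn as nn

    class ExtraSmallObjectBranch(nn.Module):
        """Sketch of the added 104x104 detection branch (channel counts are assumed)."""
        def __init__(self, neck_ch=128, backbone_ch=64, num_anchors=3, num_classes=1):
            super().__init__()
            self.upsample = nn.Upsample(scale_factor=2, mode="nearest")
            # 1x1 convolution to reduce channels before concatenation
            self.reduce = nn.Conv2d(neck_ch, backbone_ch, kernel_size=1)
            # New detection head: each anchor predicts 4 box offsets + objectness + classes
            self.head = nn.Sequential(
                nn.Conv2d(backbone_ch * 2, backbone_ch * 2, 3, padding=1),
                nn.LeakyReLU(0.1),
                nn.Conv2d(backbone_ch * 2, num_anchors * (5 + num_classes), 1),
            )

        def forward(self, neck_52, backbone_104):
            x = self.upsample(self.reduce(neck_52))    # 52x52 -> 104x104
            x = torch.cat([x, backbone_104], dim=1)    # fuse shallow detail features
            return self.head(x)                        # 104x104 prediction map

    # Example with dummy feature maps for a 416x416 input
    branch = ExtraSmallObjectBranch()
    out = branch(torch.randn(1, 128, 52, 52), torch.randn(1, 64, 104, 104))
    print(out.shape)  # torch.Size([1, 18, 104, 104]) for 3 anchors and 1 class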

    YOLOv4 adopts k-means clustering, where k is the number of clusters; the higher the value of k, the better the quality of the preset anchor boxes, which is conducive to model convergence during training[23]. YOLOv4 allocates 3 anchor boxes to each scale, giving 9 anchor boxes in total. k-means randomly selects the k initial cluster centers, which can strongly affect the results when the initialization is poor.

    The improved YOLOv4 adopts k-means++ to cluster the UAV samples. k-means++ randomly selects one cluster center and computes the distances from the remaining samples to it; samples farther away are more likely to be chosen as the next cluster center, and this is repeated until k cluster centers are obtained. The Euclidean distance is used to measure the distance between a sample and a cluster center, and the objective function of the clustering is expressed as follows:

    f = \min \displaystyle\sum\limits_{i = 1}^k {\sum\limits_{x \in {k_i}} {{\rm{dist}}{{({c_i},x)}^2}} } (1)

    where k is the number of clusters, {k_i} is the ith cluster, and {\rm{dist}}{({c_i},x)^2} is the squared distance from sample x to the ith cluster center {c_i} . For the improved YOLOv4, the input image is resized to 416×416 pixels and 3 anchor boxes are assigned to the feature map of each scale, giving a total of 12 anchor boxes. The clustering process for the anchor boxes is shown in Fig. 4, and the clustering results are shown in Table 1. With k-means++, the clustered anchor boxes pay more attention to small targets and agree better with the real labels.
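    The anchor clustering step can be sketched as follows (NumPy only; the Euclidean distance of Formula (1) on (width, height) pairs is used, and the data in the usage line are random placeholders rather than real labels):

    import numpy as np

    def kmeans_pp(boxes_wh, k=12, iters=100, seed=0):
        """Cluster (w, h) pairs into k anchor boxes with k-means++ initialization."""
        rng = np.random.default_rng(seed)
        # k-means++ seeding: pick centers with probability proportional to D^2
        centers = [boxes_wh[rng.integers(len(boxes_wh))]]
        while len(centers) < k:
            d2 = np.min([np.sum((boxes_wh - c) ** 2, axis=1) for c in centers], axis=0)
            centers.append(boxes_wh[rng.choice(len(boxes_wh), p=d2 / d2.sum())])
        centers = np.array(centers, dtype=float)
        # Standard k-means refinement of the seeded centers
        for _ in range(iters):
            labels = np.argmin(((boxes_wh[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
            new_centers = np.array([boxes_wh[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        return centers[np.argsort(centers.prod(axis=1))]  # sorted by area, 3 per scale

    # Usage: boxes_wh is an (N, 2) array of labelled UAV box sizes in 416x416 coordinates
    anchors = kmeans_pp(np.abs(np.random.randn(500, 2)) * 40 + 8)
    print(anchors.round(1))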

    Figure  4.  Process of obtaining anchor boxes by k-means++ clustering
    Table  1.  Clustering results of different clustering methods in training set
    Algorithm                      Layer 1      Layer 2      Layer 3     Layer 4
    YOLOv4                         (282,242)    (73,58)      (15,24)     -
                                   (203,160)    (38,32)      (11,15)     -
                                   (110,95)     (21,16)      (8,11)      -
    Improved YOLOv4 (k-means)      (331,301)    (227,250)    (99,86)     (20,24)
                                   (292,238)    (195,159)    (74,57)     (12,17)
                                   (252,186)    (144,118)    (40,36)     (8,11)
    Improved YOLOv4 (k-means++)    (298,269)    (130,108)    (55,48)     (18,20)
                                   (220,190)    (103,60)     (35,19)     (12,17)
                                   (178,135)    (74,85)      (26,31)     (8,11)

    The mosaic data augmentation used in YOLOv4 randomly scales the targets, which may cause serious loss of UAV target information. This paper instead copies multiple UAVs into one image to increase the number of UAV instances (as shown in Figure 5), so that the model pays more attention to small UAVs and the contribution of small UAVs to the loss function is increased[24].

    Figure  5.  UAV data augmentation
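    A minimal sketch of this copy-paste style augmentation is shown below (NumPy; the random placement and the overlap check are assumed details rather than the exact procedure used in the paper):

    import numpy as np

    def paste_uavs(image, uav_patches, boxes, n_copies=4, seed=None):
        """Paste cropped UAV patches into an image and append the new boxes.

        image:       HxWx3 uint8 array
        uav_patches: list of small HxWx3 crops, each containing a UAV
        boxes:       list of [x1, y1, x2, y2] labels already present in the image
        """
        rng = np.random.default_rng(seed)
        h, w = image.shape[:2]
        out, out_boxes = image.copy(), list(boxes)
        for _ in range(n_copies):
            patch = uav_patches[rng.integers(len(uav_patches))]
            ph, pw = patch.shape[:2]
            if pw >= w or ph >= h:
                continue
            x = int(rng.integers(0, w - pw))
            y = int(rng.integers(0, h - ph))
            new_box = [x, y, x + pw, y + ph]
            # Skip placements that overlap existing labels
            if any(x < b[2] and new_box[2] > b[0] and y < b[3] and new_box[3] > b[1]
                   for b in out_boxes):
                continue
            out[y:y + ph, x:x + pw] = patch
            out_boxes.append(new_box)
        return out, out_boxes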

    Network pruning reduces network parameters and computational complexity by removing a large number of unimportant channels, thereby improving inference speed. Its general process includes sparse training, network pruning and model fine-tuning[25].

    The scale factor \gamma of the batch normalization (BN) layer is used as the index to evaluate the importance of each channel, and L1 regularization is applied to \gamma during training. The loss function is expressed as:

    L\left( \gamma \right) = l{\left( \gamma \right)_{{\rm{YOLO}}{\text{v}}4}} + \rho {\left\| \gamma \right\|_1} (2)

    where L\left( \gamma \right) is the total loss function, l{\left( \gamma \right)_{{\rm{YOLO}}{\text{v}}4}} is the loss function of YOLOv4, {\left\| \gamma \right\|_1} is the penalty term, and \rho is the penalty factor of the L1 norm.
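    A minimal PyTorch sketch of the sparsity term in Formula (2) is given below; it simply sums |γ| over all BN layers and adds the result to the detection loss (the subgradient-update form used by some network-slimming implementations is a common alternative):

    import torch
    import torch.nn as nn

    def bn_l1_penalty(model):
        """Sum of |gamma| over all BatchNorm2d layers: the L1 term in Formula (2)."""
        return sum(m.weight.abs().sum()
                   for m in model.modules() if isinstance(m, nn.BatchNorm2d))

    # In the training loop (rho corresponds to the sparsity rate, 0.001 in this paper):
    #   loss = yolo_loss + rho * bn_l1_penalty(model)
    #   loss.backward(); optimizer.step()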

    Channel pruning is carried out according to the sparsely trained \gamma values; a channel with a small \gamma value contributes little to the network inference results. The \gamma values are sorted and a pruning rate is set to remove the unimportant channels in the network. The channel pruning of the shortcut structure follows the practice of SlimYOLOv3[26], as shown in Figure 6. Assuming that layer A retains channels 1 and 2, layer C retains channels 2 and 3, and layer F retains channels 2 and 4, then layers A, C, D, F and G all retain channels 1, 2, 3 and 4.

    Layer pruning is based on the \gamma value of the convolution module before the shortcut layer. The two convolution modules before the shortcut layer are pruned together with it. As shown in the red box of Fig. 6, when layer D is cut, layer B and layer C are also cut.

    Figure  6.  Structure diagram of shortcut layer
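    The channel selection and shortcut alignment can be sketched as follows (a simplified PyTorch illustration that assumes each prunable convolution is followed by a BatchNorm2d layer; the actual pruning also rebuilds the convolution weights, which is omitted here):

    import torch
    import torch.nn as nn

    def channel_masks(bn_layers, prune_rate=0.8):
        """Keep-masks per BN layer from a global threshold on |gamma|."""
        all_gamma = torch.cat([bn.weight.detach().abs().flatten() for bn in bn_layers])
        threshold = torch.sort(all_gamma)[0][int(prune_rate * len(all_gamma))]
        return [bn.weight.detach().abs() > threshold for bn in bn_layers]

    def align_shortcut(masks, linked_indices):
        """Layers joined by shortcuts keep the union of their retained channels."""
        union = torch.stack([masks[i] for i in linked_indices]).any(dim=0)
        for i in linked_indices:
            masks[i] = union.clone()
        return masks

    # Example: three 4-channel BN layers linked by shortcuts keep the union of channels,
    # mirroring the layer A / C / F example described above
    bns = [nn.BatchNorm2d(4) for _ in range(3)]
    for bn, g in zip(bns, ([0.9, 0.8, 0.0, 0.0], [0.0, 0.7, 0.6, 0.0], [0.0, 0.8, 0.0, 0.9])):
        bn.weight.data = torch.tensor(g)
    masks = align_shortcut(channel_masks(bns, prune_rate=0.4), [0, 1, 2])
    print(masks[0])  # tensor([True, True, True, True]): the union keeps channels 1-4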

    A large number of UAV images (with a size of 1 920×1 080 pixels) collected by a camera were combined with the UAV dataset (Drone Dataset, Drone-data-2021) to form an experimental dataset containing 20 000 UAV images, of which 80% were used as the training set and the rest as the testing set.

    The comparison experiments on model improvement and pruning were carried out on the Windows 10 operating system with an i7-7700 processor and an NVIDIA GeForce GTX 1080 Ti, and the network was implemented with PyTorch 1.6 (GPU). The input image was resized to 416×416 pixels, the batch size was set to 8, the initial learning rate was set to 0.002324, and the Adam optimization strategy was used. The network was trained with a fine-tuning approach to reduce training time: first on the COCO dataset and then on the UAV training set. Finally, the embedded computing performance was verified on a Jetson AGX Xavier (16 GB).

    In object detection, mAP and FPS are commonly used for evaluation, where FPS represents the model inference speed and mAP is calculated from the confusion matrix (see Table 2).

    Table  2.  Confusion matrix
    Annotated results    Predicted result
                         True    False
    True                 TP      FN
    False                FP      TN

    The average precision (AP) is the area enclosed by the PR curve plotted from precision (P) and recall (R). Precision and recall are calculated by Formula (3) and Formula (4), and AP is calculated by Formula (5):

    P = \frac{{T_{\rm{P}}}}{{T_{\rm{P}} + F_{\rm{P}}}} (3)
    R = \frac{{T_{\rm{P}}}}{{F_{\rm{N}} + T_{\rm{P}}}} (4)
    P_{\rm{AP}} = \displaystyle\sum\limits_{i = 1}^n {P(i)\Delta R(i)} (5)

    mAP is the average of the AP values over all c categories and is calculated by Formula (6):

    P_{\rm{mAP}} = \frac{{\displaystyle\sum {P_{{\rm{AP}}}^c} }}{{c}} (6)
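    For clarity, the following sketch shows how Formulas (3)-(5) are typically evaluated from a ranked list of detections at a fixed IoU threshold (single-class, all-point form; the sample numbers are illustrative only):

    import numpy as np

    def average_precision(scores, is_tp, num_gt):
        """AP from detection confidences, TP/FP flags and the number of ground truths."""
        order = np.argsort(-np.asarray(scores))        # rank detections by confidence
        tp = np.asarray(is_tp, dtype=float)[order]
        fp = 1.0 - tp
        tp_cum, fp_cum = np.cumsum(tp), np.cumsum(fp)
        precision = tp_cum / (tp_cum + fp_cum)         # Formula (3) at each rank
        recall = tp_cum / max(num_gt, 1)               # Formula (4) at each rank
        # Formula (5): sum of P(i) * delta R(i) over the ranked list
        return float(np.sum(precision * np.diff(recall, prepend=0.0)))

    # mAP (Formula (6)) is the mean of AP over classes; with one UAV class, mAP equals AP
    ap = average_precision(scores=[0.9, 0.8, 0.7, 0.6], is_tp=[1, 1, 0, 1], num_gt=5)
    print(round(ap, 3))  # 0.55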

    The IoU (intersection over union) threshold is set to 0.5 to test the algorithm before and after the improvement. Fig. 7 shows the loss curves: the improved YOLOv4 converges better on the UAV dataset, and the loss drops below 0.6 after training. The PR curves plotted from recall and precision are shown in Fig. 8, where a curve closer to the top-right corner indicates better detection performance. The PR curve of the improved YOLOv4 completely envelops that of the original YOLOv4, demonstrating its stronger detection ability. Fig. 9 shows the detection results for low-slow-small UAVs. Compared with the original YOLOv4, the improved YOLOv4 adds a small-UAV prediction branch and adjusts the candidate boxes, which reduces missed and false detections and improves the prediction accuracy of the size and position of the bounding boxes.

    Figure  7.  Comparison of loss curves during training of YOLOv4 and improved YOLOv4
    Figure  8.  Comparison of precision-recall curves of YOLOv4 and improved YOLOv4
    Figure  9.  Comparison of detection results of YOLOv4 and improved YOLOv4 on UAV dataset

    The comparison results of mAP and FPS of different algorithms are shown in Table 3 and Fig.10.

    Table  3.  Detection effects of different algorithms
    Algorithm    Faster R-CNN    SSD      YOLOv3    YOLOv4    Improved YOLOv4
    mAP/%        74.5            76.1     76.5      79.6      86.7
    FPS          21.70           28.93    43.57     53.11     49.26
    Figure  10.  Detection effects of different algorithms

    The mAP of the improved YOLOv4 is 7.1% higher than that of the original YOLOv4, while its FPS on the NVIDIA GeForce GTX 1080 Ti is 49, slightly lower than that of the original YOLOv4. The experiments show that the improved YOLOv4 outperforms YOLOv3, SSD and the other algorithms in both mAP and FPS, achieves a good balance between speed and accuracy, and has a stronger detection ability for small UAV targets.

    Sparse training was performed with local sparsity-rate decay, setting the sparsity rate to 0.001 and the number of epochs to 300. In the last 50% of the training phase, 85% of the network channels were sparsified normally, and the remaining 15% of the channels were sparsified with a sparsity rate of 1% to prevent the model accuracy from decreasing sharply.

    The weights of the BN (batch normalization) layers before and after sparse training are shown in Fig. 11. During training, the weight distribution gradually approaches 0, and channels with weights close to 0 are pruned preferentially. The pruning rate is set to 0.8 and the number of pruned layers to 16, so a total of 27 137 channels and 48 layers are pruned. The pruning result is shown in Fig. 12. The weights in the middle part of the network are closer to 0, so most of these channels are pruned, and the clipped layers are those with fewer remaining channels after channel pruning. Table 4 shows the network performance before and after pruning. After pruning, the number of model parameters is reduced to 7.5% of the original, the model size is reduced from 248 MB to 18.5 MB, and the detection speed is improved from 49 frame/s to nearly 75 frame/s (1080 Ti). Finally, the mAP is recovered to 85.8% with 100 epochs of model fine-tuning.

    Figure  11.  Weight distribution of BN layers in sparse training
    Figure  12.  Comparison of network layers and channels in improved YOLOv4 before and after pruning
    Table  4.  Comparison of network pruning results with different parameters
    Channel pruning rate    Number of pruning layers    mAP/%    Number of parameters    Inference time/s
    0                       0                           86.24    9564967464              0.0203
    0.8                     16                          80.74    834854054               0.0132

    Figure 13 shows the comparison between the pruned model and YOLOv4-tiny. Although the detection speed of the lightweight model based on the improved YOLOv4 is slightly lower than that of YOLOv4-tiny, it has greater advantages than YOLOv4-tiny in terms of precision, recall and mAP. Experiments show that the network is suitable for low-slow-small UAV detection.

    Figure  13.  Performance comparison of prune-improve-YOLOv4 and YOLOv4-tiny

    The model data are stored as 32-bit single-precision floating-point numbers (FP32), which occupies a large amount of memory and increases inference time. Using lower-precision data (FP16, INT8) for inference reduces the storage space and bandwidth required for computation and improves inference speed. In this experiment, the prune-improve-YOLOv4 is accelerated and quantized with TensorRT on the Xavier platform. The Xavier runs at 30 W and the test image size is 3×416×416 pixels; the mAP and inference time under different precisions are shown in Table 5.
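    One common route for this deployment is to export the pruned PyTorch model to ONNX and then build a TensorRT engine on the Xavier; a minimal sketch is given below (the file names are placeholders, and the trtexec command is shown as a comment because it runs outside Python):

    import torch

    def export_onnx(model, onnx_path="prune_improve_yolov4.onnx", size=416):
        """Export the pruned model to ONNX so that TensorRT can build an engine from it."""
        model.eval()
        dummy = torch.randn(1, 3, size, size)  # matches the 3x416x416 test image size
        torch.onnx.export(model, dummy, onnx_path,
                          input_names=["images"], output_names=["predictions"],
                          opset_version=11)
        return onnx_path

    # On the Xavier, a half-precision engine can then be built with TensorRT, e.g.:
    #   trtexec --onnx=prune_improve_yolov4.onnx --fp16 --saveEngine=yolov4_fp16.trt
    # (INT8 mode additionally requires a calibration dataset.)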

    Table  5.  Inference performance of prune-improve-YOLOv4 deployed with different precision on Xavier
    Precision            Torch    Trt FP32    Trt FP16    Trt INT8
    mAP/%                85.8     85.8        85.8        68.4
    Inference time/ms    58.50    27.59       16.83       11.46

    The mAP shows almost no drop for half-precision (FP16) inference, but drops considerably for INT8 inference. In this paper, half-precision (FP16) deployment is therefore adopted, achieving 85.8% mAP and a detection speed of nearly 60 frame/s on the Xavier; a detected image is shown in Figure 14. Experiments show that the prune-improve-YOLOv4 can meet the real-time and high-precision requirements for detecting low-slow-small UAV targets on an embedded platform.

    Figure  14.  Detection result of prune-improve-YOLOv4 deployed on Xavier platform

    Aiming at the difficulties that small UAV targets carry little semantic information and are not visually obvious, this paper first improves the original YOLOv4 network in three aspects: network structure, small-target enhancement and anchor-box allocation. The improved YOLOv4 network can detect most UAVs, the positions of the bounding boxes are more accurate, and recall, precision and mAP are all greatly improved. Then, by setting the channel pruning rate to 0.8 and the number of pruned layers to 16, the network parameters and inference time are greatly reduced. Finally, 85.8% mAP and nearly 60 frame/s real-time detection are achieved on the Xavier. Experimental results show that the proposed algorithm achieves higher accuracy and speed than YOLOv4 in small-target detection and can be used on embedded equipment for real-time detection of low-slow-small UAVs.

  • [1]
    FREUND I. Looking through walls and around corners[J]. Physica A,1990,168:49-65. doi: 10.1016/0378-4371(90)90357-X
    [2]
    GU M, GAN X, DENG X. Microscopic imaging through turbid media[M]. Berlin, Heidelberg: Springer, 2015.
    [3]
    MOSK A P, LAGENDIJK A, LEROSEY G, et al. Controlling waves in space and time for imaging and focusing in complex media[J]. Nature Photonics,2012,6:283-292. doi: 10.1038/nphoton.2012.88
    [4]
    KATZ O, SMALL E, SILBERBERG Y. Looking around corners and through thin turbid layers in real time with scattered incoherent light[J]. Nature Photonics,2012,6:549-553. doi: 10.1038/nphoton.2012.150
    [5]
    VELLEKOOP I M, MOSK A P. Focusing coherent light through opaque strongly scattering media[J]. Optics Letters,2007,32(16):2309-2311. doi: 10.1364/OL.32.002309
    [6]
    YAQOOB Z, PSALTIS D, FELD M, et al. Optical phase conjugation for turbidity suppression in biological samples[J]. Nature Photonics,2008,2:110-115. doi: 10.1038/nphoton.2007.297
    [7]
    HSIEH C L, PU Y, GRANGE R, et al. Imaging through turbid layers by scanning the phase conjugated second harmonic radiation from a nanoparticle[J]. Optics Express,2010,18:20723-20731. doi: 10.1364/OE.18.020723
    [8]
    WANG Y M, JUDKEWITZ B, DIMARZIO C A, et al. Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light[J]. Nature Communications,2012,3:928. doi: 10.1038/ncomms1925
    [9]
    POPOFF S, LEROSEY G, FINK M, et al. Image transmission through an opaque material[J]. Nature Communications,2010,1:81-86. doi: 10.1038/ncomms1078
    [10]
    POPOFF S M, LEROSEY G, CARMINATI R, et al. Measuring the transmission matrix in optics: an approach to the study and control of light propagation in disordered media[J]. Physical Review Letters,2010,104:100601-100604.
    [11]
    CHOI Y, YOON C, KIM M, et al. Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber[J]. Physical Review Letters,2012,109:203901-203905.
    [12]
    BERTOLOTTI J, PUTTEN E G, BLUM C, et al. Non-invasive imaging through opaque scattering layers[J]. Nature,2012,491:232-234. doi: 10.1038/nature11578
    [13]
    KATZ O, HEIDMANN P, FINK M, et al. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations[J]. Nature Photonics,2014,8:784-790. doi: 10.1038/nphoton.2014.189
    [14]
    WU T, KATZ O, SHAO X, et al. Single-shot diffraction-limited imaging through scattering layers via bispectrum analysis[J]. Optics Letters,2016,41(21):5003-5006. doi: 10.1364/OL.41.005003
    [15]
    LI L, LI Q, SUN S, et al. Imaging through scattering layers exceeding memory effect range with spatial-correlation-achieved point-spread-function[J]. Optics Letters,2018,43(8):1670-1673. doi: 10.1364/OL.43.001670
    [16]
    XU X, XIE X, THENDIYAMMAL A, et al. Imaging of objects through a thin scattering layer using a spectrally and spatially separated reference[J]. Optics Express,2018,26(12):15073-15083. doi: 10.1364/OE.26.015073
    [17]
    XIE X, ZHUANG H, ZHOU J, et al. Extended depth-resolved imaging through a thin scattering medium with PSF manipulation[J]. Scientific Reports,2018,8:4585-4594. doi: 10.1038/s41598-018-22966-7
    [18]
    SAUNDERS C, MURRAY B J, GOYAL V K. Computational periscopy with an ordinary digital camera[J]. Nature,2019,565(7740):472-475. doi: 10.1038/s41586-018-0868-6
    [19]
    WANG X, JIN X, LI J. Blind position detection for large field-of-view scattering imaging[J]. Photonics Research,2020,8(6):920-928. doi: 10.1364/PRJ.388522
    [20]
    LI W, LIU J, HE S, et al. Multitarget imaging through scattering media beyond the 3D optical memory effect[J]. Optics Letters,2020,45(10):2692-2695. doi: 10.1364/OL.388552
    [21]
    YANG W, LI G, SITU G. Imaging through scattering media with the auxiliary of a known reference object[J]. Scientific Reports,2018,8:9614-9622. doi: 10.1038/s41598-018-27754-x
    [22]
    NEWMAN J A, LUO Q, WEBB K J. Imaging hidden objects with spatial speckle intensity correlations over object position[J]. Physical Review Letters,2016,116(7):073902.
    [23]
    EDREI E, SCARCELLI G. Optical imaging through dynamic turbid media using the Fourier-domain shower-curtain effect[J]. Optica,2016,3(1):71-74. doi: 10.1364/OPTICA.3.000071
    [24]
    XIE X S, QI Z H, ZHOU J Y, et al. Non-invasive optical imaging using the extension of the Fourier-domain shower-curtain effect[J]. Optics Letters,2021,46(1):98-101. doi: 10.1364/OL.415181
    [25]
    XIE X S, LIU Y K, ZHOU J Y, et al. Speckle correlation imaging: from point spread functions to light field plenoptics (invited review)[J]. Acta Optica Sinica,2020,40(1):0111004.
    [26]
    HE H X, XIE X S, LIU Y K, et al. Exploiting the point spread function for optical imaging through a scattering medium based on deconvolution method[J]. Journal of Innovative Optical Health Sciences,2019,12(8):1930005.
    [27]
    ZHANG Li, LIU Yuanshuo, RONG Zhengyu, et al. Measuring the refractive index of transparent materials using laser speckle[J]. Physical Experiment,2021,41(6):37-40. doi: 10.19655/j.cnki.1005-4642.2021.06.005
    [28]
    WANG Zhijun, YU Zhijing, MA Kai, et al. Surface fitting method for digital speckle sub-pixel measurement of small angular displacement[J]. Journal of Applied Optics,2017,38(2):256-263.
    [29]
    HAN Gang, XU Yae, SHEN Yang, et al. Application of speckle technique in laser homing guided weapon simulation system[J]. Journal of Applied Optics,2015,36(3):356-361. doi: 10.5768/JAO201536.0301004
    [30]
    WU Fan, WU Sijin, LI Weixian, et al. Paper thickness measurement by digital speckle projection[J]. Journal of Applied Optics,2019,40(5):847-852. doi: 10.5768/JAO201940.0503002
    [31]
    GOODMAN J W. Statistical properties of laser speckle patterns[M]//DAINTY J C. Laser speckle and related phenomena. Topics in Applied Physics, vol 9. Berlin, Heidelberg: Springer, 1975: 9-75.
    [32]
    NOVOTNY L, HECHT B. Principles of nano-optics[M]. Cambridge: Cambridge University Press, 2006: 93-94.
    [33]
    RICHARDS B, WOLF E. Electromagnetic diffraction in optical systems II. structure of the image field in an aplanatic system[J]. Proceedings of the Royal Society of London. Series A,1959,253:358-379.
    [34]
    VELLEKOOP I M, MOSK A P. Focusing coherent light through opaque strongly scattering media [J]. Optics Letters. 2007, 32(16): 2309–2311.
    [35]
    XIE Xiangsheng, WEI Jie, ZHOU Jianying, et al. Imaging principle and application of scattering lens[J]. Physical Experiment,2021,41(8):1005-4642.
    [36]
    GOODMAN J W. Speckle phenomena in optics[M]. CAO Qizhi, CHEN Jiabi, trans. Beijing: Science Press, 2009: 65-67.
    [37]
    FREUND I. Looking through walls and around corners [J]. Physica A: Statistical Mechanics and its Applications, 1990, 168: 49–65.
    [38]
    YANG L X, XIE X S, ZHOU J Y, et al. Minimized spot of annular radially polarized focusing beam[J]. Optical. Letter,2013,38:1331-1333. doi: 10.1364/OL.38.001331
    [39]
    HU Y W, FU S H, LI Z, et al. Focusing optical waves with a rotationally symmetric sharp-edge aperture[J]. Optics Communications,2018,413:136-140. doi: 10.1016/j.optcom.2017.12.043
