
Smart Agriculture, 2023, Vol. 5, Issue (3): 121-131. DOI: 10.12133/j.smartag.SA202308005

• Special Issue: Crop Information Monitoring Technology •

A Multi-Focal Green Plant Image Fusion Method Based on Stationary Wavelet Transform and Parameter-Adaptation Dual Channel Pulse-Coupled Neural Network

LI Jiahao1, QU Hongjun1, GAO Mingzhe2, TONG Dezhi3, GUO Ya1,2,3

  1. Key Laboratory of Advanced Process Control in Light Industry, Ministry of Education, School of Internet of Things Engineering, Jiangnan University, Wuxi 214122, Jiangsu, China
    2. Logistics Management Service Environmental Centre of Jiangnan University, Wuxi 214122, Jiangsu, China
    3. Chloview Technology (Wuxi) Co., Ltd., Wuxi 214000, Jiangsu, China
  • Received: 2023-07-31    Online: 2023-09-30
  • Supported by:
    International Cooperation Project of the National Natural Science Foundation of China (51961125102); National Natural Science Foundation of China (31771680); Independent Innovation Fund of Agricultural Science and Technology of Jiangsu Province (SCX(22)3669)
  • Author biography:
    LI Jiahao, research interests: development of plant phenotyping equipment. Email:
  • Corresponding author:
    GUO Ya, Ph.D., Professor, research interests: system modeling and control, big data analysis, sensors and instruments. E-mail:


Abstract:

[Objective] Constructing a three-dimensional (3D) point cloud model of a green plant requires a large number of clear images. Because of the limited depth of field of the lens, part of the image is out of focus when a plant scene with a large depth is captured, causing edge blurring and loss of texture detail and thereby reducing the accuracy of the 3D point cloud model. Existing processing algorithms find it difficult to balance processing quality and processing speed, and their practical performance is unsatisfactory. The purpose of this research was to improve the quality of the fused image while maintaining processing speed.

[Methods] A plant image fusion method was proposed that combines a parameter-adaptive dual-channel pulse-coupled neural network (PADC-PCNN) in the non-subsampled shearlet transform (NSST) domain with the stationary wavelet transform (SWT). First, the RGB plant image was separated into its three color channels. The G channel, which carries most of the texture detail, was decomposed by NSST into four decomposition layers and 16 directions, giving one low-frequency subband and 64 high-frequency subbands. The low-frequency subband was fused with a gradient-energy rule, and the high-frequency subbands were fused with the PADC-PCNN rule, in which the weighted eight-neighborhood modified Laplacian served as the link strength to enhance the fusion of detail features. The R and B channels, which contain mostly contour and background information, were fused with the SWT, which is fast and shift-invariant and therefore suppresses the pseudo-Gibbs effect. Using a high-precision, high-stability multi-focal-length plant image acquisition system, 480 images in eight experimental groups were collected: an indoor light group, a natural light group, a strong light group, a distant view group, a close view group, an overhead view group, a red group, and a yellow group, covering variations in illumination, distance, and plant color. In addition, to study the applicable range of the algorithm, the focal length yielding a clear plant image was taken as the reference (18 mm), and the acquisition was repeated with the focal length adjusted four times on each side of the reference in steps of 1.5 mm, forming the multi-focus experimental group. Each experimental group was assessed both subjectively and objectively. Subjective evaluation was based on human visual inspection and detail comparison; objective evaluation used four commonly used indicators: average gradient (AG), spatial frequency (SF), entropy (EN), and standard deviation (SD).

[Results and Discussions] The proposed PADC-PCNN-SWT algorithm was compared with five commonly used algorithms: fast guided filtering (FGF), random walk (RW), the non-subsampled shearlet transform based PCNN (NSST-PCNN), the SWT, and the non-subsampled shearlet transform based parameter-adaptive dual-channel PCNN (NSST-PADC). In the objective evaluation of all groups except the red and yellow groups, every indicator of PADC-PCNN-SWT was second only to NSST-PADC, while its processing speed was on average 200.0% higher than that of NSST-PADC. Compared with the FGF, RW, NSST-PCNN, and SWT algorithms, PADC-PCNN-SWT improved the clarity index by 5.6%, 8.1%, 6.1%, and 17.6%, respectively, and the spatial frequency index by 2.9%, 4.8%, 7.1%, and 15.9%, respectively, while the differences in information entropy and standard deviation were below 1% and therefore negligible. In the yellow and red groups, the fusion quality of the non-green parts of the images was clearly degraded: relative to the other algorithms, the clarity index of PADC-PCNN-SWT decreased by 1.1% on average and the spatial frequency by 5.1% on average, whereas the indicators for the green parts of the fused images remained consistent with those of the other experimental groups and the fusion effect was good. The PADC-PCNN-SWT algorithm therefore fuses well only for green plants. Finally, a comparison of fused image quality over four focal-length ranges showed that the algorithm restored contours and colors well for out-of-focus images captured in the 15-21 mm range, corresponding to a focusing range of about 6 mm.

[Conclusions] The multi-focal-length image fusion algorithm based on PADC-PCNN-SWT achieves good detail fusion and high fusion efficiency while maintaining fusion quality, providing high-quality data for building 3D point cloud models of green plants and saving a substantial amount of time.
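Editor's note: to give readers a concrete sense of the SWT branch described in the abstract, the following minimal Python sketch (using NumPy and PyWavelets; it is not the authors' code) fuses one color channel of two differently focused images with a stationary wavelet transform. The fusion rules shown here, averaging the low-frequency band and selecting the larger-magnitude high-frequency coefficients, are common SWT baselines assumed purely for illustration; the paper's G-channel NSST + PADC-PCNN scheme and its gradient-energy low-frequency rule are not reproduced.

import numpy as np
import pywt  # PyWavelets


def swt_fuse_channel(img_a, img_b, wavelet="sym4", level=2):
    """Illustrative SWT fusion of one channel of two multi-focus images."""
    # SWT requires the image size to be a multiple of 2**level; crop if needed.
    h = img_a.shape[0] - img_a.shape[0] % (2 ** level)
    w = img_a.shape[1] - img_a.shape[1] % (2 ** level)
    a = np.asarray(img_a[:h, :w], dtype=np.float64)
    b = np.asarray(img_b[:h, :w], dtype=np.float64)

    coeffs_a = pywt.swt2(a, wavelet, level=level)
    coeffs_b = pywt.swt2(b, wavelet, level=level)

    fused = []
    for (ca_a, (ch_a, cv_a, cd_a)), (ca_b, (ch_b, cv_b, cd_b)) in zip(coeffs_a, coeffs_b):
        ca_f = 0.5 * (ca_a + ca_b)  # low-frequency band: simple average
        # high-frequency bands: keep the coefficient with the larger magnitude
        details_f = tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                          for da, db in ((ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)))
        fused.append((ca_f, details_f))
    return pywt.iswt2(fused, wavelet)


# Example use (hypothetical arrays): fuse the R channels of a near-focused and a
# far-focused capture; in a pipeline following the abstract, only the R and B
# channels would be handled this way, with the G channel fused by PADC-PCNN.
# r_fused = swt_fuse_channel(img_near[..., 0], img_far[..., 0])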

Key words: multi-focal length, image fusion, stationary wavelet transform, parameter-adaptive, pulse-coupled, neural network
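Editor's note: as a reference for the objective evaluation described in the abstract, the following NumPy sketch (not from the paper) computes the four indicators, average gradient (AG), spatial frequency (SF), entropy (EN), and standard deviation (SD), using common textbook definitions on an 8-bit grayscale fused image; the authors' exact formulations may differ.

import numpy as np


def average_gradient(img):
    """AG: mean magnitude of local intensity gradients (a clarity measure)."""
    g = np.asarray(img, dtype=np.float64)
    dx = g[:-1, 1:] - g[:-1, :-1]   # horizontal differences
    dy = g[1:, :-1] - g[:-1, :-1]   # vertical differences
    return float(np.mean(np.sqrt((dx ** 2 + dy ** 2) / 2.0)))


def spatial_frequency(img):
    """SF: combined row and column frequency of the image."""
    g = np.asarray(img, dtype=np.float64)
    rf = np.sqrt(np.mean((g[:, 1:] - g[:, :-1]) ** 2))  # row frequency
    cf = np.sqrt(np.mean((g[1:, :] - g[:-1, :]) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))


def entropy(img, bins=256):
    """EN: Shannon entropy (bits) of the gray-level histogram (8-bit image assumed)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255))
    p = hist.astype(np.float64) / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))


def standard_deviation(img):
    """SD: standard deviation of pixel intensities."""
    return float(np.std(np.asarray(img, dtype=np.float64)))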