
Smart Agriculture ›› 2020, Vol. 2 ›› Issue (4): 103-115. DOI: 10.12133/j.smartag.2020.2.4.202010-SA006

• Special Issue: Agricultural Robots and Intelligent Equipment •

Automatic Weed Detection Method Based on Fusion of Multiple Image Processing Algorithms

MIAO Zhonghua1, YU Xiaoyao1, XU Meihong2, HE Chuangxin1, LI Nan1, SUN Teng1

  1. School of Mechatronic Engineering and Automation, Shanghai University, Shanghai 200444, China
  2. Shanghai Xinhong Ecological Agriculture Co., Ltd., Shanghai 202162, China

  • Received: 2020-10-29  Revised: 2020-11-30  Online: 2020-12-30
  • Supported by: Shanghai Science and Technology Promotion of Agriculture Project (202002080009F01466)
  • About the author: MIAO Zhonghua (1977-), male, Ph.D., Professor; research interests: intelligent equipment and robotics. E-mail: zhhmiao@shu.edu.cn
  • Corresponding author:


Abstract:

Automatic weeding is an active research topic in smart agriculture, offering benefits such as precise weed control, reduced labor cost, and less damage to crops. In recent years, many researchers have applied deep learning methods, such as convolutional neural networks (CNN) and recurrent neural networks (RNN), and have achieved decent results in automatic weed detection. However, these solutions still generally suffer from weak robustness and excessive reliance on large numbers of training samples. To address these problems, this study proposed a soybean field weed detection and localization method based on the fusion of multiple image processing algorithms and designed a recognition algorithm for automatic weed identification and removal. Images and video streams were obtained through a camera mounted on a mobile robot platform. First, the soil background was segmented from the foreground (weeds and crops) by thresholding a specific color-space channel (hue). Then, three different methods, namely the area threshold method, template matching, and the saturation threshold method, were used to classify crops and weeds. Finally, based on a proposed voting method, the three recognition methods were weighted and fused to achieve more accurate recognition and localization of the crops and weeds in the image. Experimental validation was carried out using samples obtained from the mobile platform, and the results showed that the average accuracy of the proposed weed detection algorithm reached 98.21%, with an average recognition error of only 1.79%. Compared with each single method (area threshold, template matching, and saturation threshold), the weighted-voting fusion raised the average accuracy by 5.71%. Although the validation samples covered a limited range of scenarios, the high recognition accuracy demonstrated the practicability of the proposed method. In addition, a robustness test was carried out on images with raindrop and shadow interference in complex, unstructured agricultural scenes; more than 90% of the plants were successfully detected, verifying the good adaptability and robustness of the proposed method. The proposed algorithm can provide technical support for smart agriculture applications such as weeding operations by intelligent mobile robots.
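The pipeline summarized above (hue-based background segmentation, three independent crop/weed classifiers, and a weighted vote) can be illustrated with a short sketch. The following Python/OpenCV code is not the authors' implementation: the HSV range, area and saturation thresholds, matching score, and vote weights are placeholder assumptions chosen only to show how the three votes could be fused.

```python
# Minimal sketch of the multi-algorithm voting pipeline described in the abstract.
# All threshold values, the crop template, and the vote weights are illustrative
# assumptions, not the values used in the paper.
import cv2
import numpy as np

# --- hypothetical parameters -------------------------------------------------
HUE_RANGE = (np.array([25, 40, 40]), np.array([95, 255, 255]))  # "green" vegetation in HSV
AREA_MIN = 2000          # regions at least this large are voted "crop" by the area rule
SAT_MIN = 90             # mean saturation above this is voted "crop"
MATCH_MIN = 0.55         # template-matching score above this is voted "crop"
WEIGHTS = {"area": 0.4, "template": 0.35, "saturation": 0.25}   # assumed vote weights

def classify_regions(bgr_image, crop_template_gray):
    """Return a list of (bounding_box, label) pairs, where label is 'crop' or 'weed'."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Step 1: segment the soil background with a color (hue) threshold.
    foreground = cv2.inRange(hsv, *HUE_RANGE)

    results = []
    contours, _ = cv2.findContours(foreground, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    th, tw = crop_template_gray.shape[:2]
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if w * h < 100:            # ignore tiny noise blobs
            continue

        # Step 2a: area-threshold vote.
        votes = {"area": cv2.contourArea(cnt) >= AREA_MIN}

        # Step 2b: template-matching vote (region resized to the template size).
        roi_gray = cv2.cvtColor(bgr_image[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        roi_gray = cv2.resize(roi_gray, (tw, th))
        score = cv2.matchTemplate(roi_gray, crop_template_gray, cv2.TM_CCOEFF_NORMED)[0, 0]
        votes["template"] = score >= MATCH_MIN

        # Step 2c: saturation-threshold vote (mean S channel inside the region).
        region_mask = np.zeros(foreground.shape, dtype=np.uint8)
        cv2.drawContours(region_mask, [cnt], -1, 255, thickness=-1)
        votes["saturation"] = cv2.mean(hsv, mask=region_mask)[1] >= SAT_MIN

        # Step 3: weighted-vote fusion decides crop vs. weed.
        crop_score = sum(WEIGHTS[k] for k, v in votes.items() if v)
        label = "crop" if crop_score >= 0.5 else "weed"
        results.append(((x, y, w, h), label))
    return results
```

A caller would pass a BGR frame from the platform camera together with a grayscale crop template; adjusting the assumed weights changes how strongly each of the three classifiers influences the final crop/weed decision.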

Key words: weed detection, vote weight, algorithm fusion, image processing, automatic detection

CLC number: