Welcome to Smart Agriculture

Content of Information Perception and Acquisition in our journal

    A Rapid Detection Method for Wheat Seedling Leaf Number in Complex Field Scenarios Based on Improved YOLOv8
    HOU Yiting, RAO Yuan, SONG He, NIE Zhenjun, WANG Tan, HE Haoxu
    Smart Agriculture    2024, 6 (4): 128-137.   DOI: 10.12133/j.smartag.SA202403019

    [Objective] The enumeration of wheat leaves is an essential indicator for evaluating the vegetative state of wheat and predicting its yield potential. Currently, wheat leaf counting in field settings is predominantly manual, which is both time-consuming and labor-intensive. Despite advancements, the efficiency and accuracy of existing automated detection and counting methodologies have yet to satisfy the stringent demands of practical agricultural applications. This study aims to develop a method for the rapid quantification of wheat leaves by refining the precision of wheat leaf tip detection. [Methods] To enhance the accuracy of wheat leaf detection, an image dataset of wheat leaves across various developmental stages—seedling, tillering, and overwintering—was first constructed under two distinct lighting conditions, using visible light images sourced from both mobile devices and field camera equipment. Considering the robust feature extraction and multi-scale feature fusion capabilities of the YOLOv8 network, the foundational architecture of the proposed model was based on YOLOv8, into which a coordinate attention mechanism was integrated. To expedite the model's convergence, the loss functions were optimized. Furthermore, a dedicated small object detection layer was introduced to refine the recognition of wheat leaf tips, which are typically difficult for conventional models to discern due to their small size and resemblance to background elements. The resulting deep learning network, named YOLOv8-CSD and tailored for the recognition of small targets such as wheat leaf tips, ascertains the leaf count by detecting the number of leaf tips present within the image.
A comparative analysis was conducted between the YOLOv8-CSD model, the original YOLOv8, and six other prominent network architectures, including Faster R-CNN, Mask R-CNN, YOLOv7, and SSD, within a uniform training framework, to evaluate the model's effectiveness. In parallel, the performance of both the original and YOLOv8-CSD models was assessed under challenging conditions, such as the presence of weeds, occlusions, and fluctuating lighting, to emulate complex real-world scenarios. Ultimately, the YOLOv8-CSD model was deployed for wheat leaf number detection in intricate field conditions to confirm its practical applicability and generalization potential. [Results and Discussions] The presented methodology achieved a recognition precision of 91.6% and an mAP0.5 of 85.1% for wheat leaf tips, indicative of its robust detection capabilities. The method excelled in adaptability within complex field environments, featuring an autonomous adjustment mechanism for different lighting conditions, which significantly enhanced the model's robustness. The minimal rate of missed detections in wheat seedlings' leaf counting underscored the method's suitability for wheat leaf tip recognition in intricate field scenarios, consequently elevating the precision of wheat leaf number detection. The algorithm embedded within this model demonstrated a heightened capacity to discern and focus on the unique features of wheat leaf tips during detection. This capability was essential for overcoming challenges such as small target sizes, similar background textures, and the intricacies of feature extraction. The model's consistent performance across diverse conditions, including scenarios with weeds, occlusions, and fluctuating lighting, further substantiated its robustness and its readiness for real-world application.
[Conclusions] This research offers a valuable reference for accurately detecting wheat leaf numbers in intricate field conditions, as well as robust technical support for the comprehensive and high-quality assessment of wheat growth.
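Since each wheat leaf ends in exactly one visible tip, the counting stage described above reduces to tallying detected leaf-tip boxes above a confidence threshold. A minimal sketch of that post-processing step (the detection tuple format, threshold value, and function name are illustrative assumptions, not from the paper):

```python
def count_leaf_tips(detections, conf_threshold=0.5, tip_class=0):
    """Count wheat leaf tips from (class_id, confidence, box) detections.

    The per-plant leaf count equals the number of confidently detected
    tips, since each leaf contributes exactly one tip to the image.
    """
    return sum(
        1 for class_id, conf, _box in detections
        if class_id == tip_class and conf >= conf_threshold
    )

# Example: four raw detections, one below the confidence threshold
detections = [
    (0, 0.91, (10, 12, 18, 20)),
    (0, 0.87, (40, 35, 47, 44)),
    (0, 0.32, (60, 10, 66, 18)),  # low-confidence candidate, discarded
    (0, 0.78, (75, 50, 82, 58)),
]
print(count_leaf_tips(detections))  # 3
```

Raising the threshold trades missed detections against false positives, which is why the paper evaluates missed-detection rates under weeds, occlusion, and varying lighting.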

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Recognition Method of Facility Cucumber Farming Behaviours Based on Improved SlowFast Model
    HE Feng, WU Huarui, SHI Yangming, ZHU Huaji
    Smart Agriculture    2024, 6 (3): 118-127.   DOI: 10.12133/j.smartag.SA202402001

    [Objective] The identification of agricultural activities plays a crucial role in greenhouse vegetable production, particularly in the precise management of cucumber cultivation. By monitoring and analyzing the timing and procedures of agricultural operations, effective guidance can be provided for agricultural production, leading to increased crop yield and quality. However, in practical applications, the recognition of agricultural activities in cucumber cultivation faces significant challenges. The complex and ever-changing growing environment of cucumbers, including dense foliage and internal facility structures that may obstruct visibility, poses difficulties in recognizing agricultural activities. Additionally, agricultural tasks involve various stages such as planting, irrigation, fertilization, and pruning, each with specific operational intricacies and skill requirements. This requires the recognition system to accurately capture the characteristics of various complex movements to ensure the accuracy and reliability of the entire recognition process. To address these complex challenges, an innovative algorithm, SlowFast-SMC-ECA (SlowFast-Spatio-Temporal Excitation, Channel Excitation, Motion Excitation-Efficient Channel Attention), was proposed for the recognition of agricultural activity behaviors in cucumber cultivation within facilities. [Methods] This algorithm represents a significant enhancement to the traditional SlowFast model, with the goal of more accurately capturing hand motion features and crucial dynamic information in agricultural activities. The fundamental concept of the SlowFast model involved processing video streams through two distinct pathways: the Slow Pathway concentrated on capturing spatial detail information, while the Fast Pathway emphasized capturing temporal changes in rapid movements. To further improve information exchange between the Slow and Fast pathways, lateral connections were incorporated at each stage.
Building upon this foundation, the study introduced innovative enhancements to both pathways, improving the overall performance of the model. In the Fast Pathway, a multi-path residual network (SMC) concept was introduced, incorporating convolutional layers between different channels to strengthen temporal interconnectivity. This design enabled the algorithm to sensitively detect subtle temporal variations in rapid movements, thereby enhancing the recognition capability for swift agricultural actions. Meanwhile, in the Slow Pathway, the traditional residual block was replaced with the ECA-Res structure, integrating an efficient channel attention (ECA) mechanism to improve the model's capacity to capture channel information. The adaptive adjustment of channel weights by the ECA-Res structure enriched feature expression and differentiation, enhancing the model's understanding and grasp of key spatial information in agricultural activities. Furthermore, to address the challenge of class imbalance in practical scenarios, a balanced loss function (Smoothing Loss) was developed. By introducing regularization coefficients, this loss function automatically adjusts the weights of different categories during training, effectively mitigating the impact of class imbalance and ensuring improved recognition performance across all categories. [Results and Discussions] The experimental results clearly demonstrated the outstanding performance of the improved SlowFast-SMC-ECA model on a specially constructed agricultural activity dataset. Specifically, the model achieved an average recognition accuracy of 80.47%, representing an improvement of approximately 3.5% compared to the original SlowFast model. This achievement highlighted the effectiveness of the proposed improvements.
Further ablation studies revealed that replacing traditional residual blocks with the multi-path residual network (SMC) and ECA-Res structures in the second and third stages of the SlowFast model led to superior results, highlighting that the improvements made to the Fast and Slow Pathways played a crucial role in enhancing the model's ability to capture details of agricultural activities. Additional ablation studies also confirmed the significant impact of these two improvements on the accuracy of agricultural activity recognition. Compared to existing algorithms, the improved SlowFast-SMC-ECA model exhibited a clear advantage in prediction accuracy. This not only validated the potential application of the proposed model in agricultural activity recognition but also provided strong technical support for the advancement of precision agriculture technology. In conclusion, through careful refinement and optimization of the SlowFast model, its recognition capabilities in complex agricultural scenarios were successfully enhanced, contributing valuable technological advancements to precision management in greenhouse cucumber cultivation. [Conclusions] By introducing advanced recognition technologies and intelligent algorithms, this study enhances the accuracy and efficiency of monitoring agricultural activities, and assists farmers and agricultural experts in managing and guiding the operational processes within planting facilities more efficiently. Moreover, the research outcomes are of immense value in improving the traceability system for agricultural product quality and safety, ensuring the reliability and transparency of agricultural product quality.
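The abstract names a balanced "Smoothing Loss" that reweights categories via regularization coefficients but does not give its formula here. One common way to realize the idea, inverse-frequency class weights softened by a regularization exponent plus label smoothing, can be sketched as follows (the weighting rule, exponent, and all names are assumptions for illustration, not the paper's definition):

```python
import math

def class_weights(counts, beta=0.5):
    """Inverse-frequency class weights, softened by regularization exponent beta."""
    total = sum(counts)
    raw = [(total / c) ** beta for c in counts]
    mean = sum(raw) / len(raw)
    return [w / mean for w in raw]  # normalize so the average weight is 1

def smoothed_weighted_ce(probs, label, weights, eps=0.1):
    """Cross-entropy with label smoothing and a per-class weight."""
    k = len(probs)
    loss = 0.0
    for c, p in enumerate(probs):
        target = (1 - eps) if c == label else 0.0
        target += eps / k  # spread eps of probability mass uniformly
        loss -= target * math.log(max(p, 1e-12))
    return weights[label] * loss

counts = [900, 80, 20]  # heavily imbalanced farming-behaviour classes
w = class_weights(counts)
print(w[2] > w[0])      # the rarest class receives the largest weight
```

Under this scheme the rare pruning-style actions contribute more to the gradient than the dominant classes, which is the balancing effect the abstract describes.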

    Identification Method of Kale Leaf Ball Based on Improved UperNet
    ZHU Yiping, WU Huarui, GUO Wang, WU Xiaoyan
    Smart Agriculture    2024, 6 (3): 128-137.   DOI: 10.12133/j.smartag.SA202401020

    [Objective] Kale is an important bulk vegetable crop worldwide; its main growth characteristics are its outer leaves and leaf bulb. The traits of the kale leaf bulb are crucial for adjusting water and fertilizer parameters in the field to achieve maximum yield. However, various factors such as soil quality, light exposure, leaf overlap, and shading can affect the growth of kale in practical field conditions. The similarity in color and texture between leaf bulbs and outer leaves complicates the segmentation process for existing recognition models. In this paper, a method for segmenting kale outer leaves and leaf bulbs against complex field backgrounds was proposed, using pixel values to determine leaf bulb size for intelligent field management. A semantic segmentation algorithm, UperNet-ESA, was proposed to efficiently and accurately segment nodular kale outer leaves and leaf bulbs in field scenes, using the morphological features of the leaf bulbs and outer leaves to realize intelligent field management of nodular kale. [Methods] The UperNet-ESA semantic segmentation algorithm, which uses the unified perceptual parsing network (UperNet) as an efficient semantic segmentation framework, is more suitable for extracting crop features in complex environments by integrating semantic information across different scales. The backbone network, responsible for feature extraction in the model, was improved using ConvNeXt. The similarity between kale leaf bulbs and outer leaves, along with leaf overlap affecting accurate target contour localization, posed challenges for the baseline network, leading to low accuracy. ConvNeXt effectively combines the strengths of convolutional neural networks (CNN) and Transformers, using design principles from Swin Transformer and building upon ResNet50 to create a highly effective network structure.
The simplicity of the ConvNeXt design not only enhances segmentation accuracy with minimal model complexity, but also positions it as a top performer among CNN architectures. In this study, the ConvNeXt-B version was chosen based on considerations of computational complexity and the background characteristics of the nodular kale image dataset. To enhance the model's perceptual acuity, block ratios for each stage were set at 3:3:27:3, with corresponding channel numbers of 128, 256, 512 and 1 024, respectively. Given the visual similarity between kale leaf bulbs and outer leaves, a high-efficiency channel attention mechanism was integrated into the backbone network to improve feature extraction in the leaf bulb region. By incorporating attention weights into feature mapping through residual inversion, attention parameters were cyclically trained within each block, resulting in feature maps with attentional weights. This iterative process facilitated the repeated training of attentional parameters and enhanced the capture of global feature information. To address challenges arising from direct pixel addition between up-sampling and local features, potentially leading to misaligned context in feature maps and erroneous classifications at kale leaf boundaries, a feature alignment module and a feature selection module were introduced into the feature pyramid network to refine target boundary information extraction and enhance model segmentation accuracy. [Results and Discussions] The UperNet-ESA semantic segmentation model outperforms the current mainstream UNet, PSPNet and DeepLabV3+ models in terms of segmentation accuracy, with mIoU and mPA reaching 92.45% and 94.32%, respectively, and an inference speed of up to 16.6 frames per second (fps).
The mPA values were better than those of the UNet model, the PSPNet model, and the DeepLabV3+ model with ResNet-50, MobileNetV2, and Xception backbones, showing improvements of 11.52%, 13.56%, 8.68%, 4.31%, and 6.21%, respectively. Similarly, the mIoU exhibited improvements of 12.21%, 13.04%, 10.65%, 3.26% and 7.11% compared to the mIoU of the UNet model, the PSPNet model, and the DeepLabV3+ model based on the ResNet-50, MobileNetV2, and Xception backbones, respectively. This performance enhancement can be attributed to the introduction of the ECA module and the improvement made to the feature pyramid network, which strengthen the judgement of target features at each stage to obtain effective global contextual information. In addition, although the PSPNet model had the fastest inference speed, its overall accuracy was too low for developing kale semantic segmentation models; the proposed model exhibited superior inference speed compared to all the other network models. [Conclusions] The experimental results showed that the UperNet-ESA semantic segmentation model proposed in this study outperforms the original network. The improved model achieves the best accuracy-speed balance among the current mainstream semantic segmentation networks. In upcoming research, the current model will be further optimized and enhanced, and the kale dataset will be expanded to include a wider range of samples of nodular kale leaf bulbs, providing a more robust and comprehensive theoretical foundation for intelligent kale field management.
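The reported mIoU and mPA can both be computed from a per-class confusion matrix over segmented pixels. A small self-contained sketch of the two metrics (the 3×3 matrix below is synthetic, purely to exercise the formulas):

```python
def miou_mpa(cm):
    """Mean IoU and mean pixel accuracy from a square confusion matrix.

    cm[i][j] = number of pixels of true class i predicted as class j.
    """
    n = len(cm)
    ious, pas = [], []
    for i in range(n):
        tp = cm[i][i]
        fn = sum(cm[i]) - tp                        # row minus diagonal
        fp = sum(cm[r][i] for r in range(n)) - tp   # column minus diagonal
        ious.append(tp / (tp + fp + fn))
        pas.append(tp / (tp + fn))                  # per-class pixel accuracy
    return sum(ious) / n, sum(pas) / n

cm = [[50, 2, 3],   # background
      [4, 40, 1],   # outer leaf
      [2, 1, 37]]   # leaf bulb
miou, mpa = miou_mpa(cm)
print(round(miou, 4), round(mpa, 4))
```

IoU penalizes false positives as well as false negatives, so mIoU is always at most mPA for the same matrix, matching the ordering of the 92.45% / 94.32% figures above.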

    Phenotypic Traits Extraction of Wheat Plants Using 3D Digitization
    ZHENG Chenxi, WEN Weiliang, LU Xianju, GUO Xinyu, ZHAO Chunjiang
    Smart Agriculture    2022, 4 (2): 150-162.   DOI: 10.12133/j.smartag.SA202203009

    Aiming at the difficulty of accurately extracting the phenotypic traits of plants and organs from images or point clouds, caused by the multiple tillers and serious cross-occlusion among organs of wheat plants, and to meet the needs of accurate phenotypic analysis of wheat plants, three-dimensional (3D) digitization was used to extract phenotypic parameters of wheat plants. Firstly, a digital representation method for wheat organs was given and a 3D digital data acquisition standard suitable for the whole growth period of wheat was formulated. According to this standard, data acquisition was carried out using a 3D digitizer. Based on the definition of phenotypic parameters and the semantic coordinate information contained in the 3D digitizing data, eleven conventional measurable phenotypic parameters in three categories were quantitatively extracted, including lengths, thicknesses, and angles of wheat plants and organs. Furthermore, two types of new parameters for shoot architecture and 3D leaf shape were defined. Plant girth was defined to quantitatively describe looseness or compactness by fitting the 3D discrete coordinates based on the least squares method. For leaf shape, wheat leaf curling and twisting were defined and quantified according to the direction change of the leaf surface normal vector. Three wheat cultivars, FK13, XN979, and JM44, at three stages (rising stage, jointing stage, and heading stage) were used for method validation. The Open3D library was used to process and visualize the wheat plant data. Visualization results showed that the acquired 3D digitization data of wheat plants were realistic, and the data acquisition approach was capable of presenting morphological differences among different cultivars and growth stages. Validation results showed that the errors of stem length, leaf length, stem thickness, and stem and leaf angle were relatively small; the R2 values were 0.93, 0.98, 0.93, and 0.85, respectively.
The errors of leaf width and leaf inclination angle were also satisfactory; the R2 values were 0.75 and 0.73. Because wheat leaves are narrow and easy to curl, and some leaves bend considerably, the errors of leaf width and leaf angle were relatively larger than those of the other parameters. The data acquisition procedure was rather time-consuming, while the data processing was quite efficient: it took around 133 ms to extract all mentioned parameters for a wheat plant containing 7 tillers and a total of 27 leaves. The proposed method can achieve convenient and accurate extraction of wheat phenotypes at the individual plant and organ levels, and provides technical support for wheat shoot architecture related research.
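Plant girth, as defined above, amounts to a least-squares circle fit through discrete plant coordinates. A self-contained sketch using the algebraic (Kasa) formulation on points projected onto the horizontal plane (the projection step, function names, and synthetic data are illustrative assumptions; the paper does not specify its exact fitting procedure):

```python
import math

def fit_circle(points):
    """Least-squares (Kasa) circle fit: solve x^2 + y^2 = a*x + b*y + c.

    Returns the center (a/2, b/2) and radius sqrt(c + a^2/4 + b^2/4),
    a girth-style compactness circle through the projected points.
    """
    # Accumulate the 3x3 normal equations A^T A p = A^T z, rows [x, y, 1].
    M = [[0.0] * 4 for _ in range(3)]
    for x, y in points:
        row, z = (x, y, 1.0), x * x + y * y
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            M[i][3] += row[i] * z
    # Gauss-Jordan elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    a, b, c = (M[i][3] / M[i][i] for i in range(3))
    return (a / 2, b / 2), math.sqrt(c + a * a / 4 + b * b / 4)

# Synthetic check: points sampled from a circle of radius 2 centered at (1, -1)
pts = [(1 + 2 * math.cos(0.1 * k), -1 + 2 * math.sin(0.1 * k)) for k in range(20)]
center, girth_radius = fit_circle(pts)
print(round(girth_radius, 3))  # 2.0
```

A larger fitted radius for the same tiller count indicates a looser shoot architecture, which is the compactness/looseness contrast the girth parameter is meant to quantify.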

    Identification and Counting of Silkworms in Factory Farm Using Improved Mask R-CNN Model
    HE Ruimin, ZHENG Kefeng, WEI Qinyang, ZHANG Xiaobin, ZHANG Jun, ZHU Yihang, ZHAO Yiying, GU Qing
    Smart Agriculture    2022, 4 (2): 163-173.   DOI: 10.12133/j.smartag.SA202201012

    Factory-like rearing of silkworm (Bombyx mori) using artificial diet for all instars is a brand-new silkworm rearing mode. Accurate feeding is one of the core technologies for saving cost and increasing efficiency in factory silkworm rearing, and automatic identification and counting of silkworms play a key role in realizing accurate feeding. In this study, a machine vision system was used to obtain digital images of silkworms during the main instars, and an improved Mask R-CNN model was proposed to detect the silkworms and residual artificial diet. The original Mask R-CNN was improved to cope with noisy annotation data by adding a pixel reweighting strategy and a bounding box fine-tuning strategy to the model framework, and a more robust model was trained to improve the detection and segmentation of silkworms and residual feed. Three different data augmentation methods were used to expand the training dataset. The influences of silkworm instar, data augmentation, and the overlap between silkworms on model performance were evaluated. Then the improved Mask R-CNN was used to detect silkworms and residual feed. The AP50 (average precision at IoU = 0.5) of the model for silkworm detection and segmentation were 0.790 and 0.795, respectively, and the detection accuracy was 96.83%. The detection and segmentation AP50 of residual feed were 0.641 and 0.653, respectively, and the detection accuracy was 87.71%. The model was deployed on the NVIDIA Jetson AGX Xavier development board, with an average detection time of 1.32 s and a maximum detection time of 2.05 s per image. The computational speed of the improved Mask R-CNN can meet the requirement of real-time detection of the moving unit of the silkworm box on the production line. The model trained on fifth-instar data performed better on test data than the fourth-instar model.
The brightness enhancement method contributed most to model performance among the data augmentation methods, while the overlap between silkworms negatively affected performance. This study provides a core algorithm for the research and development of accurate feeding information systems and feeding devices for factory silkworm rearing, which can improve the utilization rate of artificial diet and raise the production and management level of factory silkworm rearing.
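The AP50 figures above count a detection as correct only when its box overlaps a ground-truth silkworm with IoU ≥ 0.5. That matching criterion can be sketched as follows (the two boxes are illustrative, not from the dataset):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A prediction matches a ground-truth silkworm only when IoU >= 0.5
pred, gt = (10, 10, 50, 30), (20, 12, 60, 32)
print(box_iou(pred, gt) >= 0.5)  # True
```

Overlapping silkworms shrink the per-instance IoU of otherwise reasonable boxes, which is one way the overlap noted above degrades AP50.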

    Multi-Band Image Fusion Method for Visually Identifying Tomato Plant’s Organs With Similar Color
    FENG Qingchun, CHEN Jian, CHENG Wei, WANG Xiu
    Smart Agriculture    2020, 2 (2): 126-134.   DOI: 10.12133/j.smartag.2020.2.2.202002-SA001

    Considering the robotic management of tomato plants in the greenhouse, it is necessary to identify stems, leaves and fruits of similar color from broad-band visible images. In order to highlight the difference between target and background and improve identification efficiency, a multiple narrow-band image fusion method for identifying the tomato's three similar-colored organs—stem, leaf, and green fruit—was proposed, based on the spectral features of these organs. According to the 300–1000 nm spectral data of the three organs, a regularized logistic regression model with Lasso for distinguishing their spectral characteristics was built. Based on the sparse solution of the model's weight coefficients, the wavelengths 450, 600 and 900 nm, with the maximum coefficients, were determined as the optimal imaging bands. A multi-spectral image capturing system was designed, which could output three images of the optimal bands from the same view field. The relationship between the organs' image gray values and their spectral features was analyzed, and the optimal-band images could accurately show the organs' reflection characteristics at the various bands. In order to obtain more significant distinctions, a weighted-fusion method based on NSGA-II was proposed, intended to combine the organ differences in the optimal-band images. The algorithm's objective function was defined to maximize the target-background difference and minimize the background-background difference, and the coefficients obtained were adopted as the linear fusion factors for the optimal-band images. Finally, the fusion method was evaluated based on intuitive and quantitative indexes, respectively considering one among stem, leaf and green fruit as the target, and the other two as the backgrounds.
As the results showed, compared with the single optimal-band images, the fused image greatly intensified the difference between the similar-colored target and background, and suppressed the differences among the backgrounds. Specifically, the sum of absolute differences (SAD) was used to describe the gray-value difference between the various organs, and the fused images' SAD between target and background rose to 2.02, 8.63 and 7.89 times that of the single-band images. The Otsu automatic segmentation algorithm achieved recognition accuracies of 71.14%, 60.32% and 98.32% for identifying the stem, leaf and fruit, respectively, on the fused image. This research can serve as a reference for the identification of similar-colored plant organs under agricultural conditions.
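The Otsu step above picks the gray threshold that maximizes the between-class variance of the fused image's histogram. A minimal sketch of the classic algorithm (the bimodal histogram is synthetic):

```python
def otsu_threshold(hist):
    """Otsu's threshold from a 256-bin grayscale histogram.

    Returns the bin maximizing between-class variance, splitting the
    fused image into target and background classes.
    """
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w0, sum0 = 0, -1.0, 0, 0.0
    for t in range(256):
        w0 += hist[t]                 # pixels at or below t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal histogram: dark background around 40, bright target around 200
hist = [0] * 256
for v in (38, 40, 42, 44):
    hist[v] = 100
for v in (196, 200, 204):
    hist[v] = 80
t = otsu_threshold(hist)
print(40 < t < 196)  # the threshold lands between the two modes
```

The fusion step matters precisely because Otsu assumes a bimodal histogram: the stronger the target-background gray separation, the cleaner this automatic split.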

    Tomato leaf disease recognition method based on deep residual network
    Wu Huarui
    Smart Agriculture    2019, 1 (4): 42-49.   DOI: 10.12133/j.smartag.2019.1.4.201908-SA002

    Intelligent recognition of greenhouse vegetable diseases plays an important role in efficient production and management. The color, texture and shape of some greenhouse vegetable diseases are often very similar, so it is necessary to construct a deep neural network to judge vegetable diseases. Based on massive image data of greenhouse vegetable diseases, a deep learning model can automatically extract image details, giving a better disease recognition effect than manually designed features. For the traditional deep learning model of vegetable disease image recognition, recognition accuracy can be improved by increasing the network depth. However, as the network deepens beyond a certain point, the network gradient degrades or vanishes, which degrades the recognition performance of the learning model. Therefore, a vegetable disease identification method based on a deep residual network model was studied in this paper. Firstly, considering that hyperparameter values in a deep network model have a great influence on network identification accuracy, a Bayesian optimization algorithm was used to autonomously learn the hyperparameters that are difficult to determine in the network, such as regularization parameters, network width and stochastic momentum, eliminating the complexity of manual parameter adjustment, reducing the difficulty of network training and saving network construction time. On this basis, by adding residual elements to the traditional deep neural network, the gradient could flow directly from a later layer to an earlier layer through the identity mapping.
The deep residual recognition model takes the whole image as input and obtains optimal features through multi-layer convolutional screening in the network, which not only avoids the interference of human factors, but also solves the performance degradation of the disease recognition model caused by deep networks, realizing high-dimensional feature extraction and effective disease recognition from vegetable images. Simulation results show that, compared with other traditional models for vegetable disease identification, the deep residual neural network shows better stability, accuracy and robustness. The deep residual network model based on hyperparameter self-learning achieved good recognition performance on the open tomato disease dataset, and the recognition accuracy for 4 common tomato leaf diseases reached more than 95%. The research can provide a basic method for fast and accurate recognition of tomato leaf diseases.
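The residual element described above lets each layer learn only a correction on top of an identity shortcut, so the gradient always has a path of derivative 1 back to earlier layers. A toy numeric sketch of the forward rule y = F(x) + x (the vector input and the tiny stand-in "layer" are illustrative, not the paper's architecture):

```python
def residual_block(x, layer):
    """y = F(x) + x: the layer only has to model the residual F."""
    return [f + xi for f, xi in zip(layer(x), x)]

def tiny_layer(x):
    # Stand-in for conv + activation; here just a small learned correction.
    return [0.1 * xi for xi in x]

x = [1.0, -2.0, 3.0]
y = residual_block(x, tiny_layer)
print([round(v, 6) for v in y])  # [1.1, -2.2, 3.3]

# Why gradients survive depth: dy/dx = dF/dx + 1, so even when dF/dx
# vanishes in a very deep stack, the shortcut keeps the derivative at 1.
```

This is why adding residual elements removes the degradation the abstract attributes to simply stacking more layers.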

    Edge extraction method of remote sensing UAV terrace image based on topographic feature
    Yang Yanan, Kang Yang, Fan Xiao, Chang Yadong, Zhang Hanwen, Zhang Hongming
    Smart Agriculture    2019, 1 (4): 50-61.   DOI: 10.12133/j.smartag.2019.1.4.201908-SA005

    Terraces provide water storage and sediment retention by reducing slope gradient and soil erosion. This kind of terraced or wave-section farmland, built along contour lines, is a high-yield, stable farmland facility under key construction in dry farming areas, and provides a strong guarantee for increasing grain production and farmers' income. In recent years, Gansu province has carried out a large amount of terrace construction; however, due to poor construction quality and management in earlier stages, the terraced facilities are in danger of being destroyed. In order to prevent damage to and repair the terraces, it is necessary to extract terrace information timely and accurately. The segmentation of terraces can be obtained by edge extraction, but the effect on satellite data is not ideal. With the continuous development of UAV remote sensing technology, the acquisition of high-precision terrace topographic information has become possible. In this research, slope was extracted from digital elevation model data in the data preprocessing stage, and the orthophoto data of the three experimental areas were fused with the corresponding slope data, respectively. Then a rough edge extraction method based on the Canny operator and a fine edge extraction method based on multi-scale segmentation were used to perform edge detection on the two data sources. Finally, the influence of slope on terrace edge extraction from UAV remote sensing images was analyzed based on the overall accuracy (OA) and user's accuracy (UA) of edge detection. The experimental results showed that, in the rough edge extraction method, the data source fusing slope and image improved OA by 23.97% and UA by an average of 20.68%.
In the fine edge extraction method, the accuracy based on data source 2 also increased over data source 1 by an average of 17.84% in the OA evaluation and by an average of 19.0% in the UA evaluation. The research shows that, in terrace edge extraction from UAV remote sensing images, adding certain terrain features can achieve better edge extraction results.
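The preprocessing step above derives slope from the DEM before fusing it with the orthophoto. Per-cell slope is the arctangent of the elevation gradient magnitude; a minimal sketch using central differences (the 1 m cell size, function name, and synthetic DEM are assumptions):

```python
import math

def slope_degrees(dem, row, col, cell_size=1.0):
    """Slope in degrees at an interior DEM cell, via central differences."""
    dz_dx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * cell_size)
    dz_dy = (dem[row + 1][col] - dem[row - 1][col]) / (2 * cell_size)
    return math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))

# Synthetic DEM: a plane rising 1 m per cell toward the east
dem = [[float(c) for c in range(5)] for _ in range(5)]
print(round(slope_degrees(dem, 2, 2), 1))  # 45.0
```

Terrace risers are exactly where this slope value jumps, which is why stacking the slope band onto the image helps both the Canny-based and the multi-scale edge extraction.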

    Method for identifying crop disease based on CNN and transfer learning
    Li Miao, Wang Jingxian, Li Hualong, Hu Zelin, Yang XuanJiang, Huang Xiaoping, Zeng Weihui, Zhang Jian, Fang Sisi
    Smart Agriculture    2019, 1 (3): 46-55.   DOI: 10.12133/j.smartag.2019.1.3.201903-SA005

    The internet is a huge resource base and a rich knowledge base. Aiming at the problem of small agricultural sample sizes, techniques for utilizing network resources were studied in this research, providing an idea and method for the research and application of crop disease identification and diagnosis. Knowledge transfer and deep learning methods were used to carry out research and experiments on public datasets (ImageNet, PlantVillage) and a laboratory small-sample disease dataset (AES-IMAGE). First, the batch normalization algorithm was applied to the AlexNet and VGG convolutional neural network (CNN) models to alleviate the networks' over-fitting problem. Second, a transfer learning strategy using parameter fine-tuning was applied: the large-scale PlantVillage plant disease dataset was used to obtain a pre-trained model, and on the improved networks (AlexNet, VGG), the pre-trained model was adjusted with the small-sample AES-IMAGE dataset to obtain disease identification models for cucumber and rice. Third, a transfer learning strategy based on bottleneck feature extraction was applied: network parameters obtained on the large ImageNet dataset were used, with CNN models (Inception-v3 and MobileNet) serving as feature extractors to extract disease features. This strategy does not require a lot of training time and can quickly complete disease identification on a CPU. The experimental results show that: first, in the parameter fine-tuning strategy, the highest accuracy rate, 98.33%, was obtained using the VGG network; second, in the bottleneck feature extraction strategy, using the MobileNet model for bottleneck layer feature extraction and identification achieved 96.8% validation accuracy.
The results indicate that the combination of CNN and transfer learning is effective for the identification of small sample crop diseases.
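    The bottleneck feature extraction strategy described above amounts to freezing the pretrained CNN and training only a light classifier on the fixed-length vectors it emits, which is why identification can run quickly on a CPU. The following NumPy sketch illustrates that final stage only; the 1280-dimensional features, four classes, and learning rate are illustrative stand-ins, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for bottleneck features exported by a frozen CNN such as
# MobileNet: each image becomes one fixed-length vector, and only the
# small classifier below is trained (synthetic, well-separated data).
n_per_class, n_classes, dim = 50, 4, 1280
centers = rng.normal(0, 1, (n_classes, dim))
X = np.vstack([c + 0.3 * rng.normal(0, 1, (n_per_class, dim)) for c in centers])
y = np.repeat(np.arange(n_classes), n_per_class)

# One-layer softmax classifier fitted by gradient descent -- the cheap
# CPU-side step that replaces full end-to-end CNN training.
W = np.zeros((dim, n_classes))
b = np.zeros(n_classes)
onehot = np.eye(n_classes)[y]
for _ in range(200):
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / len(X)
    W -= 0.5 * (X.T @ grad)
    b -= 0.5 * grad.sum(axis=0)

train_acc = ((X @ W + b).argmax(axis=1) == y).mean()
```

Because the heavy convolutional layers are never updated, only this small weight matrix is learned, which is what keeps training time negligible.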

    Corn plant disease recognition based on transfer learning and convolutional neural network
    Chen Guifen, Zhao Shan, Cao Liying, Fu Siwei, Zhou Jiaxin
    Smart Agriculture    2019, 1 (2): 34-44.   DOI: 10.12133/j.smartag.2019.1.2.201812-SA007

    Corn is one of the most important food crops in China, and disease occurrence causes serious yield reduction, so the diagnosis and treatment of corn diseases is an important link in corn production. In the era of big data, massive image data are generated, and traditional image recognition methods identify corn plant diseases with an accuracy far too low to meet practical needs. With the development of artificial intelligence and deep learning, the convolutional neural network (CNN), a common deep learning algorithm, is widely used for machine vision problems because it can automatically identify and extract image features. In image classification, however, CNNs still face problems such as small sample sizes, high sample similarity, and long training convergence times, as well as limited expressive ability and the lack of a feedback mechanism; data augmentation and transfer learning can address these problems. This research therefore proposed an optimized corn plant disease recognition algorithm based on a CNN model that combines data augmentation and transfer learning. First, the algorithm preprocessed the data with data augmentation to expand the dataset, improving the generalization and accuracy of the model. Then, a CNN model based on transfer learning was constructed: the Inception V3 model, with its parameters kept unchanged, was adopted through transfer learning to extract the image features of the diseases, which accelerated the training process and reduced the over-fitting of the network. The extracted image features were used as input to train the network, and the recognition results were obtained. Finally, the model was applied to pictures of corn diseases collected from farmland to accurately identify five kinds of corn disease. 
Test results showed that the CNN optimization algorithm with data augmentation and transfer learning reached an average recognition accuracy of 96.6% on the main corn diseases (leaf spot, southern leaf blight, gray leaf spot, smut, and gall smut), an average improvement of 25.6% over a single CNN, with an average processing time of 0.28 s per image, nearly ten times faster than a single convolutional neural network. The experimental results show that the algorithm is more accurate and faster than the traditional CNN, providing a new method for the identification of corn plant diseases.
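    The data augmentation step described above multiplies each training image into several variants before any training takes place. A minimal NumPy sketch of such an expansion is shown below; the particular transforms (mirrors, rotation, brightness scaling) are an illustrative set, not the paper's exact pipeline:

```python
import numpy as np

def augment(img):
    """Expand one image into several variants, the kind of simple
    geometric/photometric augmentation used to enlarge a small
    disease-image dataset (illustrative transform set)."""
    out = [img]
    out.append(np.fliplr(img))           # horizontal mirror
    out.append(np.flipud(img))           # vertical mirror
    out.append(np.rot90(img))            # 90-degree rotation
    out.append(np.clip(img.astype(np.int16) * 1.2, 0, 255)
                 .astype(img.dtype))     # brightness scaling
    return out

# One 64x64 RGB sample becomes five training samples
img = np.zeros((64, 64, 3), dtype=np.uint8)
expanded = augment(img)
```

Expanding the dataset this way exposes the network to pose and illumination variation it would otherwise never see, which is what improves generalization on small samples.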

    Recognition and localization method of occluded apples based on K-means clustering segmentation algorithm and convex hull theory
    Jiang Mei, Sun Sashuang, He Dongjian, Song Huaibo
    Smart Agriculture    2019, 1 (2): 45-54.   DOI: 10.12133/j.smartag.2019.1.2.201903-SA003

    Accurate segmentation and localization of apple objects in natural scenes is an important part of information perception and acquisition research in smart agriculture. To solve the problem that apple recognition and positioning are susceptible to occlusion by leaves in natural scenes, an object recognition algorithm based on convex hull theory was proposed on top of the K-means clustering segmentation algorithm, and compared with an object recognition algorithm based on removing false contour points and a full-contour-points fitting object recognition algorithm. Exploiting the near-circular shape of apples, the convex hull based algorithm combined the K-means algorithm with the Otsu algorithm to separate fruit from background, obtained a convex polygon by convex hull theory, and fitted a circle to it to determine the position of the fruit. To verify the effectiveness of the algorithm, 157 apple images in natural scenes were tested. The average overlap rates of the convex hull based algorithm, the false-contour-removal algorithm, and the full-contour-points fitting algorithm were 83.7%, 79.5% and 70.3%, the average false positive rates were 2.9%, 1.7% and 1.2%, and the average false negative rates were 16.3%, 20.5% and 29.7%, respectively. The experimental results showed that the convex hull based algorithm had better localization performance and environmental adaptability than the other two algorithms and produced no recognition errors, providing a reference for the segmentation and localization of occluded fruits in natural scenes.
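    The core idea above is that even when leaves hide part of the fruit boundary, the convex hull of the visible contour plus a circle fit still recovers the fruit's centre and radius. A self-contained sketch of those two steps (monotone-chain convex hull and a Kåsa least-squares circle fit, with synthetic points standing in for a 60%-visible contour; the specific algorithms are common choices, not necessarily the paper's exact ones):

```python
import numpy as np

def _cross(o, a, b):
    # z-component of (a - o) x (b - o); sign gives the turn direction
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain hull of an (n, 2) point set."""
    pts = sorted(map(tuple, points))
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and _cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    return np.array(half(pts)[:-1] + half(pts[::-1])[:-1])

def fit_circle(pts):
    """Kasa least-squares circle fit through the hull vertices,
    estimating centre and radius despite missing contour parts."""
    A = np.column_stack([2 * pts[:, 0], 2 * pts[:, 1], np.ones(len(pts))])
    rhs = (pts ** 2).sum(axis=1)
    cx, cy, c = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return (cx, cy), np.sqrt(c + cx ** 2 + cy ** 2)

# Synthetic stand-in for a leaf-occluded apple: only a 1.2*pi arc of a
# circle (centre (30, 40), radius 20) is observed
theta = np.linspace(0, 1.2 * np.pi, 80)
pts = np.column_stack([30 + 20 * np.cos(theta), 40 + 20 * np.sin(theta)])
centre, radius = fit_circle(convex_hull(pts))
```

Because the fit uses only the hull vertices of whatever contour survives the occlusion, the recovered circle is insensitive to which part of the boundary the leaves cover, as long as a sufficient arc remains visible.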

    Development and performance evaluation of a multi-rotor unmanned aircraft system for agricultural monitoring
    Zhu Jiangpeng, Cen Haiyan, He Liwen, He Yong
    Smart Agriculture    2019, 1 (1): 43-52.   DOI: 10.12133/j.smartag.2019.1.1.201812-SA011

    In modern agricultural production, obtaining real-time, accurate and comprehensive information on farmland is necessary for farmers. The Unmanned Aircraft System (UAS) is one of the most popular platforms for agricultural information monitoring, the multi-rotor aircraft in particular for its simplicity of operation: its speed and altitude are easy to control, even at low altitude, which enables a multi-rotor UAS to acquire high-resolution images at low altitudes by integrating different imaging sensors. The aim of this work was to develop an octocopter UAS for agricultural information monitoring. To obtain high-resolution aerial images of the entire experimental field, a Sony NEX-7 camera was mounted on the aircraft. Based on the real-time position obtained from the global positioning system (GPS) and inertial measurement unit (IMU), the flight control system sends signals that trigger the camera to capture images at the desired locations. In addition, a position and orientation system (POS) and an illuminance sensor were carried on the aircraft to record the location, shooting angle and ambient illumination of each image. The system can be used to collect remote sensing data of a field, and its performance was comprehensively evaluated in an oilseed rape field at the experimental station in Zhuji, Zhejiang Province, China. The results show that the system keeps the camera's optical axis perpendicular to the ground during operation. Because effective communication was established between the mission equipment and the flight control system, the UAS accurately acquired images at the pre-defined locations, which improved the operational efficiency of the system. The images collected by the system could be mosaicked into an image of the whole field. In summary, the system satisfies the demands of agricultural information collection.
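    The waypoint-triggered capture described above reduces to a simple check: compare the GPS/IMU position against the next pre-defined waypoint and fire the camera once the aircraft is within a tolerance. A minimal sketch of that check, using an equirectangular distance approximation; the 2 m tolerance and the coordinates are illustrative values, not from the paper:

```python
import math

EARTH_RADIUS_M = 6371000.0

def should_capture(current, waypoint, tolerance_m=2.0):
    """Return True when the aircraft's (lat, lon) in degrees is within
    tolerance_m of the waypoint (equirectangular approximation, adequate
    over the short distances of a field survey)."""
    lat1, lon1 = map(math.radians, current)
    lat2, lon2 = map(math.radians, waypoint)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return EARTH_RADIUS_M * math.hypot(x, y) <= tolerance_m

# Two survey-line waypoints roughly 20 m apart (hypothetical coordinates);
# the flight controller would poll this check against the live GPS fix
waypoints = [(30.000000, 120.000000), (30.000180, 120.000000)]
at_first = should_capture((30.000000, 120.000000), waypoints[0])
at_second = should_capture((30.000000, 120.000000), waypoints[1])
```

In practice the flight control system would advance to the next waypoint after each capture, and the POS/illuminance readings would be logged alongside the triggered frame.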
