
Topic: Smart Farming of Field Crops

    State-of-the-Art and Prospects of Research on Key Technologies for Unmanned Farms of Field Crops
    YIN Yanxin, MENG Zhijun, ZHAO Chunjiang, WANG Hao, WEN Changkai, CHEN Jingping, LI Liwei, DU Jingwei, WANG Pei, AN Xiaofei, SHANG Yehua, ZHANG Anqi, YAN Bingxin, WU Guangwei
    Smart Agriculture    2022, 4 (4): 1-25.   DOI: 10.12133/j.smartag.SA202212005

    As an important way of constructing smart agriculture, unmanned farms have attracted wide attention and have been explored in many countries. Generally, data, knowledge and intelligent equipment are the core elements of unmanned farms, which deeply integrate modern information technologies such as the Internet of Things, big data, cloud computing, edge computing and artificial intelligence with agriculture to realize information perception, quantitative decision-making, intelligent control, precise input and personalized services in agricultural production. In this paper, the overall technical architecture of unmanned farms is introduced, and five kinds of key technologies are proposed: information perception and intelligent decision-making technology, precision control technology and key agricultural equipment, automatic driving technology for agriculture, unmanned agricultural operation equipment, and management and remote control systems for unmanned farms. The latest research progress on these technologies worldwide is then analyzed. On this basis, the critical scientific and technological issues to be solved for developing unmanned farms in China are proposed, including perception of unstructured farmland environments, automatic driving of agricultural machinery in complex and changeable farmland environments, autonomous task assignment and path planning for unmanned agricultural machinery, and autonomous cooperative operation control of unmanned agricultural machinery groups. These technologies are challenging and are expected to become the most competitive commanding heights in the future. The maize unmanned farm constructed in Gongzhuling, Jilin Province, China, is also introduced in detail. The farm is mainly composed of an information perception system, unmanned agricultural equipment, and a management and control system. The perception system obtains and provides information on the farmland, maize growth, and pests and diseases. The unmanned agricultural machinery can complete the whole mechanized maize production process under unattended conditions. The management and control system includes a basic GIS, a remote control subsystem, a precision operation management subsystem, and a working display system for the unmanned agricultural machinery. The application of the maize unmanned farm has improved maize production efficiency (harvesting efficiency has been increased by 3-4 times) and reduced labor demand. Finally, the paper summarizes the important role of unmanned farm technology in addressing problems such as the shrinking agricultural labor force, analyzes the opportunities and challenges of developing unmanned farms in China, and puts forward strategic goals and ideas for their development.

    Goals, Key Technologies, and Regional Models of Smart Farming for Field Crops in China
    LI Li, LI Minzan, LIU Gang, ZHANG Man, WANG Maohua
    Smart Agriculture    2022, 4 (4): 26-34.   DOI: 10.12133/j.smartag.SA202207003

    Smart farming for field crops is a significant part of smart agriculture. Oriented toward crop production, it integrates modern sensing technology, new-generation mobile communication technology, computer and network technology, the Internet of Things (IoT), big data, cloud computing, blockchain, and expert wisdom and knowledge. Through the deeply integrated application of biotechnology, engineering technology, information technology and management technology, it realizes accurate perception, quantitative decision-making, intelligent operation and intelligent service in the process of crop production, so as to significantly improve land output, resource utilization and labor productivity, and to comprehensively improve the quality and efficiency of agricultural production. In order to promote the sustainable development of smart farming, the development process of smart agriculture was analyzed, the overall objectives and key tasks of the development strategy were clarified, and the key technologies of smart farming were condensed; analyzing and achieving breakthroughs in these key technologies is crucial to the industrial development strategy. The main problems of smart farming for field crops include: the lack of in-situ accurate measurement technology and dedicated agricultural sensors, the large gap between crop models and actual production, the instantaneity, reliability, universality and stability of information transmission technologies, and the combination of intelligent agricultural equipment with agronomy. Based on the above analysis, five primary technologies and eighteen corresponding secondary technologies of smart farming for field crops were proposed, including: sensing technologies for environmental and biological information in the field, agricultural IoT and mobile internet technologies, cloud computing and cloud service technologies in agriculture, big data analysis and decision-making technology in agriculture, and intelligent agricultural machinery and agricultural robots for field production. According to the characteristics of China's cropping regions, corresponding smart farming development strategies were proposed: a large-scale smart production development zone in the Northeast and Inner Mongolia regions; a smart urban agriculture and water-saving agriculture development zone in the Beijing-Tianjin-Hebei-Shandong region; a large-scale smart cotton farming and smart dry-farming green development comprehensive test zone in the arid Northwest region; a comprehensive rice smart farming development test zone in the Southeast coastal region; and a characteristic smart farming development zone in the Southwest mountainous region. Finally, suggestions were given from the perspectives of infrastructure, key technology, talent and policy.

    Advances in Forage Crop Growth Monitoring by UAV Remote Sensing
    ZHUO Yue, DING Feng, YAN Haijun, XU Jing
    Smart Agriculture    2022, 4 (4): 35-48.   DOI: 10.12133/j.smartag.SA202206004

    Dynamic monitoring and quantitative estimation of forage crop growth are of great importance to large-scale forage production. UAV remote sensing has the advantages of high resolution, strong flexibility and low cost, and in recent years it has developed rapidly in the field of forage crop growth monitoring. In order to clarify the current status of forage crop growth monitoring and identify future development directions, the methods of UAV-based crop remote sensing monitoring were first briefly described from the two aspects of data acquisition and data processing. Second, three key technologies for forage crops, namely canopy information extraction, spectral feature optimization and forage biomass estimation, were described. The development trend of related research in recent years was then analyzed, showing that the number of papers published on UAV remote sensing monitoring of forage crops is increasing rapidly overall; with the rapid development of computer information technology and remote sensing technology, the application potential of UAVs in forage crop monitoring is being fully explored. The research progress of UAV remote sensing in forage crop growth monitoring was then described according to five sensor types, i.e., visible, multispectral, hyperspectral, thermal infrared and LiDAR, and the research on each type of sensor was summarized and reviewed, pointing out that current research on hyperspectral, thermal infrared and LiDAR sensors in forage crop monitoring is less extensive than that on visible and multispectral sensors. Finally, future development directions were clarified according to the key technical problems that remain unsolved in the research and application of UAV remote sensing for forage crop growth monitoring: (1) Build multi-temporal growth monitoring models based on the characteristics of different growth stages and different growth years of forage crops, and carry out UAV remote sensing monitoring of forage crops in representative research areas to further widen the applicable scope of the models. (2) Establish a multi-source UAV remote sensing database and carry out integrated collaborative monitoring that combines satellite remote sensing data, historical yield, soil conductivity and other data. (3) Develop an intelligent and user-friendly UAV remote sensing data analysis system, and shorten the data processing time through 5G communication networks and edge computing devices. This review could provide technical references and directional guidance for researchers in the field of forage crops and further promote the application and development of precision agriculture technology.
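    As a rough illustration of the estimation workflow reviewed above (not code from any of the cited studies), the following Python sketch computes NDVI from plot-level red and near-infrared reflectance and fits a simple linear model for biomass; the band values, biomass figures and the choice of a plain linear model are placeholders for illustration only.

    # Minimal sketch, assuming plot-level mean reflectance has already been
    # extracted from UAV multispectral imagery; all numbers are made up.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index, a common canopy descriptor."""
        return (nir - red) / (nir + red + 1e-9)

    # Hypothetical plot-level band means (reflectance) and measured biomass (g/m^2)
    red_mean = np.array([0.08, 0.10, 0.06, 0.12, 0.07])
    nir_mean = np.array([0.42, 0.38, 0.47, 0.31, 0.45])
    biomass = np.array([310.0, 270.0, 360.0, 210.0, 340.0])

    x = ndvi(nir_mean, red_mean).reshape(-1, 1)
    model = LinearRegression().fit(x, biomass)  # simple NDVI-to-biomass model
    print("R^2 on training plots:", model.score(x, biomass))
    print("Predicted biomass at NDVI = 0.6:", model.predict([[0.6]])[0])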

    Infield Corn Kernel Detection and Counting Based on Multiple Deep Learning Networks
    LIU Xiaohang, ZHANG Zhao, LIU Jiaying, ZHANG Man, LI Han, FLORES Paulo, HAN Xiongzhe
    Smart Agriculture    2022, 4 (4): 49-60.   DOI: 10.12133/j.smartag.SA202207004

    Machine vision has been increasingly used for agricultural sensing tasks, and deep learning-based detection can improve the accuracy of detecting corn kernels in the field. In order to obtain the number of lost corn kernels quickly and accurately after corn harvest, and to evaluate combine harvester performance in terms of grain loss, a method that directly applies deep learning to count corn kernels in the field was developed and evaluated. Firstly, an RGB camera was used to collect images under different backgrounds and illuminations, and datasets were generated. Secondly, different target detection networks for kernel recognition were constructed, including Mask R-CNN, EfficientDet-D5, YOLOv5-L and YOLOX-L, and the 420 collected effective images were used to train, validate and test each model; the numbers of images in the training, validation and test datasets were 200, 40 and 180, respectively. Finally, the counting performance of the different models was evaluated and compared according to their recognition results on the test set. The experimental results showed that, among the four models, YOLOv5-L had the best overall performance and could reliably identify corn kernels under different scenes and lighting conditions. The average precision (AP) of the model on the test set was 78.3%, and the model size was 89.3 MB. The kernel counting accuracy in the four scenes of non-occlusion, moderate surface occlusion, severe surface occlusion and aggregation was 98.2%, 95.5%, 76.1% and 83.3%, respectively, and the corresponding F1 scores were 94.7%, 93.8%, 82.8% and 87%. The overall counting accuracy and F1 score on the test set were 90.7% and 91.1%, respectively. The frame rate was 55.55 f/s, and the detection and counting performance was better than that of the Mask R-CNN, EfficientDet-D5 and YOLOX-L networks; the detection accuracy was improved by about 5% compared with the second-best model, Mask R-CNN. With good precision, high throughput and proven generalization, YOLOv5-L can realize real-time monitoring of corn harvest loss in practical operation.
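    For readers who want a concrete picture of the counting step, the sketch below loads a trained detector through the public Ultralytics YOLOv5 torch.hub interface and counts detections per image. This is not the authors' code: the checkpoint name kernel_best.pt, the image file names and the confidence threshold are hypothetical.

    # Minimal sketch: count corn kernels per field image with a YOLOv5 checkpoint.
    import torch

    # Hypothetical checkpoint trained on corn-kernel images
    model = torch.hub.load("ultralytics/yolov5", "custom", path="kernel_best.pt")
    model.conf = 0.4  # confidence threshold used for counting (illustrative)

    images = ["field_plot_01.jpg", "field_plot_02.jpg"]  # hypothetical file names
    results = model(images)

    for name, det in zip(images, results.xyxy):
        # det holds one row per detection: [x1, y1, x2, y2, confidence, class]
        print(f"{name}: {det.shape[0]} kernels detected")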

    Machine Learning Inversion Model of Soil Salinity in the Yellow River Delta Based on Field Hyperspectral and UAV Multispectral Data
    FAN Chengzhi, WANG Ziwen, YANG Xingchao, LUO Yongkai, XU Xuexin, GUO Bin, LI Zhenhai
    Smart Agriculture    2022, 4 (4): 61-73.   DOI: 10.12133/j.smartag.SA202212001

    Soil salinization in the Yellow River Delta is a stubborn problem that restricts the development of the agricultural economy and hinders agricultural production. To explore the retrieval of soil salt content from remote sensing images under bare-soil conditions (no vegetation cover), a typical area of the Yellow River Delta was taken as the study area, and ground-feature hyperspectral data, UAV multispectral imagery and the soil salt content of sample points were obtained. Three representative experimental areas with flat terrain and obvious soil salinization characteristics were set up in the study area, and 90 samples were collected in total. After optimizing the sensitive spectral parameters, machine learning algorithms, namely partial least squares regression (PLSR) and random forest (RF), were used to invert the soil salt content in the study area. The results showed that: (1) The hyperspectral band at 1972 nm had the highest sensitivity to soil salt content, with a correlation r of -0.31, and the optimized shortwave-infrared spectral parameters could improve the accuracy of soil salt content estimation. (2) The RF models optimized with the two different data sources had better stability than the PLSR models; the RF model performed well in terms of generalization ability and error balance, but showed some over-fitting. (3) The RF model based on ground-feature hyperspectral data (R2 = 0.54, validation RMSE = 3.30 g/kg) was superior to the RF model based on UAV multispectral data (R2 = 0.54, validation RMSE = 3.35 g/kg); incorporating image texture features improved the estimation accuracy of the multispectral model, but its validation accuracy was still lower than that of the hyperspectral model. (4) Soil salt content in the study area was mapped based on the UAV multispectral imagery and the RF model. The results demonstrate that the degree of soil salinization in the Yellow River Delta differs significantly with geographical location. The cultivated land in the study area consists mainly of lightly and moderately salinized soil, which imposes certain restrictions on crop cultivation: areas with low soil salt content are suitable for crops adapted to low-salinity fields, and farmland with high soil salt content is suitable for crops with high salt tolerance. This study constructed and compared soil salinity inversion models for the Yellow River Delta from two different data sources, optimized them based on the advantages of each data source, explored the inversion of soil salt content without vegetation cover, and can provide a reference for more accurate inversion of land salinization.
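    The minimal Python sketch below illustrates the kind of PLSR-versus-RF comparison described above using scikit-learn; the spectral features and salt contents are synthetic placeholders, not the 90 field samples or the optimized spectral parameters from the study.

    # Minimal sketch: compare PLSR and RF regression for soil salt content.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_squared_error
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((90, 12))  # 90 samples x 12 synthetic spectral parameters
    y = 2.0 + 8.0 * X[:, 0] + rng.normal(0.0, 1.5, 90)  # synthetic salt content, g/kg

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    for name, m in [("PLSR", pls), ("RF", rf)]:
        rmse = mean_squared_error(y_te, np.ravel(m.predict(X_te))) ** 0.5
        print(f"{name}: validation RMSE = {rmse:.2f} g/kg")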

    High Quality Ramie Resource Screening Based on UAV Remote Sensing Phenotype Monitoring
    FU Hongyu, WANG Wei, LIAO Ao, YUE Yunkai, XU Mingzhi, WANG Ziwei, CHEN Jianfu, SHE Wei, CUI Guoxian
    Smart Agriculture    2022, 4 (4): 74-83.   DOI: 10.12133/j.smartag.SA202209001

    Ramie is an important fiber crop. Due to the shortage of land resources and the promotion of excellent varieties, the genetic variation and diversity of ramie have decreased, which increases the need to investigate and protect the diversity of ramie germplasm resources. Crop phenotype measurement methods based on UAV remote sensing can monitor different genotypes frequently, rapidly, non-destructively and accurately, making it possible to survey crop germplasm resources and screen specific, high-quality varieties. In order to realize efficient comprehensive evaluation of ramie germplasm phenotypes and assist in screening dominant ramie varieties, a method for monitoring and screening ramie germplasm phenotypes based on UAV remote sensing images was proposed. Firstly, based on the UAV remote sensing images, the digital surface model (DSM) and orthophoto of the test area were generated with Pix4Dmapper. Then, the key phenotypic parameters of the ramie germplasm resources (plant height, plant number, leaf area index, leaf chlorophyll content and water content) were estimated: plant height was extracted from the DSM by the subtraction method, plant number was extracted from the orthophoto with a target detection algorithm, and four machine learning methods were used to estimate the leaf area index (LAI), leaf chlorophyll content (SPAD value) and water content. Finally, according to the extracted remote sensing phenotypic parameters, the genetic diversity of the ramie germplasm was analyzed using variability analysis and principal component analysis. The results showed that: (1) The ramie phenotype estimation based on UAV remote sensing was effective, with a fitting accuracy of 0.93 and a root mean square error (RMSE) of 5.654 cm for plant height; the fitting indexes for SPAD value, water content and LAI were 0.66, 0.79 and 0.74, with RMSEs of 2.03, 2.21 and 0.63, respectively. (2) The remote sensing phenotypes of the ramie germplasm differed significantly, with coefficients of variation of 20.82%, 24.61% and 35.48% for LAI, plant height and plant number, respectively. (3) Principal component analysis clustered the remote sensing phenotypes into factor 1 (plant height and LAI) and factor 2 (LAI and SPAD value); factor 1 can be used to evaluate the structural characteristics of ramie germplasm resources, and factor 2 can be used as a screening index for high light-efficiency ramie resources. This study could provide a reference for crop germplasm phenotypic monitoring and breeding correlation analysis.
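    The sketch below illustrates, under stated assumptions, two steps of this workflow in Python: plant-height extraction by DSM subtraction and a principal component analysis over plot-level traits. All numbers are synthetic placeholders rather than the study's measurements, and a real pipeline would read the DSM rasters produced by Pix4Dmapper instead of hard-coded arrays.

    # Minimal sketch: plant height by DSM subtraction, then PCA over traits.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Plant height = crop-surface DSM minus bare-ground DSM (subtraction method)
    dsm_crop = np.array([[101.8, 102.1], [101.9, 102.3]])  # elevation in m, hypothetical
    dsm_bare = np.array([[100.5, 100.6], [100.5, 100.7]])  # elevation in m, hypothetical
    plant_height = dsm_crop - dsm_bare
    print("Mean plant height (m):", plant_height.mean())

    # PCA over plot-level traits: [plant height, plant number, LAI, SPAD, water %]
    traits = np.array([
        [1.35, 42, 3.1, 41.2, 78.0],
        [1.12, 55, 2.6, 38.9, 74.5],
        [1.48, 39, 3.4, 43.0, 80.1],
        [1.01, 61, 2.2, 36.7, 72.8],
    ])
    scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(traits))
    print("First two principal-component scores per plot:\n", scores)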

    Multi-Class on-Tree Peach Detection Using Improved YOLOv5s and Multi-Modal Images
    LUO Qing, RAO Yuan, JIN Xiu, JIANG Zhaohui, WANG Tan, WANG Fengyi, ZHANG Wu
    Smart Agriculture    2022, 4 (4): 84-104.   DOI: 10.12133/j.smartag.SA202210004

    Accurate peach detection is a prerequisite for automated agronomic management such as mechanical peach harvesting. However, due to uneven illumination and ubiquitous occlusion, it is challenging to detect peaches in orchards, especially bagged peaches. To this end, an accurate multi-class peach detection method based on an improved YOLOv5s and multi-modal visual data was proposed for mechanical harvesting. An RGB-D dataset with multi-class annotations of naked and bagged peaches was built, comprising 4127 multi-modal images of pixel-aligned color, depth and infrared images acquired with a consumer-level RGB-D camera. Subsequently, an improved lightweight YOLOv5s (small depth) model was put forward by introducing a direction-aware and position-sensitive attention mechanism, which captures long-range dependencies along one spatial direction while preserving precise positional information along the other, helping the network detect peach targets accurately. Meanwhile, depthwise separable convolution was employed to reduce model computation by decomposing a standard convolution into a per-channel spatial (depthwise) convolution and a 1x1 pointwise convolution across channels, which speeds up training and inference while maintaining accuracy. Comparative experiments demonstrated that the improved YOLOv5s using multi-modal visual data achieved detection mAPs of 98.6% and 88.9% on naked and bagged peaches, respectively, with 5.05 M model parameters under complex illumination and severe occlusion, an increase of 5.3% and 16.5% over using RGB images alone, and of 2.8% and 6.2% over the original YOLOv5s. Compared with other networks on bagged peach detection, the improved YOLOv5s performed best in terms of mAP, which was 16.3%, 8.1% and 4.5% higher than YOLOX-Nano, PP-YOLO-Tiny and EfficientDet-D0, respectively. In addition, the proposed model also gave better results, to different degrees, than other methods in detecting Fuji apples and Hayward kiwifruit, verifying its effectiveness on different fruit detection tasks. Further investigation revealed the contribution of each imaging modality, as well as of the proposed improvements to YOLOv5s, to the favorable detection results for both naked and bagged peaches in natural orchards. Additionally, on a popular mobile hardware platform, the improved YOLOv5s model could perform 19 detections per second with the considered five-channel multi-modal images, enabling real-time peach detection. These promising results demonstrate the potential of the improved YOLOv5s and multi-modal visual data with multi-class annotations for achieving visual intelligence in automated fruit harvesting systems.
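    As a hedged illustration of the two architectural changes mentioned above, the PyTorch sketch below implements a simplified direction-aware, position-sensitive attention block (in the spirit of coordinate attention) and a depthwise separable convolution. Channel sizes, reduction ratio and layer details are illustrative and do not reproduce the authors' exact network.

    # Minimal sketch of the two building blocks described in the abstract.
    import torch
    import torch.nn as nn

    class DirectionAwareAttention(nn.Module):
        """Pools along height and width separately, so the attention weights keep
        positional information along each spatial direction."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            mid = max(8, channels // reduction)
            self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # aggregate along width
            self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # aggregate along height
            self.conv1 = nn.Conv2d(channels, mid, 1)
            self.act = nn.SiLU()
            self.conv_h = nn.Conv2d(mid, channels, 1)
            self.conv_w = nn.Conv2d(mid, channels, 1)

        def forward(self, x):
            n, c, h, w = x.shape
            xh = self.pool_h(x)                      # (n, c, h, 1)
            xw = self.pool_w(x).permute(0, 1, 3, 2)  # (n, c, w, 1)
            y = self.act(self.conv1(torch.cat([xh, xw], dim=2)))
            yh, yw = torch.split(y, [h, w], dim=2)
            ah = torch.sigmoid(self.conv_h(yh))                      # (n, c, h, 1)
            aw = torch.sigmoid(self.conv_w(yw.permute(0, 1, 3, 2)))  # (n, c, 1, w)
            return x * ah * aw

    class DepthwiseSeparableConv(nn.Module):
        """3x3 per-channel (depthwise) convolution followed by a 1x1 pointwise one."""
        def __init__(self, c_in: int, c_out: int):
            super().__init__()
            self.dw = nn.Conv2d(c_in, c_in, 3, padding=1, groups=c_in, bias=False)
            self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)
            self.bn = nn.BatchNorm2d(c_out)
            self.act = nn.SiLU()

        def forward(self, x):
            return self.act(self.bn(self.pw(self.dw(x))))

    x = torch.randn(1, 64, 40, 40)  # dummy feature map from a five-channel input
    print(DepthwiseSeparableConv(64, 128)(DirectionAwareAttention(64)(x)).shape)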
