Welcome to Smart Agriculture

Top Read Articles

    State-of-the-Art and Prospects of Research on Key Technologies for Unmanned Farms of Field Crops
    YIN Yanxin, MENG Zhijun, ZHAO Chunjiang, WANG Hao, WEN Changkai, CHEN Jingping, LI Liwei, DU Jingwei, WANG Pei, AN Xiaofei, SHANG Yehua, ZHANG Anqi, YAN Bingxin, WU Guangwei
    Smart Agriculture    2022, 4 (4): 1-25.   DOI: 10.12133/j.smartag.SA202212005

    As one of the important ways of constructing smart agriculture, unmanned farms have attracted wide attention and have been explored in many countries. Generally, data, knowledge and intelligent equipment are the core elements of an unmanned farm, which deeply integrates modern information technologies such as the Internet of Things, big data, cloud computing, edge computing and artificial intelligence with agriculture to realize information perception, quantitative decision-making, intelligent control, precise input and personalized services in agricultural production. In this paper, the overall technical architecture of unmanned farms is introduced, and five kinds of key technologies are identified: information perception and intelligent decision-making, precision control technology and key agricultural equipment, automatic driving in agriculture, unmanned agricultural operation equipment, and management and remote-control systems for unmanned farms. The latest worldwide research progress on these technologies is then analyzed. On this basis, the critical scientific and technological issues to be solved for developing unmanned farms in China are proposed, including perception of unstructured farmland environments, automatic driving of agricultural machinery in complex and changeable farmland environments, autonomous task assignment and path planning for unmanned agricultural machinery, and autonomous cooperative operation control of unmanned agricultural machinery groups. These technologies are challenging and are expected to become the most competitive commanding heights in the future. The maize unmanned farm constructed in Gongzhuling, Jilin province, China, is also introduced in detail. It is mainly composed of an information perception system, unmanned agricultural equipment, and a management and control system.
    The perception system obtains and provides information on the farmland, maize growth, and pests and diseases of the farm. The unmanned agricultural machinery can complete the whole process of maize production mechanization under unattended conditions. The management and control system includes a basic GIS, a remote control subsystem, a precision operation management subsystem and a working display system for the unmanned machinery. The application of the maize unmanned farm has improved maize production efficiency (harvesting efficiency has been increased by 3-4 times) and reduced labor requirements. Finally, the paper summarizes the important role of unmanned farm technology in addressing problems such as the shrinking agricultural labor force, analyzes the opportunities and challenges of developing unmanned farms in China, and puts forward strategic goals and ideas for their development.
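The autonomous task assignment and path planning mentioned above can be illustrated with a minimal sketch. This is not the system described in the paper: the rectangular field, the dimensions, and the simple back-and-forth (boustrophedon) strategy are all illustrative assumptions.

```python
def coverage_path(field_width_m, field_length_m, implement_width_m):
    """Generate a simple back-and-forth (boustrophedon) coverage path.

    Returns a list of (x, y) waypoints covering a rectangular field.
    Illustrative only: a real unmanned farm must also handle obstacles,
    headland turns, and non-rectangular field boundaries.
    """
    waypoints = []
    x = implement_width_m / 2          # centre line of the first pass
    going_up = True
    while x <= field_width_m:
        if going_up:
            waypoints += [(x, 0.0), (x, field_length_m)]
        else:
            waypoints += [(x, field_length_m), (x, 0.0)]
        going_up = not going_up        # alternate direction each pass
        x += implement_width_m
    return waypoints

# A 30 m x 100 m field with a 6 m implement needs five passes.
path = coverage_path(field_width_m=30, field_length_m=100, implement_width_m=6)
```

Real systems typically feed such waypoints to the machinery's navigation controller, with the pass spacing matched to the implement's working width to avoid gaps and overlaps.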

    Agricultural Robots: Technology Progress, Challenges and Trends
    ZHAO Chunjiang, FAN Beibei, LI Jin, FENG Qingchun
    Smart Agriculture    2023, 5 (4): 1-15.   DOI: 10.12133/j.smartag.SA202312030

    [Significance] Autonomous and intelligent agricultural machinery, characterized by green intelligence, energy efficiency and reduced emissions, as well as high intelligence and human-machine collaboration, will drive global agricultural technology advancement and the transformation of production methods in the context of smart agriculture. Agricultural robots, which utilize intelligent control and information technology, have the unique advantage of replacing manual labor. They occupy the strategic commanding heights and competitive focus of global agricultural equipment and are one of the key development directions for accelerating the construction of China's agricultural power. The world's agricultural powers, including China, have incorporated the research, development, manufacturing and promotion of agricultural robots into their national strategies, each strengthening its agricultural robot policies and planning according to its own agricultural characteristics, thus driving the agricultural robot industry into a period of stable growth. [Progress] This paper first examines the concept and defining features of agricultural robots, together with global development policies and strategic planning, and sheds light on the growth of the global agricultural robotics industry. It then analyzes the industrial backdrop, cutting-edge advances, developmental challenges and key technologies of three representative types of agricultural robots: farmland robots, orchard picking robots and indoor vegetable production robots. Finally, it summarizes the gap between Chinese agricultural robots and their foreign counterparts in advanced technologies.
    (1) An agricultural robot is a multi-degree-of-freedom autonomous piece of operating equipment with accurate perception, autonomous decision-making, intelligent control and automatic execution capabilities, designed specifically for agricultural environments. Combined with artificial intelligence, big data, cloud computing and the Internet of Things, agricultural robots form an application system with relatively mature applications in key field processes such as planting, fertilization, pest control, yield estimation, inspection, harvesting, grafting, pruning and transportation, and in livestock and poultry breeding for feeding, inspection, disinfection and milking. Globally, agricultural robots, represented by plant protection robots, have entered the industrial application phase and are gradually being commercialized, with vast market potential. (2) Compared with traditional agricultural machinery, agricultural robots have advantages in performing hazardous tasks, executing repetitive batch work, managing complex field operations and supporting livestock breeding. In contrast to industrial robots, they face technical challenges in three respects: first, the complexity and unstructured nature of the operating environment; second, the flexibility and mobility of the operation objects and their commodity value; and third, the high level of technology and investment required. (3) Despite the increasing demand for unmanned and lightly manned farmland operations, China's agricultural robot research, development and application started late and have progressed slowly. Existing agricultural equipment still falls far short of precision operation, digital perception, intelligent management and intelligent decision-making.
    The comprehensive performance of domestic products lags behind that of advanced foreign counterparts, and industrial development and application still have a long way to go. First, current agricultural robots predominantly use single actuators and operate as single machines, with multi-arm cooperative robots only just emerging; most perform rigid operations, with limited flexibility, adaptability and functionality. Second, the perception of multi-source agricultural environments and the autonomous operation of agricultural robots still rely heavily on human input. Third, progress on new teaching methods and technologies for natural human-computer interaction has been slow. Finally, operational infrastructure remains insufficient, resulting in a relatively low degree of "mechanization". [Conclusions and Prospects] The paper anticipates the opportunities arising from the rapid growth of the agricultural robotics industry in response to the escalating global shortage of agricultural labor. It outlines emerging trends in agricultural robot technology, including autonomous navigation, self-learning, real-time monitoring and operation control. In the future, path planning and navigation information perception for autonomous agricultural robots are expected to become more refined; autonomous learning and cross-scenario operation performance will improve; real-time operation monitoring through digital twins will progress; cloud-based management and control of agricultural robots for comprehensive operations will grow significantly; and steady advances will be made in integrating agricultural machinery with agronomic techniques.

    Big Models in Agriculture: Key Technologies, Application and Future Directions
    GUO Wang, YANG Yusen, WU Huarui, ZHU Huaji, MIAO Yisheng, GU Jingqiu
    Smart Agriculture    2024, 6 (2): 1-13.   DOI: 10.12133/j.smartag.SA202403015

    [Significance] Big models, or foundation models, offer a new paradigm for smart agriculture. Built on the Transformer architecture, these models incorporate enormous numbers of parameters and undergo extensive training, often showing excellent performance and adaptability that make them effective for agricultural problems where data is limited. Integrating big models into agriculture promises a more comprehensive form of agricultural intelligence, capable of processing diverse inputs, making informed decisions, and potentially overseeing entire farming systems autonomously. [Progress] The fundamental concepts and core technologies of big models are first elaborated from five aspects: the origin and core principles of the Transformer architecture, the scaling laws governing model growth, large-scale self-supervised learning, the general capabilities and adaptation of big models, and their emergent capabilities. Possible application scenarios in agriculture are then analyzed in detail, and the state of development is described for three types of models: large language models (LLMs), large vision models (LVMs) and large multi-modal models (LMMs). The progress of applying big models in agriculture is discussed and the achievements are presented. [Conclusions and Prospects] The challenges and key tasks of applying big-model technology in agriculture are analyzed. First, the datasets currently used for agricultural big models are limited, and constructing them can be expensive and raise copyright issues; larger, more openly accessible datasets are needed to support future advances. Second, the complexity of big models, owing to their extensive parameter counts, poses significant challenges for training and deployment.
    However, future methodological improvements are expected to streamline these processes by optimizing memory and computational efficiency, enhancing the performance of big models in agriculture. Third, these models demonstrate strong proficiency in analyzing image and text data, suggesting future applications that integrate real-time data from IoT devices and the Internet to make informed decisions, manage multi-modal data and potentially operate machinery within autonomous agricultural systems. Finally, the dissemination and implementation of big models in the public agricultural sphere are crucial: public availability is expected to refine their capabilities through user feedback and to relieve human workloads by providing sophisticated, accurate agricultural advice, which could revolutionize agricultural practices.
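The Transformer architecture underlying these big models centers on scaled dot-product attention, softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch follows; the toy dimensions and random inputs are illustrative, not any model from the survey.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core Transformer operation: softmax(Q K^T / sqrt(d_k)) V.

    Q, K, V: arrays of shape (sequence_length, d_k).
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # pairwise similarities
    # Numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                                # weighted mix of values

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((4, 8))               # toy sequence of 4 tokens
out = scaled_dot_product_attention(Q, K, V)           # shape (4, 8)
```

Full models stack many such attention layers with multiple heads, feed-forward blocks and learned projections; this sketch shows only the single-head core.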

    Agricultural Knowledge Intelligent Service Technology: A Review
    ZHAO Chunjiang
    Smart Agriculture    2023, 5 (2): 126-148.   DOI: 10.12133/j.smartag.SA202306002

    Significance The agricultural environment is dynamic and variable, with numerous interacting factors affecting the growth of animals and plants, such as air temperature, air humidity, illumination, soil temperature, soil humidity, diseases, pests and weeds. Farmers therefore need agricultural knowledge to solve production problems. With the rapid development of internet technology, a vast amount of agricultural information and knowledge is available online, but because it lacks effective organization, its utilization rate is low. How to distill production knowledge or decision cases from scattered, disordered information is a worldwide challenge. Agricultural knowledge intelligent service technology is a promising way to resolve problems such as the low order, weak correlation and poor interpretability of agricultural data. It is also the key to improving comprehensive prediction and decision-making capabilities across the entire agricultural production process. It can remove the information barriers between agricultural knowledge, farmers and consumers, helping to improve the yield and quality of agricultural products and to provide effective information services. Progress This paper introduces the definition, scope and technical applications of agricultural knowledge intelligent services, analyzes the demand for such services in combination with artificial intelligence technology, and surveys key technologies such as perceptual recognition, knowledge coupling and inference-based decision-making.
    The characteristics of agricultural knowledge services are analyzed from multiple perspectives, including industrial demand, industrial upgrading and technological development, and their development history, current problems and future trends are discussed. Key issues are examined, including recognition of animal and plant states in complex, uncertain environments, knowledge extraction from multimodal data associations, and collaborative reasoning across multiple agricultural application scenarios. Combining practical experience with theoretical research, the paper proposes an intelligent agricultural situation analysis service framework that covers the entire life cycle of agricultural animals and plants and integrates knowledge with cases. An agricultural situation perception framework has been built on a satellite-air-ground multi-channel perception platform and real-time Internet data. Multimodal knowledge coupling, multimodal knowledge graph construction and natural language processing technologies are used to aggregate and manage agricultural big data. Through knowledge-based reasoning and decision-making, agricultural information mining and early warning are carried out to provide users with knowledge services for multiple scenarios. Intelligent services have been designed for multimodal fusion feature extraction, unified cross-domain knowledge representation and graph construction, and reasoning and decision-making under complex, uncertain agricultural conditions. An agricultural knowledge intelligent service platform has been built, composed of a cloud computing support environment, a big data processing framework, knowledge organization and management tools, and knowledge service application scenarios.
    The platform supports rapid assembly and configuration management of agricultural knowledge services and lowers the threshold for applying artificial intelligence in this field, so that the problems of agricultural users can be solved. A novel method for agricultural situation analysis and production decision-making is proposed, and a full-chain intelligent knowledge application scenario is constructed covering planning, management, harvesting and operations before, during and after production. Conclusions and Prospects The technology trends of agricultural knowledge intelligent services are summarized in five aspects. (1) Multi-scale sparse feature discovery and spatiotemporal situation recognition of agricultural conditions, discussing the effects of small-sample transfer learning and target tracking in acquiring uncertain agricultural information and recognizing situations. (2) Construction and self-evolution of agricultural cross-media knowledge graphs, using robust knowledge bases and knowledge graphs to analyze and aggregate high-level semantic information from cross-media content. (3) Multi-granularity correlation and multi-mode collaborative inversion prediction of complex agricultural conditions, in response to the difficulty of tracing complex agricultural conditions and the low accuracy of comprehensive prediction. (4) Large language models (LLMs) for agriculture based on generative artificial intelligence. ChatGPT and other LLMs can accurately mine agricultural data and automatically generate answers through large-scale computing power, addressing the problems of understanding user intent and delivering precise services when agricultural data are dispersed, multi-source, heterogeneous, noisy, low in information density and highly uncertain.
    In addition, agricultural LLMs can significantly improve the accuracy of intelligent algorithms for identification, prediction and decision-making by combining strong algorithms with big data and supercomputing power, bringing important opportunities for large-scale intelligent agricultural production. (5) Construction of knowledge intelligence service platforms and new knowledge service paradigms, integrating and innovating a self-evolving agricultural knowledge intelligence service cloud platform. Agricultural knowledge intelligent service technology will enhance control over the whole agricultural production chain and provide technical support for transforming agricultural production from "observing the sky and working" to "knowing the sky and working". The "knowledge empowerment" model of intelligent agricultural applications provides strong support for improving the quality and efficiency of the agricultural industry and for its modernization and upgrading.
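The knowledge graph construction and reasoning described above can be illustrated with a toy triple store. The entities, relations and two-hop query below are invented examples for illustration, not the paper's actual graph or schema.

```python
# A toy agricultural knowledge graph as (subject, predicate, object) triples.
# All entities and relations here are invented for illustration.
triples = [
    ("tomato", "susceptible_to", "late_blight"),
    ("late_blight", "favored_by", "high_humidity"),
    ("late_blight", "controlled_by", "copper_fungicide"),
    ("strawberry", "susceptible_to", "powdery_mildew"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

def control_measures(crop):
    """Two-hop reasoning: crop -> susceptible_to -> disease -> controlled_by -> measure."""
    measures = []
    for _, _, disease in query(subject=crop, predicate="susceptible_to"):
        for _, _, measure in query(subject=disease, predicate="controlled_by"):
            measures.append(measure)
    return measures

print(control_measures("tomato"))   # ['copper_fungicide']
```

Production knowledge-graph systems use dedicated triple stores and richer inference rules, but the pattern-matching and multi-hop traversal shown here is the core idea behind graph-based reasoning services.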

    Agricultural Metaverse: Key Technologies, Application Scenarios, Challenges and Prospects
    CHEN Feng, SUN Chuanheng, XING Bin, LUO Na, LIU Haishen
    Smart Agriculture    2022, 4 (4): 126-137.   DOI: 10.12133/j.smartag.SA202206006

    As an emerging concept, the metaverse has attracted extensive attention from industry, academia and scientific research. Combining agriculture with the metaverse will greatly promote agricultural informatization and intelligence, providing new impetus for the transformation and upgrading of smart agriculture. First, to demonstrate the feasibility of applying the metaverse in agriculture, the basic principles and key technologies of the agricultural metaverse are briefly described, including blockchain, non-fungible tokens, 5G/6G, artificial intelligence, the Internet of Things, 3D reconstruction, cloud computing, edge computing, augmented reality, virtual reality, mixed reality, brain-computer interfaces, digital twins and parallel systems. Three main application scenarios are then discussed: virtual farms, agricultural teaching systems and agricultural product traceability systems. Among them, the virtual farm is one of the most important applications: the agricultural metaverse can assist crop growing and livestock and poultry raising in agricultural production, provide a three-dimensional, visualized virtual leisure-agriculture experience, and supply virtual presenters for agricultural product promotion. The agricultural metaverse teaching system can offer virtual agricultural training that closely resembles natural scenes, saving training time and improving efficiency through fragmented learning. Traceability of agricultural products lets consumers see production information and gain confidence in enterprises and products.
    Finally, the challenges facing the agricultural metaverse are summarized, including the difficulty of establishing an agricultural metaverse system, weak communication infrastructure, immature hardware and uncertain operation models, and future development directions are prospected. Future research is suggested on metaverse applications, agricultural growth mechanisms and low-power wireless communication technologies; rural broadband networks covering households should be established; and the industrialized application of the agricultural metaverse should be promoted. This review provides theoretical references and technical support for the development of the metaverse in agriculture.

    Key Technologies and Equipment for Smart Orchard Construction and Prospects
    HAN Leng, HE Xiongkui, WANG Changling, LIU Yajia, SONG Jianli, QI Peng, LIU Limin, LI Tian, ZHENG Yi, LIN Guihai, ZHOU Zhan, HUANG Kang, WANG Zhong, ZHA Hainie, ZHANG Guoshan, ZHOU Guotao, MA Yong, FU Hao, NIE Hongyuan, ZENG Aijun, ZHANG Wei
    Smart Agriculture    2022, 4 (3): 1-11.   DOI: 10.12133/j.smartag.SA200201014

    Traditional orchard production faces labor shortages due to population aging, difficulties in managing agricultural equipment and production materials, and low production efficiency. These problems can be addressed by building a smart orchard that integrates the Internet of Things (IoT), big data, intelligent equipment and other technologies. In this study, with full mechanization and intelligent management as the objectives, a smart orchard was built in Pinggu district, an important peach- and pear-producing area of Beijing. The orchard covers an area of more than 30 hm2 in Xiying village, Yukou town. More than 10 kinds of information acquisition sensors for pests, diseases, water, fertilizer and pesticides are deployed, and 28 kinds of agricultural machinery with intelligent technical support are equipped. The key technologies include an intelligent information acquisition system, an integrated water and fertilizer management system and an intelligent pest management system. The intelligent operation equipment includes an unmanned lawn mower, an intelligent anti-freeze machine, a trenching and fertilizing machine, a self-driving crawler, an intelligent profiling variable-rate sprayer, a six-rotor target-spraying drone, a multi-functional picking platform and a trimming and pruning machine, among others. An intelligent management platform has also been built for the orchard. Comparison results showed that smart orchard production can reduce labor costs by more than 50%, pesticide dosage by 30%-40%, fertilizer dosage by 25%-35% and irrigation water consumption by 60%-70%, while comprehensive economic benefits increased by 32.5%. The popularization and application of smart orchards will further raise China's fruit production level and facilitate the development of smart agriculture in China.

    Agricultural Technology Knowledge Intelligent Question-Answering System Based on Large Language Model
    WANG Ting, WANG Na, CUI Yunpeng, LIU Juan
    Smart Agriculture    2023, 5 (4): 105-116.   DOI: 10.12133/j.smartag.SA202311005

    [Objective] The rural revitalization strategy places new requirements on agricultural technology extension, but the conventional approach suffers from a mismatch between supply and demand, so further innovation is needed in how agricultural knowledge is delivered. Recent advances in artificial intelligence, such as deep learning and large-scale neural networks, and particularly the advent of large language models (LLMs), make anthropomorphic, intelligent agricultural technology extension feasible. Oriented toward agricultural knowledge services for fruit and vegetable production, this research built an intelligent agricultural technology question-answering system based on an LLM, providing extension services including guidance on new agricultural knowledge and question-and-answer sessions, so that farmers can access high-quality agricultural knowledge at their convenience. [Methods] Through an analysis of strawberry farmers' demands, agricultural knowledge related to strawberry cultivation was categorized into six themes: basic production knowledge, variety screening, interplanting knowledge, pest diagnosis and control, disease diagnosis and control, and chemical damage diagnosis and control. Considering the current state of agricultural technology, two primary tasks were formulated: named entity recognition and question answering for agricultural knowledge. A training corpus comprising entity-type annotations and question-answer pairs was constructed using a combination of automatic machine annotation and manual annotation, ensuring a small but high-quality sample.
    After comparing four existing LLMs (Baichuan2-13B-Chat, ChatGLM2-6B, Llama 2-13B-Chat and ChatGPT), the best-performing model was chosen as the base LLM for the intelligent question-answering system. Using the high-quality corpus, LLM pre-training and fine-tuning, a deep neural network with semantic analysis, context association and content generation capabilities was trained. This model served as an LLM for named entity recognition and question answering of agricultural knowledge, adaptable to various downstream tasks. For named entity recognition, the LoRA fine-tuning method was employed, tuning only the essential parameters to speed up training and improve performance. For question answering, prompt-tuning was used to fine-tune the LLM, with adjustments made on the basis of the model's generated content to achieve iterative optimization. Model performance was optimized from two angles: data and model design. On the data side, redundant or unclear items were manually removed from the labeled corpus. On the model side, a retrieval-augmented generation strategy was employed to deepen the model's understanding of agricultural knowledge and keep its knowledge synchronized in real time, mitigating LLM hallucination. On top of the resulting model, an intelligent question-answering system for agricultural technology knowledge was developed that can generate precise, unambiguous answers and supports multi-round question answering and retrieval of information sources.
    [Results and Discussions] Accuracy and recall were used to evaluate named entity recognition. The results indicated that performance was closely related to factors such as model structure, the scale of the labeled corpus and the number of entity types. After fine-tuning, ChatGLM achieved the highest accuracy and recall. With the same number of entity types, more annotated corpora yielded higher accuracy. Fine-tuning affected different models differently, but overall it improved the average accuracy of all models across the knowledge topics, with ChatGLM, Llama and Baichuan all surpassing 85%. The average recall improved only slightly, and in some cases was even lower than before fine-tuning. For the question-answering task, evaluated by hallucination rate and semantic similarity, data optimization and retrieval-augmented generation reduced the hallucination rate by 10% to 40% and improved semantic similarity by more than 15%, significantly enhancing the correctness, logic and comprehensiveness of the generated content. [Conclusion] The pre-trained ChatGLM model exhibited superior performance in named entity recognition and question answering in the agricultural field. Fine-tuning pre-trained LLMs for downstream tasks and optimizing them with retrieval-augmented generation mitigated language hallucination and markedly improved performance. LLM technology has the potential to innovate agricultural knowledge service modes and optimize agricultural knowledge extension.
    This can effectively reduce the time farmers spend obtaining high-quality, effective knowledge and guide more of them toward agricultural technology innovation and transformation. However, given challenges such as unstable performance, further research is needed on optimization methods for LLMs and their application in specific scenarios.
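The LoRA fine-tuning used in this work trains only a low-rank correction to each frozen weight matrix, so the effective weight becomes W + BA. A minimal NumPy sketch follows; the shapes, rank and random data are illustrative, not the paper's configuration.

```python
import numpy as np

# Minimal sketch of the LoRA idea: keep the pretrained weight W frozen and
# train only a low-rank correction B @ A. Shapes and rank are illustrative.
rng = np.random.default_rng(42)
d_out, d_in, rank = 64, 64, 4

W = rng.standard_normal((d_out, d_in))    # frozen pretrained weight
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))               # B starts at zero: W is unchanged at first

def forward(x):
    """Forward pass through the adapted layer: (W + B @ A) @ x."""
    return (W + B @ A) @ x

x = rng.standard_normal(d_in)
assert np.allclose(forward(x), W @ x)     # before training, LoRA is a no-op

trainable = A.size + B.size               # only A and B receive gradients
full = W.size
print(f"trainable parameters: {trainable} vs full fine-tuning: {full}")
# trainable parameters: 512 vs full fine-tuning: 4096
```

The parameter saving is the point: here the adapter trains 512 values instead of 4 096, and in a multi-billion-parameter LLM the same ratio makes fine-tuning feasible on modest hardware.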

    Goals, Key Technologies, and Regional Models of Smart Farming for Field Crops in China
    LI Li, LI Minzan, LIU Gang, ZHANG Man, WANG Maohua
    Smart Agriculture    2022, 4 (4): 26-34.   DOI: 10.12133/j.smartag.SA202207003

    Smart farming for field crops is a significant part of the smart agriculture. It aims at crop production, integrating modern sensing technology, new generation mobile communication technology, computer and network technology, Internet of Things(IoT), big data, cloud computing, blockchain and expert wisdom and knowledge. Deeply integrated application of biotechnology, engineering technology, information technology and management technology, it realizes accurate perception, quantitative decision-making, intelligent operation and intelligent service in the process of crop production, to significantly improve land output, resource utilization and labor productivity, comprehensively improves the quality, and promotes efficiency of agricultural products. In order to promote the sustainable development of the smart farming, through the analysis of the development process of smart agriculture, the overall objectives and key tasks of the development strategy were clarified, the key technologies in smart farming were condensed. Analysis and breakthrough of smart farming key technologies were crucial to the industrial development strategy. The main problems of the smart farming for field crops include: the lack of in-situ accurate measurement technology and special agricultural sensors, the large difference between crop model and actual production, the instantaneity, reliability, universality, and stability of the information transmission technologies, and the combination of intelligent agricultural equipment with agronomy. 
Based on the above analysis, five primary technologies and eighteen corresponding secondary technologies of smart farming for field crops were proposed, including: sensing technologies for environmental and biological information in the field, agricultural IoT and mobile internet technologies, cloud computing and cloud service technologies in agriculture, big data analysis and decision-making technology in agriculture, and intelligent agricultural machinery and agricultural robots for field production. According to the characteristics of China's cropping regions, corresponding smart farming development strategies were proposed: a large-scale smart production development zone in the Northeast and Inner Mongolia regions; a smart urban agriculture and water-saving agriculture development zone in the Beijing-Tianjin-Hebei-Shandong region; a large-scale smart cotton farming and smart dry-farming green development comprehensive test zone in the arid Northwest; a smart rice farming comprehensive development test zone on the Southeast coast; and a characteristic smart farming development zone in the Southwest mountain region. Finally, suggestions were given from the perspectives of infrastructure, key technologies, talent, and policy.

    Agricultural Sensor: Research Progress, Challenges and Perspectives
    WANG Rujing
    Smart Agriculture    2024, 6 (1): 1-17.   DOI: 10.12133/j.smartag.SA202401017
    Abstract1832)   HTML414)    PDF(pc) (1179KB)(10704)       Save

[Significance] Agricultural sensors are a key technology for developing modern agriculture. An agricultural sensor is a detection device that senses a physical signal related to the agricultural environment, plants, or animals and converts it into an electrical signal. Agricultural sensors can be applied to monitor crops and livestock in different agricultural environments, including weather, water, atmosphere, and soil, and are an important driving force for the iterative upgrading of agricultural technology and the transformation of agricultural production methods. [Progress] Different agricultural sensors are categorized, cutting-edge research trends are analyzed, and the current research status of agricultural sensors in different application scenarios is summarized. Moreover, a deep analysis and discussion of four major categories is conducted: agricultural environment sensors, animal and plant life-information sensors, agricultural product quality and safety sensors, and agricultural machinery sensors. The research and development process, universality, and limitations of the four types of agricultural sensors are summarized. Agricultural environment sensors are mainly used for real-time monitoring of key parameters in agricultural production environments, such as the quality of water, gas, and soil. Soil sensors provide data support for precision irrigation, rational fertilization, and soil management by monitoring indicators such as soil humidity, pH, temperature, nutrients, microorganisms, pests and diseases, heavy metals, and agricultural pollution. Monitoring of dissolved oxygen, pH, nitrate content, and organophosphorus pesticides in irrigation and aquaculture water through water sensors ensures the rational use of water resources and water quality safety.
Gas sensors monitor atmospheric CO2, NH3, C2H2, and CH4 concentrations, among other information, providing appropriate environmental conditions for crop growth in greenhouses. Animal life-information sensors capture an animal's growth, movement, and physiological and biochemical status, including movement trajectory, food intake, heart rate, body temperature, blood pressure, and blood glucose. Plant life-information sensors monitor plant health and growth, such as volatile organic compounds of the leaves, surface temperature and humidity, and phytohormones. In particular, flexible wearable plant sensors provide a new way to measure plant physiological characteristics accurately and to monitor the water status and physiological activities of plants non-destructively and continuously. Agricultural product quality and safety sensors detect various indicators in agricultural products, such as temperature and humidity, freshness, nutrients, and potentially hazardous substances (e.g., bacteria, pesticide residues, and heavy metals). Agricultural machinery sensors enable real-time monitoring and control of agricultural machinery in cultivation, planting, management, and harvesting, supporting automated operation and the accurate application of pesticides and fertilizers. [Conclusions and Prospects] Among the challenges and prospects of agricultural sensors, the core bottlenecks for their large-scale application at the present stage are analyzed in detail, including low cost, specialization, high stability, and adaptive intelligence. Furthermore, the concept of "ubiquitous sensing in agriculture" is proposed, which provides ideas and references for the research and development of agricultural sensor technology.

    Agricultural Intelligent Knowledge Service: Overview and Future Perspectives
    ZHAO Ruixue, YANG Chenxue, ZHENG Jianhua, LI Jiao, WANG Jian
    Smart Agriculture    2022, 4 (4): 105-125.   DOI: 10.12133/j.smartag.SA202207009
    Abstract1830)   HTML201)    PDF(pc) (1435KB)(4643)       Save

The wide application of advanced information technologies such as big data, the Internet of Things, and artificial intelligence in agriculture has promoted agricultural and rural modernization and the development of smart agriculture. This trend has also boosted demand for technology and knowledge from a large number of agricultural business entities. Faced with problems such as fragmented knowledge, lagging knowledge updates, inadequate agricultural information services, and a prominent contradiction between knowledge supply and demand, agricultural knowledge service has become an important engine for the transformation, upgrading, and high-quality development of agriculture. To better facilitate agricultural modernization in China, the research and application perspectives of agricultural knowledge services were summarized and analyzed. Following the whole life cycle of agricultural data and based on the whole agricultural industry chain, a systematic framework for constructing agricultural intelligent knowledge service systems oriented to the requirements of agricultural business entities was proposed. Three necessary layers of techniques were designed, ranging from AIoT-based agricultural situation perception to big data aggregation and governance, from agricultural knowledge organization to knowledge-graph-based computation and mining, and then to multi-scenario agricultural intelligent knowledge service.
A wide range of key technologies were summarized with comprehensive discussion of their applications in agricultural intelligent knowledge service, including aerial-ground integrated Artificial Intelligence & Internet of Things (AIoT) full-dimensional agricultural condition perception, multi-source heterogeneous agricultural big data aggregation and governance, knowledge modeling, knowledge extraction, knowledge fusion, knowledge reasoning, cross-media retrieval, intelligent question answering, personalized recommendation, and decision support. Finally, future development trends and countermeasures were discussed from the aspects of agricultural data acquisition, model construction, knowledge organization, intelligent knowledge service technology, and application promotion. It can be concluded that agricultural intelligent knowledge service is the key to resolving the contradiction between the supply of and demand for agricultural knowledge services; it can support the advance from agricultural cross-media data analytics to knowledge reasoning, and promote the upgrade of agricultural knowledge services to become more personalized, precise, and intelligent. Agricultural knowledge service is also an important support for greater self-reliance and modernization in agricultural science and technology, facilitating their substantial development and upgrading in a more effective manner.
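The knowledge organization and reasoning layer described above can be illustrated with a minimal sketch: a toy in-memory triple store with a single transitive-closure rule. This is not the authors' system; all class names, relations, and facts below are illustrative assumptions.

```python
from collections import defaultdict

class TripleStore:
    """A minimal in-memory knowledge store of (subject, relation, object) triples."""

    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(list)

    def add(self, subject, relation, obj):
        self.triples.add((subject, relation, obj))
        self.by_subject[subject].append((relation, obj))

    def query(self, subject, relation):
        """Return all objects linked to `subject` via `relation`."""
        return [o for r, o in self.by_subject[subject] if r == relation]

    def infer_transitive(self, relation):
        """One simple reasoning rule: close `relation` under transitivity."""
        changed = True
        while changed:
            changed = False
            for (s, r, o) in list(self.triples):
                if r != relation:
                    continue
                for o2 in self.query(o, relation):
                    if (s, relation, o2) not in self.triples:
                        self.add(s, relation, o2)
                        changed = True

# Toy agricultural facts (illustrative, not from the article)
kg = TripleStore()
kg.add("wheat rust", "is_a", "fungal disease")
kg.add("fungal disease", "is_a", "crop disease")
kg.infer_transitive("is_a")
print(kg.query("wheat rust", "is_a"))
```

A production knowledge graph would of course use a dedicated graph database and far richer inference, but the same query-and-reason pattern underlies the knowledge-graph-based computation and intelligent question answering the framework describes.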

    Crop Pest Target Detection Algorithm in Complex Scenes:YOLOv8-Extend
    ZHANG Ronghua, BAI Xue, FAN Jiangchuan
    Smart Agriculture    2024, 6 (2): 49-61.   DOI: 10.12133/j.smartag.SA202311007
    Abstract1706)   HTML172)    PDF(pc) (2287KB)(40054)       Save

[Objective] Improving the efficiency and accuracy of crop pest detection in complex natural environments, and replacing the current reliance on manual identification by experts in agricultural production, is of great significance. Targeting the problems of small target size, mimicry with crops, low detection accuracy, and slow inference speed in crop pest detection, a crop pest detection algorithm for complex scenes, YOLOv8-Extend, was proposed in this research. [Methods] Firstly, GSConv was introduced to enhance the model's receptive field and enable global feature aggregation. This mechanism aggregates features at both the node and global levels simultaneously, obtaining local features from neighboring nodes through neighbor sampling and aggregation operations, thereby enhancing the model's receptive field and semantic understanding. Additionally, some convolutions were replaced with lightweight Ghost convolutions, and HorBlock was utilized to capture longer-term feature dependencies: its recursive gated convolution employs gating mechanisms to remember and transmit previous information, capturing long-term correlations. Furthermore, Concat was replaced with BiFPN for richer feature fusion; the bidirectional top-down and bottom-up fusion of deep features enhances the transmission of feature information across different network layers. Utilizing the VoVGSCSP module, feature maps of different scales were connected to create longer feature-map vectors, increasing model diversity and enhancing small-object detection. The convolutional block attention module (CBAM) was introduced to strengthen the features of field pests and reduce the background weights caused by scene complexity. Next, the Wise-IoU dynamic non-monotonic focusing mechanism was implemented to evaluate anchor-box quality using an "outlier degree" instead of IoU.
This mechanism also included a gradient gain allocation strategy, which reduced the competitiveness of high-quality anchor boxes and minimized harmful gradients from low-quality examples. This approach allowed WIoU to concentrate on anchor boxes of average quality, improving the network's generalization ability and overall performance. Subsequently, the improved YOLOv8-Extend model was compared with the original YOLOv8 model, YOLOv5, YOLOv8-GSCONV, YOLOv8-BiFPN, and YOLOv8-CBAM to validate its detection accuracy and precision. Finally, the model was deployed on edge devices for inference verification to confirm its effectiveness in practical application scenarios. [Results and Discussions] The results indicated that the improved YOLOv8-Extend model achieved notable improvements in the accuracy, recall, mAP@0.5, and mAP@0.5:0.95 evaluation indices, with increases of 2.6%, 3.6%, 2.4% and 7.2%, respectively, showcasing superior detection performance. When YOLOv8-Extend and YOLOv8 were run on the edge computing device JETSON ORIN NX 16 GB and accelerated by TensorRT, mAP@0.5 improved by 4.6% and the frame rate reached 57.6 FPS, meeting real-time detection requirements. The YOLOv8-Extend model demonstrated better adaptability in complex agricultural scenarios and exhibited clear advantages in detecting small pests and pests sharing similar growth environments in practical data collection. Accuracy on challenging data saw a notable increase of 11.9%. Through these algorithmic refinements, the model showed improved capability in extracting and focusing on features in crop pest detection, addressing issues such as small targets, similar background textures, and challenging feature extraction. [Conclusions] The YOLOv8-Extend model introduced in this study significantly boosts detection accuracy and recognition rates while upholding high operational efficiency.
It is suitable for deployment on edge terminal computing devices to facilitate real-time detection of crop pests, offering technological advancements and methodologies for the advancement of cost-effective terminal-based automatic pest recognition systems. This research can serve as a valuable resource and aid in the intelligent detection of other small targets, as well as in optimizing model structures.
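The Wise-IoU idea summarized above (anchor quality measured by an "outlier degree" rather than raw IoU, with a non-monotonic gradient gain that favors average-quality anchors) can be sketched in pure Python. This is a simplified illustration following the commonly published WIoU v3 formulation, not necessarily the authors' implementation; the hyperparameters `alpha` and `delta` are assumed defaults.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def focusing_gain(iou_loss, mean_iou_loss, alpha=1.9, delta=3.0):
    """Non-monotonic gradient gain: the 'outlier degree' beta (this anchor's
    IoU loss relative to the running mean) replaces raw IoU as the quality
    measure; the gain peaks at a moderate beta and decays for both very
    good and very bad anchors."""
    beta = iou_loss / mean_iou_loss          # outlier degree
    return beta / (delta * alpha ** (beta - delta))

# An average-quality anchor (beta = 1) gets a larger gain than an outlier.
loss = 1.0 - iou((0, 0, 10, 10), (2, 2, 12, 12))
print(round(focusing_gain(loss, loss), 3))  # beta = 1 -> gain ≈ 1.203
```

The key property is that the gain is small both for near-perfect anchors (small `beta`) and for severe outliers (large `beta`), which is how WIoU suppresses harmful gradients from low-quality examples while not over-rewarding easy ones.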

    Intelligent Identification of Crop Agronomic Traits and Morphological Structure Phenotypes: A Review
    ZHANG Jianhua, YAO Qiong, ZHOU Guomin, WU Wendi, XIU Xiaojie, WANG Jian
    Smart Agriculture    2024, 6 (2): 14-27.   DOI: 10.12133/j.smartag.SA202401015
    Abstract1661)   HTML110)    PDF(pc) (1376KB)(5922)       Save

    [Significance] The crop phenotype is the visible result of the complex interplay between crop genes and the environment. It reflects the physiological, ecological, and dynamic aspects of crop growth and development, serving as a critical component in the realm of advanced breeding techniques. By systematically analyzing crop phenotypes, researchers can gain valuable insights into gene function and identify genetic factors that influence important crop traits. This information can then be leveraged to effectively harness germplasm resources and develop breakthrough varieties. Utilizing data-driven, intelligent, dynamic, and non-invasive methods for measuring crop phenotypes allows researchers to accurately capture key growth traits and parameters, providing essential data for breeding and selecting superior crop varieties throughout the entire growth cycle. This article provides an overview of intelligent identification technologies for crop agronomic traits and morphological structural phenotypes. [Progress] Crop phenotype acquisition equipment serves as the essential foundation for acquiring, analyzing, measuring, and identifying crop phenotypes. This equipment enables detailed monitoring of crop growth status. The article presents an overview of the functions, performance, and applications of the leading high-throughput crop phenotyping platforms, as well as an analysis of the characteristics of various sensing and imaging devices used to obtain crop phenotypic information. The rapid advancement of high-throughput crop phenotyping platforms and sensory imaging equipment has facilitated the integration of cutting-edge imaging technology, spectroscopy technology, and deep learning algorithms. These technologies enable the automatic and high-throughput acquisition of yield, resistance, quality, and other relevant traits of large-scale crops, leading to the generation of extensive multi-dimensional, multi-scale, and multi-modal crop phenotypic data. 
This advancement supports the rapid progression of crop phenomics. The article also discusses the research progress of intelligent recognition technologies for agronomic traits such as crop plant height acquisition and crop organ detection and counting, as well as intelligent recognition of morphological structure through crop ideotype recognition, morphological information measurement, and three-dimensional reconstruction. Furthermore, this article outlines the main challenges faced in this field, including: difficulties in data collection in complex environments; high requirements for data scale, diversity, and preprocessing; the need to improve the lightweight design and generalization ability of models; and the high cost of data collection equipment, whose practicality needs to be enhanced. [Conclusions and Prospects] Finally, this article puts forward development directions for intelligent crop phenotype recognition technology, including: developing new, low-cost intelligent field equipment for acquiring and analyzing crop phenotypes; enhancing the standardization and consistency of field crop phenotype acquisition; strengthening the generality of intelligent crop phenotype recognition models; researching recognition methods involving multi-perspective, multimodal, multi-point continuous analysis and spatiotemporal feature fusion; and improving model interpretability.

    Research Progress and Enlightenment of Japanese Harvesting Robot in Facility Agriculture
    HUANG Zichen, SUGIYAMA Saki
    Smart Agriculture    2022, 4 (2): 135-149.   DOI: 10.12133/j.smartag.SA202202008
    Abstract1602)   HTML224)    PDF(pc) (1780KB)(8073)       Save

Intelligent equipment is necessary to ensure the stable, high-quality, and efficient production of facility agriculture. Intelligent harvesting equipment in particular must be designed and developed according to the characteristics of each fruit and vegetable, so large-scale mechanization remains limited. Japan's intelligent harvesting equipment has nearly 40 years of research and development history dating from the 1980s, and a review of its research and products offers specific inspiration and reference value. First, among the government and bank support policies promoting the development of facility agriculture, the preferential policies applicable to harvesting robots were introduced. Then, the development of agricultural robots in Japan was reviewed: the top ten greenhouse fruits and vegetables were selected, and harvesting research on tomato, eggplant, green pepper, cucumber, melon, asparagus, and strawberry robots based on the combination of agricultural machinery and agronomy was analyzed. Next, the commercialized solutions for tomato, green pepper, and strawberry harvesting systems were reviewed in detail. Taking the green pepper harvesting robot recently developed by the start-up AGRIST Ltd. as an example, the company's robot, built on Internet of Things technology and artificial intelligence algorithms, was explained; it can work 24 h a day and be operated remotely over the network. Then, a typical strawberry harvesting robot that had undergone four generations of prototype development was reviewed. The fourth-generation system, a systematic solution developed jointly by the company and researchers, consisted of high-density movable seedbeds and a harvesting robot, with the advantages of high space utilization, all-day work, and intelligent quality grading.
The strengths, weaknesses, challenges, and future trends of prototype and industrialized solutions developed by universities were also summarized. Finally, suggestions for accelerating the development of intelligent, smart, and industrialized harvesting robots in China's facility agriculture were provided.

    Agricultural Disease Named Entity Recognition with Pointer Network Based on RoFormer Pre-trained Model
    WANG Tong, WANG Chunshan, LI Jiuxi, ZHU Huaji, MIAO Yisheng, WU Huarui
    Smart Agriculture    2024, 6 (2): 85-94.   DOI: 10.12133/j.smartag.SA202311021
    Abstract1499)   HTML33)    PDF(pc) (1219KB)(830)       Save

[Objective] With the development of agricultural informatization, a large amount of information about agricultural diseases exists in the form of text. However, due to problems such as nested entities and confusion of entity types, traditional named entity recognition (NER) methods often suffer low accuracy when processing agricultural disease text. To address this issue, this study proposes a new agricultural disease NER method, RoFormer-PointerNet, which combines the RoFormer pre-trained model with a pointer network. The aim is to improve the accuracy of entity recognition in agricultural disease text, providing more accurate data support for the intelligent analysis, early warning, and prevention of agricultural diseases. [Methods] The method first utilized the RoFormer pre-trained model to perform deep vectorization of the input agricultural disease text, a crucial foundation for the subsequent entity extraction task. As an advanced natural language processing model, RoFormer has a distinctive rotary position embedding that endows it with powerful capabilities for capturing positional information. In agricultural disease text, the diversity of terminology and the existence of polysemy mean that traditional entity recognition methods often confuse entity types. Through its positional embedding mechanism, however, the RoFormer model incorporates more positional information into the vector representation, effectively enriching the feature information of words. This enables the model to distinguish entity types more accurately in subsequent extraction, reducing type confusion. After the vectorized representation of the text was completed, a pointer network was employed for entity extraction.
The pointer network is a sequence labeling approach that uses head and tail pointers to annotate entities within sentences. This labeling method is more flexible than traditional sequence labeling because it is not restricted by fixed entity structures, enabling accurate extraction of all entity types within a sentence, including complex entities with nested relationships. In agricultural disease text, entity extraction often faces nesting, for example when several different entity types are nested within a single disease symptom description. By introducing the pointer network, this study effectively addressed entity nesting, improving the accuracy and completeness of entity extraction. [Results and Discussions] To validate the performance of RoFormer-PointerNet, an agricultural disease dataset was constructed, comprising 2 867 annotated corpora and 10 282 entities of eight types: disease names, crop names, disease characteristics, pathogens, infected areas, disease factors, prevention and control methods, and disease stages. In comparative experiments with other pre-trained models such as Word2Vec, BERT, and RoBERTa, RoFormer-PointerNet demonstrated superior precision, recall, and F1-score of 87.49%, 85.76% and 86.62%, respectively, demonstrating the effectiveness of the RoFormer pre-trained model. Additionally, to verify its advantage in mitigating nested entities, it was compared with the widely used bidirectional long short-term memory (BiLSTM) and conditional random field (CRF) decoders combined with the RoFormer pre-trained model. RoFormer-PointerNet outperformed the RoFormer-BiLSTM, RoFormer-CRF, and RoFormer-BiLSTM-CRF models by 4.8%, 5.67% and 3.87%, respectively.
The experimental results indicated that RoFormer-PointerNet significantly outperforms other models in entity recognition performance, confirming the effectiveness of the pointer network in addressing nested entities. To further validate its superiority in agricultural disease NER, a comparative experiment was conducted with eight mainstream NER models such as BiLSTM-CRF, BERT-BiLSTM-CRF, and W2NER. The results showed that RoFormer-PointerNet achieved precision, recall, and F1-score of 87.49%, 85.76% and 86.62%, respectively, on the agricultural disease dataset, the best among comparable methods. [Conclusions] The agricultural disease NER method RoFormer-PointerNet, proposed in this study and based on the RoFormer pre-trained model, demonstrates significant advantages in addressing nested entities and type confusion during entity extraction. It effectively identifies entities in Chinese agricultural disease texts, enhancing the accuracy of entity recognition and providing robust data support for the intelligent analysis, early warning, and prevention of agricultural diseases. This research outcome holds significant importance for promoting the development of agricultural informatization and intelligence.
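The head/tail pointer decoding described above can be sketched as follows. The thresholding and nearest-tail pairing strategy is one common decoding scheme, assumed here for illustration rather than taken from the paper; the token probabilities are toy values.

```python
def decode_spans(head_probs, tail_probs, threshold=0.5):
    """Decode entity spans from head/tail pointer probabilities.

    head_probs, tail_probs: {entity_type: [p_0, ..., p_{n-1}]}
    Returns {entity_type: [(start, end), ...]} with inclusive token indices.
    Because each type is decoded independently, spans of different types
    may overlap or nest, which is how pointer networks handle nested entities.
    """
    spans = {}
    for etype, heads in head_probs.items():
        tails = tail_probs[etype]
        found = []
        for i, hp in enumerate(heads):
            if hp < threshold:
                continue
            # Pair the head with the nearest qualifying tail at or after it.
            for j in range(i, len(tails)):
                if tails[j] >= threshold:
                    found.append((i, j))
                    break
        spans[etype] = found
    return spans

# Toy probabilities over the three tokens of "wheat stripe rust":
# a "disease" span covers all three tokens, a nested "crop" span only the first.
heads = {"disease": [0.9, 0.1, 0.2], "crop": [0.8, 0.0, 0.0]}
tails = {"disease": [0.1, 0.1, 0.9], "crop": [0.7, 0.0, 0.0]}
print(decode_spans(heads, tails))
```

Note how the "crop" span (0, 0) sits inside the "disease" span (0, 2): a BIO-style sequence labeler would have to choose one label per token, whereas per-type pointers extract both.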

    Shrimp Diseases Detection Method Based on Improved YOLOv8 and Multiple Features
    XU Ruifeng, WANG Yaohua, DING Wenyong, YU Junqi, YAN Maocang, CHEN Chen
    Smart Agriculture    2024, 6 (2): 62-71.   DOI: 10.12133/j.smartag.SA201311014
    Abstract1497)   HTML54)    PDF(pc) (1597KB)(8760)       Save

[Objective] In recent years, there has been a steady increase in the occurrence and fatality rates of shrimp diseases, causing substantial losses in shrimp aquaculture. These diseases are marked by swift onset, high infectivity, complex control requirements, and elevated mortality. With the continuous growth of factory shrimp farming, traditional manual detection can no longer keep pace with current requirements; hence an automated solution for identifying shrimp diseases is urgently needed. The main goal of this research is to create a cost-effective computer vision inspection method that balances cost efficiency and detection accuracy. An improved YOLOv8 (You Only Look Once) network and multiple features were employed to detect shrimp diseases. [Methods] To address interference from surface foam, the improved YOLOv8 network was applied to detect and extract surface shrimp as the primary focus of the image. This target detection approach accurately recognizes objects of interest, determining their category and location, with extraction results surpassing those of threshold segmentation. Taking into account the limited computing power available in practical production settings, the network was optimized by reducing parameters and computation, thereby improving detection speed and deployment efficiency. Additionally, the Farnebäck optical flow method and the gray level co-occurrence matrix (GLCM) were employed to capture movement and image texture features from shrimp video clips. A dataset was created from these multiple feature parameters, and a support vector machine (SVM) classifier was trained to categorize the feature parameters of video clips, facilitating the detection of shrimp health.
[Results and Discussions] The improved YOLOv8 effectively enhanced detection accuracy without increasing the number of parameters or FLOPs. According to the ablation experiment, replacing the backbone with the lightweight FasterNet backbone significantly reduces parameters and computation, albeit at the cost of some accuracy. After integrating efficient multi-scale attention (EMA) in the neck, however, mAP@0.5 increased by 0.3% compared to YOLOv8s while mAP@0.95 decreased by only 2.1%, with the parameter count reduced by 45% and FLOPs by 42%. The improved YOLOv8 ranks second only to YOLOv7 in mAP@0.5 and mAP@0.95, lower by just 0.4% and 0.6% respectively, while its parameter count and FLOPs are far smaller than YOLOv7's and match those of YOLOv5. Although YOLOv7-Tiny and YOLOv8-VanillaNet have fewer parameters and FLOPs, their accuracy lags behind: their mAP@0.5 and mAP@0.95 are 22.4%, 36.2%, 2.3%, and 4.7% lower than those of the improved YOLOv8, respectively. An SVM trained on the comprehensive multi-feature dataset achieved an accuracy of 97.625%. As test samples, 150 normal fragments and 150 diseased fragments were randomly selected, and the classifier exhibited a detection accuracy of 89% on these 300 samples. This result indicates that the combination of features extracted with the Farnebäck optical flow method and the GLCM can effectively capture the distinguishing dynamics of movement speed and direction between infected and healthy shrimp. The majority of errors stem from diseased segments being recognized as normal, accounting for 88.2% of the total error.
These errors fall into three main types: 1) floating foam obstructs the water surface, so only a small number of shrimp are extracted from the image; 2) changes in water movement: in this study, nano-tubes were used for oxygenation, generating sprays on the water surface that affected the movement of shrimp; 3) video quality: when the video's pixel count is low, the difference in optical flow between diseased and normal shrimp becomes relatively small. It is therefore advisable to adjust the collection area to the actual production environment and enhance video quality. [Conclusions] The multiple features introduced in this study effectively capture the movement of shrimp and can be employed for disease detection. The improved YOLOv8 is particularly well suited to platforms with limited computational resources and is feasible to deploy in actual production settings. However, the experiment was conducted in a factory farming environment, limiting the applicability of the method to other farming environments. Overall, this method requires only consumer-grade cameras as image acquisition equipment, places low demands on the detection platform, and can provide a theoretical basis and methodological support for the future application of aquatic disease detection methods.
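The GLCM texture features used alongside optical flow above can be sketched in pure Python. The pixel offset, quantization levels, and choice of statistics (contrast and energy, two classic Haralick features) are illustrative assumptions; the paper does not specify its exact GLCM configuration.

```python
def glcm(image, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to joint probabilities. `image` is a 2-D list of integer
    gray levels in [0, levels)."""
    counts = [[0] * levels for _ in range(levels)]
    h, w = len(image), len(image[0])
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                counts[image[y][x]][image[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in counts]

def texture_features(p):
    """Contrast (weights co-occurrences by squared gray-level difference)
    and energy (sum of squared probabilities)."""
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(len(p)) for j in range(len(p)))
    energy = sum(v * v for row in p for v in row)
    return contrast, energy

# A perfectly uniform patch has zero contrast and maximal energy.
flat = [[3, 3], [3, 3]]
print(texture_features(glcm(flat, levels=8)))  # (0.0, 1.0)
```

In practice these features (typically over several offsets and angles) would be concatenated with the optical-flow statistics per video clip to form the feature vectors the SVM classifies; library implementations such as `skimage.feature.graycomatrix` are the usual choice over hand-rolled code.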

    Advances and Challenges in Physiological Parameters Monitoring and Diseases Diagnosing of Dairy Cows Based on Computer Vision
    KANG Xi, LIU Gang, CHU Mengyuan, LI Qian, WANG Yanchao
    Smart Agriculture    2022, 4 (2): 1-18.   DOI: 10.12133/j.smartag.SA202204005
    Abstract1454)   HTML232)    PDF(pc) (1097KB)(2827)       Save

Realizing intelligent farming with advanced information technology, thereby improving the welfare of dairy cows and the economic benefits of dairy farms, has become an important goal and task in dairy farming research. Computer vision technology has the advantages of being non-contact, stress-free, low-cost, and high-throughput, and has broad application prospects in animal production. After describing the importance of computer vision in developing the intelligent farming industry, this paper introduces cutting-edge computer vision techniques for monitoring cow physiological parameters and diagnosing diseases, including temperature monitoring, body size monitoring, weight measurement, mastitis detection, and lameness detection. The introduction covers the development of these studies and the current mainstream techniques, and discusses the problems and challenges in the research and application of the related technology, in particular the susceptibility of current computer-vision-based detection methods to individual differences and environmental changes. Combined with the state of the farming industry, suggestions were put forward on improving the universality of computer vision in intelligent farming, improving the accuracy of monitoring cows' physiological parameters and diagnosing diseases, and reducing the influence of the environment on such systems. Future research should focus on algorithm development, make full use of computer vision's continuous detection and large data volumes to ensure detection accuracy, improve system integration and data utilization, and expand system functionality, while integrating more functions and reducing equipment costs without compromising system performance.

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Advances in the Applications of Deep Learning Technology for Livestock Smart Farming
    GUO Yangyang, DU Shuzeng, QIAO Yongliang, LIANG Dong
    Smart Agriculture    2023, 5 (1): 52-65.   DOI: 10.12133/j.smartag.SA202205009
    Abstract1441)   HTML206)    PDF(pc) (1118KB)(8634)       Save

    Accurate and efficient monitoring of animal information, timely analysis of animal physiological and physical health conditions, and automatic feeding and farming management combined with intelligent technologies are of great significance for large-scale livestock farming. Deep learning techniques, with automatic feature extraction and powerful image representation capabilities, solve many visual challenges and are well suited to monitoring animal information in complex livestock farming environments. To further analyze the research and application of artificial intelligence technology in intelligent animal farming, this paper presents the current state of research on deep learning techniques for target detection and recognition, body condition evaluation and weight estimation, and behavior recognition and quantitative analysis for cattle, sheep and pigs. Among these, target detection and recognition supports the construction of electronic archives of individual animals, on which basis body condition and weight information, behavior information and health status can be linked; this is also the trend of intelligent animal farming. At present, intelligent animal farming still faces many problems and challenges, such as multiple perspectives, multiple scales, multiple scenarios and even small sample sizes for certain behaviors in the data samples, which greatly increase detection difficulty and limit the generalization of intelligent technologies. In addition, animal breeding and the formation of animal habits are long-term processes; accurately monitoring animal health information in real time and effectively feeding it back to producers is also a technical difficulty. According to the actual feeding and management needs of animal farming, the following development directions are proposed. First, enrich the samples and build multi-perspective datasets, and combine semi-supervised or few-shot learning methods to improve the generalization ability of deep learning models, so as to realize perception and analysis of the animals and their environment. Second, the coordinated and harmonious development of humans, intelligent equipment and farmed animals will improve breeding efficiency and management as a whole. Third, the deep integration of big data, deep learning technology and animal farming will greatly promote the development of intelligent animal farming. Last, research is needed on the interpretability and security of artificial intelligence technologies, represented by deep learning models, in the breeding field. This review of the application of deep learning in livestock smart farming provides a reference for the modernization and intelligent development of livestock farming.

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Infield Corn Kernel Detection and Counting Based on Multiple Deep Learning Networks
    LIU Xiaohang, ZHANG Zhao, LIU Jiaying, ZHANG Man, LI Han, FLORES Paulo, HAN Xiongzhe
    Smart Agriculture    2022, 4 (4): 49-60.   DOI: 10.12133/j.smartag.SA202207004
    Abstract1407)   HTML141)    PDF(pc) (3336KB)(2955)       Save

    Machine vision has been increasingly used for agricultural sensing tasks, and deep learning-based detection of in-field corn kernels can improve detection accuracy. To obtain the number of lost corn kernels quickly and accurately after the corn harvest, and to evaluate combine harvester grain loss, a method of directly counting corn kernels in the field using deep learning was developed and evaluated. Firstly, an RGB camera was used to collect images with different backgrounds and illuminations, from which the datasets were generated. Secondly, different target detection networks for kernel recognition were constructed, including Mask R-CNN, EfficientDet-D5, YOLOv5-L and YOLOX-L, and the 420 effective images collected were used to train, validate and test each model; the training, validation and test sets contained 200, 40 and 180 images, respectively. Finally, the counting performance of the different models was evaluated and compared on the test set. The experimental results showed that among the four models, YOLOv5-L had the best overall performance and could reliably identify corn kernels under different scenes and light conditions. The average precision (AP) of the model on the test set was 78.3%, and the model size was 89.3 MB. The correct counting rates in the four scenes of no occlusion, mid-level surface occlusion, severe surface occlusion and aggregation were 98.2%, 95.5%, 76.1% and 83.3%, respectively, with F1 values of 94.7%, 93.8%, 82.8% and 87%. The overall correct counting rate and F1 value on the test set were 90.7% and 91.1%, respectively, at a frame rate of 55.55 f/s; the detection and counting performance was better than that of the Mask R-CNN, EfficientDet-D5 and YOLOX-L networks, with detection accuracy about 5% higher than the second-best model, Mask R-CNN. With good precision, high throughput and proven generalization, YOLOv5-L can realize real-time monitoring of corn harvest loss in practical operation.
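The per-scene correct counting rates and F1 values reported above follow from standard detection counting arithmetic. A minimal sketch in Python (the function names are illustrative, not from the paper):

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall and F1 from detection counts (true positives,
    false positives, false negatives)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def count_accuracy(detected, actual):
    """Relative counting accuracy: 1 - |detected - actual| / actual."""
    return 1.0 - abs(detected - actual) / actual
```

With 90 true positives, 10 false positives and 10 missed kernels, for example, precision, recall and F1 all come out at 0.9.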

    Reference | Related Articles | Metrics | Comments0
    Lightweighted Wheat Leaf Diseases and Pests Detection Model Based on Improved YOLOv8
    YANG Feng, YAO Xiaotong
    Smart Agriculture    2024, 6 (1): 147-157.   DOI: 10.12133/j.smartag.SA202309010
    Abstract1348)   HTML227)    PDF(pc) (1991KB)(22036)       Save

    [Objective] To effectively address the characteristics of wheat leaf pests and diseases in their natural environment, a high-accuracy and efficient detection model named YOLOv8-SS (You Only Look Once Version 8-SS) was proposed. The model is designed to identify pests and diseases accurately, providing a scientific basis for their prevention and management. [Methods] A total of 3 639 raw images of six wheat leaf pests and diseases were collected with mobile phones from farmlands in the Yuchong County area of Gansu Province at different times. The dataset was annotated with the targeted pest species using the LabelImg software. To guarantee the model's generalization capability, the dataset was divided into a training set and a test set in an 8:2 ratio. It includes thorough observations of the wheat leaf's appearance, texture and color, as well as other variables that could influence these characteristics, and served for both training and validation. Building on the YOLOv8 algorithm, the lightweight convolutional neural network ShuffleNetv2 was selected as the base network for image feature extraction, integrating a 3×3 depthwise convolution (DWConv) kernel, the h-swish activation function, and a Squeeze-and-Excitation Network (SENet) attention mechanism. These changes reduced the parameter count and computational demands while sustaining high detection precision. The SENet attention module was employed in both the Backbone and Neck components, significantly reducing computational load while improving accuracy. A dedicated small-target detection layer was also added: a new 160×160 pixel detection feature map lets the network concentrate on small-target pests and diseases, enhancing recognition accuracy. [Results and Discussions] The YOLOv8-SS model accurately detects wheat leaf pests and diseases in their natural environment. Replacing the original backbone with the refined ShuffleNet V2, under identical experimental settings, yielded a 4.53% increase in recognition accuracy and a 4.91% improvement in F1-Score over the initial model. Incorporating the dedicated small-target detection layer raised accuracy and F1-Score by a further 2.31% and 2.16%, respectively, with only a minimal increase in parameters and computation. Integrating the SENet attention module into YOLOv8 increased detection accuracy by 1.85% and F1-Score by 2.72%. Combining the enhanced ShuffleNet V2 with the small-target detection layer (YOLOv8-SS) produced a recognition accuracy of 89.41% and an F1-Score of 88.12%, improvements of 10.11% and 9.92% over the standard YOLOv8, respectively, demonstrating a good balance of speed and precision. The model also converges rapidly, requiring approximately 40 training epochs, and surpasses Faster R-CNN, MobileNetV2, SSD, YOLOv5, YOLOX and the original YOLOv8 in accuracy, with average accuracy 23.01%, 15.13%, 11%, 25.21%, 27.52% and 10.11% higher than the competing models, respectively. On the public LWDCD 2020 dataset, the same YOLOv8-SS architecture achieved an accuracy of 91.30%, 1.89% higher than on the custom-built dataset, while the AI Challenger 2018-6 and Plant-Village-5 datasets yielded accuracies of 86.90% and 86.78%, respectively. YOLOv8-SS shows substantial improvements in feature extraction and learning over the original YOLOv8, particularly in natural environments with complex, unstructured backgrounds. [Conclusions] The YOLOv8-SS model delivers high recognition accuracy with a small storage footprint. Compared with conventional detection models, it offers superior detection accuracy and speed while maintaining a lean architecture, providing a useful reference for research on crop pest and disease detection in natural environments with complex, unstructured backgrounds, and is well suited to real-world, large-scale crop pest and disease detection.
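The h-swish activation and SENet channel attention named above can be sketched numerically. The snippet below is a plain-Python illustration under simplified assumptions (per-channel means already pooled, tiny hand-written weight matrices), not the YOLOv8-SS implementation:

```python
import math

def relu6(x):
    return min(max(x, 0.0), 6.0)

def h_swish(x):
    # h-swish(x) = x * ReLU6(x + 3) / 6, a cheap piecewise approximation of swish
    return x * relu6(x + 3.0) / 6.0

def se_weights(channel_means, w_reduce, w_expand):
    """Squeeze-and-Excitation gating: the squeeze step is assumed done
    (channel_means = global average pool per channel); excite via two
    tiny dense layers, then gate each channel with a sigmoid."""
    hidden = [max(0.0, sum(c * w for c, w in zip(channel_means, row)))
              for row in w_reduce]                     # ReLU bottleneck
    logits = [sum(h * w for h, w in zip(hidden, row)) for row in w_expand]
    return [1.0 / (1.0 + math.exp(-z)) for z in logits]  # per-channel gates in (0, 1)
```

Each feature map channel is then rescaled by its gate, so channels the excitation network deems informative are amplified.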

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Research Progress and Technology Trend of Intelligent Monitoring of Dairy Cow Motion Behavior
    WANG Zheng, SONG Huaibo, WANG Yunfei, HUA Zhixin, LI Rong, XU Xingshi
    Smart Agriculture    2022, 4 (2): 36-52.   DOI: 10.12133/j.smartag.SA202203011
    Abstract1318)   HTML153)    PDF(pc) (1155KB)(6540)       Save

    The motion behavior of dairy cows contains a wealth of health information. Applying information and intelligent technologies helps farms grasp the health status of dairy cows in time and improves breeding efficiency. In this paper, the development trend of intelligent monitoring technology for cow motion behavior was analyzed. Firstly, after expounding the significance of monitoring the basic motions (lying, walking, standing), oestrus, breathing, rumination and limping of dairy cows, the necessity of behavior monitoring was introduced. Secondly, the current research status was summarized, in chronological order, for contact and non-contact monitoring methods, and the principles and achievements of related research were introduced in detail and classified. Current contact monitoring methods mainly rely on acceleration sensors, pedometers and pressure sensors, while non-contact methods mainly rely on video images, including traditional video image analysis and deep learning-based video image analysis. Then, the development status of the cow behavior monitoring industry was analyzed, and the main businesses and mainstream products of representative livestock-farm automation equipment suppliers were listed. Industry leaders such as Afimilk and DeLaval, and their products such as the intelligent collar (AfiCollar), pedometer (AfiAct II Tag) and automatic milking equipment (VMS™ V300), were introduced. After that, the problems and challenges of current contact and non-contact monitoring methods were put forward. Current intelligent monitoring of dairy cow motion behavior relies mainly on wearable devices, which have disadvantages such as causing stress to cows and being difficult to install and maintain. Although non-contact monitoring based on video image analysis causes no stress to cows and is low-cost, the relevant research is still in its infancy and remains some distance from commercial use. Finally, future development directions of the relevant key technologies were discussed, including miniaturization and integration of wearable monitoring equipment, improving the robustness of computer vision technology, multi-target monitoring with limited equipment, and promoting technology industrialization.
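As a concrete illustration of the acceleration-sensor approach described above, a toy classifier over a window of accelerometer magnitudes might look as follows. The thresholds are hypothetical placeholders, not calibrated values from any cited product or study:

```python
def classify_activity(accel_window, rest_thresh=0.05, walk_thresh=0.4):
    """Rough behaviour label from a window of accelerometer magnitudes
    (in g): low variance suggests lying/standing still, moderate
    variance walking, high variance more vigorous activity."""
    n = len(accel_window)
    mean = sum(accel_window) / n
    var = sum((a - mean) ** 2 for a in accel_window) / n
    if var < rest_thresh:
        return "resting"                    # lying or standing still
    return "walking" if var < walk_thresh else "active"
```

Commercial collars and pedometers use far richer features (step counts, axis-specific signals, temporal models), but the variance-thresholding idea is the common starting point.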

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Technological Revolution, Disruptive Technology and Smart Agriculture
    HU Ruifa, LIU Wanjiawen
    Smart Agriculture    2022, 4 (4): 138-143.   DOI: 10.12133/j.smartag.SA202205002
    Abstract1317)   HTML151)    PDF(pc) (475KB)(1202)       Save

    This paper described the concept and basic connotation of scientific and technological revolution and defined endogenous and exogenous agricultural disruptive technologies. A revolution of agricultural science and technology refers to the process in which a key disruptive core technology innovation applied to agricultural production drives a series of technological innovations adopted in production. Endogenous disruptive technology in agriculture refers to technology that fundamentally changes the indicators of the original technology, such as productivity, overturns the economic or social necessity of adopting the original technology, and completely replaces it. In particular, the paper put forward the concept of transboundary technology and demonstrated its application and impact on the development of the agricultural industry. Transboundary technology for exogenous application refers to technology whose original invention and innovation were applied in non-agricultural fields and had nothing to do with the agricultural industry. Focusing on smart agriculture as a typical transboundary technology, the paper analyzed the characteristics of smart agriculture and discussed its impacts on traditional agricultural production and rural transformation. Smart agriculture technology will be the disruptive core technology promoting a new round of technological and industrial revolution and rural transformation. It will fundamentally change the production mode of traditional agriculture, realize factory-style production and promote the revolutionary transformation of rural areas. The production and application of smart agricultural technology in China has shown good economic and social benefits and great potential, but the application of intelligent agricultural technology based on artificial intelligence is still at an exploratory stage. As an agricultural application of transboundary technology, smart agricultural technology with intelligent sensing at its core is not dominated by agricultural scientists, unlike the agricultural machinery, chemical and green revolution technologies. At present, the application of smart agriculture technology in China is only in its infancy. Hence, policy recommendations were proposed to promote the development of smart agriculture: strengthening the development of key disruptive technologies, reforming the agricultural higher education system, promoting the development of transboundary technology in the agricultural industry, and pushing the application of smart agriculture technology in high-standard farmland and large-scale farms.

    Reference | Related Articles | Metrics | Comments0
    Automatic Measurement of Multi-Posture Beef Cattle Body Size Based on Depth Image
    YE Wenshuai, KANG Xi, HE Zhijiang, LI Mengfei, LIU Gang
    Smart Agriculture    2022, 4 (4): 144-155.   DOI: 10.12133/j.smartag.SA202210001
    Abstract1316)   HTML101)    PDF(pc) (1532KB)(2794)       Save

    Beef cattle on farms are active, so their posture at image collection time is changeable, making automatic body size measurement difficult. To address this problem, an automatic multi-pose body size measurement method was proposed based on analyzing the skeleton features of the beef cattle head and the edge contour features of beef cattle images. Firstly, the consumer-grade depth camera Azure Kinect DK was used to collect top-view depth video directly above the cattle, and the video was split into frames to obtain the original depth images. Secondly, the original depth images were processed by shadow interpolation, normalization, image segmentation and connected-domain analysis to remove the complex background and obtain target images containing only the cattle. Thirdly, the Zhang-Suen algorithm was used to extract the skeleton from the target image, and the intersection points and endpoints of the skeleton were calculated; the head features were then analyzed to determine the head removal point, and the head information was removed from the image. Finally, the curvature curve of the cattle profile was obtained by an improved U-chord curvature method; the body measurement points were determined according to the curvature values and converted into three-dimensional space to calculate the body size parameters. Based on a large amount of depth image data, the postures of the cattle were divided into left-crooked, right-crooked, correct posture, head-down and head-up. The test results showed that the proposed skeleton-based head removal method achieved a head removal success rate higher than 92% in all five postures. Using the proposed measurement point extraction method based on the improved U-chord curvature, the mean absolute error was 2.73 cm for body length, 2.07 cm for body height and 1.47 cm for belly width. The method provides an effective way to achieve automatic measurement of beef cattle body size in multiple poses.
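The curvature-based selection of measurement points can be illustrated with the classic three-point (circumscribed-circle) curvature estimate. This is a simplified stand-in for the improved U-chord curvature used in the paper, shown only to make the idea concrete:

```python
import math

def three_point_curvature(p1, p2, p3):
    """Curvature of the circle through three 2-D contour points:
    k = 4 * triangle_area / (|p1p2| * |p2p3| * |p1p3|).
    High curvature along the contour flags candidate measurement points."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    if a * b * c == 0:
        return 0.0          # degenerate (coincident points)
    return 4.0 * area / (a * b * c)
```

Three points on a unit circle give curvature 1, and collinear points give 0; scanning this value along the contour peaks at the sharp features used for body measurement.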

    Reference | Related Articles | Metrics | Comments0
    Typical Raman Spectroscopy Technology and Research Progress in Agriculture Detection
    GAO Zhen, ZHAO Chunjiang, YANG Guiyan, DONG Daming
    Smart Agriculture    2022, 4 (2): 121-134.   DOI: 10.12133/j.smartag.SA202201013
    Abstract1302)   HTML134)    PDF(pc) (819KB)(9727)       Save

    Raman spectroscopy is a type of scattering spectroscopy that is rapid, less susceptible to moisture interference, requires no sample pre-treatment and allows in vivo detection. As a powerful characterization tool for analyzing the molecular composition and structure of substances, Raman spectroscopy is playing an increasingly important role in the detection of plant and animal phenotypes, food safety, and soil and water quality in agriculture as the technology continues to improve. In this paper, the detection principles of Raman spectroscopy are introduced, and new progress in eight Raman spectroscopy technologies is summarized, including confocal microscopy Raman spectroscopy, Fourier transform Raman spectroscopy, surface-enhanced Raman spectroscopy, tip-enhanced Raman spectroscopy, resonance Raman spectroscopy, spatially offset Raman spectroscopy, shifted-excitation Raman difference spectroscopy and Raman spectroscopy based on nonlinear optics; their advantages, disadvantages and application scenarios are presented, respectively. The applications of Raman spectroscopy in plant detection, soil detection, water quality detection and food detection are summarized, subdivided into plant phenotyping, plant stress, soil pesticide residue detection, soil colony detection, soil nutrient detection, food pesticide detection, food quality detection, food adulteration detection and water quality detection. For future agricultural applications, eliminating the fluorescence background caused by complex living organisms is a key research direction, and the study of stable enhancement substrates is an important direction for surface-enhanced Raman spectroscopy (SERS). To meet the demands of different measurement scenarios, portable and remote Raman spectrometers will also play an important role. Raman spectroscopy needs to be further explored for a wide variety of research objects in agriculture, especially in animal science, where relevant studies are still scarce. In existing fields of agricultural research, the characterization of more specific substances by Raman spectroscopy should be pursued, which would broaden its range of uses in agriculture. Further, pursuing lower detection limits and higher stability for practical applications is also a development direction for Raman spectroscopy in agriculture. Finally, the challenges to be solved and the future development directions of Raman spectroscopy in agriculture are proposed, in the hope of inspiring future agricultural production and research.
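All of these techniques report spectra on the Raman shift axis, computed from the excitation and scattered wavelengths; a one-line sketch of that standard conversion:

```python
def raman_shift_cm1(excitation_nm, scattered_nm):
    """Raman shift in wavenumbers (cm^-1) from excitation and scattered
    wavelengths in nm: delta_nu = 1e7 * (1/lambda_ex - 1/lambda_s).
    Positive values are Stokes shifts, negative anti-Stokes."""
    return 1e7 * (1.0 / excitation_nm - 1.0 / scattered_nm)
```

For example, light scattered at about 526 nm under 500 nm excitation corresponds to a Stokes shift of roughly 1 000 cm⁻¹.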

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Pig Back Transformer: Automatic 3D Pig Body Measurement Model
    WANG Yuxiao, SHI Yuanyuan, CHEN Zhaoda, WU Zhenfang, CAI Gengyuan, ZHANG Sumin, YIN Ling
    Smart Agriculture    2024, 6 (4): 76-90.   DOI: 10.12133/j.smartag.SA202401023
    Abstract1278)   HTML41)    PDF(pc) (2776KB)(1175)       Save

    [Objective] Most non-contact body size measurement studies are based on point cloud segmentation: a trained point cloud segmentation neural network segments the pig's point cloud, and measurement points are then located on the segments. However, point cloud segmentation networks require large graphics processing unit (GPU) memory, and the accuracy of the measurement key points still leaves room for improvement. This study aimed to design a key-point-generating neural network that extracts measurement key points directly from the pig's point cloud, reducing GPU memory usage while improving key point accuracy, thereby improving both the efficiency and the accuracy of body size measurement. [Methods] A neural network model, the Pig Back Transformer, was proposed using an improved Transformer attention mechanism to generate key points and back orientation points related to pig body dimensions. The first part of the network consists of an embedding structure for initial feature extraction and a Transformer encoder with edge attention, a self-attention mechanism improved from the Transformer encoder. The embedding structure, which uses two shared multilayer perceptrons (MLP) and a distance-embedding algorithm, takes a set of points from the edge of the pig back's point cloud as input and extracts information from the edge point set. The encoder incorporates the offset distances between the edge points and the centroid, features extracted by the embedding structure. Additionally, a back-edge-point extraction algorithm was designed to produce the input of the neural network. The second part of the network is a Transformer encoder with another improved self-attention mechanism, called back attention. In back attention, an embedding structure before the encoder extracts features from offset values, calculated from the non-edge points, downsampled by farthest point sampling (FPS), to both the relative centroid and the global key point generated by the first part of the model. These offset values are then processed with max pooling, with attention generated from the features of the points' axes, to extract more information than the original Transformer encoder can with the same number of parameters. The output of the model is a set of offsets for the key points and the back-direction fitting points; adding these offsets to the global key point yields the points used for body measurement. Finally, the methods for calculating the body dimensions (body length, height, shoulder width, abdomen width, hip width, chest circumference and abdomen circumference) from the key points and back-direction fitting points were introduced. [Results and Discussions] In the task of generating key points and back-direction fitting points, the Pig Back Transformer achieved the best accuracy among the tested models of the same parameter size, and the back orientation points it generated were evenly distributed, a good basis for accurate body length calculation. An ablation test was conducted on the edge detection part, the two attention mechanisms, and the edge-trimming preprocessing step: removing the edge detection or the attention mechanisms strongly degraded performance, while removing the edge-trimming step had a moderate impact on the trained model but made the training loss less consistent. Compared with manual measurements, the relative error of body length was 0.63%, an improvement over other models. The relative errors of shoulder width, abdomen width and hip width were slightly better than those of other models, though not significantly so, while the relative errors of chest and abdomen circumference lagged behind existing methods, because the circumference calculation was not sophisticated enough to handle edge cases in the dataset, namely point clouds with large holes at the bottom of the abdomen and chest, which strongly affected the results. [Conclusions] The improved Pig Back Transformer demonstrates higher accuracy in generating key points, is more resource-efficient, enables more accurate pig body measurements, and provides a new perspective for non-contact livestock body size measurement.
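The farthest point sampling (FPS) step mentioned above can be sketched as a simple greedy loop; this is a generic FPS implementation for illustration, not the paper's code:

```python
def farthest_point_sampling(points, k):
    """Greedy FPS: start from the first point, then repeatedly pick the
    point with the largest distance to the set already selected.
    points: list of (x, y, z) tuples; returns k selected points."""
    selected = [points[0]]
    min_sq_dist = [float("inf")] * len(points)
    for _ in range(k - 1):
        for i, p in enumerate(points):
            # squared distance to the most recently selected point
            d = sum((a - b) ** 2 for a, b in zip(p, selected[-1]))
            min_sq_dist[i] = min(min_sq_dist[i], d)
        selected.append(points[max(range(len(points)),
                                   key=min_sq_dist.__getitem__)])
    return selected
```

FPS gives a subsample that covers the cloud evenly, which is why it is a common downsampling choice before computing point offsets.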

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Identification and Severity Classification of Typical Maize Foliar Diseases Based on Hyperspectral Data
    SHEN Yanyan, ZHAO Yutao, CHEN Gengshen, LYU Zhengang, ZHAO Feng, YANG Wanneng, MENG Ran
    Smart Agriculture    2024, 6 (2): 28-39.   DOI: 10.12133/j.smartag.SA202310016
    Abstract1273)   HTML74)    PDF(pc) (1519KB)(837)       Save

    [Objective] In recent years, there has been a significant increase in the severity of leaf diseases in maize, with a noticeable trend of mixed occurrence. This poses a serious threat to the yield and quality of maize. However, there is a lack of studies that combine the identification of different types of leaf diseases and their severity classification, which cannot meet the needs of disease prevention and control under the mixed occurrence of different diseases and different severities in actual maize fields. [Methods] A method was proposed for identifying the types of typical leaf diseases in maize and classifying their severity using hyperspectral technology. Hyperspectral data of three leaf diseases of maize: northern corn leaf blight (NCLB), southern corn leaf blight (SCLB) and southern corn rust (SCR), were obtained through greenhouse pathogen inoculation and natural inoculation. The spectral data were preprocessed by spectral standardization, SG filtering, sensitive band extraction and vegetation index calculation, to explore the spectral characteristics of the three leaf diseases of maize. Then, the inverse frequency weighting method was utilized to balance the number of samples to reduce the overfitting phenomenon caused by sample imbalance. Relief-F and variable selection using random forests (VSURF) method were employed to optimize the sensitive spectral features, including band features and vegetation index features, to construct models for disease type identification based on the full stages of disease development (including all disease severities) and for individual disease severities using several representative machine learning approaches, demonstrating the effectiveness of the research method. 
Furthermore, the study individual occurrence severity classification models were also constructed for each single maize leaf disease, including the NCLB, SCLB and SCR severity classification models, respectively, aiming to achieve full-process recognition and disease severity classification for different leaf diseases. Overall accuracy (OA) and Macro F1 were used to evaluate the model accuracy in this study. Results and Discussion The research results showed significant spectrum changes of three kinds of maize leaf diseases primarily focusing on the visible (550-680 nm), red edge (740-760 nm), near-infrared (760-1 000 nm) and shortwave infrared (1 300-1 800 nm) bands. Disease-specific spectral features, optimized based on disease spectral response rules, effectively identified disease species and classify their severity. Moreover, vegetation index features were more effective in identifying disease-specific information than sensitive band features. This was primarily due to the noise and information redundancy present in the selected hyperspectral sensitive bands, whereas vegetation index could reduce the influence of background and atmospheric noise to a certain extent by integrating relevant spectral signals through band calculation, so as to achieve higher precision in the model. Among several machine learning algorithms, the support vector machine (SVM) method exhibited better robustness than random forest (RF) and decision tree (DT). In the full stage of disease development, the optimal overall accuracy (OA) of the disease classification model constructed by SVM based on vegetation index reached 77.51%, with a Macro F1 of 0.77, representing a 28.75% increase in OA and 0.30 higher of Macro F1 compared to the model based on sensitive bands. Additionally, the accuracy of the disease classification model with a single severity of the disease increased with the severity of the disease. 
The accuracy of disease classification at the early stage of disease development (OA=70.31%) closely approached that of the full development stage (OA=77.51%). At the moderate severity stage, the optimal classification accuracy (OA=80.00%) surpassed that of the full development stage, and at the severe stage the optimal accuracy reached 95.06%, with a Macro F1 of 0.94. This heightened accuracy at the severe stage can be attributed to significant changes in the pigment content, water content and cell structure of the diseased leaves, which intensified the spectral response of each disease and enhanced the differentiation between diseases. Among the severity classification models, the optimal accuracy for all three maize leaf diseases exceeded 70%, and the NCLB model performed best: using SVM with the optimal vegetation index features, it achieved an OA of 86.25% and a Macro F1 of 0.85. In comparison, the accuracy of the SCLB severity classification model (OA=70.35%, Macro F1=0.70) and the SCR severity classification model (OA=71.39%, Macro F1=0.69) was lower than that of NCLB. [Conclusions] These results demonstrate that the types and severity of common maize leaf diseases can be effectively identified and classified using hyperspectral data. This lays the groundwork for further research and provides a theoretical basis for large-scale crop disease monitoring, contributing to precision prevention and control and to the promotion of green agriculture.
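
The inverse frequency weighting and Macro F1 evaluation described above can be sketched in a few lines. A minimal illustration, assuming the common weight formula w_c = N / (K · n_c) for N samples, K classes and class count n_c (the paper does not state its exact variant):

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Weight each class by N / (K * n_c), so rarer classes count more."""
    classes, counts = np.unique(labels, return_counts=True)
    n, k = len(labels), len(classes)
    return {c: n / (k * cnt) for c, cnt in zip(classes, counts)}

def macro_f1(y_true, y_pred):
    """Macro F1: unweighted mean of per-class F1 scores."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s = []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))

# Toy imbalanced sample set: 6 NCLB, 3 SCLB, 1 SCR
labels = ["NCLB"] * 6 + ["SCLB"] * 3 + ["SCR"]
w = inverse_frequency_weights(labels)   # SCR gets the largest weight
```

Such a weight dictionary can be passed directly to classifiers that accept per-class weights (e.g. the `class_weight` argument of SVM implementations).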

    CSD-YOLOv8s: Dense Sheep Small Target Detection Model Based on UAV Images
    WENG Zhi, LIU Haixin, ZHENG Zhiqiang
    Smart Agriculture    2024, 6 (4): 42-52.   DOI: 10.12133/j.smartag.SA202401004
    Abstract1269)   HTML92)    PDF(pc) (1772KB)(538)       Save

    [Objective] The monitoring of livestock grazing in natural pastures is a key aspect of the transformation and upgrading of large-scale breeding farms. To meet the demand of large-scale farms for accurate, real-time detection of large numbers of sheep, a high-precision and easy-to-deploy small-target detection model, CSD-YOLOv8s, was proposed to realize real-time detection of individual sheep, which appear as small targets in the high-altitude view of an unmanned aerial vehicle (UAV). [Methods] Firstly, a UAV was used to acquire video data of sheep in natural grassland pastures under different backgrounds and lighting conditions; together with several downloaded public datasets, this formed the original image data, and the sheep detection dataset was generated through data cleaning and labeling. Secondly, to address the difficulty of detecting sheep in dense, mutually occluding flocks, the SPPFCSPC module was constructed with cross-stage partial connections based on the you only look once (YOLO)v8 model. It combined the original features with the output features of the fast spatial pyramid pooling network, fully retained the feature information at different stages of the model, effectively alleviated the problems of small targets and severe occlusion, and improved the model's detection performance for small sheep targets. In the Neck part of the model, the convolutional block attention module (CBAM) was introduced to enhance feature capture in both the spatial and channel dimensions, suppressing background information spatially and focusing on the sheep targets across channels, which strengthened the network's anti-interference ability and improved the detection of multi-scale sheep under complex backgrounds and different illumination conditions. 
Finally, to improve the real-time performance and deployability of the model, the standard convolutions of the Neck network were replaced with a lightweight C2f_DS module using a selectable kernel, which adaptively chose the appropriate convolution kernel for feature extraction according to the input features. This handled changes of input scale during sheep detection more flexibly while reducing the number of model parameters and increasing inference speed. [Results and Discussions] The improved CSD-YOLOv8s model exhibited excellent performance in the sheep detection task. Compared with YOLO, Faster R-CNN and other classical network models, CSD-YOLOv8s achieved higher detection accuracy with comparable detection speed and model size, running at 87 frames per second (FPS). Compared with the YOLOv8s model, Precision improved from 93.0% to 95.2% and mAP from 91.2% to 93.1%. The model showed strong robustness to sheep targets with different degrees of occlusion and different scales, effectively alleviating the serious missed and false detections caused by small sheep targets, large background noise and high flock density in the UAV-to-ground sheep detection task on grassland pastures. Validated on the public PASCAL VOC 2007 dataset, the CSD-YOLOv8s model improved the detection accuracy of 20 different object categories, including vehicles and animals; for sheep in particular, detection accuracy improved by 9.7%. [Conclusions] This study establishes a sheep dataset based on drone images and proposes the CSD-YOLOv8s model for detecting grazing sheep in natural grasslands. 
The model addresses the serious issues of missed detections and false alarms in sheep detection under complex backgrounds and lighting conditions, enabling more accurate detection of grazing livestock in drone images. It achieves precise detection of targets with varying degrees of clustering and occlusion and possesses good real-time performance. This model provides an effective detection method for detecting sheep herds from the perspective of drones in natural pastures and offers technical support for large-scale livestock detection in breeding farms, with wide-ranging potential applications.
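
The CBAM attention used in the Neck above combines a channel branch (a shared MLP over globally average- and max-pooled features) with a spatial branch (a convolution over stacked channel-wise average and max maps). A dependency-free numpy sketch, with the spatial branch's learned 7×7 convolution replaced by a simple box filter purely for illustration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """CBAM channel branch: shared MLP on avg- and max-pooled descriptors."""
    # x: (C, H, W); w1: (C//r, C); w2: (C, C//r)
    avg = x.mean(axis=(1, 2))                       # (C,)
    mx = x.max(axis=(1, 2))                         # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)    # FC -> ReLU -> FC
    scale = sigmoid(mlp(avg) + mlp(mx))             # per-channel gate in (0, 1)
    return x * scale[:, None, None]

def spatial_attention(x, k=7):
    """CBAM spatial branch; the learned k x k conv is stood in for by a
    box filter here to keep the sketch dependency-free."""
    m = (x.mean(axis=0) + x.max(axis=0)) / 2.0      # (H, W) pooled map
    p = k // 2
    mp = np.pad(m, p, mode="edge")
    out = np.empty_like(m)
    H, W = m.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = mp[i:i + k, j:j + k].mean()
    return x * sigmoid(out)[None, :, :]             # per-pixel gate in (0, 1)

rng = np.random.default_rng(0)
C, H, W = 8, 16, 16
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C)) * 0.1
w2 = rng.standard_normal((C, C // 2)) * 0.1
y = spatial_attention(channel_attention(x, w1, w2))
```

Because both gates lie in (0, 1), the module can only rescale features, suppressing background responses rather than adding new activations.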

    Research Progress and Outlook of Livestock Feeding Robot
    YANG Liang, XIONG Benhai, WANG Hui, CHEN Ruipeng, ZHAO Yiguang
    Smart Agriculture    2022, 4 (2): 86-98.   DOI: 10.12133/j.smartag.SA202204001
    Abstract1266)   HTML134)    PDF(pc) (1912KB)(4276)       Save

    The production mode of livestock breeding has changed from extensive to intensive, and production levels have improved. However, low labor productivity and labor shortages have seriously restricted the rapid development of China's livestock breeding industry. As a new type of intelligent agricultural machinery, agricultural robots integrate advanced technologies such as intelligent monitoring, automatic control, image recognition, environmental modeling algorithms, sensors and flexible actuation. Using modern information and artificial intelligence technology to develop livestock feeding and feed-pushing robots, realize digital and intelligent livestock breeding, and improve breeding productivity is the main way to resolve the above contradiction. To analyze the research status of robot technology in livestock breeding in depth, products and literature were collected worldwide. This paper introduces the research progress of livestock feeding robots from three aspects, rail feeding robots, self-propelled feeding robots and feed-pushing robots, and analyzes their technical characteristics and practical applications. The rail feeding robot runs automatically along a fixed track, identifies and positions the target animals, and completes feed delivery precisely through preset programs to achieve accurate feeding. The self-propelled feeding robot can move freely around the farm and has automatic navigation and positioning functions; its control system takes a single-chip microcontroller as the core and realizes autonomous movement through sensors and communication modules. Compared with the rail feeding robot, its feeding process is more flexible and convenient, which is more conducive to promotion and application on livestock farms. The feed-pushing robot automatically pushes feed back to the feeding area, promotes increased feed intake and effectively reduces the farm's labor demand. 
Comparing the feeding robots of developed countries and China in terms of technology and application, it is found that China has achieved some technological innovation, while developed countries do better in product application. The development of livestock robots is then prospected. In terms of strategic planning, it is necessary to keep up with international trends in technological development and formulate an agricultural robot development strategy suited to China's national conditions. In terms of core technologies, more attention should be paid to integrating information perception, intelligent sensors and deep learning algorithms to realize human-computer interaction. In terms of future trends, it is urgent to strengthen innovation and improve the friendliness, intelligence and learning ability of robots. To sum up, intelligent feeding and feed-pushing operations have become a cutting-edge technology in the field of smart agriculture, which will lead to a new round of reform in agricultural production technology and promote the transformation and upgrading of China's livestock industry.

    Multi-Class on-Tree Peach Detection Using Improved YOLOv5s and Multi-Modal Images
    LUO Qing, RAO Yuan, JIN Xiu, JIANG Zhaohui, WANG Tan, WANG Fengyi, ZHANG Wu
    Smart Agriculture    2022, 4 (4): 84-104.   DOI: 10.12133/j.smartag.SA202210004
    Abstract1202)   HTML138)    PDF(pc) (3285KB)(2206)       Save

    Accurate peach detection is a prerequisite for automated agronomic management, e.g., mechanical peach harvesting. However, due to uneven illumination and ubiquitous occlusion, it is challenging to detect peaches, especially when they are bagged in orchards. To this end, an accurate multi-class peach detection method was proposed by improving YOLOv5s and using multi-modal visual data for mechanical harvesting. An RGB-D dataset with multi-class annotations of naked and bagged peaches was constructed, including 4127 multi-modal images of pixel-aligned color, depth and infrared images acquired with a consumer-level RGB-D camera. Subsequently, an improved lightweight YOLOv5s (small depth) model was put forward by introducing a direction-aware and position-sensitive attention mechanism, which could capture long-range dependencies along one spatial direction and preserve precise positional information along the other, helping the network accurately detect peach targets. Meanwhile, depthwise separable convolution was employed to reduce model computation by decomposing the convolution operation into a per-channel (depthwise) spatial convolution followed by a pointwise convolution across channels, which sped up training and inference while maintaining accuracy. Comparison experiments demonstrated that the improved YOLOv5s using multi-modal visual data recorded detection mAPs of 98.6% and 88.9% on naked and bagged peaches, respectively, with 5.05 M model parameters under complex illumination and severe occlusion, an increase of 5.3% and 16.5% over using RGB images alone, and of 2.8% and 6.2% over YOLOv5s. Compared with other networks in detecting bagged peaches, the improved YOLOv5s performed best in terms of mAP, which was 16.3%, 8.1% and 4.5% higher than YOLOX-Nano, PP-YOLO-Tiny and EfficientDet-D0, respectively. 
In addition, the improved YOLOv5s model outperformed other methods to varying degrees in detecting Fuji apples and Hayward kiwifruit, verifying its effectiveness on different fruit detection tasks. Further investigation revealed the contribution of each imaging modality, as well as of the proposed improvements to YOLOv5s, to the favorable detection results for both naked and bagged peaches in natural orchards. Additionally, on a popular mobile hardware platform, the improved YOLOv5s model could perform 19 detections per second with the five-channel multi-modal images, offering real-time peach detection. These promising results demonstrate the potential of the improved YOLOv5s and multi-modal visual data with multi-class annotations to achieve visual intelligence in automated fruit harvesting systems.
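
The parameter saving from the depthwise separable convolution mentioned above is easy to quantify: a standard k×k convolution costs C_in · C_out · k² parameters, while the depthwise-plus-pointwise factorization costs C_in · k² + C_in · C_out. A small sketch with illustrative channel counts (not the actual layer sizes of the paper's network):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k conv (one spatial filter per input channel)
    plus a 1x1 pointwise conv that mixes channels."""
    return c_in * k * k + c_in * c_out

# Illustrative mid-network layer: 128 -> 256 channels, 3x3 kernel
std = conv_params(128, 256, 3)                  # 294 912 parameters
dws = depthwise_separable_params(128, 256, 3)   # 1 152 + 32 768 = 33 920
ratio = std / dws                               # roughly 8.7x fewer parameters
```

The ratio approaches 1/(1/C_out + 1/k²), so for a 3×3 kernel and many output channels the factorization saves close to a factor of 9, which is what makes the model lighter without changing the receptive field.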

    Research Progress and Prospects of Key Navigation Technologies for Facility Agricultural Robots
    HE Yong, HUANG Zhenyu, YANG Ningyuan, LI Xiyao, WANG Yuwei, FENG Xuping
    Smart Agriculture    2024, 6 (5): 1-19.   DOI: 10.12133/j.smartag.SA202404006
    Abstract1193)   HTML294)    PDF(pc) (2130KB)(3997)       Save

    [Significance] With the rapid development of robotics and the persistent rise of labor costs, robots are being applied increasingly widely in facility agriculture. These robots can enhance operational efficiency, reduce labor costs and minimize human error. However, the complexity and diversity of facility environments, including varying crop layouts and lighting conditions, impose higher demands on robot navigation. Therefore, achieving stable, accurate and rapid navigation has become a key issue. Advanced sensor technologies and algorithms have been proposed to enhance robots' adaptability and decision-making in dynamic environments, which not only elevates the automation level of agricultural production but also contributes to more intelligent agricultural management. [Progress] This paper reviews the key technologies of automatic navigation for facility agricultural robots. It details beacon localization, inertial positioning, simultaneous localization and mapping (SLAM) techniques, and sensor fusion methods used in autonomous localization and mapping. Depending on the sensors employed, SLAM technology can be subdivided into vision-based, laser-based and fusion systems. Fusion localization is further categorized into data-level, feature-level and decision-level fusion according to the types and stages of the fused information. The application of SLAM and fusion localization in facility agriculture is increasingly common. Global path planning plays a crucial role in enhancing the operational efficiency and safety of facility agricultural robots. This paper discusses global path planning, classifying it into point-to-point local path planning and global traversal path planning; furthermore, based on the number of optimization objectives, it can be divided into single-objective and multi-objective path planning. 
Regarding automatic obstacle avoidance, the paper discusses several obstacle avoidance control algorithms commonly used in facility agriculture, including the artificial potential field, the dynamic window approach and deep learning methods; among them, deep learning methods are often employed for perception and decision-making in obstacle avoidance scenarios. [Conclusions and Prospects] Currently, the challenges for facility agricultural robot navigation include complex scenarios with significant occlusion, cost constraints, low operational efficiency and the lack of standardized platforms and public datasets. These issues not only affect the practical effectiveness of robots but also constrain the further advancement of the industry. To address these challenges, future research can focus on developing multi-sensor fusion technologies, applying and optimizing advanced algorithms, investigating and implementing multi-robot collaborative operation, and establishing standardized, shared data platforms.
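
Of the obstacle avoidance algorithms named above, the artificial potential field is the simplest to sketch: the goal exerts an attractive force, obstacles within a cutoff range exert repulsive forces, and the robot steps along the resultant. A minimal 2-D numpy sketch; the gains, cutoff range and step size are illustrative, not values from any reviewed system:

```python
import numpy as np

def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=50.0, d0=2.0, step=0.05):
    """One step of the artificial potential field method."""
    force = k_att * (goal - pos)                        # attractive term
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-9 < d < d0:                               # repulsion only inside d0
            force += k_rep * (1.0 / d - 1.0 / d0) / d**3 * diff
    n = np.linalg.norm(force)
    return pos + step * force / n if n > 1e-9 else pos

pos, goal = np.array([0.0, 0.0]), np.array([10.0, 0.0])
obstacles = [np.array([5.0, 0.5])]                      # one obstacle near the path
clearance = []
for _ in range(1000):
    pos = apf_step(pos, goal, obstacles)
    clearance.append(np.linalg.norm(pos - obstacles[0]))
    if np.linalg.norm(goal - pos) < 0.1:                # close enough to the goal
        break
```

The known weakness of this method, local minima when attraction and repulsion cancel, is one reason the dynamic window approach and learned policies are also used in practice.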

    Corn and Soybean Futures Price Intelligent Forecasting Based on Deep Learning
    XU Yulin, KANG Mengzhen, WANG Xiujuan, HUA Jing, WANG Haoyu, SHEN Zhen
    Smart Agriculture    2022, 4 (4): 156-163.   DOI: 10.12133/j.smartag.SA20220712
    Abstract1186)   HTML143)    PDF(pc) (872KB)(1680)       Save

    Corn and soybean are upland grain crops grown in the same season, and competition for land between them is prominent in China, so it is necessary to explore the price relationship between corn and soybean. In addition, agricultural futures have a price discovery function relative to the spot market. Therefore, analyzing and predicting corn and soybean futures prices is of great significance for management departments adjusting planting structure and for farmers selecting crop varieties. In this study, the correlation between corn and soybean futures prices was analyzed: a correlation test found a strong correlation between the two, and a Granger causality test showed that the soybean futures price is a Granger cause of the corn futures price. Then, corn and soybean futures prices were predicted using a long short-term memory (LSTM) model. To optimize prediction performance, an attention mechanism was introduced (Attention-LSTM) to assign weights to the outputs of the LSTM model at different time steps. Specifically, the LSTM model processed the input sequence of futures prices, the attention layer assigned different weights to the outputs, and the model produced the prediction after a linear layer. The experimental results showed that Attention-LSTM significantly improved the prediction of both corn and soybean futures prices compared with the autoregressive integrated moving average model (ARIMA), support vector regression (SVR) and LSTM. For example, compared with a single LSTM, mean absolute error (MAE) improved by 3.8% and 3.3%, root mean square error (RMSE) by 0.6% and 1.8%, and mean absolute percentage error (MAPE) by 4.8% and 2.9%, respectively. Finally, corn futures prices were forecast using historical corn and soybean futures prices together. 
Specifically, two LSTM models processed the input sequences of corn and soybean futures prices respectively, two trained parameters performed a weighted summation of the outputs of the two LSTMs, and the model produced the prediction after a linear layer. The experimental results showed that MAE improved by 6.9%, RMSE by 1.1% and MAPE by 5.3% compared with the LSTM model using only corn futures prices, which again confirms the strong correlation between corn and soybean futures prices. In conclusion, the results verify that the Attention-LSTM model can improve the performance of soybean and corn futures price forecasting compared with general prediction models, and that combining price data of related agricultural futures can improve the prediction performance of agricultural futures forecasting models.
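
The attention weighting over LSTM outputs described above can be sketched independently of the LSTM itself: each time step's hidden state receives a scalar score, the scores are softmax-normalized into weights, and the context vector is the weighted sum. A numpy sketch with random stand-in hidden states and an assumed single score vector `w` (the abstract does not specify the exact scoring function):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())     # shift for numerical stability
    return e / e.sum()

def attention_pool(hidden, w, b=0.0):
    """Score each time step, softmax the scores, and return the
    weighted sum of hidden states as the context vector."""
    scores = hidden @ w + b     # (T,) one scalar score per time step
    alpha = softmax(scores)     # attention weights, nonnegative, sum to 1
    return alpha @ hidden, alpha

rng = np.random.default_rng(1)
T, D = 30, 16                   # e.g. 30 days of LSTM outputs, 16-dim each
hidden = rng.standard_normal((T, D))
w = rng.standard_normal(D) * 0.1
context, alpha = attention_pool(hidden, w)   # context feeds the linear layer
```

The context vector then goes through the final linear layer to produce the price prediction; because the weights sum to one, the model can emphasize informative days without changing the output scale.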

    Advances in Forage Crop Growth Monitoring by UAV Remote Sensing
    ZHUO Yue, DING Feng, YAN Haijun, XU Jing
    Smart Agriculture    2022, 4 (4): 35-48.   DOI: 10.12133/j.smartag.SA202206004
    Abstract1182)   HTML136)    PDF(pc) (863KB)(2683)       Save

    Dynamic monitoring and quantitative estimation of forage crop growth are of great importance to the large-scale production of forage crops. UAV remote sensing has the advantages of high resolution, strong flexibility and low cost, and in recent years it has developed rapidly in the field of forage crop growth monitoring. To clarify the development status of forage crop growth monitoring and identify future directions, methods of UAV crop remote sensing monitoring were first briefly described in terms of data acquisition and processing. Second, three key technologies, canopy information extraction, spectral feature optimization and forage biomass estimation, were described. The development trend of related research in recent years was then analyzed, showing that the number of papers published on UAV remote sensing of forage crops has been increasing rapidly; with the continued development of computer information technology and remote sensing technology, the application potential of UAVs in forage crop monitoring is being fully explored. Next, the research progress of UAV remote sensing in forage crop growth monitoring was described by sensor type, i.e., visible, multispectral, hyperspectral, thermal infrared and LiDAR, and the research on each type of sensor was summarized and reviewed, noting that research on hyperspectral, thermal infrared and LiDAR sensors in forage crop monitoring is currently less extensive than that on visible and multispectral sensors. 
Finally, the future development directions were clarified according to the key technical problems that have not been solved in the research and application of UAV remote sensing forage crop growth monitoring: (1) Build a multi-temporal growth monitoring model based on the characteristics of different growth stages and different growth years of forage crops, carry out UAV remote sensing monitoring of forage crops around representative research areas to further improve the scope of application of the model. (2) Establish a multi-source database of UAV remote sensing, and carry out integrated collaborative monitoring combined with satellite remote sensing data, historical yield, soil conductivity and other data. (3) Develop an intelligent and user-friendly UAV remote sensing data analysis system, and shorten the data processing time through 5G communication network and edge computing devices. This paper could provide relevant technical references and directional guidelines for researchers in the field of forage crops and further promote the application and development of precision agriculture technology.
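
The spectral feature technologies surveyed above rest on simple band-ratio vegetation indices; NDVI from the red and near-infrared bands is the canonical example (shown here for illustration, not as the specific index set used in the reviewed studies):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index from NIR and red reflectance.
    Values near 1 indicate dense green vegetation; near 0 or below,
    bare soil or water. eps guards against division by zero."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 multispectral patch (reflectance values in [0, 1])
nir = np.array([[0.60, 0.55],
                [0.10, 0.50]])
red = np.array([[0.10, 0.12],
                [0.08, 0.45]])
v = ndvi(nir, red)   # high for the canopy pixels, low for soil/water-like ones
```

Per-pixel maps like `v`, computed over a whole UAV orthomosaic, are the usual inputs to the biomass estimation models discussed in the reviewed literature.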

    A Regional Farming Pig Counting System Based on Improved Instance Segmentation Algorithm
    ZHANG Yanqi, ZHOU Shuo, ZHANG Ning, CHAI Xiujuan, SUN Tan
    Smart Agriculture    2024, 6 (4): 53-63.   DOI: 10.12133/j.smartag.SA202310001
    Abstract1180)   HTML64)    PDF(pc) (2077KB)(592)       Save

    [Objective] Currently, pig farming facilities mainly rely on manual counting to track marketed and stocked pigs. This is not only time-consuming and labor-intensive but also prone to counting errors due to pig movement and potential cheating. As breeding operations expand, periodic live-asset inventories put significant strain on human, material and financial resources. Although methods based on electronic ear tags can assist in pig counting, these tags easily break and fall off in group housing environments. Most existing computer vision methods for counting pigs require images captured from a top-down perspective, necessitating the installation of cameras above each hogpen or even the use of drones, resulting in high installation and maintenance costs. To address these challenges in the group pig counting task, a high-efficiency, low-cost pig counting method was proposed based on an improved instance segmentation algorithm and the WeChat public platform. [Methods] Firstly, a smartphone was used to collect pig images in the pen area from a human eye-level perspective, and each pig's outline was annotated to establish a pig counting dataset; the training set contains 606 images and the test set 65 images. Secondly, an efficient global attention module was proposed by improving the convolutional block attention module (CBAM). The module first performed a dimension permutation on the input feature map to capture the interaction between its channel and spatial dimensions. The permuted features were aggregated using global average pooling (GAP), and a one-dimensional convolution replaced the fully connected operation in CBAM, eliminating dimensionality reduction and significantly reducing the number of model parameters. This module was integrated into the YOLOv8 single-stage instance segmentation network to build the pig counting model YOLOv8x-Ours. 
By adding the efficient global attention module to each C2f layer of the YOLOv8 backbone network, the dimensional dependencies and feature information in the image could be extracted more effectively, achieving high-accuracy pig counting. Lastly, with a focus on user experience and outreach, a pig counting WeChat mini program was developed based on the WeChat public platform and the Django web framework, and the counting model was deployed to count pigs from images captured by smartphones. [Results and Discussions] Compared with the existing methods Mask R-CNN, YOLACT (real-time instance segmentation), PolarMask, SOLO and YOLOv5x, the proposed pig counting model YOLOv8x-Ours exhibited superior accuracy and stability. Notably, YOLOv8x-Ours achieved the highest counting accuracy at error thresholds of two and three pigs on the test set; 93.8% of the test images had counting errors of fewer than three pigs. Compared with the two-stage instance segmentation algorithm Mask R-CNN and with a YOLOv8x model using the CBAM attention mechanism, YOLOv8x-Ours showed performance improvements of 7.6% and 3%, respectively. Owing to the single-stage, anchor-free architecture of YOLOv8, the processing time for a single image was only 64 ms, one eighth that of Mask R-CNN. By embedding the model into the WeChat mini program platform, pig counting was conducted using smartphone images. Where the model detected pigs incorrectly, users could click on the erroneous location in the result image to adjust the count, further enhancing accuracy. [Conclusions] The feasibility of deep learning for the pig counting task was demonstrated. The proposed method eliminates the need to install hardware in the breeding area, enabling pig counting to be carried out with just a smartphone. 
Users can promptly spot any errors in the counting results through image segmentation visualization and easily rectify any inaccuracies. This collaborative human-machine model not only reduces the need for extensive manpower but also guarantees the precision and user-friendliness of the counting outcomes.
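
The key substitution described above, a one-dimensional convolution over the pooled channel vector in place of CBAM's fully connected layers, can be sketched in numpy; the kernel weights and channel counts here are illustrative, not the trained model's values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d_channel_attention(x, kernel):
    """Channel attention via a 1-D conv over the GAP vector: each channel's
    gate depends only on its k nearest neighbor channels, with no
    dimensionality reduction and only k learned weights."""
    gap = x.mean(axis=(1, 2))                       # (C,) global average pooling
    k = len(kernel)
    g = np.pad(gap, k // 2, mode="edge")            # same-size padding
    conv = np.array([g[i:i + k] @ kernel for i in range(len(gap))])
    return x * sigmoid(conv)[:, None, None]         # per-channel gate in (0, 1)

rng = np.random.default_rng(2)
x = rng.standard_normal((64, 8, 8))
kernel = np.array([0.2, 0.5, 0.2])                  # k = 3 learned weights
y = conv1d_channel_attention(x, kernel)

# Parameter comparison for C = 64 channels, CBAM reduction ratio r = 16:
fc_params = 2 * 64 * (64 // 16)                     # two FC layers: 512 weights
conv1d_params = 3                                   # one 1-D kernel
```

The drop from roughly 2C²/r weights to k is what the abstract means by eliminating dimensionality reduction while significantly reducing the parameter count.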

    Research Advances and Prospect of Intelligent Monitoring Systems for the Physiological Indicators of Beef Cattle
    ZHANG Fan, ZHOU Mengting, XIONG Benhai, YANG Zhengang, LIU Minze, FENG Wenxiao, TANG Xiangfang
    Smart Agriculture    2024, 6 (4): 1-17.   DOI: 10.12133/j.smartag.SA202312001
    Abstract1173)   HTML80)    PDF(pc) (1471KB)(1434)       Save

    [Significance] The beef cattle industry plays a pivotal role in the development of China's agricultural economy and the improvement of people's dietary structure. However, there remains a substantial disparity in feeding management practices and economic efficiency between China's beef cattle industry and those of developed countries. While the industry in China is progressing towards intensive, modern and large-scale development, it faces challenges such as labor shortages and rising labor costs that seriously affect its healthy development. The determination of animal physiological indicators plays an important role in monitoring animal welfare and health status. Leveraging data collected from various sensors, together with technologies such as machine learning, data mining and modeling analysis, enables the automatic acquisition of meaningful information on beef cattle physiological indicators for intelligent management. In this paper, the intelligent monitoring technologies for physiological indicators in the beef cattle breeding process and their application value are systematically summarized, and the existing challenges and future prospects of intelligent beef cattle breeding in China are discussed. [Progress] The methods for obtaining information on beef cattle physiological indicators include contact sensors worn on the body and non-contact sensors based on various forms of image acquisition. Monitoring the movement behavior of beef cattle plays a crucial role in disease prevention, reproduction monitoring and status assessment. The three-axis accelerometer sensor, which tracks the time that beef cattle spend lying, walking and standing, is a widely used technique for tracking movement behavior. 
Through machine vision analysis, individual recognition of beef cattle and identification of standing, lying and mounting movements can also be achieved; such methods are non-contact, stress-free and low cost, and generate large volumes of data. Body temperature in beef cattle is associated with estrus, calving and overall health. Sensors for monitoring body temperature include rumen temperature sensors and rectal temperature sensors, but they are inconvenient to use. Infrared temperature measurement can detect beef cattle with abnormal temperatures by monitoring eye and ear-root temperatures, although the accuracy may be influenced by environmental temperature and monitoring distance, necessitating calibration. Heart rate and respiratory rate in beef cattle are linked to disease, stress and pest attacks. Heart rate can be monitored through photoelectric volume pulse wave (photoplethysmography) measurement, tracking changes in arterial blood flow with infrared emitters and receivers. Respiratory rate can be monitored by identifying the different nostril temperatures during inhalation and exhalation using thermal infrared imaging. Rumination behavior in beef cattle is associated with health and feed nutrition; currently, the primary tools for detecting it are pressure sensors and three-axis accelerometer sensors positioned at various head positions. Rumen acidosis is a major disease in the rapid fattening of beef cattle; however, due to limitations in battery life and electrode durability, real-time pH monitoring sensors placed in the rumen are still not widely used. Changes in animal physiology, growth and health can alter specific components of body fluids, so biosensors monitoring body fluids or the surrounding gases can be employed to monitor the physiological status of beef cattle. 
By processing and analyzing this physiological information, indicators such as estrus, calving, feeding, drinking, health condition and stress level can be monitored, which will contribute to the intelligent development of the beef cattle industry and enhance management efficiency. While some progress has been made in monitoring technology for beef cattle physiological indicators, several challenges remain. Contact sensors consume considerable energy, which limits their lifespan. Various sensors are susceptible to environmental interference, which affects measurement accuracy. Moreover, given the wide variety of beef cattle breeds, it is difficult to establish a model database for monitoring physiological indicators under different feeding conditions, breeding stages and breeds. Furthermore, the installation cost of intelligent monitoring devices is relatively high, which also limits their coverage. [Conclusions and Prospects] The application of intelligent monitoring technology for beef cattle physiological indicators is highly significant for enhancing the management of beef cattle feeding. Intelligent monitoring systems and devices acquire physiological behavior data, which are then analyzed with corresponding data models or classified through deep learning to promptly detect subtle changes in physiological indicators. This enables timely detection of sick, estrous and calving cattle, facilitating prompt measures by production managers, reducing workload and improving efficiency. Future development of physiological indicator monitoring in beef cattle will focus on three aspects: (1) Extending the lifespan of contact sensors by reducing energy consumption, decreasing data transmission frequency and improving battery life. 
(2) Integrating and analyzing monitoring data from multiple sources and perspectives to enhance accuracy and practical value. (3) Strengthening research on non-contact, high-precision, automated analysis technologies to promote precise and intelligent development within the beef cattle industry.
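The respiratory-rate monitoring described above (distinguishing nostril temperatures during inhalation and exhalation via thermal imaging) can be sketched as a simple peak-counting step over a temperature time series. This is an illustrative sketch only: the sampling rate, prominence threshold, and synthetic trace below are assumptions for the demo, not values from the study.

```python
import math

def respiratory_rate_bpm(temps, sample_hz, min_prominence=0.1):
    """Count local temperature maxima (exhalations) in a nostril-region
    thermal trace and convert the count to breaths per minute."""
    peaks = 0
    for i in range(1, len(temps) - 1):
        if temps[i] > temps[i - 1] and temps[i] >= temps[i + 1]:
            # require the peak to stand out from its neighbours a little
            if temps[i] - min(temps[i - 1], temps[i + 1]) >= min_prominence:
                peaks += 1
    duration_min = len(temps) / sample_hz / 60.0
    return peaks / duration_min

# Synthetic 30 s trace at 4 Hz: one exhalation every 2 s (i.e. 30 breaths/min)
trace = [30.0 + 0.5 * math.sin(math.pi * t / 4) for t in range(120)]
print(respiratory_rate_bpm(trace, sample_hz=4))  # → 30.0
```

In practice the trace would come from averaging thermal-camera pixels over a tracked nostril region, and a smoothing filter would precede the peak count.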

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Automatic Measurement Method of Beef Cattle Body Size Based on Multimodal Image Information and Improved Instance Segmentation Network
    WENG Zhi, FAN Qi, ZHENG Zhiqiang
    Smart Agriculture    2024, 6 (4): 64-75.   DOI: 10.12133/j.smartag.SA202310007
    Abstract1171)   HTML40)    PDF(pc) (3345KB)(1278)       Save

    [Objective] Body size parameters are key indicators of the physical development of cattle and a key factor in selection and breeding. To meet the demand for measuring the body size of beef cattle in the complex environment of large-scale ranches, an image acquisition device and an automatic body size measurement algorithm were designed. [Methods] Firstly, a walking channel for the beef cattle was built, and when an animal entered the restraining device through the channel, RGB and depth images of its right side were acquired using an Intel RealSense D455 camera. Secondly, to avoid the influence of the complex background, an improved instance segmentation network based on Mask2Former was proposed, adding a CBAM module and a CA module to improve the model's ability to extract key features from different perspectives. The foreground contour was extracted from the 2D image of the cattle and partitioned, the results were compared with other segmentation algorithms, and curvature calculation and other mathematical methods were used to locate the required body size measurement points. Thirdly, in processing the 3D data, a pixel to be measured in the 2D RGB image could be null when projected to the corresponding pixel coordinates in the depth image, making it impossible to calculate the 3D coordinates of that point. To solve this, a series of processing steps was applied to the point cloud data: suitable point cloud filtering and segmentation algorithms were selected to retain the point cloud of the body region to be measured, and the null values in the depth map were then filled within a 16-pixel neighborhood to preserve the integrity of the point cloud in the cattle body region, so that the required measurement points could be found and mapped back to the 2D data. Finally, an extraction algorithm combining 2D and 3D data was designed to project the extracted 2D pixel points into the 3D point cloud, and the camera parameters were used to calculate the world coordinates of the projected points, thereby automatically computing the body measurements of the beef cattle. [Results and Discussions] Firstly, in instance segmentation, compared with the classical Mask R-CNN and the recent instance segmentation networks PointRend and QueryInst, the improved network extracted more precise and smoother foreground images of cattle, whether cattle were occluded or multiple cattle appeared in the frame. Secondly, in 3D data processing, the proposed method effectively extracted the 3D data of the target region. Thirdly, the body size measurement errors were analyzed. Among the four measured parameters, the smallest average relative error was for cross-section height, because the cross section is more prominent and different standing positions of the cattle have less influence on its location; the largest average relative error was for tube girth, owing to the large overlap of the two front legs and the higher requirements on standing posture. 
    Finally, automatic body measurements were carried out on 137 beef cattle on the ranch, and the automatic measurements of the four parameters were compared with manual measurements. The results showed that the average relative errors of body height, cross-section height, body slant length, and tube girth were 4.32%, 3.71%, 5.58%, and 6.25%, respectively, meeting the needs of the ranch. The limitations were that relatively few body size parameters were measured and that the error for circumference-type parameters was relatively large. Later studies could use a multi-view approach to increase the number of measured body size parameters and improve the accuracy of the circumference-type parameters. [Conclusions] This study designed an automatic, contactless body size measurement method for beef cattle based on 2D and 3D data. Moreover, the innovatively proposed method of measuring tube girth has higher accuracy and is easier to implement than current approaches to body measurement in beef cattle. The average relative errors of the four body size parameters meet the needs of ranch measurement and provide theoretical and practical guidance for the automatic measurement of beef cattle body size.
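The core 2D→3D step described above, using camera parameters to turn a measurement pixel plus its depth value into spatial coordinates, can be sketched with the standard pinhole back-projection model. The intrinsics and pixel coordinates below are illustrative assumptions, not the calibrated parameters of the D455 used in the study.

```python
import math

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth (metres) into camera coordinates
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

def distance(p, q):
    """Euclidean distance between two 3D measurement points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Hypothetical example: two measurement points on the same vertical line
fx = fy = 600.0                     # assumed focal lengths (pixels)
cx, cy = 320.0, 240.0               # assumed principal point
top    = pixel_to_3d(320, 60, 2.0, fx, fy, cx, cy)
bottom = pixel_to_3d(320, 460, 2.0, fx, fy, cx, cy)
print(f"body height ≈ {distance(top, bottom):.3f} m")  # → body height ≈ 1.333 m
```

A girth-type parameter such as tube girth would instead require fitting a closed curve through several such back-projected points around the limb.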

    Automatic Measurement of Mongolian Horse Body Based on Improved YOLOv8n-pose and 3D Point Cloud Analysis
    LI Minghuang, SU Lide, ZHANG Yong, ZONG Zheying, ZHANG Shun
    Smart Agriculture    2024, 6 (4): 91-102.   DOI: 10.12133/j.smartag.SA202312027
    Abstract1141)   HTML55)    PDF(pc) (2477KB)(1654)       Save

    [Objective] There exists a high genetic correlation among the various morphological characteristics of Mongolian horses. Utilizing advanced technology to obtain body structure parameters related to athletic performance could provide data support for breeding institutions to develop scientific breeding plans and lay the groundwork for further improvement of Mongolian horse breeds. However, traditional manual measurement methods are time-consuming, labor-intensive, and may cause stress responses in horses. Therefore, precise and efficient measurement of Mongolian horse body dimensions is crucial for formulating early breeding plans. [Method] Video images of 50 adult Mongolian horses at a suitable breeding stage were first collected at the Inner Mongolia Agricultural University Horse Breeding Technical Center. Fifty images per horse were captured to construct the training and validation sets, resulting in a total of 2 500 high-definition RGB images of Mongolian horses, with an equal ratio of horses in motion and at rest. To ensure the model's robustness, and considering issues such as viewing angle, lighting, and image blur during actual capture, a series of enhancement algorithms was applied to the original dataset, expanding it to 4 000 images. YOLOv8n-pose was employed as the baseline keypoint detection model. Through the design of a C2f_DCN module, deformable convolution (DCNv2) was integrated into the C2f module of the Backbone network to enhance the model's adaptability to different horse poses in real-world scenes. In addition, an SA attention module was added to the Neck network to improve the model's focus on critical features. The original loss function was replaced with SCYLLA-IoU (SIoU) to prioritize major image regions, and a cosine annealing method was employed to dynamically adjust the learning rate during training. 
    The improved model was named the DSS-YOLO (DCNv2-SA-SIoU-YOLO) network. Additionally, a test set comprising 30 RGB-D images of mature Mongolian horses was selected for the body dimension measurement tasks. DSS-YOLO was used for keypoint detection of body dimensions. The 2D keypoint coordinates from the RGB images were fused with the corresponding depth values from the depth images to obtain 3D keypoint coordinates, which were transformed into the Mongolian horse's point cloud. Point cloud processing and analysis were performed using pass-through filtering, random sample consensus (RANSAC) shape fitting, statistical outlier filtering, and principal component analysis (PCA) coordinate system correction. Finally, body height, body oblique length, croup height, chest circumference, and croup circumference were automatically computed from the keypoint spatial coordinates. [Results and Discussion] The proposed DSS-YOLO model had parameter and computational costs of 3.48 M and 9.1 G, respectively, with an average accuracy mAP0.5:0.95 of 92.5% and a dDSS of 7.2 pixels. Compared to Hourglass, HRNet, and SimCC, mAP0.5:0.95 increased by 3.6%, 2.8%, and 1.6%, respectively. Body dimensions were computed automatically from the keypoint coordinates, with a moving least squares curve-fitting method used to complete the horse's hip point cloud; experiments on 30 Mongolian horses showed a mean absolute error (MAE) of 3.77 cm and a mean relative error (MRE) of 2.29% for the automatic measurements. [Conclusions] The results showed that the DSS-YOLO model, combined with 3D point cloud processing, can achieve automatic measurement of Mongolian horse body dimensions with high accuracy. The proposed method can also be extended to other horse breeds, providing technical support for horse breeding plans and possessing practical application value.
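The RANSAC shape fitting used in the point-cloud pipeline above (e.g. to isolate a dominant plane such as the ground before measuring the animal) can be sketched in a few lines. The point data, iteration count, and inlier threshold below are invented for the demo and are not the study's settings.

```python
import random

def fit_plane(p1, p2, p3):
    """Plane through three points as (unit normal n, offset d) with n·x + d = 0;
    returns None for a degenerate (collinear) sample."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])
    norm = sum(c * c for c in n) ** 0.5
    if norm < 1e-9:
        return None
    n = tuple(c / norm for c in n)
    return n, -sum(n[i] * p1[i] for i in range(3))

def ransac_plane(points, iters=200, thresh=0.02, seed=0):
    """Repeatedly fit a plane to 3 random points, keep the one with most inliers."""
    rng = random.Random(seed)
    best_inliers, best_plane = [], None
    for _ in range(iters):
        plane = fit_plane(*rng.sample(points, 3))
        if plane is None:
            continue
        n, d = plane
        inliers = [p for p in points
                   if abs(sum(n[i] * p[i] for i in range(3)) + d) < thresh]
        if len(inliers) > len(best_inliers):
            best_inliers, best_plane = inliers, plane
    return best_inliers, best_plane

# Synthetic cloud: 300 "ground" points at z = 0 plus 60 off-plane points
gen = random.Random(42)
pts = [(gen.random(), gen.random(), 0.0) for _ in range(300)]
pts += [(0.5, 0.5, 0.3 + 0.01 * k) for k in range(60)]
inliers, plane = ransac_plane(pts)
print(len(inliers))  # expect all 300 ground points as inliers
```

Once the dominant plane is found, its inliers can be removed so that only the animal's body points remain for measurement.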

    Research Application of Artificial Intelligence in Agricultural Risk Management: A Review
    GUI Zechun, ZHAO Sijian
    Smart Agriculture    2023, 5 (1): 82-98.   DOI: 10.12133/j.smartag.SA202211004
    Abstract1139)   HTML135)    PDF(pc) (1410KB)(8556)       Save

    Agriculture is a basic industry closely tied to the national economy and people's livelihood, yet it is also a vulnerable one. Traditional agricultural risk management research methods suffer from problems such as insufficient mining of nonlinear information, low accuracy, and poor robustness. Artificial intelligence (AI) offers powerful capabilities such as strong nonlinear fitting, end-to-end modeling, and feature self-learning from big data, which can address these problems well. This paper first analyzed the research progress of AI in agricultural vulnerability assessment, agricultural risk prediction, and agricultural damage assessment, drawing the following conclusions: 1. Feature importance assessment with AI in agricultural vulnerability assessment lacks scientific and effective validation indicators, and current application practice makes it impossible to compare the advantages and disadvantages of multiple AI models; it is therefore suggested to combine subjective and objective methods for evaluation. 2. In risk prediction, the predictive ability of machine learning models tends to decline as the prediction horizon lengthens; overfitting is a common problem, and little research has addressed mining the spatial information of graph data. 3. Complex agricultural production environments and varied application scenarios are important factors affecting the accuracy of damage assessment; improving the feature extraction ability and robustness of deep learning models is a key and difficult issue for future technological development. Then, solutions were put forward for the performance improvement and small-sample problems encountered when applying AI technology. 
    For performance improvement, depending on the user's familiarity with artificial intelligence, multi-model comparison, model combination, and neural network structure optimization can be used to improve model performance. For the small-sample problem, data augmentation, generative adversarial networks (GANs), and transfer learning can often be combined to increase the amount of input data, enhance model robustness, accelerate training, and improve recognition accuracy. Finally, the applications of AI in agricultural risk management were discussed in prospect: AI algorithms could be considered in the construction of agricultural vulnerability curves; given the relationships among upstream and downstream segments of the agricultural industry chain and agriculture-related industries, graph neural networks could be applied more widely to agricultural price risk prediction; and in future damage assessment modeling, more domain knowledge related to the assessment target could be introduced to enhance feature learning, with the expansion of small-sample data remaining a key subject of future research.
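The data-augmentation remedy for small samples mentioned above can be illustrated with a minimal sketch: geometric transforms (flips and 90° rotations) multiply a small labelled image set without collecting new field data. The 2×2 nested-list "images" are toy placeholders, not real remote-sensing or damage imagery.

```python
def h_flip(img):
    """Mirror an image (nested list of rows) left-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90° clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def augment(img):
    """Return the 8 dihedral variants: 4 rotations × optional horizontal flip."""
    out, cur = [], img
    for _ in range(4):
        out.append(cur)
        out.append(h_flip(cur))
        cur = rot90(cur)
    return out

sample = [[1, 2], [3, 4]]
variants = augment(sample)
print(len(variants))  # → 8
```

GAN-based synthesis and transfer learning then complement such transforms by generating genuinely new samples and reusing features learned on larger datasets.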

    Zero-Shot Pest Identification Based on Generative Adversarial Networks and Visual-Semantic Alignment
    LI Tianjun, YANG Xinting, CHEN Xiao, HU Huan, ZHOU Zijie, LI Wenyong
    Smart Agriculture    2024, 6 (2): 72-84.   DOI: 10.12133/j.smartag.SA202312014
    Abstract1128)   HTML45)    PDF(pc) (2294KB)(463)       Save

    [Objective] Accurate identification of insect pests is crucial for the effective prevention and control of crop infestations. However, existing pest identification methods rely primarily on traditional machine learning or deep learning techniques trained on seen classes, and they falter when encountering unseen pest species not included in the training set, due to the absence of image samples. An innovative method was proposed to address this zero-shot recognition challenge for pests. [Methods] The novel zero-shot learning (ZSL) method proposed in this study is capable of identifying unseen pest species. First, a comprehensive pest image dataset was assembled, sourced from several years of field photography around Beijing and from web crawling. The final dataset consisted of 2 000 images across 20 classes of adult Lepidoptera insects, with 100 images per class. During data preprocessing, a semantic dataset was manually curated by defining attributes related to color, pattern, size, and shape for six parts: antennae, back, tail, legs, wings, and overall appearance. Each image was annotated to form a 65-dimensional attribute vector per class, resulting in a 20×65 semantic attribute matrix with rows representing classes and columns representing attribute values. Subsequently, 16 classes were designated as seen and 4 as unseen. Next, a novel zero-shot pest recognition method was proposed, focusing on using a generator to synthesize high-quality pseudo-visual features aligned with semantic information. The Wasserstein generative adversarial network (WGAN) architecture was strategically employed as the network backbone. Conventional generative adversarial networks (GANs) are known to suffer from training instability, mode collapse, and convergence issues, which can severely hinder their performance and applicability. 
    The WGAN architecture addresses these inherent limitations through a principled reformulation of the objective function. In the proposed method, a contrastive module was designed to capture highly discriminative visual features that could effectively distinguish between insect classes. It operated by creating positive and negative pairs of instances within a batch: positive pairs consisted of different views of the same class, while negative pairs were formed from instances of different classes. The contrastive loss function encouraged the learned representations of positive pairs to be similar while pushing the representations of negative pairs apart. Tightly integrated with the WGAN structure, this module substantially improved the quality of the generated features. Furthermore, a visual-semantic alignment module enforced consistency constraints from both the visual and semantic perspectives. This module constructed a cross-modal embedding space, mapping visual and semantic features via two projection layers: one for mapping visual features into the cross-modal space, and another for mapping semantic features. The visual projection layer took the synthesized pseudo-visual features from the generator as input, while the semantic projection layer ingested the class-level semantic vectors. Within this cross-modal embedding space, the module enforced two key constraints: maximizing the similarity between same-class visual-semantic pairs and minimizing the similarity between different-class pairs. This was achieved through a carefully designed loss function that encouraged the projected visual and semantic representations to be closely aligned for instances of the same class while pushing apart the representations of different classes. The visual-semantic alignment module thus acted as a regularizer, preventing the generator from producing features that deviated from the desired semantic information. 
    This regularization effect complemented the discriminative power gained from the contrastive module, yielding a generator that produced high-quality, diverse, and semantically aligned pseudo-visual features. [Results and Discussions] The proposed method was evaluated on several popular ZSL benchmarks, including CUB, AWA, FLO, and SUN. The results demonstrated state-of-the-art performance across these datasets, with a maximum improvement of 2.8% over the previous best method, CE-GZSL, demonstrating the method's broad effectiveness across benchmarks and its strong generalization ability. On the self-constructed 20-class insect dataset, the method also exhibited exceptional recognition accuracy. Under the standard ZSL setting, it achieved a recognition rate of 77.4%, outperforming CE-GZSL by 2.1%. Under the generalized ZSL setting, it achieved a harmonic mean accuracy of 78.3%, a notable 1.2% improvement. This metric provides a balanced assessment of performance across seen and unseen classes, ensuring that high accuracy on unseen classes does not come at the cost of forgetting seen classes. These results on the pest dataset, coupled with the performance on public benchmarks, firmly validate the effectiveness of the proposed method. [Conclusions] The proposed zero-shot pest recognition method represents a step forward in addressing the challenges of pest management. It effectively generalizes pest visual features to unseen classes, enabling zero-shot pest recognition, and can support pest identification tasks that lack training samples, thereby assisting in the discovery and prevention of novel crop pests. Future research will focus on expanding the range of pest species to further enhance the model's practical applicability.
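The visual-semantic alignment idea above can be sketched at its simplest: score each visual feature against every class's semantic vector by cosine similarity in a shared space (the projection layers are omitted here for brevity) and classify to the most similar class. The vectors and class names below are invented toy data, not the paper's 65-dimensional attribute vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy pseudo-visual features and class-level semantic vectors (hypothetical)
visual   = {"mothA": [0.9, 0.1, 0.0], "mothB": [0.1, 0.9, 0.1]}
semantic = {"mothA": [1.0, 0.0, 0.0], "mothB": [0.0, 1.0, 0.0]}

for name, v in visual.items():
    scores = {cls: cosine(v, s) for cls, s in semantic.items()}
    predicted = max(scores, key=scores.get)
    print(name, "→", predicted)
```

An unseen class needs only its semantic vector to join the `semantic` table, which is what makes zero-shot classification possible once the generator has learned to align the two modalities.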

    The Path of Smart Agricultural Technology Innovation Leading Development of Agricultural New Quality Productivity
    CAO Bingxue, LI Hongfei, ZHAO Chunjiang, LI Jin
    Smart Agriculture    2024, 6 (4): 116-127.   DOI: 10.12133/j.smartag.SA202405004
    Abstract1124)   HTML205)    PDF(pc) (1102KB)(2356)       Save

    [Significance] Building new quality productivity in agriculture is of great significance. It is an advanced form of productivity that realizes the transformation, upgrading, and deep integration of substantive, penetrating, operational, and media factors, with outstanding characteristics such as intelligence, greenness, integration, and organization. As a new technological revolution in agriculture, smart agricultural technology transforms agricultural production by integrating agricultural biotechnology, agricultural information technology, and smart agricultural machinery and equipment, with information and knowledge as its core elements. The inherent characteristics of agricultural new quality productivity, "high-tech, high-efficiency, high-quality, and sustainable", are fully reflected in the practice of smart agricultural technology innovation, which has become an important core and engine for promoting agricultural new quality productivity. [Progress] Through literature review and theoretical analysis, this article systematically studies the practical foundation, internal logic, and challenges of smart agricultural technology innovation leading the development of agricultural new quality productivity. The conclusions show that: (1) At present, global innovation capability in smart agriculture technology is constantly strengthening, and significant breakthroughs have been made in fields such as smart breeding, agricultural information perception, agricultural big data and artificial intelligence, and smart agricultural machinery and equipment, providing a practical foundation for leading the development of agricultural new quality productivity. 
    Among them, 'phenotype + genotype + environment type' smart breeding has entered the fast lane; the integrated space-air-ground agricultural information sensing technology system is gradually maturing; research on agricultural big data and intelligent decision-making technology continues to advance; and the creation of smart agricultural machinery and equipment for different fields has achieved fruitful results. (2) Smart agricultural technology innovation supports the development of agricultural new quality productivity in five ways: it provides basic resources by empowering agricultural factor innovation, sustainable driving force by empowering agricultural technology innovation, practical paradigms by empowering agricultural scenario innovation, intellectual support by empowering agricultural entity innovation, and important guidelines by empowering agricultural value innovation. (3) Compared with the development requirements of agricultural new quality productivity in China and the advanced international level of smart agriculture technology, China's smart agriculture technology innovation is generally at the initial stage of multi-point breakthroughs, system integration, and commercial application. It still faces major challenges, including an incomplete policy system for technology innovation; key technologies with bottlenecks, blockages, and breakpoints; difficulties in transforming and implementing technological achievements; and incomplete support systems for technology innovation. 
    [Conclusions and Prospects] Regarding technology innovation in smart agriculture, this article proposes a 'Four Highs' path of smart agriculture technology innovation to fill the existing gaps and accelerate the formation of agricultural new quality productivity in China. The 'Four Highs' path specifically includes constructing high-energy smart agricultural technology innovation platforms, achieving breakthroughs in high-precision, cutting-edge smart agricultural technology products, creating high-level smart agricultural application scenarios, and cultivating high-caliber smart agricultural innovation talents. Finally, the article offers four strategic suggestions: deepening the understanding of smart agriculture technology innovation and agricultural new quality productivity, optimizing the supply of smart agriculture technology innovation policies, building national smart agriculture innovation development pilot zones, and improving the smart agriculture technology innovation ecosystem.

    Oilseed Rape Sclerotinia in Hyperspectral Images Segmentation Method Based on Bi-GRU and Spatial-Spectral Information Fusion
    ZHANG Jing, ZHAO Zexuan, ZHAO Yanru, BU Hongchao, WU Xingyu
    Smart Agriculture    2024, 6 (2): 40-48.   DOI: 10.12133/j.smartag.SA202310010
    Abstract1116)   HTML32)    PDF(pc) (1594KB)(329)       Save

    [Objective] The widespread prevalence of sclerotinia disease poses a significant challenge to the cultivation and supply of oilseed rape. It not only causes substantial yield losses and decreased oil content in infected seeds but also severely impacts crop productivity and quality, leading to significant economic losses. To overcome the complex operation, environmental pollution, sample destruction, and low efficiency of traditional chemical detection methods, a bi-directional gated recurrent unit (Bi-GRU) model based on spatial-spectral feature fusion was constructed to segment sclerotinia-infected areas in hyperspectral images (HSIs) of oilseed rape. [Methods] The spectral characteristics of sclerotinia disease were first explored. Spectral reflectance differed markedly around 550 nm and within the 750-1 000 nm range across different locations on rapeseed leaves, and these differences became more pronounced as the severity of infection increased. Subsequently, a rapeseed leaf sclerotinia dataset comprising 400 HSIs was curated using an intelligent data annotation tool and divided into a training set of 280 HSIs, a validation set of 40 HSIs, and a test set of 80 HSIs. Building on this, a 7×7 pixel neighborhood was extracted as the spatial feature of each target pixel, effectively incorporating both spatial and spectral features. The Bi-GRU model enabled simultaneous feature extraction at any point within the sequence data, eliminating the impact of the order of spatial-spectral data fusion on performance. The model comprised four key components: an input layer, hidden layers, fully connected layers, and an output layer. The Bi-GRU model in this study contained two hidden layers, each with 512 GRU neurons. 
    The forward hidden layer computed sequence information at the current time step, while the backward hidden layer processed the sequence in reverse, incorporating reversed-order information. These two hidden layers were linked to a fully connected layer, providing both forward and reversed-order information to all neurons during training. The Bi-GRU model included two fully connected layers, each with 1 000 neurons, and an output layer with two neurons representing the healthy and diseased classes. [Results and Discussions] To thoroughly validate the overall performance of the proposed Bi-GRU model and assess the effectiveness of the spatial-spectral information fusion mechanism, comparative experiments were conducted. These experiments focused on five key metrics: ClassAP(1), ClassAP(2), mean average precision (mAP), mean intersection over union (mIoU), and the Kappa coefficient. The analysis revealed that the Bi-GRU model, compared with mainstream convolutional neural network (CNN) and long short-term memory (LSTM) models, demonstrated superior overall performance in detecting rapeseed sclerotinia disease. Notably, the Bi-GRU model achieved an mAP of 93.7%, a 7.1% improvement in precision over the CNN model. The bidirectional architecture, coupled with spatial-spectral fused data, effectively enhanced detection accuracy. Furthermore, the study visually presented the segmentation results of infected areas produced by the CNN, Bi-LSTM, and Bi-GRU models. Comparison with the ground truth revealed that the Bi-GRU model outperformed the CNN and Bi-LSTM models in detecting sclerotinia disease at various infection stages. Additionally, the Dice coefficient was employed to comprehensively assess detection performance at the early, middle, and late infection stages. 
    The Dice coefficients for the Bi-GRU model at these stages were 83.8%, 89.4%, and 89.2%, respectively. While early-infection detection accuracy was relatively lower, the spatial-spectral data fusion mechanism significantly enhanced the detection of early sclerotinia infections in oilseed rape. [Conclusions] This study introduced a Bi-GRU model that integrates spatial and spectral information to accurately and efficiently identify areas of oilseed rape infected with sclerotinia disease. This approach not only addresses the challenge of detecting early-stage infections but also establishes a basis for high-throughput, non-destructive detection of the disease.
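The Dice coefficient used above to score segmentation against ground truth has a one-line definition worth making explicit. The short 1-D masks below are toy stand-ins for the per-pixel lesion masks of the HSIs.

```python
def dice(pred, truth):
    """Dice = 2|A∩B| / (|A| + |B|) over binary masks of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # two empty masks agree perfectly

pred  = [1, 1, 0, 1, 0, 0]   # model's diseased-pixel mask
truth = [1, 0, 0, 1, 1, 0]   # annotated ground truth
print(round(dice(pred, truth), 3))  # → 0.667
```

Unlike pixel accuracy, Dice is insensitive to the large healthy background, which is why it suits early infections where the lesion occupies only a small fraction of the leaf.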

    Digital Twin for Agricultural Machinery: From Concept to Application
    GUO Dafang, DU Yuefeng, WU Xiuheng, HOU Siyu, LI Xiaoyu, ZHANG Yan'an, CHEN Du
    Smart Agriculture    2023, 5 (2): 149-160.   DOI: 10.12133/j.smartag.SA202305007
    Abstract1095)   HTML174)    PDF(pc) (2531KB)(1977)       Save

    [Significance] Agricultural machinery serves as the fundamental support for implementing advanced agricultural production concepts. The key challenge for the future development of smart agriculture lies in how to enhance the design, manufacturing, operation, and maintenance of these machines to fully leverage their capabilities. To address this, the concept of the digital twin has emerged as an innovative approach that integrates various information technologies and facilitates the interaction of virtual and real-world environments. By providing a deeper understanding of agricultural machinery and its operational processes, the digital twin offers solutions to the complexity encountered throughout the entire lifecycle, from design to recycling, and thus contributes to an all-round enhancement of the quality of agricultural machinery operations, enabling them to better meet the demands of agricultural production. Nevertheless, despite its significant potential, the adoption of the digital twin for agricultural machinery is still at an early stage, lacking the theoretical guidance and methodological frameworks needed to inform practical implementation. [Progress] Drawing upon the successful experience of the authors' team with digital twins for agricultural machinery, this paper presents an overview of research progress in three main areas: the digital twin in a general sense, the digital twin in agriculture, and the digital twin for agricultural machinery. The digital twin is conceptualized as an abstract notion that combines model-based systems engineering and cyber-physical systems, facilitating the integration of virtual and real-world environments. This paper elucidates the relevant concepts and implications of the digital twin in the context of agricultural machinery. 
    It points out that the digital twin for agricultural machinery aims to leverage advanced information technology to create virtual models that accurately describe agricultural machinery and its operational processes. These virtual models act as a data-driven carrier, facilitating interaction and integration between physical agricultural machinery and their digital counterparts and consequently yielding enhanced value. Additionally, the paper proposes a comprehensive framework comprising five key components: physical entities, virtual models, data and connectivity, system services, and business applications. Each component's functions, operational mechanisms, and organizational structure are elucidated. The development of the digital twin for agricultural machinery is still in its conceptual phase and will require substantial time and effort to mature. To advance further research and application in this domain, this paper integrates relevant theories and practical experience to propose an implementation plan for the digital twin for agricultural machinery. The macroscopic development process encompasses three stages: theoretical exploration, practical application, and summarization. The specific implementation process entails four key steps: intelligent upgrading of agricultural machinery, establishment of information exchange channels, construction of virtual models, and development of digital twin business applications. Implementation comprises four stages: pre-research, planning, implementation, and evaluation. The digital twin serves as a crucial link and bridge between agricultural machinery and smart agriculture. 
It not only facilitates the design and manufacturing of agricultural machinery, aligning them with the realities of agricultural production and supporting the advancement of advanced manufacturing capabilities, but also enhances the operation, maintenance, and management of agricultural production to better meet practical requirements. This, in turn, expedites the practical implementation of smart agriculture. To fully showcase the value of the digital twin for agricultural machinery, this paper addresses the existing challenges in the design, manufacturing, operation, and management of agricultural machinery. It expounds the methods by which the digital twin can address these challenges and provides a technical roadmap for empowering the design, manufacturing, operation, and management of agricultural machinery through the digital twin. In tackling the critical issue of leveraging the digital twin to enhance the operational quality of agricultural machinery, this paper presents two research cases focusing on high-powered tractors and large combine harvesters. These cases validate the feasibility of the digital twin in improving the quality of plowing operations for high-powered tractors and the quality of grain harvesting for large combine harvesters. [Conclusions and Prospects] This paper serves as a reference for the development of research on the digital twin for agricultural machinery, laying a theoretical foundation for empowering smart agriculture and intelligent equipment with the digital twin. The digital twin provides a new approach for the transformation and upgrading of agricultural machinery, a new path for enhancing the level of agricultural mechanization, and new ideas for realizing smart agriculture. However, the digital twin for agricultural machinery is still in its early stages, and a series of issues remain to be explored. 
It is necessary to involve more professionals from relevant fields to advance the research in this area.
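The five-component framework described above (physical entities, virtual models, data and connectivity, system services, and business applications) can be sketched as a minimal object model. The class names, fields, and the plowing-depth rule below are illustrative assumptions for exposition, not an implementation from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalEntity:
    """A physical machine, e.g. a tractor, with live sensor readings."""
    machine_id: str
    sensors: dict = field(default_factory=dict)  # sensor name -> latest value

@dataclass
class VirtualModel:
    """Digital counterpart kept in sync with the physical entity."""
    machine_id: str
    state: dict = field(default_factory=dict)

    def sync(self, entity: PhysicalEntity) -> None:
        # "data and connectivity": the data channel drives the virtual model
        self.state.update(entity.sensors)

@dataclass
class SystemService:
    """A system service (e.g. plowing-quality monitoring) built on the twin."""
    name: str

    def evaluate(self, model: VirtualModel) -> str:
        # Hypothetical quality rule: target plowing depth of 20-30 cm.
        depth = model.state.get("plow_depth_cm", 0.0)
        return "ok" if 20 <= depth <= 30 else "adjust"

# Business application layer: wire the components together.
tractor = PhysicalEntity("tractor-01", {"plow_depth_cm": 24.0})
twin = VirtualModel("tractor-01")
twin.sync(tractor)
service = SystemService("plowing-quality")
print(service.evaluate(twin))  # -> ok
```

The point of the sketch is the layering: services read only the virtual model, which is kept current by the data channel, so applications never touch the machine directly.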

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Supply and Demand Forecasting Model of Multi-Agricultural Products Based on Deep Learning
    ZHUANG Jiayu, XU Shiwei, LI Yang, XIONG Lu, LIU Kebao, ZHONG Zhiping
    Smart Agriculture    2022, 4 (2): 174-182.   DOI: 10.12133/j.smartag.SA202203013
    Abstract1094)   HTML104)    PDF(pc) (1057KB)(3514)       Save

    To further improve the simulation and estimation accuracy of the supply and demand process of agricultural products, a large volume of national- and provincial-level agricultural data since 1980 was used as the basic research sample, including production, planted area, food consumption, industrial consumption, feed consumption, seed consumption, import, export, price, GDP, population, urban population, rural population, weather and so on. By fully considering impact factors of agricultural products such as variety, time, income and economic development, a multi-agricultural products supply and demand forecasting model based on the long short-term memory (LSTM) neural network was constructed in this study. The general idea of the forecasting model is to package the trained deep neural network as a modular model with open input/output interfaces, reserving a control interface for external data and realizing both the forecasting of supply and demand indicators and the matrixing of the balance sheet. The input of the model included forecasting balance sheet data of agricultural products, annual price data, general economic data, and international currency data since 2000. The output of the model was balance sheet data for the decade following the forecasting time. Under the premise of fully considering mechanistic constraints, the model used the advantages of deep learning algorithms in nonlinear model analysis and prediction to analyze and predict the supply and demand of 9 main types of agricultural products, including rice, wheat, corn, soybean, pork, poultry, beef, mutton, and aquatic products. The production forecast results for 2019-2021 based on this model were compared and verified against the data published by the National Bureau of Statistics, and the mean absolute percentage error was 3.02%, meaning the average forecast accuracy rate for 2019-2021 was 96.98%. 
The average forecast accuracy rate was 96.10% in 2019, 98.26% in 2020, and 96.58% in 2021, which shows that the prediction performance of the learning model gradually improves as the sample size increases. The forecasting results indicate that the multi-agricultural supply and demand prediction model based on LSTM constructed in this study can effectively reflect the impact of changes in hidden indicators on the prediction results, avoiding the uncontrollable error introduced by manual experience intervention. The model can provide data products and technical support such as market early warning, policy evaluation, resource management and public opinion analysis for agricultural production, management and macroeconomic regulation, and can provide intelligent technical support for multi-regional and inter-temporal agricultural outlook work by monitoring agricultural operation data in a timely manner.
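The accuracy bookkeeping reported above can be reproduced with a short sketch: forecast accuracy is taken as 100% minus the mean absolute percentage error (MAPE), and the three yearly accuracies average to the reported 96.98%. The yearly figures come from the abstract; the `mape` helper itself is an illustrative assumption.

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Yearly accuracy rates reported in the abstract.
yearly_accuracy = {2019: 96.10, 2020: 98.26, 2021: 96.58}
mean_accuracy = sum(yearly_accuracy.values()) / len(yearly_accuracy)
print(round(mean_accuracy, 2))  # -> 96.98

# Example: a forecast of 103 against an actual of 100 contributes 3% error.
print(round(mape([100.0], [103.0]), 2))  # -> 3.0
```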

    Pig Sound Analysis: A Measure of Welfare
    JI Nan, YIN Yanling, SHEN Weizheng, KOU Shengli, DAI Baisheng, WANG Guowei
    Smart Agriculture    2022, 4 (2): 19-35.   DOI: 10.12133/j.smartag.SA202204004
    Abstract1091)   HTML83)    PDF(pc) (700KB)(2348)       Save

    Pig welfare is closely related to the economic performance of pig farms. For pig welfare assessment, pig sounds are significant indicators that can reflect the quality of the barn environment and the physical condition and health of pigs. Therefore, pig sound analysis is both necessary and of high priority. In this review, the relationship between pig sounds and welfare was analyzed. Three kinds of pig sounds are closely related to pig welfare: coughs, screams, and grunts. Subsequently, the advantages and disadvantages of both wearable and non-contact sensors were briefly described. Based on the advantages and feasibility of contactless microphone sensors, the existing techniques for processing pig sounds were elaborated and evaluated for further in-depth research from three aspects: sound recording and labeling, feature extraction, and sound classification. Finally, the challenges and opportunities of pig sound research were discussed for the ultimate purpose of precision livestock farming (PLF) in four ways: sound monitoring technologies, individual pig welfare monitoring, commercial applications, and pig farmers. In summary, it was found that most current research on pig sound recognition focuses on the selection of classifiers and algorithm improvement, while less research has been conducted on sound labeling and feature extraction. Meanwhile, pig sound recognition faces some challenging problems, including the difficulty of obtaining audio data from different pig growth stages and of verifying the developed algorithms in a variety of pig farms. Overall, it is suggested that the technologies involved in the automatic identification process should be explored in depth. In the future, strengthening cooperation among cross-disciplinary experts will also be necessary to promote the development and application of PLF.
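As a minimal illustration of the feature extraction stage discussed above, the sketch below frames a 1-D audio signal into overlapping windows and computes short-time log energy, a simple precursor to features such as MFCCs used for classifying coughs, screams, and grunts. The frame and hop sizes (25 ms and 10 ms at 16 kHz) are assumed values, not taken from any particular study.

```python
import numpy as np

def short_time_log_energy(signal: np.ndarray, frame_len: int = 400, hop: int = 160) -> np.ndarray:
    """Frame the signal and return log energy per frame."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] for i in range(n_frames)])
    energy = np.sum(frames ** 2, axis=1)
    return np.log(energy + 1e-10)  # epsilon avoids log(0) on silent frames

# One second of a synthetic 16 kHz "grunt-like" low-frequency tone plus noise.
rng = np.random.default_rng(0)
sig = np.sin(2 * np.pi * 120 * np.arange(16000) / 16000) + 0.01 * rng.standard_normal(16000)
feats = short_time_log_energy(sig)
print(feats.shape)  # -> (98,)
```

A classifier would consume such per-frame features (or richer spectral variants of them) rather than the raw waveform.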

    Identifying Multiple Apple Leaf Diseases Based on the Improved CBAM-ResNet18 Model Under Weak Supervision
    ZHANG Wenjing, JIANG Zezhong, QIN Lifeng
    Smart Agriculture    2023, 5 (1): 111-121.   DOI: 10.12133/j.smartag.SA202301005
    Abstract1085)   HTML148)    PDF(pc) (1490KB)(12378)       Save

    To address the low accuracy of apple leaf disease image recognition under weak supervision, where only image-level category labels are available, an improved CBAM-ResNet-based algorithm was proposed in this research. Using ResNet18 as the base model, the multilayer perceptron (MLP) in the channel branch of the lightweight convolutional block attention module (CBAM) was improved by up-dimensioning to amplify the details of apple leaf disease features. The improved CBAM attention module was incorporated into the residual module to enhance key disease feature details, and AlphaDropout with SELU (scaled exponential linear units) was used to prevent overfitting and accelerate model convergence. Finally, the learning rate was adjusted using a single-cycle cosine annealing algorithm to obtain the disease recognition model. Training and testing were performed under weak supervision with only image-level annotation of all sample images, which greatly reduced the annotation cost. Through ablation experiments, the best dimensional expansion of the MLP in CBAM was found to be 2. Compared with the original CBAM, the accuracy rate increased by 0.32% and the training time of each round was reduced by 8 s, while the number of parameters increased by 17.59%. Tests were conducted on a dataset of 6185 images containing five diseases, including apple spotted leaf drop, brown spot, mosaic, gray spot, and rust, and the results showed that the model achieved an average recognition accuracy of 98.44% for the five apple diseases under weakly supervised learning. The improved CBAM-ResNet18 gained 1.47% over the unimproved ResNet18, and was higher than the VGG16, DenseNet121, ResNet50, ResNeXt50, EfficientNet-B0 and Xception control models. In terms of learning efficiency, the improved CBAM-ResNet18 reduced the training time of each round by 6 s compared to ResNet18, with the number of parameters increasing by 24.9%, and completed model training at the fastest speed, 137 s per round, among the VGG16, DenseNet121, ResNet50, ResNeXt50, EfficientNet-B0 and Xception control models. From the confusion matrix, the average precision, average recall, and average F1 score of the model were calculated to reach 98.43%, 98.46%, and 0.9845, respectively. The results showed that the proposed improved CBAM-ResNet18 model can identify apple leaf diseases with good results and can provide technical support for intelligent apple leaf disease identification.
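The "up-dimensioned" channel-attention MLP can be sketched in NumPy (the paper's actual deep learning framework is not specified here): the shared MLP's hidden layer is expanded by a ratio of 2 rather than reduced, mirroring the best ratio found in the ablation experiments. The weight matrices are random stand-ins, not trained parameters.

```python
import numpy as np

def channel_attention(x: np.ndarray, w1, w2) -> np.ndarray:
    """x: feature map (C, H, W); returns per-channel weights in (0, 1)."""
    avg_pool = x.mean(axis=(1, 2))          # (C,) global average pooling
    max_pool = x.max(axis=(1, 2))           # (C,) global max pooling
    def mlp(v):                             # shared two-layer MLP
        return w2 @ np.maximum(w1 @ v, 0.0)  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(mlp(avg_pool) + mlp(max_pool))))  # sigmoid

C, ratio = 8, 2                              # expanded hidden size = C * ratio
rng = np.random.default_rng(1)
w1 = rng.standard_normal((C * ratio, C)) * 0.1   # C -> 2C (up-dimensioned)
w2 = rng.standard_normal((C, C * ratio)) * 0.1   # 2C -> C
feat = rng.standard_normal((C, 14, 14))
weights = channel_attention(feat, w1, w2)
print(weights.shape)  # -> (8,)
```

The returned vector rescales each feature channel; the design change relative to standard CBAM is only the hidden dimension `C * ratio` instead of `C / ratio`.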

    Artificial Intelligence-Driven High-Quality Development of New-Quality Productivity in Animal Husbandry: Restraining Factors, Generation Logic and Promotion Paths
    LIU Jifang, ZHOU Xiangyang, LI Min, HAN Shuqing, GUO Leifeng, CHI Liang, YANG Lu, WU Jianzhai
    Smart Agriculture    2025, 7 (1): 165-177.   DOI: 10.12133/j.smartag.SA202407010
    Abstract1060)   HTML15)    PDF(pc) (1692KB)(193)       Save

    [Significance] Developing new-quality productivity is of great significance for promoting high-quality development of animal husbandry. However, there is currently limited research on new-quality productivity in animal husbandry, and there is a lack of in-depth analysis on its connotation, characteristics, constraints, and promotion path. [Progress] This article conducts a systematic study on the high-quality development of animal husbandry productivity driven by artificial intelligence. The new-quality productivity of animal husbandry is led by cutting-edge technological innovations such as biotechnology, information technology, and green technology, with digitalization, greening, and ecologicalization as the direction of industrial upgrading. Its basic connotation is manifested as higher quality workers, more advanced labor materials, and a wider range of labor objects. Compared with traditional productivity, the new-quality productivity of animal husbandry is an advanced productivity guided by technological innovation, new development concepts, and centered on the improvement of total factor productivity. It has significant characteristics of high production efficiency, good industrial benefits, and strong sustainable development capabilities. China's new-quality productivity in animal husbandry has a good foundation for development, but it also faces constraints such as insufficient innovation in animal husbandry breeding technology, weak core competitiveness, low mechanization rate of animal husbandry, weak independent research and development capabilities of intelligent equipment, urgent demand for "machine replacement", shortcomings in the quantity and quality of animal husbandry talents, low degree of scale of animal husbandry, and limited level of intelligent management. 
Artificial intelligence in animal husbandry can be widely used in environmental control, precision feeding, health monitoring and disease prevention and control, supply chain optimization and other fields. Through revolutionary breakthroughs in animal husbandry technology represented by digital technology and the innovative allocation of productivity factors linked by data elements and adapted to the digital economy, artificial intelligence has given rise to new-quality productivity in animal husbandry and empowered its high-quality development. [Conclusions and Prospects] This article proposes a path to promote the development of new-quality productivity in animal husbandry: improving the institutional mechanisms by which artificial intelligence promotes the development of the modern animal husbandry industry, strengthening the application of artificial intelligence in animal husbandry technology innovation and promotion, and improving the level of artificial intelligence management across the entire animal husbandry industry chain.

    Machine Learning Inversion Model of Soil Salinity in the Yellow River Delta Based on Field Hyperspectral and UAV Multispectral Data
    FAN Chengzhi, WANG Ziwen, YANG Xingchao, LUO Yongkai, XU Xuexin, GUO Bin, LI Zhenhai
    Smart Agriculture    2022, 4 (4): 61-73.   DOI: 10.12133/j.smartag.SA202212001
    Abstract1028)   HTML71)    PDF(pc) (1831KB)(4875)       Save

    Soil salinization in the Yellow River Delta is a stubborn problem that restricts the development of the agricultural economy and hinders agricultural production. To explore the retrieval of soil salt content from remote sensing images under the condition of no vegetation coverage, a typical area of the Yellow River Delta was taken as the study area, and ground-feature hyperspectral data, UAV multispectral imagery, and soil salt content at sample points were obtained. Three representative experimental areas with flat terrain and obvious soil salinization characteristics were set up in the study area, and 90 samples were collected in total. By optimizing the sensitive spectral parameters, machine learning algorithms of partial least squares regression (PLSR) and random forest (RF) were used to invert soil salt content in the study area. The results showed that: (1) The hyperspectral band at 1972 nm had the highest sensitivity to soil salt content, with a correlation r of -0.31, and the optimized shortwave-infrared spectral parameters improved the accuracy of estimating soil salt content. (2) The RF models built from the two data sources had better stability than the PLSR models; the RF model performed well in generalization ability and error balance, but had some over-fitting problems. (3) The RF model based on ground-feature hyperspectral data (R2 = 0.54, verified RMSE = 3.30 g/kg) was superior to the RF model based on UAV multispectral data (R2 = 0.54, verified RMSE = 3.35 g/kg). Adding image texture features improved the estimation accuracy of the multispectral model, but its verification accuracy was still lower than that of the hyperspectral model. (4) Soil salt content in the study area was mapped from UAV multispectral imagery and the RF model. This study demonstrates that the level of soil salinization in the Yellow River Delta differs significantly by geographical location. 
The cultivated land in the study area is mainly lightly and moderately salinized soil, which imposes certain restrictions on crop cultivation. Areas with low soil salt content are suitable for planting crops adapted to low-salinity fields, and farmland with high soil salt content is suitable for planting crops with high salinity tolerance. This study constructed and compared soil salinity inversion models of the Yellow River Delta from two different data sources, optimized them based on the advantages of each data source, and explored the inversion of soil salt content without vegetation coverage, providing a reference for more accurate inversion of land salinization.

    Automatic Acquisition and Target Extraction of Beef Cattle 3D Point Cloud from Complex Environment
    LI Jiawei, MA Weihong, LI Qifeng, XUE Xianglong, WANG Zhiquan
    Smart Agriculture    2022, 4 (2): 64-76.   DOI: 10.12133/j.smartag.SA202206003
    Abstract969)   HTML69)    PDF(pc) (2809KB)(1960)       Save

    Non-contact measurement based on point cloud acquisition technology can alleviate stress responses in beef cattle while collecting core body dimension data, but current 3D data collection for beef cattle is usually time-consuming and easily influenced by the environment, making it inapplicable to the actual breeding environment. To overcome the difficulty of obtaining complete beef cattle point clouds, a non-contact phenotype data acquisition equipment was developed with a 3D reconstruction function, which can provide a large amount of standardized 3D quantitative phenotype data for the beef cattle breeding and fattening process. The system is made up of a Kinect DK depth camera, an infrared grating trigger, and a radio frequency identification (RFID) trigger, which enables the multi-angle instantaneous acquisition of beef cattle point clouds as the cattle pass through the walkway. The point cloud processing algorithm was developed on the C++ platform with the Point Cloud Library (PCL), and 3D reconstruction of beef cattle point clouds was achieved through spatial and outlier point filtering, random sample consensus (RANSAC) shape fitting, point cloud thinning, and perception-box filtering based on dimensionality-reduction density clustering, which effectively filters out interference, such as noise from the railings close to the cattle, without destroying the integrity of the point clouds. In the present work, a total of 124 sets of point clouds were successfully collected from 20 beef cattle on an actual farm using this system, and the target extraction experiments were completed. Notably, the cattle passed through the walkway in a natural state without any intervention during the whole data collection process. The experimental results showed that the acquisition success rate of this device was 91.89%. 
The coordinate system of the collected point cloud was consistent with the real situation, and the body dimension reconstruction error was 0.6%. This device can realize the automatic acquisition and 3D reconstruction of beef cattle point cloud data from multiple angles without human intervention, and can automatically extract the target beef cattle point clouds from a complex environment. The point cloud data collected by this system help to restore the body size and shape of beef cattle, thereby providing solid support for the measurement of core parameters such as body height, body width, body oblique length, chest circumference, abdominal circumference, and body weight.
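The outlier-filtering step in the pipeline above can be sketched in NumPy (PCL's `StatisticalOutlierRemoval` works on the same idea): points whose mean distance to their k nearest neighbors exceeds the cloud-wide mean plus a multiple of the standard deviation are discarded. The parameters `k` and `std_ratio`, and the synthetic "cattle body" and "railing noise" clusters, are illustrative assumptions.

```python
import numpy as np

def statistical_outlier_filter(points: np.ndarray, k: int = 8, std_ratio: float = 1.0) -> np.ndarray:
    """Drop points whose mean k-NN distance is anomalously large."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d.sort(axis=1)
    mean_knn = d[:, 1 : k + 1].mean(axis=1)   # skip column 0 (self-distance = 0)
    thresh = mean_knn.mean() + std_ratio * mean_knn.std()
    return points[mean_knn <= thresh]

rng = np.random.default_rng(3)
body = rng.normal(0.0, 0.2, size=(200, 3))    # dense "cattle body" cluster
noise = rng.uniform(3.0, 5.0, size=(10, 3))   # sparse railing-like noise
cloud = np.vstack([body, noise])
filtered = statistical_outlier_filter(cloud)
print(len(cloud), "->", len(filtered))
```

The brute-force distance matrix is fine for a demo; a real pipeline would use a k-d tree for the neighbor search.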

    Three-Dimensional Virtual Orchard Construction Method Based on Laser Point Cloud
    FENG Han, ZHANG Hao, WANG Zi, JIANG Shijie, LIU Weihong, ZHOU Linghui, WANG Yaxiong, KANG Feng, LIU Xingxing, ZHENG Yongjun
    Smart Agriculture    2022, 4 (3): 12-23.   DOI: 10.12133/j.smartag.SA202207002
    Abstract965)   HTML107)    PDF(pc) (2426KB)(1787)       Save

    To address the low level of digitalization in orchard management and the relatively limited construction methods available, a three-dimensional virtual orchard construction method based on laser point clouds was proposed in this research. First, a hand-held 3D point cloud acquisition device (3D-BOX) combined with the lidar odometry and mapping (LOAM) SLAM algorithm was used to acquire the orchard point cloud dataset. Then the outliers and noise points were removed using a statistical filtering algorithm based on K-neighbor distance statistics: a distance threshold model for removing noise points was established, and when a point's mean neighbor distance exceeded the threshold, it was marked as an outlier and separated from the point cloud dataset, achieving discrete point filtering. A VoxelGrid filter was used for downsampling, and the cloth simulation filtering (CSF) algorithm was used to calculate the distance between the cloth grid points and the corresponding laser points, distinguishing ground points from non-ground points by a distance threshold; combined with the density-based spatial clustering of applications with noise (DBSCAN) algorithm, ground removal and cluster segmentation of the orchard were realized. Finally, the Unity3D engine was used to build a virtual orchard roaming scene, converting the real-time GPS data of the operating equipment from the WGS-84 coordinate system to the Gauss projection plane coordinate system through the forward Gaussian projection calculation. The real-time trajectory of the equipment was displayed through a LineRenderer, realizing the visual display of the working machine's motion and operation trajectories. 
To verify the effectiveness of the virtual orchard construction method, tests were carried out in a begonia fruit orchard and a mango orchard. The results showed that the proposed point cloud processing method achieved cluster segmentation accuracies of 95.3% for begonia fruit trees and 98.2% for mango trees. Compared with the row spacing and plant spacing of fruit trees in the actual mango orchard, the average inter-row error of the virtual mango orchard was about 3.5%, and the average inter-plant error was about 6.6%. Comparing the virtual orchard constructed in Unity3D with the actual orchard showed that the proposed method can effectively reproduce the actual three-dimensional situation of the orchard with a good visualization effect, providing a technical solution for the digital modeling and management of orchards.
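The per-tree cluster segmentation step above can be illustrated with scikit-learn's DBSCAN: after ground removal, the remaining points (projected to the x-y plane here) should group into one cluster per tree. The tree layout, noise level, `eps`, and `min_samples` below are synthetic assumptions standing in for the orchard data.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)
# Two rows of three trees: 4 m row spacing, 3 m plant spacing.
centers = np.array([[x, y] for y in (0.0, 4.0) for x in (0.0, 3.0, 6.0)])
# 60 canopy points per tree, scattered around each trunk position.
points = np.vstack([c + rng.normal(0, 0.3, size=(60, 2)) for c in centers])

labels = DBSCAN(eps=0.8, min_samples=10).fit_predict(points)
n_trees = len(set(labels) - {-1})   # label -1 marks noise points
print("clusters found:", n_trees)   # expect one cluster per tree
```

Because the inter-tree gaps (about 3 m) are much larger than `eps`, each tree forms its own density-connected component; row and plant spacing can then be estimated from the cluster centroids.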

    Autonomous Navigation and Automatic Target Spraying Robot for Orchards
    LIU Limin, HE Xiongkui, LIU Weihong, LIU Ziyan, HAN Hu, LI Yangfan
    Smart Agriculture    2022, 4 (3): 63-74.   DOI: 10.12133/j.smartag.SA202207008
    Abstract960)   HTML82)    PDF(pc) (1905KB)(2801)       Save

    To realize autonomous navigation and automatic target spraying by intelligent plant protection machinery in orchards, an autonomous navigation and automatic target spraying robot for orchards was developed in this study. Firstly, a single 3D light detection and ranging (LiDAR) sensor was used to collect information on the fruit trees and other surroundings of the robot. The region of interest (ROI) was determined using information on the fruit trees in the orchard (plant spacing, plant height, and row spacing) as well as the fundamental LiDAR parameters, ensuring that the LiDAR detected the canopy information of a whole fruit tree within the ROI. Secondly, the point clouds within the ROI were projected into two dimensions to obtain the centroid coordinates of the fruit trees, which gave the fruit tree locations. Based on these locations, the fruit tree row lines were obtained by the random sample consensus (RANSAC) algorithm, and the center line (navigation line) of the fruit tree rows within the ROI was derived from the row lines. The robot was controlled to drive along the center line by the angular velocity signal transmitted from the computer. Next, the robot's body speed and position were determined by encoders and an inertial measurement unit (IMU), and the collected zoned canopy information of the fruit trees was corrected by the IMU. A designed logical algorithm judged the presence or absence of canopy in each fruit tree zone. Finally, the nozzles were controlled to spray or not according to the presence or absence of the corresponding zoned canopy. The following conclusions were obtained. The maximum lateral deviation of the robot during autonomous navigation was 21.8 cm, and the maximum course deviation angle was 4.02°. Compared with traditional spraying, the automatic target spraying designed in this study reduced pesticide volume, airborne drift and ground loss by 20.06%, 38.68% and 51.40%, respectively. 
There was no significant difference between automatic target spraying and traditional spraying in the percentage distribution of airborne drift. In the distribution of ground loss, automatic target spraying deposited 43% at the bottom of the test fruit trees, and 29% and 28% at the middle of the test fruit trees and at the left and right neighboring fruit trees, respectively; in traditional spraying, the corresponding percentages were 25%, 38%, and 37%. The developed robot can realize autonomous navigation while ensuring the spraying effect and reducing pesticide volume and losses.
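The RANSAC row-line fitting step described above can be sketched in NumPy: fit a line to the 2-D centroids of a tree row while tolerating a few spurious detections, then refit on the inliers. The iteration count, distance tolerance, and the synthetic centroids are illustrative assumptions, not the robot's actual parameters.

```python
import numpy as np

def ransac_line(pts: np.ndarray, n_iter: int = 200, tol: float = 0.15, seed: int = 0):
    """Return (slope, intercept) of the line with the most inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        a, b = pts[rng.choice(len(pts), 2, replace=False)]
        d = b - a
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        dn = d / norm
        # Perpendicular distance of every point to the candidate line.
        dist = np.abs(dn[0] * (pts[:, 1] - a[1]) - dn[1] * (pts[:, 0] - a[0]))
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the inlier set by least squares.
    slope, intercept = np.polyfit(pts[best_inliers, 0], pts[best_inliers, 1], 1)
    return slope, intercept

# Tree centroids along y = 0.5x + 1, plus two spurious detections.
xs = np.arange(0.0, 10.0)
pts = np.vstack([np.column_stack([xs, 0.5 * xs + 1.0]), [[3.0, 9.0], [7.0, -4.0]]])
slope, intercept = ransac_line(pts)
print(round(slope, 2), round(intercept, 2))  # -> 0.5 1.0
```

Fitting two such row lines and averaging them yields the center navigation line the robot tracks.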

    Automatic Navigation and Spraying Robot in Sheep Farm
    FAN Mingshuo, ZHOU Ping, LI Miao, LI Hualong, LIU Xianwang, MA Zhirun
    Smart Agriculture    2024, 6 (4): 103-115.   DOI: 10.12133/j.smartag.SA202312016
    Abstract960)   HTML30)    PDF(pc) (2160KB)(661)       Save

    [Objective] Manual disinfection in large-scale sheep farms is laborious and time-consuming, and often results in incomplete coverage and inadequate disinfection. With the rapid development of artificial intelligence and automation technology, automatic navigation and spraying robots for livestock and poultry breeding have become a research hotspot. To maintain shed hygiene and ensure sheep health, an automatic navigation and spraying robot for sheep sheds was proposed. [Methods] The robot was designed with a focus on three aspects: hardware, the semantic segmentation model, and control algorithms. In terms of hardware, it consisted of a tracked chassis, cameras, and a collapsible spraying device. For the semantic segmentation model, enhancements were made to the lightweight ENet model, including the addition of residual structures to prevent network degradation and the incorporation of a squeeze-and-excitation network (SENet) attention mechanism in the initialization module, which helped capture global features while the feature map resolution was still high, addressing precision issues. The original 6-layer ENet network was reduced to 5 layers to balance the encoder and decoder. Drawing inspiration from dilated spatial pyramid pooling, a context convolution module (CCM) was introduced to improve scene understanding, and a criss-cross attention (CCA) mechanism was adapted to acquire global context features at different scales without cascading, reducing information loss. This led to the double attention ENet (DAENet) semantic segmentation model, which achieves real-time and accurate segmentation of sheep shed surfaces. Regarding control algorithms, a method was devised to address the robot's difficulty in controlling its direction at junctions. 
Lane recognition and lane center point identification algorithms were proposed to identify and mark navigation points during the robot's movement outside the sheep shed by simulating real roads. Two cameras were employed, and a camera switching algorithm was developed to enable seamless switching between them while also controlling the spraying device. Additionally, a novel offset and velocity calculation algorithm was proposed to control the speeds of the robot's left and right tracks, enabling control over the robot's movement, stopping, and turning. [Results and Discussions] The DAENet model achieved a mean intersection over union (mIoU) of 0.945 3 in image segmentation tasks, meeting the required segmentation accuracy. During testing of the camera switching algorithm, the time taken for the complete transition from camera switch to spraying device action did not exceed 15 s when road conditions changed. Testing of the center point and offset calculation algorithm revealed that, when processing multiple frames of video streams, the algorithm averaged 0.04 to 0.055 s per frame, achieving frame rates of 20 to 24 frames per second and meeting real-time operational requirements. In field experiments conducted on a sheep farm, the robot successfully completed automatic navigation and spraying tasks in two sheds without colliding with roadside troughs. The deviation from the road and lane centerlines did not exceed 0.3 m. Operating at a travel speed of 0.2 m/s, the liquid in the medicine tank was adequate to complete the spraying tasks for two sheds. The robot maintained an average frame rate of 22.4 frames per second during operation, meeting the experimental requirements for accurate and real-time information processing. 
Observation indicated that the spraying coverage rate of the robot exceeded 90%, meeting the experimental coverage requirements. [Conclusions] The proposed automatic navigation and spraying robot, based on the DAENet semantic segmentation model and the center point recognition algorithm, combined with the hardware design and control algorithms, achieves comprehensive disinfection within sheep sheds while ensuring safety and real-time operation.
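The offset-and-velocity idea described above can be sketched as a simple differential-drive rule: the lateral offset of the detected lane center point from the image center is converted into left/right track speeds. The proportional gain and the 0.2 m/s base speed are assumed values for illustration (the base speed matches the travel speed reported, but the gain is not from the paper).

```python
def track_speeds(offset_px: float, base_speed: float = 0.2, gain: float = 0.0005):
    """offset_px > 0 means the lane center is right of the image center."""
    correction = gain * offset_px
    left = base_speed + correction    # speed up the left track to steer right
    right = base_speed - correction
    return round(left, 3), round(right, 3)

print(track_speeds(0.0))    # -> (0.2, 0.2): drive straight
print(track_speeds(100.0))  # -> (0.25, 0.15): turn right toward the center
```

A zero offset yields equal track speeds (straight travel); the sign of the offset determines the turning direction, and its magnitude the turning rate.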

    Real-Time Monitoring Method for Cow Rumination Behavior Based on Edge Computing and Improved MobileNet v3
    ZHANG Yu, LI Xiangting, SUN Yalin, XUE Aidi, ZHANG Yi, JIANG Hailong, SHEN Weizheng
    Smart Agriculture    2024, 6 (4): 29-41.   DOI: 10.12133/j.smartag.SA202405023
    Abstract948)   HTML46)    PDF(pc) (1694KB)(307)       Save

    [Objective] Real-time monitoring of cow rumination behavior is of paramount importance for promptly obtaining information about cow health and predicting cow diseases. Various strategies have been proposed for monitoring rumination behavior, including video surveillance, sound recognition, and sensor monitoring methods; however, they suffer from inadequate real-time performance when deployed on edge devices. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method based on edge computing was proposed. [Methods] Autonomously designed edge devices were utilized to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for the real-time recognition of rumination behavior. For the federated edge intelligence strategy, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed using the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For split edge intelligence, a model named MobileNet-LSTM was designed by integrating the MobileNet v3 network, a collaborative attention mechanism, and a Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average precision, recall, F1-score, specificity, and accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance. 
[Conclusions] It is provided a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
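The FedAvg aggregation named above combines locally trained models from edge devices into a global model by sample-weighted averaging. A minimal sketch of that averaging step (the client data and layer shapes are illustrative, not from the study):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Aggregate client model weights with FedAvg: a weighted average in
    which each client's contribution is proportional to its local sample
    count."""
    total = sum(client_sizes)
    agg = [np.zeros_like(w) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            agg[i] += (n / total) * w
    return agg

# Two edge devices holding 100 and 300 samples respectively; each model
# is a list of parameter arrays (here a single tiny layer).
clients = [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]]
global_w = fed_avg(clients, [100, 300])
print(global_w[0])  # weighted toward the larger client: [2.5 3.5]
```

The server would redistribute `global_w` to the edge devices for the next round of local training.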

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    The Key Issues and Evaluation Methods for Constructing Agricultural Pest and Disease Image Datasets: A Review
    GUAN Bolun, ZHANG Liping, ZHU Jingbo, LI Runmei, KONG Juanjuan, WANG Yan, DONG Wei
    Smart Agriculture    2023, 5 (3): 17-34.   DOI: 10.12133/j.smartag.SA202306012
    Abstract940)   HTML172)    PDF(pc) (1576KB)(7421)       Save

    [Significance] The scientific dataset of agricultural pests and diseases is the foundation for the monitoring and early warning of agricultural pests and diseases. It is of great significance for the development of agricultural pest control and is an important component of developing smart agriculture. With the recognized importance of deep learning technology in the intelligent monitoring of agricultural pests and diseases, the quality of the dataset directly affects the effectiveness of image recognition algorithms, and the construction of high-quality agricultural pest and disease datasets is gradually attracting attention from scholars in this field. In the task of image recognition, the recognition effect depends on the one hand on the improvement strategy of the algorithm, and on the other hand on the quality of the dataset. The same recognition algorithm learns different features from datasets of different quality, so its recognition performance also varies. In order to propose dataset evaluation indexes that measure the quality of agricultural pest and disease datasets, this article analyzes existing datasets and, taking the challenges faced in constructing agricultural pest and disease image datasets as the starting point, reviews the construction of such datasets. [Progress] Firstly, pest and disease datasets are divided into two categories: private datasets and public datasets. Private datasets are characterized by high annotation quality, high image quality, and a large number of inter-class samples, but they are not publicly available. Public datasets are characterized by multiple types, low image quality, and poor annotation quality. Secondly, the problems faced in the construction of datasets are summarized, including imbalanced categories at the dataset level, difficulty in feature extraction at the sample level, and difficulty in determining the appropriate dataset size at the usage level. 
These include imbalanced inter-class and intra-class samples, selection bias, multi-scale targets, dense targets, uneven data distribution, uneven image quality, insufficient dataset size, and limited dataset availability. The main causes of these problems are analyzed from two key aspects of dataset construction, image acquisition and annotation methods, and improvement strategies and suggestions for algorithms to address the above issues are summarized. The collection devices for datasets can be divided into handheld devices, drone platforms, and fixed collection devices. Collection with handheld devices is flexible and convenient, but it is inefficient and demands considerable photography skill. The drone platform is suitable for data collection over contiguous areas, but the detailed features it captures are not clear enough. Fixed devices offer higher efficiency, but the shooting scene is often relatively fixed. Image annotation is divided into rectangular annotation and polygonal annotation. In image recognition and detection, rectangular annotation is generally used more frequently, but images in which the target is difficult to separate from the background are hard to annotate, and improper annotation can introduce noise or leave the algorithm's feature extraction incomplete. In response to the problems in the above three aspects, evaluation methods for data distribution consistency, dataset size, and image annotation quality are summarized at the end of the article. [Conclusions and Prospects] Suggestions for future research and development on constructing high-quality agricultural pest and disease image datasets are proposed based on the actual needs of agricultural pest and disease image recognition: (1) Construct agricultural pest and disease datasets combined with practical usage scenarios. 
In order to enable the algorithm to extract richer target features, image data can be collected from multiple perspectives and environments to construct a dataset. According to actual needs, data categories can be divided scientifically and reasonably from the perspective of algorithm feature extraction, avoiding unreasonable inter-class and intra-class distances, and thus constructing a dataset with task-appropriate classification and balanced feature distribution. (2) Balance the relationship between datasets and algorithms. When improving algorithms, consider whether the distribution of categories and features in the dataset is sufficient and whether the dataset size matches the model, so as to improve algorithm accuracy, robustness, and practicality. Comparative experiments on algorithm improvements should be conducted on the same standard evaluation dataset when improving pest and disease image recognition algorithms. Research the correlation between the scale of agricultural pest and disease image data and algorithm performance, study the relationship between annotation methods and algorithms for pest and disease images that are difficult to annotate, integrate recognition algorithms for fuzzy, dense, and occluded targets, and propose evaluation indicators for agricultural pest and disease datasets. (3) Enhance the use value of datasets. Datasets can be used not only for research on image recognition but also for other business needs. The identification, collection, and annotation of target images is a challenging task in the construction of pest and disease datasets. When collecting image data, in addition to the images themselves, attention can be paid to collecting surrounding environmental information and host information. In this way a multimodal agricultural pest and disease dataset can be constructed, fully leveraging the value of the dataset. 
To allow researchers to focus on business innovation, it is necessary to innovate the organizational form of data collection, develop a big data platform for agricultural pests and diseases, explore the correlations among multimodal data, and improve the accessibility and convenience of data, thereby providing efficient services for application implementation and business innovation.
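Among the evaluation concerns raised in this review, class imbalance is the most mechanical to quantify. A small illustrative check, with hypothetical pest labels, computing per-class counts and a max/min imbalance ratio:

```python
from collections import Counter

def imbalance_ratio(labels):
    """Ratio of the largest to the smallest class count; 1.0 means a
    perfectly balanced dataset, larger values mean stronger imbalance."""
    counts = Counter(labels)
    return max(counts.values()) / min(counts.values()), dict(counts)

# Hypothetical pest-image labels: aphid-heavy, few leaf-miner samples.
labels = ["aphid"] * 120 + ["borer"] * 60 + ["leaf_miner"] * 20
ratio, counts = imbalance_ratio(labels)
print(counts)           # {'aphid': 120, 'borer': 60, 'leaf_miner': 20}
print(round(ratio, 1))  # 6.0
```

A ratio like this could feed into the kind of dataset-level quality index the review calls for, alongside annotation-quality and size criteria.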

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    High Quality Ramie Resource Screening Based on UAV Remote Sensing Phenotype Monitoring
    FU Hongyu, WANG Wei, LIAO Ao, YUE Yunkai, XU Mingzhi, WANG Ziwei, CHEN Jianfu, SHE Wei, CUI Guoxian
    Smart Agriculture    2022, 4 (4): 74-83.   DOI: 10.12133/j.smartag.SA202209001
    Abstract920)   HTML45)    PDF(pc) (1187KB)(1244)       Save

    Ramie is an important fiber crop. Due to the shortage of land resources and the promotion of excellent varieties, the genetic variation and diversity of ramie have decreased, which increases the need to investigate and protect the diversity of ramie germplasm resources. Crop phenotype measurement based on UAV remote sensing enables frequent, rapid, non-destructive and accurate monitoring of different genotypes, and can support the investigation of crop germplasm resources and the screening of distinctive, high-quality varieties. In order to realize efficient comprehensive evaluation of ramie germplasm phenotypes and assist the screening of dominant ramie varieties, a method for monitoring and screening ramie germplasm phenotypes based on UAV remote sensing images was proposed. Firstly, based on UAV remote sensing images, the digital surface model (DSM) and orthophoto of the test area were generated with Pix4dmapper. Then, the key phenotypic parameters of ramie germplasm resources (plant height, plant number, leaf area index, leaf chlorophyll content and water content) were estimated. Plant height was extracted from the DSM by the subtraction method, plant number was extracted from the orthophoto with a target detection algorithm, and four machine learning methods were used to estimate the leaf area index (LAI), leaf chlorophyll content (SPAD value) and water content. Finally, according to the extracted remote sensing phenotypic parameters, the genetic diversity of ramie germplasm was analyzed using variability analysis and principal component analysis. The results showed that: (1) Ramie phenotype estimation based on UAV remote sensing was effective, with a fitting accuracy for plant height of 0.93 and a root mean square error (RMSE) of 5.654 cm. 
The fitting indexes of SPAD value, water content and LAI were 0.66, 0.79 and 0.74, respectively, and the RMSEs were 2.03, 2.21 and 0.63, respectively; (2) The remote sensing phenotypes of ramie germplasm differed significantly, with the coefficients of variation of LAI, plant height and plant number reaching 20.82%, 24.61% and 35.48%, respectively; (3) Principal component analysis clustered the remote sensing phenotypes into factor 1 (plant height and LAI) and factor 2 (LAI and SPAD value); factor 1 can be used to evaluate the structural characteristics of ramie germplasm resources, and factor 2 can be used as a screening index for high-light-efficiency ramie resources. This study could provide references for crop germplasm phenotypic monitoring and breeding correlation analysis.
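The subtraction method mentioned above derives a canopy height model as the DSM minus the bare-ground elevation. A minimal sketch, assuming both surfaces are co-registered numpy grids and that plot height is summarized by an upper percentile (the percentile choice is illustrative, not from the study):

```python
import numpy as np

def plant_height_from_dsm(dsm, dem, plot_mask):
    """Canopy height model by subtraction: DSM (crop surface) minus
    DEM (bare ground). The plot height is summarized as the 95th
    percentile within the plot mask to suppress soil-gap pixels."""
    chm = dsm - dem
    return float(np.percentile(chm[plot_mask], 95))

dem = np.zeros((4, 4))         # flat bare-ground elevation (m)
dsm = dem + 0.9                # canopy surface ~0.9 m above ground
dsm[0, 0] = 0.1                # one soil gap inside the plot
mask = np.ones((4, 4), dtype=bool)
print(plant_height_from_dsm(dsm, dem, mask))  # ~0.9
```

In practice the DEM would come from a pre-sowing flight or from ground pixels classified within the same survey.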

    Reference | Related Articles | Metrics | Comments0
    Progressive Convolutional Net Based Method for Agricultural Named Entity Recognition
    JI Jie, JIN Zhou, WANG Rujing, LIU Haiyan, LI Zhiyuan
    Smart Agriculture    2023, 5 (1): 122-131.   DOI: 10.12133/j.smartag.SA202303001
    Abstract907)   HTML39)    PDF(pc) (965KB)(1341)       Save

    Pre-training refers to the process of training deep neural network parameters on a large corpus before a model performs a specific task. This approach enables downstream tasks to fine-tune the pre-trained model parameters on a small amount of labeled data, eliminating the need to train a new model from scratch. Current research on named entity recognition (NER) using pre-trained language models (PLMs) only uses the last layer of the PLM as the output representation when facing challenges such as complex entity naming conventions and fuzzy entity boundaries in the agricultural field, ignoring the rich information contained in the model's internal layers. To address these issues, a named entity recognition method based on progressive convolutional networks is proposed. This method feeds natural sentences through the PLM and stores the output representation of each layer. The intermediate outputs of the pre-trained model are convolved sequentially to extract shallow feature information that may previously have been overlooked. In the progressive convolutional network module proposed in this research, the representations of the first two adjacent layers are convolved, and the fusion result is then convolved with the next layer in turn, resulting in an enhanced sentence embedding that covers the full depth of the model's layers. The method does not require the introduction of external information, which makes the sentence representation richer. Research has shown that the sentence embeddings output by model layers near the input contain more fine-grained information, such as words and phrases, which can assist with NER problems in the agricultural field. By fully utilizing computation that has already been performed, the obtained results enhance the sentence representation embeddings. 
Finally, a conditional random field (CRF) model was used to generate the globally optimal label sequence. On a constructed agricultural dataset containing four types of agricultural entities, the proposed method's comprehensive F1 value increased by 3.61 percentage points compared to the basic BERT (Bidirectional Encoder Representations from Transformers) model. On the open dataset MSRA, the F1 value also increased to 94.96%, indicating that the progressive convolutional network can enhance the model's ability to represent natural language and has advantages in NER tasks.
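The progressive fusion order described above can be sketched as follows. The pairwise convolution is stood in for by a fixed weighted sum, so this illustrates only the carry-forward fusion of adjacent layer representations, not the learned kernels of the actual module:

```python
import numpy as np

def progressive_fuse(layer_reps, alpha=0.5):
    """Progressively fuse per-layer sentence representations:
    fuse(layer1, layer2) -> carry result -> fuse with layer3 -> ...
    The pairwise convolution of the paper is approximated here by a
    weighted sum: alpha * carried + (1 - alpha) * next_layer."""
    fused = layer_reps[0]
    for rep in layer_reps[1:]:
        fused = alpha * fused + (1 - alpha) * rep
    return fused

# Four hypothetical PLM layers, each a (seq_len=3, hidden=2) tensor.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(3, 2)) for _ in range(4)]
enhanced = progressive_fuse(layers)
print(enhanced.shape)  # (3, 2): same shape as a single layer output
```

The enhanced embedding would then be passed to the CRF layer for sequence decoding.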

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Detection of Pear Inflorescence Based on Improved Ghost-YOLOv5s-BiFPN Algorithm
    XIA Ye, LEI Xiaohui, QI Yannan, XU Tao, YUAN Quanchun, PAN Jian, JIANG Saike, LYU Xiaolan
    Smart Agriculture    2022, 4 (3): 108-119.   DOI: 10.12133/j.smartag.SA202207006
    Abstract875)   HTML111)    PDF(pc) (2214KB)(4501)       Save

    Mechanized and intelligent flower thinning is an efficient flower thinning method. Classifying and detecting flowers and flower buds is a basic requirement for the normal operation of a flower thinning machine. Aiming at the problems of pear inflorescence detection and classification in the current intelligent production of pear orchards, an inflorescence recognition algorithm for Y-shaped trellis pear orchards, Ghost-YOLOv5s-BiFPN, based on improved YOLOv5s was proposed in this research. The detection model was obtained by labeling and augmenting the pear bud and flower images collected in the field and feeding them to the algorithm for training. The Ghost-YOLOv5s-BiFPN algorithm replaces the original path aggregation network structure with a weighted bidirectional feature pyramid network to effectively fuse features of different scales, and replaces traditional convolution with the Ghost module, reducing the number of model parameters and improving the operating efficiency of the equipment without reducing accuracy. Field experiment results showed that the detection accuracy of the Ghost-YOLOv5s-BiFPN algorithm for buds and flowers in pear inflorescences was 93.21% and 89.43%, respectively, with an average accuracy of 91.32%, and the detection time for a single image was 29 ms. Compared with the original YOLOv5s network, the detection accuracy improved by 4.18%, the mAP and recall rate improved by 4.2% and 2.7%, respectively; the number of parameters, model size and floating point operations were reduced by 46.6%, 44.4% and 47.5%, respectively; and the average detection time was shortened by 9 ms. 
With the addition of Ghost convolution and BiFPN to the model, the detection accuracy improved to a certain extent and the model became much lighter, effectively improving detection efficiency. The heat map results show that the BiFPN structure effectively enhances the representation ability of features, making the model focus more effectively on the features corresponding to the target. The results showed that the algorithm can meet the requirements of accurate identification and classification of pear buds and flowers, and provide technical support for subsequent intelligent flower thinning in pear orchards.
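The weighted bidirectional feature pyramid network replaces plain concatenation with a learnable weighted fusion of feature maps. A sketch of the fast normalized fusion rule commonly used in BiFPN (the weights are fixed here for illustration; in training they are learnable parameters):

```python
import numpy as np

def weighted_fusion(features, weights, eps=1e-4):
    """BiFPN-style fast normalized fusion: each input feature map is
    scaled by a non-negative weight, with the weights normalized so
    they sum to ~1 (eps avoids division by zero)."""
    w = np.maximum(np.asarray(weights, dtype=float), 0.0)  # ReLU on weights
    w = w / (w.sum() + eps)
    return sum(wi * f for wi, f in zip(w, features))

# Two same-sized feature maps from different pyramid levels.
f1 = np.full((2, 2), 1.0)
f2 = np.full((2, 2), 3.0)
out = weighted_fusion([f1, f2], weights=[1.0, 1.0])
print(out[0, 0])  # ~2.0: equal weights average the two maps
```

In the detector, the maps from different levels would first be resized to a common resolution before this fusion is applied.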

    Reference | Related Articles | Metrics | Comments0
    Development of China Feed Nutrition Big Data Analysis Platform
    XIONG Benhai, ZHAO Yiguang, LUO Qingyao, ZHENG Shanshan, GAO Huajie
    Smart Agriculture    2022, 4 (2): 110-120.   DOI: 10.12133/j.smartag.SA202205003
    Abstract874)   HTML59)    PDF(pc) (1590KB)(1417)       Save

    The shortage of feed grain is continually worsening in China, which makes feed grain security a matter of national food security. Therefore, comprehensively integrating basic feed nutrition data resources and improving the nutritional value of all available feed resources will be one of the key technical strategies to ensure national food security in China. In this study, based on the description specification and attribute data standards for 16 categories of Chinese feed raw materials, more than 500,000 records on the types, spatial distribution, chemical composition and nutritional value characteristics of existing feed resources, accumulated through previous projects from the sixth Five-Year Plan to the thirteenth Five-Year Plan period, were digitally collected, recorded, categorized and comprehensively analyzed. Using MySQL relational database technology and PHP, a new-generation feed nutrition big data online platform (http://www.chinafeeddata.org.cn/) was developed and a web data sharing service was provided. First of all, the online platform provides visual analysis of all stored data, enabling visual comparison of one or more feed nutrients in various graphic forms such as scatter plots, histograms, curves and column charts. Using QR code technology, all feed nutrition attribute data and feed entity sample traceability data can be shared and downloaded remotely in real time on mobile phones. Secondly, the online platform also incorporates various regression models for predicting effective feed nutrient values from the readily available feed chemical composition in the datasets, providing dynamic analysis of feed raw material nutrient variation. 
Finally, based on geographic information system (GIS) technology, the online platform integrates feed chemical composition and major mineral element concentration data with geographical location information, providing distribution queries and comparative analysis of feed raw material nutrition data on geographic information maps at both provincial and national levels. The platform also provides a download service for the various datasets, which facilitates the comprehensive application of existing feed nutrition data. This research also showed that expanding feed resource data and providing prediction and analysis models of effective feed nutrients could maximize the utilization of the existing feed nutrition data. After embedding online calculation modules of various feed formulation software, this platform would be able to provide a one-stop service and optimize the utilization of the feed nutrition data.
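A regression model of the kind the platform embeds, mapping readily available chemical composition to an effective nutrient value, can be sketched with ordinary least squares. The feature names and sample values below are hypothetical, not taken from the platform's datasets:

```python
import numpy as np

def fit_nutrient_model(X, y):
    """Least-squares linear model mapping chemical composition (e.g.,
    crude protein %, crude fiber %) to an effective nutrient value such
    as metabolizable energy; returns [intercept, coef_1, coef_2, ...]."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Hypothetical samples: [crude protein %, crude fiber %] -> energy value.
X = np.array([[18.0, 4.0], [16.0, 6.0], [20.0, 3.0], [14.0, 8.0]])
y = np.array([13.2, 12.1, 13.9, 11.0])
coef = fit_nutrient_model(X, y)
pred = coef[0] + X @ coef[1:]
print(np.round(pred - y, 2))  # residuals of the fit on the toy samples
```

A production platform would validate such models per feed category and report goodness-of-fit alongside the predictions.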

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Research Progress and Challenges of Oil Crop Yield Monitoring by Remote Sensing
    MA Yujing, WU Shangrong, YANG Peng, CAO Hong, TAN Jieyang, ZHAO Rongkun
    Smart Agriculture    2023, 5 (3): 1-16.   DOI: 10.12133/j.smartag.SA202303002
    Abstract874)   HTML179)    PDF(pc) (837KB)(5421)       Save

    [Significance] Oil crops play a significant role in the food supply as an important source of edible vegetable oils and plant proteins. Real-time, dynamic and large-scale monitoring of oil crop growth is essential for guiding agricultural production, stabilizing markets, and maintaining health. Previous studies have made considerable progress in regional-scale yield simulation of staple crops based on remote sensing methods, but regional-scale yield simulation of oil crops remains poor owing to the complexity of their plant traits and structural characteristics. Therefore, studying regional oil crop yield estimation based on remote sensing technology is urgently needed. [Progress] This paper summarizes remote sensing technology in oil crop monitoring from three aspects: background, progress, and opportunities and challenges. Firstly, the significance and advantages of using remote sensing technology to estimate the yield of oil crops are expounded, and it is pointed out that both parameter inversion and crop area monitoring are vital components of yield estimation. Secondly, the current state of oil crop monitoring based on remote sensing technology is summarized from the three aspects of remote sensing parameter inversion, crop area monitoring and yield estimation. For parameter inversion, optical remote sensors have been used more than other sensors for oil crop inversion in previous studies. The advantages and disadvantages of empirical and physical model inversion methods are analyzed, and the advantages and disadvantages of optical and microwave data are further illustrated with respect to oil crop structure and trait characteristics; optimal choices of data and methods for oil crop parameter inversion are then given. For crop area monitoring, this paper mainly elaborates on two parts, optical and microwave remote sensing data. 
Combined with the structure of oil crops and the characteristics of their planting areas, research on oil crop area monitoring based on different remote sensing data sources is reviewed, including the advantages and limitations of each data source for area monitoring. Then, two yield estimation methods are introduced: remote sensing yield estimation and data assimilation yield estimation. The phenological periods used for oil crop yield estimation, remote sensing data sources and modeling methods are summarized. Next, data assimilation technology is introduced; it is proposed that data assimilation has great potential in oil crop yield estimation, and assimilation research on oil crops is expounded with respect to assimilation methods and grid selection. All of this indicates that data assimilation technology could improve the accuracy of regional oil crop yield estimation. Thirdly, this paper points out the opportunities for remote sensing technology in oil crop monitoring, raises problems and challenges in crop feature selection, spatial scale determination and remote sensing data source selection for oil crop yield estimation, and forecasts the development trend of oil crop yield estimation research. [Conclusions and Prospects] The paper puts forward the following suggestions on three aspects: (1) Regarding crop feature selection, when estimating yields of oil crops such as rapeseed and soybean, whose siliques or pods photosynthesize actively, relying solely on canopy leaf area index (LAI) as the assimilation state variable may result in significant underestimation of yield, impacting the accuracy of regional crop yield simulation. Therefore, the crop plant characteristics and the agronomic mechanism of yield formation through siliques or pods must be considered when estimating yields of oil crops. 
(2) In determining the spatial scale, some oil crops are distributed in hilly and mountainous areas with mixed land cover. Using regular yield simulation grids may mix in numerous background objects, introducing additional errors and affecting the assimilation accuracy of yield estimation, which poses a challenge to yield estimation research. Thus, appropriate methods should be chosen to divide irregular unit grids and determine the optimal scale for yield estimation, thereby improving its accuracy. (3) In terms of remote sensing data selection, the monitoring of oil crops can be influenced by crop structure and meteorological conditions, and depending solely on spectral data may affect yield estimation results. It is important to incorporate radar off-nadir remote sensing measurement techniques to capture the response relationship between crop leaves, siliques or pods and remote sensing data parameters. This can bridge the gap between crop characteristics and remote sensing information for crop yield simulation. This paper can serve as a valuable reference and stimulus for further research on regional yield estimation and growth monitoring of oil crops. It supplements existing knowledge and provides insightful considerations for enhancing the accuracy and efficiency of oil crop production monitoring and management.
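The data assimilation idea discussed above can be illustrated with a deliberately simplified sequential update that nudges a simulated LAI trajectory toward sparse remote sensing observations. The fixed gain stands in for the gain a full scheme such as the ensemble Kalman filter would compute from model and observation uncertainties:

```python
def assimilate_lai(model_lai, observed_lai, gain=0.6):
    """Nudge the crop model's LAI state toward the remote sensing
    observation; gain in (0, 1] plays the role of a Kalman gain."""
    return model_lai + gain * (observed_lai - model_lai)

# Simulated daily LAI trajectory with sparse satellite observations.
sim = [0.5, 1.2, 2.0, 3.1, 3.8]
obs = {2: 2.6, 4: 3.4}      # time step -> observed LAI
state = []
for t, lai in enumerate(sim):
    if t in obs:
        lai = assimilate_lai(lai, obs[t])
    state.append(round(lai, 2))
print(state)  # [0.5, 1.2, 2.36, 3.1, 3.56]
```

In a real assimilation system the updated state would also feed back into the crop model so that subsequent simulated steps, and ultimately the yield estimate, reflect the corrections.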

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Vegetable Crop Growth Modeling in Digital Twin Platform Based on Large Language Model Inference
    ZHAO Chunjiang, LI Jingchen, WU Huarui, YANG Yusen
    Smart Agriculture    2024, 6 (6): 63-71.   DOI: 10.12133/j.smartag.SA202410008
    Abstract874)   HTML150)    PDF(pc) (1460KB)(882)       Save

    [Objective] In the era of digital agriculture, real-time monitoring and predictive modeling of crop growth are paramount, especially in autonomous farming systems. Traditional crop growth models, often constrained by their reliance on static, rule-based methods, fail to capture the dynamic and multifactorial nature of vegetable crop growth, and modeling crop growth within digital twin platforms has historically been hindered by the complex interactions among biotic and abiotic factors. This research addresses these challenges by leveraging the advanced reasoning capabilities of pre-trained large language models (LLMs) to simulate and predict vegetable crop growth with accuracy and reliability. [Methods] The methodology was structured in several distinct phases. Initially, a comprehensive dataset was curated to include extensive information on vegetable crop growth cycles, environmental conditions, and management practices. This dataset incorporates continuous data streams such as soil moisture, nutrient levels, climate variables, pest occurrence, and historical growth records. By combining these data sources, the study ensured that the model was well equipped to understand and infer the complex interdependencies inherent in crop growth processes. Then, advanced techniques were employed for pre-training and fine-tuning LLMs to adapt them to the domain-specific requirements of vegetable crop modeling. A staged intelligent agent ensemble was designed to work within the digital twin platform, consisting of a central managerial agent and multiple stage-specific agents. The managerial agent was responsible for identifying transitions between distinct growth stages of the crops, while the stage-specific agents were tailored to handle the unique characteristics of each growth phase. This modular architecture enhanced the model's adaptability and precision, ensuring that each phase of growth received specialized attention and analysis. 
[Results and Discussions] The experimental validation of this method was conducted in a controlled agricultural setting at the Xiaotangshan Modern Agricultural Demonstration Park in Beijing. Cabbage (Zhonggan 21) was selected as the test crop due to its significance in agricultural production and the availability of comprehensive historical growth data. Over five years, the dataset collected included 4 300 detailed records, documenting parameters such as plant height, leaf count, soil conditions, irrigation schedules, fertilization practices, and pest management interventions. This dataset was used to train the LLM-based system and evaluate its performance using ten-fold cross-validation. The experimental results demonstrated the efficacy of the proposed system in addressing the complexities of vegetable crop growth modeling. The LLM-based model achieved 98% accuracy in predicting crop growth degrees and 99.7% accuracy in identifying growth stages. These metrics significantly outperform traditional machine learning approaches, including long short-term memory (LSTM), XGBoost, and LightGBM models. The superior performance of the LLM-based system highlights its ability to reason over heterogeneous data inputs and make precise predictions, setting a new benchmark for crop modeling technologies. Beyond accuracy, the LLM-powered system also excels in its ability to simulate growth trajectories over extended periods, enabling farmers and agricultural managers to anticipate potential challenges and make proactive decisions. For example, by integrating real-time sensor data with historical patterns, the system can predict how changes in irrigation or fertilization practices will impact crop health and yield. This predictive capability is invaluable for optimizing resource allocation and mitigating risks associated with climate variability and pest outbreaks. 
[Conclusions] The study emphasizes the importance of high-quality data in achieving reliable and generalizable models. The comprehensive dataset used in this research not only captures the nuances of cabbage growth but also provides a blueprint for extending the model to other crops. In conclusion, this research demonstrates the transformative potential of combining large language models with digital twin technology for vegetable crop growth modeling. By addressing the limitations of traditional modeling approaches and harnessing the advanced reasoning capabilities of LLMs, the proposed system sets a new standard for precision agriculture. Several avenues are also proposed for future work, including expanding the dataset, refining the model architecture, and developing multi-crop and multi-region capabilities.
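The staged agent ensemble can be sketched structurally as a managerial agent that detects the current growth stage and dispatches observations to stage-specific agents. The stage names, thresholds, and stub inference below are illustrative assumptions, not the paper's implementation:

```python
class StageAgent:
    """Stage-specific agent; in the described system this would wrap an
    LLM prompted for one growth phase (here a stub returning a label)."""
    def __init__(self, stage):
        self.stage = stage

    def infer(self, observation):
        return f"{self.stage}: leaf_count={observation['leaf_count']}"

class ManagerAgent:
    """Managerial agent: decides the current growth stage from the
    observation, then dispatches to the matching stage agent."""
    def __init__(self, agents):
        self.agents = agents

    def route(self, observation):
        # Hypothetical stage boundaries for a cabbage-like crop.
        stage = ("seedling" if observation["leaf_count"] < 8
                 else "rosette" if observation["leaf_count"] < 15
                 else "heading")
        return self.agents[stage].infer(observation)

manager = ManagerAgent({s: StageAgent(s)
                        for s in ("seedling", "rosette", "heading")})
print(manager.route({"leaf_count": 5}))   # seedling: leaf_count=5
print(manager.route({"leaf_count": 12}))  # rosette: leaf_count=12
```

The modularity shown here is the point: each stage agent can be specialized or replaced without touching the routing logic.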

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Review on Energy Efficiency Assessment and Carbon Emission Accounting of Food Cold Chain
    WANG Xiang, ZOU Jingui, LI You, SUN Yun, ZHANG Xiaoshuan
    Smart Agriculture    2023, 5 (1): 1-21.   DOI: 10.12133/j.smartag.SA202301007
    Abstract872)   HTML179)    PDF(pc) (1296KB)(4059)       Save

    Global energy supplies are increasingly tight, and global temperatures are gradually rising. Energy efficiency assessment and carbon emission accounting can provide theoretical tools and practical support for formulating energy conservation and emission reduction strategies for the food cold chain, and are a prerequisite for its sustainable development. In this paper, the relationship and differences between energy consumption and carbon emissions in the general food cold chain are first described, and the principles, advantages and disadvantages of three energy consumption conversion standards, solar emergy, standard coal and equivalent electricity, are discussed, along with the possibilities of applying these three standards to energy consumption analysis and energy efficiency evaluation of the food cold chain. Then, for a batch of fresh agricultural products, the energy consumption of six links of the food cold chain, including the first transportation, the manufacturer, the second transportation, the distribution center, the third transportation, and the retailer, is systematically and comprehensively analyzed at the product level, and the comprehensive energy consumption level of the food cold chain is obtained. On this basis, ten energy efficiency indicators covering five aspects (macro energy efficiency, micro energy efficiency, energy economy, environmental energy efficiency and comprehensive energy efficiency) are proposed, and an energy efficiency evaluation index system for the food cold chain is constructed. Other energy efficiency evaluation indicators and methods are also summarized. In addition, the carbon emission conversion standard of the food cold chain, namely carbon dioxide equivalent, is introduced, the boundary of carbon emission accounting is determined, and the carbon emission factors of China's electricity are mainly discussed. 
Furthermore, the origins, principles, advantages and disadvantages of the emission factor method, the life cycle assessment method, the input-output analysis method and the hybrid life cycle assessment method are reviewed, together with the basic process of the life cycle assessment method in calculating the food cold chain carbon footprint. In order to improve the energy efficiency of the food cold chain and reduce the carbon emissions of each link, energy conservation and emission reduction methods for the food cold chain are proposed from five aspects: refrigerants, distribution paths, energy, phase change cool storage technology and digital twin technology. Finally, the energy efficiency assessment and carbon emission accounting of the food cold chain are briefly discussed in order to provide a reference for promoting the sustainable development of China's food cold chain.
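The emission factor method reviewed above reduces to multiplying activity data by an emission factor for each link. A sketch with hypothetical electricity use per cold-chain link and an illustrative (not official) grid emission factor:

```python
def carbon_emissions(activity_kwh, factor_kg_co2e_per_kwh):
    """Emission factor method: CO2-equivalent emissions per link equal
    activity data (kWh) multiplied by the grid emission factor."""
    return {link: kwh * factor_kg_co2e_per_kwh
            for link, kwh in activity_kwh.items()}

# Hypothetical electricity use (kWh) across cold-chain links and an
# illustrative emission factor (kg CO2e/kWh); neither is an official value.
usage = {"first_transport": 120.0, "manufacturer": 800.0,
         "distribution_center": 450.0, "retailer": 300.0}
per_link = carbon_emissions(usage, 0.58)
print(round(sum(per_link.values()), 1))  # total kg CO2e: 968.6
```

Accounting in practice would use the officially published regional grid factors the paper discusses and would cover refrigerant leakage and fuel use as well as electricity.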

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Phenotypic Traits Extraction of Wheat Plants Using 3D Digitization
    ZHENG Chenxi, WEN Weiliang, LU Xianju, GUO Xinyu, ZHAO Chunjiang
    Smart Agriculture    2022, 4 (2): 150-162.   DOI: 10.12133/j.smartag.SA202203009
    Abstract870)   HTML111)    PDF(pc) (1803KB)(1709)       Save

    The multiple tillers and serious cross-occlusion among organs of wheat plants make it difficult to accurately extract the phenotypic traits of plants and organs from images or point clouds. To meet the needs of accurate phenotypic analysis of wheat plants, three-dimensional (3D) digitization was used to extract phenotypic parameters of wheat plants. Firstly, a digital representation method for wheat organs was given and a 3D digital data acquisition standard suitable for the whole growth period of wheat was formulated. According to this standard, data acquisition was carried out using a 3D digitizer. Based on the definition of phenotypic parameters and the semantic coordinate information contained in the 3D digitizing data, eleven conventional measurable phenotypic parameters in three categories were quantitatively extracted, including lengths, thicknesses, and angles of wheat plants and organs. Furthermore, two types of new parameters for shoot architecture and 3D leaf shape were defined. Plant girth, obtained by fitting the 3D discrete coordinates with the least squares method, was defined to quantitatively describe plant looseness or compactness. For leaf shape, wheat leaf curling and twisting were defined and quantified according to the change in direction of the leaf surface normal vector. Three wheat cultivars, FK13, XN979, and JM44, at three stages (rising stage, jointing stage, and heading stage) were used for method validation. The Open3D library was used to process and visualize wheat plant data. Visualization results showed that the acquired 3D digitization data of wheat plants were realistic, and that the data acquisition approach was capable of presenting morphological differences among cultivars and growth stages. Validation results showed that the errors of stem length, leaf length, stem thickness, and stem-leaf angle were relatively small, with R2 of 0.93, 0.98, 0.93, and 0.85, respectively.
The errors of leaf width and leaf inclination angle were also satisfactory, with R2 of 0.75 and 0.73, respectively. Because wheat leaves are narrow and easy to curl, and some leaves bend considerably, the errors of leaf width and leaf angle were relatively larger than those of the other parameters. The data acquisition procedure was rather time-consuming, while the data processing was quite efficient: it took around 133 ms to extract all mentioned parameters for a wheat plant containing 7 tillers and 27 leaves in total. The proposed method achieves convenient and accurate extraction of wheat phenotypes at the individual plant and organ levels, and provides technical support for wheat shoot architecture research.
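The leaf-twisting parameter above is quantified from the change in direction of the leaf surface normal vector. A small sketch of that idea, using synthetic normals rather than digitizer data: total twist is accumulated as the angle between consecutive unit normals sampled along the midrib.

```python
import numpy as np

def twist_angle(normals: np.ndarray) -> float:
    """Cumulative twist (degrees): sum of angles between consecutive normals."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    cosines = np.clip(np.sum(n[:-1] * n[1:], axis=1), -1.0, 1.0)
    return float(np.degrees(np.arccos(cosines)).sum())

# Synthetic leaf whose surface normal rotates 30 degrees in total about the midrib
theta = np.radians(np.linspace(0.0, 30.0, 7))
normals = np.stack([np.sin(theta), np.zeros_like(theta), np.cos(theta)], axis=1)
print(round(twist_angle(normals), 1))  # → 30.0
```

A curling measure could be built the same way from normals sampled across the leaf width instead of along its length.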

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Automatic Detection Method of Dairy Cow Lameness from Top-view Based on the Fusion of Spatiotemporal Stream Features
    DAI Xin, WANG Junhao, ZHANG Yi, WANG Xinjie, LI Yanxing, DAI Baisheng, SHEN Weizheng
    Smart Agriculture    2024, 6 (4): 18-28.   DOI: 10.12133/j.smartag.SA202405025
    Abstract865)   HTML31)    PDF(pc) (1828KB)(536)       Save

    [Objective] The detection of lameness in dairy cows is an important issue that urgently needs to be solved in large-scale dairy farming. Timely detection and effective intervention can reduce the culling rate of young dairy cows, which has practical significance for increasing milk production and improving the economic benefits of pastures. Because traditional manual detection and contact-sensor detection are inefficient and poorly automated, mainstream cow lameness detection methods are now based on computer vision. Existing computer vision-based methods mainly use a side-view perspective, which has limitations that are difficult to eliminate: in practice, cows block each other and deployment is difficult. A top-view method avoids these occlusion problems and remains usable on the farm; the aim of this study is therefore to solve the occlusion problem of the side view. [Methods] To fully exploit the undulating movement of the cow's trunk and the motion information in the time dimension while the cow walks, a top-view cow lameness detection method based on fused spatiotemporal stream features was proposed. By analyzing the height changes of a lame cow in the depth video stream during movement, a spatial stream feature image sequence was constructed. By analyzing the instantaneous speed of the lame cow's body moving forward and swaying left and right when walking, optical flow was used to capture the instantaneous speed of the cow's movement, and a temporal stream feature image sequence was constructed. The spatial stream and temporal stream features were combined to construct a fused spatiotemporal stream feature image sequence.
Different from traditional image classification tasks, the image sequence of a walking cow includes features in both the time and space dimensions. Lame and non-lame cows differ in posture and walking speed, so video analysis was a feasible way to characterize lameness as a behavior. A video action classification network could effectively model the spatiotemporal information in the input image sequence and output the corresponding category as its prediction. The convolutional block attention module (CBAM) was used to improve the PP-TSMv2 video action classification network and build the Cow-TSM cow lameness detection model. The CBAM module weights the channels of the different modalities while also attending to the weights between pixels, improving the model's feature extraction capability. Finally, cow lameness experiments were conducted on different modalities, different attention mechanisms, and different video action classification networks, and compared with existing methods. The cow lameness data comprised 180 video streams of walking cows, each decomposed into 100‒400 frames, with a 1:1 ratio of video segments of lame and normal cows. For extracting cow lameness features from the top view, RGB images contained little extractable information, so this work mainly used depth video streams. [Results and Discussions] A total of 180 segments of cow image sequence data were acquired and processed, including 90 lame and 90 non-lame cows. The prediction accuracy of the proposed method reached 88.7%, the model size was 22 M, and the offline inference time was 0.046 s.
On the same dataset, the mainstream video action classification models TSM, PP-TSM, SlowFast and TimesFormer reached prediction accuracies of 66.7%, 84.8%, 87.1% and 85.7%, respectively; the comprehensive performance of the improved Cow-TSM model was the best. At the same time, the recognition accuracy of the fused spatiotemporal stream features was 12% and 4.1% higher than that of the temporal mode and the spatial mode, respectively, proving the effectiveness of the spatiotemporal stream fusion in this method. Ablation experiments on the SE, SK, CA and CBAM attention mechanisms proved that CBAM worked best on this data: its channel attention suited the fused spatiotemporal stream data, and its spatial attention focused on the key spatial information in the cow images. Finally, comparisons were made with existing lameness detection methods from both the side view and the top view. Compared with existing side-view methods, the prediction accuracy of the proposed method was slightly lower, because the side view captures more effective lameness characteristics. Compared with existing top-view methods, the proposed fused spatiotemporal stream feature detection method offered better performance and practicability. [Conclusions] This method avoids the occlusion problem of detecting lame cows from the side view and improves the prediction accuracy of top-view detection.
It is of great significance for reducing the incidence of lameness in cows and improving the economic benefits of pastures, and meets the needs of large-scale pasture operations.
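The CBAM module discussed above combines channel attention and spatial attention. A minimal NumPy sketch of both sub-modules, with random untrained weights standing in for the learned shared MLP, and an element-wise sum of the pooled maps standing in for CBAM's 7×7 convolution (both substitutions are assumptions for illustration only):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    """CBAM-style channel attention on a (C, H, W) feature map.

    A shared two-layer MLP (w1, w2) is applied to the average- and
    max-pooled channel descriptors; the sigmoid of their sum reweights
    the channels.
    """
    avg = x.mean(axis=(1, 2))                      # (C,)
    mx = x.max(axis=(1, 2))                        # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))            # (C,)
    return x * scale[:, None, None]

def spatial_attention(x):
    """CBAM-style spatial attention: pool over channels, then squash.
    (Summing the maps stands in for the 7x7 conv of real CBAM.)"""
    scale = sigmoid(x.mean(axis=0) + x.max(axis=0))  # (H, W)
    return x * scale[None, :, :]

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))   # channel reduction, untrained
w2 = rng.standard_normal((C, C // 2))
y = spatial_attention(channel_attention(x, w1, w2))
print(y.shape)  # → (8, 4, 4)
```

In the article's setting the same reweighting is applied to the fused spatiotemporal feature maps inside the Cow-TSM network.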

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Agricultural Knowledge Recommendation Model Integrating Time Perception and Context Filtering
    WANG Pengzhe, ZHU Huaji, MIAO Yisheng, LIU Chang, WU Huarui
    Smart Agriculture    2024, 6 (1): 123-134.   DOI: 10.12133/j.smartag.SA202312012
    Abstract832)   HTML33)    PDF(pc) (1503KB)(1067)       Save

    [Objective] Knowledge services in agricultural scenarios are characterized by long cycles and prolonged activity periods. Traditional recommendation models cannot effectively mine the hidden information in agricultural scenarios. To improve the quality of agricultural knowledge recommendation services, agricultural contextual information based on the agricultural season should be fully considered. To address these issues, a time-aware and filter-enhanced sequential recommendation model for agricultural knowledge (TiFSA) was proposed. [Methods] First, time-aware positional embedding combined the temporal information of farmers' interactions with positional embedding, helping the model learn item relevance tied to the agricultural season. A time-aware multi-head self-attention network was proposed for the agricultural knowledge recommendation task: it extracted the interaction time information in the user interaction sequence and introduced it into the multi-head self-attention network to calculate the attention weights, which encoded the user's periodic, season-based interaction behavior and effectively captured the user's dynamic preferences over time. Then, a filtering algorithm was introduced to adaptively attenuate the noise in farmers' contextual data. The enhanced filtering module effectively filtered the noisy information in the agricultural dataset and alleviated the overfitting caused by the poorly normalized and sparse agricultural data, endowing the model with lower time complexity and adaptive noise attenuation and improving its applicability to agricultural scenarios.
Next, a multi-head self-attention network with temporal information was constructed to achieve unified modeling of time, items, and features, and to represent farmers' preferences over time in context, thereby providing reliable recommendation results for users. Finally, the AdamW optimizer was used to update the model parameters. AdamW adds L2 regularization and an appropriate penalty for larger weights, which updates all weights more smoothly and alleviates the problem of falling into local minima; applied to agricultural recommendation, it further improved the training of the model. The experimental data came from user likes, comments, and the corresponding time information in the National Agricultural Knowledge Intelligent Service Cloud Platform, and the ml-1m movie recommendation dataset was selected to additionally validate the model's performance. [Results and Discussions] On the user interaction sequence datasets from the National Agricultural Knowledge Intelligent Service Cloud Platform, TiFSA outperformed the other models on both datasets, with the larger improvement on the Agriculture dataset, where HR and NDCG improved by 14.02% and 16.19%, respectively, over the suboptimal model, TiSASRec; on the ml-1m dataset, HR and NDCG improved by 1.90% and 2.30%, respectively, over the suboptimal model, SASRec. In summary, the proposed TiFSA model achieves a substantial improvement over other models, which verified its effectiveness and showed that the time interval information of farmer interactions and the filtering algorithm play important roles in improving model performance in the agricultural context.
From the ablation experiments, when both the time-aware and enhanced filtering modules were removed, HR@10 and NDCG@10 were 0.293 6 and 0.203 9, respectively, and recommendation performance was poor. When only the time-aware module or only the enhanced filtering module was removed, the results improved to different degrees over TiFSA-tf, and the proposed TiFSA model achieved the best performance on both evaluation metrics. When only the multi-head self-attention network was used for recommendation, both metrics were lower, indicating that traditional sequential recommendation that considers only item numbers is not applicable to agricultural scenarios. When the enhanced filtering module was introduced without the time-aware module, performance improved but still fell short of the ideal recommendation effect. When only the time-aware module was introduced without the enhanced filtering module, the model improved significantly, which proved that the time-aware module suits agricultural scenarios and could effectively improve performance on the sequential recommendation task. When both modules were introduced, performance improved further, which on the one hand illustrated the dependence of the enhanced filtering module on the time-aware module, and on the other hand verified the necessity of adding enhanced filtering to the time-aware self-attention network.
[Conclusions] This research proposes an agricultural knowledge recommendation model that integrates time awareness and enhanced filtering. Introducing the user's interaction time intervals into the embedding lets the model effectively learn agricultural-season information, so that predictions of the user's interaction time and items align more closely with the actual scenario. The enhanced filtering algorithm attenuates the noise in the agricultural data and integrates effectively into the model, further improving recommendation performance. The experimental results show the effectiveness of the proposed TiFSA model on the agricultural dataset, and the ablation experiments confirm the positive effects of the time-aware and enhanced filtering modules on recommendation performance.
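A common realization of the kind of adaptive sequence filtering the abstract describes is a learnable filter in the frequency domain: transform the interaction-sequence embeddings with an FFT along the time axis, multiply by complex filter weights, and transform back. Whether TiFSA's module takes exactly this form is an assumption; the sketch below only illustrates the mechanism with NumPy.

```python
import numpy as np

def filter_layer(seq_emb: np.ndarray, filt: np.ndarray) -> np.ndarray:
    """Frequency-domain filtering of a (seq_len, dim) embedding sequence.

    filt has shape (seq_len // 2 + 1, dim): one complex weight per
    frequency bin and embedding dimension; in a trained model these
    weights would be learned to attenuate noisy frequencies.
    """
    spec = np.fft.rfft(seq_emb, axis=0)   # to the frequency domain
    spec = spec * filt                    # adaptive attenuation
    return np.fft.irfft(spec, n=seq_emb.shape[0], axis=0)

seq_len, dim = 16, 8
rng = np.random.default_rng(1)
x = rng.standard_normal((seq_len, dim))

# Identity filter: the round trip leaves the sequence unchanged.
identity = np.ones((seq_len // 2 + 1, dim), dtype=complex)
assert np.allclose(filter_layer(x, identity), x)

# A low-pass filter that zeroes the upper frequency bins (denoising stand-in).
lowpass = identity.copy()
lowpass[seq_len // 4:] = 0.0
y = filter_layer(x, lowpass)
print(y.shape)  # → (16, 8)
```

The FFT gives the layer O(n log n) cost in the sequence length, which matches the abstract's point about lower time complexity.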

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Real-Time Monitoring System for Rabbit House Environment Based on NB-IoT Network
    QIN Yingdong, JIA Wenshen
    Smart Agriculture    2023, 5 (1): 155-165.   DOI: 10.12133/j.smartag.SA202211008
    Abstract823)   HTML92)    PDF(pc) (1662KB)(3151)       Save

    To meet the needs of environmental monitoring and regulation in rabbit houses, a real-time environmental monitoring system based on the narrowband Internet of Things (NB-IoT) was proposed. The system overcomes the limitations of traditional wired networks and keeps network costs and circuit component expenses low. An Arduino development board and the Quectel BC260Y NB-IoT network module were used, along with the message queuing telemetry transport (MQTT) protocol for remote telemetry, enabling network connectivity and communication with an IoT cloud platform. Multiple sensors, including SGP30, MQ137, and 5516 photoresistors, were integrated to achieve real-time monitoring of environmental parameters within the rabbit house, such as sound level, light intensity, humidity, temperature, and gas concentrations. The collected data were stored, both locally and in the cloud, for further analysis and could inform environmental regulation and monitoring in rabbit houses. Signal alerts based on circuit principles were triggered when thresholds were exceeded, helping create an optimal living environment for the rabbits. The advantages of NB-IoT were compared with those of other networks, such as Wi-Fi and LoRa. The technology and process of building the system on the three-layer Internet of Things architecture were introduced. The prices of circuit components were analyzed, and the total cost of the entire system was less than 400 RMB. The system underwent network and energy consumption tests; transmission stability, reliability, and energy consumption were reasonable and consistent across different time periods, locations, and network connection methods. The NB-IoT network using the MQTT protocol processed an average of 0.57 transactions per second (TPS), and 34.2 messages per minute were sent and received, with a fluctuation of 1 message.
Continuous monitoring with an electricity meter showed that the device had an average voltage of approximately 12.5 V, a current of approximately 0.42 A, and an average power of 5.3 W, with no additional power consumption observed during communication. The performance of the sensors was tested over 24 hours indoors, during which temperature and lighting showed variations corresponding to day and night cycles. The environmental sensors captured readings stably and accurately, demonstrating their suitability for long-term monitoring. This system can serve as a reference on equipment cost and network selection for remote or large-scale livestock monitoring devices.
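The threshold-alert behavior described above can be sketched independently of the hardware. The parameter names and limits below are illustrative assumptions, not values from the deployed system:

```python
# Threshold-alert sketch for a rabbit-house monitor.
# All thresholds are illustrative placeholders, not the system's settings.

THRESHOLDS = {
    "temperature_c": (10.0, 28.0),   # (low, high) comfort range, assumed
    "humidity_pct": (40.0, 70.0),
    "nh3_ppm": (0.0, 20.0),          # ammonia ceiling, assumed
    "sound_db": (0.0, 70.0),
}

def in_range(sensor: str, value: float) -> bool:
    """True when a reading lies inside its allowed range."""
    low, high = THRESHOLDS[sensor]
    return low <= value <= high

def alerts(readings: dict) -> list:
    """List the sensors whose readings breach their thresholds."""
    return [s for s, v in readings.items() if not in_range(s, v)]

sample = {"temperature_c": 31.2, "humidity_pct": 55.0, "nh3_ppm": 24.0}
print(alerts(sample))  # → ['temperature_c', 'nh3_ppm']
```

In the real system the same check would run on the Arduino before each MQTT publish, so an out-of-range reading can trigger the local alert circuit as well as a cloud notification.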

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Orchard-Wide Visual Perception and Autonomous Operation of Fruit Picking Robots: A Review
    CHEN Mingyou, LUO Lufeng, LIU Wei, WEI Huiling, WANG Jinhai, LU Qinghua, LUO Shaoming
    Smart Agriculture    2024, 6 (5): 20-39.   DOI: 10.12133/j.smartag.SA202405022
    Abstract822)   HTML147)    PDF(pc) (4030KB)(3941)       Save

    [Significance] Fruit-picking robots stand as a crucial solution for intelligent fruit harvesting. Although significant progress has been made in foundational methods such as fruit recognition, orchard navigation, picking path planning, and robotic arm control, the practical implementation of a seamless picking system that integrates sensing, movement, and picking still encounters substantial technical hurdles. In contrast to current picking systems, the next generation of fruit-picking robots aims to replicate the autonomous skills of human fruit pickers, effectively performing ongoing tasks of perception, movement, and picking without human intervention. To tackle this challenge, this review examines the latest research methodologies and real-world applications in the field, critically assesses the strengths and limitations of existing methods, and categorizes the essential components of continuous operation into three sub-modules: local target recognition, global mapping, and operation planning. [Progress] Initially, the review explores methods for recognizing nearby fruit and obstacle targets. These methods encompass four main approaches: low-level feature fusion, high-level feature learning, RGB-D information fusion, and multi-view information fusion. Each approach incorporates advanced algorithms and sensor technologies for cluttered orchard environments. For example, low-level feature fusion utilizes basic attributes such as color, shape and texture to distinguish fruits from backgrounds, while high-level feature learning employs more complex models such as convolutional neural networks to interpret the contextual relationships within the data. RGB-D information fusion brings in depth perception, allowing robots to gauge the distance to each fruit accurately.
Multi-view information fusion tackles occlusions by combining data from multiple cameras and sensors around the robot, providing a more comprehensive view of the environment and enabling more reliable sensing. Subsequently, the review shifts focus to orchard mapping and scene comprehension on a broader scale. It points out that current mapping methods, while effective, still struggle with dynamic changes in the orchard, such as variations in fruit and lighting conditions. Improved adaptation techniques, possibly through machine learning models that learn and adjust to different environmental conditions, are suggested as a way forward. Building on local and global perception, the review investigates strategies for planning and controlling autonomous behaviors. This includes the latest advancements in devising movement paths for robot mobility as well as adaptive strategies that allow robots to react to unexpected obstacles or changes in the environment. Strategies for effective fruit picking with the eye-in-hand system involve developing more dexterous robotic hands and improved algorithms for precisely predicting the optimal picking point of each fruit. The review also identifies a crucial need for further advances in the dynamic behavior and autonomy of these technologies, emphasizing the importance of continuous learning and adaptive control systems for improving operational efficiency in diverse orchard environments. [Conclusions and Prospects] The review underscores the critical importance of coordinating the perception, movement, and picking modules to move from a basic functional prototype to a practical machine. Moreover, it emphasizes the necessity of enhancing the robustness and stability of the core algorithms governing perception, planning, and control while ensuring their seamless coordination, which emerges as a central challenge.
Additionally, the review raises unresolved questions regarding the application of picking robots and outlines future trends, including deeper integration of stereo vision and deep learning, enhanced global vision sampling, and the establishment of standardized evaluation criteria for overall operational performance. The paper can serve as a reference for the eventual development of robust, autonomous, and commercially viable picking robots.
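The RGB-D step the review describes, gauging the distance to a detected fruit, amounts to back-projecting the fruit's pixel center through a pinhole camera model using the depth value. A sketch with assumed intrinsics (the focal lengths and principal point below are placeholders, not values from any cited system):

```python
import numpy as np

FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0  # assumed pinhole intrinsics (px)

def backproject(u: float, v: float, depth_m: float) -> np.ndarray:
    """Pixel (u, v) plus depth -> (X, Y, Z) point in metres, camera frame."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

# Fruit detected at pixel (380, 200) with a depth reading of 1.2 m
p = backproject(380.0, 200.0, 1.2)
print(np.round(p, 3))
```

The resulting camera-frame point is what the planning module would transform into the robot's base frame to target the end effector.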

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Forecast and Analysis of Agricultural Products Logistics Demand Based on Informer Neural Network: Take the Central China Aera as An Example
    ZUO Min, HU Tianyu, DONG Wei, ZHANG Kexin, ZHANG Qingchuan
    Smart Agriculture    2023, 5 (1): 34-43.   DOI: 10.12133/j.smartag.SA202302001
    Abstract816)   HTML195)    PDF(pc) (1323KB)(3408)       Save

    Ensuring the stability of agricultural products logistics is key to safeguarding people's livelihood, and forecasting logistics demand is an important basis for rationally planning that stability. However, forecasting agricultural products logistics demand is complicated and affected by many factors, so many influencing factors must be considered to ensure accuracy. In this study, the logistics demand of agricultural products was taken as the research object, relevant indicators from 2017 to 2021 were selected as characteristic independent variables, and a forecasting model was constructed using the Informer neural network. Taking Henan, Hubei and Hunan provinces in Central China as examples, the agricultural products logistics demands of the three provinces were predicted. At the same time, a long short-term memory (LSTM) network and a Transformer neural network were used to forecast the same demand, and the prediction results of the three models were compared. The results showed that the average percentage test error of the Informer model constructed in this study was 3.39%, lower than the 4.43% and 4.35% of the LSTM and Transformer models. The Informer model's predicted values for the three provinces were close to the actual values. For Henan province in 2021, the predicted value was 4185.33 against an actual value of 4048.10, an error of 3.389%; for Hubei province, the predicted value was 2503.64 against an actual value of 2421.78, an error of 3.380%.
For Hunan province in 2021, the predicted value was 2933.31 against an actual value of 2836.86, an error of 3.340%. These results show that the model can accurately predict the agricultural products logistics demand of the three provinces of Central China and can provide a basis for rational planning and policy making. Finally, the model and parameters were used to predict the agricultural products logistics demand of Henan, Hubei, and Hunan provinces in 2023: the predicted values were 4217.13 for Henan, 2521.47 for Hubei, and 2974.65 for Hunan. The predicted values for 2023 are higher than those for 2021. Therefore, building on the logistics and transportation facilities of 2021, it is necessary to ensure transportation efficiency and strengthen transportation capacity to meet the growing logistics demand in Central China.
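The per-province errors quoted above follow the usual percentage error definition, |predicted - actual| / actual × 100. A quick check against the reported predicted and actual values (rounded results may differ from the quoted figures in the last decimal place):

```python
def pct_error(predicted: float, actual: float) -> float:
    """Absolute percentage error of a forecast."""
    return abs(predicted - actual) / actual * 100.0

# (predicted, actual) pairs for 2021, as reported in the abstract
provinces = {
    "Henan": (4185.33, 4048.10),
    "Hubei": (2503.64, 2421.78),
    "Hunan": (2933.31, 2836.86),
}
for name, (pred, act) in provinces.items():
    print(name, round(pct_error(pred, act), 3))
```

Averaging these per-province errors reproduces the roughly 3.39% figure given for the Informer model.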

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Cow Hoof Slippage Detecting Method Based on Enhanced DeepLabCut Model
    NIAN Yue, ZHAO Kaixuan, JI Jiangtao
    Smart Agriculture    2024, 6 (5): 153-163.   DOI: 10.12133/j.smartag.SA202406014
    Abstract809)   HTML26)    PDF(pc) (1765KB)(646)       Save

    [Objective] Hoof slipping during walking indicates a deteriorating farming environment and a decline in cows' locomotor function. Slippery ground can injure cows, causing unnecessary economic losses for farmers. To automatically recognize and detect slippery hoof postures during walking, this study focused on localizing and analyzing key body points of cows using deep learning. Motion curves of the key body points were analyzed and features extracted, and the effectiveness of the extracted features was verified with a decision tree classification algorithm, with the aim of automatically detecting hoof slipping. [Method] An improved localization method for the cows' key body points, namely the head and four hooves, was proposed based on the DeepLabCut model. Ten networks from the ResNet, MobileNet-V2, and EfficientNet series were selected to replace the backbone of DeepLabCut for model training. Considering root mean square error (RMSE), model size, FPS, and other indicators, the optimal backbone was selected as the basis for improvement. A network structure fusing the convolutional block attention module (CBAM) with ResNet-50 was then proposed: to enhance the model's generalization ability and robustness, the lightweight CBAM attention module was embedded into the first and last convolution layers of the ResNet-50 network. Side-view videos of cows with slipping hooves were processed with the improved DeepLabCut model to predict the key body points, and the resulting coordinates were used to plot the motion curves of the cows' key body points.
Based on the motion curves of the key body points, the feature parameter Feature1 for detecting hoof slipping was extracted, representing the local peak values of the derivative of the motion curves of the four hooves. The feature parameter Feature2 for predicting slipping distance was extracted, namely the minimum local peak points of the derivative curve of the hooves along with the local minimum points to their left and right. The effectiveness of Feature1 was verified using a decision tree classification model: Feature1 was extracted for each hoof, its standard deviation calculated per hoof, and the resulting set of four standard deviations per cow used as the input to the classifier. Classification performance was evaluated with four common metrics, namely accuracy, precision, recall, and F1-Score, and the accuracy of slipping-distance prediction was assessed with RMSE. [Results and Discussion] After all ten models converged, the loss values ranked from smallest to largest were those of the EfficientNet, ResNet, and MobileNet-V2 series, respectively. Among them, ResNet-50 exhibited the best localization accuracy on both the training and validation sets, with RMSE values of only 2.69 and 3.31 pixels, respectively. The MobileNet series had the fastest inference speed, reaching 48 f/s, while the inference speeds of the ResNet and EfficientNet series were comparable. Considering these factors, ResNet-50 was ultimately selected as the backbone network for the further improvement of DeepLabCut.
Compared with the original ResNet-50 network, the ResNet-50 network improved by integrating the CBAM module showed a significant enhancement in localization accuracy: accuracy increased by 3.7% on the training set and 9.7% on the validation set. The RMSE between the predicted and manually labeled key points was only 2.99 pixels, with localization of the right hind hoof, right front hoof, left hind hoof, left front hoof, and head improved by 12.1%, 44.9%, 0.04%, 48.2%, and 39.7%, respectively. To validate the advancement of the improved model, it was compared with the mainstream key point localization model YOLOv8s-pose; the RMSE was 1.06 pixels lower than that of YOLOv8s-pose, indicating that the ResNet-50 network integrated with the CBAM attention mechanism possessed superior localization accuracy. The hoof-slipping classification model was evaluated with 10-fold cross-validation, giving average accuracy, precision, recall, and F1-Score of 90.42%, 0.943, 0.949, and 0.941, respectively. The slipping distance calculated with the feature parameter Feature2 differed from the manually calibrated distance by 1.363 pixels. [Conclusion] The ResNet-50 network improved by integrating the CBAM module showed high accuracy in localizing the cows' key body points. The hoof-slipping judgment model and the slipping-distance prediction model, based on the extracted feature parameters, both exhibited small errors compared with manual detection results.
This indicated that the proposed enhanced DeepLabCut model achieved good accuracy and could provide technical support for the automatic detection of slippery hooves in cows.
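The feature-extraction and classification pipeline described above can be sketched as follows. The peak detector, the synthetic motion curves, and all magnitudes are illustrative assumptions, not the paper's implementation; on real data the hoof trajectories would come from the key-point tracker rather than synthetic sinusoids.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def local_peaks(y):
    """Indices of strict local maxima of a 1-D signal (a simple stand-in
    for the paper's peak detection on the derivative curves)."""
    y = np.asarray(y, dtype=float)
    return np.where((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:]))[0] + 1

def feature1_std(hoof_curve):
    """Std of the local peak values of the derivative of one hoof's motion
    curve -- a hedged reading of the paper's Feature1 summary statistic."""
    deriv = np.gradient(np.asarray(hoof_curve, dtype=float))
    peaks = local_peaks(deriv)
    return float(np.std(deriv[peaks])) if peaks.size else 0.0

def cow_feature_vector(four_hoof_curves):
    """One 4-value vector per cow: Feature1 std for each of the four hooves."""
    return [feature1_std(c) for c in four_hoof_curves]

# Synthetic demo: smooth gait vs. curves containing abrupt slips
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)

def make_cow(slips):
    curves = [np.sin(t + p) + 0.01 * rng.standard_normal(t.size) for p in range(4)]
    if slips:
        for c in curves:
            c[80:85] += 3.0  # sudden displacement mimicking a slip
    return curves

X = [cow_feature_vector(make_cow(s)) for s in [False] * 10 + [True] * 10]
y = [0] * 10 + [1] * 10
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```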

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Comparative Study of the Regulation Effects of Artificial Intelligence-Assisted Planting Strategies on Strawberry Production in Greenhouse
    GENG Wenxuan, ZHAO Junye, RUAN Jiwei, HOU Yuehui
    Smart Agriculture    2022, 4 (2): 183-193.   DOI: 10.12133/j.smartag.SA202203006
    Abstract803)   HTML123)    PDF(pc) (869KB)(1373)       Save

    Artificial intelligence (AI) assisted planting can improve the precise management of protected horticultural crops while also alleviating the increasingly prevalent problem of labor shortage. As a typical representative of labor-intensive industries, the strawberry industry has a growing need for intelligent technology. To assess the regulatory effects of various AI strategies and key technologies on strawberry production in greenhouses, and to provide valuable references for the innovation and industrial application of AI in horticultural crops, four AI planting strategies were evaluated. Four 96 m2 modern greenhouses were used for planting strawberry plants. Each greenhouse was equipped with standard sensors and actuators, and growers used artificial intelligence algorithms to remotely control the greenhouse climate and crop growth. The regulatory effects of the four AI planting strategies on strawberry growth, fruit yield and quality were compared and analyzed, with human-operated cultivation taken as a reference to analyze their characteristics, existing problems and shortcomings. Each AI planting strategy simulated and forecast the greenhouse environment and crop growth by constructing models. AI-1 implemented greenhouse management decisions primarily through the knowledge graph method, whereas AI-2 transferred the intelligent planting model of Dutch greenhouse tomato planting to strawberry planting. AI-3 and AI-4 created growth and development models for strawberries based on World Food Studies (WOFOST) and the Product of Thermal Effectiveness and Photosynthesis Active Radiation (TEP), respectively. The results showed that all AI-supported strategies outperformed the human-operated greenhouse that served as the reference. In comparison to the human-operated cultivation group, the average yield and output value of the AI planting strategy group increased 1.66 and 1.82 times, respectively, while the highest return on investment increased 1.27 times. 
AI can effectively improve the accuracy of strawberry planting management and regulation, reduce water, fertilizer, and labor inputs, and obtain higher returns under greenhouse production conditions equipped with relatively complete intelligent equipment and control components, all with the goal of high yield and quality. Key technologies such as knowledge graphs, deep learning, visual recognition, crop models, and crop growth simulators all played a unique role in strawberry AI planting. The average yield and return on investment (ROI) of the AI groups were greater than those of the human-operated cultivation group. More specifically, the regulation of AI-1 on crop development and production was relatively stable, integrating expert experience, crop data, and environmental data with knowledge graphs to create a standardized strawberry planting knowledge structure as well as an intelligent planting decision-making approach. In this study, AI-1 achieved the highest yield, the heaviest average fruit weight, and the highest ROI. This group's AI-assisted strategy optimized the regulatory effect on growth, development, and yield formation of strawberry crops in consideration of high yield and quality. However, issues remain to be resolved, such as the difficulty of simulating the disturbance caused by manual management and of collecting crop ontology data.
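As a concrete illustration of the TEP index that AI-4's growth model builds on, the sketch below accumulates the product of a relative thermal effectiveness and daily PAR. The piecewise-linear temperature response and all cardinal temperatures are illustrative assumptions, not values from the study.

```python
def thermal_effectiveness(t_mean, t_base=5.0, t_opt=22.0, t_max=35.0):
    """Piecewise-linear relative thermal effectiveness in [0, 1].
    The cardinal temperatures here are made-up illustrative values."""
    if t_mean <= t_base or t_mean >= t_max:
        return 0.0
    if t_mean <= t_opt:
        return (t_mean - t_base) / (t_opt - t_base)
    return (t_max - t_mean) / (t_max - t_opt)

def cumulative_tep(daily_temp, daily_par):
    """TEP-style index: sum over days of thermal effectiveness x daily PAR."""
    return sum(thermal_effectiveness(t) * p for t, p in zip(daily_temp, daily_par))

temps = [12, 18, 22, 26, 30]        # daily mean temperature, deg C
par = [6.0, 7.5, 8.0, 8.2, 7.0]     # daily PAR, MJ/m2 (made-up)
print(round(cumulative_tep(temps, par), 2))
```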

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Research Advances and Development Trend of Mountainous Tractor Leveling and Anti-Rollover System
    MU Xiaodong, YANG Fuzeng, DUAN Luojia, LIU Zhijie, SONG Zhuoying, LI Zonglin, GUAN Shouqing
    Smart Agriculture    2024, 6 (3): 1-16.   DOI: 10.12133/j.smartag.SA202312015
    Abstract800)   HTML102)    PDF(pc) (2448KB)(3657)       Save

    [Significance] The mechanization, automation and intelligentization of agricultural equipment are key factors to improve operation efficiency, free up labor and promote the sustainable development of agriculture. They are also hot spots of future research and development in the agricultural machinery industry. In China, hilly and mountainous regions serve as vital production bases for agricultural products, accounting for about 70% of the country's land area. These regions face environmental challenges such as steep slopes, narrow roads, small plots, complex terrain and landforms, and harsh working environments. Moreover, there is a lack of reliable agricultural machinery support across various production stages, along with a shortage of theoretical frameworks to guide the research and development of agricultural machinery tailored to hilly and mountainous locales. [Progress] This article reviews the research advances of tractor leveling and anti-overturning systems in hilly and mountainous areas, including tractor body, cab and seat leveling technology, slope-adaptive technology for tractor rear suspension and implement leveling, and progress on tractor anti-overturning protection devices and warning technology. Vehicle body leveling mechanisms can be roughly divided into five types according to their working modes: parallel four-bar, adjustable center of gravity, hydraulic differential height, folding and twisting waist, and omnidirectional leveling. These mechanisms aim to address the issue of vehicle tilting and easy overturning when traversing or working on sloping or rugged roads. By keeping the vehicle body posture horizontal or adjusting the center of gravity within a stable range, the overall driving safety of the vehicle can be improved and operational accuracy ensured. 
Leveling the driver's cab and seat can mitigate the lateral bumps experienced by the driver during rough or sloping operations, reducing driver fatigue and minimizing strain on the lumbar and cervical spine, thereby enhancing driving comfort. Slope-adaptive technology for tractor rear suspension and implement leveling can ensure that the implement maintains consistent horizontal contact with the ground in hilly and mountainous areas, avoiding changes in the posture of the suspended implement with the swing of the body or the driving path, which would otherwise degrade the operation effect. Tractor rollover protection devices and warning technology have garnered significant attention in recent years. Prioritizing driver safety, a rollover warning system can alert the driver in advance of a dangerous tractor state, automatically adjust the vehicle before rollover, or automatically deploy the rollover protection device when rollover is imminent, and send timely accident reports to emergency contacts, thereby ensuring the safety of the driver to the greatest extent possible. [Conclusions and Prospects] Future development directions of leveling, anti-overturning early warning, and unmanned and automatic technologies for hill and mountain tractors are prospected: research on mountain tractor leveling systems with optimized structure, high sensitivity and good stability; study of terrain-following (profiling) systems for agricultural machinery with good slope adaptability; research on anti-rollover early warning technology with environment perception and automatic intervention; research on precision navigation, intelligent monitoring, and remote scheduling and management technologies for agricultural machinery; and theoretical study of longitudinal stability on sloping land. This review could provide a reference for the research and development of highly reliable and safe mountain tractors suited to the complex working environments of hilly and mountainous areas.
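The body-leveling idea of keeping the vehicle posture horizontal can be illustrated with a toy closed-loop controller. The PID gains, the simplified actuator response, and the time step below are all made-up assumptions for the sketch, not parameters of any cited leveling system.

```python
class RollLevelingPID:
    """Minimal PID loop driving measured body roll toward horizontal (0 rad)."""
    def __init__(self, kp=2.0, ki=0.1, kd=0.05, dt=0.02):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, roll):
        err = 0.0 - roll                       # setpoint: level body
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: the leveling actuator changes roll at the commanded rate
pid = RollLevelingPID()
roll = 0.15                                    # rad, initial tilt on a side slope
for _ in range(5000):                          # 100 s of simulated time
    cmd = pid.step(roll)
    roll += cmd * pid.dt                       # simplified actuator response
print(round(roll, 4))
```

A real system would close the loop around an inclinometer or IMU and a hydraulic actuator with its own dynamics; the point here is only the structure of the control loop.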

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Status Quo of Waterfowl Intelligent Farming Research Review and Development Trend Analysis
    LIU Youfu, XIAO Deqin, ZHOU Jiaxin, BIAN Zhiyi, ZHAO Shengqiu, HUANG Yigui, WANG Wence
    Smart Agriculture    2023, 5 (1): 99-110.   DOI: 10.12133/j.smartag.SA202205007
    Abstract784)   HTML66)    PDF(pc) (2057KB)(3676)       Save

    Waterfowl farming in China is developing rapidly in the direction of large scale, standardization and intelligence. The research and application of intelligent farming equipment and information technology is the key to promoting the healthy and sustainable development of waterfowl farming, which is important for improving the output efficiency of waterfowl farming, reducing reliance on labor in the production process, fitting the development concept of green and environmental protection, and achieving high-quality transformational development. In this paper, the latest research and inventions in intelligent waterfowl equipment were introduced, covering intelligent control technology for the waterfowl shed environment and equipment for intelligent feeding, drinking water, dosing and disinfection, and automatic manure treatment. At present, compared to pigs, chickens and cattle, intelligent equipment for waterfowl is still relatively backward. Most waterfowl houses are directly equipped with chicken equipment, lacking adaptations for waterfowl. Moreover, the linkage between pieces of equipment is poor and not integrated with the breeding mode and shed structure of waterfowl, resulting in low utilization. Therefore, there is a need to develop and improve equipment for the physiological growth characteristics of waterfowl from the perspective of their breeding welfare. In addition, the latest research advances in the application of real-time production information collection and intelligent management technologies were presented; the information collection technologies covered included visual imaging, sound capture systems, and wearable sensors. Since research on ducks and geese is scarce, research from the broader poultry field, which can provide a reference for waterfowl, was also summarized. Research on information perception and processing for waterfowl is currently in its initial stage. 
Information collection techniques need to be further tailored to the physiological growth characteristics of waterfowl, and better deep learning models need to be established. Waterfowl management platforms were also described, taking the intelligent management platform developed by South China Agricultural University as an example. Finally, the state of intelligent applications in the waterfowl industry was pointed out, and future trends of intelligent farming, along with recommendations for developing mechanized and intelligent waterfowl equipment in China, were analyzed. Current waterfowl farming urgently needs intelligent equipment reform and industry upgrading for support. In the future, intelligent equipment for waterfowl, information perception methods and control platforms urgently need to be developed. When upgrading the industry, it is necessary to formulate a development strategy that fits the current waterfowl farming model in China.

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Fast Extracting Method for Strawberry Leaf Age and Canopy Width Based on Instance Segmentation Technology
    FAN Jiangchuan, WANG Yuanqiao, GOU Wenbo, CAI Shuangze, GUO Xinyu, ZHAO Chunjiang
    Smart Agriculture    2024, 6 (2): 95-106.   DOI: 10.12133/j.smartag.SA202310014
    Abstract784)   HTML44)    PDF(pc) (1903KB)(1396)       Save

    [Objective] There is a growing demand among plant cultivators and breeders for efficient methods to acquire plant phenotypic traits at high throughput, facilitating the establishment of mappings from phenotypes to genotypes. By integrating mobile phenotyping platforms with improved instance segmentation techniques, researchers have achieved a significant advancement in the automation and accuracy of phenotypic data extraction. Addressing the need for rapid extraction of leaf age and canopy width phenotypes in strawberry plants cultivated in controlled environments, this study introduces a novel high-throughput phenotyping extraction approach leveraging a mobile phenotyping platform and instance segmentation technology. [Methods] Data acquisition was conducted using a compact mobile phenotyping platform equipped with an array of sensors, including an RGB sensor and edge control computers, capable of capturing overhead images of potted strawberry plants in greenhouses. Targeted adjustments to the network structure were made to develop an enhanced convolutional neural network (Mask R-CNN) model for processing strawberry plant image data and rapidly extracting plant phenotypic information. The model first employed a split-attention networks (ResNeSt) backbone with a group attention module, replacing the original backbone to improve the precision and efficiency of image feature extraction. During training, the model adopted the Mosaic method, suitable for instance segmentation data augmentation, to expand the dataset of strawberry images. Additionally, it replaced the original cross-entropy classification loss function with a binary cross-entropy loss function to achieve better detection accuracy of plants and leaves. On this basis, the improved Mask R-CNN post-processed the segmentation results: it utilized the positional relationship between leaf and plant masks to count the number of leaves. 
Additionally, it employed segmentation masks and image calibration against ground-truth values to calculate the canopy width of the plant. [Results and Discussions] This research conducted a thorough evaluation and comparison of the performance of the improved Mask R-CNN model, underpinned by the ResNeSt-101 backbone network. The model achieved a commendable mask accuracy of 80.1% and a detection box accuracy of 89.6%. It efficiently estimated the age of strawberry leaves, achieving a high plant detection rate of 99.3% and a leaf count accuracy of 98.0%. This marked a significant improvement over the original Mask R-CNN model and met the precise needs of phenotypic data extraction. The method displayed notable accuracy in measuring the canopy widths of strawberry plants, with errors falling below 5% in about 98.1% of cases, highlighting its effectiveness in phenotypic dimension evaluation. Moreover, the model operated at 12.9 frames per second (FPS) on edge devices, effectively balancing accuracy and operational efficiency. This speed proved adequate for real-time applications, enabling rapid phenotypic data extraction even on devices with limited computational capabilities. [Conclusions] This study successfully deployed a mobile phenotyping platform combined with instance segmentation techniques to analyze image data and extract various phenotypic indicators of strawberry plants. Notably, the method demonstrates remarkable robustness. The seamless fusion of mobile platforms and advanced image processing methods not only enhances efficiency but also signifies a shift towards data-driven decision-making in agriculture.
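The mask-based post-processing (counting leaves via leaf-plant mask containment and converting mask extent to canopy width) can be sketched on toy boolean masks. The overlap threshold and the pixel-to-centimeter calibration factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

def count_leaves(plant_mask, leaf_masks, min_overlap=0.5):
    """Count leaf masks lying mostly inside the plant mask -- a hedged
    reading of the paper's positional-relationship post-processing."""
    n = 0
    for leaf in leaf_masks:
        area = leaf.sum()
        if area and (leaf & plant_mask).sum() / area >= min_overlap:
            n += 1
    return n

def canopy_width_cm(plant_mask, cm_per_pixel):
    """Canopy width: widest horizontal extent of the plant mask, calibrated
    to physical units with a known image scale."""
    cols = np.where(plant_mask.any(axis=0))[0]
    if cols.size == 0:
        return 0.0
    return float((cols.max() - cols.min() + 1) * cm_per_pixel)

plant = np.zeros((10, 10), dtype=bool); plant[2:8, 1:9] = True
leaf_in = np.zeros_like(plant); leaf_in[3:5, 2:4] = True
leaf_out = np.zeros_like(plant); leaf_out[0:2, 0:2] = True  # neighboring plant's leaf
print(count_leaves(plant, [leaf_in, leaf_out]))   # → 1
print(canopy_width_cm(plant, cm_per_pixel=0.5))   # → 4.0
```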

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Diagnosis of Grapevine Leafroll Disease Severity Infection via UAV Remote Sensing and Deep Learning
    LIU Yixue, SONG Yuyang, CUI Ping, FANG Yulin, SU Baofeng
    Smart Agriculture    2023, 5 (3): 49-61.   DOI: 10.12133/j.smartag.SA202308013
    Abstract780)   HTML119)    PDF(pc) (3044KB)(1026)       Save

    [Objective] Wine grapes are severely affected by leafroll disease, which impairs their growth and reduces the color, taste, and flavor quality of the wine. Timely and accurate diagnosis of leafroll disease severity is crucial for preventing and controlling the disease and improving wine grape fruit quality and wine-making potential. Unmanned aerial vehicle (UAV) remote sensing technology provides high-resolution images of wine grape vineyards, which can capture the features of grapevine canopies with different levels of leafroll disease severity. Deep learning networks extract complex and high-level features from UAV remote sensing images and perform fine-grained classification of leafroll disease infection severity. However, the diagnosis of leafroll disease severity is challenging due to the imbalanced data distribution of different infection levels and categories in UAV remote sensing images. [Method] A novel method for diagnosing leafroll disease severity at a canopy scale was developed using UAV remote sensing technology and deep learning. The main challenge of this task was the imbalanced data distribution of different infection levels and categories in UAV remote sensing images. To address this challenge, a method combining deep learning fine-grained classification and generative adversarial networks (GANs) was proposed. In the first stage, GANformer, a Transformer-based GAN model, was used to generate diverse and realistic virtual canopy images of grapevines with different levels of leafroll disease severity. To further analyze the image generation effect of GANformer, t-distributed stochastic neighbor embedding (t-SNE) was used to visualize the learned features of real and simulated images. In the second stage, CA-Swin Transformer, an improved image classification model based on the Swin Transformer and a channel attention mechanism, was used to classify the patch images into different classes of leafroll disease infection severity. 
CA-Swin Transformer used a self-attention mechanism to capture the long-range dependencies of image patches and enhanced the feature representation of the Swin Transformer model by adding a channel attention mechanism after each Transformer layer. The channel attention (CA) mechanism consisted of two fully connected layers and an activation function, which could extract correlations between different channels and amplify the informative features. The ArcFace loss function and an instance normalization layer were also used to enhance the fine-grained feature extraction and downsampling ability for grapevine canopy images. UAV images of wine grape vineyards were collected and processed into orthomosaic images, which were labeled into three categories, healthy, moderate infection, and severe infection, using the in-field survey data. A sliding window method was used to extract patch images and labels from the orthomosaic images for training and testing. The performance of the improved method was compared with the baseline model using different loss functions and normalization methods. The distribution of leafroll disease severity in vineyards was mapped using the trained CA-Swin Transformer model. [Results and Discussions] The experimental results showed that GANformer could generate high-quality virtual canopy images of grapevines with an FID score of 93.20. The images generated by GANformer were visually very similar to real images and covered different levels of leafroll disease severity. The t-SNE visualization showed that the features of real and simulated images were well clustered and separated in two-dimensional space, indicating that GANformer learned meaningful and diverse features, which enriched the image dataset. Compared to CNN-based deep learning models, Transformer-based deep learning models had more advantages in diagnosing leafroll disease infection. 
Swin Transformer achieved an optimal accuracy of 83.97% on the enhanced dataset, higher than other models such as GoogLeNet, MobileNetV2, NasNet Mobile, ResNet18, ResNet50, CVT, and T2TViT. Replacing the cross-entropy loss function with the ArcFace loss function improved the classification accuracy by 1.50%, and applying instance normalization instead of layer normalization further improved the accuracy by 0.30%. Moreover, CA-Swin Transformer, with the proposed channel attention mechanism, enhanced the feature representation of the Swin Transformer model and achieved the highest classification accuracy on the test set, 86.65%, which was 6.54% higher than the Swin Transformer on the original test dataset. The distribution map of leafroll disease severity in vineyards revealed a certain correlation between leafroll disease severity and grape rows: areas of Cabernet Sauvignon with more severe leafroll disease were more prone to missing or weak plants. [Conclusions] A novel method for diagnosing grapevine leafroll disease severity at a canopy scale using UAV remote sensing technology and deep learning was proposed. This method can generate diverse and realistic virtual canopy images of grapevines with different levels of leafroll disease severity using GANformer, and classify them into different severity classes using CA-Swin Transformer. It can also map the distribution of leafroll disease severity in vineyards using a sliding window method, providing a new approach for crop disease monitoring based on UAV remote sensing technology.
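The channel attention described above (two fully connected layers plus an activation over channel-wise averages) follows the familiar squeeze-and-excitation pattern. The NumPy sketch below shows only the data flow; the random weights, feature sizes, and reduction ratio are made-up illustrative choices, not the paper's trained module.

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention: global average pool per channel ->
    FC + ReLU (reduction) -> FC + sigmoid -> rescale each channel.
    Shapes: feat (C, H, W), w1 (C//r, C), w2 (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))                 # (C,) channel descriptors
    hidden = np.maximum(w1 @ squeeze, 0.0)           # FC + ReLU
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # FC + sigmoid, in (0, 1)
    return feat * weights[:, None, None]             # amplify informative channels

rng = np.random.default_rng(42)
feat = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8)) * 0.1               # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = channel_attention(feat, w1, w2)
print(out.shape)  # → (8, 4, 4)
```

Because the sigmoid keeps every channel weight in (0, 1), the module can only attenuate or preserve channels, never amplify their magnitude, which is what makes it a soft per-channel gating.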

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Multiscale Feature Fusion Yak Face Recognition Algorithm Based on Transfer Learning
    CHEN Zhanqi, ZHANG Yu'an, WANG Wenzhi, LI Dan, HE Jie, SONG Rende
    Smart Agriculture    2022, 4 (2): 77-85.   DOI: 10.12133/j.smartag.SA202201001
    Abstract778)   HTML80)    PDF(pc) (1841KB)(2436)       Save

    Identifying individual yaks is indispensable for documentation, behavior monitoring, precise feeding, disease prevention and control, food traceability, and individualized breeding. Aiming at the application requirements of individual animal identification technology in intelligent, informatized animal breeding platforms, a yak face recognition algorithm based on transfer learning and multiscale feature fusion, transfer learning-multiscale feature fusion-VGG (T-M-VGG), was proposed. The sample dataset of yak facial images was captured with a GoPro HERO8 BLACK camera. Part of the dataset was then augmented by rotating images, adjusting brightness, and adding noise to improve the robustness and accuracy of the model. T-M-VGG, a convolutional neural network based on a pre-trained Visual Geometry Group (VGG) network and transfer learning, took the normalized dataset samples as input. The feature maps of Block3, Block4 and Block5 were denoted F3, F4 and F5, respectively. F3 and F5 were passed through a structure composed of three parallel dilated convolutions with dilation rates of one, two and three, respectively, to enlarge the receptive field. The multiscale feature maps were then fused by an improved feature pyramid with a stacked-hourglass structure. Finally, the fully connected layer was replaced by global average pooling for classification, greatly reducing the number of parameters. To verify the effectiveness of the proposed model, a comparative experiment was conducted. The experimental results showed that the recognition accuracy on a dataset of 38,800 images of 194 yaks reached 96.01%, with a storage size of 70.75 MB. Twelve images representing different yak categories were chosen randomly from the dataset for an occlusion test, in which the original images were masked with occlusions of different shapes. 
The accuracy of identifying individual yaks was 83.33% in the occlusion test, which showed that the model had mainly learned facial features. The proposed algorithm could provide a reference for research on yak face recognition and lay a foundation for establishing smart management platforms.
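The parallel dilated-convolution branches with rates one, two and three can be illustrated in one dimension. The 1-D kernels, summation fusion, and zero padding below are simplifying assumptions standing in for the paper's 2-D structure; the point is how dilation enlarges the receptive field (effective kernel size = (k-1)*rate + 1) without adding parameters.

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Same'-padded 1-D dilated convolution: kernel taps are spaced
    `rate` samples apart, so a 3-tap kernel spans (3-1)*rate + 1 samples."""
    k = len(kernel)
    span = (k - 1) * rate
    pad = span // 2
    xp = np.pad(x, (pad, span - pad))
    return np.array([sum(kernel[j] * xp[i + j * rate] for j in range(k))
                     for i in range(len(x))])

def parallel_dilated_block(x, kernel, rates=(1, 2, 3)):
    """Fuse the parallel branches by summation, mimicking the multi-rate
    structure applied to F3 and F5 in T-M-VGG."""
    return sum(dilated_conv1d(x, kernel, r) for r in rates)

x = np.arange(8, dtype=float)
avg = np.array([1 / 3] * 3)          # 3-tap averaging kernel
result = parallel_dilated_block(x, avg)
print(result.shape)  # → (8,)
```

On this linear ramp, each branch centered at index 4 averages symmetric neighbors and returns 4.0, so the fused output there is 12.0 regardless of the dilation rate.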

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Extraction of Potato Plant Phenotypic Parameters Based on Multi-Source Data
    HU Songtao, ZHAI Ruifang, WANG Yinghua, LIU Zhi, ZHU Jianzhong, REN He, YANG Wanneng, SONG Peng
    Smart Agriculture    2023, 5 (1): 132-145.   DOI: 10.12133/j.smartag.SA202302009
    Abstract772)   HTML124)    PDF(pc) (2649KB)(1993)       Save

    Crops have diverse structures and complex growth environments. RGB image data can accurately reflect the texture and color features of plants, while 3D data contains information about crop volume. Combining RGB images with 3D point cloud data enables the extraction of two-dimensional and three-dimensional phenotypic parameters of crops, which is of great significance for research on phenomics methods. In this study, potato plants were chosen as the research subject, and RGB cameras and laser scanners were used to collect 50 potato RGB images and 3D laser point clouds. The segmentation accuracies of four deep learning semantic segmentation methods, OCRNet, UpNet, PaNet, and DeepLab v3+, were compared and analyzed on the RGB images. OCRNet, which demonstrated higher accuracy, was used to perform semantic segmentation on top-view RGB images of potatoes. A mean shift clustering algorithm was optimized for laser point cloud data processing, and single-plant segmentation of the laser point cloud data was completed. Stem and leaf segmentation of single-plant potato point cloud data was accurately performed using Euclidean clustering and K-Means clustering algorithms. In addition, a strategy was proposed to establish a one-to-one correspondence between RGB images and point clouds of single potato plants using pot numbering. Eight 2D phenotypic parameters and ten 3D phenotypic parameters, including maximum width, perimeter, area, plant height, volume, leaf length, and leaf width, were extracted from the RGB images and laser point clouds, respectively. Finally, the accuracy of three representative and easily measurable phenotypic parameters, leaf number, plant height, and maximum width, was evaluated. The mean absolute percentage errors (MAPE) were 8.6%, 8.3% and 6.0%, respectively; the root mean square errors (RMSE) were 1.371 leaves, 3.2 cm and 1.86 cm, respectively; and the coefficients of determination (R2) were 0.93, 0.95 and 0.91, respectively. 
The research results indicated that the extracted phenotype parameters can accurately and efficiently reflect the growth status of potatoes. Combining the RGB image data of potatoes with three-dimensional laser point cloud data can fully exploit the advantages of the rich texture and color characteristics of RGB images and the volumetric information provided by three-dimensional point clouds, achieving non-destructive, efficient, and high-precision extraction of two-dimensional and three-dimensional phenotype parameters of potato plants. The achievements of this study could not only provide important technical support for the cultivation and breeding of potatoes but also provide strong support for phenotype-based research.
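The three evaluation metrics used above have standard definitions, sketched here. The sample measurements are made-up numbers for illustration, not data from the study.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def rmse(y_true, y_pred):
    """Root mean square error, in the units of the measurement."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)

measured = [30.0, 35.0, 40.0, 45.0]   # e.g. manually measured plant height, cm
extracted = [31.0, 34.0, 41.5, 44.0]  # values extracted from point clouds
print(round(mape(measured, extracted), 2),
      round(rmse(measured, extracted), 3),
      round(r2(measured, extracted), 3))
```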

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Method for Calculating Semantic Similarity of Short Agricultural Texts Based on Transfer Learning
    JIN Ning, GUO Yufeng, HAN Xiaodong, MIAO Yisheng, WU Huarui
    Smart Agriculture    2025, 7 (1): 33-43.   DOI: 10.12133/j.smartag.SA202410026
    Abstract751)   HTML10)    PDF(pc) (1239KB)(141)       Save

    [Objective] Intelligent agricultural knowledge services have emerged as a hot research domain and serve as significant support for the construction of smart agriculture. The platform "China Agricultural Technology Extension" provides users with efficient and convenient agricultural information consultation services via mobile terminals and has accumulated a vast amount of Q&A data. These data are characterized by a huge volume of information, rapid update and iteration, and a high degree of redundancy, and the platform consequently encounters issues such as frequent repetitive questions, low timeliness of problem responses, and inaccurate information retrieval. A high-quality text semantic similarity calculation approach is urgently required to confront these challenges and effectively enhance the information service efficiency and intelligence level of the platform. In view of the incomplete feature extraction of existing text semantic similarity calculation models and the lack of annotated short agro-text datasets, a semantic similarity calculation model for short agro-texts, CWPT-SBERT, based on transfer learning and the BERT pre-training model, was proposed. [Methods] CWPT-SBERT was based on a Siamese architecture with identical left and right branches and shared parameters, which has the advantages of low structural complexity and high training efficiency. This network architecture effectively reduced the consumption of computational resources by sharing parameters and ensured that input texts were compared in the same feature space. CWPT-SBERT consisted of four main parts: a semantic enhancement layer, an embedding layer, a pooling layer, and a similarity measurement layer. 
The CWPT method, based on word segmentation units, was proposed in the semantic enhancement layer to further divide Chinese characters into finer-grained sub-units, maximizing the semantic features in short Chinese texts and effectively enhancing the model's understanding of complex Chinese vocabulary and character structures. In the embedding layer, a transfer learning strategy was used to extract features from short agro-texts based on SBERT: the model captured the semantic features of Chinese text in the general domain and then, after fine-tuning, generated a more suitable semantic feature vector representation. Training on large-scale annotated general-domain datasets via transfer learning addressed the problems of limited annotated short agro-text datasets and high semantic sparsity. The pooling layer used an average pooling strategy to map the high-dimensional semantic vectors of short Chinese texts to a low-dimensional vector space. The similarity measurement layer used cosine similarity to measure the similarity between the semantic feature vector representations of the two output short texts, and the computed similarity was fed into the loss function to guide model training, optimize model parameters, and improve the accuracy of similarity calculation. [Results and Discussions] For the task of calculating semantic similarity of short agricultural texts, on a dataset containing 19 968 pairs of short agro-texts, the CWPT-SBERT model achieved an accuracy of 97.18%, a precision of 96.93%, a recall of 97.14%, and an F1-Score of 97.04%, higher than those of 12 models such as TextCNN_Attention, MaLSTM and SBERT. 
By analyzing the Pearson and Spearman coefficients of CWPT-SBERT, SBERT, SALBERT and SRoBERTa trained on short agro-text datasets, it could be observed that the initial training value of the CWPT-SBERT model was significantly higher than that of the comparison models and was close to the highest value of the comparison models. Moreover, it exhibited a smooth growth trend during the training process, indicating that CWPT-SBERT had strong correlation, robustness, and generalization ability from the initial state. During the training process, it could not only learn the features in the training data but also effectively apply these features to new domain data. Additionally, for ALBERT, RoBERTa and BERT models, fine-tuning training was conducted on short agro-text datasets, and optimization was performed by utilizing the morphological structure features to enrich text semantic feature expression. Through ablation experiments, it was evident that both optimization strategies could effectively enhance the performance of the models. By analyzing the attention weight heatmap of Chinese character morphological structure, the importance of Chinese character radicals in representing Chinese character attributes was highlighted, enhancing the semantic representation of Chinese characters in vector space. There was also complex correlation within the morphological structure of Chinese characters. [Conclusions] CWPT-SBERT uses transfer learning methods to solve the problem of limited short agro-text annotation datasets and high semantic sparsity. By leveraging the Chinese-oriented word segmentation method CWPT to break down Chinese characters, the semantic representation of word vectors is enhanced, and the semantic feature expression of short texts is enriched. CWPT-SBERT model has high accuracy of semantic similarity on small-scale short agro-text and obvious performance advantages, which provides an effective technical reference for semantic intelligence matching.
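The pooling and similarity-measurement layers reduce to mean pooling over non-padding tokens followed by cosine similarity. The sketch below uses tiny made-up 4-dimensional "encoder outputs" in place of the fine-tuned SBERT encoder; only the pooling and scoring logic is real.

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings over non-padding positions (SBERT-style
    average pooling). token_embeddings: (T, D); attention_mask: (T,)."""
    mask = np.asarray(attention_mask, float)[:, None]
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

def cosine_similarity(u, v):
    """Cosine of the angle between two sentence vectors, in [-1, 1]."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy encoder outputs for two short texts (a real CWPT-SBERT pipeline would
# produce these with the fine-tuned BERT encoder; the values are made up)
emb_a = np.array([[1.0, 0.0, 1.0, 0.0],
                  [0.8, 0.2, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 0.0]])   # last row is padding
emb_b = np.array([[0.9, 0.1, 1.1, 0.0],
                  [1.0, 0.0, 0.9, 0.1]])
u = mean_pool(emb_a, [1, 1, 0])
v = mean_pool(emb_b, [1, 1])
print(cosine_similarity(u, v) > 0.9)   # similar texts score near 1
```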

    Research Advances and Prospects on Rapid Acquisition Technology of Farmland Soil Physical and Chemical Parameters
    QI Jiangtao, CHENG Panting, GAO Fangfang, GUO Li, ZHANG Ruirui
    Smart Agriculture    2024, 6 (3): 17-33.   DOI: 10.12133/j.smartag.SA202404003
    Abstract749)   HTML114)    PDF(pc) (1524KB)(5216)       Save

    [Significance] Soil stands as the fundamental pillar of agricultural production, with its quality intrinsically linked to the efficiency and sustainability of farming practices. Historically, intensive cultivation and soil erosion have led to marked deterioration of some arable lands, characterized by a sharp decrease in soil organic matter, diminished fertility, and a decline in soil's structural integrity and ecological functions. Within the strategic framework of safeguarding national food security and advancing smart and precision agriculture, the march towards agricultural modernization continues apace, intensifying the imperative for meticulous soil quality management. Consequently, there is an urgent need for the rapid acquisition of soil physical and chemical parameters. Interdisciplinary scholars have delved into soil monitoring research, achieving notable advancements that promise to revolutionize the way we understand and manage soil resources. [Progress] Utilizing the Web of Science platform, a comprehensive literature search was conducted on the topic of "soil", refined with supplementary keywords such as "electrochemistry", "spectroscopy", "electromagnetic", "ground-penetrating radar", and "satellite". The resulting literature was screened, synthesized, and imported into the CiteSpace visualization tool, yielding a keyword emergence map that delineates the trajectory of research in soil physical and chemical parameter detection technology. Analysis of this map reveals a paradigm shift in the acquisition of soil physical and chemical parameters, transitioning from conventional indoor chemical and spectrometry analyses to outdoor, real-time detection methods. Notably, soil sensors integrated into drones and satellites have garnered considerable interest, and emerging monitoring technologies, including biosensing and terahertz spectroscopy, have made their mark in recent years. 
Drawing from this analysis, the prevailing technologies for acquiring soil physical and chemical parameter information in agricultural fields have been categorized and summarized. These include: 1. Rapid Laboratory Testing Techniques: primarily hinged on electrochemical and spectrometry analysis, these methods offer the dual benefits of time and resource efficiency alongside high precision. 2. Rapid Near-Ground Sensing Techniques: leveraging electromagnetic induction, ground-penetrating radar, and various spectral sensors (multispectral, hyperspectral, and thermal infrared), these techniques are characterized by their high detection accuracy and swift operation. 3. Satellite Remote Sensing Techniques: employing direct inversion, indirect inversion, and combined analysis methods, these approaches are prized for their efficiency and extensive coverage. 4. Innovative Rapid Acquisition Technologies: stemming from interdisciplinary research, these include biosensing, environmental magnetism, terahertz spectroscopy, and gamma spectroscopy, each offering novel avenues for soil parameter detection. An in-depth examination and synthesis of the principles, applications, merits, and limitations of each technology is provided, along with a forward-looking perspective on the future trajectory of soil physical and chemical parameter acquisition technology, taking into account current research trends and hotspots. [Conclusions and Prospects] Current advancements in technology for rapidly acquiring soil physical and chemical parameters in agricultural fields have been commendable, yet certain challenges persist. The development of near-ground monitoring sensors is constrained by cost, and their reliability, adaptability, and specialization require enhancement to effectively contend with the intricate and varied conditions of farmland environments. 
Additionally, remote sensing inversion techniques face limitations in data acquisition, processing, and application. To further develop soil physical and chemical parameter acquisition technology and foster the evolution of smart agriculture, future research could beneficially delve into the following four areas: designing portable, intelligent, and cost-effective near-ground soil information acquisition systems and equipment to facilitate rapid on-site detection; enhancing the performance of low-altitude soil information acquisition platforms and refining the methods for data interpretation to ensure more accurate insights; integrating multifactorial considerations to construct robust satellite remote sensing inversion models, leveraging diverse and open cloud computing platforms for in-depth data analysis and mining; and engaging in thorough research on the fusion of multi-source data for acquiring soil physical and chemical parameter information, developing soil information sensing algorithms and models with strong generalizability and high reliability to achieve rapid, precise, and intelligent acquisition of soil parameters.

    Design and Key Technologies of Big Data Platform for Commercial Beef Cattle Breeding
    MA Weihong, LI Jiawei, WANG Zhiquan, GAO Ronghua, DING Luyu, YU Qinyang, YU Ligen, LAI Chengrong, LI Qifeng
    Smart Agriculture    2022, 4 (2): 99-109.   DOI: 10.12133/j.smartag.SA202203005
    Abstract746)   HTML73)    PDF(pc) (1993KB)(1261)       Save

    Focusing on the low level of management, informatization and intelligence in China's beef cattle industry, a big data platform for commercial beef cattle breeding was proposed in this research, drawing on the experience of countries with advanced beef cattle breeding. The functions of the platform include integrating beef cattle germplasm resources, automatic collection of key beef cattle breeding traits, full-service support for the beef cattle breeding process, formation of a big data analysis and decision-making system for beef cattle germplasm resources, and a joint breeding innovation model. Aiming at the backward storage and sharing methods for beef cattle breeding data and incomplete information records in China, an information resource integration platform and an information database for beef cattle germplasm were established. Due to the vagueness and subjectivity of breeding performance evaluation standards, a scientific online evaluation technology for beef cattle breeding traits and a non-contact automatic acquisition and intelligent calculation method were proposed. Considering the lack of scientific and systematic breeding planning and guidance for farmers in China, a full-service support system for beef cattle breeding with nanny-style guidance throughout the breeding process was developed, and an interactive progressive decision-making method was proposed to address the lack of accumulated beef cattle germplasm data. Since breeding programs and farming enterprises were not closely integrated, an innovative regional integrated breeding model was explored. The design of the commercial beef cattle breeding big data software platform and its technological and model innovations were also introduced. 
The technological innovations included deep mining of germplasm resource data with improved pedigree management, automatic acquisition and evaluation of non-contact breeding traits, and the fusion of multi-source heterogeneous information to provide intelligent decision support. The future goal is to form a sustainable information solution for China's beef cattle breeding industry and improve its overall level.

    A Lightweight Fruit Load Estimation Model for Edge Computing Equipment
    XIA Xue, CHAI Xiujuan, ZHANG Ning, ZHOU Shuo, SUN Qixin, SUN Tan
    Smart Agriculture    2023, 5 (2): 1-12.   DOI: 10.12133/j.smartag.SA202305004
    Abstract708)   HTML154)    PDF(pc) (2277KB)(4988)       Save

    [Objective] Fruit load estimation for fruit trees is essential for horticultural management. The traditional estimation method of manual sampling is not only labor-intensive and time-consuming but also prone to errors. Most existing models cannot be deployed on edge computing equipment with limited computing resources because of their high complexity. This study aims to develop a lightweight model for edge computing equipment to estimate fruit load automatically in the orchard. [Methods] The experimental data were captured using a smartphone in a citrus orchard in Jiangnan district, Nanning city, Guangxi province. From the dataset, 30 videos were randomly selected for model training and the other 10 for testing. The proposed algorithm was divided into two parts: detecting fruits and extracting their ReID features in each frame of the video, then tracking the fruits and estimating the fruit load. Specifically, the CSPDarknet53 network was used as the backbone of the model for feature extraction because it consumes less hardware computing resources, making it suitable for edge computing equipment. The path aggregation feature pyramid network (PAFPN) was introduced as the neck part for feature fusion via jump connections between low-level and high-level features. The fused features from the PAFPN were fed into two parallel branches: a fruit detection branch and an identity embedding branch. The fruit detection branch consisted of three prediction heads, each of which performed 3×3 convolution and 1×1 convolution on the feature map output by the PAFPN to predict the fruit's keypoint heat map, local offset and bounding box size, respectively. The identity embedding branch distinguished between different fruit identity features. 
In the fruit tracking stage, the byte mechanism from the ByteTrack algorithm was introduced to improve the data association of the FairMOT method, enhancing the performance of fruit load estimation in the video. The Byte algorithm considered both high-score and low-score detection boxes to associate fruit motion trajectories, then matched the similarity of fruit identity features between frames. The number of fruit IDs whose tracking duration was longer than five frames was counted as the amount of citrus fruit in the video. [Results and Discussions] All experiments were conducted on edge computing equipment. The fruit detection experiment was conducted on the same test dataset containing 211 citrus tree images. The experimental results showed that the CSPDarkNet53+PAFPN structure in the proposed model achieved a precision of 83.6%, a recall of 89.2% and an F1 score of 86.3%, superior to the same indexes of the FairMOT (ResNet34), FairMOT (HRNet18) and Faster RCNN models. The CSPDarkNet53+PAFPN structure could better detect the fruits in the images, laying a foundation for estimating the amount of citrus fruit on trees. The model complexity results showed that the number of parameters, FLOPs (Floating Point Operations) and size of the proposed model were 5.01 M, 36.44 G and 70.2 MB, respectively. The number of parameters of the proposed model was 20.19% of FairMOT (ResNet34)'s and 41.51% of FairMOT (HRNet18)'s. The FLOPs of the proposed model were 78.31% less than FairMOT (ResNet34)'s and 87.63% less than FairMOT (HRNet18)'s. The model size of the proposed model was 23.96% of FairMOT (ResNet34)'s and 45.00% of FairMOT (HRNet18)'s. Compared with Faster RCNN, the proposed model also showed advantages in the number of parameters, FLOPs and model size. 
The low complexity proved that the proposed model was more friendly to edge computing equipment. Compared with the lightweight backbone network EfficientNet-Lite, the CSPDarkNet53 applied in the proposed model's backbone achieved better fruit detection performance and lower model complexity. For fruit load estimation, the improved tracking strategy that integrated the Byte algorithm into FairMOT positively boosted the estimation accuracy. The experimental results on the test videos showed that the AEP (Average Estimating Precision) and FPS (Frames Per Second) of the proposed model reached 91.61% and 14.76 f/s, indicating that the proposed model could maintain high estimation accuracy while its FPS was 2.4 times and 4.7 times that of the comparison models, respectively. The RMSE (Root Mean Square Error) of the proposed model was 4.1713, which was 47.61% less than FairMOT (ResNet34)'s and 22.94% less than FairMOT (HRNet18)'s. The coefficient of determination R2 between the algorithm-measured value and the manually counted value was 0.9858, superior to that of the other comparison models. The proposed model showed better fruit load estimation performance and lower model complexity than the comparison models. [Conclusions] The experimental results proved the validity of the proposed model for fruit load estimation on edge computing equipment. This research could provide technical references for the automatic monitoring and analysis of orchard productivity. Future research will continue to enrich the data resources, further improve the model's performance, and explore more efficient methods to serve more fruit tree varieties.
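The counting rule described above (a fruit is counted once its track ID persists for more than five frames) can be sketched as follows; the per-frame ID lists are hypothetical stand-ins for the tracker's output:

```python
from collections import Counter

def count_fruits(frame_tracks, min_frames=5):
    """Count fruit IDs whose tracking duration exceeds min_frames frames.

    frame_tracks: list of per-frame lists of track IDs (hypothetical data
    standing in for FairMOT/ByteTrack association output).
    """
    duration = Counter()
    for ids_in_frame in frame_tracks:
        duration.update(ids_in_frame)        # one frame of presence per ID
    return sum(1 for n in duration.values() if n > min_frames)

# Toy sequence: fruit 1 appears in 7 frames, fruit 2 in only 3.
frames = [[1, 2]] * 3 + [[1]] * 4
print(count_fruits(frames))  # only fruit 1 passes the 5-frame threshold -> 1
```

Filtering out short-lived IDs in this way suppresses spurious tracks from momentary false detections, which is why the paper ties the count to track duration rather than raw detections.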

    Detection Method for Dragon Fruit in Natural Environment Based on Improved YOLOX
    SHANG Fengnan, ZHOU Xuecheng, LIANG Yingkai, XIAO Mingwei, CHEN Qiao, LUO Chendi
    Smart Agriculture    2022, 4 (3): 120-131.   DOI: 10.12133/j.smartag.SA202207001
    Abstract708)   HTML67)    PDF(pc) (2267KB)(1871)       Save

    Dragon fruit detection in the natural environment is a prerequisite for fruit harvesting robots to perform harvesting. In order to improve harvesting efficiency, a target detection network with an attention module was proposed in this research by improving the YOLOX (You Only Look Once X) network. The YOLOX-Nano network was chosen as the benchmark to facilitate deployment on embedded devices, and the convolutional block attention module (CBAM) was added to its backbone feature extraction network, which improved the robustness of the model for dragon fruit detection to a certain extent. The correlation between features of different channels was learned through weight allocation coefficients for the multi-scale features extracted by the backbone network. Moreover, the transmission of deep information through the network structure was strengthened, aiming to reduce interference in dragon fruit recognition in the natural environment and to significantly improve detection accuracy and speed. Performance evaluation and comparison tests of the method were carried out. The results showed that, after training, the dragon fruit detection network achieved an AP0.5 of 98.9%, an AP0.5:0.95 of 72.4% and an F1 score of 0.99 on the test set. Compared with other YOLO network models under the same experimental conditions, the improved YOLOX-Nano network model was more lightweight while achieving the highest average detection accuracy of 98.9%: 26.2 percentage points higher than YOLOv3, 9.8 percentage points higher than YOLOv4-Tiny, and 7.9 percentage points higher than YOLOv5-S. Finally, real-time tests were performed on videos with different input resolutions. 
The improved YOLOX-Nano network proposed in this research had an average detection time of 21.72 ms per image, and the network model size was only 3.76 MB, convenient for deployment on embedded devices. In conclusion, the improved YOLOX-Nano model not only accurately detected dragon fruit under different lighting and occlusion conditions, but its detection speed and accuracy could also meet the requirements of dragon fruit harvesting in the natural environment, providing guidance for the design of dragon fruit harvesting robots.
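The channel-attention half of CBAM that the paper adds to YOLOX-Nano can be sketched in NumPy. The shapes, reduction ratio and random weights below are illustrative assumptions, and the spatial-attention half of CBAM is omitted:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cbam_channel_attention(feat, w1, w2):
    """CBAM channel attention: average- and max-pool the (C, H, W) map over
    space, pass both descriptors through a shared two-layer MLP, sum, and
    squash with a sigmoid to get per-channel reweighting coefficients."""
    avg = feat.mean(axis=(1, 2))                 # (C,) average-pooled descriptor
    mx = feat.max(axis=(1, 2))                   # (C,) max-pooled descriptor
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0) # shared MLP, ReLU hidden layer
    scale = sigmoid(mlp(avg) + mlp(mx))          # per-channel weights in (0, 1)
    return feat * scale[:, None, None]           # reweight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))            # toy (C=8, H=4, W=4) feature map
w1 = rng.standard_normal((2, 8)) * 0.1           # reduction 8 -> 2 (assumed r=4)
w2 = rng.standard_normal((8, 2)) * 0.1
out = cbam_channel_attention(feat, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the sigmoid weights lie in (0, 1), the module can only attenuate channels, letting the network emphasize fruit-relevant channels relative to background clutter.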

    Research Status and Prospects of Key Technologies for Rice Smart Unmanned Farms
    YU Fenghua, XU Tongyu, GUO Zhonghui, BAI Juchi, XIANG Shuang, GUO Sien, JIN Zhongyu, LI Shilong, WANG Shikuan, LIU Meihan, HUI Yinxuan
    Smart Agriculture    2024, 6 (6): 1-22.   DOI: 10.12133/j.smartag.SA202410018
    Abstract703)   HTML124)    PDF(pc) (3047KB)(1637)       Save

    [Significance] The rice smart unmanned farm is the core component of smart agriculture and a key path to modernizing rice production and promoting high-quality agricultural development. Leveraging advanced information technologies such as the Internet of Things (IoT) and artificial intelligence (AI), these farms enable deep integration of data-driven decision making and intelligent machines. This integration creates an unmanned production system that covers the entire process from planting and managing rice crops to harvesting, greatly improving the efficiency and precision of rice cultivation. [Progress] This paper systematically reviews the key technologies of rice smart unmanned farms across three main links: pre-production, production and post-production. The key pre-production technologies mainly include the construction of high-standard farmland, unmanned nursery, land leveling, and soil nutrient testing. The construction of high-standard farmland is the physical foundation of rice smart unmanned farms, providing a sound operating environment for modern smart farm machinery through the rational layout of field roads, good drainage and irrigation systems, and a scientific planting structure. The technical level of the unmanned nursery directly determines the quality of rice cultivation and harvesting in later stages, and a variety of rice seeding machines and nursery plate setting machines have been put into use. Land leveling technology can improve the rice growing environment and increase land utilization; current land leveling relies on digital sensing and path planning technology, improving operational efficiency while reducing production costs. 
Soil nutrient detection mainly relies on electrochemical analysis and spectral analysis; both methods have advantages and disadvantages, and how to integrate them to achieve all-round detection of soil nutrient content is a main direction of future research. The key technologies in production mainly include rice dry direct seeding, automated transplanting, precise variable fertilization, intelligent irrigation, field weed management, and disease diagnosis. Among them, rice dry direct seeding technology requires the planter to have high precision and stability to ensure reasonable seeding depth and density. Automated rice transplanting technology mainly includes three approaches: washed-root seedling machine transplanting, blanket seedling machine transplanting, and potted blanket seedling machine transplanting; at present, the incidence of problems in the automated transplanting process should be further reduced, and the quality and efficiency of machine transplanting improved. Precision variable fertilization consists of three key technologies: information perception, prescription decision-making and precise operation, but there are still few cases of unmanned farms combining all three, and future research should focus on methods for constructing a whole-process variable fertilization operation system. The smart irrigation system realizes adaptive irrigation control based on the water demand over the whole rice life cycle; current smart irrigation technology can automatically adjust the irrigation strategy through real-time monitoring of soil, climate and crop growth conditions to further improve irrigation efficiency and agricultural production benefits. 
Field weed management and disease diagnosis technology mainly recognizes rice weeds and diseases through deep learning and other methods, combined with precision application technology for prevention and intervention. Key post-production technologies mainly include rice yield estimation, unmanned harvesting, and rice storage and processing quality testing. Rice yield estimation mainly predicts yield by combining multi-source data and algorithms, but problems such as the difficulty of integrating multi-source data remain and require further research. In terms of unmanned harvesting technology, China's rice combine harvester market has stabilized, and the safety of harvester autopilot systems should be further improved in the future. Rice storage and processing quality detection mainly utilizes spectral technology and machine vision to analyze spectra and images; future research can combine deep learning and multimodal fusion to improve the machine vision system's ability and adaptability in recognizing the appearance characteristics of rice. [Conclusions and Prospects] This paper reviews recent research on the construction of rice smart unmanned farms at home and abroad, summarizes the main difficulties faced by key unmanned farm technologies in practical applications, analyzes the challenges encountered in the construction of smart unmanned farms, summarizes the roles and responsibilities of the government, enterprises, scientific research institutions, cooperatives and other actors in promoting their construction, and puts forward relevant suggestions. It provides support and development ideas for the construction of rice smart unmanned farms in China.

    Identification and Counting of Silkworms in Factory Farm Using Improved Mask R-CNN Model
    HE Ruimin, ZHENG Kefeng, WEI Qinyang, ZHANG Xiaobin, ZHANG Jun, ZHU Yihang, ZHAO Yiying, GU Qing
    Smart Agriculture    2022, 4 (2): 163-173.   DOI: 10.12133/j.smartag.SA202201012
    Abstract700)   HTML37)    PDF(pc) (2357KB)(2872)       Save

    Factory-like rearing of silkworm (Bombyx mori) using an artificial diet for all instars is a brand-new rearing mode. Accurate feeding is one of the core technologies for saving cost and increasing efficiency in factory silkworm rearing, and automatic identification and counting of silkworms play a key role in realizing it. In this study, a machine vision system was used to obtain digital images of silkworms during the main instars, and an improved Mask R-CNN model was proposed to detect the silkworms and residual artificial diet. The original Mask R-CNN was improved to handle noisy annotation data by adding a pixel reweighting strategy and a bounding box fine-tuning strategy to the model framework, training a more robust model with better detection and segmentation of silkworms and residual feed. Three different data augmentation methods were used to expand the training dataset. The influences of silkworm instar, data augmentation, and overlap between silkworms on model performance were evaluated. The improved Mask R-CNN was then used to detect silkworms and residual feed. The AP50 (Average Precision at IoU=0.5) values of the model for silkworm detection and segmentation were 0.790 and 0.795, respectively, and the detection accuracy was 96.83%. The detection and segmentation AP50 values for residual feed were 0.641 and 0.653, respectively, and the detection accuracy was 87.71%. The model was deployed on the NVIDIA Jetson AGX Xavier development board with an average detection time of 1.32 s and a maximum detection time of 2.05 s per image. The computational speed of the improved Mask R-CNN can meet the requirement of real-time detection of the moving unit of the silkworm box on the production line. The model trained with fifth-instar data performed better on test data than the fourth-instar model. 
Among the data augmentation methods, brightness enhancement contributed most to model performance. Overlap between silkworms also negatively affected model performance. This study provides a core algorithm for the research and development of accurate feeding information systems and feeding devices for factory silkworm rearing, which can improve the utilization rate of artificial diet and raise the production and management level of factory silkworm rearing.
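The AP50 metric reported above scores a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal IoU computation for axis-aligned boxes, with toy coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two (x1, y1, x2, y2) boxes."""
    # Overlap rectangle (empty if the boxes are disjoint)
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive at AP50 when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50 / union 150
```

For segmentation AP50, the same ratio is computed over mask pixels instead of box areas.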

    Crop Stress Sensing and Plant Phenotyping Systems: A Review
    BAI Geng, GE Yufeng
    Smart Agriculture    2023, 5 (1): 66-81.   DOI: 10.12133/j.smartag.SA202211001
    Abstract697)   HTML72)    PDF(pc) (1595KB)(4487)       Save

    Enhancing resource use efficiency in agricultural field management and breeding high-performance crop varieties are crucial approaches for securing crop yield and mitigating the negative environmental impacts of crop production. Crop stress sensing and plant phenotyping systems are integral to variable-rate (VR) field management and high-throughput plant phenotyping (HTPP), with both sharing similarities in hardware and data processing techniques. Crop stress sensing systems for VR field management have been studied for decades, aiming to establish more sustainable management practices. Concurrently, significant advancements in HTPP system development have provided a technological foundation for reducing conventional phenotyping costs. In this paper, we present a systematic review of crop stress sensing systems employed in VR field management, followed by an introduction to the sensors and data pipelines commonly used in field HTPP systems. State-of-the-art sensing and decision-making methodologies for irrigation scheduling, nitrogen application, and pesticide spraying are categorized based on the degree of modern sensor and model integration. We highlight the data processing pipelines of three ground-based field HTPP systems developed at the University of Nebraska-Lincoln. Furthermore, we discuss current challenges and propose potential solutions for field HTPP research. Recent progress in artificial intelligence, robotic platforms, and innovative instruments is expected to significantly enhance system performance, encouraging broader adoption by breeders. Direct quantification of major plant physiological processes may represent one of the next research frontiers in field HTPP, offering valuable phenotypic data for crop breeding under increasingly unpredictable weather conditions. This review can offer a distinct perspective, benefiting both research communities in a novel manner.

    Evaluation and Countermeasures on the Development Level of Intelligent Cold Chain in China
    YANG Lin, YANG Bin, REN Qingshan, YANG Xinting, HAN Jiawei
    Smart Agriculture    2023, 5 (1): 22-33.   DOI: 10.12133/j.smartag.SA202302003
    Abstract658)   HTML143)    PDF(pc) (1380KB)(1394)       Save

    The new generation of information technology has driven rapid improvement in the intelligence of the cold chain, and precise assessment of the development level of the smart cold chain is the prerequisite for breaking through key technical bottlenecks and laying out strategic development directions. On this basis, an evaluation index system for China's intelligent cold chain development was constructed along the dimensions of supply capacity, storage capacity, transportation capacity, economic efficiency and informatization level. The entropy weight method combined with the technique for order preference by similarity to ideal solution (TOPSIS) was used to quantitatively evaluate the development of the intelligent cold chain in 30 Chinese provinces and cities (excluding Tibet, Hong Kong, Macao and Taiwan) from 2017 to 2021. The impact of the evaluation indicators on different provinces and cities was analyzed by exploratory spatial data analysis (ESDA) and geographically weighted regression (GWR). The results showed that indicators such as economic development status, construction of supporting facilities and informatization level carried greater weight and played a more important role in the construction of the intelligent cold chain. The overall level of intelligent cold chain development in China falls into four tiers, with most cities in the third and fourth tiers. Beijing and the eastern coastal provinces and cities generally have a better level of intelligent cold chain development, while the southwest and northwest regions are developing slowly. Overall, the development of China's intelligent cold chain is relatively backward, with insufficient inter-regional synergy. 
Global spatial autocorrelation analysis shows that variability in the development of China's intelligent cold chain logistics is gradually increasing. Local spatial autocorrelation analysis shows a positive spatial correlation among the provinces and cities of East China, and negative spatial correlations in North China and South China. Geographically weighted regression analysis shows that the evaluation indicators exhibited significant spatial and temporal heterogeneity in 2017, with their degree of influence changing with spatial location and time, whereas this heterogeneity was not significant in 2021. To improve the overall development level of China's intelligent cold chain, corresponding countermeasures are proposed: strengthening the construction of supporting facilities and promoting informatization transformation and upgrading. This study can provide a scientific basis for the overall planning, strategic layout and coordinated promotion of China's intelligent cold chain.
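The entropy weight method combined with TOPSIS used above can be sketched as follows. The 3×3 indicator matrix is hypothetical, and all indicators are assumed to be strictly positive benefit-type values (cost-type indicators would first need to be inverted):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a decision matrix X (rows: regions, cols: indicators).
    Indicators with more dispersion carry more information and get more weight."""
    P = X / X.sum(axis=0)                           # column-wise share matrix
    e = -(P * np.log(P)).sum(axis=0) / np.log(X.shape[0])  # entropy per indicator
    d = 1.0 - e                                     # degree of divergence
    return d / d.sum()                              # normalized weights

def topsis(X, w):
    """Relative closeness to the ideal solution for each row (higher is better)."""
    V = X / np.linalg.norm(X, axis=0) * w           # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)      # ideal and anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)

X = np.array([[0.8, 0.6, 0.9],
              [0.4, 0.9, 0.5],
              [0.6, 0.5, 0.7]])   # hypothetical indicator scores for 3 regions
w = entropy_weights(X)
scores = topsis(X, w)
print(scores.round(3))
```

Ranking the closeness scores yields the four-tier grouping of provinces reported in the study; the real analysis additionally repeats this per year to track temporal change.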

    Apple Phenological Period Identification in Natural Environment Based on Improved ResNet50 Model
    LIU Yongbo, GAO Wenbo, HE Peng, TANG Jiangyun, HU Liang
    Smart Agriculture    2023, 5 (2): 13-22.   DOI: 10.12133/j.smartag.SA202304009
    Abstract651)   HTML114)    PDF(pc) (2822KB)(2969)       Save

    [Objective] Aiming at the problems of low accuracy and incomplete coverage of traditional methods for image recognition of apple phenological periods in the natural environment, an improved ResNet50 model was proposed. [Methods] With images of 8 phenological periods of Red Fuji apple in the Sichuan plateau area as the research objects and 3 sets of spherical cameras installed in the apple orchard as acquisition equipment, an original dataset of 9800 apple phenological period images was obtained and labeled by fruit tree experts. Because the phenological periods of apple differ in duration, the numbers of images collected also differed. To avoid decreased model accuracy due to this imbalance, the dataset was augmented by random cropping, random rotation, horizontal flipping and brightness adjustment, expanding the original dataset to 32,000 images. It was divided into a training set (25,600 images), a validation set (3200 images) and a test set (3200 images) in a ratio of 8:1:1. Based on the ResNet50 model, the SE (Squeeze and Excitation Network) channel attention mechanism and the Adam optimizer were integrated. SE channel attention was introduced at the end of each residual module in the benchmark model to improve the model's feature extraction ability for plateau apple tree images. To achieve fast convergence, the Adam optimizer was combined with a cosine annealing learning rate schedule, and ImageNet pre-training was used, realizing intelligent recognition of the plateau Red Fuji apple phenological period in the natural environment. An "Intelligent Monitoring and Production Management Platform for Fruit Tree Growth Period" was developed using the apple phenology identification model. 
To reduce the probability of model misjudgment, improve recognition accuracy, and ensure precise control of the platform over the apple orchard, the three sets of cameras deployed in the orchard were set to patrol preset motion trajectories, and images were collected three times a day (morning, midday and evening), for a total of 27 images per day. The model computed recognition results for the 27 images and took the most frequently predicted category as the output, correcting the recognition rate and improving the reliability of the platform. [Results and Discussions] Experiments were carried out on the 32,000 apple tree images. The results showed that when the initial learning rate of the Adam optimizer was set to 0.0001, the accuracy of the test model approached the optimum and the loss curve converged fastest. With the initial learning rate set to 0.0001 and the number of iteration rounds set to 30, 50 and 70, the best validation set accuracies obtained by the model were 0.9354, 0.9635 and 0.9528, respectively. Therefore, the improved ResNet50 model adopted a learning rate of 0.0001 and 50 iteration rounds as the training parameters of the Adam optimizer. Ablation experiments showed that adding the SE attention mechanism to ResNet50 increased the validation set and test set accuracies by 0.8% and 2.99%, respectively, while adding the Adam optimizer increased them by 2.19% and 1.42%, respectively; with both added, the validation set and test set accuracies increased by 2.33% and 3.65%, respectively. The final validation set accuracy was 96.35%, the test set accuracy was 91.94%, and the average detection time was 2.19 ms. Compared with the AlexNet, VGG16, ResNet18, ResNet34, and ResNet101 models, the improved ResNet50 model improved the best validation set accuracy by 9.63%, 5.07%, 5.81%, 4.55%, and 0.96%, respectively. 
The test set accuracy increased by 12.31%, 6.88%, 8.53%, 8.67%, and 5.58%, respectively. The confusion matrix experiments showed that the overall recognition rate of the improved ResNet50 model for apple tree phenological period images exceeded 90%. The bud stage and dormancy stage had the lowest accuracies, 89.50% and 87.44% respectively, with a high probability of mutual misjudgment. There were also a few misjudgments among the young fruit, fruit enlargement, and fruit coloring stages due to the similarity of characteristics between adjacent stages. The external characteristics of the Red Fuji apple tree were most distinctive during the flowering and fruit ripening stages, for which the model had the highest recognition rates, with test accuracies reaching 97.50% and 97.49%, respectively. [Conclusions] The improved ResNet50 can effectively identify apple phenology, and the research results can provide a reference for orchard phenological period identification. After integration into the intelligent monitoring and production management platform for the fruit tree growth period, intelligent management and control of apple orchards can be realized.
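The two additions described above — an SE block appended to each residual module and Adam with a cosine-annealed learning rate — can be sketched as follows. This is a minimal, inference-only NumPy illustration of the mechanisms, not the authors' implementation; the weight shapes, reduction ratio, and schedule parameters are assumptions.

```python
import numpy as np

def se_block(feature_map, w1, b1, w2, b2):
    """Squeeze-and-Excitation channel attention, inference-only.

    feature_map: (C, H, W); w1/b1 and w2/b2 are the bottleneck FC
    weights, shapes (C//r, C) and (C, C//r) for reduction ratio r.
    """
    # Squeeze: global average pooling gives one descriptor per channel
    z = feature_map.mean(axis=(1, 2))
    # Excitation: ReLU bottleneck followed by sigmoid gating
    s = np.maximum(w1 @ z + b1, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s + b2)))
    # Scale: reweight each channel of the residual branch
    return feature_map * gate[:, None, None]

def cosine_annealed_lr(step, total_steps, lr_init=1e-4, lr_min=0.0):
    """Cosine-annealed learning rate; lr_init matches the reported 0.0001."""
    return lr_min + 0.5 * (lr_init - lr_min) * (1 + np.cos(np.pi * step / total_steps))
```

The gate lies in (0, 1), so each channel is attenuated according to its learned importance; the learning rate decays smoothly from 0.0001 to the floor over the training run.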

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Gait Phase Recognition of Dairy Cows based on Gaussian Mixture Model and Hidden Markov Model
    ZHANG Kai, HAN Shuqing, CHENG Guodong, WU Saisai, LIU Jifang
    Smart Agriculture    2022, 4 (2): 53-63.   DOI: 10.12133/j.smartag.SA202204003
    Abstract638)   HTML33)    PDF(pc) (1428KB)(1340)       Save

    The gait phase of dairy cows is an important indicator of the severity of lameness, but the accuracy of available gait segmentation methods is not sufficient for lameness detection. In this study, a gait phase recognition method based on a Gaussian mixture model (GMM) and a hidden Markov model (HMM) was proposed and tested. Firstly, wearable inertial sensors (LPMS-B2) were used to collect the acceleration and angular velocity signals of cow hind limbs. To remove system noise and restore the real dynamic data, a Kalman filter was used for preprocessing. The first-order difference of the angular velocity about the coronal axis was selected as the eigenvalue. Secondly, to analyze long-term, continuously recorded gait sequences of dairy cows, the processed data were clustered by the GMM in an unsupervised way. The clustering results were taken as the input of the HMM, and gait phase recognition was realized by decoding the observed data. Finally, the cow gait was segmented into 3 phases: the stationary phase, standing phase and swing phase. At the same time, gait segmentation was achieved according to the standing and swing phases. The accuracy, recall rate and F1 of the stationary phase were 89.28%, 90.95% and 90.91%, respectively; those of standing phase recognition in continuous gait were 91.55%, 86.71% and 89.06%; and those of swing phase recognition in continuous gait were 86.67%, 91.51% and 89.03%. The accuracy of cow gait segmentation was 91.67%, which was 4.23% and 1.10% higher than that of the event-based peak detection method and the dynamic time warping algorithm, respectively. The experimental results showed that the proposed method could overcome the influence of the cow's walking speed on gait phase recognition and recognize the gait phase accurately. 
This experiment provides a new method for the adaptive recognition of the cow gait phase in unconstrained environments. The degree of lameness of dairy cows can be judged by the gait features.
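The decoding step can be sketched as follows: GMM cluster labels serve as discrete observation symbols, and a 3-state HMM (stationary, standing, swing) is decoded with the Viterbi algorithm in log space. This is a generic NumPy sketch; the transition and emission probabilities in the usage example are toy values, not the fitted model.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden state path for a discrete-observation HMM.

    obs: sequence of GMM cluster labels; log_pi: (S,) initial log-probs;
    log_A: (S, S) transition log-probs; log_B: (S, K) emission log-probs.
    """
    S = log_pi.shape[0]
    T = len(obs)
    delta = np.full((T, S), -np.inf)   # best log-prob ending in each state
    psi = np.zeros((T, S), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            scores = delta[t - 1] + log_A[:, s]
            psi[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[psi[t, s]] + log_B[s, obs[t]]
    # Backtrack from the best final state
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(delta[-1]))
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path
```

With sticky transitions (high self-transition probability), the decoded path resists spurious single-frame cluster flips, which is what makes the HMM stage more robust than thresholding the GMM labels directly.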

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Three-Dimensional Environment Perception Technology for Agricultural Wheeled Robots: A Review
    CHEN Ruiyun, TIAN Wenbin, BAO Haibo, LI Duan, XIE Xinhao, ZHENG Yongjun, TAN Yu
    Smart Agriculture    2023, 5 (4): 16-32.   DOI: 10.12133/j.smartag.SA202308006
    Abstract636)   HTML85)    PDF(pc) (1885KB)(1506)       Save

    [Significance] As a research focus of future agricultural machinery, agricultural wheeled robots are developing in the direction of intelligence and multi-functionality. Advanced environmental perception technologies serve as a crucial foundation and key component for promoting intelligent operations of agricultural wheeled robots. However, given the unstructured and complex environments of on-field agricultural operations, the environmental information obtained through conventional 2D perception technologies is limited. Therefore, 3D environmental perception technologies are highlighted, as they provide additional dimensions of information such as depth, thereby directly enhancing the precision and efficiency of unmanned agricultural machinery operation. This paper provides a detailed analysis and summary of 3D environmental perception technologies, investigates the issues in the development of agricultural environmental perception technologies, and clarifies the future key development directions of 3D environmental perception technologies for agricultural machinery, especially agricultural wheeled robots. [Progress] Firstly, the general status of wheeled robots was introduced, considering their dominant role as platforms for environmental perception technologies. It was concluded that multi-wheel robots, especially four-wheel robots, are more suitable for the agricultural environment due to their favorable adaptability and robustness in various agricultural scenarios. In recent years, multi-wheel agricultural robots have gained widespread adoption and application globally. Further improvement of the universality, operational efficiency, and intelligence of agricultural wheeled robots depends on the employed perception and control systems. 
Agricultural wheeled robots equipped with novel 3D environmental perception technologies can therefore obtain high-dimensional environmental information, which is significant for improving the accuracy of decision-making and control, and enables effective approaches to the challenges of intelligent environmental perception. Secondly, the recent development status of 3D environmental perception technologies in agriculture was briefly reviewed, and the sensing equipment and corresponding key technologies were introduced. For the wheeled robots reported in agriculture, the applied environmental perception technologies, in terms of the primary sensor solutions employed, fall into three categories: LiDAR-based, vision sensor-based, and multi-sensor fusion-based solutions. Multi-line LiDAR performs better on many tasks when paired with point cloud processing algorithms. Compared with LiDAR, depth cameras such as binocular cameras, ToF cameras, and structured light cameras have been comprehensively investigated for application in agricultural robots; depth camera-based perception systems show advantages in cost and in providing abundant point cloud information. This study investigated and summarized the latest research on 3D environmental perception technologies employed by wheeled robots in agricultural machinery. In the reported application scenarios of agricultural environmental perception, state-of-the-art 3D environmental perception approaches have mainly focused on obstacle recognition, path recognition, and plant phenotyping. 3D environmental perception technologies have the potential to enhance the ability of agricultural robot systems to understand and adapt to the complex, unstructured agricultural environment. 
Furthermore, they can effectively address several challenges that traditional environmental perception technologies have struggled to overcome, such as partial sensor information loss, adverse weather conditions, and poor lighting conditions. Current research results have indicated that multi-sensor fusion-based 3D environmental perception systems outperform single-sensor-based systems. This superiority arises from the amalgamation of advantages from various sensors, which concurrently serve to mitigate individual shortcomings. [Conclusions and Prospects] The potential of 3D environmental perception technology for agricultural wheeled robots was discussed in light of the evolving demands of smart agriculture. Suggestions were made to improve sensor applicability, develop deep learning-based agricultural environmental perception technology, and explore intelligent high-speed online multi-sensor fusion strategies. Currently, the employed sensors in agricultural wheeled robots may not fully meet practical requirements, and the system's cost remains a barrier to widespread deployment of 3D environmental perception technologies in agriculture. Therefore, there is an urgent need to enhance the agricultural applicability of 3D sensors and reduce production costs. Deep learning methods were highlighted as a powerful tool for processing information obtained from 3D environmental perception sensors, improving response speed and accuracy. However, the limited datasets in the agriculture field remain a key issue that needs to be addressed. Additionally, multi-sensor fusion has been recognized for its potential to enhance perception performance in complex and changeable environments. As a result, it is clear that 3D environmental perception technology based on multi-sensor fusion is the future development direction of smart agriculture. 
To overcome challenges such as slow data processing, processing delays, and limited memory for storing data, it is essential to investigate effective fusion schemes that achieve online multi-source information fusion with greater intelligence and speed.
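As an illustration of the kind of 3D data these sensors provide, a depth image from a depth camera can be back-projected into a camera-frame point cloud with the pinhole model. This is a generic sketch with assumed intrinsics, not tied to any particular system reviewed here.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) to camera-frame 3D points.

    Pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Returns an (N, 3) array; zero-depth (invalid) pixels are dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    x = (u.ravel() - cx) * z / fx
    y = (v.ravel() - cy) * z / fy
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]
```

Downstream tasks such as obstacle recognition or plant phenotyping then operate on this point cloud, typically after voxel filtering or clustering.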

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    In Situ Identification Method of Maize Stalk Width Based on Binocular Vision and Improved YOLOv8
    ZUO Haoxuan, HUANG Qicheng, YANG Jiahao, MENG Fanjia, LI Sien, LI Li
    Smart Agriculture    2023, 5 (3): 86-95.   DOI: 10.12133/j.smartag.SA202309004
    Abstract620)   HTML84)    PDF(pc) (1659KB)(8180)       Save

    [Objective] The width of maize stalks is an important indicator of the lodging resistance of maize. Measuring maize stalk width suffers from several problems, such as a cumbersome manual collection process and large errors in automatic collection and recognition, so a method for in-situ detection and high-precision identification of maize stalk width is of great application value. [Methods] A ZED2i binocular camera was used, fixed in the field, to simultaneously obtain real-time images of maize stalks from the left and right viewpoints. The image acquisition system was based on the NVIDIA Jetson TX2 NX development board, which enabled timed shooting of both views of the maize via a scheduled program. Original maize images were collected and a dataset was established. To expose more features of the target area and help improve the generalization ability of model training, the original images were processed by five methods: saturation, brightness, contrast and sharpness adjustment, and horizontal flipping, expanding the dataset to 3500 images. YOLOv8 was used as the base model for identifying maize stalks against a complex background. The coordinate attention (CA) mechanism can bring large gains to downstream tasks on top of lightweight networks: the attention block captures long-distance relationships in one direction while retaining spatial information in the other, so that position information is preserved in the generated attention map, helping the network focus on the region of interest and locate the target more accurately. The CA module was added at multiple points: it was fused with the C2f module in the original Backbone, the Bottleneck in the original C2f module was replaced by the CA module, and the resulting C2fCA network module was redesigned. 
The loss function was replaced with Efficient IoU loss (EIoU), which splits the aspect-ratio loss term into the differences between the predicted width and height and those of the minimum enclosing box; this accelerated the convergence of the prediction box, improved its regression accuracy, and further improved the recognition accuracy of maize stalks. The binocular camera was then calibrated so that the left and right cameras lay in the same three-dimensional plane. Three-dimensional reconstruction of the maize stalks was then performed, and the recognition frames of the left and right cameras were matched by the following algorithm. First, it was checked whether the numbers of detected recognition frames in the two images were equal; if not, a new binocular image was acquired. If they were equal, the coordinate information and the width and height of the bounding boxes in the left and right images were compared, and it was judged whether their difference was less than a given threshold Ta; if greater than Ta, the image was re-imported. If less than Ta, the confidence difference of the recognition frames was compared against a given threshold Tb; if greater than Tb, the image was re-imported; if less than Tb, the recognition frames were taken to identify the same maize stalk in the left and right images. When these conditions were met, the matching of corresponding points in the binocular image was completed. After three-dimensional reconstruction of the binocular image, the three-dimensional coordinates (Ax, Ay, Az) and (Bx, By, Bz) of the upper-left and upper-right corners of the recognition box in the world coordinate system were obtained, and the distance between the two points was the width of the maize stalk. 
Finally, a comparative analysis was conducted among the improved YOLOv8 model, the original YOLOv8 model, faster region-based convolutional neural networks (Faster R-CNN), and the single shot multibox detector (SSD) to verify the recognition performance of the model. [Results and Discussions] The precision (P), recall (R), average precision mAP0.5 and average precision mAP0.5:0.95 of the improved YOLOv8 model reached 96.8%, 94.1%, 96.6% and 77.0%, respectively. Compared with YOLOv7, these increased by 1.3%, 1.3%, 1.0% and 11.6%; compared with YOLOv5, by 1.8%, 2.1%, 1.2% and 15.8%; compared with Faster R-CNN, by 31.1%, 40.3%, 46.2% and 37.6%; and compared with SSD, by 20.6%, 23.8%, 20.9% and 20.1%, respectively. For the width measurement, the linear regression coefficient of determination R2, root mean square error (RMSE) and mean absolute error (MAE) were 0.373, 0.265 cm and 0.244 cm, respectively. The proposed method can meet the measurement accuracy requirements of actual production for maize stalk width. [Conclusions] The in-situ recognition method of maize stalk width based on the improved YOLOv8 model can realize accurate in-situ identification of maize stalks, solving the problems of time-consuming and laborious manual measurement and poor machine vision recognition accuracy, and provides a theoretical basis for practical production applications.
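The final width computation — triangulating the two upper corners of the matched recognition box and taking their Euclidean distance — can be sketched for a rectified stereo pair as follows. The intrinsics, baseline, and disparity values in the usage example are illustrative assumptions, not the ZED2i calibration.

```python
import numpy as np

def stereo_point(u_left, v, disparity, fx, fy, cx, cy, baseline):
    """Triangulate one matched pixel in a rectified stereo pair.

    Depth from disparity: Z = fx * B / d; then back-project with the
    pinhole model to camera-frame coordinates (meters).
    """
    z = fx * baseline / disparity
    x = (u_left - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def stalk_width(corner_a, corner_b):
    """Width as the Euclidean distance between the two reconstructed corners."""
    return float(np.linalg.norm(corner_a - corner_b))
```

For example, with assumed fx = fy = 700 px, principal point (320, 240), baseline 0.12 m and disparity 84 px, both corners triangulate to a depth of 1 m, and a 40 px horizontal gap maps to a width of about 5.7 cm.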

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Chinese Kiwifruit Text Named Entity Recognition Method Based on Dual-Dimensional Information and Pruning
    QI Zijun, NIU Dangdang, WU Huarui, ZHANG Lilin, WANG Lunfeng, ZHANG Hongming
    Smart Agriculture    2025, 7 (1): 44-56.   DOI: 10.12133/j.smartag.SA202410022
    Abstract612)   HTML5)    PDF(pc) (1225KB)(105)       Save

    [Objective] Chinese kiwifruit texts exhibit unique dual-dimensional characteristics. Cross-paragraph dependencies form complex semantic structures, which make it challenging to capture the full contextual relationships of entities within a single paragraph, necessitating models capable of robust cross-paragraph semantic extraction to comprehend entity linkages at a global level. However, most existing models rely heavily on local contextual information and struggle to process long-distance dependencies, thereby reducing recognition accuracy. Furthermore, Chinese kiwifruit texts often contain highly nested entities. This nesting and combination increase the complexity of grammatical and semantic relationships, making entity recognition more difficult. To address these challenges, a novel named entity recognition (NER) method, KIWI-Coord-Prune (kiwifruit-CoordKIWINER-PruneBi-LSTM), was proposed in this research, which incorporated dual-dimensional information processing and pruning techniques to improve recognition accuracy. [Methods] The proposed KIWI-Coord-Prune model consisted of a character embedding layer, a CoordKIWINER layer, a PruneBi-LSTM layer, a self-attention mechanism, and a CRF decoding layer, enabling effective entity recognition after processing input character vectors. The CoordKIWINER and PruneBi-LSTM modules were specifically designed to handle the dual-dimensional features in Chinese kiwifruit texts. The CoordKIWINER module applied adaptive average pooling in two directions on the input feature maps and utilized convolution operations to separate the extracted features into vertical and horizontal branches. The horizontal and vertical features were then independently extracted using the Criss-Cross Attention (CCNet) mechanism and the Coordinate Attention (CoordAtt) mechanism, respectively. 
This module significantly enhanced the model's ability to capture cross-paragraph relationships and nested entity structures, generating enriched character vectors that contained more contextual information and improving the overall representation capability and robustness of the model. The PruneBi-LSTM module was built upon the enhanced dual-dimensional vector representations and introduced a pruning strategy into Bi-LSTM to effectively reduce redundant parameters associated with background descriptions and irrelevant terms. This pruning mechanism enhanced computational efficiency while maintaining the dynamic sequence modeling capability of Bi-LSTM, and also improved inference speed. Additionally, a dynamic feature extraction strategy was employed to reduce the computational complexity of vector sequences and further strengthen the learning capacity for key features, leading to improved recognition of complex entities in kiwifruit texts. Furthermore, the pruned weight matrices became sparser, significantly reducing memory consumption. This made the model more efficient in handling large-scale agricultural text-processing tasks, minimizing redundant information while achieving higher inference and training efficiency with fewer computational resources. [Results and Discussions] Experiments were conducted on the self-built KIWIPRO dataset and four public datasets: People's Daily, ClueNER, Boson, and ResumeNER. The proposed model was compared with five advanced NER models: LSTM, Bi-LSTM, LR-CNN, Softlexicon-LSTM, and KIWINER. The experimental results showed that KIWI-Coord-Prune achieved F1-scores of 89.55%, 91.02%, 83.50%, 83.49%, and 95.81%, respectively, outperforming all baseline models. Furthermore, controlled-variable and ablation experiments on the CoordKIWINER and PruneBi-LSTM modules across the five datasets confirmed their effectiveness and necessity. 
Additionally, the impact of different design choices for the CoordKIWINER module was explored, including direct fusion, optimized attention mechanism fusion, and residual optimization with network structure adjustment. The experimental results demonstrated that the optimized attention mechanism fusion method yielded the best performance and was ultimately adopted in the final model. These findings highlight the significance of properly designed attention mechanisms for extracting dual-dimensional features in NER tasks. Compared to existing methods, the KIWI-Coord-Prune model effectively addressed the underutilization of dual-dimensional information in Chinese kiwifruit texts. It significantly improved entity recognition performance for both overall text structures and individual entity categories. Furthermore, the model exhibited a degree of generalization capability, making it applicable to downstream tasks such as knowledge graph construction and question-answering systems. [Conclusions] This study presents a novel NER approach for Chinese kiwifruit texts, integrating dual-dimensional information extraction and pruning techniques to overcome challenges related to cross-paragraph dependencies and nested entity structures. The findings offer valuable insights for researchers working on domain-specific NER and contribute to the advancement of agriculture-focused natural language processing applications. However, two key limitations remain: 1) The balance between domain-specific optimization and cross-domain generalization requires further investigation, as the model's adaptability to non-agricultural texts has yet to be empirically validated; 2) the multilingual applicability of the model is currently limited, necessitating further expansion to accommodate multilingual scenarios. 
Future research should focus on two key directions: 1) Enhancing domain robustness and cross-lingual adaptability by incorporating diverse textual datasets and leveraging pre-trained multilingual models to improve generalization, and 2) Validating the model's performance in multilingual environments through transfer learning while refining linguistic adaptation strategies to further optimize recognition accuracy.
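The pruning idea in PruneBi-LSTM can be illustrated with simple magnitude pruning of a weight matrix, which zeroes the smallest-magnitude entries and yields the sparser matrices mentioned above. This NumPy sketch is a generic stand-in for illustration, not the paper's specific pruning strategy.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight matrix.

    sparsity: fraction in [0, 1) of entries to remove. Returns the
    pruned matrix and a boolean mask of surviving weights. Ties at
    the threshold magnitude are pruned together in this sketch.
    """
    flat = np.abs(weights).ravel()
    k = int(np.floor(sparsity * flat.size))
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # Threshold at the k-th smallest magnitude
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In practice the surviving mask would be applied to the recurrent and input weight matrices of the Bi-LSTM, reducing both memory footprint and per-step multiply-accumulate work.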

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Evaluation System of China's Low-Carbon Cold Chain Logistics Development Level
    YANG Bin, HAN Jiawei, YANG Lin, REN Qingshan, YANG Xinting
    Smart Agriculture    2023, 5 (1): 44-51.   DOI: 10.12133/j.smartag.SA202301011
    Abstract610)   HTML73)    PDF(pc) (707KB)(1676)       Save

    In recent years, China's cold chain logistics industry has entered a stage of rapid development. At the same time, with the increase of greenhouse gas emissions, green and low-carbon transformation has become a new feature and direction of the high-quality, healthy development of the cold chain industry, meeting the future needs of China's low-carbon economy. To ensure the scientific rigor of China's low-carbon cold chain logistics evaluation system, 30 indicators across four dimensions (energy transformation, technological innovation, economic efficiency, and national policy) were first preliminarily determined in this paper; after consulting experts and considering the feasibility of data acquisition, 14 indicators were finally selected to build the evaluation system for China's low-carbon cold chain logistics development. Data from 2017 to 2021 were selected to conduct a quantitative evaluation of the development level of low-carbon cold chain logistics in China. Firstly, the entropy weight method was used to analyze the weight and obstacle degree of each indicator, exploring the impact of different indicators on the development of low-carbon cold chain logistics. Secondly, a weighted decision matrix was constructed based on the indicator weights, and the technique for order preference by similarity to ideal solution (TOPSIS) evaluation model was used to evaluate the development of low-carbon cold chain logistics in China from 2017 to 2021, in order to determine its development and changes. 
The research results showed that among the 14 indicators of the established evaluation system, the growth rate of the use of green packaging materials, the number of low-carbon technical papers published, the proportion of scientific research personnel, the growth rate of cold chain logistics demand for fresh agricultural products, and the reduction rate of hydrochlorofluorocarbon refrigerants carried relatively large weights, ranking in the top five at 0.1243, 0.1074, 0.1066, 0.0982, and 0.0716, respectively, together accounting for more than half of the overall weight; these indicators have a significant impact on the development of low-carbon cold chain logistics in China. From 2017 to 2021, the development level score of China's low-carbon cold chain logistics rose from 0.1498 to 0.2359, an overall increase of about 57.5%, indicating that China's low-carbon cold chain logistics developed relatively quickly over the past five years. Although the development has shown an overall upward trend, it is still in the development stage.
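The two-stage evaluation — entropy weighting of indicators followed by TOPSIS closeness scores — can be sketched as follows, assuming a (years × indicators) matrix of positive, benefit-type indicator values. The data in the test are toy numbers, not the paper's.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method on a (years x indicators) positive matrix."""
    P = X / X.sum(axis=0)                      # share of each year per indicator
    n = X.shape[0]
    # Entropy per indicator; 0 * log 0 is treated as 0 via nansum
    with np.errstate(divide="ignore", invalid="ignore"):
        E = -np.nansum(P * np.log(P), axis=0) / np.log(n)
    d = 1 - E                                  # degree of divergence
    return d / d.sum()

def topsis(X, w):
    """Closeness of each alternative to the ideal solution (benefit criteria)."""
    Z = X / np.sqrt((X ** 2).sum(axis=0))      # vector-normalized decision matrix
    V = Z * w                                  # weighted decision matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)
```

An indicator that is constant across years carries zero entropy weight — it cannot discriminate between years — while a strongly varying indicator dominates, matching the intuition behind the reported top-five weights.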

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Rapid Recognition and Picking Points Automatic Positioning Method for Table Grape in Natural Environment
    ZHU Yanjun, DU Wensheng, WANG Chunying, LIU Ping, LI Xiang
    Smart Agriculture    2023, 5 (2): 23-34.   DOI: 10.12133/j.smartag.SA202304001
    Abstract609)   HTML67)    PDF(pc) (2122KB)(971)       Save

    [Objective] Rapid recognition and automatic positioning of table grapes in the natural environment is a prerequisite for automatic picking by a grape-picking robot. [Methods] A rapid recognition and automatic picking point positioning method based on an improved K-means clustering algorithm and contour analysis was proposed. First, the Euclidean distance was replaced by a weighted gray threshold as the similarity criterion of K-means. The images of table grapes were rasterized according to the K value to obtain the initial clustering centers. Next, the average gray value of each cluster and the percentage of its pixels in the total pixels were calculated, and the weighted gray threshold was obtained from the average gray values and percentages of adjacent clusters. Clustering was considered complete once the weighted gray threshold no longer changed, yielding the cluster image of the table grapes. The improved clustering algorithm not only saved clustering time but also ensured that the K value could change adaptively. Moreover, the adaptive Otsu algorithm was used to extract grape cluster information, producing the initial binary image of the table grapes. To reduce the interference of redundant noise with recognition accuracy, morphological operations (opening, closing, hole filling and maximum connected domain extraction) were used to remove noise, giving an accurate binary image of the table grapes. The contours of the table grapes were then obtained by the Sobel operator. Furthermore, since table grape clusters grow perpendicular to the ground under gravity in the natural environment, the extreme points and the center of gravity of the grape cluster were obtained based on contour analysis. 
In addition, the line bundle through the extreme points and the center of gravity was taken as the carrier, and the similarity of pixels on both sides of each line was taken as the judgment basis; the line with the lowest similarity value was taken as the grape stem, thus locating the stem axis of the grape. Moreover, according to the agronomic picking requirements of table grapes and combined with contour analysis, the region of interest (ROI) for picking points was obtained: the intersection of the grape stem and the contour was regarded as the middle point of the bottom edge of the ROI, 0.8 times the distance between the left and right extreme points as the length of the ROI, and 0.25 times the distance between the center of gravity and the intersection of the grape stem and the contour as the height of the ROI. After that, the central point of the ROI was captured, the point on the grape stem nearest to the ROI center was determined, and this point was taken as the picking point of the table grapes. Finally, 917 grape images (including Summer Black, Moldova, and Youyong) taken by the rear camera of a MI8 mobile phone at the Jinniu Mountain Base of Shandong Fruit and Vegetable Research Institute were verified experimentally. [Results and Discussions] The results showed that the success rate was 90.51% when the error between the table grape picking points and the optimal points was less than 12 pixels, and the average positioning time was 0.87 s; the method realized fast and accurate localization of table grape picking points. In addition, according to the two cultivation modes of table grapes (hedgerow planting and trellis planting), a simulation test platform based on the Dense mechanical arm and a single-chip computer was set up in the study. 
50 simulation tests were carried out for each of the four conditions. The success rate of picking point localization for purple grapes under hedgerow planting was 86.00%, with an average localization time of 0.89 s; for purple grapes under trellis planting, 92.00% and 0.67 s; for green grapes under hedgerow planting, 78.00% and 0.72 s; and for green grapes under trellis planting, 80.00% and 0.71 s. [Conclusions] The experimental results showed that the proposed method can meet the requirements of table grape picking and provide technical support for the development of grape-picking robots.
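The ROI geometry described above can be sketched directly from the stated rules: bottom-edge midpoint at the stem-contour intersection, length 0.8 times the extreme-point distance, and height 0.25 times the gravity-to-intersection distance. The coordinate convention (the ROI extending toward smaller y, i.e. upward in image coordinates along the stem) is an assumption of this sketch.

```python
import numpy as np

def picking_roi(left_ext, right_ext, gravity, stem_top):
    """ROI for the picking point, following the stated geometric rules.

    left_ext, right_ext: left/right extreme points of the cluster contour;
    gravity: center of gravity; stem_top: intersection of the stem axis
    with the contour (midpoint of the ROI's bottom edge).
    Returns (center, length, height); the ROI extends upward from stem_top.
    """
    length = 0.8 * np.linalg.norm(np.asarray(right_ext) - np.asarray(left_ext))
    height = 0.25 * np.linalg.norm(np.asarray(gravity) - np.asarray(stem_top))
    # ROI center sits half the height above the bottom-edge midpoint
    center = np.array([stem_top[0], stem_top[1] - height / 2.0])
    return center, length, height
```

The picking point is then taken as the point on the detected stem line nearest to the returned center.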

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Localization Method for Agricultural Robots Based on Fusion of LiDAR and IMU
    LIU Yang, JI Jie, PAN Deng, ZHAO Lijun, LI Mingsheng
    Smart Agriculture    2024, 6 (3): 94-106.   DOI: 10.12133/j.smartag.SA202401009
    Abstract587)   HTML52)    PDF(pc) (3329KB)(1071)       Save

[Objective] High-precision localization technology serves as the crucial foundation enabling the autonomous navigation of intelligent agricultural robots. However, the traditional global navigation satellite system (GNSS) localization method faces numerous limitations in the agricultural environment: tree shading, electromagnetic interference, and other factors challenge the accuracy and reliability of localization. To address these deficiencies and achieve precise localization of agricultural robots independent of GNSS, a localization method based on the fusion of three-dimensional light detection and ranging (LiDAR) data and inertial measurement unit (IMU) information was proposed to enhance localization accuracy and reliability. [Methods] LiDAR was used to obtain point cloud data in the agricultural environment and realize self-localization via point cloud matching. By integrating real-time motion parameter measurements from the IMU with the LiDAR data, a high-precision localization solution for agricultural robots was achieved through a specific fusion algorithm. Firstly, the LiDAR point cloud data were preprocessed and stored as a depth map. This approach reduced the dimensionality of the original LiDAR point cloud and eliminated the disorder of its arrangement, facilitating traversal and clustering through graph search. Given the presence of numerous distinct crops such as trees in the agricultural environment, an angle-based clustering method was adopted. Specific angle-based clustering criteria were set to group the point cloud data, segmenting it into different clusters so that prominent crops in the agricultural environment were effectively perceived. Furthermore, to improve the accuracy and stability of positioning, an improved three-dimensional normal distribution transform (3D-NDT) localization algorithm was proposed. 
This algorithm matched the LiDAR-scanned point cloud data in real time against a pre-existing downsampled point cloud map to achieve real-time localization. Considering that directly downsampling LiDAR point clouds in the agricultural environment could cause the loss of crucial environmental data, a point cloud clustering operation was used in place of the downsampling operation, thereby improving matching accuracy and positioning precision. Secondly, to address the potential constraints and shortcomings of using a single sensor for robot localization, a multi-sensor information fusion strategy was deployed to improve localization accuracy. Specifically, the extended Kalman filter (EKF) algorithm was chosen to fuse the localization data from the LiDAR point cloud with the IMU odometry information. The IMU provided essential motion parameters such as the acceleration and angular velocity of the agricultural robot, and by combining these with the LiDAR-derived localization information, the pose of the agricultural robot could be estimated more accurately. This fusion approach maximized the advantages of the different sensors, compensated for their individual limitations, and improved the overall localization accuracy of the agricultural robot. [Results and Discussions] A series of experiments in the Gazebo simulation environment of the robot operating system (ROS) and in real operation scenarios showed that the proposed fusion localization method had significant advantages. In the simulation environment, the average localization errors of the proposed multi-sensor data fusion localization method were 1.7 and 1.8 cm, respectively, while in the experimental scenario these errors were both 3.3 cm, significantly better than the traditional 3D-NDT localization algorithm. 
These findings showed that the localization method proposed in this study could achieve high-precision localization in the complex agricultural environment, and provide reliable localization assistance for the autonomous functioning of agricultural robots. [Conclusions] The proposed localization method based on the fusion of LiDAR data and IMU information provided a novel localization solution for the autonomous operation of agricultural robots in areas with limited GNSS reception. Through the comprehensive utilization of multi-sensor information and adopting advanced data processing and fusion algorithms, the localization accuracy of agricultural robots could be significantly improved, which could provide a new reference for the intelligence and automation of agricultural production.
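The predict/update cycle behind the LiDAR-IMU fusion can be illustrated with a minimal one-dimensional Kalman filter sketch. The paper's EKF generalizes this to the full robot pose; all variable values here are hypothetical, and the noise-free measurements are for illustration only.

```python
def kf_predict(x, v, P, a, dt, q):
    # Prediction: propagate position/velocity with the IMU acceleration, and
    # inflate the position uncertainty P by the process noise q.
    x = x + v * dt + 0.5 * a * dt * dt
    v = v + a * dt
    return x, v, P + q

def kf_update(x, v, P, z, r):
    # Correction: fuse a LiDAR (e.g. 3D-NDT) position fix z with variance r.
    K = P / (P + r)                 # Kalman gain: trust in LiDAR vs. prediction
    return x + K * (z - x), v, (1 - K) * P

# Toy run: robot moves at 1 m/s; a noise-free LiDAR fix arrives every 0.1 s.
x, v, P = 0.0, 1.0, 1.0
for k in range(1, 11):
    x, v, P = kf_predict(x, v, P, a=0.0, dt=0.1, q=0.01)
    x, v, P = kf_update(x, v, P, z=0.1 * k, r=0.04)
```

The same two-step structure carries over to the multi-dimensional EKF, where `x` becomes a pose vector, `P` a covariance matrix, and the IMU supplies the acceleration and angular velocity used in prediction.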

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Research Progress of Apple Production Intelligent Chassis and Weeding and Harvesting Equipment Technology
    DUAN Luojia, YANG Fuzeng, YAN Bin, SHI shuaiqi, QIN jifeng
    Smart Agriculture    2022, 4 (3): 24-41.   DOI: 10.12133/j.smartag.SA202206010
    Abstract578)   HTML53)    PDF(pc) (1521KB)(2327)       Save

As a pillar industry of economic development in the main apple-producing areas, the apple industry has made important contributions to increasing local farmers' incomes. With the transformation and upgrading of the apple industry, the level of mechanization and intelligence is directly related to its economic benefits. To promote research on intelligent technology for apple production and the development of intelligent equipment, in this paper, the current level of mechanization in each step of apple production was first introduced. Then, the main characteristics of the principal apple orchard machinery, such as power chassis, weeding machinery, and harvesting equipment, were demonstrated. The application progress of automatic leveling and control, automatic navigation, automatic obstacle avoidance, weed identification, weed removal, apple identification, apple positioning, apple separation, and other technologies in intelligent power chassis, intelligent weeding machines, and apple harvesting robots was summarized. The basic principles and characteristics of the above three key categories of intelligent equipment were expounded in combination with different application environments. Intelligent control is the key technology for the intelligentization of orchard power chassis. The progress of chassis adaptive control technology and autonomous navigation technology was discussed. In addition, a chassis intelligent perception and intelligent decision-making system should be established. Safe, accurate, efficient, and stable driving and operation of the orchard chassis is the future development trend of intelligent orchard chassis. The lack of robust weed sensing technology is the main limitation to the commercial development of robotic weed control systems. 
To improve the level of weed detection and weeding, machine vision and multi-sensor fusion methods have been proposed to solve practical problems such as illumination, overlapping leaves, occlusion, and classifier or network structure optimization. Robotic apple harvesting has proven to be a highly challenging task due to environmental complexity, sensor reliability, and robot stability. To improve the accuracy and efficiency of mechanized apple harvesting, rapid apple identification in complex scenes, picking path planning, and the materials and structure of the picking manipulator must all be optimized accordingly. Finally, the challenges of intelligent equipment technologies in apple production were analyzed, and development suggestions were put forward. This research can provide references and ideas for the advancement of intelligent technology research in apple production and the research and development of intelligent equipment.

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Comparison of Droplet Deposition Performance Between Caterpillar Mist Sprayer and Six-Rotor Unmanned Aerial Vehicle in Mango Canopy
    LI Yangfan, HE Xiongkui, HAN Leng, HUANG Zhan, HE Miao
    Smart Agriculture    2022, 4 (3): 53-62.   DOI: 10.12133/j.smartag.SA202207007
    Abstract571)   HTML31)    PDF(pc) (1650KB)(790)       Save

In order to solve the problems of pesticide abuse, non-uniform deposition and low operating efficiency, and to support the construction of smart mango orchards, the spray deposition characteristics in the mango canopy of two types of orchard pesticide machinery, i.e., an orchard caterpillar mist sprayer and a six-rotor unmanned aerial vehicle (UAV), were compared. The mango canopy was divided into upper, middle and lower layers, tartrazine was selected as the tracer, high-definition printing paper and filter paper were used to collect the spray droplets, and image processing methods such as deposition distribution uniformity analysis were used to analyze the droplets. The experimental results showed that, for the droplet coverage rate on upper-canopy leaf surfaces, the UAV was significantly higher than the caterpillar mist sprayer, while there was no significant difference for the middle and lower canopy leaves. The average coverage rate on both the front and back of leaves in the UAV treatment group was 1.5~2 times that of the caterpillar mist sprayer, and the UAV achieved more deposition on the backs of leaves than the caterpillar mist sprayer. The droplet density on the front of the leaves in the mist sprayer treatment was significantly higher than that in the UAV treatment, but there was no significant difference on the back of the leaves. Neither the front nor the back of the leaves in the UAV treatment met the droplet density requirement of 20 droplets/cm2 for low-volume disease and pest control. The liquid deposition of the mist sprayer was concentrated in the middle and lower canopy (61.1%), while that of the UAV was concentrated in the upper canopy (43.0%). The overall proportion of deposition within the canopy for the mist sprayer was higher than that of the UAV (48.6%), but the deposition of the mist sprayer in the upper canopy was insufficient, accounting for only 17%. 
The research shows that, compared with the UAV, the caterpillar mist sprayer is more suitable for pest control in the lower and middle canopy, and its high droplet coverage density also offers obvious advantages when spraying fungicides. The UAV is more suitable for controlling pests and diseases on the outer upper mango canopy, such as thrips and anthracnose. According to the experimental results, a stereoscopic plant protection system can be built that uses the advantages of both the caterpillar mist sprayer and the UAV to achieve uniform pesticide coverage of the mango tree canopy.
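The card-based droplet measurements above reduce to two simple image statistics, sketched here on a toy binary sampling card (1 = stained pixel). The function names, the example card, and its 0.125 cm2 area are illustrative, not the paper's code or data.

```python
def coverage_rate(card):
    # Percentage of stained pixels on the sampling card.
    flat = [px for row in card for px in row]
    return 100.0 * sum(flat) / len(flat)

def droplet_density(card, cm2):
    # Count 4-connected stained blobs with a flood fill, then divide by the
    # card area to obtain droplets per square centimetre.
    h, w = len(card), len(card[0])
    seen, blobs = set(), 0
    for i in range(h):
        for j in range(w):
            if card[i][j] and (i, j) not in seen:
                blobs += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen or not (0 <= y < h and 0 <= x < w) or not card[y][x]:
                        continue
                    seen.add((y, x))
                    stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return blobs / cm2

card = [[1, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 0, 1],
        [1, 0, 0, 0]]
```

On a real card, the computed density would be compared against the 20 droplets/cm2 threshold for low-volume pest control mentioned above.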

    Reference | Related Articles | Metrics | Comments0
    The Paradigm Theory and Judgment Conditions of Geophysical Parameter Retrieval Based on Artificial Intelligence
    MAO Kebiao, ZHANG Chenyang, SHI Jiancheng, WANG Xuming, GUO Zhonghua, LI Chunshu, DONG Lixin, WU Menxin, SUN Ruijing, WU Shengli, JI Dabin, JIANG Lingmei, ZHAO Tianjie, QIU Yubao, DU Yongming, XU Tongren
    Smart Agriculture    2023, 5 (2): 161-171.   DOI: 10.12133/j.smartag.SA202304013
    Abstract570)   HTML62)    PDF(pc) (1400KB)(2142)       Save

[Objective] Deep learning is one of the most important technologies in the field of artificial intelligence, and has sparked a research boom in academic and engineering applications. It also shows strong application potential in remote sensing retrieval of geophysical parameters. However, this cross-disciplinary research is just beginning, and most deep learning applications in the geosciences are still "black boxes", lacking physical significance, interpretability, and universality. In order to promote the application of artificial intelligence in the geosciences and agriculture and cultivate interdisciplinary talents, a paradigm theory for geophysical parameter retrieval based on artificial intelligence coupling physical and statistical methods was proposed in this research. [Methods] The construction of the retrieval paradigm theory for geophysical parameters mainly included three parts. Firstly, physical logic deduction was performed based on the physical energy balance equation, and the inversion equation system was constructed theoretically, eliminating the ill-posed problem of having fewer equations than unknowns. Then, a fuzzy statistical method was constructed based on the physical deduction: representative solutions of the physical method were obtained through physical model simulation, and other representative solutions were obtained from multi-source data, together forming the training and testing database for deep learning. Finally, deep learning achieved the goal of coupling physical and statistical methods by using the representative solutions from both as training and testing databases. Deep learning training and testing aimed to reproduce the solution curves of the physical and statistical methods, thereby making the deep learning model physically meaningful and interpretable. 
[Results and Discussions] The conditions for forming a universal and physically interpretable paradigm were: (1) There must be a causal relationship between the input and output variables (parameters); (2) In theory, a closed system of equations (with the number of unknowns less than or equal to the number of equations) can be constructed between the input and output variables (parameters), meaning the output parameters can be uniquely determined by the input parameters. If there is a strong causal relationship between the input and output parameters (variables), deep learning can be directly used for inversion. If there is only a weak correlation between the input and output parameters, prior knowledge needs to be added to improve the inversion accuracy of the output parameters. MODIS thermal infrared remote sensing data were used to retrieve land surface temperature (LST), land surface emissivity (LSE), near-surface air temperature (NSAT) and atmospheric water vapor content (WVC) as a case study to validate the theory. When there was a strong correlation between the output parameters (LST and LSE) and the input variables (BTi, the band brightness temperatures), deep learning coupling physical and statistical methods achieved very high accuracy. When there was a weak correlation between the output parameter (NSAT) and the input variables (BTi), adding prior knowledge (LST and LSE) improved the inversion accuracy and stability of the output parameter (NSAT). When there was a partially strong correlation (WVC and BTi), adding prior knowledge (LST and LSE) slightly improved accuracy and stability, but the error of the prior knowledge (LST and LSE) may introduce uncertainty, so the prior knowledge could also be omitted. 
According to the inversion analysis of the geophysical parameters for the MODIS thermal infrared bands, bands 27, 28, 29 and 31 were more suitable for inversion of atmospheric water vapor content, and bands 28, 29, 31 and 32 were more suitable for inversion of land surface temperature, emissivity and near-surface air temperature. To achieve the highest accuracy for all four parameters, it is recommended to design the instrument with the five most suitable bands (27, 28, 29, 31 and 32). If only four thermal infrared bands are designed, bands 27, 28, 31 and 32 should be given priority. The land surface temperature, emissivity, near-surface air temperature and atmospheric water vapor content retrieved from MODIS data using this theory were not only more accurate than those from traditional methods, but the approach could also reduce the number of required bands, lowering the satellite payload and extending satellite life. In particular, this theoretical method overcomes the problems of the official MODIS day/night algorithm, whose inversion products have unstable accuracy under sudden changes in surface type and long-term gaps in continuous data. The analysis results showed that the proposed theory and conditions are feasible, and their accuracy and applicability were better than traditional methods. The theory and judgment conditions of the geophysical parameter retrieval paradigm are also applicable to target recognition tasks such as remote sensing classification, but they need to be interpreted from a different perspective; for example, the feature information extracted by different convolutional kernels must be able to uniquely determine the target. When the conditions of the paradigm theory are satisfied, inversion of geophysical parameters based on artificial intelligence is the best choice. 
[Conclusions] The geophysical parameter retrieval paradigm theory based on artificial intelligence proposed in this study can overcome the shortcomings of traditional retrieval methods, especially for remote sensing parameter retrieval, simplifying the inversion process and improving inversion accuracy. At the same time, it can optimize the design of satellite sensors. The proposal of this theory is of milestone significance in the history of geophysical parameter retrieval.
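The coupling idea above — a physical model supplies the representative solutions, and a statistical model is trained to reproduce them — can be reduced to a toy sketch. The linear "physical model" and the plain least-squares fit below are hypothetical stand-ins for the radiative-transfer equations and the deep network; all coefficients are invented.

```python
# Step 1: a "physical model" generates representative solutions (the training
# database). Hypothetical stand-in: LST as a linear function of one band's
# brightness temperature (BT).
def physical_model(bt):
    return 1.2 * bt - 50.0

bts  = [250 + 0.5 * i for i in range(200)]     # simulated brightness temps (K)
lsts = [physical_model(bt) for bt in bts]

# Step 2: a "statistical model" (here a closed-form linear fit, standing in
# for the deep network) is trained on the representative solutions.
n = len(bts)
mb, ml = sum(bts) / n, sum(lsts) / n
slope = (sum((b - mb) * (l - ml) for b, l in zip(bts, lsts))
         / sum((b - mb) ** 2 for b in bts))
intercept = ml - slope * mb

# Step 3: the trained statistical model reproduces the physics, so its
# outputs inherit physical meaning and interpretability.
```

Because the training pairs come from the physical model rather than opaque empirical data, the fitted model's behavior can be checked term by term against the physics — the core of the paradigm's interpretability argument.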

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Research Progress and Prospect of Multi-robot Collaborative SLAM in Complex Agricultural Scenarios
    MA Nan, CAO Shanshan, BAI Tao, KONG Fantao, SUN Wei
    Smart Agriculture    2024, 6 (6): 23-43.   DOI: 10.12133/j.smartag.SA202406005
    Abstract554)   HTML73)    PDF(pc) (2300KB)(4507)       Save

    [Significance] The rapid development of artificial intelligence and automation has greatly expanded the scope of agricultural automation, with applications such as precision farming using unmanned machinery, robotic grazing in outdoor environments, and automated harvesting by orchard-picking robots. Collaborative operations among multiple agricultural robots enhance production efficiency and reduce labor costs, driving the development of smart agriculture. Multi-robot simultaneous localization and mapping (SLAM) plays a pivotal role by ensuring accurate mapping and localization, which are essential for the effective management of unmanned farms. Compared to single-robot SLAM, multi-robot systems offer several advantages, including higher localization accuracy, larger sensing ranges, faster response times, and improved real-time performance. These capabilities are particularly valuable for completing complex tasks efficiently. However, deploying multi-robot SLAM in agricultural settings presents significant challenges. Dynamic environmental factors, such as crop growth, changing weather patterns, and livestock movement, increase system uncertainty. Additionally, agricultural terrains vary from open fields to irregular greenhouses, requiring robots to adjust their localization and path-planning strategies based on environmental conditions. Communication constraints, such as unstable signals or limited transmission range, further complicate coordination between robots. These combined challenges make it difficult to implement multi-robot SLAM effectively in agricultural environments. To unlock the full potential of multi-robot SLAM in agriculture, it is essential to develop optimized solutions that address the specific technical demands of these scenarios. 
[Progress] Existing review studies on multi-robot SLAM mainly focus on a general technological perspective, summarizing trends in the development of multi-robot SLAM, the advantages and limitations of algorithms, universally applicable conditions, and core issues of key technologies. However, there is a lack of analysis specifically addressing multi-robot SLAM under the characteristics of complex agricultural scenarios. This study focuses on the main features and applications of multi-robot SLAM in complex agricultural scenarios. The study analyzes the advantages and limitations of multi-robot SLAM, as well as its applicability and application scenarios in agriculture, focusing on four key components: multi-sensor data fusion, collaborative localization, collaborative map building, and loop closure detection. From the perspective of collaborative operations in multi-robot SLAM, the study outlines the classification of SLAM frameworks, including three main collaborative types: centralized, distributed, and hybrid. Based on this, the study summarizes the advantages and limitations of mainstream multi-robot SLAM frameworks, along with typical scenarios in robotic agricultural operations where they are applicable. Additionally, it discusses key issues faced by multi-robot SLAM in complex agricultural scenarios, such as low accuracy in mapping and localization during multi-sensor fusion, restricted communication environments during multi-robot collaborative operations, and low accuracy in relative pose estimation between robots. [Conclusions and Prospects] To enhance the applicability and efficiency of multi-robot SLAM in complex agricultural scenarios, future research needs to focus on solving these critical technological issues. Firstly, the development of enhanced data fusion algorithms will facilitate improved integration of sensor information, leading to greater accuracy and robustness of the system. 
Secondly, the combination of deep learning and reinforcement learning techniques is expected to empower robots to better interpret environmental patterns, adapt to dynamic changes, and make more effective real-time decisions. Thirdly, large language models will enhance human-robot interaction by enabling natural language commands, improving collaborative operations. Finally, the integration of digital twin technology will support more intelligent path planning and decision-making processes, especially in unmanned farms and livestock management systems. The convergence of digital twin technology with SLAM is projected to yield innovative solutions for intelligent perception and is likely to play a transformative role in the realm of agricultural automation. This synergy is anticipated to revolutionize the approach to agricultural tasks, enhancing their efficiency and reducing the reliance on labor.

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Design and Test of Self-Propelled Orchard Multi-Station Harvesting Equipment
    MIAO Youyi, CHEN Hong, CHEN Xiaobing, TIAN Haoyu, YUAN Dong
    Smart Agriculture    2022, 4 (3): 42-52.   DOI: 10.12133/j.smartag.SA202206007
    Abstract551)   HTML69)    PDF(pc) (1391KB)(1387)       Save

In order to solve the problems of high labor intensity, low efficiency of manual operation and the lack of supporting machinery in fruit harvesting in modern orchards, a self-propelled orchard multi-station harvesting equipment was designed for the dwarfing-rootstock, wide-row, dense planting mode of fruit trees and its agronomic requirements. The overall structure and working principle of the self-propelled orchard multi-station harvesting equipment were expounded. According to the environmental conditions of mountainous orchards, a crawler chassis structure was designed with a working speed of 0~2 km/h. The operating platform, including left and right extension platforms, was designed to accommodate differences in fruit tree row spacing, with a working width of 1500~2700 mm. In order to improve working efficiency and ensure the same picking speed for upper and lower operators, the picking operation mode of "two sides, two heights and six stations" was proposed by comparing the difference in working flexibility between operators on the platform and operators on the ground during machine operation, and the in-and-out channels for fruit boxes and the automatic collection and packing device were designed. An unobstructed front-and-rear fruit box access system was composed of the front loading and unloading mechanism, the rear loading and unloading mechanism and the fruit box slide rail, allowing empty fruit boxes to enter the fruit loading station of the working platform from the front and to be unloaded from the rear once filled. Six sub-conveyor belts were designed to handle apples harvested simultaneously by six non-interfering operators. 
The prototype was tested in the field, a calculation method for the packing uniform distribution coefficient was proposed to evaluate the uniformity of fruit packing, and the performance of the prototype was comprehensively evaluated in combination with the fruit damage rate and packing speed. The results showed that the designed self-propelled orchard multi-station harvesting equipment could keep pace with the manual harvesting speed of the six stations. At the same time, with the help of the extension platforms, the apple picking range covered the entire canopy of the fruit tree. The prototype worked smoothly, the speed of each conveyor belt coordinated well with manual picking, and no apple congestion occurred. The apple harvest damage rate was 4.67%, the packing uniform distribution coefficient was 1.475, and the packing speed was 72.9 apples per minute, which could meet the requirements of orchard harvesting operations.
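The abstract names a packing uniform distribution coefficient but does not define it. The sketch below uses a hypothetical max-to-mean definition purely to illustrate how such a coefficient could be computed from per-zone apple counts in a box; all names and numbers are invented, not the paper's method or data.

```python
import statistics

def uniformity_coefficient(zone_counts):
    # Hypothetical definition (NOT the paper's): maximum per-zone apple count
    # divided by the mean count. 1.0 means perfectly even packing; larger
    # values mean more clustering in some zones.
    return max(zone_counts) / statistics.fmean(zone_counts)

def damage_rate(damaged, total):
    # Percentage of damaged apples in a harvested sample.
    return 100.0 * damaged / total
```

For example, counts of [10, 12, 8, 10] apples across four box zones give a coefficient of 1.2 under this toy definition, and 7 damaged apples out of 200 give a 3.5% damage rate.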

    Reference | Related Articles | Metrics | Comments0
    Spectroscopic Detection of Rice Leaf Blast Infection at Different Leaf Positions at The Early Stages With Solar-Induced Chlorophyll Fluorescence
    CHENG Yuxin, XUE Bowen, KONG Yuanyuan, YAO Dongliang, TIAN Long, WANG Xue, YAO Xia, ZHU Yan, CAO Weixing, CHENG Tao
    Smart Agriculture    2023, 5 (3): 35-48.   DOI: 10.12133/j.smartag.SA202309008
    Abstract549)   HTML46)    PDF(pc) (5433KB)(613)       Save

[Objective] Rice blast is considered the most destructive disease threatening global rice production and causes severe economic losses worldwide. Early detection of rice blast plays an important role in resistance breeding and plant protection. At present, most studies on rice blast detection have been devoted to its symptomatic stage, while no previous study has used solar-induced chlorophyll fluorescence (SIF) to monitor rice leaf blast (RLB) at early stages. This research investigated the early identification of RLB-infected leaves based on solar-induced chlorophyll fluorescence at different leaf positions. [Methods] Greenhouse experiments and field trials were conducted separately in Nanjing and Nantong in July and August, 2021, to record SIF data of the top 1st to 4th leaves of rice plants at the jointing and heading stages with an Analytical Spectral Devices (ASD) spectrometer coupled with a FluoWat leaf clip and a halogen lamp. At the same time, the disease severity levels of the measured samples were manually recorded according to the GB/T 15790-2009 standard. After continuous wavelet transform (CWT) of the SIF spectra, separability assessment and feature selection were applied. Wavelet features sensitive to RLB were extracted, and the sensitive features and their accuracy in identifying infected leaves were compared across leaf positions. Finally, RLB identification models were constructed based on linear discriminant analysis (LDA). [Results and Discussion] The results showed that the upward and downward SIF in the far-red region of infected leaves at each leaf position were significantly higher than those of healthy leaves. This may be because infection by the fungal pathogen Magnaporthe oryzae destroyed the chloroplast structure and ultimately inhibited the primary reaction of photosynthesis. 
In addition, both the upward and downward SIF in the red and far-red regions increased with decreasing leaf position. The sensitive wavelet features varied by leaf position, and most of them were distributed on the steep slope of the SIF spectrum and at wavelet scales 3, 4 and 5. The sensitive features of the top 1st leaf were mainly located at 665-680 nm, 755-790 nm and 815-830 nm. For the top 2nd leaf, the sensitive features were mainly found at 665-680 nm and 815-830 nm. For the top 3rd leaf, most of the sensitive features lay at 690 nm, 755-790 nm and 815-830 nm. The sensitive features of the top 4th leaf were primarily located at 665-680 nm, 725 nm and 815-830 nm. The wavelet features of the common sensitive region (665-680 nm) not only had physiological significance, but also coincided with the chlorophyll absorption peak, allowing reasonable spectral interpretation. There were differences in the accuracy of the RLB identification models at different leaf positions. Based on the upward and downward SIF, the overall accuracies for the top 1st leaf were 70% and 71%, respectively, higher than those of the other leaf positions. As a result, the top 1st leaf was an ideal indicator leaf for diagnosing RLB in the field. The classification accuracy of the SIF wavelet features was higher than that of the original SIF bands. Based on CWT and feature selection, the overall accuracies of the upward and downward optimal features of the top 1st to 4th leaves reached 70.13%, 63.70%, 64.63%, 64.53% and 70.90%, 63.12%, 62.00%, 64.02%, respectively, all higher than those of the canopy monitoring feature F760, which were 69.79%, 61.31%, 54.41%, 61.33% and 69.99%, 58.79%, 54.62%, 60.92%, respectively. This may be caused by differences in the physiological states of the top four leaves. 
In addition to RLB infection, the SIF data of some top 3rd and top 4th leaves may also have been affected by leaf senescence, while the SIF data of the top 1st leaf, the most recently unfolded leaf of the rice plant, were less affected by other physical and chemical parameters. This may explain why the top 1st leaf responded to RLB earlier than the other leaves. The results also showed that the common sensitive features of the four leaf positions were concentrated on the steep slope of the SIF spectrum, with better classification performance around 675 and 815 nm. The classification accuracies of the optimal common features, ↑WF832,3 and ↓WF809,3, reached 69.45%, 62.19%, 60.35%, 63.00% and 69.98%, 62.78%, 60.51%, 61.30% for the top 1st to top 4th leaf positions, respectively. The optimal common features, ↑WF832,3 and ↓WF809,3, were both located at wavelet scale 3 and 800-840 nm, which may be related to the destruction of the cell structure in response to Magnaporthe oryzae infection. [Conclusions] In this study, the SIF spectral response to RLB was revealed, and the identification models of the top 1st leaf were found to be the most precise among the top four leaves. In addition, the common wavelet features sensitive to RLB, ↑WF832,3 and ↓WF809,3, were extracted with an identification accuracy of about 70%. The results proved the potential of CWT and SIF for RLB detection, providing an important reference and technical support for the early, rapid and non-destructive diagnosis of RLB in the field.
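A wavelet feature such as WF832,3 is a single CWT coefficient of the SIF spectrum at a given band and scale. The sketch below computes one such coefficient on toy spectra with a Mexican-hat mother wavelet; the wavelet actually used in the paper is not specified in the abstract, and the spectra here are synthetic.

```python
import math

def mexican_hat(t):
    # Ricker ("Mexican hat") mother wavelet.
    return (1 - t * t) * math.exp(-t * t / 2)

def cwt_coeff(spectrum, scale, position):
    # One CWT coefficient WF(position, scale): correlate the sampled spectrum
    # with a wavelet shifted to `position` and stretched by `scale`.
    return sum(v * mexican_hat((i - position) / scale)
               for i, v in enumerate(spectrum)) / math.sqrt(scale)

# Toy far-red SIF peak; the infected leaf emits more far-red SIF (as observed
# in the abstract), so its wavelet feature at the peak is larger.
healthy  = [math.exp(-(i - 40) ** 2 / 50) for i in range(80)]
infected = [1.6 * v for v in healthy]

f_h = cwt_coeff(healthy, 4, 40)
f_i = cwt_coeff(infected, 4, 40)
```

A one-dimensional LDA built on such a feature reduces to thresholding it between the two class means, which is the role the selected wavelet features play in the paper's identification models.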

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Accurate Extraction of Apple Orchard on the Loess Plateau Based on Improved Linknet Network
    ZHANG Zhibo, ZHAO Xining, GAO Xiaodong, ZHANG Li, YANG Menghao
    Smart Agriculture    2022, 4 (3): 95-107.   DOI: 10.12133/j.smartag.SA202206001
    Abstract538)   HTML25)    PDF(pc) (2040KB)(833)       Save

The rapid increase in apple planting area on the Loess Plateau has exerted an important influence on regional eco-hydrology and socio-economic development. However, the orchards in this area are small and complex, and only county- or city-scale statistical data are available, lacking actual spatial distribution information. To this end, for the extraction of apple orchards on the Loess Plateau, a professional dataset of low-altitude remote sensing images acquired by unmanned aerial vehicle was first established in this study. The R_34_Linknet network and five commonly used deep learning semantic segmentation models, SegNet, FCN_8s, DeeplabV3+, UNet and Linknet, were applied to the spatial distribution extraction of apple orchards on the Loess Plateau. The best-performing model was R_34_Linknet, with an F1 score of 87.1%, a pixel accuracy (PA) of 92.3%, a mean intersection over union (MIoU) of 81.2%, a frequency weighted intersection over union (FWIoU) of 85.7%, and a mean pixel accuracy (MPA) of 89.6%. The spatial pyramid pooling (ASPP) structure was then combined with the R_34_Linknet network to expand its receptive field, yielding a R_34_Linknet_ASPP network; the ASPP structure was further improved to obtain a R_34_Linknet_ASPP+ network. The performance of the three networks was compared, and R_34_Linknet_ASPP+ performed best, with 86.3% for F1, 94.7% for PA, 82.7% for MIoU, 89.0% for FWIoU, and 92.3% for MPA on the test set. The accuracy of apple orchard extraction using R_34_Linknet_ASPP+ in Wangdonggou, Changwu County and Tongji Village, Baishui County was 94.22% and 95.66%, respectively. In Wangdonggou, this was 1.21% and 0.58% higher than R_34_Linknet and R_34_Linknet_ASPP, respectively. 
In Tongji village, it was 1.70% and 0.90% higher than R_34_Linknet and R_34_Linknet_ASPP, respectively. The results show that the proposed R_34_Linknet_ASPP+ method can extract apple orchards accurately, the edge treatment of apple orchard plots is better, the method can be used as the technical support and theoretical basis for research on the spatial distribution mapping of apple orchards on the Loess Plateau.
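The segmentation accuracy figures reported above (PA, MPA, MIoU, FWIoU) can all be derived from a single class confusion matrix. As a minimal numpy sketch — using a hypothetical two-class orchard/non-orchard mask, not the paper's data — the relationships are:

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, n_classes):
    """Compute PA, MPA, MIoU and FWIoU from flattened label arrays."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    gt_per_class = cm.sum(axis=1)     # ground-truth pixels per class
    pred_per_class = cm.sum(axis=0)   # predicted pixels per class
    pa = tp.sum() / cm.sum()                                  # pixel accuracy
    mpa = np.mean(tp / np.maximum(gt_per_class, 1))           # mean pixel accuracy
    iou = tp / np.maximum(gt_per_class + pred_per_class - tp, 1)
    miou = iou.mean()                                         # mean IoU
    freq = gt_per_class / cm.sum()
    fwiou = (freq * iou).sum()                                # frequency-weighted IoU
    return pa, mpa, miou, fwiou

# toy example: binary orchard (1) / non-orchard (0) pixels
y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
pa, mpa, miou, fwiou = segmentation_metrics(y_true, y_pred, 2)
```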

    Reference | Related Articles | Metrics | Comments0
    Development of Mobile Orchard Local Grading System of Apple Internal Quality
    LI Yang, PENG Yankun, LYU Decai, LI Yongyu, LIU Le, ZHU Yujie
    Smart Agriculture    2022, 4 (3): 132-142.   DOI: 10.12133/j.smartag.SA202206012
    Abstract535)   HTML49)    PDF(pc) (1496KB)(842)       Save

    Detecting and grading the internal quality of apples is an effective means to increase the added value of apples, protect the health of residents, meet consumer demand and improve market competitiveness. Therefore, an apple internal quality detecting module and a grading module were developed in this research to constitute a movable orchard-origin grading system for apple internal quality, which could detect apple sugar content and moldy core at the orchard origin and grade the fruit according to a set grading standard. Based on this system, a multiplicative effect elimination (MEE) based spectral correction method was proposed to eliminate the multiplicative effect caused by differences in the physical properties of apples and improve internal quality detection accuracy. The method assumed that the multiplicative coefficient of a spectrum was closely related to the spectral data at a certain wavelength, and divided the original spectrum by the data at that wavelength point to eliminate the multiplicative scattering effect; a least-squares loss function was set to solve for the optimal multiplicative coefficient point. To verify the validity of the method, after pre-processing the apple spectra with the multiplicative scatter correction (MSC), standard normal variate transform (SNV), and MEE algorithms, partial least squares regression (PLSR) prediction models for apple sugar content and partial least squares-discriminant analysis (PLS-DA) models for apple moldy core were developed, respectively. The results showed that the MEE algorithm performed best compared with the MSC and SNV algorithms. The correlation coefficient of the correction set (Rc), root mean square error of the correction set (RMSEC), correlation coefficient of the prediction set (Rp), and root mean square error of the prediction set (RMSEP) for sugar content were 0.959, 0.430%, 0.929, and 0.592%, respectively; the sensitivity, specificity, and accuracy for moldy core were 98.33%, 96.67%, and 97.50% on the correction set, and 100.00%, 90.00%, and 95.00% on the prediction set. The best prediction model was imported into the system for grading tests, and the results showed that the grading accuracy of the system was 90.00% at a grading speed of 3 pcs/s. In summary, the proposed spectral correction method is well suited to apple transmission spectra. The mobile orchard local grading system of apple internal quality, combined with the proposed spectral correction method, can accurately detect apple sugar content and moldy core, and meets the demand for internal quality detecting and grading of apples in orchard production areas.

    Reference | Related Articles | Metrics | Comments0
    Wheat Lodging Area Recognition Method Based on Different Resolution UAV Multispectral Remote Sensing Images
    WEI Yongkang, YANG Tiancong, DING Xinyao, GAO Yuezhi, YUAN Xinru, HE Li, WANG Yonghua, DUAN Jianzhao, FENG Wei
    Smart Agriculture    2023, 5 (2): 56-67.   DOI: 10.12133/j.smartag.SA202304014
    Abstract518)   HTML73)    PDF(pc) (4042KB)(2746)       Save

    [Objective] To quickly and accurately assess crop lodging disasters, it is necessary to promptly obtain information such as the location and area of lodging occurrences. Currently, there are no corresponding technical standards for identifying crop lodging based on UAV remote sensing, which hinders standardization of UAV data acquisition and of the solutions built on it. This study aims to explore the impact of remote sensing images of different spatial resolutions, and of feature optimization methods, on the accuracy of identifying wheat lodging areas. [Methods] Digital orthophoto maps (DOM) and digital surface models (DSM) were collected by UAVs with high-resolution sensors at different flight altitudes after wheat lodging. The spatial resolutions of these image data were 1.05, 2.09, and 3.26 cm. A full feature set was constructed by extracting 5 spectral features, 2 height features, 5 vegetation indices, and 40 texture features from the pre-processed data. Three feature selection methods, the ReliefF, RF-RFE, and Boruta-Shap algorithms, were then used to construct optimized feature subsets at each flight altitude and to select the best feature selection method. The ReliefF algorithm retained features with weights greater than a threshold of 0.2; the RF-RFE algorithm quantitatively evaluated the importance of each feature and introduced variables in descending order of importance to determine classification accuracy; the Boruta-Shap algorithm screened the full feature set and labeled a feature as important for model construction when its importance score was higher than that of its shadow feature. Based on these feature subsets, object-oriented classification of the remote sensing images was conducted using eCognition 9.0 software. First, after several experiments, the parameters for multi-scale segmentation in the object-oriented classification were determined: a segmentation scale of 1, a shape factor of 0.1, and a compactness of 0.5. Three object-oriented supervised classification algorithms, support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN), were selected to construct wheat lodging classification models, and the overall classification accuracy and Kappa coefficient were used to evaluate the accuracy of wheat lodging identification. By constructing a wheat lodging classification model, the appropriate classification strategy was clarified and a technical path for lodging classification was established. This technical path can be used for wheat lodging monitoring, providing a scientific basis for agricultural production and improving agricultural production efficiency. [Results and Discussions] The results showed that raising the UAV altitude to 90 m significantly improved the efficiency of surveying wheat lodging areas: compared with flying at 30 m over the same monitoring range, data acquisition time was reduced to approximately one sixth, and the number of photos needed decreased from 62 to 6. In terms of classification accuracy, the overall classification effect of SVM was better than that of RF and KNN. Additionally, as the image spatial resolution varied from 1.05 to 3.26 cm, the full feature set and all three optimized feature subsets achieved their highest classification accuracy at 1.05 cm, better than at 2.09 and 3.26 cm. As the image spatial resolution decreased, the overall classification effect gradually deteriorated and the positioning accuracy decreased, resulting in poor spatial consistency of the classification results. Further analysis found that the Boruta-Shap feature selection method could reduce data dimensionality and improve computational speed while maintaining high classification accuracy. Among the three tested spatial resolutions (1.05, 2.09, and 3.26 cm), the combination of the SVM and Boruta-Shap algorithms gave the highest overall classification accuracy, with accuracy rates of 95.6%, 94.6%, and 93.9%, respectively, highlighting its superior performance in classifying the data accurately and adapting to changes in spatial resolution. At a resolution of 3.26 cm, the overall classification accuracy decreased by 1.81% and 0.75% compared to 1.05 and 2.09 cm; at 2.09 cm, it decreased by 1.06% compared to 1.05 cm, showing relatively small differences in classification accuracy under different flight altitudes. The overall classification accuracy at an altitude of 90 m reached 95.6%, with a Kappa coefficient of 0.914, meeting the requirements for classification accuracy. [Conclusions] The study shows that the object-oriented SVM classifier and the Boruta-Shap feature optimization algorithm generalize well for identifying lodging areas in remote sensing images across multiple flight altitudes. These methods can achieve high-precision crop lodging area identification and reduce the influence of image spatial resolution on model stability, which helps to increase flight altitude, expand the monitoring range, improve UAV operation efficiency, and reduce flight costs. In practical applications, a balance can be struck between classification accuracy and efficiency based on specific requirements and the actual scenario, providing guidance and support for developing strategies for acquiring crop lodging information and evaluating wheat disasters.
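The overall classification accuracy and Kappa coefficient used above to evaluate lodging identification can both be computed from a confusion matrix. A short numpy sketch, with a hypothetical two-class (lodged / non-lodged) matrix rather than the study's actual counts:

```python
import numpy as np

def overall_accuracy_kappa(cm):
    """Overall accuracy and Cohen's Kappa from a confusion matrix
    (rows = reference classes, columns = predicted classes)."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total ** 2   # chance agreement
    kappa = (po - pe) / (1 - pe)
    return po, kappa

# hypothetical lodged / non-lodged confusion matrix
cm = [[90, 10],
      [5, 95]]
oa, kappa = overall_accuracy_kappa(cm)
```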

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    A Rapid Detection Method for Wheat Seedling Leaf Number in Complex Field Scenarios Based on Improved YOLOv8
    HOU Yiting, RAO Yuan, SONG He, NIE Zhenjun, WANG Tan, HE Haoxu
    Smart Agriculture    2024, 6 (4): 128-137.   DOI: 10.12133/j.smartag.SA202403019
    Abstract513)   HTML97)    PDF(pc) (2913KB)(844)       Save

    [Objective] The enumeration of wheat leaves is an essential indicator for evaluating the vegetative state of wheat and predicting its yield potential. Currently, wheat leaf counting in field settings is predominantly manual, and is both time-consuming and labor-intensive. Despite advancements, the efficiency and accuracy of existing automated detection and counting methodologies have yet to satisfy the stringent demands of practical agricultural applications. This study aims to develop a method for the rapid quantification of wheat leaves by refining the precision of wheat leaf tip detection. [Methods] To enhance the accuracy of wheat leaf detection, an image dataset of wheat leaves across various developmental stages—seedling, tillering, and overwintering—was first constructed under two distinct lighting conditions, using visible light images sourced from both mobile devices and field camera equipment. Considering the robust feature extraction and multi-scale feature fusion capabilities of the YOLOv8 network, the foundational architecture of the proposed model was based on YOLOv8, into which a coordinate attention mechanism was integrated. To expedite the model's convergence, the loss functions were optimized. Furthermore, a dedicated small object detection layer was introduced to refine the recognition of wheat leaf tips, which are typically difficult for conventional models to discern due to their small size and resemblance to background elements. The resulting network, named YOLOv8-CSD and tailored for the recognition of small targets such as wheat leaf tips, ascertains the leaf count by detecting the number of leaf tips present within the image. A comparative analysis was conducted between the YOLOv8-CSD model, the original YOLOv8, and six other prominent network architectures, including Faster R-CNN, Mask R-CNN, YOLOv7, and SSD, within a uniform training framework, to evaluate the model's effectiveness. In parallel, the performance of both the original and YOLOv8-CSD models was assessed under challenging conditions, such as the presence of weeds, occlusions, and fluctuating lighting, to emulate complex real-world scenarios. Ultimately, the YOLOv8-CSD model was deployed for wheat leaf number detection in intricate field conditions to confirm its practical applicability and generalization potential. [Results and Discussions] The proposed method achieved a recognition precision of 91.6% and an mAP0.5 of 85.1% for wheat leaf tips, indicative of its robust detection capabilities. The method excelled in adaptability within complex field environments, featuring an autonomous adjustment mechanism for different lighting conditions that significantly enhanced the model's robustness. The minimal rate of missed detections when counting wheat seedlings' leaves underscored the method's suitability for wheat leaf tip recognition in intricate field scenarios, consequently elevating the precision of wheat leaf number detection. The algorithm demonstrated a heightened capacity to discern and focus on the unique features of wheat leaf tips during detection, a capability essential for overcoming challenges such as small target sizes, similar background textures, and the intricacies of feature extraction. The model's consistent performance across diverse conditions, including scenarios with weeds, occlusions, and fluctuating lighting, further substantiated its robustness and readiness for real-world application.
[Conclusions] This research offers a valuable reference for accurately detecting wheat leaf numbers in intricate field conditions, as well as robust technical support for the comprehensive and high-quality assessment of wheat growth.
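Counting leaves by counting detected leaf tips, as YOLOv8-CSD does, amounts to keeping the confident, non-duplicate tip detections. A simplified sketch of that post-processing step — greedy non-maximum suppression over hypothetical detection boxes; the actual model's decoding pipeline is more involved:

```python
def count_leaf_tips(detections, conf_thresh=0.5, iou_thresh=0.5):
    """Count leaves as the number of confident, non-overlapping tip boxes.

    detections: list of (x1, y1, x2, y2, confidence) tuples.
    Greedy non-maximum suppression merges duplicate detections of one tip.
    """
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        union = area(a) + area(b) - inter
        return inter / union if union else 0.0

    boxes = sorted((d for d in detections if d[4] >= conf_thresh),
                   key=lambda d: d[4], reverse=True)
    kept = []
    for box in boxes:
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return len(kept)

# two detections of the same tip, one distinct tip, one low-confidence box
dets = [(0, 0, 10, 10, 0.9), (1, 1, 11, 11, 0.8),
        (50, 50, 60, 60, 0.85), (100, 100, 110, 110, 0.3)]
```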

    Table and Figures | Reference | Related Articles | Metrics | Comments0
    Automatic Spraying Technology and Facilities for Pipeline Spraying in Mountainous Orchards
    SONG Shuran, HU Shengyang, SUN Daozong, DAI Qiufang, XUE Xiuyun, XIE Jiaxing, LI Zhen
    Smart Agriculture    2022, 4 (3): 86-94.   DOI: 10.12133/j.smartag.SA202205005
    Abstract512)   HTML42)    PDF(pc) (1218KB)(1734)       Save

    Orchards in mountainous areas are rugged and steep, with no roads for large plant protection machinery, so it is difficult for mobile spraying machinery to enter. To solve this problem, automatic pipeline spraying technology and facilities were studied. A pipeline automatic spraying facility suitable for mountainous orchards was designed, which included a spraying head, field spraying pipeline, automatic spraying controller and spraying groups. The spraying head was composed of a spraying unit and a constant-pressure control system, which pressurized the pesticide liquid and stabilized the liquid pressure at a preset value to ensure a good atomization effect. The field spraying pipeline consisted of the main pipeline, valves and spraying groups. To perform automatic spraying, a solenoid valve was installed between the main pipeline and each spraying group, and the spraying operation of each group was controlled automatically by opening or closing its solenoid valve. An automatic spraying controller composed of a main controller, solenoid valve driving circuits, solenoid valve controlling nodes and a power supply unit was developed, and the controlling software was also programmed in this research. The main controller had two working modes, manual and automatic. Each solenoid valve controlling node exchanged wireless signals with the main controller and opened or closed the corresponding solenoid valve according to the received control signal. During the spraying operation, the pesticide liquid flowed into the orchard from the spray head through the pipeline, and the automatic spray controller opened or closed the spray groups one by one under either manual or automatic control. To determine the continuous opening time of the solenoid valve, a spray effectiveness test was carried out. The test results showed that spraying effectiveness could be guaranteed by opening the solenoid valve for 8 s continuously. The efficiency of this pipeline automatic spraying facility was 2.61 hm2/h, which was 45-150 times that of manual spraying and 2.1 times that of unmanned aerial vehicle spraying. The automatic pipeline spraying technology had obvious advantages in the timeliness of pest control in mountainous orchards. This research can provide references and ideas for the development of spraying technology and intelligent spraying facilities in mountainous orchards.
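The group-by-group valve sequencing described above can be sketched as a simple controller loop. The `actuate` callback and the stubbed-out `sleep` used for the dry run are illustrative assumptions, not the system's actual driver interface; the 8 s dwell time is taken from the spray test reported here:

```python
import time

def run_spray_cycle(valve_ids, open_time_s=8, actuate=None, sleep=time.sleep):
    """Open each spray-group solenoid valve in turn for a fixed dwell time.

    actuate(valve_id, state) is a hypothetical hardware callback; pass a
    stub for sleep to dry-run the sequence without real delays.
    """
    log = []
    for vid in valve_ids:
        if actuate:
            actuate(vid, True)    # open this spray group's solenoid valve
        log.append((vid, "open"))
        sleep(open_time_s)        # hold the group open for the dwell time
        if actuate:
            actuate(vid, False)   # close it before moving to the next group
        log.append((vid, "close"))
    return log

# dry run over three spray groups with the delay stubbed out
events = run_spray_cycle([1, 2, 3], open_time_s=8, sleep=lambda s: None)
```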

    Reference | Related Articles | Metrics | Comments0