
Smart Agriculture ›› 2026, Vol. 8 ›› Issue (1): 40-51. doi: 10.12133/j.smartag.SA202504003

• Topic: Intelligent Recognition and Diagnosis of Agricultural Diseases and Pests •


Low-rank Adaptation Method for Fine-tuning Plant Disease Recognition Models

HUANG Jinqing1,2, YE Jin1,2(), HU Huilin1,2, YANG Jihui3, LAN Wei1,2, ZHANG Yanqing1,2   

1. School of Computer, Electronics and Information, Guangxi University, Nanning 530000, China
    2. State Key Laboratory for Conservation and Utilization of Subtropical Agrobiological Resources, Nanning 530000, China
    3. JJR Science and Technology Group Co., Ltd., Nanning 530000, China
  • Received: 2025-04-02 Online: 2026-01-30
  • Foundation items: National Natural Science Foundation of China (32402495); 2024 Autonomous Region-Level Student Innovation Training Program (202410593001S)
  • About author:

    HUANG Jinqing, research interest: large-model fine-tuning. E-mail:

  • Corresponding author:
    YE Jin, Ph.D., Professor, research interests: smart agriculture and intelligent computing. E-mail:


Abstract:

[Objective] When deep learning is applied to plant disease recognition, model fine-tuning faces significant challenges, including limited computational resources and high parameter-update overhead. Although traditional low-rank adaptation (LoRA) methods effectively reduce parameter overhead, their strategy of assigning a uniform, fixed rank to all layers overlooks the varying importance of different layers. This may still lead to constrained optimization in critical layers or wasted resources in less significant ones. To address this limitation, a dynamic rank allocation (DRA) algorithm is proposed in this research. The DRA algorithm evaluates and adjusts the parameter resources required by each layer during training, enhancing the accuracy of plant disease classification models while balancing computational resources more efficiently. [Methods] Two public datasets, the Wheat Plant Diseases Dataset and the Plants Disease Dataset, were used in the experiments. The Wheat Plant Diseases Dataset comprised 13 104 images covering 15 types of wheat diseases such as black rust and fusarium head blight, while the Plants Disease Dataset included 37 505 images of 26 types of plant diseases such as algal leaf spot, corn rust, and bacterial spot of tomato. These images were captured under varied lighting, backgrounds, and angles, and at various stages of plant growth. A cross-layer feature similarity metric based on centered kernel alignment (CKA) was introduced to quantify the representational correlation between different layers. Concurrently, a correction factor was constructed from gradient information and activation intensity to measure each layer's direct impact on the loss function. These two metrics were then fused using a weighted harmonic mean to produce a comprehensive importance score, which was used for the initial rank allocation. 
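The layer-importance computation described above (CKA-based cross-layer similarity fused with a gradient/activation correction factor via a weighted harmonic mean) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the use of (1 − similarity) as the uniqueness term, the definition of the correction factor, and the weighting parameter `beta` are assumptions made for the sketch.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation
    matrices of shape (n_samples, n_features)."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style cross-similarity, normalized by per-matrix norms
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den

def correction_factor(grad, act):
    """Illustrative per-layer correction factor: mean gradient
    magnitude scaled by mean activation magnitude."""
    return float(np.mean(np.abs(grad)) * np.mean(np.abs(act)))

def importance_score(similarity, corr, beta=0.5, eps=1e-12):
    """Weighted harmonic (F-beta-style) mean of representational
    uniqueness (1 - cross-layer similarity) and the correction
    factor; beta balances the two terms (an assumed weighting)."""
    a, b = 1.0 - similarity, corr
    return (1 + beta ** 2) * a * b / (beta ** 2 * a + b + eps)
```

Layers with higher scores would then receive larger ranks when the allocation table is built; the mapping from score to integer rank (e.g., proportional allocation under a fixed parameter budget) is left out of this sketch.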
Furthermore, considering the effect of changing feature representations during training, a stability-triggered adaptive rank re-allocation (RRA) strategy was proposed. This strategy monitored the average parameter change of the low-rank adapters during training to determine the convergence state. When this change fell below a specific threshold, the low-rank matrices were merged into the original weights, and the rank allocation table was re-calculated and updated. This process ensured that more resources were allocated to critical layers, thereby achieving an optimized allocation of parameter resources across layers. [Results and Discussions] Tests on four models (AlexNet, MobileNetV2, RegNetY, and ConvNeXt) indicated that, compared to full-parameter fine-tuning, the proposed method reduced resource consumption to 0.42%, 2.46%, 3.56%, and 1.25%, respectively, while maintaining comparable average accuracy. The RRA strategy optimized parameters continuously throughout training: on the ConvNeXt model, the trainable parameters on the Plants Disease Dataset were progressively reduced from 18.34 M to 9.26 M, a reduction of nearly 50%. In comparison with the standard LoRA method (R=16), the proposed method improved accuracy on the Wheat Plant Diseases Dataset by 0.38, 0.40, and 0.05 percentage points for AlexNet, MobileNetV2, and RegNetY, respectively, while reducing resource consumption by 59.3%, 87.4%, and 50.5%. Robustness was tested by applying perturbations to the test set, including Gaussian noise, random cropping, color jitter, and random rotation. The results showed that the model was most affected by color jitter and random rotation on the Plants Disease Dataset, with accuracy decreasing by 6.02 and 5.11 percentage points, respectively. 
On the Wheat Plant Diseases Dataset, the model was more sensitive to random cropping and random rotation, with accuracy decreasing by 4.33 and 4.40 percentage points, respectively; the overall performance degradation remained within an acceptable range. When compared to other advanced low-rank methods such as AdaLoRA and DyLoRA under the same parameter budget, the DRA method exhibited higher accuracy. On the RegNetY model, the DRA method achieved an accuracy of 90.96% on the Plants Disease Dataset, which was 0.55 percentage points higher than AdaLoRA and 0.94 percentage points higher than DyLoRA. In terms of training efficiency on the Plants Disease Dataset, the DRA method required 43.5 minutes to reach its peak validation accuracy of 89.84%, whereas AdaLoRA required 52.3 minutes, approximately 20.23% longer. Regarding inference flexibility, DyLoRA was designed to produce, after a single training run, a universal model that adapts to multiple rank configurations, allowing dynamic rank switching at inference time based on hardware or latency requirements. The DRA method did not possess this inference-time flexibility; it instead converged during training to a single, high-performance rank configuration for a specific task. [Conclusions] The low-rank adaptive fine-tuning method proposed in this research significantly reduced the number of trainable parameters while maintaining plant disease recognition accuracy. Compared to traditional fixed-rank LoRA and other advanced low-rank optimization methods, it demonstrated distinct advantages, providing an effective pathway for efficient model deployment on resource-constrained devices.
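The stability-triggered RRA update described in the Methods can likewise be sketched. This is a minimal NumPy illustration assuming the standard LoRA parameterization W + B·A; the convergence metric, the threshold value, and the adapter reset after merging are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def adapter_delta(A, B, A_prev, B_prev):
    """Average absolute change of the low-rank factors between two
    checkpoints; used here as a simple proxy for convergence."""
    return 0.5 * (np.mean(np.abs(A - A_prev)) + np.mean(np.abs(B - B_prev)))

def maybe_merge(W, A, B, delta, threshold=1e-4, scale=1.0):
    """If the adapter has stabilised (delta below threshold), fold the
    low-rank update B @ A into the frozen base weight W and reset the
    adapter, signalling that the rank table can be re-allocated.
    Shapes: W (out, in), B (out, r), A (r, in)."""
    if delta < threshold:
        W = W + scale * (B @ A)      # standard LoRA merge into base weights
        A = np.zeros_like(A)         # fresh adapter after the merge
        B = np.zeros_like(B)
        return W, A, B, True         # True: trigger rank re-allocation
    return W, A, B, False
```

In a full training loop, a True return would prompt re-computing the importance scores and rebuilding the rank allocation table, so later training steps spend their parameter budget on the layers that currently matter most.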

Key words: low-rank adaptive fine-tuning, feature similarity, dynamic rank allocation algorithm, rank allocation update strategy

CLC number: