
Smart Agriculture


A Low-rank Adaptation Method for Fine-tuning Plant Disease Recognition Models

HUANG Jinqing1,2, YE Jin1,2, HU Huilin1,2, YANG Jihui3, LAN Wei1,2, ZHANG Yanqing1,2

  1. School of Computer, Electronics and Information, Guangxi University, Nanning 530000, China
    2. State Key Laboratory for Conservation and Utilization of Subtropical Agrobiological Resources, Nanning 530000, China
    3. Jiejia Run Technology Group Co., Ltd., Nanning 530000, China
  • Received:2025-04-02 Online:2025-11-28
  • Foundation items:National Natural Science Foundation of China(32402495); 2024 Autonomous Region-Level Student Innovation Training Program Project(202410593001S)
  • About author:

    HUANG Jinqing, E-mail:

  • Corresponding author:
    YE Jin, E-mail:

Abstract:

[Objective] When deep learning is applied to plant disease recognition tasks, model fine-tuning faces significant challenges, including limited computational resources and high parameter update overhead. Although traditional Low-Rank Adaptation (LoRA) methods effectively reduce parameter overhead, their strategy of assigning a uniform, fixed rank to all layers often overlooks the varying importance of different layers. This approach may still lead to constrained optimization in critical layers or resource waste in less significant ones. To address this limitation, a dynamic rank allocation (DRA) algorithm is proposed. The DRA algorithm is designed to evaluate and adjust the parameter resources required by each layer during training, aiming to enhance the accuracy of plant disease classification models while balancing computational resources more efficiently. [Methods] The experiments utilized two public datasets, the Wheat Plant Diseases Dataset and the Plants Disease Dataset. The Wheat Plant Diseases Dataset comprised 13 104 images covering 15 types of wheat diseases, such as black rust and fusarium head blight, as well as healthy plants, while the Plants Disease Dataset included 37 505 images of 26 types of plant diseases, such as algal leaf spot, corn rust, and bacterial spot of tomato. These datasets were captured under varied lighting, different backgrounds, diverse angles, and at various stages of plant growth. A cross-layer feature similarity metric based on Centered Kernel Alignment (CKA) was introduced to quantify the representational correlation between different layers. Concurrently, a correction factor was constructed based on gradient information and activation intensity to measure the direct impact of each layer on the loss function. These two metrics were then fused using a weighted harmonic mean to generate a comprehensive importance score, which was subsequently used for the initial rank allocation.
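The initial rank allocation described above can be sketched as follows. The abstract does not give the exact formulas, so the mapping from CKA similarity to a per-layer "uniqueness" term, the gradient-activation correction factor, the harmonic-mean fusion weights, and the budget-proportional rounding are all illustrative assumptions, not the authors' published implementation.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (samples, features)."""
    X = X - X.mean(axis=0)  # center each feature dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

def importance_scores(acts, grads, w_sim=0.5, w_grad=0.5, eps=1e-8):
    """Fuse a CKA-based uniqueness term with a gradient-activation term
    via a weighted harmonic mean (fusion weights are illustrative)."""
    n = len(acts)
    scores = []
    for i in range(n):
        # Uniqueness: 1 minus the mean CKA similarity to all other layers,
        # so layers with redundant representations score lower.
        sims = [linear_cka(acts[i], acts[j]) for j in range(n) if j != i]
        uniq = 1.0 - float(np.mean(sims))
        # Correction factor: mean |gradient| times mean |activation|,
        # a proxy for the layer's direct influence on the loss.
        corr = float(np.abs(grads[i]).mean() * np.abs(acts[i]).mean())
        # Weighted harmonic mean of the two terms.
        scores.append((w_sim + w_grad) / (w_sim / (uniq + eps) + w_grad / (corr + eps)))
    return np.array(scores)

def allocate_ranks(scores, budget, r_min=1):
    """Distribute a total rank budget across layers proportionally to importance."""
    raw = scores / scores.sum() * budget
    return np.maximum(np.floor(raw).astype(int), r_min)
```

In use, one would collect per-layer activations and gradients on a small calibration batch, compute the scores once, and instantiate each layer's LoRA adapter with its allocated rank.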
Furthermore, considering how feature representations change during training, a stability-triggered adaptive rank update strategy, Rank Re-Allocation (RRA), was proposed. This strategy monitored the average parameter change of the low-rank adapters during training to determine the convergence state. When this change fell below a specific threshold, the low-rank matrices were merged into the original weights, and the rank allocation table was re-calculated and updated. This process ensured that more resources were allocated to critical layers, thereby achieving an optimized allocation of parameter resources across layers. [Results and Discussions] Tests on four models (AlexNet, MobileNetV2, RegNetY, and ConvNeXt) indicated that, compared to full-parameter fine-tuning, the proposed method reduced resource consumption to 0.42%, 2.46%, 3.56%, and 1.25%, respectively, while maintaining a comparable average accuracy. The RRA strategy demonstrated continuous parameter optimization throughout training: on the ConvNeXt model, the trainable parameters on the Plants Disease Dataset were progressively reduced from 18.34 to 9.26 MB, a reduction of nearly 50%. In comparison with the standard LoRA method (R=16), the method reduced accuracy by 0.38, 0.40, and 0.05 percentage points on the Wheat Plant Diseases Dataset for AlexNet, MobileNetV2, and RegNetY, respectively, while resource consumption was reduced by 59.3%, 87.4%, and 50.5%. Robustness was tested by applying perturbations to the test set, including Gaussian noise, random cropping, color jitter, and random rotation. The results showed that the model was most affected by color jitter and random rotation on the Plants Disease Dataset, with accuracy dropping by 6.02 and 5.11 percentage points, respectively.
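The RRA trigger-and-merge step can be sketched as below. The convergence threshold, the LoRA scaling factor (here alpha/r, the common LoRA convention), and the idea of comparing adapter snapshots between checkpoints are assumptions for illustration; the abstract only states that the merge fires when the average adapter parameter change falls below a threshold.

```python
import numpy as np

def merge_lora(W, A, B, alpha=16.0):
    """Fold a converged low-rank adapter into the frozen base weight:
    W' = W + (alpha / r) * B @ A, where r is the adapter rank
    (A has shape (r, in_features), B has shape (out_features, r))."""
    r = A.shape[0]
    return W + (alpha / r) * (B @ A)

def stability_triggered(prev_params, curr_params, threshold=1e-4):
    """Fire the re-allocation when the mean absolute change of the
    adapter parameters between two checkpoints drops below the threshold."""
    deltas = [np.abs(c - p).mean() for p, c in zip(prev_params, curr_params)]
    return float(np.mean(deltas)) < threshold
```

When the trigger fires, each adapter would be merged into its base weight with `merge_lora`, the importance scores and rank table recomputed, and fresh adapters re-initialized at the new ranks before training resumes.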
On the Wheat Plant Diseases Dataset, the model was more sensitive to random cropping and random rotation, with accuracy decreasing by 4.33 and 4.40 percentage points, respectively; the overall performance degradation remained within an acceptable range. Compared to other advanced low-rank methods such as AdaLoRA and DyLoRA under the same parameter budget, the DRA method exhibited higher accuracy. On the RegNetY model, the DRA method achieved an accuracy of 90.96% on the Plants Disease Dataset, which was 0.55 percentage points higher than AdaLoRA and 0.94 percentage points higher than DyLoRA. In terms of training efficiency on the Plants Disease Dataset, the DRA method required 43.5 minutes to reach its peak validation accuracy of 89.84%, whereas AdaLoRA required 52.3 minutes, approximately 20.23% more training time. Regarding inference flexibility, the DyLoRA method was designed to produce, from a single training run, a universal model adaptable to multiple rank configurations, allowing dynamic rank switching during inference based on hardware or latency requirements. The DRA method did not possess this inference-time flexibility: it focused on converging to a single, high-performance rank configuration for a specific task during the training phase. [Conclusions] The low-rank adaptive fine-tuning method proposed in this paper significantly reduced the number of trainable parameters while preserving plant disease recognition accuracy. Compared to traditional fixed-rank LoRA and other advanced low-rank optimization methods, it demonstrated distinct advantages, providing an effective pathway for efficient model deployment on resource-constrained devices.

Key words: low-rank adaptive fine-tuning, feature similarity, rank allocation algorithm, rank allocation update strategy

CLC Number: