[Objective] In crop cultivation and production, pests have gradually become one of the main factors affecting agricultural yield. Traditional detection models often focus on achieving high accuracy; however, lightweight designs are necessary to facilitate practical application. The targets in yellow sticky trap images are often very small with low pixel resolution, so modifications to the network structure, loss function, and convolution modules must be adapted to the detection of small-object pests, and striking a balance between model lightweighting and small-object pest detection is particularly important. To improve the detection accuracy of small target pests in sticky trap images from multi-source scenarios, a lightweight detection model named MobileNetV4+VN-YOLOv5s was proposed in this research to detect two main small target pests in agricultural production, whiteflies and thrips. [Methods] In the backbone layer of MobileNetV4+VN-YOLOv5s, an EM block constructed with the MobileNetV4 backbone network was introduced for detecting small, high-density, and overlapping targets, making the model suitable for deployment on mobile devices. In addition, the neck layer of MobileNetV4+VN-YOLOv5s incorporated the GSConv and VoV-GSCSP modules to replace regular convolutional modules with a lightweight design, effectively reducing the parameter size of the model while improving detection accuracy. Finally, a normalized Wasserstein distance (NWD) loss function was introduced into the framework to enhance sensitivity to low-resolution small target pests. Extensive experiments were conducted, including state-of-the-art comparisons, ablation evaluations, and performance analyses on image splitting, pest density, and multi-source data. [Results and Discussions] Ablation tests showed that the EM module and the VoV-GSCSP convolution module significantly reduced the parameter size and improved the frame rate of the model, while the NWD loss function significantly improved its mean average precision (mAP). In comparison tests with different loss functions, the NWD loss function improved the mAP by 6.1, 10.8, and 8.2 percentage points compared with the DIoU, GIoU, and EIoU loss functions, respectively, so its addition achieved good results. Comparative performance tests were conducted with different lightweight models; the results showed that the mAP of the proposed MobileNetV4+VN-YOLOv5s model in three scenarios (Indoor, Outdoor, Indoor&Outdoor) was 82.5%, 70.8%, and 74.7%, respectively. In particular, the MobileNetV4+VN-YOLOv5s model had a parameter size of only 4.2 M, 58% of that of the YOLOv5s model, and its frame rate was 153.2 fps, an increase of 6.0 fps over the YOLOv5s model. Moreover, its precision and mAP reached 79.7% and 82.5%, which were 5.6 and 8.4 percentage points higher than those of the YOLOv5s model, respectively. Comparative tests were conducted in the above scenarios based on four splitting ratios: 1×1, 2×2, 5×5, and 10×10. The best result was obtained with the 5×5 ratio in the indoor scenario, where the mAP reached 82.5%. The mAP of the indoor scenario was highest in the low-density case, reaching 83.8%, and the model trained on the dataset from the indoor condition achieved the best performance. Comparative tests with pest data of different densities showed a decreasing trend in mAP from low to high density for the MobileNetV4+VN-YOLOv5s model in all three scenarios.
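For reference, the NWD loss mentioned above is commonly defined as follows. This is a sketch of the standard formulation from the tiny-object detection literature, in which each bounding box (cx, cy, w, h) is modeled as a 2-D Gaussian and the normalizing constant C is dataset-dependent; the exact notation and constant used in this work are assumptions:

```latex
% Sketch of the standard NWD formulation (assumed, not taken from this paper).
% A box (cx, cy, w, h) is modeled as a 2-D Gaussian N(m, Sigma) with
% m = (cx, cy)^T and Sigma = diag(w^2/4, h^2/4).
\[
W_2^2(\mathcal{N}_a, \mathcal{N}_b) =
  \left\lVert
    \Bigl(cx_a,\; cy_a,\; \tfrac{w_a}{2},\; \tfrac{h_a}{2}\Bigr)^{\mathrm{T}}
    - \Bigl(cx_b,\; cy_b,\; \tfrac{w_b}{2},\; \tfrac{h_b}{2}\Bigr)^{\mathrm{T}}
  \right\rVert_2^2
\]
\[
\mathrm{NWD}(\mathcal{N}_a, \mathcal{N}_b) =
  \exp\!\left(-\,\frac{\sqrt{W_2^2(\mathcal{N}_a, \mathcal{N}_b)}}{C}\right),
\qquad
\mathcal{L}_{\mathrm{NWD}} = 1 - \mathrm{NWD}(\mathcal{N}_a, \mathcal{N}_b)
\]
```

Because the exponential maps the Wasserstein distance into a similarity in (0, 1], the resulting loss remains smooth for boxes with little or no overlap, which is why it is better suited to low-resolution small targets than IoU-based losses.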
Based on a comparison of the experimental results of different test sets in different scenarios, all three models achieved their best detection accuracy on the IN dataset. Specifically, the IN-model had the highest mAP at 82.5%, followed by the IO-model. At the same time, the detection performance showed the same trend across all three test datasets: the IN-model performed best, followed by the IO-model, and the OUT-model performed worst. Comparison tests with different improved YOLO models showed that MobileNetV4+VN-YOLOv5s had the highest mAP, EVN-YOLOv8s the second highest, and EVN-YOLOv11s the lowest. In addition, after the model was deployed on a Raspberry Pi 4B board, the detection results of the YOLOv5s model contained more false and missed detections than those of the MobileNetV4+VN-YOLOv5s model, and the inference time of the proposed model was shortened by about 33% compared with that of the YOLOv5s model, demonstrating that the model has good prospects for practical deployment. [Conclusions] The MobileNetV4+VN-YOLOv5s model proposed in this study achieved a balance between lightweight design and accuracy. It can be deployed on embedded devices, facilitating practical applications, and can serve as a reference for detecting small target pests in sticky trap images under various multi-source scenarios.
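As an illustration of how an NWD-based loss of the form above can be computed in practice, the following is a minimal PyTorch sketch. The helper `nwd_loss` is hypothetical (not the authors' implementation), and the default constant `c` is an assumed dataset-dependent value:

```python
import torch


def nwd_loss(pred, target, c=12.8, eps=1e-7):
    """Minimal sketch of an NWD-based bounding-box loss (hypothetical
    helper, not the authors' implementation).

    pred, target: (N, 4) tensors of boxes in (cx, cy, w, h) format.
    c: dataset-dependent normalizing constant (assumed value).
    """
    # Model each box as a 2-D Gaussian; the squared 2-Wasserstein distance
    # between two such Gaussians reduces to the squared L2 distance between
    # the (cx, cy, w/2, h/2) vectors.
    da = torch.cat([pred[:, :2], pred[:, 2:] / 2], dim=-1)
    db = torch.cat([target[:, :2], target[:, 2:] / 2], dim=-1)
    w2_sq = ((da - db) ** 2).sum(dim=-1)

    # Map the distance to a similarity in (0, 1], then turn it into a loss.
    nwd = torch.exp(-torch.sqrt(w2_sq + eps) / c)
    return (1.0 - nwd).mean()
```

In a YOLO-style framework, such a term would typically replace or be blended with the IoU-based box regression loss; the gradient stays informative even when predicted and ground-truth boxes of tiny pests barely overlap.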