Topic: Agricultural Artificial Intelligence and Big Data

Distinguishing Volunteer Corn from Soybean at Seedling Stage Using Images and Machine Learning

  • FLORES Paulo 1
  • ZHANG Zhao 1
  • MATHEW Jithin 2
  • JAHAN Nusrat 1
  • STENGER John 1
  • 1. Department of Agricultural and Biosystems Engineering, North Dakota State University, Fargo, ND 58102, USA
  • 2. Department of Plant Sciences, North Dakota State University, Fargo, ND 58108, USA
Zhao Zhang (1985-), male, Ph.D., research assistant professor, research interests: sensing and automation in agriculture. Tel: 701-231-8403. E-mail: .

Paulo Flores (1979-), male, assistant professor, research interests: precision agriculture, remote sensing. E-mail: .

Received date: 2020-07-01

  Revised date: 2020-08-01

  Online published: 2020-10-12

Supported by

NDSU-AES Project(FARG005348)

Highlights

Volunteer corn in soybean fields is harmful because it disrupts the benefits of corn-soybean rotation. Volunteer corn not only reduces soybean yield by competing for water, nutrients and sunlight, but also interferes with pest control (e.g., corn rootworm). It is therefore critical to monitor volunteer corn in soybean at the crop seedling stage for better management. The current visual monitoring method is subjective and inefficient. Progress in sensing and automation technology provides a potential solution for the automatic detection of volunteer corn in soybean. In this study, corn and soybean were planted in pots in a greenhouse to mimic field conditions. Color images were collected using a low-cost Intel RealSense camera for five successive days after germination. Individual crops were manually cropped from the images and subjected to image segmentation based on a color threshold, coupled with noise removal, to create a dataset. Shape (i.e., area, aspect ratio, rectangularity, circularity, and eccentricity), color (i.e., R, G, B, H, S, V, L, a, b, Y, Cb, and Cr) and texture (coarseness, contrast, linelikeness, and directionality) features of individual crops were extracted. Feature weights were ranked, and the top 12 relevant features were selected for this study. The 12 features were fed into three feature-based machine learning algorithms: support vector machine (SVM), neural network (NN) and random forest (RF). Prediction precision values on the test dataset for SVM, NN and RF were 85.3%, 81.6%, and 82.0%, respectively. The dataset (without feature extraction) was fed into two deep learning algorithms, GoogLeNet and VGG-16, resulting in 96.0% and 96.2% accuracies, respectively. The better-performing models from feature-based machine learning and deep learning were compared. VGG-16 was recommended for distinguishing volunteer corn from soybean due to its higher detection accuracy and smaller standard deviation (STD). This research demonstrated that RGB images, coupled with the VGG-16 algorithm, can be used as a novel, reliable (accuracy >96%), and simple tool to detect volunteer corn in soybean. The research outcome provides critical information for farmers, agronomists, and plant scientists to monitor volunteer corn infestation in soybean for better decision making and management.

Cite this article

FLORES Paulo , ZHANG Zhao , MATHEW Jithin , JAHAN Nusrat , STENGER John . Distinguishing Volunteer Corn from Soybean at Seedling Stage Using Images and Machine Learning[J]. Smart Agriculture, 2020 , 2(3) : 61 -74 . DOI: 10.12133/j.smartag.2020.2.3.202007-SA002

1 Introduction

Crop rotation is a management practice with a myriad of benefits to farmers, and it has been applied worldwide for thousands of years[1,2]. In the United States Midwest region, corn and soybean rotation (CSR) is a common practice that can increase yield[3,4]. Researchers reported that continuous corn growing resulted in a 10% to 15% yield reduction compared to corn after soybean[5]. The same group further reported that soybean planted after corn led to a soybean yield increase (10% to 18%) compared to continuous soybean. Soybeans have nodules on their roots that host bacteria to fix atmospheric nitrogen into the soil; after soybean is harvested, the nitrogen remains in the soil, which benefits the corn planted after soybean[6]. In addition, by rotating the crops grown in a specific location, a majority of pests and diseases that rely on a specific host environment are disturbed[7]. Nematodes negatively affect soybean production but do not feed on corn[8]. By growing corn after soybean, the environment needed by nematodes is disturbed, lowering their population and increasing yield performance. Furthermore, it has been reported that CSR benefits soil tilth because the differing crop root structures aid in soil aeration and provide organic matter to the soil[9].
One challenge to the current CSR approach is volunteer corn in soybean fields. Volunteer corn is one of the most competitive weeds[10]. Ear and kernel losses occurring in the previous year can lead to volunteer infestations[11]. Ears may drop for a number of reasons, such as drought stress and corn borer feeding in the ear shank[12]. Likewise, crop lodging, incomplete threshing of cobs, and improperly adjusted combine settings can leave kernels in the field. The remaining seed can overwinter and germinate in spring together with soybean[13]. Volunteer corn in soybean fields negatively affects the CSR by lowering current-season soybean yield and reducing control of corn rootworm in subsequent corn seasons[13]. Many researchers have demonstrated soybean yield reductions due to volunteer corn resulting from competition for water, nutrients and sunlight[14-16]. If volunteer corn is not removed before the silk stage, it serves as an attractant and refuge for corn rootworm beetles to reproduce, significantly impairing pest control when corn is replanted in the subsequent season[13,17].
Detecting and controlling volunteer corn at an early stage is advantageous. As opposed to end-of-season control, early control limits the negative effect on soybean yield, eliminates corn as a refuge host for corn pests, and lowers chemical input costs because lower rates can control corn at earlier stages[11,18,19]. Development of a simple, easily implemented method of detection at the seedling stage would therefore bring economic benefits to CSR production.
Recent progress in sensing provides tools to extract key information from the field[20-31]. El-Faki et al.[27] used color features, along with a neural network (NN), to distinguish weeds from soybean with an accuracy of 62%. By combining color and shape features and applying a support vector machine (SVM) as the classifier, Wu et al.[32] and Wang et al.[33] obtained accuracies >90% in distinguishing weeds from soybean. Most recently, deep learning algorithms have been tested for distinguishing weeds from soybean crops. Tang et al.[34] applied a convolutional neural network (CNN) to distinguish weeds (i.e., Cephalanoplos, Digitaria, and bindweed) from soybean crops with an accuracy of 93%. Ferreira et al.[35] tested a CNN on weed detection in a soybean field, generating a 99% accuracy rate. Bah et al.[36] tested a residual network (ResNet) on unmanned aerial vehicle (UAV) images for weed detection, resulting in an accuracy of 95%. Yu et al.[37] tested GoogLeNet and the visual geometry group network (VGGNet) on weed detection with an accuracy of about 99%. Lottes et al.[38] tested a fully convolutional network on real-time weed detection, with an accuracy of 95%.
The current visual observation approach to volunteer corn detection and monitoring in soybean fields is labor intensive, inefficient, and subjective. In addition, farmers, agronomists, and plant scientists typically sample locations and use these point results to estimate the extent of the infestation, which limits accuracy. Therefore, a new method is needed for this task. The main objectives of this study were to: (i) collect corn and soybean seedling images in a greenhouse for five successive days after germination and create a dataset after image segmentation and noise removal; (ii) test feature-based machine learning (SVM, NN, and random forest (RF)) and deep learning (GoogLeNet and VGG-16) algorithms in distinguishing volunteer corn from soybean; and (iii) determine and recommend desirable algorithm(s).

2 Materials and methods

The processes followed in this study to distinguish volunteer corn from soybean, such as data collection, image pre-processing (e.g., image segmentation and noise removal), and model comparisons, are summarized in Fig. 1. After pre-processing the images, a dataset was created and further processed with two kinds of machine learning methods: feature-based machine learning and deep learning. For feature-based machine learning, SVM, NN and RF were applied; for deep learning, GoogLeNet and VGG-16 were tested. Model performances were compared, and the most desirable one was recommended.
Fig. 1 Overall process flowchart of distinguishing volunteer corn from soybean with machine learning

2.1 Data collection

The experiment was conducted in the Agricultural Experiment Station Research Greenhouse Complex, North Dakota State University, Fargo, ND, U.S. Corn (Latham LH 4454) and soybean (Asgrow AG 0937) seeds were planted in plastic containers (20 cm×10 cm×10 cm; 20 containers) filled with peat, which maintained proper moisture and fertilizer conditions for seed germination. Soybeans were planted at 8 cm spacing to simulate a field planting pattern, and corn seeds were randomly planted in the pots. Due to constant and suitable temperature conditions in the greenhouse (18 °C to 22 °C), seeds of both crops started to germinate 4 to 6 days after planting. Data collection started on the 7th day after planting and continued for 5 days. The greenhouse was equipped with a translucent roof, and supplemental light from four LED tube lights was applied constantly. Data collection on each of the 5 consecutive days occurred in the morning between 10:00 AM and 11:00 AM. As all five days were sunny and supplemental lighting was used, the illumination over the five days was assumed to be consistent. A RealSense camera (model D435; resolution 1920×1080; FOV 86×57; Intel Corporation, Santa Clara, CA, U.S.) was used for image collection. The camera was mounted at the top of a frame to maintain a consistent distance to the crops. The dimensions of the image collection frame, as well as the sensor mounting location, are described in Fig. 2.
Fig. 2 The image collection frame and a sample image

(a) Image collection frame with dimensions of 60 cm (width) × 60 cm (length) × 92 cm (height)  (b) A sample image collected by the camera

2.2 Dataset preprocessing

After obtaining RGB (red, green, and blue) images for five successive days, individual crops were manually cropped. Considering the large number of background noise sources (e.g., peat, solid fertilizer, and pot edges) in the manually cropped images, the images were segmented to extract the crops from the complex background; without noise removal, model accuracy would be significantly and negatively affected. Many techniques have been used for image segmentation, such as color threshold, texture, edge detection, and graph-based segmentation[23]. In this study, a color threshold was applied because the green crops could be efficiently separated from the non-green background. Collected images were first converted from RGB to YCbCr format. Y, Cb and Cr values were collected from 10 points on a plant and used to establish the color threshold. The obtained threshold was then applied to other crops to automate the image segmentation process. Noise remained in the segmented images; since the noise consisted of small regions of only a few pixels, a hole-filling algorithm that removes small connected objects (< 5 pixels) was applied. Fig. 3 provides an example of the image pre-processing procedure.
Fig. 3 Image processing procedure
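As an illustration, a minimal MATLAB sketch of this segmentation and noise-removal step is given below; the YCbCr threshold values and file name are placeholders, not the thresholds derived from the 10 sampled plant points in this study.

    % Read a manually cropped crop image and convert it to YCbCr
    rgb   = imread('crop_sample.png');        % hypothetical file name
    ycbcr = rgb2ycbcr(rgb);
    Y  = ycbcr(:,:,1);  Cb = ycbcr(:,:,2);  Cr = ycbcr(:,:,3);

    % Color threshold (illustrative values; derive from sampled plant pixels)
    mask = Y > 40 & Cb > 100 & Cb < 130 & Cr > 100 & Cr < 135;

    % Remove small connected noise regions (< 5 pixels) and fill small holes
    mask = bwareaopen(mask, 5);
    mask = imfill(mask, 'holes');

    % Keep only the green crop pixels
    segmented = rgb .* uint8(repmat(mask, [1 1 3]));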

2.3 Feature based machine learning

For feature-based machine learning, properly selected features were first extracted and then fed into the machine learning algorithms. A critical factor determining the performance of these algorithms is the extracted features, for which domain knowledge plays an important role. If the features represent the images properly, the model will perform well; otherwise, performance will be poor. In this study, shape, color and texture features were extracted based on the relevant literature[39-41].

2.3.1 Feature extraction

Leaves of volunteer corn and soybean have different shapes, which can be used as a potential feature for distinguishing them[39]. Five leaf shape features were extracted in this study: area, aspect ratio, rectangularity, circularity and eccentricity. Area reflects the size of leaves. It was reasonably assumed that the dimensions of an individual pixel were constant for all images collected during the five days, considering the large distance between the camera and the crops and the relatively small height changes of the crops. The area of an individual plant was therefore computed as its pixel count, i.e., the number of pixels occupied by the crop. The aspect ratio was defined as the length of the leaf divided by its width (Equation (1))[42]. In general, a corn leaf is long and thin while a soybean leaf is short and round, which would lead to different aspect ratio values. Rectangularity represents the similarity between a leaf and a rectangle and was calculated according to Equation (2)[41]; the more similar the leaf shape is to a rectangle, the closer the rectangularity value is to 1. Circularity, which represents the roundness of leaves, was extracted according to Equation (3)[41]; for a perfectly circular leaf, the circularity value is 1. Furthermore, the eccentricity feature was extracted based on Equation (4), in which b stands for the minor axis length of the fitted ellipse and a stands for its major axis length[41]. Note that lengths and areas in Equations (1)~(4) are expressed in pixels, with each pixel treated as a 1×1 unit square.
Aspect ratio = Leaf length / Leaf width   (1)
Rectangularity = (Leaf length × Leaf width) / Area   (2)
Circularity = (4 × π × Area) / Perimeter²   (3)
Eccentricity = √(1 − (b/a)²)   (4)
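The five shape features can be computed from the segmented binary mask; the sketch below uses MATLAB's regionprops and assumes a single connected crop region per image, with the bounding-box height and width taken as leaf length and width.

    % Shape features from the binary crop mask (Equations (1)-(4))
    stats = regionprops(mask, 'Area', 'Perimeter', 'BoundingBox', ...
                        'MajorAxisLength', 'MinorAxisLength');
    s = stats(1);                                  % assume one crop region per image

    area           = s.Area;                       % pixel count of the crop
    leafLen        = s.BoundingBox(4);             % bounding-box height as leaf length
    leafWid        = s.BoundingBox(3);             % bounding-box width as leaf width
    aspectRatio    = leafLen / leafWid;                                   % Equation (1)
    rectangularity = (leafLen * leafWid) / area;                          % Equation (2)
    circularity    = 4 * pi * area / s.Perimeter^2;                       % Equation (3)
    eccentricity   = sqrt(1 - (s.MinorAxisLength / s.MajorAxisLength)^2); % Equation (4)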
Color features have also been used extensively to distinguish weeds from crops[42-46]. In this study, it was observed during data collection that the soybeans were dark green while the corn was yellowish green; thus, color features may serve as good parameters to distinguish volunteer corn from soybean. In addition to RGB, the collected images were converted into other formats, including HSV (hue, saturation and value), Lab (lightness, red/green value and blue/yellow value), and YCbCr (luma, blue minus luma and red minus luma). The mean value of each color channel was calculated as a color feature.
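A sketch for extracting the 12 channel-mean color features (R, G, B, H, S, V, L, a, b, Y, Cb, Cr) over crop pixels only is shown below; it assumes the binary mask produced by the segmentation step.

    % Mean of each color channel within the crop mask
    hsv   = rgb2hsv(rgb);
    lab   = rgb2lab(rgb);
    ycbcr = double(rgb2ycbcr(rgb));
    rgbD  = double(rgb);

    channels  = cat(3, rgbD, hsv, lab, ycbcr);   % 12 channels: R G B H S V L a b Y Cb Cr
    colorFeat = zeros(1, 12);
    for k = 1:12
        ch = channels(:,:,k);
        colorFeat(k) = mean(ch(mask));           % average over crop pixels only
    end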
In addition to shape and color features, texture features (repeating patterns of local variations in images) can potentially be used to differentiate volunteer corn from soybean[47]. For this research, four crop texture features were extracted for this purpose: coarseness, contrast, linelikeness, and directionality[48].
In total, 21 features were extracted in this research, including 5 shape features, 12 color features, and 4 texture features (summarized in Table 1).
Table 1 Features extracted in this research
Shape feature      Color feature      Texture feature
Area               RGB                Coarseness
Aspect ratio       HSV                Contrast
Rectangularity     Lab                Linelikeness
Circularity        YCbCr              Directionality
Eccentricity
Since the 21 extracted features were selected mainly based on domain knowledge and a literature review, their relevance in distinguishing volunteer corn from soybean was unknown. Training the feature-based machine learning models (i.e., SVM and NN) with irrelevant features would not only lower model performance (e.g., lower accuracy and higher standard deviation) but also increase the computation load[49]. It is therefore critical to select the proper features using feature weight analysis. The ReliefF algorithm has been used extensively for feature selection and was chosen to select the features for model training due to its proven performance[50]. The importance of the 21 extracted features was ranked, with the top 12 important features selected for model training and testing.
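This ranking step can be reproduced with MATLAB's relieff function; the sketch below is illustrative, with the feature matrix X, label vector y, and the choice of 10 nearest neighbors assumed rather than taken from the study.

    % X: 367 x 21 matrix of extracted features; y: labels (0 = soybean, 1 = corn)
    k = 10;                                               % nearest neighbors for ReliefF
    [rankIdx, weights] = relieff(X, categorical(y), k);   % features ranked by importance

    top12 = rankIdx(1:12);        % indices of the 12 most relevant features
    Xsel  = X(:, top12);          % reduced feature set for model training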

2.3.2 Support vector machine, neural network and random forest

SVM is a supervised machine learning classifier that has been extensively used for classification problems[51]. For binary classification, the SVM separates all data points of one class from the other by finding the best hyperplane. Among the many hyperplanes that can separate all data points, the best hyperplane is the one with the largest margin between the two classes[52], where the margin is the largest width of the slab parallel to the hyperplane that contains no interior data points. In this study, the fitcsvm function was applied to distinguish volunteer corn from soybean, and a radial basis function (RBF) kernel was used due to its proven performance[53].
In addition to SVM, NN is another popular algorithm for binary classification. A NN consists of closely interconnected neurons that receive, process, and transmit information from the neurons of the previous layer to those of the next layer[51]. The key parameters of a NN are its weights and biases, and training a NN means determining the optimal weights and biases that minimize the classification error. In this study, the patternnet function was selected as the NN. The input of the NN was the 12 selected features, and the output was either volunteer corn or soybean. The NN consisted of two layers, configured with 10 and 5 neurons for the first and second layer, respectively. For training, 1000 iterations were set, "sigmoid" was chosen as the activation function for the neurons, and "softmax" was applied to determine the final class label.
RF is an ensemble learning algorithm that predicts the outcome by aggregating the results from a number of individual decision trees[52]. The out-of-bag error is generally used to evaluate the performance of an RF. Preliminary runs with the selected features showed that the out-of-bag error dropped greatly and stabilized when the number of trees exceeded 100; hence, 100 trees were used in this study.
In this study, a total of 367 samples (225 volunteer corn and 142 soybean) were collected and randomly split into two groups: 300 for training and 67 for testing. Models were trained for two classes: corn and soybean. All of the above data processing, such as feature extraction and model training/testing with SVM and NN, was performed in MATLAB R2019 (The MathWorks, Inc., Natick, Mass., USA).
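The sketch below outlines the training of the three feature-based classifiers described above; the random 300/67 split, RBF kernel, two-layer patternnet, and 100-tree ensemble follow the text, while variable names and remaining settings are illustrative.

    % Random 300/67 split of the 367 samples
    n   = size(Xsel, 1);
    idx = randperm(n);
    Xtr = Xsel(idx(1:300), :);   ytr = y(idx(1:300));
    Xte = Xsel(idx(301:end), :); yte = y(idx(301:end));

    % SVM with an RBF kernel
    svmModel = fitcsvm(Xtr, ytr, 'KernelFunction', 'rbf', 'Standardize', true);
    svmPred  = predict(svmModel, Xte);

    % Two-layer pattern recognition network (10 and 5 neurons, sigmoid activation)
    net = patternnet([10 5]);
    net.layers{1}.transferFcn = 'logsig';
    net.layers{2}.transferFcn = 'logsig';
    net.trainParam.epochs = 1000;
    net = train(net, Xtr', full(ind2vec(ytr' + 1)));   % one-hot target columns
    [~, nnPred] = max(net(Xte'), [], 1);
    nnPred = nnPred' - 1;                              % back to 0/1 labels

    % Random forest with 100 trees and out-of-bag error tracking
    rfModel = TreeBagger(100, Xtr, ytr, 'Method', 'classification', ...
                         'OOBPrediction', 'on');
    rfPred  = str2double(predict(rfModel, Xte));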

2.4 Deep learning

Compared to feature-based machine learning, which requires feature extraction using domain knowledge, deep learning algorithms extract features automatically, which significantly reduces the image pre-processing work[53]. In a CNN, an input image passes through a sequence of convolutional layers (kernel filters) for feature extraction[54]. The pooling layers following the convolutional layers reduce the spatial size of the convolved features and lower the computational load for further processing. Considering the relatively small dataset (367 samples) and to avoid over-fitting, data augmentation was conducted, which artificially increases the amount and diversity of the data. Instead of collecting new data, data augmentation applies geometric transformation techniques to the existing dataset[51]. A variety of geometric transformation methods, such as flip, rotation, scale, crop, translation, and Gaussian noise, were randomly applied to individual images in this study. The data augmentation increased the dataset from 367 to 3670 samples (10 times).
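A sketch of such augmentation with MATLAB's image datastore utilities is given below; the folder name and transformation ranges are illustrative, and the augmented datastore applies random transforms on the fly rather than materializing a 10-times-larger dataset as done in this study.

    % Labeled image datastore (one sub-folder per class; hypothetical path)
    imds = imageDatastore('dataset_folder', ...
        'IncludeSubfolders', true, 'LabelSource', 'foldernames');

    % Random geometric transformations
    augmenter = imageDataAugmenter( ...
        'RandXReflection',  true, ...        % horizontal flip
        'RandRotation',     [-30 30], ...    % rotation, degrees
        'RandScale',        [0.8 1.2], ...   % scaling
        'RandXTranslation', [-10 10], ...    % translation, pixels
        'RandYTranslation', [-10 10]);

    % Resize to the 224x224x3 input expected by VGG-16 and GoogLeNet
    augimds = augmentedImageDatastore([224 224 3], imds, ...
        'DataAugmentation', augmenter);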

2.4.1 VGG-16 and GoogLeNet

VGG-16 is a state-of-the-art CNN model (structure shown in Fig. 4) that achieved a 93% accuracy in classifying the ImageNet dataset into 1000 categories[55]. Crop images fed into the VGG-16 model were first resized to 224×224×3; as the images pass through a number of convolutional layers, filters (size 3×3) are applied for feature extraction. The pooling layers after the convolutional layers reduce the spatial size of the feature maps, and the fully connected layers provide the category for the corresponding input.
Fig. 4 VGG-16 model structure with 13 convolutional layers (Conv), 5 pooling layers (Pooling), and 3 fully connected layers (FC)
The VGG-16 model stacks multiple convolutional layers to enhance the model's accuracy and uses only one filter size (3×3) for feature extraction. GoogLeNet, by contrast, adopts the inception module with filters of different sizes (1×1, 3×3, and 5×5) for feature extraction[56,57] (Fig. 5). A key advantage of differently sized filters over a single filter size is that more inclusive features can be extracted: the large filter (5×5) extracts general features and the small filter (1×1) extracts local features[57-59]. The features extracted by the three different filters are concatenated for further computation.
Fig. 5 Inception module used in GoogLeNet for different scale feature extraction
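For illustration, a minimal inception-style block with parallel 1×1, 3×3 and 5×5 convolutions can be assembled from MATLAB deep learning layers as sketched below; the channel counts are placeholders, and the reduction and pooling branches of GoogLeNet's actual inception module are omitted.

    % Parallel convolutions with different filter sizes, concatenated by depth
    lgraph = layerGraph();
    lgraph = addLayers(lgraph, imageInputLayer([224 224 3], 'Name', 'input'));
    lgraph = addLayers(lgraph, convolution2dLayer(1, 64, 'Padding', 'same', 'Name', 'conv_1x1'));
    lgraph = addLayers(lgraph, convolution2dLayer(3, 96, 'Padding', 'same', 'Name', 'conv_3x3'));
    lgraph = addLayers(lgraph, convolution2dLayer(5, 32, 'Padding', 'same', 'Name', 'conv_5x5'));
    lgraph = addLayers(lgraph, depthConcatenationLayer(3, 'Name', 'concat'));

    lgraph = connectLayers(lgraph, 'input', 'conv_1x1');
    lgraph = connectLayers(lgraph, 'input', 'conv_3x3');
    lgraph = connectLayers(lgraph, 'input', 'conv_5x5');
    lgraph = connectLayers(lgraph, 'conv_1x1', 'concat/in1');
    lgraph = connectLayers(lgraph, 'conv_3x3', 'concat/in2');
    lgraph = connectLayers(lgraph, 'conv_5x5', 'concat/in3');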

2.4.2 Accuracy evaluation

Image classification results for the test dataset can be presented in a number of ways. In this study, a sample confusion matrix was provided, including True Positives (TP; corn classified as corn), False Positives (FP; soybean classified as corn), False Negatives (FN; corn classified as soybean), and True Negatives (TN; soybean classified as soybean). Since the classification result (shown in a confusion matrix) of one run may differ from another, this study used the Precision (Equation (5)) averaged over 10 replications as the index to compare model performance.
Precision = TP / (TP + FP)   (5)
This study took advantage of transfer learning by first downloading the pre-trained VGG-16 and GoogLeNet models and then re-training them on the augmented dataset[47]. A key advantage of this procedure is that the downloaded models have already learned a huge number of features, which significantly reduces the training time. For the training of both models, 6 epochs were chosen, with 256 iterations per epoch. The augmented dataset was randomly divided into training (2600 images) and testing (1070 images) sets. All deep learning processes in this study, including data augmentation and model training and testing, were performed in MATLAB R2019.
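A transfer-learning sketch following this description is given below. Replacing the final layers of MATLAB's pretrained vgg16 network is standard practice; apart from the 6 epochs stated above, the hyperparameters, datastore variables (augimdsTrain, augimdsTest, imdsTest) and class names are assumptions.

    % Load pretrained VGG-16 and adapt the final layers for 2 classes
    net    = vgg16;
    lgraph = layerGraph(net.Layers);
    lgraph = replaceLayer(lgraph, 'fc8', fullyConnectedLayer(2, 'Name', 'fc8_2'));
    lgraph = replaceLayer(lgraph, 'output', classificationLayer('Name', 'output'));

    options = trainingOptions('sgdm', ...
        'MaxEpochs', 6, ...                 % 6 epochs, as stated in the text
        'MiniBatchSize', 10, ...            % illustrative
        'InitialLearnRate', 1e-4, ...       % illustrative
        'ValidationData', augimdsTest);

    trainedNet = trainNetwork(augimdsTrain, lgraph, options);

    % Precision (Equation (5)) on the test set for one run, treating corn as positive
    pred       = classify(trainedNet, augimdsTest);
    isCorn     = (pred == 'corn');                 % assumes 'corn'/'soybean' labels
    actualCorn = (imdsTest.Labels == 'corn');
    TP = sum(isCorn & actualCorn);
    FP = sum(isCorn & ~actualCorn);
    precision = TP / (TP + FP);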

3 Results and discussion

3.1 Feature based machine learning algorithms

A sample run of SVM, NN, and RF was conducted, with the results shown in the confusion matrices (Fig. 6). Overall, it can be observed in the three results that the majority of corn was classified as corn and soybean as soybean. Several misclassifications exist; for example, six soybean crops were classified as corn by the SVM (Fig. 6(a)).
Fig. 6 Confusion matrices for a sample run of SVM, NN and RF
(a) Volunteer corn and soybean classification using SVM (b) Volunteer corn and soybean classification using NN (c) Volunteer corn and soybean classification using RF

Note: "0" and "1" represents soybean and corn, respectively

A confusion matrix presents an individual result in detail, but it cannot reflect the overall performance of a model. Hence, 10 replications were run for each model, with the averaged precision values shown in Fig. 7. Whiskers on bars represent two standard deviations calculated from the 10 replicates, and bars with different letters are significantly different by Tukey's test at the 0.05 significance level. The SVM, NN and RF classification accuracies were 85.3%, 81.6%, and 82.0%, respectively. The standard deviations (STD) for SVM and NN were 3.8% and 8.0%, respectively. The STD reflects the robustness of the model: a small STD indicates a more reliable and robust performance. Considering that a desirable model should have a larger precision value and a smaller STD, SVM is considered to perform better than NN for this specific study.
Fig. 7 Precision values of two feature-based machine learning algorithms

3.2 Deep learning algorithms

A sample run of GoogLeNet and VGG-16 was performed, with the results shown in the confusion matrices (Fig. 8). Overall, it can be observed in the two results that the deep learning algorithms produced more correct classifications than the machine learning algorithms (Fig. 6). Only a few misclassifications can be observed in Fig. 8.
Fig. 8 Confusion matrices for a sample run of GoogLeNet and VGG-16
(a) Volunteer corn and soybean classification using GoogLeNet (b) Volunteer corn and soybean classification using VGG-16

Note: "0" and "1" represents soybean and corn, respectively

To obtain a more objective evaluation of the two deep learning algorithms, precision values over 10 replications are shown in Fig. 9. Whiskers on bars represent two standard deviations calculated from the 10 replicates, and bars with different letters are significantly different by Tukey's test at the 0.05 significance level. The GoogLeNet and VGG-16 accuracies were 96.0% and 96.2%, respectively, and their STDs were 0.6% and 1.0%, respectively. Using the criteria described above, VGG-16 was confirmed to have a more desirable performance than GoogLeNet. Because GoogLeNet and VGG-16 are two of the top deep learning models in the current literature, other models (e.g., ResNet) were not considered in this research.
Fig. 9 Precision values of two deep learning algorithms

3.3 Performance comparison of machine learning and deep learning

SVM and VGG-16 outperformed NN and GoogLeNet, respectively. To identify the most desirable model, the precision comparison between SVM and VGG-16 is presented in Fig. 10. Whiskers on bars represent two standard deviations calculated from the 10 replicates, and bars with different letters are significantly different by Tukey's test at the 0.05 significance level. The VGG-16 accuracy (96.2%) was much higher than that of the SVM (85.3%). Additionally, the STD of VGG-16 (1.0%) was much smaller than that of SVM (3.8%), indicating that VGG-16 had a more reliable and robust performance than SVM. It is therefore concluded that VGG-16 is a satisfactory algorithm that can effectively distinguish volunteer corn from soybean. Overall, the deep learning models produced higher precision values and lower STDs than the feature-based machine learning models. One potential reason is that deep learning can extract more features. To improve the precision of feature-based machine learning, additional features should be considered, such as color STD and 3D leaf shape.
Fig. 10 Precision comparison of SVM and VGG-16
In addition to accuracy, the efficiency of the pipeline is critical for its application. The pipeline time was measured for VGG-16, which had the most satisfactory accuracy. One hundred images were randomly selected from the dataset for one run, and 10 replications were conducted. Results show that the average pipeline time per image, including image reading, image resizing, and prediction, was 24.5 ms, which could meet the requirement for real-time detection. Note that model training time, which is usually conducted off-line, was not included.
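A simple way to reproduce this per-image timing in MATLAB is sketched below; the datastore and trained network variables are assumptions carried over from the earlier sketches.

    % Average per-image pipeline time (read, resize, predict) over 100 random images
    files = imds.Files(randperm(numel(imds.Files), 100));
    t = tic;
    for i = 1:numel(files)
        img   = imread(files{i});               % image reading
        img   = imresize(img, [224 224]);       % resizing to the VGG-16 input size
        label = classify(trainedNet, img);      % prediction
    end
    msPerImage = 1000 * toc(t) / numel(files);  % average pipeline time in ms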

3.4 Future work

This study focused on distinguishing volunteer corn from soybean at the seedling stage in a binary sense based on data collected in a greenhouse, and future research should use field data to validate model performance. Considering the environmental differences between greenhouse and field (e.g., illumination), future experiments may need to provide a chamber with uniform artificial illumination for field data collection. It would also be interesting to test the current pipeline on field-collected images under natural lighting conditions; if no chamber is used, taking images in the morning or evening is preferred for relatively consistent illumination. If the current model does not perform satisfactorily on field data, re-training may be needed to adapt it to field conditions. Furthermore, the data were collected for five successive days after germination; extending the data collection to ten successive days would be desirable to guarantee germination. The current research processed data off-line; model performance should additionally be tested for real-time data processing. Finally, the color-based image segmentation method used in this study did not perform satisfactorily and required manual intervention; an improved segmentation approach integrating color features, edge detection, and region-growing methods is needed.

4 Conclusions

In this study, color images of volunteer corn and soybean were collected for 5 successive days after germination. Crops in these images were manually cropped, image segmentation based on color was conducted, and noise was removed. Shape features (i.e., area, aspect ratio, rectangularity, circularity and eccentricity), coupled with color (i.e., R, G, B, H, S, V, L, a, b, Y, Cb and Cr) and texture features (coarseness, contrast, linelikeness and directionality), were extracted. The weights of the 21 extracted features were ranked, and the top 12 relevant features were fed into three feature-based machine learning algorithms: SVM, NN, and RF. Model accuracies of SVM, NN, and RF were 85.3%, 81.6%, and 82.0%, respectively, in detecting volunteer corn from soybean. The dataset, without feature extraction, was fed into two deep learning algorithms, GoogLeNet and VGG-16, whose precision values were 96.0% and 96.2%, respectively. A comparison between SVM and VGG-16 confirmed that VGG-16 performed better than SVM due to its higher precision value and lower standard deviation. Based on this study, VGG-16 is recommended for volunteer corn detection. RGB images, coupled with the VGG-16 model, provide a ready-to-use tool to assist farmers in monitoring volunteer corn in soybean. Further research will focus on testing model performance on real-time data as well as field data.
1
CROOKSTON R, KURLE J, COPELAND P, et al. Rotational cropping sequence affects yield of corn and soybean[J]. Agronomy Journal, 1991, 83(1): 108-113.

2
BULLOCK G. Crop rotation[J]. Critical Reviews in Plant Sciences, 1992, 11(4): 309-326.

3
MEESE G, CARTER R, OPLINGER E S. Corn/soybean rotation effect as influenced by tillage, nitrogen, and hybrid/cultivar[J]. Journal of Production Agriculture, 1991, 4(1): 74-80.

4
DE BRUIN L, PORTER M, NICHOLAS J. Use of a rye cover crop following corn in rotation with soybean in the upper Midwest[J]. Agronomy Journal, 2005, 97(2): 587-598.

5
LAUER J, PORTER P, OPLINGER E. The corn and soybean rotation effect[J]. Field Crops, 1997, 28: 426-514.

6
RHODES C. Why do they do that? Rotating crops[EB/OL]. (2018-02-12) [2020-06-29].

7
MARGARET T. Managing plant diseases with crop rotation[EB/OL]. [2020-06-09].

8
TYLKA G. Soybean cyst nematode [EB/OL]. (1994-12-20) [2020-06-29].

9
HAROLD V. Crop rotation and soil tilth[EB/OL]. (2012-08-28) [2020-06-29].

10
JEFF G, NICOLAI D, STAHL L. Managing the potential for volunteer corn in 2019 [EB/OL]. (2018-10-30) [2020-06-29].


11
MARQUARDT T, TERRY M, JOHNSON G. The impact of volunteer corn on crop yields and insect resistance management strategies[J]. Agronomy, 2013, 3(2): 488-496.

12
ALMS J, MOECHNIG M, VOS D. Yield loss and management of volunteer corn in soybean[J]. Weed Technology, 2016, 30(1): 254-262.

13
GUNSOLUS J, DAVE N. Managing volunteer corn[EB/OL]. [2020-06-29].

14
BECKETT H, STOLLER W. Volunteer corn (zea mays) interference in soybeans (glycine max)[J]. Weed Science, 1988, 36(2):159-166.

15
CONLEY P, SANTINI B. Crop management practices in Indiana soybean production systems[J]. Crop Management, 2007, 6(1): 1-9.

16
ALMS J, MOECHNIG D, DENEKE D. Volunteer corn effect on corn and soybean yield[C]// Annual Meeting of North Central Weed Science Society. Indianapolis, Indiana, USA: North Central Weed Sci. Soc., 2008: 8-11.

17
MARQUARDT P, KRUPKE C, JOHNSON W G. Competition of transgenic volunteer corn with soybean and the effect on western corn rootworm emergence[J]. Weed Science, 2012, 60(2): 193-198.

18
JHALA A, WRIGHT B. Volunteer corn in soybean: Impact and management[EB/OL]. (2018-10-20) [2020-06-29].

19
LINGENFELTER D. Controlling volunteer corn in soybeans[EB/OL]. (2019-06-23) [2020-06-29].

20
ZHANG Z, HEINEMANN P H, LIU J, et al. The development of mechanical apple harvesting technology: A review[J]. Transactions of the ASABE, 2016, 59(5): 1165-1180.

21
ZHANG Z, POTHULA K, LU R. A review of bin filling technologies for apple harvest and postharvest handling[J]. Applied Engineering in Agriculture, 2018, 34(4): 687-703.

22
SUNOJ S, SUBHASHREE S N, DHARANI S, et al. Sunflower floral dimension measurements using digital image processing[J]. Computers and Electronics in Agriculture, 2018, 151:403-415.

23
CEN H, WAN L, ZHU J, et al. Dynamic monitoring of biomass of rice under different nitrogen treatments using a lightweight UAV with dual image-frame snapshot cameras[J]. Plant Methods, 2019, 15(1): ID 32.

24
ABDALLA A, CEN H, El-MANAWY A, et al. Infield oilseed rape images segmentation via improved unsupervised learning models combined with supreme color features[J]. Computers and Electronics in Agriculture, 2019, 162: 1057-1068.

25
HASANIJALILIAN O, IGATHINATHANE C, DOETKOTT C, et al. Chlorophyll estimation in soybean leaves infield with smartphone digital imaging and machine learning[J]. Computers and Electronics in Agriculture, 2020, 174: ID 105433.

26
ZHANG Z, IGATHINATHANE C, LI J, et al. Technology progress in mechanical harvest of fresh market apples[J]. Computers and Electronics in Agriculture, 2020, 175: ID 105606.

27
EL-FAKI S, ZHANG N, PETERSON D E. Weed detection using color machine vision[J]. Transactions of the ASAE, 2000, 43(6): 1969-1978.

28
ZHANG Z, HEINEMANN P. Economic analysis of a low-cost apple harvest-assist unit[J]. HortTechnology, 2017, 27(2): 240-247.

29
ZHANG Z, POTHULA K, LU R. Economic evaluation of apple harvest and in-field sorting technology[J]. Transactions of the ASABE, 2017, 60(5), 1537-1550.

30
ZHANG Z, HEINEMANN P H, LIU J, et al. Design and field test of a low-cost apple harvest-assist unit[J]. Transactions of the ASABE, 2016, 59(5): 1149-1156.

31
ZHANG Z, HEINEMANN P H, LIU J, et al. Brush mechanism for distributing apples in a low-cost apple harvest-assist unit[J]. Applied Engineering in Agriculture, 2017, 33(2): 195-201.

32
WU L, WEN Y. Weed/corn seedling recognition by support vector machine using texture features[J]. African Journal of Agricultural Research, 2009, 4(9): 840-846.

33
WANG A, ZHANG W, WEI X. A review on weed detection using ground-based machine vision and image processing techniques[J]. Computers and Electronics in Agriculture, 2019, 158: 226-240.

34
TANG J L, WANG D, ZHANG Z G, et al. Weed identification based on k-means feature learning combined with convolutional neural network[J]. Computers and Electronics in Agriculture, 2017, 135: 63-70.

35
FERREIRA S, FREITAS M, SILVA GDA, et al. Weed detection in soybean crops using convnets[J]. Computers and Electronics in Agriculture, 2017, 143: 314-324.

36
BAH M D, HAFIANE A, CANALS R. Deep learning with unsupervised data labeling for weed detection in line crops in uav images[J]. Remote Sensing, 2018, 10(11): ID 1690.

37
YU J, SHARPE M, SCHUMANN W, et al. Deep learning for image-based weed detection in turfgrass[J]. European Journal of Agronomy, 2019, 104: 78-84.

38
LOTTES P, BEHLEY J, MILIOTO A, et al. Fully convolutional networks with sequential information for robust crop and weed detection in precision farming[J]. IEEE Robotics and Automation Letters, 2018, 3(4): 2870-2877.

39
VENKATARAMAN D, MANGAYARKARASI N. Computer vision based feature extraction of leaves for identification of medicinal values of plants[C]//2016 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC). Piscataway, New York, USA: IEEE, 2016: 1-5.

40
ARSENOVIC M, KARANOVIC M, SLADOJEVIC S, et al. Solving current limitations of deep learning based approaches for plant disease detection[J]. Symmetry, 2019, 11(7): ID 939.

41
AZLAH M A F, CHUA L S, RAHMAD F R, et al. Review on techniques for plant leaf classification and recognition[J]. Computers, 2019, 8(4): ID 77.

42
KADIR A, NUGROHO L E, SUSANTO A, et al. A comparative experiment of several shape methods in recognizing plants[J]. International Journal of Computer Science & Information Technology, 2011, 3(3): 256-263.

43
PEREZ A J, LOPEZ F, BENLLOCH J V, et al. Colour and shape analysis techniques for weed detection in cereal fields[J]. Computers and Electronics in Agriculture, 2000, 25(3): 197-212.

44
LIU T, CHEN W, WU W, et al. Detection of aphids in wheat fields using a computer vision technique[J]. Biosystems Engineering, 2016, 141: 82-93.

45
HAMUDA E, GINLEY BMC, GLAVIN M, et al. Automatic crop detection under field conditions using the HSV colour space and morphological operations[J]. Computers and Electronics in Agriculture, 2017, 133:97-107.

46
SUNOJ S, IGATHINATHANE C, SALIENDRA N, et al. Color calibration of digital images for agriculture and other applications[J]. ISPRS Journal of Photogrammetry and Remote Sensing, 2018, 146: 221-234.

47
LIU T, LI R, ZHONG X, et al. Estimates of rice lodging using indices derived from UAV visible and thermal infrared images[J]. Agricultural and Forest Meteorology, 2018, 252: 144-154.

48
TAMURA H, MORI S, YAMAWAKI T. Textural features corresponding to visual perception[J]. IEEE Transactions on Systems Man and Cybernetics, 1978, 8(6): 460-473.

49
ZHANG B, HUANG W, GONG L, et al. Computer vision detection of defective apples using automatic lightness correction and weighted RVM classifier[J]. Journal of Food Engineering, 2015(146): 143-151.

50
SUN Y. Iterative relief for feature weighting: Algorithms, theories, and applications[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 29(6): 1035-1051.

51
ZHANG Z, FLORES P, IGATHINATHANE C, et al. Wheat lodging detection from UAS imagery using machine learning algorithms[J]. Remote Sensing, 2020, 12(11): ID 1838.

52
ZHOU Z. Ensemble methods: Foundations and algorithms[M]. CRC Press: Boca Raton, FL, USA, 2012.

53
HAN S, CAO Q, MENG H. Parameter selection in SVM with RBF kernel function[C]// World Automation Congress 2012. Piscataway, New York, USA: IEEE, 2012.

54
AMARI S, WU S. Improving support vector machine classifiers by modifying kernel functions[J]. Neural Networks, 1999, 12(6): 783-789.

55
NAIK D L, KIRAN R. Identification and characterization of fracture in metals using machine learning based texture recognition algorithms[J]. Engineering Fracture Mechanics, 2019, 219: ID 106618.

56
SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.

57
BHARATH RAJ. A simple guide to the versions of the inception network[EB/OL]. [2020-06-29].

58
SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]// Proceedings of the IEEE conference on computer vision and pattern recognition. Piscataway, New York, USA: IEEE, 2015: 1-9.

59
BROWNLEE JASON. How to develop VGG, Inception and ResNet modules from scratch in Keras[EB/OL]. [2020-06-29].
