| 1 |
YAO J, YI X, WANG X, et al. From instructions to intrinsic human values: A survey of alignment goals for big models[EB/OL]. arXiv, 2023.
|
| 2 |
BOMMASANI R, HUDSON D A, ADELI E, et al. On the opportunities and risks of foundation models[EB/OL]. arXiv, 2022.
|
| 3 |
VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30.
|
| 4 |
RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[EB/OL]. OpenAI, 2018.
|
| 5 |
TAN C, CAO Q, LI Y, et al. On the promises and challenges of multimodal foundation models for geographical, environmental, agricultural, and urban planning applications[EB/OL]. arXiv, 2023.
|
| 6 |
AWAIS M, NASEER M, KHAN S, et al. Foundational models defining a new era in vision: A survey and outlook[EB/OL]. arXiv, 2023.
|
| 7 |
ZHAO W X, ZHOU K, LI J, et al. A survey of large language models[EB/OL]. arXiv, 2023.
|
| 8 |
WEI J, TAY Y, BOMMASANI R, et al. Emergent abilities of large language models[EB/OL]. arXiv, 2022.
|
| 9 |
DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Minneapolis, USA: Association for Computational Linguistics, 2019: 4171-4186.
|
| 10 |
BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[EB/OL]. arXiv, 2020.
|
| 11 |
TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: Open and efficient foundation language models[EB/OL]. arXiv, 2023.
|
| 12 |
DOSOVITSKIY A, BEYER L, KOLESNIKOV A, et al. An image is worth 16x16 words: Transformers for image recognition at scale[EB/OL]. arXiv, 2021.
|
| 13 |
CARON M, TOUVRON H, MISRA I, et al. Emerging properties in self-supervised vision transformers[C]// 2021 IEEE/CVF International Conference on Computer Vision (ICCV). Piscataway, New Jersey, USA: IEEE, 2021: 9630-9640.
|
| 14 |
KAPLAN J, MCCANDLISH S, HENIGHAN T, et al. Scaling laws for neural language models[EB/OL]. arXiv, 2020.
|
| 15 |
HOFFMANN J, BORGEAUD S, MENSCH A, et al. Training compute-optimal large language models[EB/OL]. arXiv, 2022.
|
| 16 |
CHEN T, KORNBLITH S, NOROUZI M, et al. A simple framework for contrastive learning of visual representations[EB/OL]. arXiv, 2020.
|
| 17 |
RAJBHANDARI S, RASLEY J, RUWASE O, et al. ZeRO: Memory optimizations toward training trillion parameter models[C]// Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. Piscataway, New Jersey, USA: IEEE, 2020: 1-16.
|
| 18 |
KIRILLOV A, MINTUN E, RAVI N, et al. Segment anything[EB/OL]. arXiv, 2023.
|
| 19 |
OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback[EB/OL]. arXiv, 2022.
|
| 20 |
HU E J, SHEN Y, WALLIS P, et al. LoRA: Low-rank adaptation of large language models[EB/OL]. arXiv, 2021.
|
| 21 |
WEI J, WANG X, SCHUURMANS D, et al. Chain-of-thought prompting elicits reasoning in large language models[EB/OL]. arXiv, 2023.
|
| 22 |
RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[R/OL]. OpenAI, 2019.
|
| 23 |
OPENAI, ACHIAM J, ADLER S, et al. GPT-4 technical report[EB/OL]. arXiv, 2023.
|
| 24 |
LEWIS M, LIU Y H, GOYAL N, et al. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Online: Association for Computational Linguistics, 2020: 7871-7880.
|
| 25 |
RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of machine learning research, 2020, 21(140): 1-67.
|
| 26 |
CHUNG H W, HOU L, LONGPRE S, et al. Scaling instruction-finetuned language models[EB/OL]. arXiv, 2022.
|
| 27 |
BigScience Workshop, LE SCAO T, FAN A, et al. BLOOM: A 176B-parameter open-access multilingual language model[EB/OL]. arXiv, 2023.
|
| 28 |
TOUVRON H, MARTIN L, STONE K, et al. Llama 2: Open foundation and fine-tuned chat models[EB/OL]. arXiv, 2023.
|
| 29 |
ZENG A, LIU X, DU Z, et al. GLM-130B: An open bilingual pre-trained model[C]// The Eleventh International Conference on Learning Representations. Kigali, Rwanda: OpenReview.net, 2023.
|
| 30 |
REZAYI S, LIU Z L, WU Z H, et al. AgriBERT: Knowledge-infused agricultural language models for matching food and nutrition[C]// Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence. Vienna, Austria: International Joint Conferences on Artificial Intelligence Organization, 2022: 5150-5156.
|
| 31 |
YANG X, GAO J, XUE W, et al. PLLaMa: An open-source large language model for plant science[EB/OL]. arXiv, 2024.
|
| 32 |
ZHAO B, JIN W Q, DEL SER J, et al. ChatAgri: Exploring potentials of ChatGPT on cross-linguistic agricultural text classification[J]. Neurocomputing, 2023, 557: ID 126708.
|
| 33 |
SILVA B, NUNES L, ESTEVÃO R, et al. GPT-4 as an agronomist assistant? Answering agriculture exams using large language models[EB/OL]. arXiv, 2023.
|
| 34 |
WANG T, WANG N, CUI Y P, et al. Agricultural technology knowledge intelligent question-answering system based on large language model[J]. Smart agriculture, 2023, 5(4): 105-116.
|
| 35 |
BALAGUER A, BENARA V, DE FREITAS CUNHA R L, et al. RAG vs fine-tuning: Pipelines, tradeoffs, and a case study on agriculture[EB/OL]. arXiv, 2024.
|
| 36 |
QING J J, DENG X L, LAN Y B, et al. GPT-aided diagnosis on agricultural image based on a new light YOLOPC[J]. Computers and electronics in agriculture, 2023, 213: ID 108168.
|
| 37 |
REDMON J, DIVVALA S, GIRSHICK R, et al. You Only Look Once: Unified, real-time object detection[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, New Jersey, USA: IEEE, 2016: 779-788.
|
| 38 |
PENG R L, LIU K, YANG P, et al. Embedding-based retrieval with LLM for effective agriculture information extracting from unstructured data[EB/OL]. arXiv, 2023.
|
| 39 |
YUAN L, CHEN D, CHEN Y L, et al. Florence: A new foundation model for computer vision[EB/OL]. arXiv, 2021.
|
| 40 |
WILLIAMS D, MACFARLANE F, BRITTEN A. Leaf Only SAM: A segment anything pipeline for zero-shot automated leaf segmentation[EB/OL]. arXiv, 2023.
|
| 41 |
CARRARO A, SOZZI M, MARINELLO F. The Segment Anything Model (SAM) for accelerating the smart farming revolution[J]. Smart agricultural technology, 2023, 6: ID 100367.
|
| 42 |
LI Y, WANG D, YUAN C, et al. Enhancing agricultural image segmentation with an agricultural segment anything model adapter[J]. Sensors, 2023, 23(18): ID 7884.
|
| 43 |
YANG X, DAI H, WU Z, et al. SAM for poultry science[EB/OL]. arXiv, 2023.
|
| 44 |
XIE E, WANG W, YU Z, et al. SegFormer: Simple and efficient design for semantic segmentation with transformers[EB/OL]. arXiv, 2021.
|
| 45 |
ZHENG S, LU J, ZHAO H, et al. Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, New Jersey, USA: IEEE, 2021: 6877-6886.
|
| 46 |
GUI B L, BHARDWAJ A, SAM L. Evaluating the efficacy of segment anything model for delineating agriculture and urban green spaces in multiresolution aerial and spaceborne remote sensing images[J]. Remote sensing, 2024, 16(2): ID 414.
|
| 47 |
GURAV R, PATEL H, SHANG Z C, et al. Can SAM recognize crops? Quantifying the zero-shot performance of a semantic segmentation foundation model on generating crop-type maps using satellite imagery for precision agriculture[EB/OL]. arXiv, 2023.
|
| 48 |
LIU X Y. A SAM-based method for large-scale crop field boundary delineation[C]// 2023 20th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON). Piscataway, New Jersey, USA: IEEE, 2023: 1-6.
|
| 49 |
ZHANG C, MARFATIA P, FARHAN H, et al. Enhancing USDA NASS cropland data layer with segment anything model[C]// 2023 11th International Conference on Agro-Geoinformatics (Agro-Geoinformatics). Piscataway, New Jersey, USA: IEEE, 2023: 1-5.
|
| 50 |
RADFORD A, KIM J W, HALLACY C, et al. Learning transferable visual models from natural language supervision[C]// Proceedings of the 38th International Conference on Machine Learning. PMLR, 2021: 8748-8763.
|
| 51 |
ALAYRAC J B, DONAHUE J, LUC P, et al. Flamingo: A visual language model for few-shot learning[EB/OL]. arXiv, 2022.
|
| 52 |
RAMESH A, PAVLOV M, GOH G, et al. Zero-shot text-to-image generation[EB/OL]. arXiv, 2021.
|
| 53 |
KINGMA D P, WELLING M. Auto-encoding variational Bayes[EB/OL]. arXiv, 2013.
|
| 54 |
BROOKS T, PEEBLES B, HOLMES C, et al. Video generation models as world simulators[EB/OL]. OpenAI, 2024.
|
| 55 |
CAO Y Y, CHEN L, YUAN Y, et al. Cucumber disease recognition with small samples using image-text-label-based multi-modal language model[J]. Computers and electronics in agriculture, 2023, 211: ID 107993.
|
| 56 |
MU N, KIRILLOV A, WAGNER D, et al. SLIP: Self-supervision meets language-image pre-training[M]// Lecture Notes in Computer Science. Cham: Springer Nature Switzerland, 2022: 529-544.
|
| 57 |
DETTMERS T, PAGNONI A, HOLTZMAN A, et al. QLoRA: Efficient finetuning of quantized LLMs[EB/OL]. arXiv, 2023.
|
| 58 |
FRANTAR E, ASHKBOOS S, HOEFLER T, et al. OPTQ: Accurate quantization for generative pre-trained transformers[C]// The Eleventh International Conference on Learning Representations. Kigali, Rwanda: OpenReview.net, 2023.
|
| 59 |
TZACHOR A, DEVARE M, RICHARDS C, et al. Large language models and agricultural extension services[J]. Nature food, 2023, 4(11): 941-948.
|
| 60 |
STELLA F, DELLA SANTINA C, HUGHES J. How can LLMs transform the robotic design process?[J]. Nature machine intelligence, 2023, 5(6): 561-564.
|