[1] LI Q S, ZHENG K X, LIU X C. Practical exploration, constraints and deepening paths of the reform and innovation of grass-roots agricultural technology extension system in the new era[J]. World agriculture, 2022(2): 80-89.
[2] LIU N. Analysis of the current situation and strategy of agricultural extension in China[J]. Hebei agriculture, 2023(10): 27-28.
[3] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI blog, 2019, 1(8): 9.
[4] ZHAO W X, ZHOU K, LI J Y, et al. A survey of large language models[EB/OL]. arXiv: 2303.18223, 2023.
[5] LI D M, LUO S S, ZHANG X P, et al. Review on named entity recognition[J]. Journal of frontiers of computer science and technology, 2022, 16(9): 1954-1968.
[6] MORWAL S. Named entity recognition using hidden Markov model (HMM)[J]. International journal on natural language computing, 2012, 1(4): 15-23.
[7] EKBAL A, BANDYOPADHYAY S. Named entity recognition using support vector machine: A language independent approach[J]. International journal of electrical and computer engineering, 2010, 4(3): 589-604.
[8] SONG S L, ZHANG N, HUANG H T. Named entity recognition based on conditional random fields[J]. Cluster computing, 2019, 22(3): 5195-5206.
[9] LUO L, YANG Z H, YANG P, et al. An attention-based BiLSTM-CRF approach to document-level chemical named entity recognition[J]. Bioinformatics, 2018, 34(8): 1381-1388.
[10] CHANG Y A, KONG L, JIA K J, et al. Chinese named entity recognition method based on BERT[C]// 2021 IEEE International Conference on Data Science and Computer Application (ICDSCA). Piscataway, New Jersey, USA: IEEE, 2021: 294-299.
[11] ZHU Y Y, WANG G X, KARLSSON B F. CAN-NER: Convolutional attention network for Chinese named entity recognition[EB/OL]. arXiv: 1904.02141, 2019.
[12] ZHANG Y, YANG J. Chinese NER using lattice LSTM[EB/OL]. arXiv: 1805.02023, 2018.
[13] DEVLIN J, CHANG M W, LEE K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[EB/OL]. arXiv: 1810.04805, 2018.
[14] SUN Y, WANG S H, LI Y K, et al. ERNIE: Enhanced representation through knowledge integration[EB/OL]. arXiv: 1904.09223, 2019.
[15] RILOFF E, THELEN M. A rule-based question answering system for reading comprehension tests[C]// Proceedings of the 2000 ANLP/NAACL Workshop on Reading Comprehension Tests as Evaluation for Computer-Based Language Understanding Systems - Volume 6. New York, USA: ACM, 2000: 13-19.
[16] YANI M, KRISNADHI A A. Challenges, techniques, and trends of simple knowledge graph question answering: A survey[J]. Information, 2021, 12(7): ID 271.
[17] SHARMA Y, GUPTA S. Deep learning approaches for question answering system[J]. Procedia computer science, 2018, 132: 785-794.
[18] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: A robustly optimized BERT pretraining approach[EB/OL]. arXiv: 1907.11692, 2019.
[19] CHIPMAN H A, GEORGE E I, MCCULLOCH R E. BART: Bayesian additive regression trees[J]. The annals of applied statistics, 2010, 4(1): 266-298.
[20] PEREIRA J, FIDALGO R, LOTUFO R, et al. Visconde: Multi-document QA with GPT-3 and neural reranking[C]// European Conference on Information Retrieval. Cham: Springer Nature Switzerland, 2023: 534-543.
[21] DAUDERT T. A web-based collaborative annotation and consolidation tool[C]// Proceedings of the Twelfth Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association, 2020: 7053-7059.
[22] YANG A Y, XIAO B, WANG B N, et al. Baichuan 2: Open large-scale language models[EB/OL]. arXiv: 2309.10305, 2023.
[23] TOUVRON H, LAVRIL T, IZACARD G, et al. LLaMA: Open and efficient foundation language models[EB/OL]. arXiv: 2302.13971, 2023.
[24] FLORIDI L, CHIRIATTI M. GPT-3: Its nature, scope, limits, and consequences[J]. Minds and machines, 2020, 30(4): 681-694.
[25] DING N, QIN Y J, YANG G A, et al. Parameter-efficient fine-tuning of large-scale pre-trained language models[J]. Nature machine intelligence, 2023, 5(3): 220-235.
[26] LIU X A, JI K X, FU Y C, et al. P-tuning: Prompt tuning can be comparable to fine-tuning across scales and tasks[C]// Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg, PA, USA: Association for Computational Linguistics, 2022: 61-68.
[27] BANG Y J, CAHYAWIJAYA S, LEE N, et al. A multitask, multilingual, multimodal evaluation of ChatGPT on reasoning, hallucination, and interactivity[EB/OL]. arXiv: 2302.04023, 2023.
[28] LEE K, IPPOLITO D, NYSTROM A, et al. Deduplicating training data makes language models better[EB/OL]. arXiv: 2107.06499, 2021.
[29] PENG B L, GALLEY M, HE P C, et al. Check your facts and try again: Improving large language models with external knowledge and automated feedback[EB/OL]. arXiv: 2302.12813, 2023.
[30] CHANG Y P, WANG X, WANG J D, et al. A survey on evaluation of large language models[EB/OL]. arXiv: 2307.03109, 2023.
[31] CORLEY C, MIHALCEA R. Measuring the semantic similarity of texts[C]// Proceedings of the ACL Workshop on Empirical Modeling of Semantic Equivalence and Entailment (EMSEE '05). Morristown, New Jersey, USA: Association for Computational Linguistics, 2005: 13-18.