Information Processing and Decision Making

Progressive Convolutional Network-Based Method for Agricultural Named Entity Recognition

  • 1. Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei 230031, China
    2. University of Science and Technology of China, Hefei 230026, China
JI Jie, Master's degree candidate; research interests: natural language processing and knowledge graphs. E-mail: jijiejie@mail.ustc.edu.cn
WANG Rujing, Ph.D., Professor; research interests: intelligent decision-making and knowledge engineering, and expert systems. E-mail: rjwang@iim.ac.cn

Received date: 2023-03-03

Online published: 2023-04-23

Supported by

National Key Research and Development Program of China (2019YFE0125700)

Cite this article

JI Jie, JIN Zhou, WANG Rujing, LIU Haiyan, LI Zhiyuan. Progressive convolutional network-based method for agricultural named entity recognition[J]. Smart Agriculture, 2023, 5(1): 122-131. DOI: 10.12133/j.smartag.SA202303001

Abstract

Pre-training refers to training the parameters of a deep neural network on a large corpus before the model is applied to a specific task. This approach allows downstream tasks to fine-tune the pre-trained parameters on a small amount of labeled data, eliminating the need to train a new model from scratch. Current research on named entity recognition (NER) with pre-trained language models (PLMs), when facing challenges such as the complex naming of entities and fuzzy entity boundaries in the agricultural field, uses only the output representation of the last PLM layer and typically enhances entity representations by introducing external knowledge or operations, ignoring the rich linguistic information at different levels that the model's internal layers themselves contain. To address these issues, an NER method based on a progressive convolutional network was proposed. The method first stores the representation output by every PLM layer for a natural sentence. It then applies progressive convolution as the feature extractor over the full stack of layers: starting from the layer closest to the input, the representations of two adjacent layers are convolved, and the fusion result is convolved in turn with the next layer, yielding an enhanced sentence embedding that covers the information of all model layers, including the usually ignored shallow ones. Studies have shown that the sentence embeddings output by layers near the input contain more coarse-grained information, such as phrases and word groups; for agricultural named entities with fuzzy boundaries, the crucial boundary-defining cues may be hidden precisely in these overlooked shallow embeddings. The method requires no external information; it enhances the sentence representation simply by reusing results already produced by computation that has been spent. Finally, a conditional random field (CRF) model generates the globally optimal label sequence. On a constructed agricultural dataset containing four types of entities (crop varieties, diseases, pests, and pesticides), the F1 score of the proposed method improved by 3.61 percentage points over the baseline BERT (Bidirectional Encoder Representations from Transformers) model. The method also performed well on public datasets, raising the F1 score on the MSRA dataset to 94.96%, indicating that the progressive convolutional network can strengthen the model's ability to represent natural language and offers an advantage in NER tasks.
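The layer-fusion scheme described above lends itself to a compact implementation. The following is a minimal PyTorch sketch, not the authors' released code: it pulls every hidden state from a Hugging Face BERT encoder and fuses adjacent layers pairwise with 1-D convolutions, starting from the layer closest to the input. The module name, kernel size, ReLU activation, and sample sentence are illustrative assumptions.

```python
# A minimal sketch (not the authors' released implementation) of
# progressive convolution over all hidden layers of a pre-trained BERT
# encoder. Module name, kernel size, and activation are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class ProgressiveConvFusion(nn.Module):
    """Fuse adjacent layer representations pairwise, from the bottom up."""

    def __init__(self, hidden_size: int, num_states: int, kernel_size: int = 3):
        super().__init__()
        # One 1-D convolution per fusion step:
        # (2 * hidden) input channels -> hidden output channels.
        self.convs = nn.ModuleList([
            nn.Conv1d(2 * hidden_size, hidden_size,
                      kernel_size, padding=kernel_size // 2)
            for _ in range(num_states - 1)
        ])

    def forward(self, hidden_states):
        # hidden_states: tuple of (batch, seq_len, hidden) tensors for the
        # embedding layer and every encoder layer, ordered bottom to top.
        fused = hidden_states[0]
        for conv, layer_out in zip(self.convs, hidden_states[1:]):
            # Concatenate the running fusion with the next layer, convolve,
            # and carry the result forward to the following layer.
            x = torch.cat([fused, layer_out], dim=-1)    # (B, T, 2H)
            x = conv(x.transpose(1, 2)).transpose(1, 2)  # (B, T, H)
            fused = torch.relu(x)
        return fused  # enhanced embedding covering all model layers

encoder = BertModel.from_pretrained("bert-base-chinese",
                                    output_hidden_states=True)
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
# bert-base has 12 encoder layers plus the embedding layer: 13 states.
fusion = ProgressiveConvFusion(encoder.config.hidden_size,
                               encoder.config.num_hidden_layers + 1)

# Example agricultural sentence: "Wheat stripe rust can be controlled
# with triadimefon."
batch = tokenizer(["小麦条锈病可用三唑酮防治"], return_tensors="pt")
with torch.no_grad():
    out = encoder(**batch)
enhanced = fusion(out.hidden_states)  # (1, seq_len, 768)
```

In a complete pipeline, the fused embedding would feed a token-level classification head, and a CRF layer (e.g., from the third-party pytorch-crf package) would decode the globally optimal tag sequence from its emission scores, matching the decoding step described in the abstract.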
