An End-to-End Entity and Relation Extraction Network with Multi-head Attention

Source: The 17th China National Conference on Computational Linguistics and the 6th International Symposium on Natural Language Processing Based on Naturally Annotated Big Data (CCL 2018)
  Relation extraction is an important semantic processing task in natural language processing. State-of-the-art systems usually rely on elaborately designed features, which are time-consuming to construct and may lead to poor generalization. Besides, most existing systems adopt pipeline methods, which treat the task as two separate tasks, i.e., named entity recognition and relation extraction. However, pipeline methods suffer from two problems: (1) the pipeline model over-simplifies the task into two independent parts; (2) errors accumulate from named entity recognition to relation extraction. Therefore, we present a novel joint model for entity and relation extraction based on multi-head attention, which avoids the problems of pipeline methods and reduces the dependence on feature engineering. The experimental results show that our model achieves good performance without extra features. Our model reaches an F-score of 85.7% on the SemEval-2010 Task 8 relation extraction benchmark, which is competitive with previous joint models without using extra features. On publication, the code will be made publicly available.
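The paper's full architecture is not reproduced in this abstract; as a minimal sketch of the multi-head attention building block it refers to, the following shows scaled dot-product attention split across several heads. All names, dimensions, and the random projections are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, num_heads):
    """Scaled dot-product attention over several heads (illustrative sketch).

    X:          (seq_len, d_model) token representations
    Wq, Wk, Wv: (d_model, d_model) learned projection matrices
    """
    seq_len, d_model = X.shape
    d_head = d_model // num_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv

    # Split the model dimension into (num_heads, seq_len, d_head).
    def split(M):
        return M.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = Qh @ Kh.transpose(0, 2, 1) / np.sqrt(d_head)  # (h, seq, seq)
    weights = softmax(scores, axis=-1)                     # rows sum to 1
    heads = weights @ Vh                                   # (h, seq, d_head)
    # Concatenate heads back into a (seq_len, d_model) output.
    return heads.transpose(1, 0, 2).reshape(seq_len, d_model)

# Toy usage with random weights (purely for shape checking).
rng = np.random.default_rng(0)
seq_len, d_model, num_heads = 5, 8, 2
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out = multi_head_attention(X, Wq, Wk, Wv, num_heads)
```

In a joint extraction model of this kind, the attended representations would typically feed both an entity tagger and a relation classifier; that wiring is model-specific and omitted here.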