Attention-Based CNN-BLSTM Networks for Joint Intent Detection and Slot Filling

Source: The 17th China National Conference on Computational Linguistics and the 6th International Symposium on Natural Language Processing Based on Naturally Annotated Big Data (CCL 2018)
Abstract
  Dialogue intent detection and semantic slot filling are two critical tasks in natural language understanding (NLU) for task-oriented dialogue systems. In this paper, we present an attention-based encoder-decoder neural network model for joint intent detection and slot filling, which encodes the sentence representation with a hybrid of Convolutional Neural Networks and Bidirectional Long Short-Term Memory networks (CNN-BLSTM) and decodes it with an attention-based recurrent neural network with aligned inputs. In the encoding process, our model first extracts higher-level phrase representations and local features from each utterance using a convolutional neural network, and then propagates historical contextual semantic information with a bidirectional long short-term memory layer. Accordingly, we obtain the sentence representation by merging the outputs of the two architectures described above. In the decoding process, we introduce an attention mechanism into the long short-term memory network that provides additional semantic information. We conduct experiments on dialogue intent detection and slot filling with the standard Airline Travel Information System (ATIS) dataset. Experimental results show that our proposed model achieves better overall performance.
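  The following is a minimal PyTorch sketch of the kind of architecture the abstract describes: a CNN layer over word embeddings feeding a BLSTM encoder, an attention mechanism over the encoder states, an intent classifier on the attended sentence representation, and a slot-tagging decoder fed with aligned encoder states. The class name, layer sizes, vocabulary and label counts, and the simplified single-context attention are illustrative assumptions, not details reported in the paper.

  # Sketch of a CNN-BLSTM encoder with attention for joint intent detection
  # and slot filling. Hyperparameters and label counts are placeholders.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class CNNBLSTMJointNLU(nn.Module):
      def __init__(self, vocab_size=10000, embed_dim=100, conv_channels=100,
                   hidden_dim=128, num_intents=21, num_slots=120):
          super().__init__()
          self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
          # Convolution extracts local phrase-level features at each position.
          self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=3, padding=1)
          # BLSTM propagates contextual information over the convolved sequence.
          self.blstm = nn.LSTM(conv_channels, hidden_dim, batch_first=True,
                               bidirectional=True)
          # Attention scores over encoder states.
          self.attn = nn.Linear(2 * hidden_dim, 1)
          # Slot decoder: LSTM over aligned encoder states concatenated with
          # the attention context (a simplification of per-step attention).
          self.slot_decoder = nn.LSTM(4 * hidden_dim, hidden_dim, batch_first=True)
          self.slot_out = nn.Linear(hidden_dim, num_slots)
          # Intent classifier on the attention-weighted sentence representation.
          self.intent_out = nn.Linear(2 * hidden_dim, num_intents)

      def forward(self, tokens):                         # tokens: (B, T)
          emb = self.embedding(tokens)                   # (B, T, E)
          conv = F.relu(self.conv(emb.transpose(1, 2)))  # (B, C, T)
          enc, _ = self.blstm(conv.transpose(1, 2))      # (B, T, 2H)
          weights = F.softmax(self.attn(enc), dim=1)     # (B, T, 1)
          context = (weights * enc).sum(dim=1)           # (B, 2H)
          intent_logits = self.intent_out(context)       # (B, num_intents)
          # Aligned decoding: pair each encoder state with the shared context.
          dec_in = torch.cat([enc, context.unsqueeze(1).expand_as(enc)], dim=-1)
          dec_out, _ = self.slot_decoder(dec_in)         # (B, T, H)
          slot_logits = self.slot_out(dec_out)           # (B, T, num_slots)
          return intent_logits, slot_logits

  # Example usage on a toy batch of token ids.
  model = CNNBLSTMJointNLU()
  intent_logits, slot_logits = model(torch.randint(1, 10000, (2, 12)))

  In a joint setup of this kind, the intent cross-entropy loss and the per-token slot cross-entropy loss would typically be summed and optimized together, so the shared encoder benefits from both supervision signals.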