Contextualized Word Embeddings in a Neural Open Information Extraction Model

I. Sarhan, M. Spruit

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

Abstract

Open Information Extraction (OIE) is the challenging task of extracting relation tuples from an unstructured corpus. While several OIE algorithms have been developed over the past decade, only a few employ deep learning techniques. In this paper, a novel neural OIE model that leverages Recurrent Neural Networks (RNNs) with Gated Recurrent Units (GRUs) is presented. Moreover, we integrate contextualized word embeddings into our OIE model, which further enhances performance. The results demonstrate that our proposed neural OIE model outperforms the existing state of the art on two datasets.
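As a rough illustration of the kind of architecture the abstract describes, the sketch below shows a bidirectional GRU sequence tagger operating on precomputed contextualized word embeddings. The layer sizes, tag set, and BIO-style tuple tagging are illustrative assumptions, not the authors' exact model.

    # Hypothetical sketch: a bidirectional GRU tagger over contextualized
    # word embeddings (e.g., from an external encoder such as ELMo or BERT).
    # All hyperparameters and the tag inventory are assumptions for illustration.
    import torch
    import torch.nn as nn

    class GRUOpenIETagger(nn.Module):
        def __init__(self, emb_dim: int = 768, hidden_dim: int = 256, num_tags: int = 7):
            super().__init__()
            # Bidirectional GRU encodes each token in the context of the full sentence.
            self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
            # Linear layer projects GRU states to per-token tag scores
            # (e.g., BIO tags marking argument and relation spans of a tuple).
            self.classifier = nn.Linear(2 * hidden_dim, num_tags)

        def forward(self, contextual_embeddings: torch.Tensor) -> torch.Tensor:
            # contextual_embeddings: (batch, seq_len, emb_dim), produced by an
            # external contextualized encoder and fed in here as fixed features.
            states, _ = self.gru(contextual_embeddings)
            return self.classifier(states)  # (batch, seq_len, num_tags)

    if __name__ == "__main__":
        model = GRUOpenIETagger()
        dummy_batch = torch.randn(2, 12, 768)  # 2 sentences, 12 tokens each
        tag_scores = model(dummy_batch)
        print(tag_scores.shape)  # torch.Size([2, 12, 7])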
Original language: English
Title of host publication: NLDB 2019: International Conference on Applications of Natural Language to Information Systems
Editors: E. Métais et al.
Publisher: Springer
Pages: 359–367
Number of pages: 9
Volume: 11608
ISBN (Electronic): 978-3-030-23281-8
ISBN (Print): 978-3-030-23280-1
DOI:
Publication status: Published - 2019

Keywords

  • Open Information Extraction
  • Word embeddings
  • RNN
  • GRU
  • LSTM
