
Leveraging Multi-task Learning for Biomedical Named Entity Recognition

Mehmood, Tahir;Lavelli, Alberto;
2019-01-01

Abstract

Biomedical named entity recognition (BioNER) is the task of categorizing biomedical entities. Due to the specific characteristics of the names of biomedical entities, such as ambiguity among different concepts or different ways of referring to the same entity, the BioNER task is usually considered more challenging than standard named entity recognition tasks. Recent techniques based on deep learning not only significantly reduce the hand-crafted feature engineering phase but have also led to relevant improvements in the BioNER task. However, such systems still face challenges. One of them is the limited availability of annotated text data. Multi-task learning approaches tackle this problem by training different related tasks simultaneously. This enables multi-task models to learn common features among different tasks through shared layers. To explore the advantages of multi-task learning, we propose a model based on convolutional neural networks, long short-term memories (LSTMs), and conditional random fields. The model we propose shows results comparable to state-of-the-art approaches. Moreover, we present an empirical analysis of the impact of different word input representations (word embedding, character embedding, and case embedding) on the model performance.
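The core idea the abstract describes, hard parameter sharing across related BioNER tasks, can be sketched minimally as follows. This is an illustrative toy example, not the paper's implementation: the dimensions, corpus names, and the simple linear encoder are all assumptions standing in for the CNN-LSTM-CRF architecture. Each task (here, each annotated corpus) reuses the same shared encoder weights but scores tags with its own task-specific output layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and tag-set sizes (illustrative only,
# not taken from the paper).
EMB, HID = 8, 6
N_TAGS_A, N_TAGS_B = 3, 5

# Hard parameter sharing: one encoder weight matrix shared by all tasks...
W_shared = rng.normal(size=(EMB, HID))
# ...and one task-specific output layer per corpus/task.
W_task = {
    "corpus_a": rng.normal(size=(HID, N_TAGS_A)),
    "corpus_b": rng.normal(size=(HID, N_TAGS_B)),
}

def forward(x, task):
    """Encode a token vector with the shared layer, then score tags
    with the head belonging to the given task."""
    h = np.tanh(x @ W_shared)   # shared representation, updated by every task
    return h @ W_task[task]     # task-specific tag scores

x = rng.normal(size=EMB)        # a single token's input representation
scores_a = forward(x, "corpus_a")
scores_b = forward(x, "corpus_b")
```

During training, gradients from every corpus flow into `W_shared`, which is how the shared layers learn features common to all tasks despite each corpus having its own tag set.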
Files in this item:
File: AIIA_2019_CameraReady.pdf (Adobe PDF, 294.16 kB)
Access: authorized users only
Type: Pre-print document
License: DRM not defined
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11582/319712