UW-BHI at MEDIQA 2019: An Analysis of Representation Methods for Medical Natural Language Inference

William R. Kearns, Wilson Lau, Jason Thomas (kearnsw@uw.edu)

Abstract

Recent advances in language modeling have led to large performance gains on a variety of natural language processing (NLP) tasks. However, it is not well understood how these methods may be augmented by knowledge-based approaches. This paper compares the performance and internal representations of an Enhanced Sequential Inference Model (ESIM) under three experimental conditions defined by the input to the model: Bidirectional Encoder Representations from Transformers (BERT), Embeddings of Semantic Predications (ESP), or Word2Vec. We evaluated performance on the Medical Natural Language Inference (MedNLI) sub-task of the MEDIQA 2019 shared task, which relies heavily on semantic understanding and thus serves as a suitable evaluation set for comparing these representation methods.
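As a rough illustration of the experimental design described above, the sketch below shows how a single downstream NLI scorer can be held fixed while the token-embedding function is swapped between conditions. This is a hypothetical toy, not the paper's code: `static_lookup` and `mean_pool_similarity` are invented stand-ins, and the real system uses an ESIM (BiLSTMs with cross-sentence attention) rather than mean pooling.

```python
# Toy sketch (not the paper's code): the three conditions share one downstream
# NLI model and differ only in the token-embedding function plugged into it.
# All names here (static_lookup, mean_pool_similarity) are hypothetical.
import zlib
import numpy as np

EMBED_DIM = 8  # toy size; real embeddings are larger, e.g. 768 for BERT-base

def static_lookup(name):
    """Return a deterministic token -> vector embedding function, standing in
    for a pretrained static table such as Word2Vec or ESP."""
    def embed(tokens):
        rows = []
        for tok in tokens:
            seed = zlib.crc32(f"{name}:{tok}".encode())  # stable per (table, token)
            rows.append(np.random.default_rng(seed).standard_normal(EMBED_DIM))
        return np.stack(rows)
    return embed

def mean_pool_similarity(embed, premise, hypothesis):
    """Crude stand-in for the NLI model: cosine similarity of mean-pooled
    token vectors for the premise and hypothesis."""
    p = embed(premise.split()).mean(axis=0)
    h = embed(hypothesis.split()).mean(axis=0)
    return float(p @ h / (np.linalg.norm(p) * np.linalg.norm(h)))

premise = "the patient has a history of diabetes"
hypothesis = "the patient is diabetic"
for table in ("word2vec", "esp"):
    score = mean_pool_similarity(static_lookup(table), premise, hypothesis)
    print(f"{table}: {score:.3f}")
```

The point of the factoring is that only `embed` changes between runs; everything downstream (here, the pooled cosine score) is identical, which is what makes the per-condition comparison in the paper well defined.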

Scroll below to view BERT, Cui2Vec, and ESP attention heatmaps for all 405 sentence pairs. Our publication is available at: https://arxiv.org/abs/1907.04286