
Strubell, Verga, McCallum Receive EMNLP 2018 Best Long Paper Award

Pat Verga (l.) and Emma Strubell

UMass Amherst College of Information and Computer Sciences (CICS) PhD candidates Emma Strubell and Pat Verga, advised by Distinguished Professor Andrew McCallum, have received the Best Long Paper Award for their work "Linguistically-Informed Self-Attention for Semantic Role Labeling" at the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).

EMNLP, organized by the Association for Computational Linguistics, is one of the most highly regarded natural language processing (NLP) conferences in the world. The researchers' paper was selected from more than 2,100 submissions to this year's conference, to be held in Brussels, Belgium, Nov. 2-4.

Co-authored with collaborators from Google AI Language, the award-winning paper presents Linguistically-Informed Self-Attention (LISA), a neural network model that combines multi-head self-attention with multi-task learning across syntactic dependency parsing, part-of-speech tagging, predicate detection, and semantic role labeling.
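In broad strokes, that multi-task setup amounts to one shared encoder feeding several task-specific output layers, each trained on its own annotations. The short PyTorch sketch below illustrates only the general pattern; the module names, sizes, and layer counts are invented for this example and are not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Toy multi-task model: one shared encoder, one output layer per task."""

    def __init__(self, vocab_size=10000, d_model=128, n_pos=45, n_roles=20):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Shared self-attention encoder (a stand-in for LISA's deeper stack).
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        # Task-specific heads all read the same shared representation.
        self.pos_head = nn.Linear(d_model, n_pos)    # part-of-speech tags
        self.predicate_head = nn.Linear(d_model, 2)  # predicate / not predicate
        self.role_head = nn.Linear(d_model, n_roles) # semantic roles

    def forward(self, token_ids):
        h = self.encoder(self.embed(token_ids))
        return self.pos_head(h), self.predicate_head(h), self.role_head(h)

model = MultiTaskTagger()
tokens = torch.randint(0, 10000, (1, 6))   # one 6-token sentence
pos_logits, pred_logits, role_logits = model(tokens)
# Training would sum one loss term per task over this same forward pass.
```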

"LISA brings together two relatively disjointed schools of thought regarding machine learning and language analysis by combining deep learning and linguistic formalisms. This allows the model to more effectively utilize syntactic parses to obtain semantic meaning," said Verga. "The task of syntactic parsing partitions a sentence into its grammatical structure, a tree consisting of nouns, verbs, their direct objects, and so forth. Semantic role labeling, however, is the process by which a computer is able to separate a written statement into sections based upon its overall meaning, i.e. identifying who did what to whom."

According to the researchers, their method simultaneously separates text into its syntactic pieces and labels those pieces with their semantic roles. "We developed a new technique for integrating syntax into a neural network model known as multi-head self-attention," said Strubell. "This innovation allowed our model to perform substantially better than any other model at the task of semantic role labeling."
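One way to picture the technique: a single attention head can be given an auxiliary training signal that pushes its attention weights toward each token's syntactic head, so the parse shapes the attention pattern. The sketch below shows that general idea in PyTorch; it simplifies the paper's actual formulation, and all shapes and names here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# One "syntactically informed" attention head in miniature, using the
# five-token example sentence from above.
seq_len, d_head = 5, 16
q = torch.randn(seq_len, d_head, requires_grad=True)  # queries
k = torch.randn(seq_len, d_head, requires_grad=True)  # keys

# Raw attention scores; the softmax rows are what downstream
# layers would consume as attention weights.
scores = q @ k.T / d_head ** 0.5        # (seq_len, seq_len)
attention = scores.softmax(dim=-1)

# Gold dependency heads: token i's parent is gold_heads[i]
# (the root verb points to itself in this convention).
gold_heads = torch.tensor([1, 2, 2, 4, 2])

# Auxiliary parsing loss: make each token's attention row put its
# mass on that token's parent, injecting syntax into attention.
parse_loss = F.cross_entropy(scores, gold_heads)
parse_loss.backward()   # gradients pull attention toward the parse
print(float(parse_loss))
```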

Leveraging linguistic structure also allows the model to better generalize to writing styles across different domains. "This addresses a common problem in machine learning," said Verga. "Most comprehension models don't transfer well to understanding different forms of writing." Typically, Verga adds, a computer can only easily parse the writing styles it has been taught to analyze, but this new model can parse and assign semantic meaning to writing produced in different domains, such as journalism and fiction writing.  

Strubell and Verga's research also collapses syntactic parsing and semantic role labeling into a single step. Previously, a computer gathered the necessary information by running several separate models that repeated much of the same computation. The researchers' model instead extracts everything it needs about a text's structure and meaning in one pass, requiring far fewer computational resources.
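The efficiency argument can be made concrete with a toy comparison. Everything in the snippet below is a hypothetical stand-in rather than a real model; it only counts how often the expensive encoding step runs in a pipeline versus a joint model.

```python
calls = {"encoder": 0}

def encode(sentence):
    """Toy stand-in for the expensive encoding step of a real model."""
    calls["encoder"] += 1
    return sentence.split()

def pipeline_srl(sentence):
    """Pipeline style: each stage re-encodes the same sentence."""
    encode(sentence)            # pass 1: part-of-speech tagging
    encode(sentence)            # pass 2: syntactic parsing
    return encode(sentence)     # pass 3: semantic role labeling

def joint_srl(sentence):
    """LISA-style joint model: one encoding shared by every task."""
    h = encode(sentence)        # single pass
    return h, h, h              # POS, parse, and SRL heads all read it

pipeline_srl("The editor praised the paper")
print(calls["encoder"])         # 3 encoder passes
calls["encoder"] = 0
joint_srl("The editor praised the paper")
print(calls["encoder"])         # 1 encoder pass
```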

Strubell and Verga work under the direction of McCallum, faculty director of the CICS Center for Data Science, and are affiliated with the center as well as the Information Extraction and Synthesis Laboratory.