Natural Language Processing
The Natural Language Processing Research Group, established in 1993, is one of the largest and most successful language processing groups in the UK and has a strong global reputation.
Natural Language Processing (NLP) is an interdisciplinary field that uses computational methods to investigate the properties of written human language and to model the cognitive mechanisms underlying the understanding and production of written language (scientific focus), and to develop novel practical applications involving the intelligent processing of written human language by computer (engineering focus).
The group's research interests fall into the broad areas of:
Information Access: Building applications to improve access to information in massive text collections, such as the web, newswires and the scientific literature. Subtopics include: information extraction, text mining and semantic annotation, question answering, summarization.
Language Resources and Architectures for NLP: Providing resources - both data and processing resources - for research and development in NLP. Includes platforms for developing and deploying real world language processing applications, most notably GATE, the General Architecture for Text Engineering.
Machine Translation: Building applications to translate automatically between human languages, allowing access to the vast amount of information written in foreign languages and easier communication between speakers of different languages.
Human-Computer Dialogue Systems: Building systems to allow spoken language interaction with computers or embodied conversational agents, with applications in areas such as keyboard-free access to information, games and entertainment, and artificial companions.
Detection of Reuse and Anomaly: Investigating techniques for determining when texts or portions of texts have been reused or where portions of text do not fit with surrounding text. These techniques have applications in areas such as plagiarism and authorship detection and in discovery of hidden content.
Foundational Topics: Developing applications with human-like capabilities for processing language requires progress in foundational topics in language processing. Areas of interest include: word sense disambiguation, semantics of time and events.
The NLP group's research has received support from: the EU's Framework Programmes (Frameworks 4, 5, 6 and 7) as well as Horizon 2020 and the European Research Council, the UK Research Councils (EPSRC, BBSRC, MRC and AHRC) and various governmental and industrial sponsors, including GlaxoSmithKline and IBM.
2018 - 2019
29 November 2018 - Adam Tsakalidis (University of Warwick) - Nowcasting User Behaviour with Social Media and Smart Devices
The adoption of social media and smart devices by millions of users worldwide over the last decade has resulted in an unprecedented opportunity for natural language processing and social sciences. Users publish their thoughts and opinions on everyday issues through social media platforms, while they record their digital traces through their smart devices. Mining these rich resources offers new opportunities for sensing real-world events and indices in a longitudinal fashion. This talk will focus on how to utilise such user-generated content in order to "nowcast" (i.e., predict the current state of) user-specific (a) political and (b) mental health indices, in a real-world, longitudinal setting. The talk will be divided into two parts. In the first part, we will focus on mining social media to infer user voting intention. We model social media users based on the content they share and their network structure over time, aiming to nowcast their political stance in a time-constrained setting (the 2015 Greek bailout referendum). In the second part, we will also account for heterogeneous information sources about the user (e.g., information derived from users' smart phones, SMS and social media messages), aiming this time to nowcast time-varying and user-specific mental health indices on a longitudinal basis. We will emphasise the importance of sticking to a real-world evaluation setting and present the challenges that current state-of-the-art approaches face when tested under such a framework. Finally, we will outline open challenges in both domains and provide directions for future research.
Adam Tsakalidis is a final stage PhD candidate at the University of Warwick (Supervisors: A. I. Cristea and M. Liakata) and is currently working as a Research Associate at The Alan Turing Institute. He holds a PG Diploma in Computer and Communications Engineering (University of Thessaly, Greece) and a MSc in Computer Science and Applications (University of Warwick). Before his PhD, he had worked as a Research Assistant in the SocialSensor project (CERTH/ITI, Greece). His research interests lie in the area of natural language processing, with a particular focus on the longitudinal modelling of user-generated information as a step towards real-time monitoring of real-world indices.
7 November 2018 - Yanai Elazar (Bar-Ilan University) - Adversarial Removal of Demographic Attributes from Text Data
Recent advances in Representation Learning and Adversarial Training seem to succeed in removing unwanted features from the learned representation. We show that demographic information of authors is encoded in -- and can be recovered from -- the intermediate representations learned by text-based neural classifiers. The implication is that decisions of classifiers trained on textual data are not agnostic to -- and likely condition on -- demographic attributes. When attempting to remove such demographic information using adversarial training, we find that while the adversarial component achieves chance-level development-set accuracy during training, a post-hoc classifier, trained on the encoded sentences from the first part, still manages to reach substantially higher classification accuracies on the same data. This behavior is consistent across several tasks, demographic properties and datasets. We explore several techniques to improve the effectiveness of the adversarial component. Our main conclusion is a cautionary one: do not rely on the adversarial training to achieve invariant representation to sensitive features.
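For context, the setup the talk analyses can be sketched in a few lines. Below is a minimal, illustrative PyTorch sketch of adversarial removal via a gradient-reversal layer; all layer sizes, and the specific use of gradient reversal, are assumptions for illustration rather than details taken from the paper:

```python
# Sketch of adversarial attribute removal: a gradient-reversal layer trains
# the adversary to predict the protected attribute while pushing the encoder
# to make its representation uninformative about it.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output          # flip gradients flowing back to the encoder

encoder = nn.Sequential(nn.Linear(300, 128), nn.ReLU())  # hypothetical sizes
task_head = nn.Linear(128, 2)        # main task head (e.g. sentiment)
adversary = nn.Linear(128, 2)        # protected attribute head (e.g. gender)

def losses(x, y_task, y_attr):
    h = encoder(x)
    task_loss = nn.functional.cross_entropy(task_head(h), y_task)
    adv_loss = nn.functional.cross_entropy(adversary(GradReverse.apply(h)), y_attr)
    return task_loss + adv_loss      # minimised jointly over all parameters
```

The talk's cautionary finding is that even when such an adversary is driven to chance-level accuracy during training, a freshly trained post-hoc classifier can still recover the attribute from the encoded representations.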
18 October 2018 - Shadrock Roberts (Ushahidi) - Natural Language Processing for Humanitarian Response: a view from the field
Drawing on real-life case studies from Nepal, Indonesia, and Kenya, I will provide an overview of how crowdsourced and social media data are used or ignored in humanitarian response and the challenges they pose for practitioners. I will then present early-stage software prototypes, designed in response to these challenges, that use the GATE open source NLP toolkit to identify context, actionability, and veracity in social media and crowdsourced data, in order to speed up and prioritize the delivery of humanitarian aid. Speaking as a practitioner, I will also propose avenues for impactful research and design to help increase the adoption of new tools and methods.
Shadrock Roberts is a humanitarian geographer and the Director of resilience and research programs at the Kenyan non-profit, Ushahidi, which builds open source software to crowdsource information for humanitarian response. He has worked for a variety of humanitarian and development organizations in multiple countries and holds a Ph.D. in Geography from the University of Georgia. His career has focused on the intersection of geographic information systems, information and communication technologies, and community engagement to improve the availability of data for humanitarian and development assistance. He has only recently learned what a “chip butty” is, and remains unclear on the concept.
2017 - 2018
7 June 2018 - Peter Cochrane (University of Suffolk) - Self Awareness: The Next BIG Breakthrough in NLP
For more than 50 years, the dream of talking to a machine at a (human) conversational level has always been 30 years in the future. However, recent advances in computer, sensor, network, robotic, and mobile device hardware have brought that horizon much closer. In short: transistor density and connectivity per chip, along with network complexity, have crossed a critical threshold and accelerated the abilities of AI.
22 March 2018 - Marco Damonte (University of Edinburgh) - Natural Language Understanding with Abstract Meaning Representation
Abstract Meaning Representation (Banarescu et al., 2013), or AMR for short, is a semantic representation that provides sentences with a deep semantic interpretation. AMR subsumes most of the shallow-semantic NLP tasks that are usually addressed separately, such as named entity recognition, semantic role labeling and coreference resolution. AMR is not an interlingua, but AMR graphs can be exploited for a number of NLP tasks such as machine translation, summarisation and paraphrasing. Text-to-AMR (parsing) and AMR-to-text (generation) systems are, however, still far from producing and consuming sufficiently accurate graphs for downstream applications. Moreover, not much work has been carried out on AMR for languages other than English. In this talk I’ll present my work on addressing these issues.
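For readers unfamiliar with the formalism, the standard illustrative example from the AMR literature encodes "The boy wants to go" as the graph below; note how the variable b is reused so that the boy is both the wanter and the goer:

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```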
1 March 2018 - Wang Ling (Google DeepMind)
1 February 2018 - Johannes Welbl (University College London) - Constructing Datasets for Multi-hop Reading Comprehension Across Documents
Contemporary Reading Comprehension (RC) datasets — SQuAD, TriviaQA, etc. — are dominated by queries that can be answered with a single paragraph or document. However, enabling models to combine pieces of textual information from different sources would drastically extend the scope of RC. In this talk, I will introduce a novel Multi-hop RC task, where a model has to learn how to find and combine disjoint pieces of textual evidence, effectively performing multi-step (alias multi-hop) inference.
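To make the multi-hop setting concrete, here is an illustrative example in the spirit of the task (paraphrased, not quoted from the dataset):

```
Document 1: The Hanging Gardens are terraced gardens in Mumbai.
Document 2: Mumbai is the most populous city in India.
Query:      In which country are the Hanging Gardens located?
Answer:     India (neither document alone suffices; the model must hop through "Mumbai")
```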
18 January 2018 - Horacio Saggion (Universitat Pompeu Fabra) - Mining and Enriching Scientific Text Collections
In the current online Open Science context, scientific datasets and tools for deep text analysis, visualization and exploitation play a major role. I will present a system developed over the past three years for “deep” analysis and annotation of scientific text collections. After a brief overview of the system and its main components, I will present our current work on the development of a bilingual (Spanish and English), fully annotated text resource in the field of natural language processing that we have created with our system. Moreover, a faceted-search and visualization system to explore the created resource will also be discussed.
I will take the opportunity to present further areas of research carried out in our Natural Language Processing group.
7 December 2017 - Miquel Espla-Gomis (Universitat d'Alacant) - Identifying insertion positions in word-level machine translation quality estimation
Machine translation (MT) quality estimation (QE) is the task of predicting the quality of a translation produced by an MT system without having a reference translation. At the sentence level, quality is usually estimated in terms of the effort required to fix the translation, trying to predict metrics such as translation error rate (TER) or post-editing time. At the word level, QE is usually tackled as the task of identifying which words in the translation need to be replaced or deleted. The main advantage of word-level MT QE over sentence- or document-level MT QE is that it can help post-editors to focus their attention on those parts of the translation that need to be fixed. However, with the current approach of only identifying the words that need to be fixed, post-editors using word-level MT QE could be overlooking missing words. In order to improve the performance of such systems, we propose an approach capable of identifying both the words that need to be deleted and the positions where one or more words need to be inserted. The work presented compares different types of simple neural network architectures that build on different sources of bilingual information in order to provide such predictions. The results obtained not only confirm the feasibility of the proposed approach, but also show that reasonably high performance on both tasks can be obtained using relatively simple architectures.
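As a rough illustration of the kind of simple architecture the abstract refers to, the sketch below (assuming PyTorch; the windowed feed-forward design and all sizes are invented for illustration, not the authors' architecture) scores each token for keep/delete and each inter-word gap for whether an insertion is needed:

```python
# Sketch of word-level QE with insertion positions: a sentence of n tokens
# yields n keep/delete decisions plus n+1 insert/no-insert decisions, one
# for each gap between tokens (including the sentence edges).
import torch
import torch.nn as nn

class WordLevelQE(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        # Token classifier sees the token plus its left/right neighbours.
        self.token_clf = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))
        # Gap classifier sees the two tokens surrounding each insertion point.
        self.gap_clf = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 2))

    def forward(self, tokens):                     # tokens: (seq_len,) ids
        e = self.emb(tokens)                       # (seq_len, emb_dim)
        pad = torch.zeros(1, e.size(1))            # stand-in for BOS/EOS context
        ctx = torch.cat([pad, e, pad], dim=0)      # (seq_len + 2, emb_dim)
        # One window per token: [left, token, right]
        tok_feats = torch.cat([ctx[:-2], ctx[1:-1], ctx[2:]], dim=1)
        # One window per gap (seq_len + 1 gaps): [left, right]
        gap_feats = torch.cat([ctx[:-1], ctx[1:]], dim=1)
        return self.token_clf(tok_feats), self.gap_clf(gap_feats)
```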
16 November 2017 - Zeerak Waseem (The University of Sheffield) - Why the F*ck do You Talk Like That?
Over the past year, abusive language detection has received a surge of interest from the NLP community. In spite of this surge in interest, very little work grounds itself in social-scientific theories of abusive language. In addition, little work deals with the social contexts surrounding abusive statements, or with bridging the gaps introduced by switching between different social contexts.
The paper examines Bostrom’s notion of Superintelligence and argues that, although we should not be sanguine about the future of AI or its potential for harm, superintelligent AI is highly unlikely to come about in the way Bostrom imagines.
2 November 2017 - Emem Rita Usanga (Bnkability) - Rethinking how deal investment is raised in Africa using NLP
With a $100bn annual infrastructure funding deficit over the next 10 years and a population anticipated to double by 2045, the need for infrastructure across the African continent is pressing. Governments acknowledge that this can only be addressed in partnership with private investors. The problem: international private investors often argue that there is a lack of bankable projects in Africa.
This is an interactive session where we present our challenges in the application of NLP to our business solution and attendees propose possible solutions.
26 October 2017 - NLP Student Talks
Chiraag Lala - Multimodal Lexical Translation
Inspired by the tasks of Multimodal Machine Translation and Visual Sense Disambiguation we introduce a task called Multimodal Lexical Translation (MLT). The aim of this new task is to correctly translate an ambiguous word given its context - an image and a sentence in the source language. To facilitate the task, we introduce the MLT datasets, where each data point is a 4-tuple consisting of an ambiguous source word, its visual context (an image), its textual context (a source sentence), and its translation that conforms with the visual and textual contexts. The dataset has been created from the Multi30K corpus using word-alignment followed by human inspection for English to German and English to French language directions. These datasets form a very valuable multimodal and multilingual language resource with several potential uses including evaluation of lexical disambiguation within (Multimodal) Machine Translation systems.
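To picture the data format, one data point can be sketched as below (a hypothetical example in Python; the field values are invented for illustration, not taken from the actual MLT datasets):

```python
# One MLT data point as a 4-tuple: ambiguous word, visual context,
# textual context, and the context-appropriate translation.
from typing import NamedTuple

class MLTExample(NamedTuple):
    source_word: str      # the ambiguous source-language word
    image_path: str       # visual context
    source_sentence: str  # textual context
    translation: str      # target word consistent with both contexts

ex = MLTExample(
    source_word="glasses",
    image_path="images/0001.jpg",
    source_sentence="A man wearing glasses reads a book.",
    translation="Brille",  # German: spectacles, not drinking glasses ("Gläser")
)
```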
Fernando Alva-Manchego - Sentence Simplification via Sequence Labeling
Text Simplification aims to modify the content and structure of a text in order to make it easier to read and understand. At the sentence level, several rewriting operations can be performed to achieve this goal: replacing complex words or phrases with simpler synonyms, deleting unimportant content, splitting the sentence, etc. Most research treats sentence simplification as machine translation (MT), with complex and simple as source and target languages, respectively. In this talk, we will first present an in-depth analysis of the potential and limitations of end-to-end MT-style models using automatic and manual evaluations. To deal with some of the identified problems, we devise a two-step sequence labeling method: (i) identify the simplification operations that need to be performed (if any) on each token of the sentence, and (ii) execute each operation using transformation-specific strategies. We show that this operation-based approach is able to produce simpler texts than end-to-end models.
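A toy sketch of the two-step pipeline follows (in Python; the labels, lexicon and sentence are invented for illustration, and step (i) is hard-coded here where the real system uses a learned sequence labeller):

```python
# Two-step sentence simplification: (i) tag each token with an operation,
# (ii) execute each operation with a transformation-specific strategy.
KEEP, DELETE, REPLACE = "KEEP", "DELETE", "REPLACE"

# Step (i) would be a learned sequence labeller; its output is hard-coded here.
tokens = ["The", "committee", "commenced", "deliberations", "yesterday", "."]
labels = [KEEP,  KEEP,        REPLACE,     REPLACE,         KEEP,        KEEP]

# Step (ii): a toy substitution lexicon implementing the REPLACE strategy.
simpler = {"commenced": "started", "deliberations": "talks"}

def execute(tokens, labels):
    out = []
    for tok, lab in zip(tokens, labels):
        if lab == DELETE:
            continue                       # drop unimportant content
        out.append(simpler.get(tok, tok) if lab == REPLACE else tok)
    return " ".join(out)

print(execute(tokens, labels))  # The committee started talks yesterday .
```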
19 October 2017 - Kris Cao (University of Cambridge) - Latent variable models of language
Behind the observed surface form of language exist underlying structures and themes, such as syntax, topic and utterance intent. In this talk, I will present some work which composes graphical models to learn underlying variables with powerful data likelihood functions to model the observed surface form. One such application is in open-domain dialogue modelling, where the latent variables capture the variation in the possible responses to a user utterance. We show that the latent variable approach generates output that is both diverse and acceptable, as measured by human annotators. Another is extending topic models to learn topics underlying entire sentences, rather than just words. This lets the model learn topics which capture compositional meaning, which a standard word-level model has difficulty doing.
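The core mechanism shared by such models is a latent variable sampled with the reparameterisation trick and a decoder conditioned on it, so that different latent samples yield different but plausible responses. A minimal sketch, assuming PyTorch and toy layer sizes (not the talk's actual models):

```python
# Latent-variable response model sketch: encode the utterance, sample z,
# decode a response conditioned on z, and pay a KL penalty on the latent.
import torch
from torch import nn

enc = nn.GRU(input_size=64, hidden_size=128, batch_first=True)
to_mu, to_logvar = nn.Linear(128, 16), nn.Linear(128, 16)
dec_init = nn.Linear(16, 128)         # map z to the decoder's initial state
dec = nn.GRU(input_size=64, hidden_size=128, batch_first=True)

def respond(utterance_embs, response_embs):
    _, h = enc(utterance_embs)                      # h: (1, batch, 128)
    mu, logvar = to_mu(h[-1]), to_logvar(h[-1])
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterised sample
    out, _ = dec(response_embs, dec_init(z).unsqueeze(0))  # condition on z
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
    return out, kl                                  # out feeds a softmax over words
```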
12 October 2017 - Shashi Narayan (University of Edinburgh) - Text-to-text Generation Beyond Machine Translation
In recent years we have witnessed the achievements of sequence-to-sequence encoder-decoder models for machine translation.
In this talk I will discuss two examples, sentence simplification and document summarization, that explore the hypothesis that tailoring the model with knowledge of the task structure and linguistic requirements leads to better performance. In the first part, I will propose a new sentence simplification task (split-and-rephrase) where the aim is to split a complex sentence into a meaning preserving sequence of shorter sentences. I will show that the semantically-motivated split model is a key factor in generating fluent and meaning preserving rephrasings.
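To make the task concrete, here is an invented split-and-rephrase example (not drawn from the benchmark itself):

```
Input:  Alan Bean, who was born in Wheeler, served as a test pilot before joining NASA.
Output: Alan Bean was born in Wheeler. He served as a test pilot. He then joined NASA.
```

Each output sentence is shorter and simpler, while the sequence as a whole preserves the meaning of the input.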
BIO: Shashi Narayan is a postdoctoral researcher in the School of Informatics at the University of Edinburgh. He obtained his PhD in Computer Science at the University of Lorraine, INRIA under Claire Gardent in 2014. His research focuses on natural language generation and understanding with an aim to develop general frameworks for generation from underlying meaning representation or for text rewriting such as summarization, text simplification and paraphrase generation. He also has experience with parsing and other structured prediction problems.
4 September 2017 - Thushari Atapattu (University of Adelaide) - Discourse Analysis of Educational Big Data
NLP Reading Group
The target audience is all members of the NLP group and any other interested participants.
The meeting takes place weekly for one hour, usually on Tuesdays from 11am to 12pm.
Meetings are informal and no preparation is required, except that the moderator should read the current paper and everyone else should have at least a brief overview of it.
Tuesday 12 June 2018
Chelsea Finn, Pieter Abbeel, Sergey Levine, ICML 2017
Tuesday 10 April 2018
Shen, T.; Lei, T.; Barzilay, R.; Jaakkola, T.
Tuesday 3 April 2018
Zhengli Zhao, Dheeru Dua and Sameer Singh
Tuesday 20 February 2018
ACL Paper submission feedback session
Tuesday 13 February 2018
Edouard Grave, Moustapha Cisse & Armand Joulin
Tuesday 6 February 2018
Alessandro Raganato, Claudio Delli Bovi & Roberto Navigli
Tuesday 30 January 2018
Tim Rocktäschel & Sebastian Riedel
Tuesday 23 January 2018
Supervised Learning of Universal Sentence Representations from Natural Language Inference Data.
Tuesday 28 November 2017
Melvin Johnson, Mike Schuster, Quoc V. Le, et al.
Tuesday 14 November 2017
Grzegorz Chrupała, Lieke Gelderloos & Afra Alishahi
Tuesday 7 November 2017
Hui Lin & Jeff Bilmes
Tuesday 31 October 2017
Nan Duan, Duyu Tang, Peng Chen & Ming Zhou
Tuesday 24 October 2017
Roee Aharoni & Yoav Goldberg
Tuesday 17 October 2017
Hao Cheng, Hao Fang, Mari Ostendorf
Tuesday 10 October 2017
Pang Wei Koh, Percy Liang. Proceedings of the International Conference on Machine Learning, 2017.
Tuesday 3 October 2017
Omer Levy, Minjoon Seo, Eunsol Choi and Luke Zettlemoyer
Tuesday 19 September 2017
Tuesday 29 August 2017
Wen Sun, Arun Venkatraman, Geoffrey J. Gordon, Byron Boots, J. Andrew Bagnell
Proceedings of the 34th International Conference on Machine Learning, PMLR 70:3309-3318, 2017.
Tuesday 22 August 2017
Split and Rephrase, Accepted for EMNLP 2017
Shashi Narayan, Claire Gardent, Shay B. Cohen and Anastasia Shimorina
Tuesday 15 August 2017
Attention Is All You Need
Tuesday 8 August 2017
Dzmitry Bahdanau, Tom Bosc, Stanisław Jastrzębski, Edward Grefenstette, Pascal Vincent, Yoshua Bengio
Tuesday 1 August 2017
Learning to Generate Textual Data, EMNLP 2016
Tuesday 11 July 2017
Yusuf Aytar, Carl Vondrick, Antonio Torralba
Tuesday 4 July 2017
Xingxing Zhang, Mirella Lapata
Tuesday 27 June 2017
Junhua Mao, Jonathan Huang, Alexander Toshev, Oana Camburu, Alan Yuille, Kevin Murphy
Tuesday 20 June 2017
Understanding the BPE algorithm
Tuesday 13 June 2017
Ron J. Weiss, Jan Chorowski, Navdeep Jaitly, Yonghui Wu, Zhifeng Chen
Tuesday 6 June 2017
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
Tuesday 30 May 2017
Wang Ling, Dani Yogatama, Chris Dyer, Phil Blunsom
Tuesday 9 May 2017
Tuesday 6 May 2017
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin
Tuesday 2 May 2017
Tuesday 25 April 2017
Mert Kilickaya, Aykut Erdem, Nazli Ikizler-Cinbis, Erkut Erdem
Tuesday 18 April 2017
Neural Tree Indexers, EACL 2017
Tuesday 11 April 2017
Tuesday 4 April 2017
Tuesday 28 March 2017
Tuesday 21 March 2017
Tuesday 14 March 2017
Kim et al. (2016): Examples are not Enough, Learn to Criticize! Criticism for Interpretability, NIPS 2016
Tuesday 7 March 2017
Kris Cao and Stephen Clark
Tuesday 28 February 2017
Tuesday 21 February 2017
Tuesday 14 February 2017
by Fabio Petroni, Luciano Del Corro and Rainer Gemulla
Tuesday 7 February 2017
Takeru Miyato, Andrew M. Dai, Ian Goodfellow
Tuesday 31 January 2017
Tim Vieira and Jason Eisner
Tuesday 24 January 2017
Oriol Vinyals, Charles Blundell, Tim Lillicrap, Koray Kavukcuoglu, Daan Wierstra
Tuesday 17 January 2017
Learning Structured Predictors from Bandit Feedback for Interactive NLP. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL). Berlin, Germany
Artem Sokolov, Julia Kreutzer, Christopher Lo, Stefan Riezler
Tuesday 13 December 2016
Marc Dymetman, Guillaume Bouchard, Simon Carter
Tuesday 6 December 2016
Rong Ge, Jason D. Lee, Tengyu Ma
Tuesday 29 November 2016
Compositional Semantic Parsing on Semi-Structured Tables
Tuesday 22 November 2016
Minimum Risk Training for Neural Machine Translation
Tuesday 15 November 2016
Generation from Abstract Meaning Representation using Tree Transducers
Tuesday 1 November 2016
Visual Representations for Topic Understanding and Their Effects on Manually Generated Labels. Transactions of the Association for Computational Linguistics, 2016.
Tuesday 25 October 2016
Tuesday 11 October 2016
A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task
Tuesday 4 October 2016
Ultradense Word Embeddings by Orthogonal Transformation
Tuesday 7 June 2016
Not All Character N-grams Are Created Equal: A Study in Authorship Attribution.
Tuesday 31 May 2016
Riedel, S., Yao, L., McCallum, A., & Marlin, B. M. (2013)
Tuesday 10 May 2016
Tuesday 3 May 2016
A New Corpus and Imitation Learning Framework for Context-Dependent Semantic Parsing
Tuesday 22 April 2016
Sequence Level Training with Recurrent Neural Networks
Tuesday 22 March 2016
"Distributed Representation of Sentences and Documents"
Tuesday 8 March 2016
AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes
Tuesday 23 February 2016
From Word Embeddings To Document Distances
Tuesday 16 February 2016
Tuesday 9 February 2016
Tuesday 25 January 2016
Multi-Perspective Sentence Similarity Modeling with Convolutional Neural Networks
Tuesday 19 January 2016
Tuesday 12 January 2016
Tuesday 8 December 2015
Using Discourse Structure Improves Machine Translation Evaluation.
And here are the author's slides
Tuesday 1 December 2015
Practical Bayesian Optimization of Machine Learning Algorithms. Advances in Neural Information Processing Systems, 2012.
Related presentations/lecture slides:
My reading group presentation slides
Tuesday 24 November 2015
Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks. ACL 2015.
Additional resource about LSTM: "Anyone Can Learn To Code an LSTM-RNN in Python"
Tuesday 17 November 2015
More details on auto encoders for unsupervised pre-training:
Tuesday 10 November 2015
Tuesday 3 November 2015
Tuesday 27 October 2015
It might help to read this NLP primer.
Tuesday 20 October 2015
Teaching Machines to Read and Comprehend. NIPS 2015.
Slides (presented at LXMLS)
NAACL 2013 Tutorial "Deep Learning without Magic"
EMNLP 2014 Tutorial "Embedding Methods for NLP"
Entailment with Neural Attention (better description of attention models than in the NIPS paper in my opinion)
Tuesday 13 October 2015
A large annotated corpus for learning natural language inference. Proceedings of EMNLP 2015.
Should compare this to work on (multilingual) textual similarity
Funded Research Projects
Resources: Group member resources