Lecture Schedule (INF5880 - Fall 2012)

Unless otherwise noted, the seminars are bi-weekly and take place in Awk (room 3118), Tuesdays 14.15-16.00. 

Date Speaker Topic Comment
Mon. 13/8 Paul Cook (Melbourne, Australia) Identifying novel word-senses (abstract) Note the time and place: Monday, LTG meeting room (7166), at 14.15.
Fri. 17/8 Julia Hockenmaier (Urbana-Champaign, USA) Automatically describing images with sentences (abstract); What extracting grammars from treebanks can tell us about linguistic theory (abstract) Note the time and day: Friday at 14.15.
Tue. 28/8 Ned Letcher (Melbourne, Australia) gDelta: A Missing Link in the Grammar Engineering Toolchain (abstract)
Tue. 11/9 Roser Morante (Antwerp, Belgium) Processing modality: what next? (abstract) Note the time: 15.15.
Tue. 18/9 Noa Patricia Cruz Diaz (Huelva, Spain) A Machine Learning Approach to Negation and Speculation Detection in Biomedical and Review Texts (abstract)
Tue. 25/9 Arne Skjærholt and Angelina Ivanova PhD project updates
Tue. 9/10 Fredrik Jørgensen (Meltwater, Norway) Notes from “The Real World” (abstract)
Tue. 23/10 Pierre Lison Introduction to Statistical Relational Learning (abstract) Slides of the talk
Tue. 6/11 Linlin Li (Microsoft, Norway) Computational Modeling of Word Senses (abstract)
Tue. 20/11 Lars Klintwall Spontaneous and taught language acquisition: reinforcement, discrimination, and generalization issues (abstract)

Abstracts

Identifying novel word-senses, Paul Cook, 13.08.2012:

Automatic lexical acquisition has been an active area of research in computational linguistics for over 20 years, but the automatic identification of lexical semantic change has only recently received attention. In this talk we present a method for identifying novel word-senses --- senses present in one corpus, but not another --- based on a state-of-the-art non-parametric Bayesian word-sense induction method. One impediment to research on lexical semantic change has been a lack of appropriate evaluation resources. In this talk we further present the largest corpus-based dataset of diachronic sense differences to date. In experiments on two different corpus pairs, we show that our method is able to simultaneously identify: (a) types having taken on a novel sense over time, and (b) the token instances of such novel senses. Finally, we demonstrate that the performance of this method can be improved through the incorporation of social knowledge about the likely topics of new word-senses.
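
To make the corpus-comparison idea concrete, here is a minimal sketch of scoring induced senses by how much more probable they are in a focus corpus than in a reference corpus. This is an illustrative stand-in, not the talk's Bayesian model; the sense IDs and counts are invented:

```python
from collections import Counter

def sense_distribution(sense_assignments):
    """Relative frequency of each induced sense over a corpus's tokens."""
    counts = Counter(sense_assignments)
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()}

def novelty_scores(reference_senses, focus_senses, smoothing=1e-6):
    """Score each sense by how much more probable it is in the focus corpus
    than in the reference corpus; senses (near-)absent from the reference
    but frequent in the focus corpus score highest."""
    p_ref = sense_distribution(reference_senses)
    p_focus = sense_distribution(focus_senses)
    return {sense: p / (p_ref.get(sense, 0.0) + smoothing)
            for sense, p in p_focus.items()}

# Toy example: sense 2 only ever occurs in the newer (focus) corpus.
reference = [0, 0, 1, 0, 1, 0]
focus = [0, 2, 2, 1, 2, 2]
scores = novelty_scores(reference, focus)
print(max(scores, key=scores.get))  # -> 2
```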

Automatically describing images with sentences, Julia Hockenmaier, 17.08.2012:

The ability to describe the entities, events and scenes depicted in an image is a hallmark of image understanding, and could have a major impact on image retrieval. But how can we design algorithms that learn to associate images with sentences in natural language that describe the situations depicted in them?

In this talk, I will introduce a data set for image description, discuss how to evaluate image description systems, and present a system that learns to associate images with sentences that describe them well. Although we believe that this task may benefit from improved object recognition, we show that models that rely on simple perceptual cues of color, texture and local feature descriptors on the image side can do surprisingly well, especially when paired with linguistic representations that capture lexical similarities. We also show that the availability of multiple captions for the same image yields a significant improvement in performance.
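
As a rough sketch of how such an association can be operationalised (hypothetical, not the talk's model: the bilinear score, the feature choices and the dimensions are all assumptions), candidate captions can be ranked for an image with a learned compatibility function between simple image features and bag-of-words sentence vectors:

```python
import numpy as np

def caption_scores(image_feats, caption_feats, W):
    """Bilinear compatibility between one image and candidate captions:
    score(i, c) = i^T W c; higher means the caption fits the image better."""
    return caption_feats @ (W.T @ image_feats)

rng = np.random.default_rng(0)
d_img, d_txt = 64, 300                  # e.g. color/texture histograms vs. bag-of-words
W = rng.normal(size=(d_img, d_txt))     # would be learned from image-caption pairs
image = rng.normal(size=d_img)
captions = rng.normal(size=(5, d_txt))  # five candidate sentences
ranking = np.argsort(-caption_scores(image, captions, W))
print(ranking)                          # candidate captions, best first
```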

What extracting grammars from treebanks can tell us about linguistic theory, Julia Hockenmaier, 17.08.2012:

Linguistic theory is typically driven by the need to explain, and provide semantically adequate analyses for, interesting, yet seemingly infrequent constructions. Lexicalised frameworks such as Combinatory Categorial Grammar (and Tree-Adjoining Grammar) aim to account for these phenomena with a small number of syntactic operations, and hence impose clear restrictions on the kinds of dependencies they can capture. Treebank development, on the other hand, is typically driven by the need to describe common constructions that account for the bulk of the data. Annotation styles such as those used in the Penn Treebank or the German Tiger project also provide mechanisms for dealing with non-standard constructions that require arbitrary dependencies, and allow relatively shallow descriptions of constructions for which a detailed analysis is difficult to obtain. This talk examines what happens when we aim to translate those treebanks into the more rigid representation imposed by CCG, and will address the following questions: Can we recover the information needed to obtain semantically adequate CCG derivations? Is CCG robust enough to cover the variety of constructions encountered in real corpus data? Can the dependencies in the treebanks be captured by CCG?
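
For readers unfamiliar with CCG, the "small number of syntactic operations" can be illustrated with its two core rules, forward and backward application. The toy implementation below (illustrative only, with naive string-based categories) derives "John saw Mary" from the lexical categories NP, (S\NP)/NP and NP:

```python
def forward_apply(fn, arg):
    """Forward application (>): X/Y combines with a following Y to give X."""
    return fn[: -(len(arg) + 1)] if fn.endswith("/" + arg) else None

def backward_apply(arg, fn):
    """Backward application (<): Y combines with a following X\\Y to give X."""
    return fn[: -(len(arg) + 1)] if fn.endswith("\\" + arg) else None

def strip_outer(cat):
    """Drop one pair of outer parentheses (good enough for this toy)."""
    return cat[1:-1] if cat and cat.startswith("(") and cat.endswith(")") else cat

# "John saw Mary":  NP   (S\NP)/NP   NP
vp = strip_outer(forward_apply("(S\\NP)/NP", "NP"))  # saw + Mary -> S\NP
s = backward_apply("NP", vp)                         # John + saw Mary -> S
print(vp, s)  # S\NP S
```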

gDelta: A Missing Link in the Grammar Engineering Toolchain, Ned Letcher, 28.08.2012:

The development of precision grammars is an inherently resource-intensive process; their complexity means that changes made to one area of a grammar often introduce unexpected flow-on effects elsewhere in the grammar which may only be discovered after some time has been invested in updating numerous test suite items. In this talk, I will present the browser-based gDelta tool, which aims to provide grammar engineers with more immediate feedback on the impact of changes made to a grammar by comparing parser output from two different states of the grammar. gDelta makes use of a feature weighting algorithm for highlighting features in the grammar that have been strongly impacted by a modification to the grammar, as well as a technique for performing clustering over test suite items, which is intended to locate related groups of change. These two techniques are used to present the grammar engineer with different views on the grammar to inform them of different aspects of change in a data-driven manner.
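
The comparison can be caricatured in a few lines (an illustrative stand-in, not gDelta's actual weighting algorithm; the rule names are made up): collect bags of parser-output features from the two grammar states and rank features by how much their relative frequency shifts:

```python
from collections import Counter

def feature_deltas(before, after):
    """Compare bags of parser-output features (e.g. rule names) from two
    grammar states; score each feature by the absolute change in its
    relative frequency (a crude stand-in for gDelta's feature weighting)."""
    def rel_freq(bag):
        total = sum(bag.values()) or 1
        return {f: n / total for f, n in bag.items()}
    p, q = rel_freq(Counter(before)), rel_freq(Counter(after))
    return sorted(((abs(p.get(f, 0.0) - q.get(f, 0.0)), f)
                   for f in set(p) | set(q)), reverse=True)

# Toy: rule "hd-cmp" fires less often after the change, "subjh" more often.
old = ["hd-cmp", "hd-cmp", "subjh", "root"]
new = ["hd-cmp", "subjh", "subjh", "root"]
for delta, feat in feature_deltas(old, new):
    print(f"{feat}: {delta:.2f}")
```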

Processing modality: what next?, Roser Morante, 11.09.2012:

Modality and negation are two linguistic phenomena that have recently received attention within the computational linguistics community because they account for extra-propositional aspects of meaning, i.e. meaning beyond propositional content of the type `who does what to whom'. Generally speaking, modality is a grammatical category that allows the expression of aspects related to the attitude of the speaker towards her statements. Negation is a grammatical category that makes it possible to change the truth value of a proposition.

Research on modality and negation in NLP has progressed thanks to the availability of data sets where extra-propositional aspects of meaning are annotated, such as the certainty corpus (Rubin et al. 2007), the ACE 2008 corpus (LDC 2008), the BioScope corpus (Vincze et al. 2008), the FactBank corpus (Saurí 2009), and the annotation undertaken as part of the SIMT SCALE project (Baker et al. 2010). Several competitions have addressed these topics: the CoNLL 2010 shared task focused on hedge detection (Farkas et al., 2010); the *SEM 2012 conference hosted a shared task on resolving the scope and focus of negation (Morante and Blanco, 2012); two pilot tasks of the QA4MRE Lab at CLEF 2011 and 2012 focused on processing modality for machine reading purposes (Morante and Daelemans 2011, 2012); and the latest editions of the BioNLP shared tasks have also included subtasks aimed at detecting negated and modalised biomedical events. Additionally, the journal Computational Linguistics has published a special issue on modality and negation (Morante and Sporleder, 2012).

Although progress has been made in the treatment of modality, there are still many issues to be tackled. In this talk I will summarise the advances made and I will discuss open issues for further research.

A Machine Learning Approach to Negation and Speculation Detection in Biomedical and Review Texts, Noa Patricia Cruz Diaz, 18.09.2012:

Negation detection has gained much attention in recent years, especially in the medical domain. Processing negation can be useful for several NLP applications such as information extraction, opinion mining, sentiment analysis, paraphrasing and recognizing textual entailment. In this short talk, I will present a system based on machine learning techniques that identifies negation and speculation signals and their scopes, both in the biomedical domain and in a review corpus annotated for these phenomena.
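
As a toy illustration of the two-stage setup (cue detection, then scope resolution), and emphatically not the talk's machine-learning system: a lexicon lookup finds candidate cues, and a trivial window rule stands in for the trained scope classifier:

```python
# Stage 1 finds negation cues from a small lexicon; stage 2 labels each
# token as inside or outside the cue's scope (illustrative rules only).
NEGATION_CUES = {"no", "not", "never", "without", "n't"}

def detect_cues(tokens):
    return [i for i, tok in enumerate(tokens) if tok.lower() in NEGATION_CUES]

def scope_labels(tokens, cue_index, window=4):
    """Label tokens following the cue (up to a window, stopping at
    punctuation) as in-scope. A real system would use a trained sequence
    classifier with lexical and syntactic features."""
    labels = ["O"] * len(tokens)
    for i in range(cue_index + 1, min(cue_index + 1 + window, len(tokens))):
        if tokens[i] in {".", ",", ";"}:
            break
        labels[i] = "I-SCOPE"
    return labels

tokens = "The drug did not improve survival .".split()
for cue in detect_cues(tokens):
    print(cue, scope_labels(tokens, cue))
```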

Notes from "the Real World", Fredrik Jørgensen, 09.10.2012:

In this talk, I'll present my experiences from working with NLP on large volumes of data in a software company. I'll use the company I work for, Meltwater, as a basis, and start by presenting who we are and what we do, both in general and on the NLP side: how we work, what technologies we use, how we gather resources, etc. Then I'll talk about my impression of how bosses and customers see NLP: what the important factors are, how they evaluate NLP products, etc. I'll also talk briefly about my experiences with other NLP companies, how they present themselves and how I view them. I'll conclude the talk with a relief (for you, at least): some things are more "real" in academia than in the so-called "Real World".

Introduction to statistical relational learning, Pierre Lison, 23.10.2012:

Most machine learning approaches (used e.g. in NLP) represent their problem in terms of a fixed set of features that can take a predefined range of possible values. Such a representation works fine for many domains, but it is also quite impoverished: from a logical perspective, its expressive power is limited to propositional logic.

There exists a large class of learning problems that are best represented in terms of entities that can have various relations with each other - that is, in terms of a relational structure. For instance, an indoor environment can be described in terms of a set of physical places that are associated with specific attributes (roomType(x), area(x), etc.) but that are also connected with each other by various spatial relations (adjacentTo(x,y), contains(x,y), etc.). The number of these entities can of course vary from instance to instance.
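
The abstract's indoor-environment example can be written down directly as such a relational structure (an illustrative encoding): entities carry attributes, relations link them, and queries quantify over entities rather than over a fixed-length feature vector:

```python
# The indoor-environment example as a relational structure.
entities = {
    "r1": {"roomType": "kitchen", "area": 12.0},
    "r2": {"roomType": "corridor", "area": 6.5},
    "r3": {"roomType": "office", "area": 9.0},
}
relations = {
    ("adjacentTo", "r1", "r2"),
    ("adjacentTo", "r2", "r3"),
    ("contains", "r3", "desk1"),
}

# Unlike a fixed-length feature vector, queries range over the entities:
corridor_neighbours = [y for (rel, x, y) in relations
                       if rel == "adjacentTo"
                       and entities[x]["roomType"] == "corridor"]
print(corridor_neighbours)  # -> ['r3']
```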

Fortunately, there exists a subfield of machine learning which is precisely concerned with learning models that are defined by such rich relational structures. This field is generally called "statistical relational learning" (SRL), and is currently one of the most actively researched areas of machine learning. In this talk I will focus on a specific approach called Markov Logic Networks, which elegantly combines probabilistic models with first-order logic. SRL is particularly interesting for NLP researchers because language is precisely defined by highly articulated relational structures. SRL techniques have recently been applied successfully to practical NLP problems such as reference resolution, obtaining state-of-the-art results.
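
In a Markov Logic Network, each first-order formula carries a real-valued weight, and a possible world x has probability proportional to exp(sum_i w_i * n_i(x)), where n_i(x) counts the satisfied groundings of formula i and w_i is its weight. A minimal sketch with two hand-grounded, illustrative formulas:

```python
import math

def world_score(world, weighted_formulas):
    """Unnormalised MLN score of a possible world:
    exp(sum over formulas of weight * number of satisfied groundings)."""
    return math.exp(sum(w * sum(1 for g in groundings if g(world))
                        for w, groundings in weighted_formulas))

# Two hand-grounded, illustrative formulas over boolean facts:
#   1.5 : adjacentTo(r1,r2) -> sameFloor(r1,r2)
#   0.8 : roomType(r1,office)
world = {"adjacentTo(r1,r2)": True,
         "sameFloor(r1,r2)": True,
         "roomType(r1,office)": False}
formulas = [
    (1.5, [lambda w: (not w["adjacentTo(r1,r2)"]) or w["sameFloor(r1,r2)"]]),
    (0.8, [lambda w: w["roomType(r1,office)"]]),
]
print(world_score(world, formulas))  # exp(1.5*1 + 0.8*0) ~ 4.48
```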

Computational Modeling of Word Senses, Linlin Li, 06.11.2012:

Word sense disambiguation (WSD) is the task of automatically determining the correct sense for a target word given the context in which it occurs. WSD is an important problem in NLP and an essential preprocessing step for many applications, including machine translation, question answering and information extraction. In this talk, we introduce a probabilistic topic model which explores sense priors for word sense disambiguation. We evaluate our approach on SemEval word sense shared tasks and show improvements compared with existing systems. We then move on to an evaluation study, presenting new findings on how to evaluate word senses when the sense inventory is not pre-specified, i.e., word sense induction evaluation.
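
The role of sense priors can be illustrated with a naive-Bayes-style decision rule (a hypothetical stand-in for the talk's topic model; all probabilities below are invented): the chosen sense maximises log P(sense) plus the summed log-likelihood of the context words:

```python
import math

def disambiguate(context_words, sense_priors, word_given_sense):
    """Pick the sense maximising log P(sense) + sum of log P(word | sense);
    a naive-Bayes stand-in for a topic-model approach with sense priors."""
    def score(sense):
        return math.log(sense_priors[sense]) + sum(
            math.log(word_given_sense[sense].get(w, 1e-6))
            for w in context_words)
    return max(sense_priors, key=score)

# Toy model for the target word "bank" (all numbers illustrative):
priors = {"finance": 0.7, "river": 0.3}
likelihoods = {
    "finance": {"money": 0.2, "loan": 0.1, "water": 0.001},
    "river": {"water": 0.2, "shore": 0.1, "money": 0.001},
}
print(disambiguate(["water", "shore"], priors, likelihoods))  # -> "river"
```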

Spontaneous and taught language acquisition: reinforcement, discrimination, and generalization issues, Lars Klintwall, 20.11.2012:

Most children acquire expressive and receptive language with remarkable speed. This has often been explained by the ad-hoc postulation of a rather mysterious neurological “language acquisition device”. However, applying learning principles derived from laboratory animal research, language learning can be analyzed as similar to any other skill acquisition (e.g. motor skills): as a combination of reinforcement learning and imitation. Experimental support for this analysis comes from teaching children with autism. These interventions are based on breaking down complex language into component parts and teaching these separately and repetitively. Most research has been on simple naming (e.g. of objects or activities), and has focused on how to achieve correct discrimination (telling the difference between a dog and a cat) and generalization (using a word for novel exemplars from a category, i.e. novel dogs). Additionally, instant and effective feedback (reinforcement) has been found to be crucial. Related to this, but not identical, is research on teaching language comprehension (e.g. following instructions). This presentation will thus review two topics: (1) spontaneous language learning in typical children, and (2) methods for teaching language to children with autism.

By Erik Velldal
Published Aug. 8, 2012 3:37 PM - Last modified Nov. 15, 2012 3:36 PM