Previous Hamming Seminars 

21 November 2012
Jane Hillston

Over the last twenty years, stochastic process algebras have become an established technique in performance modelling.
I will explain how this development has not just provided an innovative approach to expressing performance models, but has also produced a shift in perspective based on quantified formal methods.

In particular, I will present the original motivations and some of the achievements of the stochastic process algebras, particularly in bridging between two communities that were originally distinct.
I will go on to discuss some of the current challenges in quantified formal methods ranging from biological processes to smart cities, many of which seem to require richer representation of spatial characteristics of systems.

24 October 2012
Leonid Libkin

It is a tradition to refer to Hamming's famous Bellcore talk during Hamming seminars, but I would like to depart from this tradition and concentrate on the complement. Curiously enough, Hamming never talks about beauty in science and research: neither the word nor its synonyms appear in the transcription of his talk. And yet we strive to find a beautiful definition, a beautiful theorem, a beautiful proof. But are those really necessary?  Do they please anyone except the researcher discovering them? Do they have a lasting impact? Are they worth the effort, or is looking for those instances of beauty just one of the little perks we researchers are allowed to have?

You may guess from the way the questions are posed what my answers will be (although I accept that not everyone will agree with me). In this talk I discuss the role of beauty in CS research - or at least in the areas close to my own work.  This may be a slightly unusual Hamming seminar, but I'll use this approach to try to convince you that my field is important, and sometimes beautiful.

23 May 2012
Mirella Lapata
Talk to me in plain English, please!  Explorations in Data-driven Text Simplification

Recent years have witnessed increased interest in data-driven methods for text rewriting, e.g., the problem of reformulating a query to alternative queries, a document in a simpler style, or a sentence in a more concise manner.  In this talk I will focus on text simplification, one of the oldest and best-studied rewriting problems.  The popularity of the simplification task stems from its potential relevance to various applications.  Examples include the development of reading aids for people with aphasia, non-native speakers and more generally individuals with low literacy.

In this talk I will discuss the challenges involved in text simplification and describe recent progress in leveraging large-scale corpora for model training.  I will then present a model that simplifies documents automatically based on a synchronous grammar.  I will explain how such a grammar can be induced from Wikipedia, and describe a model for selecting the most appropriate simplification from the space of possible rewrites generated by the grammar.  Finally, I will present experimental results on simplifying Wikipedia articles showing that this approach significantly reduces reading difficulty, while producing grammatical and meaningful output.
Joint work with Kristian Woodsend

2nd May 2012
Philip Wadler

The foundations of computing lie in a coincidence: Church's lambda calculus (1933), Herbrand and Gödel's recursive functions (1934), and Turing's machines (1935) all define the same model of computation.  Another coincidence: Gentzen's intuitionistic natural deduction (1935) and Church's simply-typed lambda calculus (1940) define isomorphic systems. We review the history and significance of these coincidences, with an eye to Turing's role.
(The same talk will be delivered the following week at Princeton's Turing Centennial Celebration, between talks from Dana Scott and Les Valiant.)

15th February
Charles Sutton
Machine Learning: We Do, and They Don't

Despite the important strides that have been made by machine learning research in the past 20 years, we are still far from the level of sophistication required for a self-sufficient learning system. My work aims to move toward this goal by stretching the boundaries of the set of currently practical applications.
I will describe a few current roadblocks to more and better applications of machine learning. These include handling of streaming data, transfer learning,  and our pernicious inability to divide and conquer.
The format will essentially be a series of PhD-thesis sized topics, which may perhaps be enjoyable for those who are considering  whether to start a PhD, or for those who would like to obtain another one.

23rd November 2011
Subramanian Ramamoorthy

Games Robots Play

Where are yesterday's robots of tomorrow? The optimist might answer that some such robots have already begun to enter the realm of the possible, in the form of self-driving cars, robots that can fold towels, take an elevator to fetch you a sandwich, etc. The pessimist would rightly counter that although there are media clips showing robots doing these things, most of these systems are frightfully far from being ready for prime time (do we really expect to be able to take home an Asimo along with our iPhones?). The realist would note that although the pessimist's objections are valid, a variety of systems ranging from automated vacuum cleaners to unmanned submarines are routinely and gainfully deployed around the world today. Moreover, what about the much larger ecosystem of virtual robots roaming in worlds ranging from eBay and Amazon to NASDAQ?
Siding with the pragmatic realist, I argue that autonomous robots that do succeed and become commonplace will perhaps come in the form of collections of heterogeneous modules that are put together in an ad hoc way to do whatever is the need of the hour. Moreover, people are more likely to accept these robots in mixed-initiative team situations, capable of strategic interaction, rather than as black box unknowns. In this milieu, one technical issue stands out as being particularly thorny - how does one marshal diverse modules into long-lived autonomous systems capable of interacting in such non-trivial ways with a continually changing and rich world?
 A proper answer to this question requires a careful look at foundational issues in the science of *autonomous* decision making and an understanding of the space of *implementable* models for interactive behaviour. Conceptual advances in this area must also be informed by an empirical science of decisions in boundedly rational agents. One way to do this kind of science is by trying to construct complete agents that act in domains that are rich enough to fully exercise the theoretical framework being developed, while being sufficiently well posed to enable incremental experiments. Two such domains are robotic football and autonomous agents in electronic markets. This talk will provide glimpses into why and how AI researchers get robots to play these games, and how this helps us address the big questions.

19th October 2011
Models in software engineering

A model is (for purposes of this talk) an abstract, usually graphical, representation of some aspect of a software-intensive system. Software engineering has, since its invention, involved the use of models. In recent decades, a perceived need for greater automation of processes in software engineering, the wish to develop software faster and more cheaply, and the drive for higher quality have motivated ever more reliance on models.
Languages for modelling and techniques for working with models have struggled to keep pace, hindered in some cases by conflicting requirements arising from the different motivations. We are currently in a dangerous state in which models sometimes endanger rather than support the developers' aims. I will explain some of the problems, and discuss the progress that I think we will/may/should/must/cannot expect to see in the coming years.

28th September 2011
Austin Tate Putting Plans to Real Use

Planning is a key task of importance to governments and organisations of all kinds, and to us all on a personal, family and business level. It is one of the most studied areas in Artificial Intelligence and one that encompasses many aspects and approaches. It has vital and important applications in a wide variety of areas. Planning is only useful if plans are successfully put into real use - i.e. they are executed. This is tricky. But plans can be used to assist in communication and collaboration, including in some of the most dynamic and challenging applications such as international emergency response.  Let's look forward to a future "Helpful Environment" aided by AI planning and knowledge-based technologies - a future in which ubiquitous computing, sensor grids and networked systems combine to help individuals, families, businesses, organisations, the public at large, regions and countries to be self-supporting and mutually helpful, with specialised resources for their daily lives and for help and assistance in emergencies.  This will demand a new generation of researchers and entrepreneurs able to combine the next-generation Internet, semantic web, social networking, agent-based technologies, intelligent planning systems and virtual environments into a new approach.  It will be quite a journey.
Dave Robertson
Formal Reasoning Gets Social

For much of its history, formal knowledge representation has aimed to describe knowledge independently of the personal and social context in which it is used, with the advantage that we can automate reasoning with such knowledge using mechanisms that also are context independent.  This sounds good until you try it on a large scale and find out how sensitive to context much of reasoning actually is.  Humans, however, are great hoarders of information and sophisticated tools now make the acquisition of many forms of local knowledge easy.  The question is: how to combine this beyond narrow individual use, given that knowledge (and reasoning) will inevitably be contextualised in ways that may be hidden from the people/systems that may interact to use it?  This is the social side of knowledge representation and automated reasoning.  I will discuss how the formal reasoning community has adapted to this new view of scale.
Barbara Webb Do bees have Bayesian brains?

Anyone who has tried to build a robot to get around in the real world can only be awed by the capabilities represented by the humble insect. Recent breakthroughs in obtaining partially comparable competence in robotics for problems like sensorimotor control of complex actuators or in unstructured environments, navigation, integration of multiple behaviours, and so on, have substantially depended on the use of powerful probabilistic methods. But can these possibly be implemented in a tiny insect brain; or have they found other, more efficient solutions? I will discuss this and related problems in working out the brain circuits of insects, and in particular how methods from informatics may contribute to answering the questions.
Chris Williams Latent variable models and "deep" learning

In this digital age there is an ever-increasing abundance of raw data (such as images, data streams, text), and a great need for methods to extract meaningful structure from this data. Given the difficulty, expense and limitations of providing annotations for this flood of data, it is vital to focus on *unsupervised* methods for extracting structure. It is natural to think in terms of latent (or hidden) variable models, where the patterns in the observed data are  explained in terms of these hidden "causes".  Having access to such latent representations should make subsequent tasks much easier than working directly on the raw data.

I will start by describing the commonly-used latent variable models which generally have one hidden layer. Although these have been very useful, it is improbable that complex data sources can be fully explained in such simple terms. Recently Hinton et al (2006) proposed a greedy layer-by-layer method for constructing deeper networks. I will discuss this work and the prospects for learning deep latent variable models from data.

Steve Renals Natural Speech Technology

The state-of-the-art in speech recognition and speech synthesis is based on statistical models that may be learned from data. This has resulted in speech technology which works well in certain controlled tasks and domains, but is far from natural. In particular, it has a limited ability to adapt to other talkers, to the acoustic environment, and to the context of the conversation.

In recent years, there have been significant advances in speech technology, including the development of joint models for speech synthesis and speech recognition, the development of new models and learning criteria for speech recognition and synthesis, and speech recognition systems able to function in noisy or reverberant environments. However, despite these advances, a number of weaknesses remain including a close coupling to the scenario of operation, independent development of recognition and synthesis models, a reliance on supervised machine learning, a lack of factorisation of the different information sources in the speech signal, and little dependence on the context of the communication.

In this talk I'll discuss the big problems that need to be addressed if we are to develop speech technology which approaches human adaptability to different communication situations, and suggest how we might tackle these problems.


Elham Kashefi Why physicists and computer scientists should remain partners

Viewing quantum systems through the lens of their computational capabilities has the potential to provide important new insights into these systems.

"What makes quantum systems non-classical?"

Quantum Computing approaches this from the new perspective of how quantum effects can enhance information processing and communication. This is based upon two main ways in which quantum systems exhibit non-classicality: the apparent (exponential) complexity of simulating quantum systems classically, and non-local behaviour due to entanglement. This leads to another fundamental question in physics:

"If computing predictions for Quantum Mechanics requires exponential resources, is Quantum Mechanics a falsifiable theory?"

I will briefly review how computational thinking has contributed to these directions, highlighting many exciting open questions that remain in the field.

Mark Steedman Some Important Problems in Natural Language Processing

In the present state of natural language processing research, statistical models are ubiquitous. Together with the exponential increase in computing power under Moore's Law, they have driven the remarkable progress of the last 40 years in automatic speech recognition (ASR), information retrieval (IR), and statistical machine translation (SMT).

The most successful methods are based on supervised learning from training data labelled by humans. For parsers, these are sets of sentences laboriously annotated with syntactic trees or dependency graphs, the largest of which are currently around 1M words in size. For SMT systems, the training data is parallel text produced by human translators, of which the largest set available is around 200M words. For ASR it seems to be a few thousand hours of transcribed human speech.

However, it is quite clear that these quantities of labelled data are not enough to deliver human levels of performance. Error analyses show that 1M words of treebank is not enough to extract an adequate grammar, or a strong enough parsing model, for robust parsing. 200M words of parallel text is not enough to train reliable translation.

What is more, all of these methods require exponentially increasing amounts of training data to achieve noticeable improvements. No-one will pay for 10M words of treebank. Nor can we wait around for 2B words of Canadian Hansard to appear. Extrapolating present trends to predict the amount of speech data that would be required to bring HMM ASR up to human standard suggests quite impractical amounts around 1M hours.

In the present state of the art, research is accordingly to a great extent focused on learning from unlabelled data. However, truly unsupervised learning of even the lowliest HMM model is a hard task, and learning grammars and parsers is very hard indeed. Two alternatives seem promising in this impasse. One is to try to improve supervised models using unlabelled data, of which there are inexhaustible amounts. For example, one might be able to train parsers using automatically parsed text. Another is to try to make the theory itself more powerful. Parsing might be expected to similarly strengthen language models for SMT and ASR. Improved parsers could be used to build better ontologies and resources to improve inference and the parsing models themselves.

The bottleneck in carrying this forward is the existing parsers, which are too weak to initiate such a virtuous cycle. The talk will argue that the most important problem in NLP is to make parsing more effective, and to find ways to make parsers build meaning representations that can support efficient inference. The talk will discuss some subtler ways in which it might be possible to make use of unlabelled data to improve parsers, together with some more speculative alternatives, including the possibility of discovering new sources of automatically-produced labelled data.

Sethu Vijayakumar New Tools, Old Dreams: The changing face of robotics research in the new millennium

Robotics and Artificial Intelligence have, over the past 50 years or so, provided us with a fascinating and futuristic look ahead to the technology-driven society of tomorrow. It is probably fair to say that in spite of tremendous progress in actuation, robotic hardware, algorithms and sensor design, the end products have yet to match the dramatic visions drawn out by science fiction writers, movie directors and scientists themselves. This exemplifies how we have underestimated the capabilities required for autonomous decision making, robustness and adaptation to novel environments, even for 'simple' tasks such as crossing a road without getting run over. I will look at how some new computational tools (machine learning, Bayesian statistics), modelling techniques (computational motor control) and novel anthropomorphic designs with biomimetic inspiration are leading the way to a new paradigm in building and controlling robots.

Kousha Etessami The Computational Complexity of Games, Equilibria, and Fixed Points

In recent years, the computation of equilibria in markets and games has become an important topic of research in algorithms and complexity. This is primarily because of the central role these problems play in algorithmic game theory, an area at the intersection of economics and computer science. Such research has become increasingly prominent because of its key role in helping us understand and automate internet markets, and more generally to understand the behavior of collectives of self-interested agents on the internet.

Economics researchers have been working on such topics for many years and have well-developed theories for them, but it has become evident that at the scale of the internet, efficient algorithms are indispensable and complexity issues cannot be ignored in such theories. As just one instance of their usage, automated auctions on the internet, like Google's TV ad auction mechanism (devised by a computer scientist), use the computation of a specific kind of market equilibrium in order to determine the allocation of goods to the different bidding agents.

In this talk I will survey some basic aspects of general economic equilibrium theory and game theory, the equilibrium computation problems that arise from them, and the state of the art in our understanding of the complexity of computing equilibria (both market equilibria and Nash equilibria), together with their tight connection to fixed point computation problems for algebraically defined functions, which arise in a variety of different fields.

Mike O'Boyle Opportunity Knocks: Turbulence in Computer Systems

Energy is the current driving force behind architecture and system design and has a significant impact on our discipline. The spendthrift designs of the 80s and 90s, in the bountiful years of Moore's Law growth, have been replaced by a post-boom landscape of energy-conscious design. This has led to the rapid rise of long-awaited parallel and specialised architectures. The trouble is: no-one really knows how to program them, and if they did, they wouldn't be able to do so tomorrow. Disruption in the technology road map causes acute anxiety amongst the major industrial players but is a golden opportunity for researchers with bold ideas. This talk will address some of the key challenges facing us and my totally unbiased view of how we should go about solving them.

David Willshaw Only Connect

The brain is unique amongst the organs of the body in that its major components, the nerve cells, are highly interconnected. These intricate patterns of connectivity enable sophisticated processing within single cells, groups of cells and brain areas, so contributing to the remarkable computational capabilities of the brain, unmatched by any man-made machine.

My research has focussed on both experimental and modelling approaches to understanding the properties of the patterns of neural connectivity: how the connections are formed and how they are used. My current research is on modelling the development of ordered patterns of nerve connections. In this talk I will attempt to explain the use of computational models in this field; to link my work into the field of Informatics; to emphasise the importance of data and the influence of new technology in providing such data; and to place myself at the right distance from Richard Hamming's points of view.

Phil Wadler What are the important problems for programming languages?

Phil discussed a few problems he considered important for the future of programming languages, ranging from the technical to the social:

  • How to program for a distributed and multicore world?
  • How to integrate more and less strongly typed languages?
  • How to program the web?
  • How to make programs an order-of-magnitude more compact?
  • How are programming languages adopted in practice?
  • How to measure the efficacy of a programming language?
  • How to make programming simple enough that children can do it?
  • How to convince the world that programming languages are important?

Please don't come expecting answers---the purpose of the talk is to raise questions!

Alan Bundy Automated Ontology Evolution

Alan started his talk with a brief introduction to the Hamming Seminars series.

Most work on automated reasoning is based on manually-built representations of knowledge. There is, however, suggestive evidence that human representations evolve both between and within episodes of reasoning. Unfortunately, how to automate the evolution of representations is a neglected problem in informatics. Challenges such as managing the interaction of multiple agents, and the co-development of programs and their specifications, have made it urgent to address this problem. Our improved understanding of representations and their relationships to reasoning techniques has made it timely. I will describe and motivate the problem and explain why it is difficult. I will outline some current approaches to this problem in research at Edinburgh and elsewhere.

Last updated: 28/11/12 by SP
