Previous Computational Thinking Seminars 

Wednesday 12th December 2012, 4pm
G.07/A Informatics Forum
Chris Speed Mimicking Object Agency: A Digital Arts Approach

This seminar speculates upon a future context in which objects will begin to talk to us and even give us instructions. The purpose of the research is to anticipate a time when correlations are found between the data sets associated with different objects, and the objects themselves are used to impart this ‘new’ knowledge back to us. Such an occasion may be considered to represent a form of agency.
Located within the technical and cultural context of the Internet of Things, this talk introduces a lineage for our relationship with objects: from (1) Read Only, through (2) Read and Write, to (3) Read, Write and Act. The talk uses a series of projects developed by my colleagues and me that offer a brief history of this continuum, from the EPSRC-funded Tales of Things project to the iPhone app Take Me I’m Yours, a working but speculative design project mimicking the conditions in which objects may talk to us. The talk closes by speculating on a design fiction in which object databases begin to identify associations and propose ‘actions’ to a user.

Wednesday 25th March 2009
4 pm
Informatics Forum,
Room G.03
Michael Edwards Computational Thinking in Music Composition

Despite the still-prevalent but essentially nineteenth-century perception of the creative artist, an algorithmic approach to music composition has been in evidence in western classical music for at least one thousand years. The history of algorithmic composition, from both before and after the invention of the digital computer, will be presented along with musical examples from the distant and recent past. The author's own work will then be placed in this context, focussing upon recent compositions for instruments and computer created with custom software developed in Common Lisp.

Wednesday, 6 August 2008
4 pm
Informatics Forum,
Room 4.31/4.33
Ed Hopkins Modelling How People Learn in Games

Computing Nash equilibria in games is complex. Yet economists still hope that such equilibria can be used to predict human behaviour in a range of economic and social situations. One possible reason for the validity of this approach is that, by following simple adaptive rules, people can find their way to equilibrium. Recent advances in the theory of learning in games show that this can be the case. The talk will be an introduction to the theory and to some remaining unresolved issues.
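
To make the idea of a simple adaptive rule concrete, here is a minimal illustrative sketch (not material from the talk; Python with numpy assumed, and the game and learning rule are chosen purely for illustration) of fictitious play in matching pennies: each player best-responds to the empirical frequency of the opponent's past actions, and the empirical play frequencies approach the mixed equilibrium (1/2, 1/2).

    import numpy as np

    # Payoffs for matching pennies: the row player wins when the choices match.
    A = np.array([[1, -1], [-1, 1]])   # row player's payoffs
    B = -A                             # column player's payoffs (zero-sum)

    counts_row = np.ones(2)            # how often the row player has chosen each action
    counts_col = np.ones(2)            # how often the column player has chosen each action

    for _ in range(20000):
        belief_col = counts_col / counts_col.sum()   # row player's belief about the column player
        belief_row = counts_row / counts_row.sum()   # column player's belief about the row player
        r = int(np.argmax(A @ belief_col))           # each side best-responds to its belief
        c = int(np.argmax(belief_row @ B))
        counts_row[r] += 1
        counts_col[c] += 1

    # Empirical frequencies approach the mixed Nash equilibrium (1/2, 1/2).
    print(counts_row / counts_row.sum(), counts_col / counts_col.sum())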

Wednesday, 12 March 2008
4 pm
Appleton Tower
Level 6
Andrew Wallace A Short History of Computer Vision:
From Reason to Process

In the 1960s, "Making a computer see was something that leading experts in the field of Artificial Intelligence thought to be at the level of difficulty of a summer student's project." In the 2000s it is still difficult for a computer vision system to tell the difference between a cup and a saucer, except under constrained conditions. What went wrong?

Although a system which mimics the human visual capability is still some way off, modern image analysis or computer vision systems perform a wide range of computational tasks that are well beyond human capability, making significant progress in applications from medical and industrial image analysis to guidance systems for autonomous vehicles. In this talk, I'll try to explore the principal drivers that have led to a shift in thinking about how computational image analysis systems should be implemented and assessed, leading to the current explosion in systems which perform more constrained, and useful, forms of computer vision. However, at a time when literature surveys tend to concentrate only on the recent past, it is worth asking whether the vision systems of today have an earlier, neglected provenance.

Wednesday, 13 Feb 2008
4 pm
Appleton Tower
Level 6
Simon Kirby Language, Culture and Computation:
the adaptive systems approach to the evolution of language

A fundamental idealisation underpinning much of the work in linguistics over the past half century is the focus on the "ideal speaker-hearer in a completely homogeneous speech-community" (Chomsky 1965). This individual-based idealisation contrasts dramatically with the kinds of approaches taken in other natural sciences, most obviously in evolutionary biology with its focus on populations. This difference of approach has become increasingly apparent given the recent explosion of interest in the origins and evolution of human language - our species' defining characteristic.

The idealisations of linguistics on the one hand and evolutionary biology on the other have made awkward bedfellows. In response to this, over the past 15 years a programme of research has been underway in Edinburgh taking a different approach, aiming to develop models of language more suited to an evolutionary perspective. Instead of focusing on a single individual, we look at dynamic populations of many individuals. The aim of this research is not to explain specific properties of language in detail, but rather to understand what happens in general when multiple interacting and evolving agents are brought together. To do this, we rely on computational simulation, using techniques from machine learning, evolutionary computation, and multi-agent modelling.

In this talk, I will take you through some of this work and show how the use of computational modelling has brought to the fore the importance of cultural evolution, in particular, in understanding human language. Purely by virtue of being repeatedly learned by generations of agents, language adapts. The result is a structured system that appears designed, yet no designer is required. I will suggest that ultimately we may wish to think of cultural transmission itself as a computational process. This may have implications beyond language - for example for music and other aspects of human culture.
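
By way of a toy illustration only (not one of the Edinburgh models; the meaning space, alphabet and learner bias below are invented for the sketch), an iterated-learning chain can be simulated in a few lines: each generation induces a full meaning-signal mapping from a bottlenecked sample of the previous generation's output, and the mapping drifts toward a regular system that no agent designed.

    import random
    from collections import Counter

    random.seed(1)
    MEANINGS = [(i, j) for i in range(3) for j in range(3)]   # a 3 x 3 space of meanings
    ALPHABET = "abcdef"

    def random_language():
        """Generation zero: an arbitrary, unstructured pairing of meanings and signals."""
        return {m: random.choice(ALPHABET) + random.choice(ALPHABET) for m in MEANINGS}

    def learn(observations):
        """Induce a full language from a partial sample of (meaning, signal) pairs."""
        first, second = [{v: Counter() for v in range(3)} for _ in range(2)]
        for (i, j), signal in observations.items():
            first[i][signal[0]] += 1      # which characters co-occur with each feature value
            second[j][signal[1]] += 1
        def pick(counter):
            return counter.most_common(1)[0][0] if counter else random.choice(ALPHABET)
        learned = {(i, j): pick(first[i]) + pick(second[j]) for (i, j) in MEANINGS}
        learned.update(observations)      # remembered examples override generalisation
        return learned

    language = random_language()
    for generation in range(30):
        seen = dict(random.sample(sorted(language.items()), 5))   # bottleneck: 5 of 9 meanings
        language = learn(seen)

    print(language)   # typically far more regular than the initial random language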

Wednesday, 16 Jan 2008
4 pm
Appleton Tower
Level 6
Steve McLaughlin Signals, Information and Sampling

The traditional view of how fast to sample a signal to enable accurate reconstruction, or how much information is contained in a transmission, was laid down in the 1940s by Shannon and Nyquist, and engineers have followed these guidelines ever since when designing signal processing or communication algorithms. Recent developments by Donoho and Candes offer an alternative to the conventional data acquisition/data compression strategies used in most systems. The conventional approach utilizes Shannon-Nyquist sampling to obtain the data, followed by data-adaptive compression or source coding techniques that exploit redundancies in the signal itself. The computational complexity is usually weighted heavily in favour of the receiver rather than the transmitter, since the primary application of conventional data compression schemes is broadcast networks. In compressed sensing, random samples and/or simple random combinations of the data are formed, quantized and transmitted. The receiver attempts to reconstruct the original data by exploiting the redundancies in the signal and knowledge of the particular random combination that has been used. The promise of compressed sensing is that the data rate between transmitter and receiver might be significantly reduced and the computational complexity transferred from the transmitter to the receiver.
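
A rough illustrative sketch of the compressed-sensing idea (not taken from the talk; numpy assumed, and the greedy reconstruction used here, orthogonal matching pursuit, is just one of several recovery methods): a sparse signal is recovered from far fewer random linear combinations than conventional sampling would use.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 80, 5                     # signal length, measurements, sparsity

    x = np.zeros(n)                          # a k-sparse signal
    support = rng.choice(n, k, replace=False)
    x[support] = rng.standard_normal(k)

    Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    y = Phi @ x                              # m << n random combinations are "sensed"

    # Orthogonal matching pursuit: greedily rebuild x from y using its sparsity.
    residual, chosen = y.copy(), []
    for _ in range(k):
        chosen.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, chosen], y, rcond=None)
        residual = y - Phi[:, chosen] @ coef

    x_hat = np.zeros(n)
    x_hat[chosen] = coef
    print(np.max(np.abs(x - x_hat)))         # close to zero: recovered from m < n samples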

A key concern in the development of wireless sensor networks is the complexity of the signal processing algorithm that can be implemented at each sensor and the amount of information that can be communicated between sensors. These constraints (notwithstanding Moore's law) are determined by the power capability of the sensor, its energy efficiency and ultimately the cost.

These two issues raise some interesting questions, which will be explored in the talk, about how we should approach the design and implementation of wireless sensor networks.

Wednesday, 12 Dec 2007
4 pm
Appleton Tower
Level 6
William A. Mackaness Structuring Geographic Information to support Semantic and Visual 'Zoom'

The cartographic map has played an important role in many aspects of the GeoSciences, leading Hartshorne (1939) to argue that if the problem cannot be viewed in the form of a map, then it is not Geography! The paper map reflected the collective state of geographical knowledge at that instant in time. Computational techniques (beyond merely revolutionising the cartographic discipline) have created a paradigm shift such that the map is now the window through which we interact with, and explore, geographic information - and the database reflects the knowledge state. In combination with remote sensing techniques, such systems have played a critical role in 1) bringing about a global view of the world's problems, and 2) challenging traditional visualisation techniques, replacing them with a continuum of complementary techniques that span from the highly abstract, through exploratory techniques, to immersion in virtual worlds. So powerful are these techniques that Geographers have solved all the world's problems and have nothing left to do.

Wednesday, 14 Nov 2007
4 pm
Appleton Tower
Level 6
David Glasspool Joining up computation and thinking in clinical medicine

Clinical medicine has perhaps been slower than some fields to feel the full impact of information technology, but it's starting to make up for lost time. However great the impact of computerised patient records and booking systems may be on clinical practice, it seems unlikely that they will revolutionise clinical thinking.

On the other hand, the analysis of clinical practice and clinicians' thought has a great deal to contribute to general theories of thinking. Theories of judgement, decision, planning and plan execution, and of their integration in goal-directed behaviour, all stand to gain from the rich context of clinical medicine.

These, in turn, have pointed the way to more radical uses of computing technology within clinical practice. And it is these which, finally, do have the potential to dramatically impact clinicians' thinking.

Wednesday, 17 Oct 2007
4 pm
Appleton Tower
Level 6
Leslie S. Smith Neuronal computing or computational neuroscience: brains vs computers

Animal brains are extraordinarily complex, as are modern computers. Both process information, though in quite different ways. Neural systems have been modelled on computers for many years: yet although this has shed light on many aspects of the information processing carried out by brains, it has not enabled us to build software (or hardware) to give computers the same abilities as animal brains. Why is this? Is there a qualitative difference between brain and computer processing? We suggest that the underlying differences at the lowest levels pervade the nature of their system capabilities. Computers are built directly out of binary logic elements, whereas animal brains operate at multiple levels, based on a multitude of ionic channels and multiple types of modulator molecule, some large and localised, others small and diffusing. We suggest that the ways in which these interact are qualitatively different from the way in which logic gates interact.

Wednesday, 16 May 2007
4 pm
Appleton Tower, Room 3.05
Theodore Scaltsas Computers in Aid of Philosophers

Using the computer to do some of our thinking entered the philosophical domain through formal logic, as automated proof checking/proving/teaching software (with many contributions from Edinburgh Informatics!). But it stayed there. (I am interested in the use of computers rather than of computer models.) The problem is ‘formal language’, which is not the language most philosophers use. So we started, in Archelogos, with the humble ambition of finding ways in which computers could aid us in organising and presenting argument structures, and thereby in comprehending them. But in the course of 15 years, we ventured into experimental projects of dialoguing argumentatively with the computer, of dynamic argument trees, and eventually even attempted an automated reasoning programme for philosophers (with Dov Gabbay, King's). I will talk about our experience and results from our projects. Interestingly, of these, the initial Archelogos Project is the one most abundantly used in the profession.

Wednesday, 25 April 2007
4 pm
Appleton Tower, Room 3.05
Simon Buckingham Shum Digital Research Discourse?

We conduct research in order to make and contest claims about the world in principled ways, contributing to an ongoing research discourse. Although the internet has accelerated the pace at which research results can be disseminated, its impact to date has been evolutionary rather than revolutionary when we consider how new claims are expressed and peer reviewed, or how we analyse literatures. Equally, computers have made little impact on the research discourse of meetings, such as workshop debates, project meetings or student supervision. In this talk I consider some of the socio-technical possibilities and challenges for making research discourse more digital.

Wednesday, 21 March 2007
4 pm
Appleton Tower, Room 3.05
Burkhard Schaefer Of stormy wooing and quiet seduction - the ambivalent relationship between computational thinking and the law

Lawyers have been influenced by computational models of reasoning since long before the existence of computers or the concept of “computational”. From Rabbi Loew’s Golem to Asimov’s Three Laws of Robotics, the mutual fascination of legal and computational modes of thinking had an impact far beyond academic legal writing and permeated popular culture and literature from an early stage. Entire jurisprudential schools elevated computational models of legal decision making to a – possibly unachievable – ideal. And entire legal systems adopted these philosophies wholesale. Despite, or maybe because of, this high-profile affinity between computational thinking and certain types of legal thought (the “stormy wooing”), the impact that computational models had on legal practice was rather mixed. Using two recent research projects in Edinburgh as examples (the POIROT project on fraud investigation and the SHERLOCK project on crime scene investigation), we will see that only recently, through a much more clandestine process (the “quiet seduction”), have computational models begun to change the way in which legal practitioners think about their work.

Wednesday, 28 February 2007
4 pm
Appleton Tower, Room 3.05
Paul Madden Computation and computational thinking in chemistry

Thanks to advances in methodology and computing power, it has become a matter of routine to perform atomistic calculations in order to explore and illustrate chemical phenomena and even to provide data which it is inconvenient to obtain from experiment. The majority of chemistry publications now involve a calculation which would have been regarded as the province of theoreticians and high performance computing just a few years ago. However, this does not mean that all problems are solved, as the processes of interest may require studies of much larger systems or longer timescales than can be handled with “first-principles” methods which accurately describe the chemical interactions. Computation has played other roles in chemical thinking, however. Optimisation and searching algorithms have guided new strategies in experimental chemistry for identifying the best chemicals for a particular purpose or for improving the reaction conditions in order to improve yields. Informatics methods, based upon large databases of experimental and other information, are used to identify target drugs in particular. In the talk I will attempt to illustrate these various themes.
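
Purely as a hedged illustration of that last point (the 'yield' surface and conditions below are invented, and real work uses far more sophisticated optimisers and design-of-experiments methods), a simple stochastic search over reaction conditions might look like this:

    import random

    random.seed(0)

    def measured_yield(temp_c, conc_m):
        # Hypothetical response surface with an optimum near 80 C and 0.5 M.
        return 100 - 0.02 * (temp_c - 80) ** 2 - 150 * (conc_m - 0.5) ** 2

    temp, conc = 25.0, 1.0                       # starting conditions
    best = measured_yield(temp, conc)
    for _ in range(2000):                        # simple stochastic hill climbing
        t_new = temp + random.uniform(-2, 2)
        c_new = conc + random.uniform(-0.05, 0.05)
        y = measured_yield(t_new, c_new)
        if y > best:                             # keep a change only if yield improves
            temp, conc, best = t_new, c_new, y

    print(round(temp, 1), round(conc, 2), round(best, 1))   # approaches 80 C, 0.5 M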

Wednesday
24 January 2007
4 pm
Appleton Tower, Room 3.05
Anthony Maciocia Artificial Mathematics: a Mathematician's Perspective

There has been a huge amount of effort put into the development of computational systems to do mathematics. While computer algebra systems, in particular, have made a big impact on mathematics, systems to help with proof and the development of mathematical ideas have not. In this seminar, I want to give my own perspective on this problem and suggest some ways in which the situation might be improved. I shall look at issues of ontology, aesthetics, culture and modern trends in mathematical research.

Thursday
7 December 2006
4 pm
Lecture Theatre 1
Aaron Sloman 'Ontology extension' in evolution and in development, in animals and machines

A distinction can be made between definitional and substantive ontology extension. How the latter is possible is a deep question for AI, psychology, biology and philosophy. All information-processing systems have direct access only to limited sources of information. For some systems it suffices to detect and use patterns and associations found in those sources, including conditional probabilities linking input and output signals (a 'somatic' sensorimotor ontology). Sometimes it is necessary to refer beyond the available data to entities that exist independently of the information-processing system, and which have properties and relationships including causal relationships that are not definable in terms of patterns in sensed data (an 'exosomatic' ontology). This is commonplace in science: scientists postulate the existence of genes, neutrinos, electromagnetic fields, chemical valencies, and many other things because of their explanatory role in theories, not because they are directly sensed or acted on. Does this also go on in learning processes in infants and hatchlings that discover how the environment works by playful exploration and experiment? Is 'ontology extension' beyond the sensor data also set up in the genome of species whose young don't have time to go through that process of discovery but must be highly competent at birth or hatching? Is there anything in common between the different ways ontologies get expanded in biological systems? This relates to questions about what a genome is, and about varieties of epigenesis, as well as to the varieties of learning and development that need to be considered in AI/cognitive science/robotics/psychology.

This work extends the paper presented with biologist Jackie Chappell at IJCAI-05 on 'The Altricial-Precocial Spectrum for Robots'

It is part of the theoretical work being done on the EU-funded CoSy robot project. It challenges the current emphasis on such themes as symbol-grounding, sensorimotor-based cognition, and the role of embodiment in constraining cognition. It explains how we can be mathematicians, scientists and philosophers as well as animals with bodily functions and competences.

Wednesday 14 June 2006
4 pm
AT 3.05
Peter Ghazal Computational thinking on pathway medicine

This lecture will give a biologist-centric view of the lack of a comprehensive theory of life's organisation at the molecular and cellular level, and of how computational thinking may help fill this gap. Genes, made of DNA, contain the instructions for producing proteins, made of amino acids, which carry out most functions in the cell. Some proteins can bind to DNA, turning particular genes on or off. This interplay, which is one way that cells regulate themselves, may not be too different from how electronic circuits function, with one transistor turning another on or off.
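
To make the circuit analogy concrete, a minimal illustrative sketch (not from the lecture) of a three-gene Boolean network, in which each gene's protein represses the next and states are updated in synchronous steps like a clocked circuit:

    # Each gene is ON exactly when its repressor is OFF; the ring of repressors
    # cycles through activity patterns much like an electronic ring oscillator.
    def step(state):
        a, b, c = state
        return (not c, not a, not b)

    state = (True, False, False)
    for t in range(8):
        print(t, ["ON" if g else "off" for g in state])
        state = step(state)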

If we think of a cell as a computer, then it might be possible to chart the pathways in our network of interacting molecules that are important for keeping us healthy, as well as to understand when a pathway goes awry in disease. I shall discuss efforts toward mapping biological pathways so that they become an invaluable resource to be read and understood by biologists and physicians, with the view that they must also be readable by machines.

In pathway medicine, computational thinking is not only vital for formalising mechanisms but also for inspiring the building of computing devices out of biomolecules. Imagine a smart DNA pill that can be downloaded by cells to sense disease arising from a faulty network in a pathway. Such smart pills will need to compute information from the cell. The unification of disciplines, in the form of DNA computing elements as medicine, will lead to the emergence of an entirely new and exciting field of medical science.

Wednesday 31 May 2006
4 pm
AT 3.05
Richard Brown Art, Creativity, Innovation and Experimental Science: Alternative ways of thinking, challenging paradigms and pushing boundaries

Are Science and Art two worlds apart, and what happens when they collide?

In the fields of complexity and emergence, can the analogue and the electrochemical offer a richer and wider landscape than that of the digital?

Joseph Beuys created batteries from copper, felt and fat; was this some form of Pataphysics? Where was Gordon Pask heading with his ferrite growths, and did Pask's Ear really move to music? Nikola Tesla wanted to make electricity free and transmit it through the air; was this such a brilliant idea? If Chemistry represents the essential distillation of Alchemy, what is Informatics?

These are some of the areas Richard will touch upon in his talk and presentation. He will show examples of interactive installations that embody art and science, carry out some live experiments, and illustrate how non-digital interactive processing may have the edge over the digital in generating complexity and emergence.

Richard Brown is the Informatics Research Artist in Residence; with a degree in Computers and Cybernetics and an MA in Fine Art, he has a hybrid background in both Art and Science. Richard will conclude his presentation by outlining the interactive art proposals for the new Informatics Building.

Wednesday 3 May 2006
4 pm
AT 3.05
Richard Kenway Everything is a Computer

The primary objective of physics is to discover the Theory of Everything (TOE). This is a complete mathematical description of the Universe at the most fundamental level. The Universe itself is a realisation of the TOE - in some sense the Universe is a computer and the TOE is its program. The outputs of this program are the phenomena we observe. Computer simulation has become the third methodology of science alongside theory and experiment. It is based on the conjecture that the Universe may be approximated arbitrarily closely by a program which, as applied today, implements a numerical algorithm on a digital computer, although ultimately it may require a quantum computer, or some further generalisation. The accuracy achieved is limited by the correctness of the algorithm, the efficiency of its implementation by the program, and the speed of the computer. The program (equivalently the TOE) can only be discovered through a process of reverse engineering, in which experimental and observational results are compared with the outputs of simulations and the program adjusted until these agree. The complexity of the phenomena we observe in Nature suggests that it can only be fully understood in this way.
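
A toy illustration of this reverse-engineering loop (not from the talk; the 'universe' below is just an exponential decay with a hidden rate, and numpy is assumed): the program's parameter is adjusted until its output agrees with the observations.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 5.0, 50)
    true_rate = 1.3                                    # hidden "law" of the toy universe
    observed = np.exp(-true_rate * t) + 0.01 * rng.standard_normal(t.size)

    def simulate(rate):
        return np.exp(-rate * t)                       # the "program" being adjusted

    # Reverse engineering: scan candidate parameters and keep the one whose
    # simulated output best matches the observations.
    candidates = np.linspace(0.5, 2.5, 2001)
    errors = [np.sum((simulate(r) - observed) ** 2) for r in candidates]
    best = candidates[int(np.argmin(errors))]
    print(best)                                        # close to the hidden value 1.3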

Wednesday 12 April 2006
4 pm
AT 3.05
Richard Coyne Thinking Places: Non-Place and Cognition

In this presentation I address the issue of space from the point of view of two disparate sets of theories: those pertaining to concepts of distributed cognition, and those pertaining to non-place. In the process I consider physical environment, social being and cultural setting, and provide pointers for a greater understanding of work environments and for designing mobile technologies that support workers on the move. I also canvass issues of the relationship between spatial architecture and computation.

Wednesday 29 March 2006
4 pm
AT 3.05
Geoffrey Boulton Understanding the Earth

For much of the time since its modern emergence, in Edinburgh and elsewhere, in the late 18th century, geology has been an empirically-based science, creating a narrative of Earth history and accounting for its evolution by linking effects (geological phenomena or reconstructed events) directly with causes (processes that can be observed or readily inferred). The logic tended to be direct, uncoupled and linear. The advent of the computer has permitted us to represent and simulate complexity, to explore the consequences of non-linearity and coupling, and to recognise the potential for self-organisation of complex Earth systems. Douglas Adams was right. The Earth is an analogue computer. Earth scientists now recognise hierarchies of complexity, and, for practical purposes, at each level use boundary conditions set by the level above. An example is the way in which we simulate the lithosphere, cryosphere, oceans, hydrosphere, biosphere and atmosphere. Each can be separately simulated, with prescribed boundary conditions, as if it were a closed system, with lower-order hierarchies of complexity that can be represented by a variety of levels of approximation. Interfacing simplicity and complexity, and undertaking simulation experiments that the Earth will not allow, have both been highly creative processes. We are moving increasingly from low- to high-order models, to explore macro-level emergent behaviour of the Earth system as a whole (of which stratigraphy is a record and Earth System Science the approach), particularly in response to the need to understand the Earth's climate system and its possible futures. It is the use of the computer that has created a new way of thinking amongst geologists and fellow Earth scientists, rather than any reflective tendency, as we tend to be of an active cast of mind. But how did our former, crude logic permit us to get so many things right?

Wednesday 1 March 2006
4 pm
AT 3.05
Andy Clark, Keith Stenning and Barbara Webb Computational Thinkers? A Discussion

Computational thinking is valuable in many areas, from biology to aeronautics, from modeling economies to predicting the weather and the stock market. But it is sometimes claimed to be especially valuable in thinking about the human mind. The root of that special value, according to a number of classic treatments, is that the human brain and CNS, at least in some of its functioning, actually IS a computational device, able to define operations over internal representations of the world. Cognition, according to this conception, actually is (either in whole or in part) computation. The three speakers each offer their own take on this bold (or is it trivial?) assertion, using examples from robotics, control theory, and the study of human reasoning.

Wednesday 8 February 2006
4 pm
AT 3.05
Tim O'Shea Learning about Learning with Computers

Computational models of human learning have had both a direct and an indirect impact on education. Some computer tutors have incorporated explicit models of student learning. Many applications of computers in support of student learning have been motivated by particular computational models. A rough and ready taxonomy of the different approaches to the use of computers in education will be presented. Particular emphasis will be placed on how learners can better learn about their own learning through using computers.

Wednesday 25 January 2006
4 pm
AT 3.05
Bob Mann and Andy Lawrence Astro-informatics: computation in the study of the Universe

Astronomy is arguably the oldest of the sciences, but it continues to flourish today thanks to technological advances in other disciplines. Theoretical astrophysicists have long been demanding users of high performance computing resources, but the more varied and interesting challenges for computer science are arising within the observational side of the subject. Observational astronomy is driven by advances in detector technologies, and modern solid-state detectors underpin the current wave of systematic sky surveys being undertaken across the electromagnetic spectrum and being progressively federated within the growing global Virtual Observatory. The storage, description, integration and analysis of the vast quantities of heterogeneous data being made available within the framework of the Virtual Observatory are providing challenging applications for many areas of computer science. We describe a number of the topics where close collaboration between astronomers and computer scientists is proving to be very beneficial to both sides, and discuss the sociological and technical reasons why astronomy is a particularly good domain for the application of computational thinking.

Wednesday 11 January 2006
4 pm
AT 3.05
Jane Hillston Computational Thinking for Life

Although it has recently become a hot topic, the endeavour of systems biology, to present a systemic rather than reductionist view of biological systems, is not new. Original work dates back to the 1950s and 60s. Moreover, computers and computer-based modelling have played a crucial role from the beginning. However, what has changed in the current emphasis on the topic is the involvement of theoretical computer scientists as well as our more applied colleagues. In this talk I will review the area of systems biology and outline the potential impact of ideas and techniques from computer science, at all stages of the endeavour.

Wednesday 14 December 2005
4 pm
AT 3.05
Mark Steedman and Matthew Stone Plans and the Computational Structure of Language

Linguistics has been a computational science for almost fifty years, since Chomsky first used formal language theory to characterize structure and complexity in natural language. Yet researchers continue to appeal to new kinds of computational thinking as they frame problems and results in the science of language. This talk focuses on two case studies of particular recent interest, whose common goal is to explain language in its broader evolutionary and biological context. The first case study concerns an account of the structure of language. Here computational approaches to agency---which showcase close parallels between language use in dialogue and other kinds of collaborative real-world activity---promise to link the grammatical representations implicated in language use to the more general symmetric representations of one's own and others' real-world actions that are the hallmark of primate social cognition. The second case study concerns an account of language processing. Here computational frameworks for approximate probabilistic inference---informed by striking correlations between the time-course of linguistic processing and the dynamics of uncertainty in the evidence available to the processor---suggest how the mechanisms of language use could simultaneously arise from more general neural or cognitive mechanisms.
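
As a small illustrative aside on the formal-language view mentioned above (the grammar and vocabulary below are invented for the sketch, not the authors' material): a tiny context-free grammar and a random generator for it show how a finite set of rules yields an unbounded set of structured sentences.

    import random

    random.seed(2)
    GRAMMAR = {
        "S":  [["NP", "VP"]],
        "NP": [["the", "N"]],
        "VP": [["V", "NP"], ["V", "S'"]],
        "S'": [["that", "S"]],
        "N":  [["linguist"], ["parrot"], ["theory"]],
        "V":  [["admires"], ["claims"]],
    }

    def generate(symbol="S", depth=0):
        if symbol not in GRAMMAR:                 # terminal word
            return [symbol]
        options = GRAMMAR[symbol]
        if depth > 4:                             # keep embedding finite for the demo
            options = [options[0]]
        expansion = random.choice(options)
        return [w for part in expansion for w in generate(part, depth + 1)]

    for _ in range(3):
        # Prints grammatical (if silly) sentences, some with embedded clauses.
        print(" ".join(generate()))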

Wednesday 9 November 2005
4 pm
AT 3.05
Vicki Bruce About Face

Our understanding of face perception has been influenced quite profoundly by computational developments. These developments have produced application drivers for face perception research (e.g. security, forensics, animation, cosmetics) and produced methods to ask scientific questions that were difficult or impossible to ask before (e.g. to present faces which are a combination of different identities, or to manipulate the dynamic pattern of expressive or other movements). Computational thinking has also helped us theoretically - though it has also created some false starts in terms of thinking about the representations that the brain uses to recognise faces.

Wednesday 12 October 2005
4 pm
AT 3.05
Alan Bundy A Crisis in Mathematics?

Mathematics is facing a crisis which strikes at its foundation: the nature of mathematical proof. We have known, since Turing showed that much of mathematics is undecidable, that there are theorems with short statements but whose simplest proofs are too huge for a human mathematician to grasp in their entirety and so be reassured of their correctness. Within the last half century we have discovered practical examples of such theorems: the classification of all finite simple groups, the four colour theorem and Kepler's conjecture. These theorems were only proved with the aid of a computer. But computer proof is very controversial, with many mathematicians refusing to accept a proof that has not been thoroughly checked and understood by a human. The choice seems to be between abandoning the investigation of theorems with only huge proofs, or changing traditional mathematical practice to include computer-aided proofs. Or is there a way to make large computer proofs more accessible to human mathematicians?
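
As a hedged illustration of what machine-checked proof looks like (an invented example, unrelated to the theorems above): in a proof assistant such as Lean 4, the kernel verifies every inference, and small finite claims can even be settled by computation inside the checker.

    -- A machine-checked proof: the proof assistant's kernel verifies every step,
    -- so correctness does not rest on a human re-reading the whole argument.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

    -- Small finite claims can be discharged by computation within the checker.
    example : 4 * 9 + 6 = 42 := by decide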

