International Review of UK ICT Research 2006


Overview of Posters and Demos

ICCS

constraint-based sentence compression - an integer programming approach

James Clarke

Abstract: The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or large-scale resources. The proposed approach yields results comparable to, and in some cases superior to, the state of the art.
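To make the formulation style concrete, here is a toy sketch rather than the model from the poster: one binary variable per word decides whether that word is kept, the objective rewards keeping high-scoring words, and linear constraints stand in for linguistic restrictions. The word scores, the length bound and the single dependency constraint below are all hypothetical, and the open-source PuLP solver is used purely for illustration.

    from pulp import LpProblem, LpVariable, LpMaximize, lpSum

    words = ["He", "said", "that", "the", "report", "was", "released", "yesterday"]
    score = [0.2, 0.9, 0.1, 0.3, 0.8, 0.7, 0.9, 0.4]   # hypothetical per-word relevance scores

    prob = LpProblem("sentence_compression", LpMaximize)
    keep = [LpVariable("keep_%d" % i, cat="Binary") for i in range(len(words))]

    # objective: retain the most informative words
    prob += lpSum(score[i] * keep[i] for i in range(len(words)))

    # illustrative "linguistic" constraints
    prob += lpSum(keep) <= 4           # compress to at most half of the original length
    prob += keep[6] <= keep[4]         # keep the verb "released" only if "report" survives

    prob.solve()
    print(" ".join(w for w, k in zip(words, keep) if k.value() == 1))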

Further information

STANDUP: a system to augment non-speakers' dialogue using puns

Helen Pain

Abstract: The STANDUP project explores how humour may be used to help non-speaking children learn to use language more effectively, through providing opportunities for language play. Starting from our previous research on the automated generation of punning riddles, we have designed and implemented a large-scale, robust, interactive, user-friendly pun-generator which allows the user to experiment with the construction of simple jokes. The STANDUP system was designed in consultation with potential users (children with communication and physical disabilities) and suitable experts, was rigorously engineered using public domain linguistic data, has a special-purpose child-friendly graphical user interface, and has been evaluated with real users.

Further information

modelling human parsing, syntactic priming, collaborative actions: interdisciplinary research with psychology and linguistics

Frank Keller

Abstract: This poster describes three areas of cognitive modeling that involve researchers in Informatics, Psychology, and Linguistics. These include information-theoretic models that account for prediction in human parsing, corpus-based studies of syntactic priming and parallelism (the tendency to repeat syntactic structures), and work on collaborative tasks involving humans and robots. The use of eye-tracking, an experimental approach that provides detailed measures of human cognitive processing, underlies all three areas.

Further information

statistical machine translation

Miles Osborne

Abstract: The Edinburgh Statistical Machine Translation Group consists of a dozen faculty, postdocs, PhD students and visitors. We build frameworks for translating from any language to any other, with a particular emphasis on Arabic to English, Chinese to English and between all the major European languages. Our current research areas include factored translation models, trillion-word language modelling, large-scale discriminative approaches to translation, and methods for dealing with low-density languages. We also enter all of the major international translation competitions, frequently outperforming companies and other universities.

Further information

machine learning of dialogue management policies

Oliver Lemon

Abstract: We are investigating machine learning methods for dialogue management policies. In the TALK project (an EU FP6 project), we have developed a novel combination of reinforcement learning and supervised learning, which allows us to learn an entire dialogue policy from a fixed corpus of human-machine dialogues. We have also developed user simulations for use in automatic evaluation and optimization of policies. Experiments with human users have demonstrated the advantages of the learned policy over a state-of-the-art hand-coded policy. In a forthcoming EPSRC project, "End-to-end Integrated Statistical Processing for Context-Aware Dialogue Systems", we will extend this work by developing tractable and effective techniques for the integrated treatment of uncertainty in context-aware dialogue systems, for example using Partially Observable MDPs.
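As a minimal illustration of the general idea of learning a dialogue policy by reinforcement (a toy stand-in, not the TALK project's actual combination of supervised and reinforcement learning): tabular Q-learning over a two-slot form-filling dialogue, trained against a crude hand-written user simulation. The states, actions, rewards and probabilities below are all hypothetical.

    import random
    from collections import defaultdict

    ACTIONS = ["ask_slot1", "ask_slot2", "confirm"]

    def simulate(state, action):
        # crude user simulation: asking for a slot fills it 80% of the time
        s1, s2 = state
        if action == "ask_slot1" and random.random() < 0.8:
            s1 = True
        if action == "ask_slot2" and random.random() < 0.8:
            s2 = True
        if action == "confirm":
            # episode ends; reward depends on whether both slots were actually filled
            return None, (20 if s1 and s2 else -10)
        return (s1, s2), -1   # small per-turn cost encourages short dialogues

    Q = defaultdict(float)
    alpha, gamma, eps = 0.1, 0.95, 0.1

    for _ in range(5000):
        state = (False, False)
        while state is not None:
            a = random.choice(ACTIONS) if random.random() < eps else \
                max(ACTIONS, key=lambda x: Q[(state, x)])
            nxt, r = simulate(state, a)
            target = r if nxt is None else r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
            Q[(state, a)] += alpha * (target - Q[(state, a)])
            state = nxt

    # learned action for each dialogue state
    print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in
           [(False, False), (True, False), (False, True), (True, True)]})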

Further information

IPAB

overview

Bob Fisher

Abstract: The Institute of Perception, Action and Behaviour (formed 1998) focuses on how to link computational perception, representation, transformation and generation processes to external worlds. The external world may be the "real" world or another computational environment with its own character. This issue arises throughout IPAB's research: video sequence analysis, 3D shape capture and analysis, interacting agents in computer games or video, flexible and tolerant robot control, learning-based robot control, and biomimetic robotics, particularly for various insects. This poster shows some example images from our research results.

Further information

Structure inference for Bayesian multisensory perception and tracking

Timothy Hospedales

Abstract: We investigate a solution to the problem of multi-sensor perception and tracking by formulating it in the framework of Bayesian model selection. Humans robustly associate multi-sensory data as appropriate, but previous theoretical work has focused largely on purely integrative cases, leaving segregation unaccounted for and unexploited by machine perception systems. We illustrate a unifying, Bayesian solution to multi-sensor perception and tracking which accounts for both integration and segregation by explicit probabilistic reasoning about data association in a temporal context. Unsupervised learning of such a model with EM is illustrated for a real world audio-visual application.
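A bare-bones sketch of the underlying principle (not the poster's full temporal model): given one auditory and one visual location cue, the marginal likelihood of a single shared source is compared with that of two independent sources, and integration versus segregation follows from this model comparison. The noise levels and prior below are hypothetical.

    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    sigma_a, sigma_v, sigma_prior = 2.0, 1.0, 10.0   # assumed cue and prior noise levels

    def evidence_same(xa, xv):
        # integrate over a single latent source location shared by both cues
        f = lambda s: norm.pdf(xa, s, sigma_a) * norm.pdf(xv, s, sigma_v) * norm.pdf(s, 0, sigma_prior)
        return quad(f, -50, 50)[0]

    def evidence_diff(xa, xv):
        # each cue has its own latent source, so the integrals factorise
        ea = quad(lambda s: norm.pdf(xa, s, sigma_a) * norm.pdf(s, 0, sigma_prior), -50, 50)[0]
        ev = quad(lambda s: norm.pdf(xv, s, sigma_v) * norm.pdf(s, 0, sigma_prior), -50, 50)[0]
        return ea * ev

    for xa, xv in [(1.0, 1.5), (1.0, 12.0)]:          # nearby cues versus far-apart cues
        e1, e2 = evidence_same(xa, xv), evidence_diff(xa, xv)
        print("cues %.1f, %.1f: P(same source) = %.2f" % (xa, xv, e1 / (e1 + e2)))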

Further information

Combining behavioural and robotic studies of insect sensorimotor control

Finlay Stewart

Abstract: Biorobotics research in IPAB focuses on understanding sensorimotor control in the context of the complex system composed of the brain, body and environment. We study insects because, despite their relatively small brains, they are capable of interesting behaviours that solve problems using efficient strategies. The aim of each project is to produce a sensorimotor controller which can function on a real robot, acting as a working model of the equivalent neural pathways in the animal. Using robots as well as simulations ensures that the controllers produced are robust to noise in real-world sensor data, and can correctly control the motion of the robot when constrained by real-world physics.

Further information

IPAB connections with industry

Matt Howard

Abstract: The Institute of Perception, Action and Behaviour has a variety of links to industry. These include collaborative research projects as well as spin-off technology transfer companies. This poster illustrates current links with the research efforts of well-known companies such as Honda and Microsoft as well as new companies started by former IPAB members. The diversity of these links reflects the broad range of research conducted in the institute, which is highly valued by our commercial counterparts.

Further information - Honda
Further information - Edinburgh Robotics
Further information - Microsoft
Further information - Dimensional Imaging

realistic nonparametric 3D surface completion

Toby Collins

Abstract: Real 3D scene acquisition usually requires the ability to scan scene objects completely, but this is normally impossible due to access restrictions or physical limits. The research presented here shows how one can use the scanned portion of an object to hypothesise unscanned portions, whether holes or the complete back surface. The key idea is to grow the known surface outward across a hypothesised underlying surface, using matching sample neighbourhoods from the observed front surface. The poster shows a reconstruction of the Leaning Tower of Pisa, where the floors, ridges and cupolas are realistically reconstructed, even though no model of the building was used. The approach can be extended to multiple scales and to incorporate colour.

Further information

ICSA

enhancing the performance predictability of grid applications with patterns and process algebras

Murray Cole

Abstract: The Enhance project aims to simplify the efficient programming of Grid systems by exploiting results from two underlying research programmes. Skeleton-based programming recognises that many real parallel applications draw from a range of well-known solution paradigms and seeks to make it easy for an application developer to tailor such a paradigm to a specific problem without re-inventing the wheel. Meanwhile, stochastic process algebras such as PEPA are used to model the behaviour of concurrent systems in which some aspects of behaviour are not precisely predictable. By modelling our skeletons with PEPA, and thereby being able to include aspects of uncertainty which are inherent to Grid computing, we are able to underpin run-time systems which make better scheduling and rescheduling decisions than less sophisticated approaches.

Further information

industrial collaboration with ICSA

Nigel Topham

Abstract: Most of the research undertaken in the Institute for Computing Systems Architecture has direct relevance to the embedded computing industry. There is a history of close interaction between researchers in ICSA and the embedded computing industry in the UK and Europe in particular. ICSA is currently involved in: two major collaborative European projects; two smaller projects involving Engineering Doctorate students at ARM (Cambridge) and Critical Blue (Edinburgh); and a close collaboration with ARC International in the area of new high-performance low-power embedded microprocessor architectures. This poster describes how ICSA research results have fed into startup activities, and explains how the long-term research collaboration with ARC International has cross-fertilized both the product developments at ARC and the research activities in ICSA.

Further information

Using machine learning to focus iterative optimization

Mike O'Boyle

Abstract: Iterative compiler optimization has been shown to outperform static approaches. This, however, comes at the cost of large numbers of evaluations of the program. This paper develops a new methodology to reduce this number and hence speed up iterative optimization. It uses predictive modelling from the domain of machine learning to automatically focus search on those areas likely to give the greatest performance. This approach is independent of search algorithm, search space or compiler infrastructure and scales gracefully with the size of the compiler optimization space. Off-line, a training set of programs is iteratively evaluated and the shape of the spaces and program features are modelled. These models are learnt and used to focus the iterative optimization of a new program. We evaluate two learnt models, an independent model and a Markov model, and evaluate their worth on two embedded platforms, the Texas Instruments C6713 and the AMD Au1500. We show that such learnt models can speed up iterative search on large spaces by an order of magnitude. This translates into an average speedup of 1.26 on the TI C6713 and 1.27 on the AMD Au1500 in just 2 evaluations.
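The off-line/on-line split can be sketched very simply; the nearest-neighbour predictor below is a stand-in rather than the independent or Markov models evaluated on the poster. The premise is that programs with similar static features have similarly shaped optimization spaces, so spaces measured off-line suggest which few configurations are worth evaluating for a new program. The feature vectors and the 16-point optimization space here are synthetic.

    import numpy as np

    # hypothetical training data: a feature vector per previously seen program and the
    # speedup measured for every point in a small optimisation space (16 configurations)
    rng = np.random.default_rng(0)
    train_features = rng.random((20, 4))          # 20 programs, 4 static features each
    train_speedups = 1.0 + rng.random((20, 16))   # measured speedup of each configuration

    def focus_search(new_features, k=3, budget=2):
        """Return the `budget` configurations predicted to be best for a new program,
        by averaging the spaces of its k nearest neighbours in feature space."""
        dists = np.linalg.norm(train_features - new_features, axis=1)
        neighbours = np.argsort(dists)[:k]
        predicted = train_speedups[neighbours].mean(axis=0)
        return np.argsort(predicted)[::-1][:budget]   # configurations to actually evaluate

    print(focus_search(rng.random(4)))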

Further information - machine learning
Further information - compilers


Research consortium in speckled computing

D.K. Arvind

Abstract: The Research Consortium in Speckled Computing is a multidisciplinary grouping of computer scientists, electronic engineers, physicists and electrochemists, drawn from five universities, researching the next generation of miniature (5x5x5 mm) mobile computing devices called specks: each speck combines sensing, processing and wireless networking capabilities. The Consortium is vertically integrated, ranging from the design, realisation and integration of the miniature specks, to the efficient organisation of networks of specks - called specknets - as a fine-grained distributed computation platform. The research is funded jointly by the Scottish Funding Council (Strategic Research Development Grant) and the Engineering and Physical Sciences Research Council (Basic Technology Grant), with funding in excess of £5 million for the period 2004-10, and is led by the School of Informatics, University of Edinburgh.

Further information

LFCS

GAMES (games and automata for synthesis and verification)

Julian Bradfield

Abstract: GAMES (Games and Automata for Synthesis and Verification) is a just-finished Framework V Research and Training Network. Its objectives were to develop the theory and practice of the interlinked triad of games, automata and logic, with particular reference to their applications in verification and controller synthesis. This poster outlines the project, and then describes briefly some of the work done at Edinburgh under the project, namely Etessami and Yannakakis' work on Recursive Markov Chains, Bradfield and Kreutzer's work on IF fixpoint logic, and Bradfield, Duparc and Quickert's work on hierarchies in fixpoint logic.

Further information


PEPA: Performance Evaluation Process Algebra

Jane Hillston

Abstract: The PEPA language is a stochastic process algebra used to analyse both natural and artificial systems, such as the biochemical pathways in living organisms and the performance characteristics of computer and communication systems. PEPA is a concise modelling language in the tradition of process algebras such as CCS, which was designed in Edinburgh by Robin Milner, founder of the Laboratory for Foundations of Computer Science. It enriches the CCS language with timing information about the activities performed by the model, enabling the system under study to be analysed both behaviourally and quantitatively.

Further information


Mobility and security

Robert Atkey


Abstract: The Mobility and Security Group is engaged in building secure foundations for the next generation of mobile applications, using proof-carrying code to give mathematical guarantees of program safety.

We aim to provide a new level of code safety assurance by using Proof-Carrying Code. Programs carry with them a mathematical proof of their safety. The code consumer checks the proof supplied with a program and only agrees to run it if this check succeeds. This provides a new level above current techniques such as cryptographic signatures because the proof refers not to an external authority, but to the code itself.

We have developed technology both for the code producer, to generate programs compiled with proofs of their safety, and for the code consumer, to check these proofs and guarantee their soundness. On the producer side, we have developed (with LMU Munich) advanced type systems that guarantee space bounds of functional programs. These guarantees can be packaged as proofs along with the compiled code. On the consumer side, we have guaranteed the soundness of proof checking by formalising a resource-aware logic for Java bytecode in the interactive theorem prover Isabelle. We are currently extending our work to cover more resources and to adapt to Java as a source language.

We have two main application areas: small mobile devices and the Grid. Small mobile devices, such as mobile phones, now have the ability to download and run small Java applications such as games. These devices are very limited in terms of resources such as CPU power and memory space. They also have access to relatively expensive resources such as the network. Knowing beforehand that a downloaded application will have the resources to run and will not be unexpectedly expensive to the user is extremely useful. We have previously completed an EC project "Mobile Resource Guarantees" on applying proof-carrying code to mobile devices. We are now working as part of the EC funded Mobius ("Mobility, Ubiquity and Security") project with 15 other academic and industry partners in the EU on this area.

Grid computing aims to commoditise super-computing-level processing power and large shared databases. The Grid is built upon code being executed by untrusting hosts. Of particular importance is safe use of computing resources such as CPU time and memory space. A rogue piece of code must not be allowed to monopolise a shared resource. Our research in this area is funded by the EPSRC ReQueST grant (Resource Quantification for e-Science Technologies).

Further information

SMOQE: a system for providing secure access to XML

Xibei Jia

Abstract: This poster outlines the results of database research in two areas of data integration. The first concerns SMOQE, the first system to provide efficient support for answering queries over virtual and possibly recursively defined XML views. XML views have been widely used to integrate data, speed up query answering, and above all, enforce XML security, for which virtual XML views are necessary. SMOQE encompasses an array of novel techniques for specifying XML (security) views, and for rewriting, evaluating and optimizing XML queries posed on views without materializing the views.

The second provides a picture of a uniform system for integrating, cleaning, maintaining and securing data. The system supports a number of functionalities, including (a) automated schema mapping via a novel notion of schema embedding, (b) XML data publishing for exporting data from relational databases as XML documents, (c) XML data integration for combining data from multiple distributed and heterogeneous data sources, (d) cleaning integrated data based on a set of new integrity constraints designed for detecting inconsistencies, (e) incremental maintenance of integrated data, and (f) securing integrated data (SMOQE). This is the first uniform system capable of doing almost everything one needs for data integration.

Further information

two themes in digital curation: archiving & annotation

Heiko Mueller

Abstract: This poster presents the results of database research in two areas of digital curation. The first concerns the preservation aspect of curation; it demonstrates a technique for archiving all versions of a scientific database. It allows the efficient retrieval of any version and also permits temporal queries on the history of components of the database. The storage overhead is small, and depends on the amount of change rather than the frequency of changes. The second concerns data annotation, which is an important activity especially in curated biological databases. As yet there is no generic technology for annotating databases and for querying those annotations. Mondrian is a system for annotating relational databases through blocks and colours. The colours represent the annotations, and the blocks allow annotations to be attached to relationships among data elements.

Other areas of database research that are important for digital curation include data provenance, data publishing, data integration and data citation.

Further information

IANC

Bayesian Condition Monitoring in Neonatal Intensive Care

Chris Williams, John Quinn, Neil McIntosh

Abstract: Premature babies in intensive care are monitored continuously, with several different physiological measurements taken per second. These measurements indicate the state of health but are noisy and require a lot of experience to interpret. We model these data probabilistically with a Factorial Switching Kalman Filter, and show that this allows us to make inferences about the state of the baby and the operation of the monitoring equipment.
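To make the modelling idea concrete, here is a deliberately stripped-down sketch rather than the Factorial Switching Kalman Filter itself: a single vital sign is tracked by a Kalman filter, and a hidden switch chooses between a "normal" regime and a high-noise "probe dropout" regime, so an implausible reading is attributed to the equipment and barely moves the estimate of the baby's state. All numbers below are hypothetical.

    import numpy as np
    from scipy.stats import norm

    A, Qn = 1.0, 0.01                         # state transition and process noise
    R = {"normal": 1.0, "dropout": 400.0}     # observation noise for each hidden regime
    prior_switch = {"normal": 0.95, "dropout": 0.05}

    def step(m, P, y):
        """One filtering step: propagate, then weigh each regime by its likelihood."""
        m_pred, P_pred = A * m, A * P * A + Qn
        post, estimates = {}, {}
        for s, r in R.items():
            S = P_pred + r                    # innovation variance under regime s
            K = P_pred / S                    # Kalman gain under regime s
            estimates[s] = (m_pred + K * (y - m_pred), (1 - K) * P_pred)
            post[s] = prior_switch[s] * norm.pdf(y, m_pred, np.sqrt(S))
        z = sum(post.values())
        post = {s: p / z for s, p in post.items()}
        best = max(post, key=post.get)        # crude: collapse onto the most likely regime
        return estimates[best][0], estimates[best][1], post

    m, P = 100.0, 1.0                         # e.g. heart-rate estimate and its variance
    for y in [101.0, 99.5, 30.0, 100.5]:      # the 30.0 reading looks like a probe dropout
        m, P, post = step(m, P, y)
        print(round(m, 1), {s: round(p, 2) for s, p in post.items()})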

Further information

A probabilistic model for matching white matter tracts reconstructed from group diffusion MRI data

Jon Clayden

Abstract:  Diffusion magnetic resonance imaging (dMRI), a recently developed medical imaging technique, provides a rich source of structural information about the connectivity of the living brain.  Fibre tracking algorithms, which work from dMRI data, are capable of reconstructing the pathways of axon bundles in the brain, potentially allowing clinical studies of white matter pathology to focus on specific susceptible structures.  However, segmentations produced by fibre tracking presently lack consistency between individuals.  Here we describe a probabilistic model for the shape relationships between comparable tracts, with the aim of using it to maximise the similarity between segmented tracts in group dMRI data and a predefined reference tract - thus improving segmentation consistency.

Further information

Surround modulation by long-range lateral connections in an orientation map model of primary visual cortex development and function

Judith S. Law and James A. Bednar

Abstract: Neuronal response properties are often smoothly topographically organised across the cortical surface. The prototypical example is the map of orientation preference in primary visual cortex (V1). Many models of orientation map development have been very successful in reproducing the features of biological maps. The majority of these models are based on a principle of short-range excitatory and long-range inhibitory connections between neurons, e.g. von der Malsburg, 1973, Swindale, 1992, Obermayer et al., 1990, and the LISSOM model, Sirosh and Miikkulainen, 1997. However, biological data suggest that long-range connections between V1 neurons arise primarily from putatively excitatory pyramidal cells (Gilbert & Wiesel, 1989, Hirsch & Gilbert, 1991, Weliky et al., 1995, Angelucci et al., 2002). Furthermore, simple models with long-range excitation and short-range inhibition have shown how a biologically realistic circuitry can reproduce features of adult V1 function such as extra-classical receptive field phenomena (Schwabe et al., 2006). These models of adult function suggest that long-range excitatory connections are facilitatory when input is at low contrast, yet stronger activation of local inhibitory neurons at high contrast will cause these connections to act suppressively. Previous developmental map models with long-range inhibitory connections are therefore unable to account for aspects of surround modulation. However, it is not yet clear how such circuits can arise, which parts of the system are plastic, or in general how to reconcile these findings with otherwise successful developmental models such as LISSOM. We present the first model which is consistent with this realistic connectivity, yet also reproduces the features of successful developmental models of topographic map formation. Future work will address how this connectivity can lead to surround modulation both in adult V1 and throughout development.

Further information


DTC

neuroinformatics doctoral training centre

Mark van Rossum

Abstract: This poster gives an overview of the structure, aims, and goals of the NeuroInformatics Doctoral Training Centre.

Further information

neuroinformatics: computing and the brain

Mark van Rossum

Abstract: This poster presents a selection of PhD projects done in the NeuroInformatics DTC. It addresses questions such as: How does a rat know where it is going? How is a lasting neural memory made? How does human memory work? And how do we see at low contrast?

Further information



IGS

informatics graduate school

Don Sannella

Abstract: There are currently 271 students studying for a postgraduate research degree in Informatics, nearly all of them for a full-time PhD. There are robust arrangements in place for supervision and progress reviews, and students have access to a range of specialist courses and short transferable skills training courses. Networking amongst students and researchers is facilitated by Research Institutes, by formal and informal interdisciplinary groupings, and by subsidised social activities.

Further information

CSTR

ami and amida: meeting browsers and remote meeting support

Jean Carletta

Abstract: The AMI Consortium develops new technologies to aid groups that hold meetings. The AMI project concentrated on ways of using archives of face-to-face meetings, and AMIDA will contribute aids for people who need to attend a meeting, but can't be together. Edinburgh's main contributions are in project coordination, data collection, speech processing, and language technology.

Further information

speech recognition: novel approaches

Simon King

Abstract: In this poster, we describe some of the novel approaches to speech recognition that we are investigating. These complement the more mainstream work being carried out in the AMI project, described elsewhere. The motivations of our novel approaches come from two inadequacies of current approaches (Hidden Markov Models of phonemes): Describing speech as a linear string of phonemes is inadequate and causes many problems for statistical modelling; Modelling speech directly in an acoustic observation space makes separation of classes very difficult.

To avoid the problems of the phonemic representation, we are working in two quite different directions. The first has strong linguistic motivations based on properties of speech production and includes work with articulatory measurement data and articulatory/phonetic features, all of which are factored (multi-stream) representations. To build statistical models of such representations, we use Dynamic Bayesian Networks. Our second, more recent, direction has purely "engineering" motivations: we model speech as a string of graphemes; this is linguistically implausible (especially for English), but avoids the need for pronunciation dictionaries (which are poor representations of natural, spontaneous speech); accuracies using grapheme models are already almost as good as for phoneme models. This is further evidence that phonemes are inadequate.

Instead of classification using Gaussian mixture models of acoustic observations (which are derived from the short term spectrum of the speech signal), we are looking at alternative techniques, including features based on class posterior probabilities, produced by some classifier (usually a neural network). This approach is not new in itself, but our novel contribution is to consider what classification task this neural network should be performing: conventionally, this is always phoneme classification, but we are looking at articulatory/phonetic features and graphemes as alternatives.

The above research is complemented by more theoretical work and by application-driven work. We are developing theory for learning the sub-word unit inventory (rather than pre-specifying it as phonemes or graphemes, for example) and for learning graphical model structure and the structure of precision (i.e. inverse covariance) matrices. Both of these topics involve automatically selecting models of appropriate structure and complexity for the data, to optimise classification performance. On the applications side, we are testing our models in areas including multi-lingual speech recognition and audio search. Both of these areas stand to benefit from using sub-word units other than phonemes.

Further information

SysBio

Edinburgh Centre for Bioinformatics

Yulia Matskevich

Research in the Computational Systems Biology group is focused on kinetic and static modelling of biological processes by linking diverse data and models through multiple iterations, from static ab initio models to highly constrained kinetic models that cross multiple scales. Modelling will be supported by the Systems Biology Software Infrastructure, a new integrated platform facilitating the modelling process from databases to knowledge discovery, which is currently under development in our group.

Current research themes of the group are:

Edinburgh Pathway Editor

Anatoly Sorokin

EPE is a visual editor designed for annotation, visualization and presentation of a wide variety of biological networks, including metabolic, genetic and signal transduction pathways. It is based on a metadata-driven architecture, which makes it very flexible in drawing, storing, presenting and exporting information related to the network of interest.

EPE was created as a stand-alone Eclipse application, built on the Eclipse open framework architecture. This enables the development of extensions to enhance the existing capabilities. Specific plug-ins to perform scientific computing and other tasks can easily be incorporated.

Human metabolic network reconstruction and analysis

Stuart Moodie

A better understanding of human metabolism and its relationship with human disease is an important task in human systems biology studies. This project aims to present a metabolic network reconstructed from genome annotation information. A preliminary network was first reconstructed by integrating information from databases such as EMP, KEGG, Brenda and Uniprot and from the literature, resulting in a network with about 3000 metabolic reactions. We have reorganized the reactions into about 50 pathways according to their functional relationships. The disease-related metabolic enzymes were marked for further analysis of their effect on human disease.

Mathematical modelling and large-scale computational simulation of complex biological systems

Stuart Moodie

The purpose of this project is to develop modular open-source software to assist researchers in the building and modelling of circuits. Dynamical systems theory, including bifurcation analysis and global optimisation, is employed to model the evolution of extremely complex biochemical pathways of living organisms, using high-performance, large-scale parallel computational techniques on supercomputers.

Further information

comparative evaluation of the accuracy of reverse engineering gene regulatory networks with various machine learning methods

Adriano V. Werhli

Abstract: We compare the accuracy of predicting gene regulatory networks with three different machine learning methods: (1) relevance networks, (2) graphical Gaussian models, and (3) Bayesian networks. The evaluation is carried out on a cellular signalling network that describes the interaction of 11 phosphorylated proteins and phospholipids in human immune system cells.
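For orientation, the simplest of the three methods can be sketched in a few lines (on synthetic data, not the protein signalling data used in the evaluation): a relevance network scores every pair of variables by the absolute correlation of their measured levels and keeps the pairs that exceed a threshold.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.standard_normal((100, 11))      # 100 samples of 11 proteins (synthetic)
    data[:, 1] += 0.8 * data[:, 0]             # plant one true dependency

    corr = np.abs(np.corrcoef(data, rowvar=False))
    np.fill_diagonal(corr, 0.0)
    edges = np.argwhere(corr > 0.5)            # hypothetical correlation threshold
    print([(int(i), int(j)) for i, j in edges if i < j])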

Further information


eSI & EPCC

understanding human development through gene expression

Jano van Hemert

Abstract: The Developmental Gene Expression Map project aims to design the infrastructure for a pan-European collaborative network on the study of gene expression in early human development. While the project also covers the ethical and biological sides of the infrastructure, in this poster we focus on the ICT-oriented research components that would contribute to a better understanding of human development. These include collaborative experiment planning, spatial-temporal gene expression databases, 3D reconstruction and visualisation, data integration, data mining, integrative biology, computational modelling, and systems biology.

Further information

The e-Science Institute

Anna Kenway

Abstract: The e-Science Institute is the UK's interdisciplinary centre for e-Researchers to meet, work and exchange ideas. Hosted by Edinburgh University, it has already been operating for 5 years and has now been extended to July 2011. The eSI poster records its past activity and describes its current development into a more thematic and research oriented mode.

Further information

OGSA-DAI (joint poster with eSI)

Neil Chue Hong/Konstantinos Karasavvas/Malcolm Atkinson

Abstract:  OGSA-DAI demonstrates the University of Edinburgh's ability to combine research on data access and integration using grid and web service technology and in-house software engineering expertise to create outputs which benefit international research. OGSA-DAI enables diverse heterogeneous data sources to be accessed through uniform interfaces and provides a flexible framework for managing additional processing functionality on the data exposed, reducing overall data transfer. Recent research has focused on designing a pipelining model which enables data integration activities to be orchestrated, and data transferred between them in an efficient manner.

Further information

EPCC leadership in Europe

Neil Chue Hong/Kostas Kavoussanakis

Abstract:  As part of its mission, EPCC is committed to transferring skills and knowledge to UK and European industry.  EPCC coordinates the 21-partner NextGRID project, which seeks to ensure that Europe is a world leader in the next generation of Grid technology. The three-year project envisions the development of an architecture for Next Generation Grids which will enable their widespread use by research, industry and the ordinary citizen.

http://www.nextgrid.org/

With Grid middleware reaching maturity, industrial uptake of Grid solutions is paramount for the European economy. The 74-partner BEinGRID project includes top European Grid, IT and business experts, supporting business experiments as they pilot Grid solutions in diverse market sectors. Instrumental in the design and management of the project, EPCC also leads data management support for the business experiments.

http://www.beingrid.eu/

EPCC outreach to other disciplines

Robert Baxter/George Beckett/Mark Parsons

Abstract: EPCC has an outstanding reputation for providing computing solutions to disciplines across the sciences. We showcase two of our current projects.

EPCC is working with eight other academic and commercial technology providers in ITI Techmedia’s Condition-Based Monitoring (CBM) Programme, investigating the application of CBM technologies to commercial farming. EPCC leads the biological modelling work, applying expertise in software, data management and data analysis to develop key intellectual property for the ITI CBM platform.

http://www.ititechmedia.com/defaultpage131abcde0.aspx?pageID=806

Distributed Grid Storage (DiGS) is a grid application that combines disparate mass storage technologies (e.g. RAID units or SAN systems) into a unified, multi-terabyte 'data grid', capable of meeting the data management challenges of both QCD physics and the wider scientific community.

http://www.gridpp.ac.uk/qcdgrid/

CISA

the open knowledge project

Dave Robertson

Abstract: Our aim is to develop a new form of knowledge sharing that is based not on direct sharing of "true" statements about the world but, instead, on sharing descriptions of interactions. By making interaction specifications the currency of knowledge sharing we gain a context for interpreting knowledge transmitted between peers. The narrower notion of semantic commitment we thus obtain requires peers only to commit to meanings of terms for the purposes and duration of the interactions in which they appear. This lightweight semantics allows networks of interaction to be formed between peers using comparatively simple means of tackling the perennial issues of query routing, service composition and ontology matching. After the first year of the project we have an integrative architecture and an implemented kernel system, supplemented by verification methods and demonstrator applications. This is, to our knowledge, the first single system that shares interaction models in a peer-to-peer style and uses these to coordinate peers in an opportunistic but reliable way. This is a radical departure from the mainstream in terms of the underlying methods of coordination between peers, but it can accommodate mainstream practices. For example, in our interaction modelling language (LCC) we can interpret traditional business process modelling languages; in our interactions we can conscript existing Web services via standard interfaces; and we can obtain adaptive behaviours such as ontology matching and mediation using dynamic modification of interaction models' contexts.

Further information

the helpful environment

Austin Tate

Abstract: The Planning and Activity Management Group within the Artificial Intelligence Applications Institute (AIAI) in the School of Informatics at the University of Edinburgh is exploring representations and reasoning mechanisms for inter-agent activity support. The agents may be people or computer systems working in a coordinated fashion. The group explores and develops generic approaches by engaging in specific applied studies. Applications include crisis action planning, command and control, space systems, manufacturing, logistics, construction, procedural assistance, help desks, emergency response, etc.

Our long term aim is the creation and use of task-centric virtual organisations involving people, government and non-governmental organisations, automated systems, grid and web services working alongside intelligent robotic, vehicle, building and environmental systems to respond to very dynamic events on scales from local to global.

The group is involved in collaborative research projects, programmes, standards and other activities internationally.

Further information


how safe is your pin?

Graham Steel

Abstract: Cash machines (ATMs) and other critical parts of the electronic payment infrastructure contain tamper-proof hardware security modules (HSMs), which protect highly sensitive data such as the keys used to obtain personal identification numbers (PINs). These HSMs have a restricted API that is designed to prevent malicious intruders from gaining access to the data. However, several attacks have been found on these APIs, as the result of painstaking manual analysis by experts such as Mike Bond and Jolyon Clulow. At the University of Edinburgh, a project is underway to formalise and mechanise the analysis of these APIs. We aim to develop techniques that help API designers to specify their systems precisely and check them for flaws. This poster introduces the challenges of the ATM network scenario, and describes our methods for analysing security APIs, using theorem provers, protocol analysis tools, and the PRISM probabilistic model checker.

Further information


