Computing Science and Mathematics
Technical Reports Before 2014

The following Departmental Technical Reports were published before 2014. There is a separate list of reports from 2014 onwards.

Where the author of a report below is highlighted as a link, click on it to contact the author (via home web page or email). Some reports are available for downloading as PDF files.

Title: Handling Emergent Conflicts in Adaptable Rule-Based Sensor Networks (PhD Thesis)
Author(s): Jesse M. Blum
Date: October 2012
Number: CSM-196
Abstract:

This thesis presents a study into conflicts that emerge amongst sensor device rules when such devices are formed into networks. It describes conflicting patterns of communication and computation that can disturb the monitoring of subjects, and lower the quality of service. Such conflicts can negatively affect the lifetimes of the devices and cause incorrect information to be reported. A novel approach to detecting and resolving conflicts is presented.

The approach is considered within the context of home-based psychiatric Ambulatory Assessment (AA). Rules are considered that can be used to control the behaviours of devices in a sensor network for AA. The research provides examples of rule conflict that can be found for AA sensor networks.

Sensor networks and AA are active areas of research and many questions remain open regarding collaboration amongst collections of heterogeneous devices to collect data, process information in-network, and report personalised findings. This thesis presents an investigation into reliable rule-based service provisioning for a variety of stakeholders, including care providers, patients and technicians. It contributes a collection of rules for controlling AA sensor networks.

This research makes a number of contributions to the field of rule-based sensor networks, including areas of knowledge representation, heterogeneous device support, system personalisation and, in particular, system reliability. This thesis provides evidence to support the conclusion that conflicts can be detected and resolved in adaptable rule-based sensor networks.
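As a rough illustration of the kind of rule conflict studied (the representation below is hypothetical and much simpler than the thesis's formalism), consider event-condition-action rules that write different values to the same device attribute:

```python
# Hypothetical illustration: sensor rules as (event, attribute, value) triples,
# flagging pairs whose actions clash on the same device attribute.
from itertools import combinations

rules = [
    ("patient_active", "accelerometer.sample_rate", "high"),
    ("battery_low",    "accelerometer.sample_rate", "low"),
    ("night_time",     "light_sensor.power",        "off"),
]

def conflicts(rule_set):
    """Report rule pairs that write different values to the same attribute."""
    found = []
    for (ev1, attr1, val1), (ev2, attr2, val2) in combinations(rule_set, 2):
        if attr1 == attr2 and val1 != val2:
            found.append(((ev1, attr1, val1), (ev2, attr2, val2)))
    return found

for r1, r2 in conflicts(rules):
    print("potential conflict:", r1, "vs", r2)
```

Whether such a pair actually conflicts at run time depends on whether both rules can be triggered together, which is the harder part of the detection problem.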


Title: Gen-O-Fix: An Embeddable Framework for Dynamic Adaptive Genetic Improvement Programming
Author(s): Jerry Swan, Michael Epitropakis and John R. Woodward
Date: January 2014
Number: CSM-195
Abstract:

Genetic Improvement Programming (GIP) is concerned with automating the burden of software maintenance, the most costly phase of the software lifecycle. We describe Gen-O-Fix, a GIP framework which allows a software system hosted on a Java Virtual Machine to be continually improved (e.g. to make better predictions, to pass more regression tests, or to reduce power consumption). It is the first exemplar of a dynamic adaptive GIP framework, i.e. it can improve a system as it runs. It is written in the Scala programming language and uses reflection to yield source-to-source transformation. One of the design goals for Gen-O-Fix was to create a tool that is user-centric rather than researcher-centric: the end user is required only to provide a measure of system quality and the URL of the source code to be improved. We discuss potential applications to predictive, embedded and high-performance systems.
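As a rough, self-contained sketch of the improve-while-running idea (illustrative only: Gen-O-Fix itself operates on Scala source via reflection, and the quality measure and mutation below are invented for the example):

```python
# Toy dynamic-improvement loop: mutate source text, keep variants that score
# at least as well on a user-supplied quality measure.
import random

tests = [(1, 2), (2, 4), (3, 6)]           # (input, expected output) pairs

def quality(src: str) -> float:
    """User-supplied quality measure: fraction of tests the candidate passes."""
    try:
        f = eval("lambda x: " + src)
        return sum(f(x) == y for x, y in tests) / len(tests)
    except Exception:
        return 0.0

def mutate(src: str) -> str:
    """Crude source-to-source transformation: tweak one digit."""
    return src.replace(random.choice("0123456789"), str(random.randrange(10)))

current = "x * 3"                          # the running system's current code
for _ in range(200):                       # improvement runs alongside the system
    candidate = mutate(current)
    if quality(candidate) >= quality(current):
        current = candidate                # hot-swap the better variant
print(current, quality(current))
```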


Title: Defensive C++: Programming Guidelines for Those Who Dislike Debugging
Author(s): Jerry Swan
Date: November 2012
Number: CSM-194
Abstract:

C++ has a reputation as a difficult and complex language in which one can achieve anything, but with an attendant risk of (what is politely termed) 'undefined behavior' when practiced by the uninitiated.

We offer guidelines that are intended to eliminate common causes of systemic and hidden error (e.g. ignorance of copy-assignment semantics) and also describe a number of practices that facilitate both design robustness and 'programming in the large'.


Title: A System for Controlling, Monitoring and Programming The Home (PhD Thesis)
Author(s): Claire Maternaghan
Date: September 2012
Number: CSM-193
Abstract:

As technology becomes ever more pervasive, the challenges of home automation are increasingly apparent. Seamless home control, home monitoring and home programming by the end user have yet to enter the mainstream. This could be attributed to the challenge of developing a fully autonomous and extensible home system that can support devices and technologies of differing protocols and functionalities.

In order to offer programming facilities to the user, the underlying rule system must be fully independent, allowing support for current and future devices. Additional challenges arise from the need to detect and handle conflicts that may arise among user rules and yield undesirable results. Non-technical individuals typically struggle when faced with a programming task. It is therefore vital to encourage and ease the process of programming the home.

This thesis presents Homer, a home system that has been developed to support three key features of a home system: control, monitoring and programming. Homer supports any third-party hardware or software service that can expose its functionality through Java and conform to the Homer interface. Stand-alone end user interfaces can be written by developers to offer any of Homer's functionality.

Where policies (i.e. rules) for the home are concerned, Homer offers a fully independent policy system. The thesis presents a custom policy language, Homeric, that has been designed specifically for writing home rules. The Homer policy system detects overlaps and conflicts among rules using constraint satisfaction and the effect on environment variables.
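A minimal sketch of detecting conflict through effects on environment variables (a hypothetical representation, not Homeric syntax): each policy is reduced to the interval of values it requires of a variable, and two policies conflict when their intervals for the same variable cannot both be satisfied.

```python
# Each policy constrains one environment variable to an interval.
policies = {
    "keep_warm":  ("temperature", 21.0, 30.0),   # heating on: 21..30 C
    "save_power": ("temperature", 5.0, 18.0),    # eco mode:    5..18 C
    "night_dim":  ("light_level", 0.0, 0.2),
}

def conflicting(p1, p2):
    var1, lo1, hi1 = policies[p1]
    var2, lo2, hi2 = policies[p2]
    # Same variable with an empty interval intersection => unsatisfiable together.
    return var1 == var2 and max(lo1, lo2) > min(hi1, hi2)

print(conflicting("keep_warm", "save_power"))  # True: disjoint temperature ranges
print(conflicting("keep_warm", "night_dim"))   # False: different variables
```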

The thesis also introduces the notion of perspectives to ease user interactivity. These have been integrated into Homer to accommodate the range of ways in which a user may think about different aspects and features of their home. These perspectives include location, device type, time and people-oriented points of view. Design guidelines are also discussed to aid end user programming of the home.

The work presented in this thesis demonstrates a system that supports control, monitoring and programming of the home. Developers can quickly and easily add functionality to the home through components. Conflicts can be detected amongst rules within the home. Finally, design guidelines and a prototype interface have been developed to allow both technically minded and non-technical people to program their home.


Title: The Late Acceptance Hill-Climbing Heuristic
Author(s): Edmund K. Burke and Yuri Bykov
Date: June 2012
Number: CSM-192
Keywords: Combinatorial Optimization, Heuristic, Local Search, Travelling Salesman, Timetabling, Hill Climbing, Simulated Annealing, Threshold Accepting, Great Deluge Algorithm
Abstract:

This paper introduces a new and very simple search methodology called Late Acceptance Hill-Climbing (LAHC). It is a one-point iterative search algorithm which accepts non-improving moves when the candidate cost function value is better than (or equal to) its value a number of iterations before. This number appears as a single algorithmic input parameter which determines the total processing time of the search procedure. The properties of this method are experimentally studied in this paper with a range of Travelling Salesman and Exam Timetabling benchmark problems. In addition, we present a fair comparison of LAHC with well-known search techniques which employ different variants of a cooling schedule: Simulated Annealing (SA), Threshold Accepting (TA) and the Great Deluge Algorithm (GDA). Moreover, we discuss the method's success in winning an international competition to automatically solve the Magic Square problem. Our experiments have shown that the LAHC approach is simple, easy to implement and yet is an effective search procedure. For all studied problems, its average performance was distinctly better than GDA and on the same level as SA and TA. One of the major advantages of the LAHC approach is the absence of a cooling schedule. This makes it significantly more robust than cooling-schedule based techniques. We present an example where the rescaling of a cost function in the Exam Timetabling Problem dramatically deteriorates the performance of three cooling-schedule based techniques, but has absolutely no influence upon the performance of LAHC.
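A minimal sketch of LAHC as the abstract describes it (the extra comparison with the current cost is one common formulation of the acceptance rule):

```python
# Late Acceptance Hill-Climbing: accept a move if the candidate cost is no
# worse than the cost recorded L iterations earlier (L is the single parameter).
import random

def lahc(initial, cost, neighbour, history_length, iterations):
    current = initial
    current_cost = cost(current)
    history = [current_cost] * history_length      # costs from L steps back
    for i in range(iterations):
        candidate = neighbour(current)
        candidate_cost = cost(candidate)
        v = i % history_length                     # circular history buffer
        if candidate_cost <= history[v] or candidate_cost <= current_cost:
            current, current_cost = candidate, candidate_cost
        history[v] = current_cost
    return current

# Toy usage: minimise (x - 7)^2 over the integers.
best = lahc(0, lambda x: (x - 7) ** 2,
            lambda x: x + random.choice((-1, 1)), 50, 5000)
print(best)  # typically 7
```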


Title: Can People Program Their Home?
Author(s): Claire Maternaghan
Date: April 2012
Number: CSM-191
Abstract:

The requirements of a home system vary greatly in terms of the residents, the existing devices available, and many other factors. It is therefore crucial that any home system can be customised by the end user; ideally this process would be simple and quick to perform. However, the challenges of end user programming are enormous. There is considerable challenge in obtaining concrete and unambiguous rules from non-technically minded individuals, who typically think of, and express, their desires as high-level goals. Additionally, the language that is used to express the home rules must enable simple rules to be written by those less technical or ambitious, yet not restrict more competent users.

Homer is a home system that has been fully developed by the author. It offers the ability for users to control, monitor and program their home. A custom policy language has been designed called Homeric which allows rules to be written for the home.

In order to evaluate Homeric, and the design guidelines for allowing end users to formulate Homeric, a stand-alone application was written. This application is called the Homeric Wizard and allows rules to be expressed for the home. It makes use of natural language and visual programming techniques, as well as the notion of perspectives which allows rule elements to be browsed from differing aspects – devices, locations, people and time.

An online evaluation was performed over two weeks, obtaining participation from 71 individuals of varying age, gender, status and technical abilities. This report presents an evaluation of the Homeric Wizard tool, and all the results and hypotheses obtained.


Title: The Accent Policy Server
Author(s): Stephan Reiff-Marganiec, Kenneth J. Turner, Lynne Blair and Feng Wang
Date: August 2013 (Revised)
Number: CSM-164
Abstract:

The Accent project (Advanced Call Control Enhancing Networked Systems) was concerned with developing a practical and comprehensive policy language for call control. The project studied a number of distinct tasks, including the definition of the language and a three-layer architecture for deploying and enforcing policies defined in the language. The approach was subsequently extended for home care and telecare on the Match project (Mobilising Advanced Technologies for Care at Home) and for sensor networks and wind farms on the Prosen project (Proactive Control of Sensor Networks).

This document focuses on the policy system layer of the architecture. The layer is concerned with storing, deploying and enforcing policies. It represents the core functionality of the three-layer architecture. This report discusses the prototype implementation at a technical level. It is intended as supporting documentation for developers continuing to enhance the policy server, as well as those wishing to gain an insight into the technical details of the policy server layer.


Title: A Peer-to-Peer Network Framework utilising The Public Mobile Telephone Network (MPhil Thesis)
Author(s): Martin Blunn
Date: December 2011
Number: CSM-190
Keywords: Wireless Sensor Network, Middleware
Abstract:

P2P (Peer-to-Peer) technologies are well established and have now become accepted as a mainstream networking approach. However, the explosion of participating users has not been replicated within the mobile networking domain. Until recently the lack of suitable hardware and wireless network infrastructure to support P2P activities was perceived as contributing to the problem. This has changed with ready availability of handsets having ample processing resources utilising an almost ubiquitous mobile telephone network. Coupled with this has been a proliferation of software applications written for the more capable 'smartphone' handsets. P2P systems have not naturally integrated and evolved into the mobile telephone ecosystem in a way that 'client-server' operating techniques have. However, as the number of clients for a particular mobile application increases, providing the 'server side' data storage infrastructure becomes more onerous. P2P systems offer mobile telephone applications a way to circumvent this data storage issue by dispersing it across a network of the participating users' handsets.

The main goal of this work was to produce a P2P Application Framework that supports developers in creating mobile telephone applications that use distributed storage. Effort was devoted to determining appropriate design requirements for a mobile handset based P2P system. Some of these requirements are related to the limitations of the host hardware, such as power consumption. Others relate to the network upon which the handsets operate, such as connectivity. The thesis reviews current P2P technologies to assess which were viable to form the technology foundations for the framework. The aim was not to re-invent a P2P system design, but rather to adapt an existing one for mobile operation. Built upon the foundations of a prototype application, the P2P framework resulting from modifications and enhancements grants access via a simple API (Applications Programmer Interface) to a subset of Nokia 'smartphone' devices. Unhindered operation across all mobile telephone networks is possible through a proprietary application implementing NAT (Network Address Translation) traversal techniques.

Recognising that handsets operate with limited resources, further optimisation of the P2P framework was also investigated. Energy consumption was a parameter chosen for further examination because of its impact on handset participation time.

This work has proven that operating applications in conjunction with a P2P data storage framework, connected via the mobile telephone network, is technically feasible. It also shows that opportunity remains for further research to realise the full potential of this data storage technique.


Title: Language Definition for REED
Author(s): Xiang Fei and Evan H. Magill
Date: May 2011
Number: CSM-189
Keywords: Wireless Sensor Network, Middleware
Abstract:

Wireless Sensor Networks (WSNs) have emerged as an enabling technology for a variety of distributed applications. WSN middleware eases the development of these applications by providing a uniform programming environment. REED (Rule Execution and Event Distribution) is WSN middleware that supports both the distribution of rules and the events that trigger them. This report provides the full description of the REED language that is defined using a variant of BNF (Backus-Naur Form).


Title: The Homer Home Automation System
Author(s): Claire Maternaghan
Date: December 2010
Number: CSM-187
Abstract:

This report discusses Homer, a system developed for managing home automation and telecare. The philosophy and architecture of Homer are explained. The nature of home components is discussed, along with how they fit together into the overall system. Policies are used as a means of automated decisions based on user-defined rules for control of the home system. User-friendly interfaces for home management are then presented. Finally, the report summarises what has been covered, evaluates the current status of home automation and telecare, and identifies trends and future developments in these fields.


Title: An Integrated Methodology for Creating Composed Web/Grid Services (PhD Thesis)
Author(s): Koon Leai Larry Tan
Date: October 2010
Number: CSM-186
Abstract:

This thesis presents an approach to design, specify, validate, verify, implement, and evaluate composed web/grid services. Web and grid services can be composed to create new services with complex behaviours. The BPEL (Business Process Execution Language) standard was created to enable the orchestration of web services, but there have also been investigations of its use for grid services. BPEL specifies the implementation of service composition but has no formal semantics; implementations are in practice checked by testing. Formal methods are used in general to define an abstract model of system behaviour that allows simulation and reasoning about properties. The approach can detect and reduce potentially costly errors at design time.

CRESS (Communication Representation Employing Systematic Specification) is a domain-independent, graphical, abstract notation, and an integrated toolset for developing composite web services. The original version of CRESS had automated support for formal specification in LOTOS (Language Of Temporal Ordering Specification), executing formal validation with MUSTARD (Multiple-Use Scenario Testing and Refusal Description), and implementation using BPEL4WS, an early version of the BPEL standard. This thesis work has extended CRESS and its integrated tools to design, specify, validate, verify, implement, and evaluate composed web/grid services. The work has extended the CRESS notation to support a wider range of service compositions, and has applied it to grid services as a new domain. The thesis presents two new tools: CLOVE (CRESS Language-Oriented Verification Environment) and MINT (MUSTARD Interpreter). These respectively support formal verification and implementation testing. New work has also extended CRESS to automate implementation of composed services using the more recent BPEL standard WS-BPEL 2.0.


Title: How do People want to control Their Home?
Author(s): Claire Maternaghan
Date: April 2011
Number: CSM-185
Abstract:

There are a large number of home automation companies which offer consumers tailored solutions to meet their exact needs. At installation time users are asked how they would like their home to behave so that the company engineers can program the solution for their client. Once programmed, the users must contact the company and, in many cases, pay for these rules to be altered in any way. This is mostly due to the rigid design of the software architecture used by these companies. In most cases the user is very unlikely to know how they would like their home to behave until they have lived with the technology for some time. Even then, their requirements may change on a daily basis.

The author proposes that consumers need home systems that they are able to program, and re-program, easily. In order to gain a deeper understanding of how people could see themselves using such a control mechanism of their home, an online survey was carried out. There were 150 participants ranging in age, gender, technical ability and experience. This report describes the survey and user group, and then explores the data received from the survey: evaluating the author's eight hypotheses and exploring any trends in the qualitative data gathered.


Title: An Analysis of Planarity in Face-Routing
Author(s): Marwan M. Fayed, David E. Cairns and Hussein T. Mouftah
Date: August 2010
Number: CSM-184
Abstract:

In this report we investigate the limits of routing according to left- or right-hand rule (LHR). Using LHR, a node upon receipt of a message will forward to the neighbour that sits next in counter-clockwise order in the network graph. When used to recover from greedy routing failures, LHR guarantees success if implemented over planar graphs. This is often referred to as face or geographic routing. In the current body of knowledge it is known that if planarity is violated then LHR is guaranteed only to eventually return to the point of origin. Our work seeks to understand why a non-planar environment stops LHR from making delivery guarantees. Our investigation begins with an analysis to enumerate all node configurations that cause intersections. A trace over each configuration reveals that LHR is able to recover from all but a single case, the 'umbrella' configuration, so named for its appearance. We use this information to propose the Prohibitive Link Detection Protocol (PLDP), which can guarantee delivery over non-planar graphs using standard face-routing techniques. As the name implies, the protocol detects and circumvents the 'bad' links that hamper LHR. The goal of this work is to maintain routing guarantees while disturbing the network graph as little as possible. In doing so, a new starting point emerges from which to build rich distributed protocols in the spirit of protocols such as CLDP and GDSTR.
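A small sketch of the LHR forwarding step described above, assuming known node coordinates and ignoring ties and degenerate geometry:

```python
# Left-/right-hand-rule forwarding: from node u, having received the message
# over edge (p, u), forward to the neighbour next in counter-clockwise order.
import math

def next_hop_lhr(u, p, neighbours, pos):
    """u, p: node ids; neighbours: nodes adjacent to u; pos: id -> (x, y)."""
    ux, uy = pos[u]
    def angle(n):
        nx, ny = pos[n]
        return math.atan2(ny - uy, nx - ux)
    ref = angle(p)
    def ccw_sweep(n):
        s = (angle(n) - ref) % (2 * math.pi)
        return s if s > 0 else 2 * math.pi     # avoid bouncing straight back
    return min(neighbours, key=ccw_sweep)

pos = {"u": (0, 0), "p": (1, 0), "a": (0, 1), "b": (-1, 0)}
print(next_hop_lhr("u", "p", ["a", "b", "p"], pos))   # "a": next counter-clockwise
```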


Title: Case Studies using Cress to develop Web and Grid Services
Author(s): Koon Leai Larry Tan
Date: December 2009
Number: CSM-183
Abstract:

Web and grid services are forms of distributed computing based on the SOA (Service-Oriented Architecture) paradigm which offers loose coupling of functionality and interoperability in heterogeneous environments, achieved by using standards such as SOAP (Simple Object Access Protocol) and WSDL (Web Services Description Language). Web and grid services are widely used in businesses and research, where functionality is usually mission-critical. Services may be combined to create new services, an activity generally known as service composition. There are standards such as WS-BPEL (Web Services Business Process Execution Language) that implement service composition. There is increasing implementation of web/grid services and their compositions, with more complex behaviour being developed.

Formal techniques support mathematical reasoning about the behaviour of systems, which can detect errors and identify issues prior to implementation, and therefore can minimise the potential cost of errors as early as possible. Formal techniques can be applied to web and grid service composition. However, the attention given to their use is, by contrast, far less than that given to the practical realisation of service implementation. Two reasons for this are the lack of formal expertise, and methodological differences between the practices of formal methods and service implementation.

CRESS (Communication Representation Employing Systematic Specification) is an abstract graphical notation and toolset for describing services at a high level, whereby specifications and implementations are automatically generated. CRESS has been extended for web and grid service compositions with automated support for specification, analysis, implementation and testing that appeals to non-specialists. Service behaviour is automatically specified in LOTOS (Language of Temporal Ordering Specification). Formal analysis is supported in a manner appealing to non-specialists by abstracting formal aspects using high-level languages; analysis is carried out automatically.

These formal analyses, in the forms of validation and verification, are supported by MUSTARD (Multiple-Use Scenario Testing and Refusal Description) and CLOVE (CRESS Language-Oriented Verification Environment). Implementations of (composed) web and grid services are generated as Java and BPEL services, for deployment in Axis (web services), Globus Toolkit (grid services), and ActiveBPEL (BPEL services). Implementation analysis in the form of validation is supported by MINT (MUSTARD Interpreter), which is a high-level notation and tool that automates testing and performance evaluation on implemented services. These are integrated with CRESS, providing an integrated environment for service development in a rigorous way.

This report demonstrates the integrated methodology using web and grid service compositions as case studies.


Title: An Ontology Based Approach Towards A Universal Description Framework for Home Networks (PhD Thesis)
Author(s): Liam S. Docherty
Date: February 2010
Number: CSM-182
Abstract:

Current home networks typically involve two or more machines sharing network resources. The vision for the home network has grown from a simple computer network to every day appliances embedded with network capabilities. In this environment devices and services within the home can interoperate, regardless of protocol or platform. Network clients can discover required resources by performing network discovery over component descriptions. Common approaches to this discovery process involve simple matching of keywords or attribute/value pairings.

Interest emerging from the Semantic Web community has led to ontology languages being applied to network domains, providing a logical and semantically rich approach to both describing and discovering network components. In much of the existing work within this domain, developers have focused on defining new description frameworks in isolation from existing protocol frameworks and vocabularies.

This work proposes an ontology-based description framework which takes the ontology approach to the next step, where existing description frameworks are incorporated into the ontology-based framework, allowing discovery mechanisms to cover multiple existing domains. In this manner, existing protocols and networking approaches can participate in semantically-rich discovery processes. This framework also includes a system architecture developed for the purpose of reconciling existing home network solutions with the ontology-based discovery process.

This work also describes an implementation of the approach, which is deployed within a home network environment. This implementation involves existing home networking frameworks, protocols and components, allowing the claims of this work to be examined and evaluated from a 'real-world' perspective.


Title: Directed Intervention Crossover Approaches in Genetic Algorithms with Application to Optimal Control Problems (PhD Thesis)
Author(s): Paul Godley
Date: May 2009
Number: CSM-181
Abstract:

Genetic Algorithms (GAs) are a search heuristic technique modelled on the processes of evolution. They have been used to solve optimisation problems in a wide variety of fields. When applied to the optimisation of intervention schedules for optimal control problems, such as cancer chemotherapy treatment scheduling, GAs have been shown to require more fitness function evaluations than other search heuristics to find fit solutions. This thesis presents extensions to the GA crossover process, termed directed intervention crossover techniques, that greatly reduce the number of fitness function evaluations required to find fit solutions, thus increasing the effectiveness of GAs for problems of this type.

The directed intervention crossover techniques use intervention scheduling information from parent solutions to direct the offspring produced in the GA crossover process towards more promising areas of a search space. By counting the number of interventions present in the parents and adjusting the number of interventions in offspring schedules around this count, highly fit solutions can be found in fewer fitness function evaluations.
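A strongly simplified sketch of this counting idea (the details below are invented for illustration and do not reproduce the thesis's operators):

```python
# Offspring inherit roughly the number of interventions counted in the
# parents, with intervention positions drawn from both parents' schedules.
import random

def directed_intervention_crossover(parent1, parent2):
    """Parents are 0/1 lists: 1 = intervene at that time step."""
    n1, n2 = sum(parent1), sum(parent2)
    # Target an intervention count between the parents' counts.
    target = random.randint(min(n1, n2), max(n1, n2))
    # Candidate time steps: positions where either parent intervenes.
    candidates = [i for i, (a, b) in enumerate(zip(parent1, parent2)) if a or b]
    chosen = set(random.sample(candidates, min(target, len(candidates))))
    return [1 if i in chosen else 0 for i in range(len(parent1))]

p1 = [1, 0, 0, 1, 0, 1, 0, 0]
p2 = [0, 1, 0, 1, 0, 0, 0, 1]
print(directed_intervention_crossover(p1, p2))
```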

The validity of these novel approaches is illustrated through comparison with conventional GA crossover approaches for optimisation of intervention schedules for bio-control application in mushroom farming and for cancer chemotherapy treatment. These involve optimally scheduling the application of a bio-control agent to combat pests in mushroom farming, and optimising the timing and dosage strength of cancer chemotherapy treatments to maximise their effectiveness.

This work demonstrates that significant advantages are gained in terms of both fitness function evaluations required and fitness scores found using the proposed approaches when compared with traditional GA crossover approaches for the production of optimal control schedules.


Title: A Goal-Directed and Policy-Based Approach to System Management (PhD Thesis)
Author(s): Gavin A. Campbell
Date: May 2009
Number: CSM-180
Abstract:

This thesis presents a domain-independent approach to dynamic system management using goals and policies. A goal is a general, high-level aim a system must continually work toward achieving. A policy is a statement of how a system should behave for a given set of detectable events and conditions. Combined, goals may be realised through the selection and execution of policies that contribute to their aims. In this manner, a system may be managed using a goal-directed, policy-based approach.

The approach is a collection of related techniques and tools: a policy language and policy system, goal definition and refinement via policy selection, and conflict filtering among policies. Central to these themes, ontologies are used to model application domains, and incorporate domain knowledge within the system. The ACCENT policy system (Advanced Component Control Enhancing Network Technologies) is used as a base for the approach, while goals and policies are defined using an extension of APPEL (Adaptable and Programmable Policy Environment and Language).

The approach differs from existing work in that it reduces system state, goals and policies to a numerical rather than logical form. This is more user-friendly as the goal domain may be expressed without any knowledge of formal methods. All developed techniques and tools are entirely domain-independent, allowing for reuse with other event-driven systems. The ability to express a system aim as a goal provides more powerful and proactive high-level management than was previously possible using policies alone. The approach is demonstrated and evaluated within this thesis for the domains of Internet telephony and sensor network/wind turbine management.
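As a strongly simplified illustration of treating goals and policies numerically rather than logically (all names and numbers below are invented):

```python
# A goal scores system states numerically; each policy has an estimated
# effect on the state; the policy contributing most to the goal is selected.
state = {"dropped_calls": 0.2, "energy_use": 0.7}

def goal(s):
    """Higher is better: minimise both quantities, weighted."""
    return -(0.8 * s["dropped_calls"] + 0.2 * s["energy_use"])

policies = {
    "reroute_calls": {"dropped_calls": -0.10, "energy_use": +0.10},
    "sleep_nodes":   {"dropped_calls": +0.05, "energy_use": -0.30},
}

def apply(s, effect):
    return {k: v + effect.get(k, 0.0) for k, v in s.items()}

best = max(policies, key=lambda p: goal(apply(state, policies[p])))
print(best)   # the policy whose estimated effect most improves the goal
```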


Title: Bayesian Belief Networks for Dementia Diagnosis and Other Applications: A Comparison of Hand-Crafting and Construction using A Novel Data Driven Technique (PhD Thesis)
Author(s): Lloyd Oteniya
Date: July 2008
Number: CSM-179
Abstract:

The Bayesian Network (BN) formalism is a powerful representation for encoding domains characterised by uncertainty. However, before it can be used it must first be constructed, which is a major challenge for any real-life problem. There are two broad approaches, namely the hand-crafted approach, which relies on a human expert, and the data-driven approach, which relies on data. The former approach is useful; however, issues such as human bias can introduce errors into the model.

We have conducted a literature review of the expert-driven approach, have cherry-picked a number of common methods, and have engineered a framework to assist non-BN experts with expert-driven construction of BNs. The latter construction approach uses algorithms to construct the model from a data set. However, construction from data is provably NP-hard. To solve this problem, approximate, heuristic algorithms have been proposed, in particular algorithms that assume an order between the nodes, therefore reducing the search space. However, traditionally, this approach relies on an expert providing the order among the variables; an expert may not always be available, or may be unable to provide the order. Nevertheless, if a good order is available, these order-based algorithms have demonstrated good performance.
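A skeleton of the order-based search idea, in the style of K2 (assuming a user-supplied scoring function; the real algorithms score candidate parent sets against data):

```python
# Given a total order on variables, each node greedily acquires parents from
# among its predecessors while the score improves; restricting parents to
# predecessors shrinks the search space and guarantees acyclicity.
def order_based_search(order, score, max_parents=3):
    """order: list of variables; score(node, parents) -> float (higher better)."""
    structure = {}
    for i, node in enumerate(order):
        parents = set()
        best = score(node, parents)
        improved = True
        while improved and len(parents) < max_parents:
            improved = False
            for cand in order[:i]:
                if cand in parents:
                    continue
                s = score(node, parents | {cand})
                if s > best:
                    best, chosen, improved = s, cand, True
            if improved:
                parents.add(chosen)
        structure[node] = parents
    return structure

# Toy usage with a made-up score that prefers parent set {'A'} for 'B'.
toy_score = lambda n, ps: 1.0 if (n == "B" and ps == {"A"}) else 0.0
print(order_based_search(["A", "B", "C"], toy_score))
```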

More recent approaches attempt to 'learn' a good order then use the order-based algorithm to discover the structure. To eliminate the need for order information during construction, we propose a search in the entire space of Bayesian network structures. We present a novel approach for carrying out this task, and we demonstrate its performance against existing algorithms that search in the entire space and the space of orders. Finally, we employ the hand-crafting framework to construct models for the task of diagnosis in a 'real-life' medical domain - dementia diagnosis. We collect real dementia data from clinical practice, and we apply the data-driven algorithms developed to assess the concordance between the reference models developed by hand and the models derived from real clinical data.


Title: Efficient Service Discovery in Wide Area Networks (PhD Thesis)
Author(s): Alan Brown
Date: October 2008
Number: CSM-178
Keywords: Distributed Hash Table (DHT), Explicit Multi-Unicast (XCAST), Multicast, Open Services Gateway Initiative (OSGi), Peer-to-Peer Networks (P2P), Service Discovery, Session Initiation Protocol (SIP)
Abstract:

Living in an increasingly networked world, with an abundant number of services available to consumers, the consumer electronics market is enjoying a boom. The average consumer in the developed world may own several networked devices such as games consoles, mobile phones, PDAs, laptops and desktops, wireless picture frames and printers to name but a few. With this growing number of networked devices comes a growing demand for services, defined here as functions requested by a client and provided by a networked node. For example, a client may wish to download and share music or pictures, find and use printer services, or lookup information (e.g. train times, cinema bookings). It is notable that a significant proportion of networked devices are now mobile. Mobile devices introduce a new dynamic to the service discovery problem, such as lower battery and processing power and more expensive bandwidth. Device owners expect to access services not only in their immediate proximity, but further afield (e.g. in their homes and offices). Solving these problems is the focus of this research.

This thesis offers two alternative approaches to service discovery in Wide Area Networks (WANs). Firstly, a unique combination of the Session Initiation Protocol (SIP) and the OSGi middleware technology is presented to provide both mobility and service discovery capability in WANs. Through experimentation, this technique is shown to be successful where the number of operating domains is small, but it does not scale well.

To address the issue of scalability, this thesis proposes the use of Peer-to-Peer (P2P) service overlays as a medium for service discovery in WANs. To confirm that P2P overlays can in fact support service discovery, a technique to utilise the Distributed Hash Table (DHT) functionality of distributed systems is used to store and retrieve service advertisements. Through simulation, this is shown to be both a scalable and a flexible service discovery technique. However, the problems associated with P2P networks with respect to efficiency are well documented.
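A toy sketch of this DHT-based technique: an advertisement is stored under the hash of its service name, so any peer can retrieve it with the same key (a local dict stands in for the distributed table):

```python
# Store and retrieve service advertisements by hashed service name.
import hashlib

class SimulatedDHT:
    def __init__(self):
        self.store = {}                    # stands in for the distributed table
    def key(self, name):
        return hashlib.sha1(name.encode()).hexdigest()
    def advertise(self, service_name, endpoint):
        self.store.setdefault(self.key(service_name), []).append(endpoint)
    def discover(self, service_name):
        return self.store.get(self.key(service_name), [])

dht = SimulatedDHT()
dht.advertise("printer", "sip:printer@office.example.org")
dht.advertise("printer", "sip:printer@home.example.org")
print(dht.discover("printer"))             # both advertisements found
```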

In a novel approach to reduce messaging costs in P2P networks, multi-destination multicast is used. Two well known P2P overlays are extended using the Explicit Multi-Unicast (XCAST) protocol. The resulting analysis of this extension provides a strong argument for multiple P2P maintenance algorithms co-existing in a single P2P overlay to provide adaptable performance. A novel multi-tier P2P overlay system is presented, which is tailored for service rich mobile devices and which provides an efficient platform for service discovery.


Title: The DAMES Metadata Approach
Author(s): Jesse M. Blum, Kenneth J. Turner
Date: December 2008
Number: CSM-177
Abstract:

The DAMES project will provide high-quality data management services to the social science research community, based on an e-social science infrastructure. The infrastructure is supported by the collection and use of metadata to describe datasets and other social science resources. This report reviews the metadata requirements of the DAMES services, reviews a number of metadata standards, and discusses how the selected standards can be used to support the DAMES services. The kinds of metadata focused on in this report include metadata for describing social science microdatasets and other resources such as data analysis processing instruction files, metadata for grouping and linking datasets, and metadata for describing the provenance of data as it is transformed through analytical procedures. A number of social science metadata standards are reviewed.

The review concludes that the DDI standard version 3.0 is the most appropriate one to be used in the DAMES project, and explains how best to integrate the standard into the project. This includes a description of how to capture metadata upon resource registration, upgrade the metadata from accessible resources available through the GEODE project, use the metadata for resource discovery, and generate provenance metadata during data transformation procedures. In addition, a 'metadata wizard' is described to help with data management activities.


Title: A Case Study in Integrated Assertion Verification with Omnibus
Author(s): Thomas Wilson, Savitri Maharaj and Robert G. Clark
Date: January 2009
Number: CSM-176
Keywords: Static Checking, Run-Time Checking, Integrated Formal Methods, Object-Oriented Programming
Abstract:

We present the example of the specification, implementation, and verification of a library system in Omnibus. Three approaches to verification (run-time assertion checking, extended static checking, and full formal verification) are applied to the example. We compare the ease of use and the error coverage of each approach. We then discuss how the three approaches may be used together within Omnibus in an integrated manner, explain the benefits of this, and show how integration is supported by the Omnibus IDE.
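For readers unfamiliar with the first of the three approaches, the sketch below illustrates run-time assertion checking generically in Python (Omnibus's own notation is not shown here); extended static checking and full formal verification attempt to discharge the same assertions before the program runs:

```python
# Pre- and postconditions attached to an operation, checked on every call.
def contract(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def minimum(xs):
    return min(xs)

print(minimum([3, 1, 2]))   # 1
# minimum([]) would raise AssertionError at the call site.
```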


Title: Deriving Mean Field Equations from Large Process Algebra Models
Author(s): Chris McCaig, Rachel A. Norman and Carron E. Shankland
Date: March 2008
Number: CSM-175
Abstract:

In many domain areas the behaviour of a system can be described at two levels: the behaviour of individual components, and the behaviour of the system as a whole. Often deriving one from the other is impossible, or at least intractable, especially when realistically large systems are considered. Here we present a rigorous algorithm which, given an individual based model in the process algebra WSCCS describing the components of a system and the way they interact, can produce a system of mean field equations which describe the mean behaviour of the system as a whole. This transformation circumvents the state explosion problem, allowing us to handle systems of any size by providing an approximation of the system behaviour. From the mean field equations we can investigate the transient dynamics of the system. This approach was motivated by problems in biological systems, but is applicable to distributed systems in general.
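To make the individual-to-population step concrete, here is a generic example in the flavour of the biological systems that motivated the work (not taken from the report): individuals move from susceptible (S) to infected (I) through mass-action contact at rate beta, and recover at rate gamma; the mean field limit gives coupled equations for the expected numbers:

```latex
% S -> I at rate \beta S I (mass-action contact), I -> S at rate \gamma I.
\frac{d\overline{S}}{dt} = -\beta\,\overline{S}\,\overline{I} + \gamma\,\overline{I},
\qquad
\frac{d\overline{I}}{dt} = \beta\,\overline{S}\,\overline{I} - \gamma\,\overline{I}.
```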


Title: The Omnibus Language and Integrated Verification Approach (PhD Thesis)
Author(s): Thomas Wilson
Date: January 2008
Number: CSM-174
Keywords: Assertion-Based Verification, Run-Time Assertion Checking, Extended Static Checking, Formal Verification, Verification Policies
Abstract:

This thesis describes the Omnibus language and its supporting framework of tools. Omnibus is an object-oriented language which is superficially similar to the Java programming language but uses value semantics for objects and incorporates a behavioural interface specification language. Specifications are defined in terms of a subset of the query functions of the classes for which a frame-condition logic is provided. The language is well suited to the specification of modelling types and can also be used to write implementations. An overview of the language is presented and then specific aspects such as subtleties in the frame-condition logic, the implementation of value semantics and the role of equality are discussed. The challenges of reference semantics are also discussed.

The Omnibus language is supported by an integrated verification tool which provides support for three assertion-based verification approaches: run-time assertion checking, extended static checking and full formal verification. The different approaches provide different balances between rigour and ease of use. The Omnibus tool allows these approaches to be used together in different parts of the same project. Guidelines are presented in order to help users avoid conflicts when using the approaches together.

The principles of the implementation of the tool are described, focussing on the integrated static verifier module that supports both extended static checking and full formal verification through the use of an intermediate logic.

The different verification approaches are used to detect and correct a range of errors in a case study carried out using the Omnibus language. The case study is of a library system where copies of books, CDs and DVDs are loaned out to members. The implementation consists of 2278 lines of Omnibus code spread over 15 classes. To allow direct comparison of the different assertion-based verification approaches considered, run-time assertion checking, extended static checking and then full formal verification are applied to the application in its entirety. This directly illustrates the different balances between error coverage and ease-of-use which the approaches offer. Finally, the verification policy system is used to allow the approaches to be used together to verify different parts of the application.


Title: A Scalable Home Care System Infrastructure Supporting Domiciliary Care
Author(s): P. D. Gray, T. McBryan, N. Hine, C. J. Martin, N. Gil, M. Wolters, N. Mayo, K. J. Turner, L. S. Docherty, F. Wang, M. Kolberg
Date: August 2007
Number: CSM-173
Keywords: Home Care, Software Architecture, Policy-Based Management, Context-Sensitive System, Multimodal System, Dynamic Reconfiguration, Systems Integration
Abstract:

Technology-mediated home care is attractive for older people living at home and also for their carers. It provides the information necessary to give confidence and assurance to everyone interested in the well-being of the older person. From a care delivery perspective, however, widespread deployment of home care technologies presents system developers with a set of challenges. These challenges arise from the issues associated with scaling from individual installations to providing a community-wide service, particularly when each installation is to be fitted to the particular but changing needs of the residents, their in-home carers and the larger healthcare community. This paper presents a home care software architecture and services that seek to address these challenges. The approach aims to generate the information needed in a timely and appropriate form to inform older residents and their carers about changing life style that may indicate a loss of well-being. It unites sensor-based services, home care policy management, resource discovery, multimodal interaction and dynamic configuration services. In this way, the approach offers the integration of a variety of home care services with adaptation to the context of use.


Title: Ontologies for Resolution Policy Definition and Policy Conflict Detection
Author(s): Gavin A. Campbell
Date: February 2007
Number: CSM-172
Keywords: Accent, Call Control, Conflict Filtering, Conflict Detection, Conflict Resolution, Ontology, OWL, Policy
Abstract:

In previous work, the author devised a collection of ontologies to model the generic structure and characteristics of the Appel policy description language [18] utilised within the Accent policy system [1]. Two ontologies, namely genpol.owl and wizpol.owl, were defined using OWL (Web Ontology Language [10]) to describe the generic aspects of the policy language and aspects of how the policy wizard (user interface) uses the language. These generic ontologies are explained in CSM-169 [3], while an ontology modelling a domain-specific implementation of the policy language for call control is described in CSM-170 [4]. This document describes how these ontologies have been extended to define the structure of resolution policies, in addition to standard domain policies. A resolution policy has a similar structural composition to a standard policy, but places restrictions on the characteristics of its components. While a standard policy is used to define how events within the domain are handled, a resolution policy is defined purposely to resolve run-time conflicts between standard domain policies. Conflict arises between a pair of standard domain policies whose actions clash if executed simultaneously. A resolution policy specifies the action to be taken when such conflict occurs.

This report distinguishes between a standard policy and a resolution policy in Appel, outlining the ontology description for each and highlighting subtle differences in their form. In particular, it demonstrates extensions to genpol and wizpol to specify generic resolution policies. Based on the generic extensions, callcontrol.owl was also expanded to include generic call control resolution actions. The call control ontology is described here as a concrete example of a domain-specific resolution policy language definition for the purposes of managing call conflicts. In addition, the report describes generic and domain-specific ontology extensions to aid in policy conflict detection using a filtering technique.


Title: An Online Environmental Approach to Service Interaction Management in Home Automation (PhD Thesis)
Author(s): Michael E. Wilson
Date: October 2006
Number: CSM-171
Abstract:

Home automation is maturing with the increased deployment of networks and intelligent devices in the home. Along with new protocols and devices, new software services will emerge and work together releasing the full potential of networked consumer devices. Services may include home security, climate control or entertainment. With such extensive interworking the phenomenon known as service interaction, or feature interaction, appears. The problem occurs when services interfere with one another causing unexpected or undesirable outcomes.

The main goal of this work is to detect undesired interactions between devices and services while allowing positive interactions between services and devices. If the interaction is negative, the approach should be able to handle it in an appropriate way.

Being able to carry out interaction detection in the home poses certain challenges. Firstly, the devices and services are provided by a number of vendors and will be using a variety of protocols. Secondly, the configuration will not be fixed: the network will change as devices join and leave. Services may also change and adapt to user needs and to devices available at runtime. The developed approach is able to work with such challenges.

Since the goal of the automated home is to make life simpler for the occupant, the approach should require minimal user intervention.

With the above goals, an approach was developed which tackles the problem. Whereas previous approaches solving service interaction have focused on the service, the technique presented here concentrates on the devices and their surrounds, as some interactions occur through conflicting effects on the environment. The approach introduces the concept of environmental variables. A variable may be room temperature, movement or perhaps light. Drawing inspiration from the Operating Systems domain, locks are used to control access to the devices and environmental variables. Using this technique, undesirable interactions are avoided. The inclusion of the environment is a key element of this approach as many interactions can happen indirectly, through the environment.
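A minimal sketch of the lock-based idea (illustrative, not the thesis's manager): each environmental variable carries a lock, and an action proceeds only if it can acquire the locks of every variable it affects.

```python
# Environmental variables guarded by locks; conflicting actions are blocked.
import threading

env_locks = {"temperature": threading.Lock(),
             "light":       threading.Lock(),
             "movement":    threading.Lock()}

def try_action(service, affected_vars):
    """Run an action only if no other service holds its environment variables."""
    acquired = []
    for var in sorted(affected_vars):          # fixed order avoids deadlock
        lock = env_locks[var]
        if not lock.acquire(blocking=False):
            for held in acquired:              # interaction detected: back off
                held.release()
            print(service, "blocked: conflicting use of", var)
            return False
        acquired.append(lock)
    print(service, "granted", affected_vars)
    return True   # caller performs the action, then releases the locks

try_action("heating", ["temperature"])         # granted
try_action("window_opener", ["temperature"])   # blocked: temperature in use
```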

Since the configuration of a home's devices and services is continually changing, developing an off-line solution is not practical. Therefore, an on-line approach in the form of an interaction manager has been developed. It is the manager's role to detect interactions.

The approach was shown to work successfully. The manager was able to detect interactions and prevent negative interactions from occurring. Interactions were detected at both device and service level. The approach is flexible: it is protocol independent, services are unaware of the manager, and the manager can cope with new devices and services joining the network. Further, little user intervention is required for the approach to operate.


Title: Ontology for Call Control
Author(s): Gavin A. Campbell
Date: June 2006
Number: CSM-170
Keywords: Accent, Call Control, Internet Telephony, Ontology, OWL, Policy
Abstract:

An ontology provides a common vocabulary through which to share information in a particular area of knowledge, including the key terms, their semantic interconnections and certain rules of inference. Using OWL (The Web Ontology Language), an ontology has been developed describing the domain of (Internet) call control. In particular, the ontology focuses on the use of call control in conjunction with its application within the Accent policy-based management system. The structure of the ontology builds heavily on previously developed ontologies genpol and wizpol. These describe generic aspects of the system, including the core policy description language on which it is based.

This report presents a technical overview of the ontology for (Internet) call control, illustrated by way of graphical depictions of OWL class and property implementations.


Title: Ontology Stack for A Policy Wizard
Author(s): Gavin A. Campbell
Date: June 2006
Number: CSM-169
Keywords: Accent, Ontology, OWL, Policy
Abstract:

An ontology provides a common vocabulary through which to share information in a particular area of knowledge, including the key terms, their semantic interconnections and certain rules of inference. The Accent policy-based management system uses a policy description language called Appel, and supports policy document formation through the use of a comprehensive user interface wizard. Through the use of OWL (the Web Ontology Language), the core aspects of Appel have been captured and defined in an ontology. Assigned the acronym genpol, this ontology describes the policy language independent of any user interface or domain-specific policy information. A further ontology has been developed to define common interface features implemented by the policy wizard. This ontology, referred to as wizpol, directly extends genpol. It provides additional information to the language itself, whilst retaining freedom from any domain-specific policy details. Combined, both genpol and wizpol act as a base for defining further domain-specific ontologies that may describe policy options tailored for a particular application.

This report presents a technical overview of both the generic policy language ontology (genpol) and the wizard policy ontology (wizpol), expressed in the form of graphical depictions of OWL classes and properties.


Title: An Overview of Ontology Application for Policy-Based Management using Poppet
Author(s): Gavin A. Campbell
Date: June 2006
Number: CSM-168
Keywords: Knowledge-based systems, Ontology, OWL, Policy, Poppet, Systems Re-engineering
Abstract:

The use of ontology to describe the key concepts and their interrelationships within a particular area has become a widely recognised and advantageous means of sharing information about the structure of knowledge within a domain. Ontologies provide a way of integrating structured data within an application.

This report provides an overview of how the Accent policy-based management system was significantly re-engineered to utilise an ontology in place of previously hard-coded, domain-specific information within its user interface. In order to integrate the ontology with the policy system successfully, a new framework named Poppet was developed, responsible for parsing and querying ontological data. Although this was a substantial alteration to the technical structure, the re-engineering vastly generalises the policy system, enabling it to be adapted for policy management within any custom domain.

An introduction to the concepts and motivation for ontology creation using OWL is presented, together with general background to the Accent policy system. A technical overview is then given covering the developed ontologies, the Poppet system design, and the policy system re-engineering process. Finally, a comparison is made between the new and old policy system structures, and the impact on system performance is evaluated.


Title: Push-Button Tools for Software Developers, Full Formal Verification for Component Vendors
Author(s): Thomas Wilson, Savitri Maharaj and Robert G. Clark
Date: August 2007
Number: CSM-167
Keywords: Assertion-Based Verification, Run-Time Assertion Checking, Extended Static Checking, Formal verification, Software Component Reuse, Tool Integration
Abstract:

Software developers have varying abilities and develop software with differing reliability requirements. Sometimes reliability is critical and the developers have the mathematical capabilities to perform interactive theorem proving, but this is not usually the case. We believe that most software developers need easy-to-use tools such as run-time assertion checkers and extended static checkers that can help them produce more reliable application-specific code cheaply. However, these lightweight approaches are not sufficient to allow the safe reuse of software components. To safely reuse software components we need comprehensive descriptions and assurances of correctness. These requirements can be provided for by full formal verification, with the additional costs justified by economies of scale. Our Omnibus verification tool provides integrated support for all these different types of verification. This report illustrates these concepts through a sorting implementation.


Title: Policies for H.323 Internet Telephony
Author(s): Tingxue Huang
Date: May 2004
Number: CSM-165
Keywords: Feature, Gatekeeper, GNU Gk, H.323, Internet Telephony, Policy
Abstract:

Firstly, this report examines in which mode an H.323 gatekeeper should work for enforcing policies. Then, it explores how the gatekeeper can cooperate with a policy server. The report discusses how to complement a gatekeeper with policies, using GNU Gk as the basis. The approach takes into account issues of robustness, simplicity and scalability. Finally, the report investigates how to design H.323 policies, and how to combine these with SIP policies on a policy server.


Title: Writing Decision Trees for The CGT Viewer Program
Author(s): Richard Bland, Claire E. Beechey and Dawn Dowding
Date: October 2003
Number: CSM-163
Abstract:

This paper is a tutorial and manual for authors of Decision Trees to be displayed by the CGT Tree Viewer program. It assumes that readers understand the basic concepts of Decision Trees, and have sufficient skills to create and edit plain text files. After reading this document, readers should be able to write their own Decision Tree and get the CGT program to display it.


Title: Extending the Model of The Decision Tree
Author(s): Richard Bland, Claire E. Beechey and Dawn Dowding
Date: August 2002
Number: CSM-162
Abstract:

Decision Trees are a familiar tool for the analysis and presentation of problems involving choice between different possible courses of action. In conventional Decision Trees there are three types of node (Decision, Choice, and Terminal), and nodes have a small number of possible attributes (name, probability, and payoff). After a brief introduction to conventional Decision Trees, the report presents a series of extensions of this standard model. The additions greatly extend the expressive power of Decision Trees, particularly when they are used as a way of conveying information about the context and consequences of the decisions. The extended model adds a new type of node: the Question node, and a much richer range of attributes for nodes and for the tree as a whole. This extended model is independent of any particular (computer) implementation of Decision Trees. It can be used as a way of thinking about the formal properties of Decision Trees, and as a basis for different implementations.
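A small sketch of what the extended model adds, expressed as a data structure (field names are illustrative, not the authors' implementation):

```python
# Conventional node kinds plus the new Question kind, with richer attributes.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    kind: str                             # "decision" | "choice" | "terminal" | "question"
    name: str
    probability: Optional[float] = None   # on branches from choice nodes
    payoff: Optional[float] = None        # on terminal nodes
    description: str = ""                 # example of a richer contextual attribute
    children: List["Node"] = field(default_factory=list)

tree = Node("decision", "Treat?", children=[
    Node("question", "What matters most to you about side effects?", children=[
        Node("choice", "Treatment", children=[
            Node("terminal", "Cured", probability=0.7, payoff=1.0),
            Node("terminal", "Not cured", probability=0.3, payoff=0.0)])])])
```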

The extended model has been implemented by the authors, and is being used within a project to explore the use of Decision Trees as a way of conveying medical information to patients.


Title: Computer Investigations of The Maximal Exceptional Graphs
Author(s): D. Cvetkovic, M. Lepovic, Peter Rowlinson and S. K. Simic
Date: August 2001
Number: CSM-160
Abstract:

A graph is said to be exceptional if it is connected, has least eigenvalue greater than or equal to -2, and is not a generalized line graph. Such graphs are known to be representable in the exceptional root system E8. The 473 maximal exceptional graphs have been found by computer using the star complement technique, and subsequently described using properties of E8. Here we present some information about these graphs obtained in the computer search: the exceptional star complements, some data on extendability graphs and the corresponding maximal graphs, the maximal exceptional graphs and some of their invariants.
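The defining spectral condition is easy to check computationally; the sketch below tests the least-eigenvalue bound for a small graph (this is only the definition, not the star complement technique used in the search):

```python
# Test whether a graph's least adjacency eigenvalue is >= -2.
import numpy as np

def least_eigenvalue(adj):
    return min(np.linalg.eigvalsh(np.array(adj, dtype=float)))

# Adjacency matrix of the 5-cycle C5; its least eigenvalue is
# 2*cos(4*pi/5), roughly -1.62, so the bound holds.
c5 = [[0, 1, 0, 0, 1],
      [1, 0, 1, 0, 0],
      [0, 1, 0, 1, 0],
      [0, 0, 1, 0, 1],
      [1, 0, 0, 1, 0]]
print(least_eigenvalue(c5) >= -2 - 1e-9)   # True
```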


PDF Logo Title: A Symbolic Semantics and Bisimulation for Full LOTOS
Author(s): Muffy Calder (University of Glasgow) and Carron E. Shankland
Date: August 2001
Number: CSM-159
Abstract:

A symbolic semantics for Full LOTOS in terms of symbolic transition systems is defined. The semantics extends the standard one by giving meaning to symbolic, or (data) parameterised, processes. Symbolic bisimulation is defined and illustrated with reference to examples. The approach taken follows that applied to message-passing CCS, but differs in several significant aspects, taking account of the particular features of LOTOS: multi-way synchronisation, value negotiation, and selection predicates.


PDF Logo Title: Formal Specification and Analysis of Digital Hardware Circuits in LOTOS (PhD Thesis)
Author(s): Ji He
Date: August 2000
Number: CSM-158
Keywords: Asynchronous Circuit Design, Conformance Testing, Design Verification, DILL (Digital Logic in LOTOS), Digital Logic, Hardware Verification Benchmark, HDL (Hardware Description Language), LOTOS (Language of Temporal Ordering Specification), Model Checking, Synchronous Circuit Design, Testing, Validation
Abstract:

This thesis discusses using the ISO standard formal language LOTOS (Language Of Temporal Ordering Specification) for formally specifying and analysing digital circuits. The study serves a two-fold purpose: it examines the possibility of extending LOTOS outside its traditional areas, and it provides a new formalism to aid the design of correct hardware.

Digital circuits are usually classified into synchronous (clocked) and asynchronous (unclocked) circuits. The thesis addresses both of these. LOTOS models for signals, wires, components and component connections are established, together with behavioural models of digital components in synchronous and asynchronous circuits. These formal models help to build rigorous specifications of digital circuits, which are not only valuable documentation but also serve as the basis for further analysis. The investigation shows that LOTOS is suitable for specifying digital circuits at various levels of abstraction. Compared with other formalisms, it is especially efficient for higher-level modelling. However, there is a gap between LOTOS models and real-world hardware, which results from LOTOS abstracting away the distinction between the inputs and outputs of a system. The gap is bridged by introducing input-receptive or input quasi-receptive specifications.

Two analysis approaches are investigated in the thesis, namely formal verification and conformance testing. Verification aims to check the correctness of the formal model of a circuit; it is exhaustive and can ensure the correctness of the model being checked. Testing, by contrast, is applied to a physical product or to a formal or informal model; it can never be exhaustive, but is nonetheless very useful when a formal model is difficult to build.

Current LOTOS verification techniques support the three common verification tasks, i.e. requirements capture, implementation verification and design verification. In this thesis, model checking is used to fulfil these tasks. It is found that verification of synchronous circuits is relatively straightforward, since LOTOS tools can be used directly. For verifying asynchronous circuits, two conformance relations are defined to take the different roles of inputs and outputs into account. Compared to other hardware verification approaches, the approach presented in this thesis has the advantage of finding bugs at an early stage of development, because LOTOS can be used in higher-level modelling. Moreover, LOTOS is supported by various verification techniques, which are complementary to each other and give more opportunities to detect design faults.

The thesis explores a new direction in applying formal methods to digital circuit design. The basic idea is to combine formal methods with traditional validation approaches. LOTOS conformance testing theory is employed to generate test cases from higher-level formal specifications. The test cases are then applied to commercial VHDL (VHSIC Hardware Description Language) simulators to simulate lower-level circuit designs. Case studies reveal that the approach is very promising. For example, it can detect bugs which cannot be captured by examining the usual kind of formal model.

Timing characteristics are important factors in digital design. To specify and analyse timed circuits, ET-LOTOS is exploited. Two important timing characteristics in digital circuits are identified, namely delays and timing constraints. Timed specifications of digital circuits are the composition of these timing characteristics with functionality. Based on the formal specifications, rigorous analysis can be applied. The method is valuable in discovering subtle design bugs related to timing, such as hazards and race conditions, and can also be used for analysing the speed performance of digital circuits.


Title: Star Sets and Related Aspects of Algebraic Graph Theory
Author(s): Penelope S. Jackson
Date: July 2000
Number: CSM-157


PDF Logo Title: Constructions of The Maximal Exceptional Graphs with Largest Degree less than 28
Author(s): D. Cvetkovic, Peter Rowlinson and S. K. Simic
Date: July 2000
Number: CSM-156
Abstract:

A graph is said to be exceptional if it is connected, has least eigenvalue greater than or equal to -2, and is not a generalized line graph. Such graphs are known to be representable in the exceptional root system E8. The 473 maximal exceptional graphs have been found by computer, and the 467 with maximal degree 28 have been characterized. Here we construct the remaining six maximal exceptional graphs by means of representations in E8.


PDF Logo Title: Extending Hardware Description in SDL
Author(s): Javier Argul-Marin and Kenneth J. Turner
Date: February 2000
Number: CSM-155
Keywords: ANISEED (Analysis In SDL Enhancing Electronic Design), Digital Logic, Hardware Verification Benchmark, HDL (Hardware Description Language), SDL (Specification and Description Language), Validation
Abstract:

The use of SDL (Specification and Description Language) for digital hardware description and analysis is investigated in this report. It continues the work undertaken at the University of Stirling and the Technical University of Budapest on hardware description with SDL, offering a modular approach to hardware design in SDL.

Although SDL is widely used in the software and telecommunications communities, it is not well known to hardware designers. However, it has attracted research interest because it offers good system structuring features and the possibility of software-hardware co-design.

One way of supporting hardware engineers when translating a circuit schematic into an SDL specification is to have a library of ready-to-use or pre-defined digital components. These elements may then be used as building blocks to aid in the development of more complex electronic hardware.

The main goal of this report has been to extend an existing SDL logic library, in an attempt to reflect the range of components typically available to electronics designers. Using these libraries and a commercial tool for SDL, the properties of a realistic circuit can now be investigated. Making use of these new elements, a practical case study has been carried out. The overall results clearly show that hardware description in SDL is an interesting alternative to other more traditional methods of hardware analysis.


Title: Graph Eigenspaces of Codimension at most Five
Author(s): Peter Rowlinson and Francis K. Bell
Date: February 2000
Number: CSM-154
Keywords: Graph, Eigenvalue, Eigenspace
Abstract:

For a given positive integer t there are only finitely many graphs with an eigenvalue µ not in {-1,0} such that the eigenspace of µ has codimension t. The graphs for which t <= 5 are identified.
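
As a small illustrative example (ours, not taken from the paper): in the 6-cycle the eigenvalue µ = 1 has multiplicity 2, so its eigenspace has codimension 4, within the bound considered here:

    # Eigenspace codimension for the 6-cycle: eigenvalue 1 has multiplicity 2,
    # hence codimension 6 - 2 = 4.
    import numpy as np

    n = 6
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    eig = np.round(np.linalg.eigvalsh(A), 6)
    mult = int((eig == 1.0).sum())
    print(mult, n - mult)                   # 2 4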


PDF Logo Title: Classifying Acute Abdominal Pain by assuming Independence: A Study using Two Models constructed directly from Data
Author(s): Clifford S. Thomas
Date: January 2000
Number: CSM-153
Abstract:

Medical Expert Systems have been under development for several years, MYCIN being a typical example of a major project. Many of these systems have been constructed from expert opinion. In this paper we focus our attention on the semi-automated construction of a classifier directly from the database itself. The methodology employed utilises a data set specifically concerning the domain of Acute Abdominal Pain (AAP), courtesy of St. John's Hospital, Livingston, Scotland. Our objective is that, given a set of diseases, the computer diagnostic system will use a certain procedure to classify patients in accordance with their symptoms.

The classifying procedure we propose centres on the assumption of independence. More specifically, it uses a model described by naive Bayes (sometimes referred to as simple Bayes) and a new proposed model derived from the Mutual Information Measure (MIM). Both models are constructed from a randomly selected 2/3 subset of the AAP data set and then validated on the remaining 1/3 of unseen test data.

A total of 3705 patient record test samples were classified using the two models, and the results were compared to those obtained by the doctors at St. John's Hospital. The MIM classifier correctly identified 70.04% of the unseen test samples, with the naive Bayes classifier achieving 73.17%. In comparison to the doctors' 70.77%, both models of independence performed with a similar level of expertise, naive Bayes being marginally higher.
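
A minimal sketch of classification under the independence assumption (toy Python with invented symptom data; the AAP data set itself is of course not reproduced here):

    # Toy naive Bayes over categorical features; the data values are invented.
    from collections import Counter, defaultdict

    def train(samples):                      # samples: (features_dict, label)
        prior = Counter(lab for _, lab in samples)
        cond = defaultdict(Counter)          # (feature, label) -> value counts
        for feats, lab in samples:
            for f, v in feats.items():
                cond[(f, lab)][v] += 1
        return prior, cond, len(samples)

    def classify(feats, prior, cond, total):
        def score(lab):
            p = prior[lab] / total
            for f, v in feats.items():
                counts = cond[(f, lab)]
                p *= (counts[v] + 1) / (sum(counts.values()) + 2)  # add-one smoothing
            return p
        return max(prior, key=score)

    data = [({"pain": "rlq", "fever": "yes"}, "appendicitis"),
            ({"pain": "diffuse", "fever": "no"}, "non-specific"),
            ({"pain": "rlq", "fever": "no"}, "appendicitis")]
    prior, cond, total = train(data)
    print(classify({"pain": "rlq", "fever": "yes"}, prior, cond, total))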


PDF Logo Title: Modelling and Verifying Synchronous Circuits In DILL
Author(s): Ji He and Kenneth J. Turner
Date: February 1999
Number: CSM-152
Keywords: Bus Arbiter, CADP (Caesar Aldébaran Development Package), Digital Logic, DILL (Digital Logic in LOTOS), Hardware Verification Benchmark, HDL (Hardware Description Language), LOTOS (Language of Temporal Ordering Specification), Single Pulser, Verification
Abstract:

This report investigates modelling and verifying synchronous circuits in DILL (Digital Logic in LOTOS). The synchronous circuit model used here is quite similar to the classical one exploited in digital logic design, but some additional restrictions are applied to simplify analysis. The basic logic gate and storage element models are modified from previous versions of DILL to suit synchronous design. To evaluate the approach, two benchmark circuits are specified and then verified using CADP (Caesar Aldébaran Development Package).


Title: Orthogonally Persistent Support for Persistent CORBA Objects
Author(s): Adrian O'Lenskie, Alan Dearle and David A. Hulse
Date: April 1998
Number: CSM-151
Abstract:

The Object Management Group (OMG) have defined the CORBA Persistent Object Service (POS) which is intended to provide support for persistent CORBA objects. In this paper we describe the CORBA POS and demonstrate how it is deficient in a number of areas. Specifically, it lacks a semantically meaningful failure model, it exhibits problems with encapsulation, and it encourages the prevalence of proprietary architectures which inhibit vendor independence. The first of these is perhaps the most serious since it results in the inability of a distributed system to recover to any meaningful state following individual node failures. The paper introduces an alternative CORBA Persistent Object Service hosted by a novel persistent operating system called Grasshopper. Persistent CORBA objects hosted by Grasshopper do not suffer from any of the problems outlined above. The paper briefly describes the Grasshopper system and how it may be used to host persistent CORBA objects.


Title: Trends in Operating System Design: Towards A Customisable Persistent Micro-Kernel
Author(s): Alan Dearle and David A. Hulse
Date: August 1998
Number: CSM-150
Abstract:

Monolithic and micro-kernel based operating systems such as Unix have failed to provide application developers with sufficient flexibility. They provide a host of inefficient and often inappropriate abstractions that prevent applications from accessing the hardware to exploit efficiency gains. These problems motivated the Grasshopper project to build a new operating system designed to explicitly support orthogonal persistence. Five years on, Grasshopper has demonstrated the feasibility of such an operating system although several problems have been identified. In the light of this, we decided to redesign our kernel using modern techniques. This paper examines the trends in operating system design over the last few years, and describes our plans to produce a new persistent micro-kernel.


Title: IFIP Workshop on Modelling of Microsystems: Methods, Tools and Application Examples
Author(s): Charles M. I. Rattray (editor)
Date: June 1998
Number: CSM-149
Abstract:

The purpose of the Workshop was to afford those who recognise the importance of developing modelling concepts and tools for high-level microsystem design an opportunity to discuss current theory, latest practice and industrial-level applications, and to generate proposals for future research and development. Experienced researchers and practitioners in the field, as well as those new to the area, attended.

Presentations were given on formal modelling methods for higher design levels, on design languages which support the implementation of such methods, and on applications which already use such formal methods and tools in the design process.

The Proceedings contain a mixture of full papers and abstracts.


PDF Logo Title: Learning XOR: Exploring the Space of a Classic Problem
Author(s): Richard Bland
Date: June 1998
Number: CSM-148
Abstract:

Boolean functions of two inputs are amongst the simplest of all functions, and the construction of simple neural networks that learn to compute such functions is one of the first topics discussed in accounts of neural computing. Of these functions, only two pose any difficulty: these are XOR and its complement. XOR therefore occupies a historic position. It has long been recognised that simple networks often have trouble in learning the function; as a result their behaviour has been much discussed, and the ability to learn to compute XOR has been used as a test of variants of the standard algorithms. This report puts forward a framework for looking at the XOR problem and, using that framework, shows that the nature of the problem has often been misunderstood. The report falls into three main parts. Firstly, in Section 2 I review neural nets in general and the XOR problem in particular. This section is based on classic studies and the material will be familiar to most readers. Secondly, in Section 3 I look in considerable detail at the problem-space of the XOR problem, using the simplest network that can solve it: one restricted to forward transmission, with each layer communicating only with the next. Finally, in Section 4 I give a comprehensive account of the error surface. I show that this surface has exactly sixteen minima and that each is a solution to the XOR problem. This is a new result and corrects errors made by many previous authors.
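
For readers wanting a concrete picture of such a network, the sketch below shows the 2-2-1 threshold architecture with one known weight solution (our illustration; the report's error-surface analysis is not reproduced here):

    # One exact solution of XOR on a 2-2-1 threshold network: h1 = OR, h2 = AND,
    # output = h1 AND NOT h2.  These weights are one known solution, not unique.
    def step(x):
        return 1 if x >= 0 else 0

    def xor_net(x1, x2):
        h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)     # OR
        h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)     # AND
        return step(1.0 * h1 - 1.0 * h2 - 0.5)   # h1 AND NOT h2

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, xor_net(a, b))           # 0 0 0 / 0 1 1 / 1 0 1 / 1 1 0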


PDF Logo Title: ROOA with SDL
Author(s): Robert G. Clark and Ana M. D. Moreira
Date: November 1998
Number: CSM-147
Abstract:

Our interest is in making object-oriented analysis a more rigorous process. As we wish to create a practical and usable method, we do not propose a new specification language. Instead, we base our work on standard formal description techniques which provide executable specifications and which are supported by validation and simulation tools, so that prototyping can be used to validate a specification against the requirements. Also, the first steps in our rigorous object-oriented analysis method are based on widely used informal methods such as OMT and OOSE. In this report, we show how SDL can be applied during object-oriented analysis to produce a formal object-oriented requirements specification. SDL is a standard formal description technique that is normally used in the design phase of systems development.

Building a formal specification from informal requirements is difficult. To simplify this, our process builds two formal models: a user-centred model and a system-centred model. The user-centred model is based on scenarios and specifies the external behaviour of a system from the viewpoint of the environment. It is used to support the construction of the system-centred model which is the formal object-oriented requirements specification.

We represent both models in the same formal language (in this case SDL, but it could be another formal description technique such as LOTOS).


Title: Software Development and Continual Change: A programmer's attitude problem? (Thesis)
Author(s): Philip A. Harwood
Date: July 1998
Number: CSM-146
Abstract:

Software forms around a requirement. Defining this requirement is often regarded as the hardest part of software engineering. The requirement, however, has an additional complexity: once defined, it will change with time. This change of requirement can come either from the user or from the rapid advances in `computer' technology. How, then, can software continue to remain `current', both in terms of requirements and technology, in this forever changing environment?

This thesis examines the issues surrounding `change' as applied to software engineering. Changing requirements are often deemed a `curse' placed upon software engineers. It has been suggested, however, that the problems associated with change exist only in the attitude of software engineers. This is perhaps understandable considering the training methods and tools available to supposedly `help' them.

The evidence shows that the quality of management and the experience of the personnel involved in development contribute more significantly to the success of a development project than any technical aspect. This unfortunately means that the process is highly susceptible to staff turnover which, if uncontrolled, can lead to impending disaster for the users. This suggests that a `better' system would be developed if `experience' were maintained at a process level, rather than at an individual level.

Conventional methods of software engineering are based upon a defined set of requirements which are determined at the beginning of the software process. This thesis presents an alternative paradigm which requires only a minimal set of requirements at the outset and actively encourages changes and additional requirements, even with a mature software product. The basis of this alternative approach is the form of the `requirements specification' and the capturing and re-use of the `experience' maintained by the software process itself.


PDF Logo Title: Timed DILL: Digital Logic in LOTOS
Author(s): Ji He and Kenneth J. Turner
Date: April 1998
Number: CSM-145
Keywords: Digital Logic, DILL (Digital Logic in LOTOS), ET-LOTOS (Enhanced Timed LOTOS), HDL (Hardware Description Language), LOTOS (Language of Temporal Ordering Specification), TE-LOTOS (Time Extended LOTOS)
Abstract:

A timed extension to DILL (DIgital Logic in LOTOS) is discussed. The extension is based on ET-LOTOS (Enhanced Timed LOTOS) and is used to specify timed digital logic. The approach places timing properties in two categories: timing constraints and delays. A parallel-serial model is used to form a timed specification by composing the functionality of a digital component with timing constraints and delays. A variety of common timing constraints and delays have been specified, adding these as pseudo-components to the DILL library for subsequent reuse. The characteristics of other timed components in the DILL library are also discussed. The approach to timed specification and analysis is illustrated using a standard multiplexer component.
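
The flavour of composing functionality with a delay pseudo-component can be suggested outside LOTOS. The sketch below is a loose Python analogue of ours (timing constraints such as setup and hold checks are omitted):

    # Rough analogue of composing a component's functionality with a delay:
    # the logic function decides the value, the delay shifts its timestamp.
    def and_gate(a, b):
        return a and b

    def with_delay(fn, delay):
        def timed(t, *inputs):                 # input change at time t
            return t + delay, fn(*inputs)      # output settles delay units later
        return timed

    timed_and = with_delay(and_gate, delay=2)
    print(timed_and(10, True, True))           # (12, True)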


PDF Logo Title: An Architecture Based Approach to Specifying Distributed Systems in LOTOS and Z
Author(s): Richard O. Sinnott
Date: September 1997
Number: CSM-144
Keywords: Distributed Systems, LOTOS (Language Of Temporal Ordering Specification), Multimedia, ODP (Open Distributed Processing), Type Theory, Viewpoint, Z
Abstract:

Specification is difficult. Often the most difficult aspect of specifying is structuring the specification in the first place. Adopting an architectural approach can help to alleviate this structuring problem. We have investigated how the formal languages LOTOS and Z can be used to develop specification templates suitable for architecting specifications of distributed systems.

The immediate question that arises is: where do the architectural concepts come from? We have focused primarily on the work of the current standardisation activity of Open Distributed Processing (ODP). The approach taken there is to provide an object-oriented set of concepts and to use these as the basis for developing a multi-viewpoint approach. A viewpoint may be regarded as an abstraction of the system focusing on some particular aspect, the intention being to help reduce the complexity of the system as a whole. ODP identifies five viewpoints: the enterprise, information, computational, engineering and technology viewpoints. In our work, we formalise the foundation set of concepts in LOTOS and Z, and then show how specification architectures based on the computational viewpoint can be developed. We also highlight the advantages of a formal approach through the identification of limitations and errors in the ODP framework.

Central to work on distributed systems (or any system adopting an object-oriented methodology), and to the computational viewpoint of ODP in particular, is the issue of type management. We have investigated how LOTOS and Z can be used to reason about type systems generally and to investigate the issues that have to be resolved in determining type equivalence. We base our work on the idea of type equivalence as a substitutability relation between types. We address issues such as signature type checking, behavioural type checking and non-functional aspects of type checking such as quality of service. Our investigations have also included a treatment of multimedia type systems where continuous flows of information raise performance issues that are particularly important.
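
To make the substitutability idea concrete, a toy check (ours; types are crudely modelled as sets of accepted and produced value kinds) captures the usual contravariant/covariant reading:

    # Hypothetical signature substitutability check: a service type S can stand
    # in for T if S accepts at least T's inputs (contravariance) and promises
    # no more than T's outputs (covariance).  Types are modelled as sets here.
    def substitutable(s_params, s_result, t_params, t_result):
        return t_params <= s_params and s_result <= t_result

    # T accepts {"int"} and may return {"int", "error"}; S accepts
    # {"int", "float"} and returns {"int"}: S can substitute for T.
    print(substitutable({"int", "float"}, {"int"}, {"int"}, {"int", "error"}))  # True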

We have applied our approach to two case studies: the specification of the ODP trader in LOTOS and the specification of a producer and consumer flow configuration in Z. The advantages and disadvantages of adopting LOTOS or Z to develop specification architectures for distributed systems are highlighted and discussed.


PDF Logo Title: Walk Backwards to Happiness - Debugging by Time Travel
Author(s): Simon P. Booth and Simon B. Jones
Date: July 1997
Number: CSM-143
Abstract:

(This paper was presented at the 3rd International Workshop on Automated Debugging (AADEBUG'97), hosted by the Department of Computer and Information Science, Linköping University, Sweden, May 1997.)

When a variable in a program has an incorrect value, debugging is often a process of discovering the history of that variable, or rather the ancestry of the value that it contains. In this paper we propose a new debugging technique that revolves around being able to trace back through the history of a particular variable, or of the variables that it depends upon. Our development is in the domain of functional programming, where the proposal has particular significance because so many standard debugging techniques cannot be used at all.


PDF Logo Title: Extended DILL: Digital Logic in LOTOS
Author(s): Ji He and Kenneth J. Turner
Date: November 1997
Number: CSM-142
Keywords: Digital Logic, DILL (Digital Logic in LOTOS), Hardware Description Language, LOTOS (Language Of Temporal Ordering Specification)
Abstract:

DILL (Digital Logic in LOTOS) is a language and an approach to specify digital logic in LOTOS; the initial version of DILL was developed in 1993 at Stirling. This technical report investigates further the possibilities for specifying and analysing digital systems in LOTOS. The new version of DILL contains more building blocks, including tri-state components and abstract descriptions. Strategies for specifying multi-bit signals are also supported in the extended DILL language. These extensions aim to provide a more flexible and powerful capability for specifying digital logic.

In the report, an example of designing a simple CPU is given to examine the new extensions and to give an overall feel for the DILL approach. The example indicates that DILL is suitable for specifying digital logic, not only at the gate level but also at an abstract level. Through the example we have gained more confidence in the suitability of DILL for specifying and analysing digital logic, especially for larger-scale digital circuits.


PDF Logo Title: Are Ours Really Smaller Than Theirs?
Author(s): Simon B. Jones and Simon P. Booth
Date: November 1996
Number: CSM-141
Abstract:

(This paper was presented at the 1996 Glasgow Functional Programming Workshop, held in Ullapool, July 1996. It is published in the proceedings: "Functional Programming, Glasgow 1996", Springer-Verlag, electronic Workshops in Computing Series, 1996.)

The claim is often made that functional programs are "more" expressive than their imperative counterparts. This paper examines this claim using a particular measure of (i) program length and (ii) programming language "level" (a measure of expressive power) - both from the work of Halstead on software metrics.
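
Halstead's measures are computed from operator and operand counts; the sketch below uses the standard textbook formulas (our illustration, not code from the paper):

    # Halstead metrics from operator/operand counts (textbook formulas):
    # n1, n2 = distinct operators/operands; N1, N2 = total occurrences.
    import math

    def halstead(n1, n2, N1, N2):
        N = N1 + N2                         # program length
        V = N * math.log2(n1 + n2)          # volume
        L = (2.0 / n1) * (n2 / N2)          # estimated program level
        return {"length": N, "volume": V, "level": L,
                "language_level": L * L * V}   # Halstead's lambda

    print(halstead(n1=10, n2=7, N1=28, N2=22))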


PDF Logo Title: Using A Formal User-Centred Model to build a Formal System-Centred Model
Author(s): Robert G. Clark and Ana M. D. Moreira
Date: March 1997
Number: CSM-140
Abstract:

We have been investigating the process of constructing an executable, formal and object-oriented specification from a set of informal requirements. This is not an easy task as there is a wide gap between the structure and notation of a formal specification and the requirements from which it is to be derived. It also cannot be a formal process.

As informal requirements are usually expressed in terms of the behaviour which the environment expects from the system, we propose that the construction of a formal and executable user-centred model should precede the construction of a formal object-oriented specification. By prototyping the user-centred model, we can both validate it with respect to the requirements and show up inconsistencies in the requirements. The user-centred model can then be used to support the construction of a formal system-centred model, i.e. the object-oriented specification.

When both models are expressed in the same executable formal language, the informal task of validating the object-oriented specification with respect to the requirements can be replaced by verifying that it is equivalent to the user-centred model. We already have the ROOA (Rigorous Object-Oriented Analysis) method, which proposes a process to build a formal system-centred model. Now we are proposing a process to build a user-centred model. As an example of this approach, we show its use within the ROOA method.


Title: Towards a Purely Functional Debugger for Functional Programs
Author(s): Simon B. Jones and Simon P. Booth
Date: August 1996
Number: CSM-139
Abstract:

(This paper was presented at the 1995 Glasgow Functional Programming Workshop, held in Ullapool, July 1995. It is published in the proceedings: "Functional Programming, Glasgow 1995", Springer-Verlag, electronic Workshops in Computing Series, 1996. ISBN 3-540-14850-X. For copyright reasons the paper is not available online.)

A major drawback when developing large applications with functional programming languages is the lack of good debugging tools. When using imperative languages all sorts of useful information about the program's behaviour is available via a good debugger.

In this paper we present a debugging technique that allows the examination of the behaviour of programmer-defined functions in the context of the whole program. In particular, we present a technique that allows examination of a function's input and of the result of applying the function to that input.


Title: Experiences with Clean I/O
Author(s): Simon B. Jones
Date: August 1996
Number: CSM-138
Abstract:

(This paper was presented at the 1995 Glasgow Functional Programming Workshop, held in Ullapool, July 1995. It is published in the proceedings: "Functional Programming, Glasgow 1995", Springer-Verlag, electronic Workshops in Computing Series, 1996. ISBN 3-540-14850-X. For copyright reasons the paper is not available online.)

The Clean system is a powerful functional programming tool. It contains experiments in a number of different areas of functional language design. In particular, it has a novel proposal for the organization of input and output, and contains impressive libraries of facilities for programming graphical user interfaces.

Clean I/O is based on collections of operations that act to cause side-effects on multiple explicit abstract values representing physical I/O entities, such as files and graphical interfaces. A system of unique types is used to ensure that these values are individually single-threaded through the program, and the side-effecting I/O operations are therefore well controlled. This approach is distinct from monadic I/O, which is being widely adopted; monadic I/O schemes are based on a single, implicit environment, and guarantee that this is single-threaded.

In this paper we will show that the Clean and monadic approaches to I/O merge nicely. The functionality provided by Clean and its I/O libraries allows libraries for monadic I/O to be implemented. The paper presents an implementation of a basic I/O monad library in Clean that can serve for future development. In itself, the fact that the monadic approach can be implemented in Clean is unsurprising. However, some interesting technical difficulties arose during implementation of the monad; these and their solutions are discussed. The opportunity to express programs using the implicit environments of monadic I/O allows us to simplify Clean programs by removing some of the spaghetti, whilst retaining the generality of the explicit environments where that is the most appropriate approach.
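
The shape of that construction can be suggested outside Clean. In the Python sketch below a plain list stands in for the world, and nothing enforces the single-threading that Clean's uniqueness types guarantee:

    # Sketch of building monadic I/O on top of explicit environment passing.
    # The 'world' is a plain list here; Clean's uniqueness typing, which makes
    # the explicit style safe, has no Python counterpart.
    class IO:
        def __init__(self, run):            # run : world -> (result, world)
            self.run = run
        def bind(self, f):                  # sequence two actions
            def run(world):
                x, world2 = self.run(world)
                return f(x).run(world2)
            return IO(run)

    def put_line(s):                        # an explicit-style operation, wrapped
        return IO(lambda world: (None, world + [s]))

    prog = put_line("hello").bind(lambda _: put_line("world"))
    _, final_world = prog.run([])
    print(final_world)                      # ['hello', 'world']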


Title: A Mutual Information Measure Classifier
Author(s): Clifford S. Thomas
Date: May 1996
Number: CSM-137
Abstract:

Automated discovery of knowledge in databases is becoming more important as data not only grows continuously but sometimes lies dormant. In parallel with this challenging real-world problem, classification techniques are being refined or newly developed so that this knowledge, once discovered, can be fully utilised.

This paper addresses a technique that offers both a fast automated solution to knowledge discovery and a classifier modelled directly from the database. A Mutual Information Measure (MIM) structure is presented and subsequently compared, as a classifier, with a conventional feed-forward multilayer neural network. Both models are validated using a medical data set containing the results of four acute abdominal pain diagnostic groups.

A total of 2281 samples were randomly partitioned into a learning set of 1520 and a test set of 761. The overall classification levels achieved by the two models, when compared to those of the experts themselves (the doctors), were surprisingly close. The MIM correctly classified 83.4% of the test set, whilst the neural network achieved 78.8%. The latter was marginally higher than the experts at 74.8%, with the MIM performing significantly higher.


Title: Graph Eigenspaces of Small Codimension
Author(s): Francis K. Bell
Date: March 1996
Number: CSM-136
Abstract:

We consider graph eigenspaces corresponding to eigenvalues not belonging to {-1,0}. There are only finitely many graphs with such an eigenspace of given codimension. A method for determining all such graphs is given, and those with codimension at most 4 are determined explicitly.


Title: Flexible Binding Strategies in Persistent Programming Environments
Author(s): Alex S. M. Farkas
Date: October 1995
Number: CSM-135
Abstract:

The Flexible Binding Strategies (FBS) approach differs greatly from conventional programming, in which programs consist of denotations that describe an algorithm and values that may be constructed at run-time. FBS permits this conventional style of programming, but augments it with the ability to describe values that exist at compilation time or construction time. The new programming style allows programs to be constructed that are shorter than those previously expressible in many persistent programming languages. This brevity is not achieved without cost - the programs are more tightly bound. It is not imagined that such a mechanism will be appropriate for all programming tasks. Instead, it is asserted that the new style of programming that emerges from this ability will augment techniques already at the disposal of the programmer.


PDF Logo Title: Evolution and Incremental Construction Support via Integrated Programming Environment Mechanisms
Author(s): Alex S. M. Farkas and Alan Dearle
Date: October 1995
Number: CSM-134
Abstract:

The mechanisms described in this paper support a software engineering environment in which the transition from initial design through implementation and into maintenance is a smooth and continuous process. Two new data types, Nodule and Octopus, are presented. Nodules are templates containing compiled code and labelled typed locations, and are intended to support the interactive construction and evolution of applications. The major benefit of Nodules over other systems is that they permit application systems to be generated that are complete, internally consistent and strongly type checked. The Octopus mechanism permits executable application systems to be evolved in situ. Using this mechanism, the information that was available to the application developer is accessible to the maintenance programmer. It also enables components of the application to be evolved or replaced, and reconnected to live data in a type-safe manner. When combined into a single system, the Nodule and Octopus data types enable a rich collection of information about the structure and state of applications to be maintained and made available to programmers, not only during the construction phase but also during the entire lifetime of applications.


PDF Logo Title: Formalising Trust as a Computational Concept (PhD Thesis)
Author(s): Stephen P. Marsh
Date: January 1995
Number: CSM-133
Abstract:

Trust is a judgement of unquestionable utility - as humans we use it every day of our lives. However, trust has suffered from an imperfect understanding, a plethora of definitions, and informal use in the literature and in everyday life. It is common to say `I trust you' but what does that mean?

This thesis provides a clarification of trust. We present a formalism for trust which provides us with a tool for precise discussion. The formalism is implementable: it can be embedded in an artificial agent, enabling the agent to make trust-based decisions. Its applicability in the domain of Distributed Artificial Intelligence (DAI) is raised. The thesis presents a test-bed populated by simple trusting agents which substantiates the utility of the formalism.

The formalism provides a step in the direction of a proper understanding and definition of human trust. A contribution of the thesis is its detailed exploration of the possibilities of future work in the area.


PDF Logo Title: Rigorous Object Oriented Analysis
Author(s): Ana M. D. Moreira
Date: August 1994
Number: CSM-132
Abstract:

Object-oriented methods for analysis, design and programming are commonly used by software engineers. Formal description techniques, however, are mainly used in a research environment. We have investigated how rigour can be introduced into the analysis phase of the software development process by combining object-oriented analysis (OOA) methods with formal description techniques. The main topics of this investigation are a formal interpretation of OOA constructs using LOTOS, a mathematical definition of the basic OOA concepts using a simple denotational semantics and a new method for object-oriented analysis that we call the Rigorous Object-Oriented Analysis method (ROOA).

The LOTOS interpretation of the OOA concepts is an intrinsic part of the ROOA method. It was designed in such a way that software engineers with no experience in LOTOS can still use ROOA. The denotational semantics of the concepts of object-oriented analysis illuminates the formal syntactic transformations within ROOA and guarantees that the basic object-oriented concepts can be understood independently of the specification language we use.

The ROOA method starts from a set of informal requirements and an object model and produces a formal object-oriented analysis model that acts as a requirements specification. The resulting formal model integrates the static, dynamic and functional properties of a system in contrast to existing OOA methods which are informal and produce three separate models that are difficult to integrate and keep consistent. ROOA provides a systematic development process, by proposing a set of rules to be followed during the analysis phase. During the application of these rules, auxiliary structures are created to help in tracing the requirements through to the final formal model.

As LOTOS produces executable specifications, prototyping can be used to check the conformance of the specification against the original requirements and to detect inconsistencies, omissions and ambiguities early in the development process.


Title: Applying Design Patterns to the Object-Oriented Construction of Frameworks
Author(s): Christopher J. Woodcock and Robert G. Clark
Date: December 1994
Number: CSM-131


Title: Defining Object-Oriented Design Patterns with Frameworks
Author(s): Christopher J. Woodcock and Robert G. Clark
Date: November 1994
Number: CSM-130


PDF Logo Title: Specification Case Studies in ROOA
Author(s): Ana M. D. Moreira and Robert G. Clark
Date: October 1994
Number: CSM-129
Abstract:

The Rigorous Object-Oriented Analysis (ROOA) method starts from a set of informal requirements and produces a formal object-oriented analysis model that acts as a requirements specification. This specification is expressed in the standard formal description language LOTOS.

The ROOA formal model integrates the static, dynamic and functional properties of a system in contrast to existing OOA methods which are informal and produce three separate models that are difficult to integrate and keep consistent. ROOA provides a systematic development process by proposing a set of rules to be followed during the analysis phase. During the application of these rules, auxiliary structures are created to help in tracing the requirements through to the final formal model.

As LOTOS produces executable specifications, prototyping can be used to check the conformance of the specification against the original requirements and to detect inconsistencies, omissions and ambiguities early in the development process.

ROOA has been applied to several problems. This document shows how this can be done, by presenting three case studies: an automated banking system, a warehouse management system and a car rental system.


Title: Design Documentation for the Ghost System
Author(s): Gary Marsden, M. K. Tan and Charles M. I. Rattray
Date: September 1994
Number: CSM-128
Abstract:

This document describes the most recent version of the GHOST system. It contains information on the high level aims of the system, the lower level operations of the GHOST machine implementation, and the rationale behind various implementation decisions.


PDF Logo Title: An Experiment in Compile Time Garbage Collection
Author(s): Simon B. Jones
Date: September 1994
Number: CSM-127
Keywords: functional languages, language implementation, optimization, memory management, garbage collection
Abstract:

This report presents the details and conclusions of an experiment designed to assess the potential benefits of a particular proposal for optimising the execution of functional programs by compile-time garbage collection. The optimisations proposed comprise the compile-time insertion of code to directly re-use heap cells which are released and can be statically re-allocated immediately. The optimisations are determined by static sharing analysis of strict, first-order functional programs using list (tree-like) data structures. The method is powerful enough to detect many places in programs where the optimisation can be applied, but the effect on the performance of `typical' programs has not been practically assessed before. In this experiment a non-trivial Haskell program is adapted to run as a strict, first-order LML program. Re-use optimisation is performed (simple deallocation is not possible in LML as it has no free list in the heap), and the performance of the unoptimised and optimised versions is compared. The results turn out to be (surprisingly) disappointing: although the number of bytes allocated from the heap is reduced by about 8% and the time for garbage collections is reduced by about 15%, the cell allocation time itself is not improved, and the garbage collection time amounts to only 10% of the total program execution time. There is a slight improvement in the locality of memory references, reducing the cache penalty overhead by about 3%. The total time reductions are of the order of 4.5% - which is not encouraging.


PDF Logo Title: Exploiting the m4 Macro Language
Author(s): Kenneth J. Turner
Date: September 1994
Number: CSM-126
Abstract:

m4 is a macro language and macro processor that is available with a number of operating systems such as Unix and MS-DOS. The role of m4 is discussed, and it is concluded that m4 has a useful part to play even though it has not achieved widespread popularity. A summary of m4 is given along with some basic examples. A cookbook of techniques is then described, allowing surprisingly sophisticated macros to be developed. Some applications of the approach are described for specifying Open Systems Interconnection standards, telecommunications service creation and digital logic circuits. To show the range of possibilities for m4, a few uses by others for specification and text-processing are briefly discussed.


PDF Logo Title: Formalizing OO Analysis with LOTOS
Author(s): Ana M. D. Moreira, Peter B. Ladkin and Robert G. Clark
Date: August 1994
Number: CSM-125
Abstract:

We define the denotational semantics of the concepts of Object-Oriented Analysis (OOA), in order to provide a generic description of the transformation from OOA into a formal model. We have developed the ROOA (Rigorous Object-Oriented Analysis) method, which builds on an object model created by using OOA methods and refines it into a formal model expressed in LOTOS. We illustrate the semantics with a ROOA-developed LOTOS specification.

Other semantics have focused on objects, and derived the meaning of classes and templates from them. In contrast, to fit in with OOA as it is practised, we focus on the ROOA concept of the class template, and explain how the behaviour of objects in an implemented system is constrained by the behaviour expression contained in the template.


PDF Logo Title: Construction of LOTOS Behaviour Expressions from Object Communication Diagrams
Author(s): Robert G. Clark
Date: August 1994
Number: CSM-124
Abstract:

The Rigorous Object-Oriented Analysis (ROOA) method is a systematic development process which takes a set of informal requirements and produces a formal object-oriented analysis model expressed in the standard formal description language LOTOS. An intermediate step in the production of the LOTOS model is the creation of an Object Communication Diagram (OCD), which is a graphical representation of the eventual LOTOS behaviour expression.

It is possible to construct object communication diagrams for which there is no corresponding LOTOS behaviour expression. We propose a condition that an object communication diagram must satisfy for a corresponding LOTOS behaviour expression to be guaranteed to exist, and we prove that object communication diagrams of arbitrary complexity which satisfy this condition do indeed have a corresponding LOTOS behaviour expression. We then give an algorithm for the construction of a LOTOS behaviour expression from an object communication diagram which satisfies the condition.


PDF Logo Title: Complex Objects: Aggregates
Author(s): Ana M. D. Moreira and Robert G. Clark
Date: May 1994
Number: CSM-123
Abstract:

Aggregation is a kind of abstraction which allows a more complex object, the aggregate, to be formed from simpler objects, the components. Although there is not yet a standard definition of aggregation, the two main cases are aggregates with hidden components and aggregates with shared components. The more interesting case is the former, in which the aggregate encapsulates its components, making them invisible to the other objects in the model. The ROOA (Rigorous Object-Oriented Analysis) method, developed to integrate LOTOS within object-oriented analysis, models both kinds of aggregation.


PDF Logo Title: The Formal Specification in LOTOS of a Basic Type Manager
Author(s): Richard O. Sinnott
Date: April 1994
Number: CSM-122
Abstract:

This paper offers one approach to developing a type management system in LOTOS. The limitations of the definitions of type and subtype given in the Reference Model for Open Distributed Processing (RM-ODP) are investigated. When these definitions are used to model a type management system in LOTOS, different levels of type equivalence are identified. An approach is given to modelling types in LOTOS whereby type relationships can be established that are not based on simple name equivalence alone. These are then used in the development of a prototype type management system.


PDF Logo Title: The Development of An Architectural Semantics for ODP
Author(s): Richard O. Sinnott
Date: March 1994
Number: CSM-121
Keywords: LOTOS, Z, Open Distributed Processing, architectural semantics
Abstract:

This paper provides an introduction to the role of Formal Description Techniques (FDTs) in the development of an architectural semantics for Open Distributed Processing (ODP). It gives a brief introduction to ODP in general and the Reference Model for ODP (RM-ODP). Following this, an outline of the reasons for the development of an architectural semantics, the problems associated with the development of an architectural semantics and issues surrounding the possible solutions to these problems are presented.


Title: Distributed Systems: Architecture-Driven Specification using Extended LOTOS
Author(s): Ashley McClenaghan
Date: April 1994
Number: CSM-120


Title: XDILL: An X-based Simulator Tool for DILL
Author(s): Ashley McClenaghan
Date: April 1994
Number: CSM-119


PDF Logo Title: The Implementer's Dilemma: A Mathematical Model of Compile Time Garbage Collection
Author(s): Simon B. Jones, Andrew S. Tyas
Date: February 1994
Number: CSM-118
Keywords: functional language implementation, garbage collection, memory management optimization
Abstract:

Optimization by compile time garbage collection is one possible weapon in the functional language implementer's armory for combatting the excessive memory allocation usually exhibited by functional programs. It is an interesting idea, but the practical question of whether it yields benefits in practice has still not been answered convincingly one way or the other.

In this short paper we present a mathematical model of the performance of straightforward versions of mark-scan and copying garbage collectors with programs optimized for explicit deallocation. A mark-scan heap manager has a free list, whereas a copying heap manager does not; herein lies the dilemma, since a copying garbage collector is usually considered to be faster than a mark-scan one, but it cannot take advantage of this important optimization. For tractability we consider only heaps with fixed-size cells.

The results reported show that the garbage collection scheme of choice depends quite strongly on the heap occupancy ratio: the proportion of the total heap occupied by accessible data structures averaged over the execution of the program. We do not know what typical heap occupancy ratios are, and so are unable to make specific recommendations, but the results may be of use in tailoring applications and heap management schemes, or in controlling schemes where the heap size varies dynamically. An important result arising from the work reported here is that when optimizing for explicit deallocation, a very large proportion of cell releases must be optimized before very much performance benefit is obtained.
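
One conventional way to set up such a model (a back-of-envelope version of ours; the paper's own model may differ in detail) expresses the collection cost per reclaimed cell in terms of the occupancy ratio r:

    # Back-of-envelope collector costs per reclaimed cell at occupancy ratio r:
    # a copying collector touches only live cells (cost ~ c*r*H per collection),
    # mark-scan marks live cells and sweeps the whole heap (~ (m*r + s)*H);
    # both reclaim (1 - r)*H cells.  Constants c, m, s are illustrative.
    def copy_cost(r, c=1.0):
        return c * r / (1.0 - r)

    def mark_scan_cost(r, m=1.0, s=0.5):
        return (m * r + s) / (1.0 - r)

    for r in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(r, round(copy_cost(r), 2), round(mark_scan_cost(r), 2))

Both costs grow without bound as r approaches 1, and which collector wins depends on the constants, which echoes the finding that the scheme of choice depends strongly on the occupancy ratio.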


PDF Logo Title: Optimism and Pessimism in Trust
Author(s): Stephen Marsh
Date: May 1994
Number: CSM-117
Abstract:

Artificial agents do not exist in the world in solitude. They are required to interact with others, and thus must reason in some fashion about those they interact with. This paper presents a view of trust which artificial agents can use to help in such reasoning processes, providing them with a means to judge possible future behaviour on the basis of past experience. The paper discusses the notion of `dispositions' of trusting behaviour, which we call Optimism, Pessimism and Realism. Each disposition results in different trust estimations from an agent, and we discuss the possible effects of these differences. Finally, we present the concept of memory in trusting agents, and briefly suggest some ways in which memory spans can affect the trusting decisions of agents with different dispositions.
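
One plausible (hypothetical) reading of the dispositions and of memory spans, for concreteness:

    # Hypothetical reading of the dispositions: estimates are taken over the
    # remembered window of past experiences, each a value in [-1, 1].
    from statistics import mean

    def trust_estimate(history, disposition, memory_span=None):
        remembered = history if memory_span is None else history[-memory_span:]
        if disposition == "optimist":
            return max(remembered)          # judge by the best remembered outcome
        if disposition == "pessimist":
            return min(remembered)          # judge by the worst remembered outcome
        return mean(remembered)             # realist: average experience

    history = [0.8, -0.4, 0.6, 0.9, -0.2]
    for d in ("optimist", "pessimist", "realist"):
        print(d, trust_estimate(history, d, memory_span=3))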


PDF Logo Title: A Screen Editor written in the Miranda Functional Programming Language
Author(s): Simon P. Booth and Simon B. Jones
Date: February 1994
Number: CSM-116
Keywords: functional programming, Miranda, screen editor, specification, Z
Abstract:

This paper explores the development of an interactive screen editor, in the functional programming language Miranda, from a specification written in Z. The program makes use of a set of interactive functions especially developed to aid the writing of interactive programs in Miranda. The editor developed is fully featured and could be used to edit text, although the execution speed is rather poor.


PDF Logo Title: SOLVe: Specification using an Object-Oriented, LOTOS-Based, Visual Language
Author(s): Ashley McClenaghan
Date: January 1994
Number: CSM-115
Keywords: requirements capture and prototyping, LOTOS generation, object-oriented, interactive-systems
Abstract:

SOLVe is a language and set of software tools developed by the SPLICE project. SOLVe is experimental; it is not a mature system. SOLVe is particularly suited to the formal specification and prototyping of interactive systems. A SOLVe specification consists of a number of inter-communicating objects. Objects control on-screen icons. Clicking or dragging object icons sends messages to the objects. A SOLVe specification may be interactively animated. This is achieved by translating the SOLVe specification into LOTOS, then executing the LOTOS using the hippo simulator. Events output by the LOTOS simulation drive icons in an X widget and, reciprocally, mouse manipulation of these icons drives the LOTOS specification. This document is a tutorial, user guide and SPLICE project report on SOLVe.


Title: Formal Object-Oriented Development of Software Systems using LOTOS (PhD Thesis)
Author(s): J. Paul Gibson
Date: September 1993
Number: CSM-114
Abstract:

Formal methods are necessary in achieving correct software: that is, software that can be proven to fulfil its requirements. Formal specifications are unambiguous and analysable. Building a formal model improves understanding. The modelling of nondeterminism, and its subsequent removal in formal steps, allows design and implementation decisions to be made when most suitable. Formal models are amenable to mathematical manipulation and reasoning, and facilitate rigorous testing procedures. However, formal methods are not widely used in software development. In most cases, this is because they are not suitably supported with development tools. Further, many software developers do not recognise the need for rigour.

Object oriented techniques are successful in the production of large, complex software systems. The methods are based on simple mathematical models of abstraction and classification. Further, the object-oriented approach offers a conceptual consistency across all stages of software development. However, the inherent flexibility of object-oriented approaches can lead to an incremental and interactive style of development, a consequence of which may be insufficient rigour. This lack of rigour is exacerbated by the inconsistent and informal semantics for object-oriented concepts at all stages of development.

Formal and object-oriented methods are complementary in software development: object-oriented methods can be used to manage the construction of formal models, and formality can add rigour to object-oriented software development. This thesis shows how formal object-oriented development can proceed from analysis and requirements capture to design and implementation.

A formal object-oriented analysis language is defined in terms of state transition system semantics. This language is said to be customer-oriented: a number of graphical views of object-oriented relations in the formal analysis models are presented, and the specifications produced say what is required rather than how the requirements are to be met. A translation to ACT ONE provides an executable model for customer validation. This translation is founded on a precise statement of the relationship between classes and types (and between subclassing and subtyping). The structure of the resulting ACT ONE requirements model corresponds to the structure of the problem domain communicated by the customer.

The step from analysis to design requires an extension of the requirements model to incorporate semantics for object communication. A process algebra provides a suitable formal model for the specification of communication properties. LOTOS, which combines ACT ONE and a process algebra in one coherent semantic model, provides a means of constructing object-oriented design semantics. Design is defined as the process of transforming a customer-oriented requirements model to an implementation-oriented design, whilst maintaining correctness. Correctness preserving transformations (CPTs) are defined for transferring requirements structure to design structure, manipulating design structure, and changing internal communication models.

Design must be targeted towards a particular implementation environment. The thesis examines a number of different environments for the implementation of object-oriented LOTOS designs. It illustrates the importance of understanding programming language semantics. We show how Eiffel can be used to implement formal object-oriented designs.

A case study evaluates the formal object-oriented models and methods developed in this thesis. This identifies re-use at all stages of software development and emphasises the role of structure: it improves understanding and communication, and makes validation and verification easier and better.

The thesis shows that formal object-oriented technology is ready for transfer to industry. These methods should be exploited sooner rather than later: object-oriented development can incorporate formal methods without significant cost, and formal methods can utilise the object-oriented paradigm to manage complexity. The thesis provides a rationale for formal object-oriented development and a set of conceptual tools which makes the development of software systems a true engineering discipline.


Title: Formal Object-Based Design in LOTOS
Author(s): J. Paul Gibson
Date: April 1993
Number: CSM-113
Abstract:

The advantages of object-based techniques in software engineering are well recognised and widely accepted. The inherent flexibility of object-based systems often leads to an interactive and incremental style of development, a consequence of which may be insufficient rigour in software production. As the size and complexity of a software project increases, it is important to adhere to as rigorous a method as possible.

This report shows how a formal method can be incorporated into an object-based development strategy. In this way it is possible to provide all the advantages of the object-based paradigm whilst ensuring that the quality of code is not compromised. The specification language LOTOS is shown to be appropriate for the generation and evolution of formal object-based designs. The structure of the object-based specification, as a design, provides a foundation upon which an implementation can be constructed. A simple network routing system illustrates the role of the design in the object-based development process, from the generation of the LOTOS specification to the coding of the C++ implementation.


Title: A LOTOS-Based Approach to Neural Network Specification
Author(s): J. Paul Gibson
Date: May 1993
Number: CSM-112
Abstract:

The objective of the SPLICE project (Specification using LOTOS for an Interactive Customer Environment) is to help bridge the gap that currently exists between customers, specifiers and designers. Central to the initial phases of the project is the identification of problem domains suitable for study, together with the prototyping of tools for the specification and animation of systems within the chosen problem areas. Neural network specification has been identified as one such area. This report examines how LOTOS can be used to specify the behaviour of one particular type of neural network: a trained perceptron. It also gives an overview of the customer-friendly environment, currently under development, which provides tools for the generation and testing of perceptron models defined as LOTOS specifications. The report concludes with recommendations for further research into neural network specification using LOTOS.
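
For comparison with the behaviour being specified, here is a minimal sketch of a trained perceptron in Haskell (the report itself uses LOTOS; the weights, bias and example inputs below are illustrative, not taken from the report):

    -- Minimal sketch (Haskell rather than LOTOS): a trained perceptron is a
    -- fixed weight vector and bias with a step activation.
    dot :: [Double] -> [Double] -> Double
    dot xs ys = sum (zipWith (*) xs ys)

    perceptron :: [Double] -> Double -> [Double] -> Bool
    perceptron weights bias inputs = dot weights inputs + bias > 0

    -- Example: a two-input perceptron that computes logical AND.
    andGate :: [Double] -> Bool
    andGate = perceptron [1.0, 1.0] (-1.5)

    main :: IO ()
    main = print (map andGate [[0,0], [0,1], [1,0], [1,1]])
    -- prints [False,False,False,True]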


PDF Logo Title: Using Rigorous Object Oriented Analysis
Author(s): Ana M. D. Moreira and Robert G. Clark
Date: August 1993
Number: CSM-111
Abstract:

The ROOA (Rigorous Object-Oriented Analysis) method introduces formality into the object-oriented analysis process by providing a set of rules which enables a formal object-oriented analysis model to be produced systematically from a set of requirements. This model is expressed in LOTOS and provides a precise and unambiguous specification of system requirements. As the specification obtained is executable, prototyping is used to support validation and refinement of the formal model.


Title: On Using Tense Logic to Describe Digital Computing Systems
Author(s): Peter B. Ladkin
Date: August 1993
Number: CSM-110
Abstract:

We consider various arguments for and against the use of tense logic for specification and verification of digital computing systems, illustrated by issues that arose in a discussion concerning Lamport's Temporal Logic of Actions.


PDF Logo Title: ROOA: Rigorous Object Oriented Analysis Method
Author(s): Ana M. D. Moreira and Robert G. Clark
Date: October 1993
Number: CSM-109
Abstract:

Object-oriented analysis (OOA) and design methods are used by the software engineering community, while formal description techniques (FDTs) are mainly used in a research environment. The Rigorous Object-Oriented Analysis (ROOA) method combines OOA methods with the ISO standard FDT LOTOS to produce a practical method which can be applied by software engineers.

ROOA takes a set of informal requirements and an object model and produces a formal object-oriented analysis model that acts as a requirements specification. The resulting formal model integrates the static, dynamic and functional properties of a system in contrast to existing OOA methods which are informal and produce three separate models that are difficult to integrate and keep consistent.

The ROOA method provides a systematic development process, by proposing a set of rules to be followed during the analysis phase. During the application of these rules, auxiliary structures are created to help in tracing the requirements through to the final formal model. An important part of the ROOA method is to give a formal interpretation in LOTOS of object-oriented analysis constructs.

As LOTOS specifications are executable, prototyping can be used to check the conformance of the specification against the original requirements and to detect inconsistencies, omissions and ambiguities early in the development process. LOTOS is a wide-spectrum specification language, so the requirements specification can act as the starting point for software development based on correctness-preserving transformations.


Title: Testing Properties of Systems
Author(s): Peter B. Ladkin
Date: May 1993
Number: CSM-108
Abstract:

We consider constructing tests from system specifications. Properties of systems are taken to be finite specifications of some form, and the observable behaviour of a system to be a set of traces. We require an interleaving semantics, to conform with TTCN, so traces are sequences, either of events or of states. We describe what it means for a property to be satisfied by a trace, or by a finite behaviour, and give a definition of test outcome. We describe various classes of testable properties, and define conformance requirement and conformance test suite. We illustrate the use of all these notions on the ubiquitous biscuit machine. Finally, we obtain some general procedures for deriving test suites from safety and liveness properties.
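
To illustrate one of these notions, here is a small sketch in Haskell (the notation is mine, not the report's): a safety property of finite traces can be tested by checking every prefix of an observed behaviour. The coin-before-biscuit property is only a guess at the flavour of the report's biscuit-machine example.

    import Data.List (inits)

    data Verdict = Pass | Fail deriving Show

    type Trace e = [e]

    -- A safety property must hold of every prefix of an observed behaviour:
    -- once violated, it cannot be re-established by later events.
    checkSafety :: (Trace e -> Bool) -> Trace e -> Verdict
    checkSafety ok trace = if all ok (inits trace) then Pass else Fail

    -- Hypothetical biscuit-machine property: no biscuit may be dispensed
    -- unless a coin has been inserted first.
    data Event = Coin | Biscuit deriving (Eq, Show)

    paidFirst :: Trace Event -> Bool
    paidFirst t = length (filter (== Biscuit) t) <= length (filter (== Coin) t)

    main :: IO ()
    main = do
      print (checkSafety paidFirst [Coin, Biscuit])  -- Pass
      print (checkSafety paidFirst [Biscuit, Coin])  -- Fail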


Title: Groupware design: Principles, prototypes, and systems (PhD Thesis)
Author(s): Andrew J. G. Cockburn
Date: March 1993
Number: CSM-107
Abstract:

Computers are valuable tools for a wide range of work tasks. A substantial limitation on their value, however, is the predominant focus on enhancing the work of individuals. This fails to account for the issues of collaboration that affect almost all work. Research into computer supported cooperative work (CSCW) aims to eliminate this deficiency, but the promise of computer systems for group work has not been met.

This thesis presents four design principles that promote the development of successful groupware. The principles identify the particular problems encountered by groupware, and provide guidelines and strategies to avoid, overcome, or minimise their impact. The principles are derived from several sources, but the major influence on their development is an investigation into the relationships among the factors affecting groupware failure. They are also stimulated by observations of groupware use, and by design insights arising from the development of two groupware applications and their prototypes: Mona and TELEFREEK.

Mona provides conversation-based email management. Several groupware applications offer similar functionality, but the design principles lead Mona to use different mechanisms to achieve its user support.

TELEFREEK provides a platform for accessing computer-supported communication and collaboration facilities. It attends to the problems of initiating interaction, and supports an adaptable and extendible set of "social awareness" assistants. TELEFREEK offers a broader range of facilities than other groupware, and avoids the use of prohibitively high-bandwidth communication networks. TELEFREEK demonstrates that much can be achieved through current and widely accessible technology.

Together, Mona and TELEFREEK forcefully demonstrate the use of the design principles and substantiate the claim of their utility.


Title: Trust in Distributed Artificial Intelligence - A Discussion
Author(s): Stephen P. Marsh
Date: April 1993
Number: CSM-106
Abstract:

A discussion of trust is presented which focuses on multi-agent systems from the point of view of a single agent in a system. The roles trust plays in various forms of interaction are considered, with the view that trust enables interactions between agents where none would otherwise have been possible. Trust allows parties to acknowledge that, whilst there is a risk in relationships with potentially malevolent agents, some form of interaction may produce benefits where no interaction at all cannot. In addition, accepting the risk allows the trusting agent to prepare itself for possibly irresponsible or untrustworthy behaviour, thus minimising the potential damage caused. A formalism is proposed to allow agents to reason with and about trust.

(This is a revision of a paper presented at the 4th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, Italy, 1992.)
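
As a purely illustrative sketch of what such a formalism might let an agent compute (the trust range, names and decision rule below are assumptions of mine, not taken from the report), in Haskell:

    -- Illustrative only: the report's actual formalism differs in its details.
    -- Trust is taken here as a value in [-1, 1].
    type Trust = Double

    -- Hypothetical decision rule: cooperate when trust in the other agent,
    -- weighted by the importance of the situation, outweighs the perceived
    -- risk of interacting.
    cooperate :: Trust -> Double -> Double -> Bool
    cooperate trust importance risk = trust * importance > risk

    main :: IO ()
    main = do
      print (cooperate 0.8 0.9 0.5)  -- True: high trust covers the risk
      print (cooperate 0.2 0.9 0.5)  -- False: low trust does not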


Title: A User's Guide to TeleFreek: A Social Awareness and Communications Platform
Author(s): Andrew J. G. Cockburn
Date: March 1993
Number: CSM-105


PDF Logo Title: Translating CBS to LML
Author(s): Simon B Jones
Date: April 1993
Number: CSM-104
Keywords: functional programming, CBS, broadcasting systems, process calculus
Abstract:

CBS is a process Calculus for describing Broadcasting Systems. This note describes a translation of CBS expressions and agent definitions into LML expressions and function definitions. The resulting functions execute under the control of a simple bus arbitrator to produce a trace of the communications that will result from one of the computations possible for the original CBS agents. Each CBS agent is translated to an LML function which can be used to simulate its behaviour, and CBS agent-forming operators are translated to higher order functions which combine the functional translations of the component agents.
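
The flavour of the translation can be suggested by a much-simplified sketch in Haskell (standing in for LML; the representation is illustrative, not the report's actual translation): an agent becomes a value that either offers a broadcast or reacts to one, and a simple arbitrator plays the role of the bus, recording the trace.

    -- Much-simplified sketch: an agent may offer to broadcast a value, and
    -- reacts to every value broadcast by another agent.
    data Agent msg = Agent
      { speak :: Maybe (msg, Agent msg)  -- an offer to broadcast, with continuation
      , hear  :: msg -> Agent msg        -- reaction to another agent's broadcast
      }

    -- The "bus arbitrator": let the first willing agent broadcast, deliver
    -- the value to all the others, and repeat for a bounded number of steps.
    run :: Int -> [Agent msg] -> [msg]
    run 0 _      = []
    run n agents = go [] agents
      where
        go _ [] = []  -- nobody has anything to say
        go before (a : after) = case speak a of
          Nothing      -> go (before ++ [a]) after
          Just (m, a') ->
            m : run (n - 1) (map (\b -> hear b m) (before ++ after) ++ [a'])

    -- Example: a counter broadcasts 1, 2, 3, ...; an echoing agent stays
    -- silent until it hears n, then offers to broadcast n * 10.
    counter :: Int -> Agent Int
    counter k = Agent (Just (k, counter (k + 1))) (\_ -> counter k)

    echoer :: Agent Int
    echoer = listen
      where
        listen    = Agent Nothing (\m -> pending (m * 10))
        pending v = Agent (Just (v, listen)) (\m -> pending (m * 10))

    main :: IO ()
    main = print (run 6 [counter 1, echoer])  -- [1,10,2,20,3,30]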


Title: Proving Systems Correct
Author(s): Peter B. Ladkin
Date: March 1993
Number: CSM-103


Title: On Binary Constraint Problems
Author(s): Peter B. Ladkin and Roger D. Maddux
Date: February 1993
Number: CSM-102


Title: Interpreting Message Sequence Charts
Author(s): Peter B. Ladkin and Stefan Leue
Date: March 1993
Number: CSM-101


PDF Logo Title: Automatic Generation of Theories
Author(s): Alan G. Hamilton
Date: January 1993
Number: CSM-100
Keywords: Constructive type theory, data structures, mechanized proof
Abstract:

Constructive type theory (CTT) has potential as a framework for program synthesis. But the basic theory provides only a minimal collection of mathematical structures which can be used to represent data. Because of the uniform structure and the extensibility of CTT, it is proposed that new structures can best be introduced directly, via additional rules, rather than by representation in terms of basic structures. For arbitrary freely generated structures, the necessary rules can be constructed, starting from a simple form of specification. Further, it is shown how certain definitions can be made, and how theorems can be generated and proved by uniform methods. In particular, closure, uniqueness, cancellation and equality-decidability theorems are dealt with. All of the methods described have been fully implemented, using PICT, a proof assistant for CTT which has been developed by the author. Additional rules are generated in PICT's format, so that PICT can use them without modification, and new theorems are proved by PICT automatically. Examples are given.
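
A rough analogue of the idea, in Lean 4 rather than the author's PICT proof assistant (the Tree type and theorem names are illustrative): declaring a freely generated structure brings its generator rules with it, and theorems of the kinds mentioned above can be stated and proved by uniform structural methods.

    -- Sketch in Lean 4, not PICT: a freely generated structure.
    inductive Tree where
      | leaf : Tree
      | node : Tree → Tree → Tree
    deriving DecidableEq  -- equality-decidability, by structural recursion

    -- Cancellation: the generator `node` is injective.
    theorem node_cancel {a b c d : Tree}
        (h : Tree.node a b = Tree.node c d) : a = c ∧ b = d := by
      injection h with h1 h2
      exact ⟨h1, h2⟩

    -- Uniqueness (no confusion): distinct generators build distinct values.
    theorem leaf_ne_node (a b : Tree) : Tree.leaf ≠ Tree.node a b :=
      fun h => Tree.noConfusion h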



Last Update: 12th January 2020
URL: https://www.cs.stir.ac.uk/~kjt/techreps/previous.html