BROADENING REAL-TIME SYSTEMS RESEARCH
Alan Burns

Although real-time computing has emerged as a distinct discipline, it cannot, and should not, be viewed in isolation. It overlaps significantly with other computing topic areas such as communications, operating systems, formal methods and languages. Its strategic research directions must therefore be integrated into a wider program of activity. Such a program must, however, be intrinsically linked to the explosion of IT-related activities in the wider economy and society.

It is, of course, difficult to predict long-term future trends in IT. But it is clear that the next decade will see an intensification of global networking, an enormous increase in IT functionality in the home and office, and what can only be called a revolution in the entertainment business (with multi-media, virtual reality facilities, etc.).

What role does real-time have in this future? At one level we can see that some of these new technologies, such as multi-media, do have scheduling problems to address (and will therefore keep many researchers busy for years), but this view is a little parochial. To succeed, the ubiquitous open systems of the future will have to master inevitable complexity. The use of temporal ordering and other key properties of `time' to coordinate activities is one way in which the potential chaos can be controlled and thereby harnessed. Many of the topics that are currently addressed only within the real-time community will need to be exported. We are used to viewing `time' as the central issue of concern: requirements capture, design and implementation must all be undertaken from a temporal perspective. This gives us a level of control and planning (and adaptability) that is precisely what will be needed in many systems of the future. The key is to define interfaces, architectures and protocols that are cognizant of real-time and other non-functional requirements.
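As a purely illustrative sketch of what such a time-cognizant interface might look like (every name and figure below is hypothetical, and Python is used only for brevity), the temporal requirements can be exposed as a first-class, checkable part of the contract:

    # A time-cognizant service interface: the temporal requirements are
    # part of the contract, so an admission test can reason about them
    # before the system runs. All names and figures are hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class TimingContract:
        period_ms: int       # minimum inter-arrival time of requests
        deadline_ms: int     # relative deadline for each response
        budget_ms: int       # execution-time budget the provider honours

    class SensorService:
        """A service whose interface exposes its timing contract."""
        contract = TimingContract(period_ms=50, deadline_ms=50, budget_ms=5)

        def read(self) -> float:
            # Must complete within contract.deadline_ms of invocation.
            return 0.0

    def admissible(contracts) -> bool:
        # Crude admission test: the total execution-time demand must
        # not exceed the processor's capacity (utilisation <= 1).
        return sum(c.budget_ms / c.period_ms for c in contracts) <= 1.0

The point is not the particular fields but that timing obligations become analysable parts of the interface, available to an admission test, rather than remarks in a design document.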

Just as all areas of computing will grow, the traditional real-time application domains will expand. Two technology drivers will have a strong impact in this area and must therefore be addressed in our research endeavours. Firstly, the underlying hardware resources will continue to evolve along a path that makes worst-case predictions of behaviour (especially over short time intervals) very difficult to obtain without unacceptable levels of pessimism. Secondly, the integration of real-time with dependability will make architecting much more challenging. Within many areas of high-integrity computing (such as avionics) there is a push to support multiple levels of integrity, software and hardware fault tolerance, reconfiguration, graceful degradation and value-added computation. As these systems are inevitably real-time, many of the difficulties in providing the right levels of protection and support come from the problems of managing time (scheduling of resources, etc.).
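To make the first driver concrete, note that worst-case execution times feed directly into schedulability analysis. In the standard fixed-priority response-time recurrence (blocking ignored for brevity),

    R_i^{(n+1)} = C_i + \sum_{j \in hp(i)} \left\lceil \frac{R_i^{(n)}}{T_j} \right\rceil C_j

where hp(i) is the set of tasks of higher priority than task i, T_j their periods and C_j their worst-case execution times, the iteration starts from R_i^{(0)} = C_i and stops at a fixed point. Any pessimism in the C_j, introduced for example by caches and pipelines defeating tight worst-case analysis, inflates every computed response time R_i and may cause a perfectly viable task set to be rejected.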

These technology drivers would seem to lead us towards the following research challenges:

1) Broadening the notion of guarantee and prediction to include probabilistic assessments (a minimal sketch of such an assessment follows this list).

2) Deriving object-oriented abstractions (as these seem to give considerable help in structuring systems) that adequately address notions of concurrency, robustness and real-time.

3) Experimenting with generic architectures that adequately support distribution, adaptation, prediction and fault tolerance.
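As a minimal sketch of the first challenge (the figures are hypothetical, and the jobs are assumed independent, an assumption real analyses must justify), a job's execution time can be modelled as a discrete distribution and a deadline-miss probability derived by convolution:

    # A minimal sketch of a probabilistic guarantee: execution time is a
    # discrete distribution rather than a single worst-case value, and
    # the analysis reports a deadline-miss probability.

    def convolve(d1, d2):
        """Distribution of the sum of two independent execution times
        (each maps execution time in ticks -> probability)."""
        out = {}
        for t1, p1 in d1.items():
            for t2, p2 in d2.items():
                out[t1 + t2] = out.get(t1 + t2, 0.0) + p1 * p2
        return out

    # Hypothetical per-job distribution: the rare 5-tick outlier models,
    # say, a cache miss on a path that is normally resident.
    job = {2: 0.70, 3: 0.25, 5: 0.05}

    # Processor demand of three consecutive jobs.
    demand = job
    for _ in range(2):
        demand = convolve(demand, job)

    deadline = 12
    miss = sum(p for t, p in demand.items() if t > deadline)
    print(f"P(demand > {deadline} ticks) = {miss:.6f}")   # 0.002000

Where the worst-case figure (15 ticks for three jobs) would force outright rejection, the probabilistic view quantifies the risk of exceeding a 12-tick deadline as 0.002, a number that can then be weighed against the integrity level required.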

These topics alone necessitate a concerted (international) effort over a number of years. They also require the dependability and real-time communities to work more closely together. Success will, however, be most likely if `real-time' continues to blossom as a distinct discipline with its own methods, body of knowledge, information infrastructure and research agenda.