Edited by:

Rance Cleaveland
Department of Computer Science
North Carolina State University
Raleigh, NC 27695-8206
rance@csc.ncsu.edu

Scott A. Smolka
Department of Computer Science
State University of New York at Stony Brook
Stony Brook, NY 11794-4400
sas@cs.sunysb.edu
Rajeev Alur (U. California Berkeley)
Jos Baeten (Technical U. Eindhoven)
Jan Bergstra (U. Amsterdam)
Eike Best (U. Hildesheim)
Rance Cleaveland (N.C. State Univ.)
Rocco De Nicola (U. Firenze)
Helen Gill (Mitre Corp.)
Roberto Gorrieri (U. Bologna)
Mohamed Gouda (U. Texas Austin)
Jan Friso Groote (CWI)
Tom Henzinger (U. California Berkeley)
C.A.R. Hoare (Oxford Univ.)
Maj. David Luginbuhl (AFOSR)
Albert Meyer (MIT)
Dale Miller (U. Pennsylvania)
Jay Misra (U. Texas Austin)
Faron Moller (Uppsala Univ.)
Ugo Montanari (U. Pisa)
Amir Pnueli (Weizmann Univ.)
Sanjiva Prasad (IIT Delhi)
Vaughan Pratt (Stanford Univ.)
Joseph Sifakis (VERIMAG Grenoble)
Scott A. Smolka - Chair (SUNY Stony Brook)
Bernhard Steffen (U. Passau)
Bent Thomsen (ICL)
Frits Vaandrager (U. Nijmegen)
Moshe Vardi (Rice Univ.)
Pierre Wolper (U. Liège)
Concurrency is concerned with the fundamental aspects of systems of multiple, simultaneously active computing agents that interact with one another. This notion is intended to cover a wide range of system architectures, from tightly coupled, mostly synchronous parallel systems, to loosely coupled, largely asynchronous distributed systems. We refer to systems exhibiting concurrency as concurrent systems, and we call computer programs written for concurrent systems concurrent programs.
Concurrency has emerged as a separate area of research in response to the intrinsic difficulties experienced by designers of concurrent systems in diverse fields within computer science. These difficulties stem in general from two main sources. The first is that the presence of multiple threads of control can lead to subtle and often unanticipated interactions between agents--in particular, issues such as interference, race conditions, deadlock, and livelock are peculiar to concurrent programming. The second arises from the fact that many concurrent systems, such as operating systems and distributed databases, are reactive, meaning that they are designed to engage in an ongoing series of interactions with their environments. Unlike traditional sequential programs, reactive systems should not terminate, and consequently traditional notions of correctness that rely on relating inputs to expected outputs upon termination no longer apply. Developing mathematical frameworks that accurately and tractably account for these phenomena has been a major goal of researchers in concurrency.
This report focuses on research into models and logics for concurrency and their application in specifying, verifying, and implementing concurrent systems. This general area has become known as concurrency theory, and its roots may be traced back to the 1960s [Dij68, Pet62]. Our aim is to survey the rich collection of mature theories and models for concurrency that exist; to review the powerful specification, design, and verification methods and tools that have been developed; and to highlight ongoing active areas of research and suggest fruitful directions for future investigation.
By focusing on concurrency theory and its use in verification, we necessarily omit consideration of other concurrency-related topics such as concurrency control in database systems, concurrent program debugging, operating systems, distributed system architecture, and real-time systems. The interested reader is referred to other working group reports, which address many of these topics.
The remainder of this report develops along the following lines. The next section discusses the current state of concurrency research; the section following then presents strategic directions for future investigation. Finally, some concluding remarks are given.
This section gives an overview of the current state of research in concurrency. It reviews work on mathematical models of concurrency, discusses results obtained in system specification and verification, and surveys the practical impact of concurrency theory.
As the introduction suggests, concurrent systems differ in essential ways from sequential ones. In order to characterize the fundamental aspects of these systems, researchers have developed a variety of mathematical theories of concurrency over the past two decades. It should first be noted, however, that there is a basic agreement between the different schools of thought: unlike the semantic theories developed for sequential computation, models of concurrent systems cannot be viewed as mappings from inputs to outputs, owing to the reactive nature of such systems as well as their capacity for nondeterminism. Instead, it is necessary to take into account the fact that systems interact with other systems and hence may exhibit patterns of stimuli/response relationships that vary over time. Consequently, rather than adopting the notion of input/output mapping as primitive, theories of concurrency are generally based on the assumption that certain aspects of system behavior are atomic. Systems then execute by engaging in patterns of these ``indivisible'' events.
The point of departure for different theories of concurrency lies in what constitute ``patterns'' of events. More specifically, models that have been developed may be classified on the basis of the stances they adopt with respect to three basic dichotomies.
Intensional models focus on describing what systems do, while extensional models are based on what an outside observer sees. Consequently, intensional theories are sometimes referred to as operational, while extensional ones are called denotational, the denotation of a process being the set of observations it can engender.
Intensional theories model systems in terms of states and transitions between states. In labeled transition systems [Kel76] transitions are decorated with atomic actions representing interactions with the environment; these have been studied extensively in the context of process algebras such as ACP [BW90], CCS [Mil89] and CSP [Hoa85]. I/O automata [LT89] also follow this approach, but they additionally allow distinctions to be made between different kinds of actions (inputs and outputs). Petri nets [Rei85] extend the state/transition paradigm by allowing states to be ``distributed'' among different locations. UNITY [CM88] adopts an imperative style, with transitions corresponding to the (atomic) execution of conditional assignment statements.
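To make the state/transition view concrete, a labeled transition system can be rendered directly as a data structure. The following minimal Python sketch is purely illustrative (the class and its method names are ours, not drawn from any of the cited formalisms); it models a one-place buffer whose interactions with its environment are the atomic actions "in" and "out".

```python
from collections import defaultdict

class LTS:
    """A labeled transition system: an initial state plus a transition
    relation mapping (state, action) pairs to sets of successor states."""
    def __init__(self, initial):
        self.initial = initial
        self.trans = defaultdict(set)

    def add(self, src, action, dst):
        self.trans[(src, action)].add(dst)

    def successors(self, state, action):
        return self.trans.get((state, action), set())

    def enabled(self, state):
        # Actions the system can currently offer to its environment.
        return {a for (s, a) in self.trans if s == state}

# One-place buffer: empty --in--> full --out--> empty
buf = LTS("empty")
buf.add("empty", "in", "full")
buf.add("full", "out", "empty")
```

Because the transition relation maps to *sets* of successors, the same structure accommodates nondeterminism: adding a second "in"-successor from "empty" would model a system whose response to the same stimulus may vary.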
In contrast, extensional models first define a notion of observation and then represent systems in terms of the observations that may be made of them. One of the most basic observations about a system is a trace, or sequence of atomic actions executed by a system [Hoa85]. Elaborations of this model include acceptance trees [Hen88] and failure sets [BHR84], both of which decorate basic trace information with information about what stimuli systems may respond to after performing a sequence. Synchronization trees [Mil80] encode system behavior as a tree with edges labeled by actions. Mazurkiewicz traces [Maz87] include an independence relation between actions to capture possible concurrency. Kahn nets [Kah74] define the behavior of dataflow systems, whose components are given by I/O equations, using fixed-point theory. Other models, such as pomsets [Pra86], encode concurrency information using partial orders on atomic events possibly enriched with a conflict relation (event structures [Win89]) to encode information about choices made during execution.
Interleaving models ``reduce'' concurrency to nondeterminism by treating the parallel execution of actions as the choice between their sequentializations. This view may be termed a ``uniprocessor'' approach to concurrency. To avoid anomalies having to do with process starvation, fairness constraints are sometimes imposed to ensure that individual processes ``make progress'' if they are capable of doing so [Fra86]. Theories such as ACP, CCS and CSP employ interleaving, as do I/O-automata and UNITY; both of the latter also include mechanisms for enforcing fairness constraints. The trace-based and synchronization-tree extensional models also cater for modeling concurrency via interleaving.
In contrast, truly concurrent models instead treat concurrency as a primitive notion; the behavior of systems is represented in terms of the causal relations among the events performed at different ``locations'' in a system. Petri nets represent concurrency in this manner, as do Kahn nets, Mazurkiewicz traces, pomsets and event structures.
The difference between branching-time and linear-time models lies in their treatment of the choices that systems face during their execution. Linear-time models describe concurrent systems in terms of the sets of their possible (partial) runs, while branching-time models also record the points at which different computations diverge from one another. Thus traces and pomsets are linear-time models, while synchronization trees and event structures are branching-time theories.
Different decisions about the three dichotomies are appropriate for different purposes. Extensional models provide a useful basis for explaining system behavior, while intensional ones are often more amenable to automated analysis, as they typically give rise to (finite-)state machines. An interleaving semantics is useful for specifying systems, whereas a truly concurrent semantics might be the basis for describing possible implementations, where for example performance issues are of interest. A branching-time semantics is useful for modeling future behavior of a system, while linear-time suffices for describing execution histories.
While traditional theories have concerned themselves with modeling choice and concurrency, more recent work has focused on elaborating them with other aspects of system behavior, including real-time, probability and security. This point is addressed in more detail in Section 3.1.
In dealing with concurrent programs care must be taken when defining the notion of ``correctness''. Traditional (deterministic) sequential programs may be viewed as (partial) functions from inputs to outputs; in such a setting specifications may be given as a pair consisting of a precondition describing the ``allowed'' inputs and a postcondition describing the desired results for these inputs. This notion of specification remains appropriate for a concurrent program representing a ``parallelized'' version of some sequential program; in this case, concurrency (parallelism) has been introduced solely for performance purposes. For reactive, nonterminating and potentially nondeterministic concurrent systems, however, this approach is too limited. This section presents alternative notions that have been developed and discusses related approaches for reasoning about systems.
Two general schools of thought have emerged regarding appropriate mechanisms for specifying concurrent systems. One emphasizes the use of logical formulas to encode properties of interest; another uses ``high-level'' systems as specifications for lower-level ones. What follows elaborates on each of these.
In a seminal paper [Lam77], Lamport argued that the requirements that designers wish to impose on reactive systems fall into two categories. Safety properties state that ``something bad never happens''; a system meeting such a property must not engage in the proscribed activity. Liveness properties, on the other hand, state that ``something good eventually happens''; to satisfy such a property, a system must engage in some desired activity. Although informal, this classification has proved enormously useful, and formalizing it has been a main motivation for much of the work done on specification and verification of concurrent systems. Some of this work has aimed at semantic characterizations of these properties [AS85]. Others have developed logics that allow the precise formulation of safety and liveness properties. The most widely studied are temporal logics, which were first introduced into the computer science community by Pnueli [Pnu77] and support the formulation of properties of system behavior over time. The remainder of this section reviews some of the research into temporal logic.
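One operational consequence of Lamport's classification is worth spelling out: a safety property can be refuted by a finite execution prefix (a single occurrence of the "bad thing"), whereas a liveness property cannot, since the "good thing" may still occur later. The hypothetical sketch below illustrates this for a mutual-exclusion safety property; the trace format and function names are ours.

```python
# Safety in miniature: scan a finite trace of (process, event) pairs and
# report whether the "bad thing" -- two processes simultaneously in their
# critical sections -- ever happens. A result of False certifies only
# this prefix; by contrast, no finite prefix can refute liveness.

def violates_mutex(trace):
    in_cs = set()
    for proc, event in trace:
        if event == "enter":
            in_cs.add(proc)
            if len(in_cs) > 1:
                return True   # the proscribed activity occurred
        elif event == "exit":
            in_cs.discard(proc)
    return False              # safe so far, for this prefix

ok_trace  = [("P", "enter"), ("P", "exit"), ("Q", "enter"), ("Q", "exit")]
bad_trace = [("P", "enter"), ("Q", "enter")]
```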
The dichotomies presented earlier in modeling concurrency also reveal themselves in the development of temporal logics. In particular, two schools of temporal logic have emerged [EH86].
The other two dichotomies--intensional vs. extensional, and interleaving vs. true concurrency--remain relatively unexplored. Traditional temporal logics have generally adopted an extensional view of system behavior and an interleaving model of concurrency, although recent work has explored logics for true concurrency [Thi94].
Finally, other logics have also been developed for reasoning about concurrent systems, including various dynamic logics and logics of knowledge. The former permit the inclusion of programs inside formulas [Pel87], while the latter allow users to express the understanding that individual agents have of other agents' states at a given point in time [HM90, HZ92].
Another popular approach to specifying concurrent systems involves the use of behavioral equivalences and preorders to relate specifications and implementations. In this framework, which was introduced by Milner [Mil80] and has been extensively explored by researchers in process algebra [BW90, Hoa85, Mil89], specifications and implementations are given in the same notation; the former describes the desired high-level behavior, while the latter provides lower-level details indicating how this behavior is to be achieved. In equivalence-based methodologies, proving an implementation correct amounts to establishing that it behaves ``the same as'' its specification; in those based on preorders, one instead shows that the implementation provides ``at least'' the behavior dictated by the specification.
In support of this approach, a number of different equivalences and preorders have been proposed based on what aspects of system behavior should be observable. Relations may be classified on the basis of the degree to which they abstract away from internal details (viz. the intensional/extensional dichotomy) of system descriptions; the amount of sensitivity they display to choices systems make during their execution (viz. the linear/branching time dichotomy); and the stance they adopt with respect to interleaving vs. true concurrency. For example, bisimulation equivalence [Mil80] is a branching-time, interleaving-based, intensional equivalence, while observational equivalence [Mil80] is a branching-time, interleaving-based extensional equivalence. Other noteworthy relations include the failures/testing relations (linear-time, interleaving, extensional) [BHR84, DNH84] and pomset equivalence (linear-time, truly concurrent, extensional) [Pra86]. Interested readers are referred to [Gla90] for a detailed study of the relationship among different equivalences.
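The linear-time/branching-time dimension of this classification is often illustrated by a classic pair of processes, a.(b+c) and a.b + a.c: they have exactly the same traces, yet no branching-time relation such as bisimulation equates them, because the second process commits to b or c already at its initial a-step. The following sketch (state names and encoding are ours, purely illustrative) computes the maximal traces of both and confirms they coincide.

```python
# Compute all maximal finite traces of an acyclic LTS given as a set of
# (src, action, dst) triples. Trace-equivalent processes may still be
# distinguished by branching-time equivalences such as bisimulation.

def traces(trans, state, prefix=()):
    outgoing = [(a, dst) for (src, a, dst) in trans if src == state]
    if not outgoing:
        return {prefix}
    result = set()
    for a, dst in outgoing:
        result |= traces(trans, dst, prefix + (a,))
    return result

# p = a.(b + c): the choice between b and c is made after the a-step.
p = {("p0", "a", "p1"), ("p1", "b", "p2"), ("p1", "c", "p3")}
# q = a.b + a.c: the choice is resolved at the initial a-step.
q = {("q0", "a", "q1"), ("q0", "a", "q2"),
     ("q1", "b", "q3"), ("q2", "c", "q4")}
```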
These relations may also be used to bridge the gap between intensional and extensional models of concurrency described above. That is, in order to define a relation over intensional models, one first selects the extensional, or ``observable'', information that processes may exhibit and then uses this as the basis for relating models.
Connections between the two approaches have also been explored. In particular, a temporal logic induces an equivalence on systems as follows: two systems are equivalent if and only if they satisfy the same formulas. Using this framework, relationships between different linear- and branching-time logics and equivalences have been established [HM85, BCG88].
Verification refers to the process of establishing that a system satisfies a specification given for it. The previous subsections described different approaches to specifying concurrent systems; this one gives an overview of techniques of establishing that systems meet such specifications.
One verification methodology involves using axioms and inference rules to prove that systems satisfy specifications. Researchers have investigated proof systems for several different specification formalisms, including variants of temporal logic and a variety of behavioral relations, and we briefly review these here.
Early proof systems aimed at generalizing frameworks for sequential programs to cope with concurrency [OG76]. In particular, specifications were given as pairs of conditions circumscribing the allowed inputs and desired outputs, and consequently reactive systems were not catered for. However, it was in the context of this work that interference emerged as a crucial complicating factor in verifying concurrent programs. To cope with nonterminating systems, other researchers focused their attention on developing proof methodologies for establishing that systems satisfy specifications given as invariants [Lam80].
Traditional temporal-logic-based proof systems rely on proving implications between temporal formulas [Pnu77]. To prove a system correct, one first translates it into an ``equivalent'' temporal formula and proves that this formula entails the specification. The virtue of this approach is that one can use (existing) axiomatizations of temporal logic to conduct proofs; the disadvantage is the translation requirement, which usually obscures the structure of the system. More recent proof systems such as UNITY [CM88] adopt proof rules for specific system notations and temporal logics; other work has been devoted to the development of domain-specific rules for establishing certain classes of properties [MP91, MP95].
Behavioral-relation-based proof systems have typically been algebraic (hence the term process algebra) [Mil89, BW90]; in order to prove that two systems are equivalent, one uses equational reasoning. Sound and complete proof systems have been devised for a number of different equivalences, preorders, and description languages.
Finite-state systems turn out to be amenable to automatic verification, since their observable behavior can be finitely represented. These systems arise in practice in areas such as hardware design and communication protocols; this fact has spurred interest in the development of verification algorithms for temporal logic and relation-based specifications.
The task of determining automatically whether a system satisfies a temporal formula is usually referred to as model checking. One may identify two basic approaches to model checking. The first, which grew out of the branching-time logic community, relies on an analysis of the structure of the formula to determine which system states satisfy the formula [CES86, CS93]. The second, which arose from the linear-time community, is automaton-based [VW86]; one constructs an automaton from the negation of the formula in question and then determines whether or not the ``product'' of this automaton and the automaton representing the system is empty (if so, the system is correct). The two approaches turn out to be related, and indeed automaton-based approaches have been developed for branching-time logics and ``structure-based'' ones have been devised for linear-time logics [BVW94, BCG95]. The time complexities of the best algorithms in each case are proportional to the product of the number of system states and the size of the formula (branching-time) or an exponential of the size of the formula (linear-time).
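The "structure-based" flavor of model checking can be illustrated on a single operator. The set of states satisfying EF p ("a state satisfying p is reachable") is the least fixed point of Z = Sat(p) ∪ pre(Z), computed by iterating backwards from the p-states until no new states are added. The sketch below is a deliberately naive illustration under assumed names, not an account of any cited algorithm.

```python
# Fixpoint model checking in miniature: compute the states of a finite
# transition system satisfying EF p, as the least fixed point of
#   Z = p_states ∪ pre(Z),
# where pre(Z) is the set of states with some successor in Z.

def check_EF(trans, p_states):
    """trans: set of (src, dst) edges; p_states: states where p holds."""
    sat = set(p_states)
    while True:
        new = {s for (s, t) in trans if t in sat} | sat
        if new == sat:
            return sat        # fixed point reached
        sat = new

# A four-state system in which p holds only at state 2;
# state 3 loops forever without ever reaching state 2.
trans = {(0, 1), (1, 2), (2, 2), (3, 3)}
result = check_EF(trans, {2})
```

Each iteration adds at least one state or terminates, so the loop runs at most a number of times linear in the state count; the real algorithms cited above refine this basic scheme considerably.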
Algorithms have also been devised for determining whether systems are related by semantic relations. Traditional approaches for calculating equivalences combine partition-based algorithms for bisimulation equivalence [PT87, KS90] with automaton transformations that modify the automata corresponding to the systems being checked [CPS93]. Methods for computing preorders follow a similar scheme, with the ``base'' relation being a variant of the simulation preorder.
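The partition-based idea can be sketched compactly: split blocks of states until all states in a block have, for each action, successors in the same blocks; the stable partition is bisimulation equivalence. The code below is the naive refinement scheme for illustration only (names are ours), not the efficient Paige-Tarjan-style algorithms underlying [PT87, KS90].

```python
# Naive partition refinement: repeatedly split blocks by "signature"
# (which blocks a state can move into, per action) until stable.
# The resulting blocks are the classes of the largest bisimulation.

def bisim_partition(states, trans):
    """trans: set of (src, action, dst) triples.
    Returns a frozenset of frozensets (the equivalence classes)."""
    def signature(s, blocks):
        return frozenset((a, b) for (src, a, dst) in trans if src == s
                                for b in blocks if dst in b)

    blocks = {frozenset(states)}
    while True:
        new_blocks = set()
        for block in blocks:
            groups = {}
            for s in block:
                groups.setdefault(signature(s, blocks), set()).add(s)
            new_blocks |= {frozenset(g) for g in groups.values()}
        if new_blocks == blocks:
            return frozenset(blocks)
        blocks = new_blocks

# States 0 and 1 both perform "a" and stop; states 2 and 3 are inert,
# so the algorithm should identify 0 with 1 and 2 with 3.
parts = bisim_partition({0, 1, 2, 3}, {(0, "a", 2), (1, "a", 2)})
```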
The key impediment to the practical application of these algorithms is the state-explosion problem. In general, the number of states a system has will be exponential in the number of concurrent processes; thus, as the number of processes grows, enumerating all possible system states rapidly becomes infeasible. Much of the recent work on model and relation checking has been devoted to techniques for ameliorating the effects of state explosion. Some of the techniques that have been developed include:
Finally, researchers have also developed techniques for handling certain kinds of infinite-state systems, including context-free processes [BBK93] and those based on dense real-time [AD94].
The past decade has also witnessed the development of tools that use concurrency theory as a basis for automating the design and analysis of concurrent systems. What follows is essentially a categorization of tools based on the degree of interaction demanded from the user during the verification process. The interested reader is also referred to the report of the formal methods working group for additional tool information.
The remainder of Section 2 reviews some of the applications in the area of programming languages and system verification to which concurrency theory has been put.
Traditionally, the inclusion of concurrency into programming languages has been done in a somewhat ad hoc manner; the general paradigm has been to augment sequential languages such as C with facilities for using operating-system-supplied primitives such as process creation and semaphores. This meant that programs written in these languages were not portable across different operating systems; in addition, the low-level nature of system calls led to programs that were difficult to maintain.
Programming language researchers have over the past decade begun to investigate the design of programming constructs based on models of concurrency described above as a means of remedying this problem. Examples include OCCAM [Inm88], which arose out of the work done on CSP [Hoa85]; synchronous programming languages such as ESTEREL [BG92] and LUSTRE [Hal93], which evolved from dataflow models [Kah74, KM77]; and PICT [PT95a, PT95b], CML [Rep92], and Facile [TLK96], which are functional concurrent programming languages based on process algebras (CCS [Mil89], pi-calculus [MPW92]), and their higher-order extensions (Higher Order pi-calculus [San92, San93] and CHOCS [Tho89, Tho93, Tho95]). Other languages, such as Linda [CG89], have been given rigorous semantics using techniques borrowed from concurrency theory [DNP94, JW94]. See Section 3.4 for a discussion of the use of concurrency models in programming language design as a strategic research direction.
The significant case studies in which concurrency theory has been successfully brought to bear are too numerous to recount completely. What follows lists some taken from a forthcoming special issue of the journal Science of Computer Programming devoted to industrial applications; the interested reader is also referred to the formal methods report for others.
We present a number of strategic directions for future concurrency research, each of which is accompanied by a ``grand challenge''. The challenges are intended to serve as a yardstick for progress made along the various research directions.
In sequential programming, researchers studying semantic issues have focused most of their attention on program correctness, which can be modeled as a function relating inputs to outputs. A correct sequential program should also terminate for all inputs. See [AA78, dB80, Apt81, Apt84, Jon92] for comprehensive coverage of proof techniques for sequential programs. Program efficiency (space and time) has also received tremendous attention (see the ACM SDCR Theory of Computation working group report).
For concurrent systems, particularly those of the reactive variety, many non-functional requirements are of interest to system builders. These include the following:
Note that these different types of requirements are sometimes contradictory. Reliability, for instance, requires the use of additional resources, which may degrade system performance and thus endanger timeliness (see [KS92, KMS94] for a treatment of the reliability vs. efficiency tradeoff in parallel programs).
Recognizing their importance, concurrency researchers have been turning their attention more and more to the non-functional requirements of concurrent programs. For example, substantial progress has been made on models and logics for real-time [Zub80, RR88, NRSV90, BB91, ZHR91, HMP91, MMP92, AD94], probability [Han94, LS92, SL94, GSS95, BG96], mobility [MPW92, Tho93, San92, MPW93, San93, Hen93, Tho95, ACS96] and hybrid systems [ACHH93, MP93, Hoo93, NOSY93, Lam93, ACH+95, HH95, Ver95, LSVW96].
Solidify our understanding of phenomena such as real-time and mobility and at the same time develop new formalisms for those phenomena that remain largely unexplored (e.g., security, although see [FG95, Ros95a]). Achieving these goals will likely necessitate interaction with researchers from other computer science disciplines (security experts, system engineers, etc.), and from non-computer-science disciplines such as electrical and mechanical engineering and control theory.
We should also strive to develop semantic partnerships between formalisms that would facilitate the construction of a model that captures the requirements most relevant to a given problem; e.g., a model that embodies timing and probability information in the context of mobile systems (see also Section 3.2, which treats, in greater detail, the challenge of taxonomizing and unifying semantic models of concurrency).
Finally, these formalisms must be brought out of the ``test-tube'' environment and applied in an integrated way to the specification, verification and design of complex real-life systems. A concrete challenge here is the formal modeling and analysis of the emerging micro electro-mechanical systems (MEMS) technology, which presents a host of modeling difficulties. The interested reader should see [Gab95] and the ACM SDCR MEMS working group report for more on MEMS.
Another challenging application is air-traffic control. Its safety-critical nature and high degree of concurrency make this area an ideal test case for concurrency research. The utility of a formal approach is already evident in the work of [HL96], where a formal requirements specification of the commercial traffic collision avoidance system (TCAS II) was produced and tested. Greater challenges lie in the future, particularly if today's centralized solutions, which require airplanes to use specific approach patterns to runways, are abandoned in favor of decentralized ``free-flight'' solutions. While free-flight solutions can improve efficiency and solve congestion problems, the complexity of their design demands the use of formal approaches. Central to this design are problems of coordination and conflict resolution among multiple agents [SMT+95].
The progress of science involves a constant interplay between diversification and unification. Diversification extends the boundaries of science to cover new and wider ranges of phenomena; successful unification reveals that a range of experimentally validated theories are no more than particular cases of some more general principle. The cycle continues when the general principle suggests further specialisations for experimental investigation.
During the past twenty years, concurrency theory has produced a mature but loose collection of models, theorems, algorithms, and tools. The collection is mature because it is based on a solid mathematical foundation (see Section 2.1) and has made possible documented successes in system design and analysis (see Section 2.2.4). The collection is loose because a theoretical result or a practical application is typically carried out within a particular formalism, whose idiosyncrasies may support the result or application in undeclared ways.
Overall, the myriad of formalisms has led to healthy diversity rather than fragmentation of the discipline: existing concurrency theories have proved suitable for a wide range of application domains, and comparative concurrency theory [Win87, Gla90, Old91, Abr96] has identified deep mathematical relationships between individual theories.
Yet potential users, such as designers of communication protocols and embedded systems, have been reluctant to apply concurrency-theoretic methods. This reluctance is reinforced by the perceived need to buy into one particular formalism and tool from what must seem, to the bystander, a bewildering array of choices.
Develop a systematic and coherent way of presenting concurrency theory to its potential users. This should be achieved by a simple uniform framework that permits an application-oriented taxonomy of the major models of concurrency and a structured organization of the core results.
A uniform framework for concurrency will aid not only potential users of concurrency-theory-based tools, but also students and researchers. In particular, such a framework could provide a basis for concurrency education in the computer science curriculum (see Section 3.5). It could also aid the development of new methods and tools for the design and analysis of heterogeneous systems, consisting of synchronous and asynchronous, discrete and continuous, hardware and software components. For example, a sufficiently broad framework could support the implementation of asynchronous protocols using synchronous circuits.
The literature already contains a myriad of proposals for highly abstract, general-purpose models of concurrency, e.g., [GV92, BDH92, BH93, Mil93, Gla93, ABV94, WN95, MMP95, MR96, Abr96, AH96b], and the uniform framework we seek could very well evolve from this body of work. To make this discussion more concrete, and to illustrate the rich diversity among the proposed models, we briefly describe three of these efforts.
A reactive module [AH96b] is a kind of state transition system that has been proposed as a uniform framework for synchronous and asynchronous computation in the context of hardware-software codesign. It supports an abstraction operator that can be used to collapse an arbitrary number of consecutive computation steps into a single step, thereby providing the ability to turn an asynchronous system into a synchronous one (temporal scalability). It supports a hiding operator that changes external variables into internal ones, which can be used to change a synchronous system into an asynchronous one (spatial scalability). Finally, the model is equipped with a natural notion of composition that permits compositional reasoning, i.e., the ability to reason about a composite system in terms of the system components.
Causal computing [MR96] is a general framework for concurrent computation based on the ``true concurrency'' model. In the causal approach, concurrent computations are defined as those equivalence classes of sequential computations that are obtained by executing concurrent events in any order. A causal program consists of a set of rewriting rules, events are rule applications, and simultaneous rewritings correspond to concurrent events. Causal computing has been used as a model of computation for process algebras, constraint and logic programming, term and graph rewriting, and mobile and coordination systems. Please see the position statement of Ugo Montanari for a closer look at causal computing.
A Chu space [GP93] is an event-state-symmetric generalization of the notion of event structure consisting of a set of events, a set of states, and a binary relation of occurrence between them. The interpretation of process algebra over event structures extends straightforwardly to Chu spaces, while the language of process algebra expands to include the operations of linear logic, compatibly interpreted. Chu spaces may be equivalently formulated in terms of unfolded Petri nets, propositional theories, or history preserving process graphs [GP95]. Chu spaces are of independent mathematical interest, forming a bicomplete self-dual closed category first studied by Barr and Chu [Bar79] which has found applications elsewhere in proof theory, game theory, and Stone duality [Pra95]. Chu spaces are the subject of Vaughan Pratt's position statement.
There are two important observations to be made about the current state of the design and verification of concurrent systems:
Transform the existing array of design and verification techniques into sound and tested methodologies. The resulting methodologies should extend the range of existing techniques to applications orders of magnitude larger in size and complexity. Echoing the concerns raised in Section 3.2 regarding a uniform semantic framework for concurrency, we should also seek ways to combine methodologies so as to better suit the demands of a given application and, relatedly, to develop an application-oriented taxonomy of methodologies.
To produce a next generation of truly usable methodologies, the following issues must be addressed:
In Section 2.2.4, we briefly discussed the impact models of concurrency have had on the design of concurrent programming languages. Strategically, we believe that the continued use of concurrency theory--especially as this theory continues to evolve--in the design and implementation of programming languages is an important research direction. The main rationale behind this belief is simple and powerful: the development of programming languages with strong foundations in concurrency theory would blur, and in some cases completely eliminate, the distinction between a system model and a system implementation. As such, the task of designing, coding, and verifying concurrent systems would be significantly eased.
Designing programming languages for concurrency is a nontrivial task. Adding a notion of concurrency to an existing language may break some of the tacit assumptions of the language's design, such as determinism and no (inter-process) interference on variables. Thus, designing a language with an explicit concurrency model from scratch is likely to produce a language with a cleaner and better-understood semantics. Moreover, rooting a programming language in a sufficiently expressive concurrency model could lead to the construction of re-usable abstractions for sophisticated synchronization mechanisms, in much the same way as objects are used in sequential systems today.
A concurrency model underlying the design of a parallel programming language might also increase our ability to efficiently port a program across a wide spectrum of system architectures, without unduly restricting the parallelism in the ported version of the program. Concurrency models by their very nature tend to be abstract, meaning that they are largely architecture-independent. In particular, they typically assume no a priori limit on the number of processors available for running processes (but see [GL94] for an exception), and hence capture a notion of maximal parallelism.
To better address the portability issue, we should more closely examine the interplay between true concurrency models and interleaving-based models (Section 2.1). The former come into play when mapping individual processes to individual processors in the target architecture; the latter are relevant when multiple processes must be mapped to the same processor. A better understanding of this interplay is one of the potential benefits of a unifying framework for concurrency (Section 3.2).
Concurrency theory has also begun to influence the design of type systems for functional concurrent programming. In [NN93, NN94], function types of the form f : t1 --b--> t2 are allowed, where b is a behavior expression of a process algebra. Such a type is read: function f takes a value of type t1, produces a value of type t2, and in doing so exhibits behavior b. Under certain conditions these behaviors are inferable and ``principal'' in a sense similar to that of SML [MTH90]. A similar approach has been proposed in [Nie90] for concurrent object-oriented languages based on process algebra.
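As a very loose sketch of the idea (the annotation syntax, channel names, and helper functions below are invented; the cited systems infer behaviors within a static type discipline, which this dynamic sketch does not attempt to capture), one can picture each function carrying a behavior expression, with sequential composition sequencing the behaviors:

```python
# Toy sketch of behavior-annotated functions: each function carries an
# invented behavior expression, and composing functions sequences them.

def behavior(b):
    """Decorator attaching a (hypothetical) behavior annotation to a function."""
    def wrap(f):
        f.behavior = b
        return f
    return wrap

@behavior("c!int")           # "sends an int on channel c"
def send_on_c(x: int) -> int:
    return x                 # the communication itself is elided in this sketch

@behavior("d?bool")          # "receives a bool on channel d"
def recv_on_d(x: int) -> bool:
    return x > 0

def seq(f, g):
    """Sequential composition: run f then g; behaviors compose by sequencing."""
    h = lambda x: g(f(x))
    h.behavior = f.behavior + ";" + g.behavior
    return h

pipeline = seq(send_on_c, recv_on_d)
print(pipeline.behavior)     # c!int;d?bool
print(pipeline(3))           # True
```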
Type systems of this nature seem like a particularly promising technique for integrating a concurrency model into a programming language. For instance, we could devise new type systems that guarantee safety and liveness properties in the same way that traditional type systems guarantee safety in calling functions and procedures [AGN96].
Design usable, safe, and secure languages incorporating a well-understood concurrency model. The need for such languages is particularly urgent now that mobile agents or applets have started to roam the Internet. Currently, mobile agents are written in languages such as Java, Telescript, and Safe Tcl/Tk, all of which provide rudimentary support for concurrency in the form of threads and semaphores. There is, however, a growing need to extend the concurrency constructs of these languages or to design new languages with more advanced synchronization mechanisms, including synchronous channels, multi-cast groups, and constraint variables. Concurrency theory--in particular, higher-order and mobile theories [Tho93, Tho95, MPW92, San92, San93, Hen93, NN94]--can play an important role in achieving these goals. The resulting languages would help system designers analyze sophisticated software systems built using mobile agents.
It is unlikely that concurrency-based design and verification methodologies will gain widespread acceptance until the user community is educated in these methodologies and the concepts underlying them. Concurrency education should start in the undergraduate curriculum. Moreover, as aptly pointed out in [Ros95b], concurrency, whether we recognize it or not, pervades many areas of computer science--e.g., hardware, operating systems, languages, compilers, and algorithms--and an undergraduate well-versed in concurrency basics will be better prepared for these courses.
Introduce a course on concurrency basics into the official ACM Undergraduate Computer Science Curriculum by the year 2000.
An undergraduate course on concurrency basics (a course entitled ``Concurrency Theory'' may sound too intimidating) should include coverage of the following essential concepts:
In this report, we have outlined the current state of the art in concurrency research and proposed strategic directions and challenges for further investigation. We hope that we have convinced the reader that the field of concurrency is thriving. Mature theories and models exist; specification, design, and verification methods and tools are under active development; many successful applications of substantial complexity have been carried out and the potential for further and more sophisticated application is enormous.
The future of computing is interactive and networked rather than sequential. Consequently, the role of concurrency theory, which aims at understanding the nature of interaction, will continue to grow.
An active mailing list for the field can be subscribed to by sending e-mail to concurrency@cwi.nl.
This report has benefited from helpful comments from Chris Hankin, Michael Loui, Jack Stankovic, and Peter Wegner. The editorial activities of Rance Cleaveland have been supported in part by ONR Grant N00014-92-J-1582, NSF Grants CCR-9257963 and CCR-9402807, and AFOSR Grant F49620-95-1-0508; and those of Scott Smolka by NSF Grant CCR-9505562 and AFOSR Grants F49620-93-1-0250 and F49620-95-1-0508.
Strategic Directions in Computing Research
Concurrency Working Group Report