Recent Publications of Scott Smolka






Recent Journal Publications

Md. A. Islam, A. Murthy, E. Bartocci, E. M. Cherry, F. H. Fenton, J. Glimm, S. A. Smolka, and R. Grosu, Model-Order Reduction of Ion Channel Dynamics Using Approximate Bisimulation. Theoretical Computer Science, 2014.

We show that in the context of the Iyer et al. (IMW) 67-variable cardiac myocyte model, it is possible to replace the detailed 13-state probabilistic subsystem of the sodium channel dynamics with a much simpler Hodgkin-Huxley (HH)-like two-state abstraction, while only incurring a bounded approximation error. We then extend our technique to the 10-state subsystem of the fast-recovering calcium-independent potassium channel. The basis of our results is the construction of an approximate bisimulation between the HH-type abstraction and the corresponding detailed ion channel model, both of which are input-controlled (voltage in this case) CTMCs. The construction of the appropriate approximate bisimulation, as well as the overall result regarding the behavior of this modified IMW model, involves: (1) Identification of the voltage-dependent parameters of the m and h gates in the HH-type models via a two-step fitting process, carried out over more than 20,000 representative observational traces of the detailed sodium and potassium ion channel models. (2) Proving that the distance between observations of the detailed models and their respective abstraction is bounded. (3) Exploring the sensitivity of the overall IMW model to the HH-type approximations. Our extensive simulation results experimentally validate our findings, for varying IMW-type input stimuli.
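The shape of an HH-type two-gate abstraction can be sketched in a few lines. The rate functions below are the classic Hodgkin-Huxley squid-axon forms, not the voltage-dependent parameters fitted in the paper; the sketch only illustrates the form of the reduced sodium-channel model.

```python
import math

# Generic Hodgkin-Huxley-style sodium-channel abstraction with two gating
# variables: m (activation) and h (inactivation). The rate functions are
# the classic HH squid-axon forms, NOT the parameters fitted in the paper.
def alpha_m(v): return 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 65.0) / 18.0)
def alpha_h(v): return 0.07 * math.exp(-(v + 65.0) / 20.0)
def beta_h(v):  return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def open_probability_trace(v_clamp, t_end=10.0, dt=0.01, v_rest=-65.0):
    """Clamp the voltage input at v_clamp (mV), integrate the gate ODEs
    dm/dt = alpha_m(v)(1-m) - beta_m(v)m (likewise for h) with forward
    Euler, and return the channel open probability m**3 * h over time."""
    m = alpha_m(v_rest) / (alpha_m(v_rest) + beta_m(v_rest))
    h = alpha_h(v_rest) / (alpha_h(v_rest) + beta_h(v_rest))
    trace = []
    for _ in range(int(t_end / dt)):
        m += dt * (alpha_m(v_clamp) * (1.0 - m) - beta_m(v_clamp) * m)
        h += dt * (alpha_h(v_clamp) * (1.0 - h) - beta_h(v_clamp) * h)
        trace.append(m ** 3 * h)
    return trace
```

A depolarizing clamp produces the characteristic transient: the open probability rises quickly as m activates, then decays as h inactivates.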


A. Murthy, E. Bartocci, F. Fenton, J. Glimm, R. Gray, E. M. Cherry, S. A. Smolka, and R. Grosu, Curvature Analysis of Cardiac Excitation Wavefronts. IEEE/ACM Transactions on Computational Biology and Bioinformatics, Vol. 10, No. 2, pp. 323-336 (March-April 2013).

We present the Spiral Classification Algorithm (SCA), a fast and accurate algorithm for classifying electrical spiral waves and their associated breakup in cardiac tissues. The classification performed by SCA is an essential component of the detection and analysis of various cardiac arrhythmic disorders, including ventricular tachycardia and fibrillation. Given a digitized frame of a propagating wave, SCA constructs a highly accurate representation of the front and the back of the wave, piecewise interpolates this representation with cubic splines, and subjects the result to an accurate curvature analysis. This analysis is more comprehensive than methods based on spiral-tip tracking, as it considers the entire wave front and back. To increase the smoothness of the resulting symbolic representation, SCA uses weighted overlapping of adjacent segments at join points. SCA has been applied to a number of representative types of spiral waves, and, for each type, a distinct curvature evolution in time (signature) has been identified. Distinct signatures have also been identified for spiral breakup. These results represent a significant first step in automatically determining parameter ranges for which a computational cardiac-cell network accurately reproduces a particular kind of cardiac arrhythmia, such as ventricular fibrillation.
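The curvature-analysis step can be illustrated with a discrete stand-in. The sketch below does not reproduce SCA's cubic-spline pipeline; it estimates curvature at each sample of a digitized front with the Menger three-point formula, which a spline-based analysis refines.

```python
import math

# Discrete stand-in for SCA's curvature analysis: the Menger curvature of
# three consecutive samples p, q, r is 4 * Area(pqr) / (|pq| |qr| |rp|),
# i.e. the inverse radius of their circumscribed circle.
def menger_curvature(p, q, r):
    cross = abs((q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]))
    denom = math.dist(p, q) * math.dist(q, r) * math.dist(r, p)
    return 2.0 * cross / denom if denom else 0.0  # cross = 2 * Area(pqr)

def curvature_profile(points):
    """Curvature estimate at each interior sample of a digitized front."""
    return [menger_curvature(points[i - 1], points[i], points[i + 1])
            for i in range(1, len(points) - 1)]
```

On samples taken from a circle of radius R, every estimate equals 1/R, which is the sanity check a curvature analysis should pass before being applied to wavefronts.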


A. Gorlin, C. R. Ramakrishnan, and S. A. Smolka, Model Checking with Probabilistic Tabled Logic Programming. Theory and Practice of Logic Programming, Vol. 12, No. 4-5, pp. 681-700, Cambridge University Press (Sept. 2012).

We present a formulation of the problem of probabilistic model checking as one of query evaluation over probabilistic logic programs. To the best of our knowledge, our formulation is the first of its kind, and it covers a rich class of probabilistic models and probabilistic temporal logics. The inference algorithms of existing probabilistic logic-programming systems are well defined only for queries with a finite number of explanations. This restriction prohibits the encoding of probabilistic model checkers, where explanations correspond to executions of the system being model checked. To overcome this restriction, we propose a more general inference algorithm that uses finite generative structures (similar to automata) to represent families of explanations. The inference algorithm computes the probability of a possibly infinite set of explanations directly from the finite generative structure. We have implemented our inference algorithm in XSB Prolog, and use this implementation to encode probabilistic model checkers for a variety of temporal logics, including PCTL and GPL (which subsumes PCTL*). Our experimental results show that, despite the highly declarative nature of their encodings, the model checkers constructed in this manner are competitive with their native implementations.
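The quantity at the heart of such an encoding is a reachability probability over a Markov chain, defined as a least fixpoint. The sketch below is Python rather than the paper's XSB Prolog, and uses plain fixpoint iteration in place of tabled resolution; it only shows what the query computes.

```python
def reach_probability(transitions, target, iters=1000):
    """Probability of eventually reaching `target` in a DTMC: the least
    fixpoint of   x[s] = 1                        if s in target
                  x[s] = sum of p * x[t]          over successors (t, p).
    `transitions` maps each state to a list of (successor, probability)."""
    states = set(transitions) | {t for succ in transitions.values() for t, _ in succ}
    x = {s: (1.0 if s in target else 0.0) for s in states}
    for _ in range(iters):          # Gauss-Seidel-style fixpoint iteration
        for s in states:
            if s not in target:
                x[s] = sum(p * x[t] for t, p in transitions.get(s, []))
    return x
```

A state that retries with probability 1/2 and succeeds with probability 1/2 reaches the goal with probability 1, which is exactly the infinite-explanation case (arbitrarily many retries) that the paper's finite generative structures handle symbolically.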


X. Huang, J. Seyster, S. Callanan, K. Dixit, R. Grosu, S. A. Smolka, S. D. Stoller, and E. Zadok, Software Monitoring with Controllable Overhead. International Journal on Software Tools for Technology Transfer, Vol. 14, Issue 3, pp. 327-347, Springer-Verlag (June 2012).

We introduce the technique of software monitoring with controllable overhead (SMCO), which is based on a novel combination of supervisory control theory of discrete event systems and PID-control theory of discrete time systems. SMCO controls monitoring overhead by temporarily disabling monitoring of selected events for as short a time as possible under the constraint of a user-supplied target overhead o_t. This strategy is optimal in the sense that it allows SMCO to monitor as many events as possible, within the confines of o_t. SMCO is a general monitoring technique that can be applied to any system interface or API. We have applied SMCO to a variety of monitoring problems, including two highlighted in this paper: integer range analysis, which determines upper and lower bounds on integer variable values; and non-accessed period detection, which detects stale or underutilized memory allocations. We benchmarked SMCO extensively, using both CPU- and I/O-intensive workloads, which often exhibited highly bursty behavior. We demonstrate that SMCO successfully controls overhead across a wide range of target overhead levels; its accuracy monotonically increases with the target overhead; and it can be configured to distribute monitoring overhead fairly across multiple instrumentation points.
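A toy version of this feedback loop can be sketched as follows, assuming a hypothetical per-event accounting interface (`OverheadController.record` is illustrative only; the real SMCO controller combines supervisory control with PID control and is considerably more refined).

```python
class OverheadController:
    """Toy SMCO-style controller: disable monitoring whenever observed
    overhead (monitoring time / base work time) reaches the target o_t,
    and re-enable it as soon as there is slack. Purely proportional; the
    real SMCO combines supervisory control with PID control."""

    def __init__(self, target_overhead):
        self.target = target_overhead   # user-supplied o_t, e.g. 0.1 for 10%
        self.monitoring_time = 0.0
        self.base_time = 0.0
        self.enabled = True

    def record(self, base_cost, monitor_cost):
        """Account for one event; return whether monitoring stays enabled."""
        self.base_time += base_cost
        if self.enabled:
            self.monitoring_time += monitor_cost
        self.enabled = self.monitoring_time / self.base_time < self.target
        return self.enabled

    def overhead(self):
        return self.monitoring_time / self.base_time
```

With unit base and monitoring costs, the fraction of monitored events settles near o_t, so the observed overhead tracks the target, which is the behavior the paper's benchmarks verify across workloads.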


J. Seyster, K. Dixit, X. Huang, R. Grosu, K. Havelund, S. A. Smolka, S. D. Stoller, and E. Zadok, InterAspect: Aspect-Oriented Instrumentation with GCC. Formal Methods in System Design, Vol. 41, No. 3, pp. 295-320, Springer (Dec. 2012).

We present the INTERASPECT instrumentation framework for GCC, a widely used compiler infrastructure. The addition of plug-in support in the latest release of GCC makes it an attractive platform for runtime instrumentation, as GCC plug-ins can directly add instrumentation by transforming the compiler's intermediate representation. Such transformations, however, require expert knowledge of GCC internals. INTERASPECT addresses this situation by allowing instrumentation plug-ins to be developed using the familiar vocabulary of Aspect-Oriented Programming: pointcuts, join points, and advice functions. Moreover, INTERASPECT uses specific information about each join point in a pointcut, possibly including results of static analysis, to support powerful customized instrumentation. We describe the INTERASPECT API and present several examples that illustrate its practical utility as a runtime-verification platform. We also introduce a tracecut system that uses INTERASPECT to construct program monitors that are formally specified as regular expressions.


X. Huang, A. Singh, and S. A. Smolka, Using Integer Clocks to Verify Clock-Synchronization Protocols. Innovations in Systems and Software Engineering, Special Issue on NASA Formal Methods Symposium 2010, Vol. 7, Issue 2, pp. 119-130 (June 2011).

We use the Uppaal model checker for timed automata to verify the Timing-Sync time-synchronization protocol for sensor networks (TPSN), the clock-synchronization algorithm of Lenzen, Locher and Wattenhofer (LLW) for general distributed systems, and the clock-thread technique of the software monitoring with controllable overhead algorithm (SMCO). Clock-synchronization algorithms such as TPSN, LLW, and SMCO must be able to perform arithmetic on clock values to calculate clock drift and network propagation delays. They must also be able to read the value of a local clock and assign it to another local clock. Such operations are not directly supported by the theory of timed automata. To overcome this formal-modeling obstacle, we augment the Uppaal specification language with the integer-clock derived type. Integer clocks, which are essentially integer variables that are periodically incremented by a global pulse generator, greatly facilitate the encoding of the operations required to synchronize clocks as in the TPSN, LLW, and SMCO protocols. With these integer-clock-based models in hand, we use Uppaal to verify a number of key correctness properties, including network-wide time synchronization, bounded clock skew, bounded overhead skew, and absence of deadlock. We also use the Uppaal Tracer tool to illustrate how integer clocks can be used to capture clock drift and resynchronization during protocol execution.
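The integer-clock idea can be conveyed outside Uppaal in a few lines of Python (the paper's models are Uppaal timed automata; the class and the TPSN-style resync below are illustrative only). Because integer clocks are ordinary variables, a node can read a neighbor's clock, compute the skew, and assign a corrected value, which real-valued timed-automata clocks do not support.

```python
class IntegerClock:
    """An integer clock: a plain counter bumped by a global pulse
    generator. A rate other than the reference rate models clock drift."""

    def __init__(self, rate=1):
        self.value = 0
        self.rate = rate          # ticks added per global pulse

    def pulse(self):
        self.value += self.rate

def synchronize(child, reference):
    """TPSN-style resync: assign the reference clock's value to the child.
    (The real protocol also estimates propagation delay; omitted here.)"""
    child.value = reference.value

# Drive both clocks from one global pulse generator; the node runs fast.
ref, node = IntegerClock(rate=10), IntegerClock(rate=11)
for _ in range(100):
    ref.pulse()
    node.pulse()
skew_before = abs(node.value - ref.value)   # drift accumulated over 100 pulses
synchronize(node, ref)                      # skew is now zero
```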


E. Bartocci, R. Singh, F. von Stein, A. Amedome, A.-J. Caceres, J. Castillo, E. Closser, G. Deards, A. Goltsev, R. Sta. Ines, C. Isbilir, J. Marc, D. Moore, D. Pardi, S. Sadhu, S. Sanchez, P. Sharma, A. Singh, J. Rogers, A. Wolinetz, T. Grosso-Applewhite, K. Zhao, A. Filipski, R. F. Gilmour Jr, R. Grosu, J. Glimm, S. A. Smolka, E. M. Cherry, E. M. Clarke, N. Griffeth, and F. H. Fenton, Teaching Cardiac Electrophysiology Modeling to Undergraduate Students: Using Java Applets and GPU Programming to Study Arrhythmias and Spiral-Wave Dynamics. American Journal of Advances in Physiology Education, Vol. 35, No. 4, pp. 427-437, American Physiological Society (Dec. 2011).

As part of a 3-wk intersession workshop funded by a National Science Foundation Expeditions in Computing award, 15 undergraduate students from the City University of New York collaborated on a study aimed at characterizing the voltage dynamics and arrhythmogenic behavior of cardiac cells for a broad range of physiologically relevant conditions using an in silico model. The primary goal of the workshop was to cultivate student interest in computational modeling and analysis of complex systems by introducing them through lectures and laboratory activities to current research in cardiac modeling and by engaging them in a hands-on research experience. The success of the workshop lay in the exposure of the students to active researchers and experts in their fields, the use of hands-on activities to communicate important concepts, active engagement of the students in research, and explanations of the significance of results as the students generated them. The workshop content addressed how spiral waves of electrical activity are initiated in the heart and how different parameter values affect the dynamics of these reentrant waves. Spiral waves are clinically associated with tachycardia, when the waves remain stable, and with fibrillation, when the waves exhibit breakup. All in silico experiments were conducted by simulating a mathematical model of cardiac cells on graphics processing units instead of the standard central processing units of desktop computers. This approach decreased the run time for each simulation to almost real time, thereby allowing the students to quickly analyze and characterize the simulated arrhythmias. Results from these simulations, as well as some of the background and methodology taught during the workshop, are presented in this article along with the programming code and the explanations of simulation results in an effort to allow other teachers and students to perform their own demonstrations, simulations, and studies.


A. Singh, C. R. Ramakrishnan and S. A. Smolka. A Process Calculus for Mobile Ad Hoc Networks. Science of Computer Programming, Vol. 75, Issue 6, pp. 440-469 (June 2010).

We present the ω-calculus, a process calculus for formally modeling and reasoning about Mobile Ad Hoc Wireless Networks (MANETs) and their protocols. The ω-calculus naturally captures essential characteristics of MANETs, including the ability of a MANET node to broadcast a message to any other node within its physical transmission range (and no others), and to move in and out of the transmission range of other nodes in the network. A key feature of the ω-calculus is the separation of a node's communication and computational behavior, described by an ω-process, from the description of its physical transmission range, referred to as an ω-process interface. Our main technical results are as follows. We give a formal operational semantics of the ω-calculus in terms of labeled transition systems and show that the state reachability problem is decidable for finite-control ω-processes. We also prove that the ω-calculus is a conservative extension of the π-calculus, and that late bisimulation (appropriately lifted from the π-calculus to the ω-calculus) is a congruence. Congruence results are also established for a weak version of late bisimulation, which abstracts away from two types of internal actions: τ-actions, as in the π-calculus, and μ-actions, signaling node movement. Finally, we illustrate the practical utility of the calculus by developing and analyzing a formal model of a leader-election protocol for MANETs.


E. Bartocci, F. Corradini, M. R. Di Berardini, E. Entcheva, S. A. Smolka and R. Grosu. Modeling and Simulation of Cardiac Tissue using Hybrid I/O Automata. Theoretical Computer Science. Vol. 410, No. 33, pp. 3149-3165 (August 2009).

We propose a new biological framework based on the Lynch et al. theory of Hybrid I/O Automata (HIOAs) for modeling and simulating excitable tissue. Within this framework, we view an excitable tissue as a composition of two main kinds of component: a diffusion medium and a collection of cells, each modeled as an HIOA. This approach yields a notion of decomposition that allows us to describe a tissue as the parallel composition of several interacting tissues, a property that could be exploited to parallelize, and hence improve, the efficiency of the simulation process. We also demonstrate the feasibility of our HIOA-based framework to capture and mimic different kinds of wave-propagation behavior in 2D isotropic cardiac tissue, including normal wave propagation along the tissue; the creation of spiral waves; the break-up of spiral waves into more complex patterns such as fibrillation; and the recovery of the tissue to rest via electrical defibrillation.


R. Grosu, S.A. Smolka, F. Corradini, A. Wasilewska, E. Entcheva, and E. Bartocci, Learning and Detecting Emergent Behavior in Networks of Cardiac Myocytes. Communications of the ACM , Vol. 53, No. 3, pp. 97-105 (March 2009).

We address the problem of specifying and detecting emergent behavior in networks of cardiac myocytes, spiral electric waves in particular, a precursor to atrial and ventricular fibrillation. To solve this problem we: (1) Apply discrete mode-abstraction to the cycle-linear hybrid automata (CLHA) we have recently developed for modeling the behavior of myocyte networks; (2) Introduce the new concept of spatial-superposition of CLHA modes; (3) Develop a new spatial logic, based on spatial-superposition, for specifying emergent behavior; (4) Devise a new method for learning the formulae of this logic from the spatial patterns under investigation; and (5) Apply bounded model checking to detect (within milliseconds) the onset of spiral waves. We have implemented our methodology as the Emerald tool-suite, a component of our EHA framework for specification, simulation, analysis and control of excitable hybrid automata. We illustrate the effectiveness of our approach by applying Emerald to the scalar electrical fields produced by our CellExcite simulator.


P. Ye, E. Entcheva, S.A. Smolka and R. Grosu. Modeling Excitable Cells Using Cycle-Linear Hybrid Automata. Journal of IET Systems Biology, Vol. 2, Issue 1, pp. 24-32 (January 2008).

Cycle-linear hybrid automata (CLHAs), a new model of excitable cells that efficiently and accurately captures action-potential morphology and other typical excitable-cell characteristics such as refractoriness and restitution, is introduced. Hybrid automata combine discrete transition graphs with continuous dynamics and emerge in a natural way during the (piecewise) approximation process of any nonlinear system. CLHAs are a new form of hybrid automata that exhibit linear behaviour on a per-cycle basis but whose overall behaviour is appropriately nonlinear. To motivate the need for this modelling formalism, first it is shown how to recast two recently proposed models of excitable cells as hybrid automata: the piecewise-linear model of Biktashev and the nonlinear model of Fenton–Karma. Both of these models were designed to efficiently approximate excitable-cell behaviour. We then show that the CLHA closely mimics the behaviour of several classical highly nonlinear models of excitable cells, thereby retaining the simplicity of Biktashev's model without sacrificing the expressiveness of Fenton–Karma. CLHAs are not restricted to excitable cells; they can be used to capture the behaviour of a wide class of dynamic systems that exhibit some level of periodicity plus adaptation.


E. Bartocci, F. Corradini, E. Entcheva, R. Grosu and S.A. Smolka. CellExcite: An Efficient Simulation Environment for Excitable Cells. BMC Bioinformatics, 9(Suppl 2):53 (March 2008).

Background: Brain, heart and skeletal muscle share similar properties of excitable tissue, featuring both discrete behavior (all-or-nothing response to electrical activation) and continuous behavior (recovery to rest follows a temporal path, determined by multiple competing ion flows). Classical mathematical models of excitable cells involve complex systems of nonlinear differential equations. Such models not only impair formal analysis but also impose high computational demands on simulations, especially in large-scale 2-D and 3-D cell networks. In this paper, we show that by choosing Hybrid Automata as the modeling formalism, it is possible to construct a more abstract model of excitable cells that preserves the properties of interest while reducing the computational effort, thereby admitting the possibility of formal analysis and efficient simulation. Results: We have developed CellExcite, a sophisticated simulation environment for excitable-cell networks. CellExcite allows the user to sketch a tissue of excitable cells, plan the stimuli to be applied during simulation, and customize the diffusion model. CellExcite adopts Hybrid Automata (HA) as the computational model in order to efficiently capture both discrete and continuous excitable-cell behavior. Conclusions: The CellExcite simulation framework for multicellular HA arrays exhibits significantly improved computational efficiency in large-scale simulations, thus opening the possibility for formal analysis based on HA theory. A demo of CellExcite is available at http://www.cs.sunysb.edu/~eha/.


D. Hansel, R. Cleaveland, and S.A. Smolka, Distributed Prototyping from Validated Specifications. Journal of Systems and Software, 2003.

We present vpl2cxx, a translator that automatically generates efficient, fully distributed C++ code from high-level system models specified in the mathematically well-founded VPL design language. As the Concurrency Workbench of the New Century (CWB-NC) verification tool includes a front-end for VPL, designers may use the full range of automatic verification and simulation checks provided by this tool on their VPL system designs before invoking the translator, thereby generating distributed prototypes from validated specifications. Besides being fully distributed, the code generated by vpl2cxx is highly readable and portable to a host of execution environments and real-time operating systems. This is achieved by encapsulating all generated code dealing with low-level interprocess communication issues in a library for synchronous communication, which in turn is built upon the Adaptive Communication Environment (ACE) client-server network programming interface. Finally, example applications show that the performance of the generated code is very good, especially for prototyping purposes. We discuss two such examples, including the RETHER real-time Ethernet protocol for voice and video applications.


Y. Dong, X. Du, G. J. Holzmann and S.A. Smolka, Fighting Livelock in the GNU i-Protocol: A Case Study in Explicit-State Model Checking. Software Tools for Technology Transfer, Vol. 4, Issue 2, 2003.

The i-protocol, an optimized sliding-window protocol for GNU uucp, first came to our attention in 1995 when we used the Concurrency Factory's local model checker to detect, locate, and correct a non-trivial livelock in version 1.04 of the protocol. Since then, we have conducted a systematic case study on the protocol using four verification tools, viz. Cospan, Murphi, Spin, and XMC, each of which supports some form of explicit-state model checking. Our results show that although the i-protocol is inherently complex--the size of its state space grows exponentially in the window size and it deploys several sophisticated optimizations aimed at minimizing control-message and retransmission overhead--it is nonetheless amenable to a number of general-purpose abstraction techniques whose application can significantly reduce the size of the protocol's state space.


A. Philippou, O. Sokolsky, R. Cleaveland, I. Lee, and S.A. Smolka, Hiding Resources that Can Fail: An Axiomatic Perspective. Information Processing Letters, Special Issue on Process Algebra, Vol. 80, Issue 1, pp. 3-13 (October 2001).

In earlier work, we presented a process algebra, PACSR, that uses a notion of resource failure to capture probabilistic behavior in reactive systems. PACSR also supports an operator for resource hiding. In this paper, we carefully consider the interaction between these two features from an axiomatic perspective. For this purpose, we introduce a subset of PACSR, called "PACSR-lite", that allows us to isolate the semantic issues surrounding resource hiding in a probabilistic setting, and provide a sound and complete axiomatization of strong bisimulation for this fragment.


X. Du, S.A. Smolka, and R. Cleaveland, Local Model Checking and Protocol Analysis. Software Tools for Technology Transfer, Vol. 2, No. 3, pp. 219-241 (Nov. 1999).

This paper describes a local model-checking algorithm for the alternation-free fragment of the modal mu-calculus that has been implemented in the Concurrency Factory and discusses its application to the analysis of a real-time communications protocol. The protocol considered is Rether, a software-based, real-time Ethernet protocol developed at SUNY at Stony Brook. Its purpose is to provide guaranteed bandwidth and deterministic, periodic network access to multimedia applications over commodity Ethernet hardware. Our model-checking results show that (for a particular network configuration) Rether makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation. Our data also indicate that, in many cases, the state-exploration overhead of the local model checker is significantly smaller than the total amount that would result from a global analysis of the protocol.

In the course of specifying and verifying Rether, we also identified an alternative design of the protocol that warranted further study due to its potentially smaller run-time overhead. Again, using local model checking, we showed that this alternative design also possesses the properties of interest. This observation points up one of the often-overlooked benefits of formal verification: by forcing designers to understand their designs rigorously and abstractly, these techniques often enable the designers to uncover interesting design alternatives.


R. Cleaveland, Z. Dayar, S.A. Smolka, and S. Yuen, Testing Preorders for Probabilistic Processes. Information and Computation (1999).

We present a testing preorder for probabilistic processes based on a quantification of the probability with which processes pass tests. The theory enjoys close connections with the classical testing theory of De Nicola and Hennessy in that whenever a process passes a test with probability 1 (respectively some non-zero probability) in our setting, then the process must (respectively may) pass the test in the classical theory. We also develop an alternative characterization of the probabilistic testing preorders that takes the form of a mapping from probabilistic traces to the interval [0,1], where a probabilistic trace is an alternating sequence of actions and probability distributions over actions.

Finally, we give proof techniques, derived from the alternative characterizations, for establishing preorder relationships between probabilistic processes. The utility of these techniques is demonstrated by means of some simple examples.


Y.-J. Joung and S.A. Smolka, Strong Interaction Fairness via Randomization. IEEE Trans. on Parallel and Distributed Systems, Vol. 9, No. 2 (February 1998).

We present MULTI, a symmetric, distributed, randomized algorithm that, with probability 1, schedules multiparty interactions in a strongly fair manner. To our knowledge, MULTI is the first algorithm for strong interaction fairness to appear in the literature. Moreover, the expected time taken by MULTI to establish an interaction is a constant not depending on the total number of processes in the system. In this sense, MULTI guarantees real-time response.

MULTI makes no assumptions (other than boundedness) about the time it takes processes to communicate. It thus offers an appealing tonic to the impossibility results of Tsay and Bagrodia and of Joung concerning strong interaction fairness in an environment, shared-memory or message-passing, in which processes are deterministic and the communication time is nonnegligible.

Because strong interaction fairness is as strong a fairness condition as one might actually want to impose in practice, our results indicate that randomization may also prove fruitful for other notions of fairness lacking deterministic realizations and requiring real-time response.


S.-H. Wu, S.A. Smolka and E. Stark, Composition and Behaviors of Probabilistic I/O Automata. Theoretical Computer Science, Vol. 176, No. 1-2, pp. 1-38 (1997).

We augment the I/O automaton model of Lynch and Tuttle with probability, as a step toward the ultimate goal of obtaining a useful tool for specifying and reasoning about asynchronous probabilistic systems. Our new model, called probabilistic I/O automata, preserves the fundamental properties of the I/O automaton model, such as the asymmetric treatment of input and output and the pleasant notion of asynchronous composition. For certain classes of probabilistic I/O automata, we show that probabilistic behavior maps, which are an abstract representation of I/O automaton behavior in terms of a certain expectation operator, are compositional and fully abstract with respect to a natural notion of probabilistic testing.


Y.-J. Joung and S.A. Smolka, A Comprehensive Study of the Complexity of Multiparty Interaction. Journal of the ACM, Vol. 43, No. 1, pp. 75-115 (January 1996).

A multiparty interaction is a set of I/O actions executed jointly by a number of processes, each of which must be ready to execute its own action for any of the actions in the set to occur. An attempt to participate in an interaction delays a process until all other participants are available. Although a relatively new concept, the multiparty interaction has found its way into a number of distributed programming languages and algebraic models of concurrency. In this paper, we present a taxonomy of languages for multiparty interaction that covers all proposals of which we are aware. Based on this taxonomy, we then present a comprehensive analysis of the computational complexity of the multiparty interaction implementation problem, the problem of scheduling multiparty interactions in a given execution environment.


S.A. Smolka and B. Steffen, Priority as Extremal Probability. Formal Aspects of Computing, Vol. 8, pp. 585-606 (1996).

We extend the stratified model of probabilistic processes to obtain a very general notion of process priority. The main idea is to allow probability guards of value 0 to be associated with alternatives of a probabilistic summation expression. Such alternatives can be chosen only if the non-zero alternatives are precluded by contextual constraints. We refer to this model as one of "extremal probability" and to its signature as PCCSz. We provide PCCSz with a structural operational semantics and a notion of probabilistic bisimulation, which is shown to be a congruence. Of particular interest is the abstraction PCCSp of PCCSz in which all non-zero probability guards are identified. PCCSp represents a customized framework for reasoning about priority, and covers all features of process algebras proposed for reasoning about priority that we know of.


R. Cleaveland and S.A. Smolka, Strategic Directions in Concurrency Research. ACM Computing Surveys, Vol. 28, No. 4, pp. 607-625 (December 1996).


R. De Nicola and S.A. Smolka, Concurrency: Theory and Practice. ACM Computing Surveys, Vol. 28, No. 4es, Article 52 (December 1996). http://www.acm.org/pubs/articles/journals/surveys/1996-28-4es/a52-de_nicola/a52-de_nicola.html


R. van Glabbeek, S.A. Smolka, and B.U. Steffen, Reactive, Generative, and Stratified Models of Probabilistic Processes. Information and Computation, Vol. 121, No. 1, pp. 59-80 (August 1995).

We introduce three models of probabilistic processes, namely, reactive, generative and stratified. These models are investigated within the context of PCCS, an extension of Milner's SCCS in which each summand of a process summation expression is guarded by a probability and the sum of these probabilities is 1. For each model we present a structural operational semantics of PCCS and a notion of bisimulation equivalence which we prove to be a congruence. We also show that the models form a hierarchy: the reactive model is derivable from the generative model by abstraction from the relative probabilities of different actions, and the generative model is derivable from the stratified model by abstraction from the purely probabilistic branching structure. Moreover the classical nonprobabilistic model is derivable from each of these models by abstraction from all probabilities.


J.C.M. Baeten, J.A. Bergstra, and S.A. Smolka, Axiomatizing Probabilistic Processes: ACP with Generative Probabilities. Information and Computation, Vol. 121, No. 2, pp. 234-255 (September 1995).

This paper is concerned with finding complete axiomatizations of probabilistic processes. We examine this problem within the context of the process algebra ACP and obtain as our end-result the axiom system prACP-I, a version of ACP whose main innovation is a probabilistic asynchronous interleaving operator. Our goal was to introduce probability into ACP in as simple a fashion as possible. Optimally, ACP should be the homomorphic image of the probabilistic version in which the probabilities are forgotten.

We begin by slightly weakening ACP to obtain the axiom system ACP-I. The main difference between ACP and ACP-I is that the axiom x + delta = x, which does not yield a plausible interpretation in the generative model of probabilistic computation, is rejected in ACP-I. We argue that this does not affect the usefulness of ACP-I in practice, and show how ACP can be reconstructed from ACP-I with a minimal amount of technical machinery.

prACP-I is obtained from ACP-I through the introduction of probabilistic alternative and parallel composition operators, and a process graph model for ACP-I based on probabilistic bisimulation is developed. We show that prACP-I is a sound and complete axiomatization of probabilistic bisimulation for finite processes, and that prACP-I can be homomorphically embedded in ACP-I as desired.

Our results for ACP-I and prACP-I are presented in a modular fashion by first considering several subsets of the signatures. We conclude with a discussion about adding an iteration operator to ACP-I.


F. Moller and S.A. Smolka, On the Complexity of Bisimulation. ACM Computing Surveys, Vol. 27, No. 2, pp. 287-289 (June 1995).


R. Gupta, S. Bhaskar, and S.A. Smolka, On Randomization in Sequential and Distributed Algorithms, ACM Computing Surveys, Vol. 26, No. 1, pp. 7-86 (March 1994).

Probabilistic, or randomized, algorithms are fast becoming as commonplace as conventional deterministic algorithms. This survey presents five techniques that have been widely used in the design of randomized algorithms. These techniques are illustrated using 12 randomized algorithms--both sequential and distributed--that span a wide range of applications, including: primality testing (a classical problem in number theory), universal hashing (choosing the hash function dynamically and at random), interactive probabilistic proof systems (a new method of program testing), dining philosophers (a classical problem in distributed computing), and Byzantine agreement (reaching agreement in the presence of malicious processors). Included with each algorithm is a discussion of its correctness and its computational complexity. Several related topics of interest are also addressed, including the theory of probabilistic automata, probabilistic analysis of conventional algorithms, deterministic amplification, and derandomization of randomized algorithms. Finally, a comprehensive annotated bibliography is given.
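As an illustrative aside (not code from the survey), the first application listed above, randomized primality testing, can be sketched with the standard Miller-Rabin test: a composite n passes `rounds` independent random bases with probability at most 4^(-rounds).

```python
import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin: False for composites with probability >= 1 - 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2**s * d with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # base a witnesses that n is composite
    return True
```

The error is one-sided: a "composite" answer is always correct, while a "prime" answer is wrong only with vanishing probability, which is the trade-off the survey's five techniques revolve around.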


Y.-J. Joung and S.A. Smolka, Coordinating First-Order Multiparty Interactions. ACM TOPLAS, Vol. 16, No. 3, pp. 954-985 (May 1994).

A first-order multiparty interaction is an abstraction mechanism that defines communication among a set of formal process roles. Actual processes participate in a first-order interaction by enrolling into roles, and execution of the interaction can proceed when all roles are filled by distinct processes. As in CSP, enrollment statements can serve as guards in alternative commands. The enrollment guard scheduling problem then is to enable the execution of first-order interactions through the judicious scheduling of roles to processes currently ready to execute enrollment guards.

We present a fully distributed and message-efficient algorithm for the enrollment guard scheduling problem, the first such solution of which we are aware. We also describe several extensions of the algorithm, including generic roles, dynamically changing environments where processes can be created and destroyed at run time, and nested enrollment, which allows interactions to be nested.


Recent Conference Publications

Md. A. Islam, R. DeFrancisco, C. Fan, R. Grosu, S. Mitra, and S. A. Smolka, Model Checking Tap Withdrawal in C. elegans. Submitted to CAV 2015.

We present what we believe to be the first formal verification of a biologically realistic (nonlinear ODE) model of a neural circuit in a multicellular organism: Tap Withdrawal (TW) in C. elegans, the common roundworm. TW is a reflexive behavior exhibited by C. elegans in response to vibrating the surface on which it is moving; the neural circuit underlying this response is the subject of this investigation. Specifically, we perform reachability analysis on the TW circuit model of Wicks et al. (1996), which enables us to estimate key circuit parameters. Underlying our approach is the use of Fan and Mitra's recently developed technique for automatically computing local discrepancy (convergence and divergence rates) of general nonlinear systems. We show that the results we obtain are in agreement with the experimental results of Wicks et al. (1995). As opposed to the fixed parameters found in most biological models, which can only produce the predominant behavior, our techniques characterize ranges of parameters that produce (and do not produce) all three observed behaviors: reversal of movement, acceleration, and lack of response.


A. Murthy, Md. A. Islam, S. A. Smolka, and R. Grosu, Computing Bisimulation Functions using SOS Optimization and δ-Decidability over the Reals. Proceedings of HSCC 2015, 18th International Conference on Hybrid Systems: Computation and Control, Seattle, Washington, USA, ACM Press (April 2015).

We show that the 13-state sodium channel component of the Iyer et al. cardiac cell model can be replaced with a previously identified δ-bisimilar 2-state Hodgkin-Huxley-type abstraction by appealing to a small gain theorem. To prove this feedback compositionality result, we construct quadratic-polynomial, exponentially decaying bisimulation functions between the two sodium channel models, and also for the rest of a simplified version of the Iyer et al. model, using the SOSTOOLS toolbox. Our experimental results validate the analytical ones. To the best of our knowledge, this is the first application of δ-bisimilar, feedback-assisting, compositional reasoning in biological systems.


Md. A. Islam, A. Murthy, A. Girard, S. A. Smolka, and R. Grosu, Compositionality Results for Cardiac Cell Dynamics. Proceedings of HSCC 2014, 17th International Conference on Hybrid Systems: Computation and Control, Berlin, Germany, ACM Press (April 2014).

By appealing to the small-gain theorem of one of the authors (Girard), we show that the 13-variable sodium-channel component of the 67-variable IMW cardiac-cell model (Iyer-Mazhari-Winslow) can be replaced by an approximately bisimilar, 2-variable HH-type (Hodgkin-Huxley) abstraction. We show that this substitution of (approximately) equals for equals is safe in the sense that the approximation error between sodium-channel models is not amplified by the feedback-loop context in which it is placed. To prove this feedback-compositionality result, we exhibit quadratic-polynomial, exponentially decaying bisimulation functions between the IMW and HH-type sodium channels, and also for the IMW-based context in which these sodium-channel models are placed. These functions allow us to quantify the overall error introduced by the sodium-channel abstraction and subsequent substitution in the IMW model. To automate computation of the bisimulation functions, we employ the SOSTOOLS optimization toolbox. Our experimental results validate our analytical findings. To the best of our knowledge, this is the first application of δ-bisimilar, feedback-assisting, compositional reasoning in biological systems.
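To illustrate the key property at work here (with a toy example, not the paper's channel models), a bisimulation function is a nonnegative function of the joint state that decays along joint trajectories and thereby bounds the distance between detailed and abstract models; this can be checked numerically for two hypothetical scalar systems driven by the same input (`simulate_gap` is my own name):

```python
import math

# Toy stand-ins for a detailed model and its abstraction: two stable scalar
# systems driven by the same input u(t):
#   dx1/dt = -a*x1 + u,   dx2/dt = -a*x2 + u.
# V(x1, x2) = (x1 - x2)**2 plays the role of a quadratic, exponentially
# decaying bisimulation function: dV/dt = -2*a*V <= 0 along joint runs.
def simulate_gap(a, x1, x2, u, dt=1e-3, steps=5000):
    vals = []
    for k in range(steps):
        vals.append((x1 - x2) ** 2)
        uk = u(k * dt)
        x1 += dt * (-a * x1 + uk)
        x2 += dt * (-a * x2 + uk)
    return vals

vals = simulate_gap(a=2.0, x1=1.0, x2=-0.5, u=lambda t: math.sin(t))
```

Because the input terms cancel in the difference, V shrinks by the factor (1 - a·dt)² at every Euler step, so the abstraction error stays bounded regardless of the stimulus, which is the intuition behind the feedback-compositionality argument.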


E. Bartocci, R. DeFrancisco, and S. A. Smolka, Towards a GPGPU-Parallel SPIN Model Checker. Proceedings of SPIN 2014, Third International SPIN Symposium on Model Checking of Software, San Jose, CA, ACM Press (July 2014).

As General-Purpose Graphics Processing Units (GPGPUs) become more powerful, they are being used increasingly often in high-performance computing applications. State-space exploration, as employed in model checking and other verification techniques, is a large, complex problem that has successfully been ported to a variety of parallel architectures. Use of the GPU for this purpose, however, has only recently begun to be studied. We show how the 2012 multicore CPU-parallel state-space exploration algorithm of the SPIN model checker can be re-engineered to take advantage of the unique parallel-processing capabilities of the GPGPU architecture, and demonstrate how to overcome the non-trivial design obstacles presented by this task. Our preliminary results demonstrate significant performance improvements over the traditional sequential model checker for state spaces of appreciable size (>10 million unique states).


S. Bhattacharyya, S. P. Miller, J. Yang, S. A. Smolka, B. Meng, C. Sticksel, and C. Tinelli, Verification of Quasi-Synchronous Systems with Uppaal. Proceedings of DASC 2014, IEEE/AIAA 33rd Digital Avionics Systems Conference, Colorado Springs, CO (Oct. 2014).

Modern defense systems are complex distributed software systems implemented over heterogeneous and constantly evolving hardware and software platforms. Distributed agreement protocols are often developed by exploiting the fact that their systems are quasi-synchronous: even though the clocks of the different nodes are not synchronized, they all run at the same rate, or multiples of the same rate, modulo their drift and jitter. This paper describes an effort to provide systems designers and engineers with an intuitive modeling environment that allows them to specify the high-level architecture and synchronization logic of quasi-synchronous systems using widely available systems-engineering notations and tools. To this end, a translator was developed that translates system architectural models specified in a subset of SysML into the Architectural Analysis and Description Language (AADL). Translators were also developed that translate the AADL models into the input languages of the Uppaal and Kind model checkers. The Uppaal model checker supports the modeling, verification, and validation of real-time systems modeled as a network of timed automata. This paper focuses on the challenges encountered in translating from AADL to Uppaal, and illustrates the overall approach with a common avionics example: the Pilot Flying System.


D. Peled, R. Grosu, C. R. Ramakrishnan, S. A. Smolka, S. D. Stoller, and J. Yang, Using Statistical Model Checking for Measuring Systems. Proceedings of ISoLA 2014, Sixth International Symposium on Leveraging Applications, Corfu, Greece, Lecture Notes in Computer Science, Springer (Oct. 2014).

State spaces represent the way a system evolves through its different possible executions. Automatic verification techniques are used to check whether the system satisfies certain properties, expressed using automata or logic-based formalisms. This provides a Boolean indication of the system’s fitness. It is sometimes desirable to obtain other indications, measuring, e.g., duration, energy, or probability. Certain measurements are inherently harder than others. This can be explained by appealing to the difference in complexity of checking CTL and LTL properties. While the former can be done in time linear in the size of the property, the latter is PSPACE-complete in the size of the property; hence practical algorithms take exponential time. While CTL-type properties measure specifications that are based on adjacency of states (up to a fixpoint calculation), LTL properties have the flavor of imposing multiple complicated requirements on each execution sequence. In order to quickly measure LTL-style properties from a structure, we use a form of statistical model checking; we exploit the fact that LTL-style properties on a path behave like CTL-style properties on a structure. We then use CTL-based measuring on paths, and generalize the measurement results to the full structure using optimal Monte Carlo estimation techniques. To experimentally validate our framework, we present measurements for a flocking model of bird-like agents.
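For illustration (a generic statistical-model-checking sketch, not the paper's algorithm; all names are my own), the Monte Carlo step amounts to sampling executions, checking a path property on each, and sizing the sample from the additive Hoeffding bound so the estimate lands within eps of the truth with confidence 1 - delta:

```python
import math, random

def estimate_reach_prob(P, start, target, horizon, eps=0.02, delta=0.01,
                        rng=random):
    """Estimate Pr[target reached within `horizon` steps] by sampling paths.
    Hoeffding's inequality gives |estimate - truth| <= eps with probability
    at least 1 - delta when n >= ln(2/delta) / (2 * eps**2)."""
    n = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    states = sorted(P)
    hits = 0
    for _ in range(n):
        s = start
        for _ in range(horizon):
            if s == target:
                break
            r, acc = rng.random(), 0.0
            for t in states:
                acc += P[s].get(t, 0.0)
                if r < acc:
                    s = t
                    break
        hits += (s == target)
    return hits / n

# Toy chain: 'a' flips a fair coin to stay or move to 'b'; 'b' reaches 'c'.
random.seed(7)  # seeded only so the example is reproducible
P = {'a': {'a': 0.5, 'b': 0.5}, 'b': {'c': 1.0}, 'c': {'c': 1.0}}
p_hat = estimate_reach_prob(P, 'a', 'c', horizon=10)
```

Each sampled path is checked by simple adjacency along the path, which is the sense in which an LTL-style property behaves like a CTL-style one on a single execution.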


T. Deshpande, P. Katsaros, S. A. Smolka, and S. D. Stoller, Stochastic Game-Based Analysis of the DNS Bandwidth Amplification Attack Using Probabilistic Model Checking. Proceedings of EDCC 2014, 10th European Dependable Computing Conference, Newcastle upon Tyne, UK, IEEE Press (May 2014).

The Domain Name System (DNS) is an Internet-wide, hierarchical naming system used to translate domain names into numeric IP addresses. Any disruption of DNS service can have serious consequences. We present a formal game-theoretic analysis of a notable threat to DNS, namely the bandwidth amplification attack (BAA), and the countermeasures designed to defend against it. We model the DNS BAA as a two-player, turn-based, zero-sum stochastic game between an attacker and a defender. The attacker attempts to flood a victim DNS server with malicious traffic by choosing an appropriate number of zombie machines with which to attack. In response, the defender chooses among five BAA countermeasures, each of which seeks to increase the amount of legitimate traffic the victim server processes. To simplify the model and optimize the analysis, our model does not explicitly track the handling of each packet. Instead, our model is based on calculations of the rates at which the relevant kinds of events occur in each state. We use our game-based model of DNS BAA to generate optimal attack strategies, which vary the number of zombies, and optimal defense strategies, which aim to enhance the utility of the BAA countermeasures by combining them in advantageous ways. The goal of these strategies is to optimize the attacker's and defender's payoffs, which are defined using probabilistic reward-based properties, and are measured in terms of the attacker's ability to minimize the volume of legitimate traffic that is processed, and the defender's ability to maximize the volume of legitimate traffic that is processed.


Md. A. Islam, T. Deshpande, A. Murthy, E. Bartocci, S. D. Stoller, S. A. Smolka, and R. Grosu, Tracking Action Potentials of Nonlinear Excitable Cells using Model Predictive Control. Proceedings of BIOTECHNO 2014, Sixth International Conference on Bioinformatics, Biocomputational Systems and Biotechnologies, Chamonix, France, IARIA XPS Press (April 2014).

Explicit and online Model Predictive Controllers (MPCs) are presented for an excitable cell simulator. The dynamics of the excitable cell are captured using the nonlinear FitzHugh-Nagumo model. Despite the plant’s nonlinearity, the model predictive control problem is formulated as an instance of quadratic programming, using a piecewise affine abstraction of the plant. The speed-versus-accuracy tradeoff is analyzed for the explicit and online versions by testing them on the same reference trajectory. The MPCs provide a framework for in silico testing of biomedical control strategies, specifically in the context of excitable cells like neurons and cardiac myocytes.
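As a toy illustration of why the formulation is a quadratic program (a hypothetical scalar plant, not the FitzHugh-Nagumo model or the paper's controllers): for an unconstrained linear plant, the one-step MPC cost is a one-variable quadratic whose minimizer has a closed form.

```python
# One-step model-predictive control of a scalar linear plant x+ = a*x + b*u,
# tracking a reference r.  The stage cost
#   J(u) = q*(a*x + b*u - r)**2 + rho*u**2
# is a one-variable quadratic program; setting dJ/du = 0 gives the minimizer.
def mpc_step(x, r, a=0.9, b=0.5, q=1.0, rho=0.1):
    return q * b * (r - a * x) / (q * b * b + rho)

def simulate(x0, ref, steps=50, a=0.9, b=0.5):
    """Closed loop: at each step, apply the one-step optimal control."""
    x = x0
    for _ in range(steps):
        x = a * x + b * mpc_step(x, ref, a=a, b=b)
    return x

x_final = simulate(0.0, 1.0)
```

The closed loop converges to about 0.96·r rather than r: the residual offset is the classic price of penalizing control effort (rho > 0). With input or state constraints, the same quadratic cost must be minimized by a QP solver rather than in closed form, which is where explicit versus online MPC diverge.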



K. Kalajdzic, E. Bartocci, S. A. Smolka, S. D. Stoller and R. Grosu, Runtime Verification with Particle Filtering. Proceedings of RV 2013, Fourth International Conference on Runtime Verification, INRIA Rennes, France, Springer-Verlag, Lecture Notes in Computer Science (Sept. 2013).

We introduce Runtime Verification with Particle Filtering (RVPF), a powerful and versatile method for controlling the tradeoff between uncertainty and overhead in runtime verification. Overhead and accuracy are controlled by adjusting the frequency and duration of observation gaps, during which program events are not monitored, and by adjusting the number of particles used in the RVPF algorithm. We succinctly represent the program model, the program monitor, their interaction, and their observations as a dynamic Bayesian network (DBN). Our formulation of RVPF in terms of DBNs is essential for a proper formalization of peek events: low-cost observations of parts of the program state, which are performed probabilistically at the end of observation gaps. Peek events provide information that our algorithm uses to reduce the uncertainty in the monitor state after gaps. We estimate the internal state of the DBN using particle filtering (PF) with sequential importance resampling (SIR). PF uses a collection of conceptual particles (random samples) to estimate the probability distribution for the system's current state: the probability of a state is given by the sum of the importance weights of the particles in that state. After an observed event, each particle chooses a state transition to execute by sampling the DBN's joint transition probability distribution; particles are then redistributed among the states that best predicted the current observation. SIR exploits the DBN structure and the current observation to reduce the variance of the PF and increase its performance. We experimentally compare the overhead and accuracy of our RVPF algorithm with two previous approaches to runtime verification with state estimation: an exact algorithm based on the forward algorithm for HMMs, and an approximate version of that algorithm, which uses precomputation to reduce runtime overhead. 
Our results confirm RVPF's versatility, showing how it can be used to control the tradeoff between execution time and memory usage while, at the same time, being the most accurate of the three algorithms.
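To illustrate the core machinery (a generic bootstrap particle filter with sequential importance resampling on a made-up two-state monitor model, not the RVPF algorithm itself):

```python
import random

def particle_filter(obs, trans, emit, init, n=2000, rng=random):
    """Bootstrap particle filter with sequential importance resampling (SIR).
    trans[s] and emit[s] are dicts of transition / emission probabilities."""
    states = list(init)
    particles = rng.choices(states, weights=[init[s] for s in states], k=n)
    for o in obs:
        # propagate: each particle samples a successor state from the model
        particles = [rng.choices(states, weights=[trans[p][s] for s in states])[0]
                     for p in particles]
        # weight by the likelihood of the observation, then resample (SIR)
        weights = [emit[p].get(o, 0.0) for p in particles]
        particles = rng.choices(particles, weights=weights, k=n)
    return {s: particles.count(s) / n for s in states}

# Made-up model: a mostly-healthy program emitting 'ok'/'err' events.
random.seed(0)  # seeded only so the example is reproducible
trans = {'good': {'good': 0.95, 'bad': 0.05}, 'bad': {'good': 0.10, 'bad': 0.90}}
emit  = {'good': {'ok': 0.9, 'err': 0.1},    'bad': {'ok': 0.2, 'err': 0.8}}
belief = particle_filter(['ok', 'err', 'err', 'err'], trans, emit,
                         init={'good': 0.8, 'bad': 0.2})
```

After a run of 'err' observations, most of the particle mass sits in the 'bad' state; the particle count n is the knob that trades estimation accuracy against overhead, as in RVPF.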


A. Islam, A. Murthy, E. Bartocci, A. Girard, S. A. Smolka, and R. Grosu, Compositionality Results for Cardiac Cell Dynamics. Proceedings of CMSB’13, 11th Conference on Computational Methods in Systems Biology, Klosterneuburg, Austria, Springer-Verlag, Lecture Notes in Computer Science, Sept. 2013 (Short paper).

By appealing to the small-gain theorem of one of the authors (Girard), we show that the 13-variable sodium-channel component of the 67-variable IMW cardiac-cell model (Iyer-Mazhari-Winslow) can be replaced by an approximately bisimilar, 2-variable HH-type (Hodgkin-Huxley) abstraction. We show that this substitution of (approximately) equals for equals is safe in the sense that the approximation error between sodium-channel models is not amplified by the feedback-loop context in which it is placed. To prove this feedback-compositionality result, we exhibit quadratic-polynomial, exponentially decaying bisimulation functions between the IMW and HH-type sodium channels, and also for the IMW-based context in which these sodium-channel models are placed. These functions allow us to quantify the overall error introduced by the sodium-channel abstraction and subsequent substitution in the IMW model. To automate computation of the bisimulation functions, we employ the SOSTOOLS optimization toolbox. Our experimental results validate our analytical findings. To the best of our knowledge, this is the first application of δ-bisimilar, feedback-assisting, compositional reasoning in biological systems.


E. Bartocci, R. Grosu, A. Karmarkar, S. A. Smolka, S. D. Stoller, E. Zadok and J. Seyster, Adaptive Runtime Verification. Proceedings of RV 2012, Third International Conference on Runtime Verification, Istanbul, Turkey, Springer-Verlag, Lecture Notes in Computer Science (Sept. 2012).

We present Adaptive Runtime Verification (ARV), a new approach to runtime verification in which overhead control, runtime verification with state estimation, and predictive analysis are synergistically combined. Overhead control maintains the overhead of runtime verification at a specified target level, by enabling and disabling monitoring of events for each monitor instance as needed. In ARV, predictive analysis based on a probabilistic model of the monitored system is used to estimate how likely each monitor instance is to violate a given temporal property in the near future, and these criticality levels are fed to the overhead controllers, which allocate a larger fraction of the target overhead to monitor instances with higher criticality, thereby increasing the probability of violation detection. Since overhead control causes the monitor to miss events, we use Runtime Verification with State Estimation (RVSE) to estimate the probability that a property is satisfied by an incompletely monitored run. A key aspect of the ARV framework is a new algorithm for RVSE that performs the calculations in advance, dramatically reducing the runtime overhead of RVSE, at the cost of introducing some approximation error. We demonstrate the utility of ARV on a significant case study involving runtime monitoring of concurrency errors in the Linux kernel.


A. Gorlin, C. R. Ramakrishnan, and S. A. Smolka, Model Checking with Probabilistic Tabled Logic Programming. Proceedings of ICLP 2012, 28th International Conference on Logic Programming, Budapest, Hungary (Sep. 2012).

We present a formulation of the problem of probabilistic model checking as one of query evaluation over probabilistic logic programs. To the best of our knowledge, our formulation is the first of its kind, and it covers a rich class of probabilistic models and probabilistic temporal logics. The inference algorithms of existing probabilistic logic-programming systems are well defined only for queries with a finite number of explanations. This restriction prohibits the encoding of probabilistic model checkers, where explanations correspond to executions of the system being model checked. To overcome this restriction, we propose a more general inference algorithm that uses finite generative structures (similar to automata) to represent families of explanations. The inference algorithm computes the probability of a possibly infinite set of explanations directly from the finite generative structure. We have implemented our inference algorithm in XSB Prolog, and use this implementation to encode probabilistic model checkers for a variety of temporal logics, including PCTL and GPL (which subsumes PCTL*). Our experimental results show that, despite the highly declarative nature of their encodings, the model checkers constructed in this manner are competitive with their native implementations.
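To illustrate the underlying computation (a generic sketch, not the paper's XSB encoding): the probability of a possibly infinite set of path "explanations" can be summed without enumeration by taking the least fixpoint of a Bellman-style operator, here for reachability in a discrete-time Markov chain.

```python
def reach_probability(P, goal, tol=1e-12, max_iter=10000):
    """Least fixpoint of  x_s = 1 if s in goal else sum_t P[s][t] * x_t.
    Each iteration adds the mass of one-step-longer paths, so the possibly
    infinite set of paths reaching `goal` is summed without enumeration."""
    x = {s: (1.0 if s in goal else 0.0) for s in P}
    for _ in range(max_iter):
        new = {s: (1.0 if s in goal
                   else sum(p * x[t] for t, p in P[s].items())) for s in P}
        if max(abs(new[s] - x[s]) for s in P) < tol:
            return new
        x = new
    return x

# Gambler's ruin with win probability 0.4: states 0 (ruin) .. 3 (goal).
P = {0: {0: 1.0}, 1: {0: 0.6, 2: 0.4}, 2: {1: 0.6, 3: 0.4}, 3: {3: 1.0}}
probs = reach_probability(P, goal={3})
```

From state 1 there are infinitely many winning paths (the chain can bounce between 1 and 2 arbitrarily often), yet the iteration converges to their total probability, the closed-form gambler's-ruin value.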


A. Murthy, A. Islam, E. Bartocci, E. Cherry, F. H. Fenton, J. Glimm, S. A. Smolka, and R. Grosu, Approximate Bisimulations for Sodium Channel Dynamics. Proceedings of CMSB 2012, 10th Conference on Computational Methods in Systems Biology, London, UK, Springer, Lecture Notes in Computer Science (Oct. 2012).

This paper shows that, in the context of the Iyer et al. 67-variable cardiac myocyte model (IMW), it is possible to replace the detailed 13-state continuous-time MDP model of the sodium-channel dynamics with a much simpler Hodgkin-Huxley (HH)-like two-state sodium-channel model, while only incurring a bounded approximation error. The technical basis for this result is the construction of an approximate bisimulation between the HH and IMW channel models, both of which are input-controlled (voltage in this case) continuous-time Markov chains. The construction of the appropriate approximate bisimulation, as well as the overall result regarding the behavior of this modified IMW model, involves: (1) The identification of the voltage-dependent parameters of the m and h gates in the HH-type channel, based on the observations of the IMW channel. (2) Proving that the distance between observations of the two channels never exceeds a given error. (3) Exploring the sensitivity of the overall IMW model to the HH-type sodium-channel approximation. Our extensive simulation results experimentally validate our findings, for varying IMW-type input stimuli.
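For context, an HH-type gating variable obeys dm/dt = α(V)(1 - m) - β(V)m and relaxes to the steady state α/(α + β). A minimal Euler integration at a fixed voltage (hypothetical constant rates; the paper fits voltage-dependent ones):

```python
def simulate_gate(alpha, beta, m0=0.0, dt=1e-3, steps=20000):
    """Forward-Euler integration of a single HH-type gating variable at a
    fixed voltage:  dm/dt = alpha*(1 - m) - beta*m.
    The gate relaxes exponentially to the steady state alpha/(alpha + beta)."""
    m = m0
    for _ in range(steps):
        m += dt * (alpha * (1.0 - m) - beta * m)
    return m

# Hypothetical constant rates (alpha = 2, beta = 1): steady state 2/3.
m_inf = simulate_gate(alpha=2.0, beta=1.0)
```

The fitting step (1) above amounts to choosing α(V) and β(V) so that trajectories like this one track the detailed 13-state channel's observed open probability.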


A. Donzé, O. Maler, E. Bartocci, D. Nickovic, R. Grosu, and S. A. Smolka, On Temporal Logic and Signal Processing. Proceedings of ATVA 2012, 10th International Symposium on Automated Technology for Verification and Analysis, Trivandrum, Kerala, Springer, Lecture Notes in Computer Science (Oct. 2012).

We present Time-Frequency Logic (TFL), a new specification formalism for real-valued signals that combines temporal logic properties in the time domain with frequency-domain properties. We provide a property checking framework for this formalism and illustrate its expressive power in defining and recognizing properties of musical pieces. Like hybrid automata and their analysis techniques, the TFL formalism is a contribution to a unified systems theory for hybrid systems.


G. Chatzieleftheriou, B. Bonakdarpour, S. A. Smolka and P. Katsaros, Abstract Model Repair. Proceedings of NFM 2012, Fourth NASA Formal Methods Symposium, pp. 341-355, Springer-Verlag, Lecture Notes in Computer Science, Vol. 7226, Norfolk, VA (April 2012).


S. D. Stoller, E. Bartocci, J. Seyster, R. Grosu, K. Havelund, S. A. Smolka and E. Zadok, Runtime Verification with State Estimation. Proceedings of RV 2011, Second International Conference on Runtime Verification, San Francisco, US, Springer-Verlag, Lecture Notes in Computer Science (Sept. 2011).

We introduce the concept of Runtime Verification with State Estimation and show how this concept can be applied to estimate the probability that a temporal property is satisfied by a run of a program when monitoring overhead is reduced by sampling. In such situations, there may be gaps in the observed program executions, thus making accurate estimation challenging. To deal with the effects of sampling on runtime verification, we view event sequences as observation sequences of a Hidden Markov Model (HMM), use an HMM model of the monitored program to “fill in” sampling-induced gaps in observation sequences, and extend the classic forward algorithm for HMM state estimation (which determines the probability of a state sequence, given an observation sequence) to compute the probability that the property is satisfied by an execution of the program. To validate our approach, we present a case study based on the mission software for a Mars rover. The results of our case study demonstrate high prediction accuracy for the probabilities computed by our algorithm. They also show that our technique is much more accurate than simply evaluating the temporal property on the given observation sequences, ignoring the gaps.


E. Bartocci, E. M. Cherry, J. Glimm, R. Grosu, S. A. Smolka, and F.H. Fenton, Toward Real-time Simulation of Cardiac Dynamics. Proceedings of CMSB 2011, 9th International Conference on Computational Methods in Systems Biology, Paris, France, ACM Press (Sept. 2011).

We show that through careful and model-specific optimizations of their GPU implementations, simulations of realistic, detailed cardiac-cell models now can be performed in 2D and 3D in times that are close to real time using a desktop computer. Previously, large-scale simulations of detailed mathematical models of cardiac cells were possible only using supercomputers. In our study, we consider five different models of cardiac electrophysiology that span a broad range of computational complexity: the two-variable Karma model, the four-variable Bueno-Orovio-Cherry-Fenton model, the eight-variable Beeler-Reuter model, the 19-variable Ten Tusscher-Panfilov model, and the 67-variable Iyer-Mazhari-Winslow model. For each of these models, we treat both their single- and double-precision versions and demonstrate linear or even sub-linear growth in simulation times with an increase in the size of the grid used to model cardiac tissue. We also show that our GPU implementations of these models can increase simulation speeds to near real-time for simulations of complex spatial patterns indicative of cardiac arrhythmic disorders, including spiral waves and spiral wave breakup. The achievement of real-time applications without the need for supercomputers may, in the near term, facilitate the adoption of modeling-based clinical diagnostics and treatment planning, including patient-specific electrophysiological studies.


A. Murthy, E. Bartocci, F. Fenton, J. Glimm, R. Gray, S.A. Smolka, and R. Grosu, Curvature Analysis of Cardiac Excitation Wavefronts. Proceedings of CMSB 2011, 9th International Conference on Computational Methods in Systems Biology, Paris, France, ACM Press (Sept. 2011).

We present the Spiral Classification Algorithm (SCA), a fast and accurate algorithm for classifying electrical spiral waves and their associated breakup in cardiac tissues. The classification performed by SCA is an essential component of the detection and analysis of various cardiac arrhythmic disorders, including ventricular tachycardia and fibrillation. Given a digitized frame of a propagating wave, SCA constructs a highly accurate representation of the front and the back of the wave, piecewise interpolates this representation with cubic splines, and subjects the result to an accurate curvature analysis. This analysis is more comprehensive than methods based on spiral-tip tracking, as it considers the entire wave front and back. To increase the smoothness of the resulting symbolic representation, the SCA uses weighted overlapping of adjacent segments which increases the smoothness at join points. SCA has been applied to a number of representative types of spiral waves, and, for each type, a distinct curvature evolution in time (signature) has been identified. Distinct signatures have also been identified for spiral breakup. These results represent a significant first step in automatically determining parameter ranges for which a computational cardiac-cell network accurately reproduces a particular kind of cardiac arrhythmia, such as ventricular fibrillation.
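The curvature analysis rests on the parametric curvature formula κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2), which can be approximated directly from uniformly sampled curve points by central finite differences (a generic sketch, not SCA's spline-based implementation):

```python
import math

def curvature(points):
    """Discrete curvature  kappa = |x'y'' - y'x''| / (x'**2 + y'**2)**1.5
    via central finite differences on a uniformly sampled curve."""
    ks = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        dx, dy = (x2 - x0) / 2.0, (y2 - y0) / 2.0        # first derivatives
        ddx, ddy = x2 - 2 * x1 + x0, y2 - 2 * y1 + y0    # second derivatives
        ks.append(abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5)
    return ks

# Sanity check: a circle of radius 2 has constant curvature 1/2.
circle = [(2 * math.cos(2 * math.pi * i / 200),
           2 * math.sin(2 * math.pi * i / 200)) for i in range(201)]
ks = curvature(circle)
```

The step size cancels between numerator and denominator, so the estimate depends only on the sampled coordinates; SCA applies the analytic version of this formula to its cubic-spline representation of the wave front and back.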


R. Grosu, G. Batt, F. H. Fenton, J. Glimm, C. Leguernic, S. A. Smolka, and E. Bartocci, From Cardiac Cells to Genetic Regulatory Networks. Proceedings of CAV 2011, 23rd International Conference on Computer Aided Verification, Snowbird, Utah, Springer-Verlag, Lecture Notes in Computer Science (July 2011).

A fundamental question in the treatment of cardiac disorders, such as tachycardia and fibrillation, is under what circumstances such a disorder arises. To answer this question, we develop a multiaffine hybrid automaton (MHA) cardiac-cell model, and restate the original question as one of identifying the parameter ranges under which the MHA model accurately reproduces the disorder. The MHA model is obtained from the minimal cardiac model of one of the authors (Fenton) by first bringing it into the form of a canonical genetic regulatory network, and then linearizing its sigmoidal switches in an optimal way. By leveraging the RoVerGeNe tool for genetic regulatory networks, we are then able to successfully identify the parameter ranges of interest.


Z. Li, R. Grosu, K. Muppalla, S. A. Smolka, S. D. Stoller, and E. Zadok, Model Discovery for Energy-Aware Computing Systems. Proceedings of ERSS 2011, First International Workshop on Energy Consumption and Reliability of Storage Systems, Orlando, FL, IEEE Press (July 2011).

We present a model-discovery methodology for energy-aware computing systems that achieves high prediction accuracy. Model discovery, or system identification, is a critical first step in designing advanced controllers that can dynamically manage the energy-performance trade-off in an optimal manner. Our methodology favors Multiple-Input-Multiple-Output (MIMO) models over a collection of Single-Input-Single-Output (SISO) models when the inputs and outputs of the system are coupled in a nontrivial way. In such cases, MIMO is generally more accurate than SISO over a wide range of inputs in predicting system behavior. Our experimental evaluation, carried out on a representative server workload, validates our approach. We obtained an average prediction accuracy of 77% and 76% for MIMO power and performance, respectively. We also show that MIMO models are consistently more accurate than SISO ones.


Z. Li, R. Grosu, P. Sehgal, S. A. Smolka, S. D. Stoller, and E. Zadok, On the Energy Consumption and Performance of Systems Software. Proceedings of SYSTOR 2011, Fourth Annual International Systems and Storage Conference, Haifa, Israel, ACM Press (May 2011).

Models of energy consumption and performance are necessary to understand and identify system behavior, prior to designing advanced controls that can balance out performance and energy use. This paper considers the energy consumption and performance of servers running a relatively simple file-compression workload. We found that standard techniques for system identification do not produce acceptable models of energy consumption and performance, due to the intricate interplay between the discrete nature of software and the continuous nature of energy and performance. This motivated us to perform a detailed empirical study of the energy consumption and performance of this system with varying compression algorithms and compression levels, file types, persistent storage media, CPU DVFS levels, and disk I/O schedulers. Our results identify and illustrate factors that complicate the system’s energy consumption and performance, including nonlinearity, instability, and multi-dimensionality. Our results provide a basis for future work on modeling energy consumption and performance to support principled design of controllable energy-aware systems.


E. Bartocci, R. Grosu, P. Katsaros, C. R. Ramakrishnan, and S. A. Smolka, Model Repair for Probabilistic Systems. Proceedings of TACAS 2011, 17th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Saarbrucken, Germany, Springer-Verlag, Lecture Notes in Computer Science, Vol. 6605, pp. 326-340 (March-April 2011).


N. Alexiou, T. Deshpande, S. Basagiannis, S. A. Smolka and P. Katsaros, Formal Analysis of the Kaminsky DNS Cache-Poisoning Attack Using Probabilistic Model Checking. Proceedings of HASE 2010, 12th IEEE International High Assurance Systems Engineering Symposium, San Jose, CA, IEEE Computer Society (Nov. 2010).

We use the probabilistic model checker PRISM to formally model and analyze the highly publicized Kaminsky DNS cache-poisoning attack. DNS (Domain Name System) is an Internet-wide, hierarchical naming system used to translate domain names such as google.com into physical IP addresses such as 208.77.188.166. The Kaminsky DNS attack is a recently discovered vulnerability in DNS that allows an intruder to hijack a domain, i.e., corrupt a DNS server so that it replies with the IP address of a malicious web server when asked to resolve URLs within a non-malicious domain such as google.com. A proposed fix for the attack is based on the idea of randomizing the source port a DNS server uses when issuing a query to another server in the DNS hierarchy. We use PRISM to introduce a Continuous-Time Markov Chain representation of the Kaminsky attack and the proposed fix, and to perform the required probabilistic model checking. Our results, gleaned from more than 240 PRISM runs, formally validate the existence of the Kaminsky cache-poisoning attack even in the presence of an intruder with virtually no knowledge of the victim DNS server's actions. They also serve to quantify the effectiveness of the proposed fix: using nonlinear least-squares curve fitting, we show that the probability of a successful attack obeys a 1/N distribution, where N is the upper limit on the range of source-port ids. We also demonstrate an increasing attack probability with an increasing number of attempted attacks or an increasing rate at which the intruder guesses the source-port id.
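The 1/N relationship reported above can be illustrated with a small Monte Carlo sketch (this is a hypothetical toy simulation of port guessing, not the paper's PRISM/CTMC model; all names are illustrative):

```python
import random

def attack_succeeds(num_ports, num_guesses, rng):
    """One attack window: the intruder gets num_guesses tries to match
    the server's randomly chosen source port (all other protocol
    details of the attack are elided)."""
    port = rng.randrange(num_ports)
    return any(rng.randrange(num_ports) == port for _ in range(num_guesses))

def estimate_success_probability(num_ports, num_guesses=1, trials=20000, seed=0):
    """Estimate the per-window success probability by sampling."""
    rng = random.Random(seed)
    hits = sum(attack_succeeds(num_ports, num_guesses, rng) for _ in range(trials))
    return hits / trials

# With one guess per window, the estimate comes out close to 1/N,
# mirroring the 1/N curve fitted in the paper.
p = estimate_success_probability(num_ports=16)
```

Widening the port range (larger N) drives the estimate down proportionally, which is exactly the effect of the source-port-randomization fix.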


J. Seyster, K. Dixit, X. Huang, R. Grosu, K. Havelund, S. A. Smolka, S. D. Stoller and E. Zadok, Aspect-Oriented Instrumentation with GCC. Proceedings of RV 2010, First International Conference on Runtime Verification, Sliema, Malta, Springer-Verlag, Lecture Notes in Computer Science, Vol. 6418, pp. 405-420 (Nov. 2010).

We present the INTERASPECT instrumentation framework for GCC, a widely used compiler infrastructure. The addition of plug-in support in the latest release of GCC makes it an attractive platform for runtime instrumentation, as GCC plug-ins can directly add instrumentation by transforming the compiler’s intermediate representation. Such transformations, however, require expert knowledge of GCC internals. INTERASPECT addresses this situation by allowing instrumentation plug-ins to be developed using the familiar vocabulary of Aspect-Oriented Programming: pointcuts, join points, and advice functions. Moreover, INTERASPECT uses specific information about each join point in a pointcut, possibly including results of static analysis, to support powerful customized instrumentation. We describe the INTERASPECT API and present several examples that illustrate its practical utility as a runtime-verification platform. We also introduce a tracecut system that uses INTERASPECT to construct program monitors that are formally specified as regular expressions.


X. Huang, A. Singh and S. A. Smolka, Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol. Proceedings of NFM 2010, Second NASA Formal Methods Symposium, Washington D.C., NASA Conference Publication (April 2010).

We use the UPPAAL model checker for Timed Automata to verify the Timing-Sync time-synchronization protocol for sensor networks (TPSN). The TPSN protocol seeks to provide network-wide synchronization of the distributed clocks in a sensor network. Clock-synchronization algorithms for sensor networks such as TPSN must be able to perform arithmetic on clock values to calculate clock drift and network propagation delays. They must be able to read the value of a local clock and assign it to another local clock. Such operations are not directly supported by the theory of Timed Automata. To overcome this formal-modeling obstacle, we augment the UPPAAL specification language with the integer clock derived type. Integer clocks, which are essentially integer variables that are periodically incremented by a global pulse generator, greatly facilitate the encoding of the operations required to synchronize clocks as in the TPSN protocol. With this integer-clock-based model of TPSN in hand, we use UPPAAL to verify that the protocol achieves network-wide time synchronization and is devoid of deadlock. We also use the UPPAAL Tracer tool to illustrate how integer clocks can be used to capture clock drift and resynchronization during protocol execution.
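The integer-clock idea described above can be mimicked outside UPPAAL; the following is a minimal Python sketch (class and method names are hypothetical) of integer variables advanced by a global pulse generator, supporting the read, assign, and arithmetic operations that plain Timed Automata clocks lack:

```python
class IntegerClock:
    """An integer variable periodically incremented by a global pulse
    generator. Unlike a Timed Automata clock, its value can be read,
    assigned, and used in arithmetic (e.g., to compute drift)."""
    def __init__(self, value=0, drift=1):
        self.value = value
        self.drift = drift  # ticks added per global pulse

    def pulse(self):
        self.value += self.drift

class PulseGenerator:
    """Advances all registered integer clocks in lockstep."""
    def __init__(self, clocks):
        self.clocks = clocks

    def tick(self, n=1):
        for _ in range(n):
            for c in self.clocks:
                c.pulse()

# TPSN-style resynchronization: a child node reads a parent clock's
# value, computes the offset, and assigns its own clock accordingly.
parent = IntegerClock()
child = IntegerClock(drift=2)  # child's clock runs fast
PulseGenerator([parent, child]).tick(10)   # parent = 10, child = 20
offset = child.value - parent.value        # measured drift: 10
child.value -= offset                      # resynchronize
```

The key point, as in the paper, is that drift computation and clock assignment are ordinary integer operations here, whereas the theory of Timed Automata does not support them directly.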


A. Singh, C. R. Ramakrishnan and S. A. Smolka, Query-Based Model Checking of Ad Hoc Network Protocols. Proceedings of CONCUR 2009, the 20th International Conference on Concurrency Theory, Pisa, Italy, Springer-Verlag, Lecture Notes in Computer Science, Vol. 5710, pp. 603-619 (Sep. 2009).


Z. Yang, B. Al-Rawi, K. Sakallah, X. Huang, S.A. Smolka and R. Grosu, Dynamic Path Reduction for Software Model Checking. Proceedings of iFM’09, the Seventh International Conference on Integrated Formal Methods, Duesseldorf, Germany, Springer-Verlag, Lecture Notes in Computer Science, Vol. 5423, pp. 322-336 (Feb. 2009).

We present the new technique of dynamic path reduction (DPR), which allows one to prune redundant paths from the state space of a program under verification. DPR is a very general technique which we consider here in the context of the bounded model checking of sequential programs with nondeterministic conditionals. The DPR approach is based on the symbolic analysis of concrete executions. For each explored execution path p that does not reach an abort statement, we repeatedly apply a weakest-precondition computation to accumulate the constraints associated with an infeasible sub-path derived from p by taking the alternative branch to an if-then-else statement. We then use an SMT solver to learn the minimally unsatisfiable core of these constraints. By further learning the statements in p that are critical to the sub-path’s infeasibility as well as the control-flow decisions that must be taken to execute these statements, unexplored paths containing the same unsatisfiable core can be efficiently and dynamically pruned. Our preliminary experimental results show that DPR can prune a significant percentage of execution paths, a percentage that grows with the size of the instance of the problem being considered.


O. Riganelli, R. Grosu, S. R. Das, C. R. Ramakrishnan and S. A. Smolka, Power Optimization in Fault-Tolerant Mobile Ad Hoc Networks. Proceedings of the 11th IEEE High Assurance Systems Engineering Symposium (HASE’08), Nanjing, China, IEEE Press (Dec. 2008).

In this paper, we investigate the transmission-power assignment problem for k-connected mobile ad hoc networks (MANETs): the problem of optimizing the lifetime of a MANET at a given degree k of connectivity by minimizing power consumption. Our proposed solution is fully distributed and uses a model-based transmission-power adaptation strategy based on model-predictive control. Specifically, a stochastic model of the network is used by a state estimator to predict the network's future degree of connectivity and remaining energy. The predicted states are used by an optimizer to derive an optimal transmission-power assignment sequence that tracks the desired connectivity level k while minimizing energy consumption. Our experimental results on a simulated wireless sensor network comprising 100 mobile nodes reveal that our localized topology-control algorithm provides an almost identical control policy to that of a globalized scheme that solves a fully observable problem. The difference, of course, is in the scalability of our localized solution, which requires much less communication bandwidth and energy than the globalized approach.


P. Ye, R. Grosu, S.A. Smolka and E. Entcheva. Formal Analysis of Abnormal Excitation in Cardiac Tissue. Proceedings of CMSB’08, the 6th International Conference on Computational Methods in Systems Biology, Rostock, Germany, Springer, Lecture Notes in Bioinformatics (Oct. 2008).

We present the Piecewise Linear Approximation Model of Ion Channel contribution (PLAMIC) to cardiac excitation. We use the PLAMIC model to conduct formal analysis of cardiac arrhythmic events, namely Early Afterdepolarizations (EADs). The goal is to quantify (for the first time) the contribution of the overall sodium (Na+), potassium (K+), and calcium (Ca2+) currents to the occurrence of EADs during the plateau phase of the cardiac action potential (AP). Our analysis yields exact mathematical criteria for the separation of the parameter space for normal and EAD-producing APs, which is validated by simulations with classical AP models based on complex systems of nonlinear differential equations. Our approach offers a simple formal technique for the prediction of conditions leading to arrhythmias (EADs) from a limited set of experimental measurements, and can be invaluable for devising new anti-arrhythmic strategies.


R. Cleaveland, S.A. Smolka and S.T. Sims, An Instrumentation-Based Approach to Controller Model Validation. Model-Driven Development of Reliable Automotive Services, Springer, LNCS 4922, pp. 84-97, (July 2008).

This paper discusses the concept of Instrumentation-Based Validation (IBV): the use of model instrumentation and coverage-based testing to validate models of embedded control software. IBV proceeds as follows. An engineer first formalizes requirements as assertions, or small models, which may be thought of as monitors that observe the behavior of the controller model as it executes. The engineer then instruments the model with these assertions and develops test suites with the aim of highlighting where assertion violations occur. To make our discussion of IBV more concrete, we also consider its implementation within the Reactis tool suite for the automated testing and validation of controller models given in Simulink/Stateflow.


A. Singh, C. R. Ramakrishnan and S. A. Smolka, A Process Algebra for Mobile Ad Hoc Networks. Proceedings of Coordination 2008, the 10th International Conference on Coordination Models and Languages, Oslo, Norway, Springer, LNCS Vol. 5052, pp. 296-314 (June 2008).


E. Bartocci, F. Corradini, R. Grosu, E. Merelli, O. Riganelli and S. A. Smolka. StonyCam: A Formal Framework for Modeling, Analyzing and Regulating Cardiac Myocytes. Concurrency, Graphs and Models, Lecture Notes in Computer Science, Vol. 5065, Springer-Verlag, pp. 493-502 (June 2008).

This paper presents a formal framework, experimental infrastructure, and computational environment for modeling, analyzing and regulating the behavior of cardiac tissues. Based on the theory of hybrid automata, we aim at providing suitable tools to be used in devising strategies for the pharmacological or other forms of treatment of cardiac electrical disturbances.


P. Ye, E. Entcheva, S.A. Smolka and R. Grosu. Symbolic Analysis of the Neuron. Proceedings of ICBBE’08, the 2nd International Conference on Bioinformatics and Biomedical Engineering, Shanghai, China, IEEE Press (May 2008).

We present a novel approach to investigating key behavioral properties of complex biological systems by first using automated techniques to learn a simplified Linear Hybrid Automaton model of the system under investigation, and then carrying out automatic reachability analysis on the resulting model. The specific biological system we consider is the neuronal Action Potential and the specific question of interest is bifurcation: the graded response of the neuron to stimulation of varying amplitude and duration. Reachability analysis in this case is performed using the d/dt analysis tool for hybrid systems. The results we so obtain reveal the precise conditions under which bifurcation manifests, when taking into consideration an infinite class of input stimuli of arbitrary shape, amplitude, and duration within given respective intervals. To the best of our knowledge, this represents the first time that formal (reachability) analysis has been applied to a computational model of excitable cells. The obvious advantage of symbolic reachability analysis over simulation—perhaps the only available analysis method when complex systems of coupled ODEs are used to model excitable-cell behavior, as has traditionally been the case—is that through the so-called reachable set computation, the system’s reaction to an infinite set of possible inputs can be observed. Our results further demonstrate that Linear Hybrid Automata, as a formal language, is both expressive enough to capture interesting excitable-cell behavior, and abstract enough to render formal analysis possible.


R. Grosu, E. Bartocci, F. Corradini, E. Entcheva, S.A. Smolka and A. Wasilewska, Learning and Detecting Emergent Behavior in Networks of Cardiac Myocytes. Proceedings of HSCC’08, the 11th International Conference on Hybrid Systems: Computation and Control, St. Louis, MO, Springer, LNCS Vol. 4981, pp. 229-243 (April 2008).

We address the problem of specifying and detecting emergent behavior in networks of cardiac myocytes, spiral electric waves in particular, a precursor to atrial and ventricular fibrillation. To solve this problem we: (1) Apply discrete mode-abstraction to the cycle-linear hybrid automata (CLHA) we have recently developed for modeling the behavior of myocyte networks; (2) Introduce the new concept of spatial superposition of CLHA modes; (3) Develop a new spatial logic, based on spatial superposition, for specifying emergent behavior; (4) Devise a new method for learning the formulae of this logic from the spatial patterns under investigation; and (5) Apply bounded model checking to detect (within milliseconds) the onset of spiral waves. We have implemented our methodology as the Emerald tool-suite, a component of our EHA framework for specification, simulation, analysis and control of excitable hybrid automata. We illustrate the effectiveness of our approach by applying Emerald to the scalar electrical fields produced by our CellExcite simulator.


E. Bartocci, F. Corradini, M.R. Di Berardini, E. Entcheva, R. Grosu and S.A. Smolka. Spatial Networks of Hybrid I/O Automata for Modeling Excitable Tissue. Proceedings of FBTC’07, the International Workshop From Biology to Concurrency and Back, Lisbon, Springer, ENTCS Vol. 194(3), pp. 51-67 (September, 2007).

We propose a new biological framework, spatial networks of hybrid input/output automata (SNHIOA), for the efficient modeling and simulation of excitable-cell tissue. Within this framework, we view an excitable tissue as a network of interacting cells arranged on a 2D spatial lattice, with the electrical behavior of a single cell modeled as a hybrid input/output automaton. To capture the phenomenon that the strength of communication between automata depends on their relative positions within the lattice, we introduce a new, weighted parallel composition operator to specify the influence of one automaton over another. The purpose of the SNHIOA model is to efficiently capture the spatiotemporal behavior of wave propagation in 2D excitable media. To validate this claim, we show how SNHIOA can be used to model and capture different spatiotemporal behaviors of wave propagation in 2D isotropic cardiac tissue, including normal planar wave propagation, spiral creation, the breakup of spirals into more complex (potentially lethal) spatiotemporal patterns, and the recovery of the tissue to rest via defibrillation.


R. Grosu, S. Mitra, P. Ye, E. Entcheva, IV Ramakrishnan and S.A. Smolka, Learning Cycle-Linear Hybrid Automata for Excitable Cells. Proceedings of HSCC’07, the 10th International Conference on Hybrid Systems: Computation and Control, Pisa, Italy, pp. 245-258, Springer, LNCS Vol. 4416 (April 2007).

We show how to automatically learn the class of Hybrid Automata called Cycle-Linear Hybrid Automata (CLHA) in order to model the behavior of excitable cells. Such cells, whose main purpose is to amplify and propagate an electrical signal known as the action potential (AP), serve as the “biologic transistors” of living organisms. The learning algorithm we propose comprises the following three phases: (1) Geometric analysis of the APs in the training set is used to identify, for each AP, the modes and switching logic of the corresponding Linear Hybrid Automata. (2) For each mode, the modified Prony’s method is used to learn the coefficients of the associated linear flows. (3) The modified Prony’s method is used again to learn the functions that adjust, on a per-cycle basis, the mode dynamics and switching logic of the Linear Hybrid Automata obtained in the first two phases. Our results show that the learned CLHA is able to successfully capture AP morphology and other important excitable-cell properties, such as refractoriness and restitution, up to a prescribed approximation error. Our approach is fully implemented in MATLAB and, to the best of our knowledge, provides the most accurate approximation model for ECs to date.


S. Callanan, R. Grosu, J. Seyster, S.A. Smolka and E. Zadok. Model Predictive Control for Memory Profiling. Proceedings of NSF NGS’07, Next Generation Software Workshop at IPDPS, Long Beach, California, pp. 1-7, IEEE Press (March 2007).

We make two contributions in the area of memory profiling. The first is a real-time, memory-profiling toolkit we call Memcov that provides both allocation/deallocation and access profiles of a running program. Memcov requires no recompilation or relinking and significantly reduces the barrier to entry for new applications of memory profiling by providing a clean, non-invasive way to perform two major functions: processing of the stream of memory-allocation events in real time and monitoring of regions in order to receive notification the next time they are hit. Our second contribution is an adaptive memory profiler and leak detector called MemcovMPC. Built on top of Memcov, MemcovMPC uses Model Predictive Control to derive an optimal control strategy for leak detection that maximizes the number of areas monitored for leaks, while minimizing the associated runtime overhead. When it observes that an area has not been accessed for a user-definable period of time, it reports it as a potential leak. Our approach requires neither mark-and-sweep leak detection nor static analysis, and reports a superset of the memory leaks actually occurring as the program runs. The set of leaks reported by MemcovMPC can be made to approximate the actual set more closely by lengthening the threshold period.
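The idle-threshold rule at the heart of the leak report can be sketched as follows (a hypothetical toy ledger with invented names; the real MemcovMPC monitors live memory regions via Memcov and tunes the monitored set with model-predictive control):

```python
class LeakMonitor:
    """Reports as potential leaks any tracked areas not accessed for
    longer than `threshold` time units; by construction this is a
    superset of the true leaks, shrinking as the threshold grows."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.last_access = {}

    def allocate(self, area, now):
        self.last_access[area] = now

    def access(self, area, now):
        self.last_access[area] = now

    def free(self, area):
        self.last_access.pop(area, None)  # freed areas cannot leak

    def potential_leaks(self, now):
        return {a for a, t in self.last_access.items()
                if now - t > self.threshold}

m = LeakMonitor(threshold=5)
m.allocate("buf1", now=0)
m.allocate("buf2", now=0)
m.access("buf2", now=8)
m.potential_leaks(now=10)   # {"buf1"}: idle for 10 > 5; buf2 idle for only 2
```

As the abstract notes, raising the threshold makes the reported set approximate the actual leak set more closely, at the cost of later detection.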


R. Grosu, X. Huang, S.A. Smolka, W. Tan and S. Tripakis. Deep Random Search for Efficient Model Checking of Timed Automata. Proceedings of MW’06, 7th Monterey Workshop on Composition of Embedded Systems, Paris, France, pp. 37-48 (Oct. 2006).

We present DRS (Deep Random Search), a new Las Vegas algorithm for model checking safety properties of timed automata. DRS explores the state space of the simulation graph of a timed automaton by performing random walks up to a prescribed depth. Nodes along these walks are then used to construct a random fringe, which is the starting point of additional deep random walks. The DRS algorithm is complete, and optimal to within a specified depth increment. Experimental results show that it is able to find extremely deep counter-examples for a number of benchmarks, outperforming Open-Kronos and Uppaal in the process.
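The search skeleton described above can be sketched in a few lines (a hedged illustration over an abstract successor function; the real DRS explores the simulation graph of a timed automaton, and this sketch omits its completeness and depth-increment machinery):

```python
import random

def deep_random_search(initial, successors, is_bad, depth, num_walks, rng):
    """Perform random walks up to `depth` steps; nodes visited along
    each walk seed a fringe from which later deep walks restart.
    Returns a path to a property-violating state, or None."""
    fringe = [[initial]]
    for _ in range(num_walks):
        walk = list(rng.choice(fringe))   # restart from a fringe node
        while len(walk) < depth:
            succs = successors(walk[-1])
            if not succs:
                break
            walk.append(rng.choice(succs))
            if is_bad(walk[-1]):
                return walk               # deep counter-example found
            fringe.append(list(walk))     # seed future deep walks
    return None

# Toy usage: states are integers, each step adds 1 or 2, and states
# >= 40 are "bad"; a walk of depth 60 is guaranteed to reach one.
path = deep_random_search(0, lambda n: [n + 1, n + 2],
                          lambda n: n >= 40, 60, 10, random.Random(1))
```

Restarting from intermediate fringe nodes rather than only the initial state is what lets the search reach the very deep counter-examples the paper reports.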


E.W. Stark, R. Cleaveland, and S.A. Smolka. Probabilistic I/O Automata: Theories of Two Equivalences. Proceedings of 17th International Conference on Concurrency Theory (CONCUR 2006), Lecture Notes in Computer Science, Vol. 4137, pp. 343-357, Springer-Verlag (Aug. 2006).

Working in the context of a process-algebraic language for Probabilistic I/O Automata (PIOA), we study the notion of PIOA behavior equivalence by obtaining a complete axiomatization of its equational theory and comparing the results with a complete axiomatization of a more standard equivalence, weighted bisimulation. The axiomatization of behavior equivalence is achieved by adding to the language an operator for forming convex combinations of terms.


M.R. True, E. Entcheva, S.A. Smolka, P. Ye and R. Grosu. Efficient Event-Driven Simulation of Excitable Hybrid Automata. Proceedings of 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2006), IEEE Press (Aug. 2006).

We present an efficient, event-driven simulation framework for large-scale networks of excitable hybrid automata (EHA), a particular kind of hybrid automata that we use to model excitable cells. A key aspect of EHA is that they possess protected modes of operation in which they are non-responsive to external inputs. In such modes, our approach takes advantage of the analytical solution of the modes' linear differential equations to eliminate all integration steps, and therefore to dramatically reduce the amount of computation required. We first present a simple simulation framework for EHA based on a time-step integration method that follows naturally from our EHA models. We then present our event-driven simulation framework, where each cell has an associated event specifying both the type of processing next required for the cell and a time at which the processing must occur. A priority queue, specifically designed to reduce queueing overhead, maintains the correct ordering among events. This approach allows us to avoid handling certain cells for extended periods of time. Through a mode-by-mode case analysis, we demonstrate that our event-driven simulation procedure is at least as accurate as the time-step one. As experimental validation of the efficacy of the event-driven approach, we demonstrate a five-fold improvement in the simulation time required to produce spiral waves in a 400 x 400 cell array.
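The event-queue mechanism described above can be sketched with Python's standard heapq module (the event shape and handler here are hypothetical; the real framework schedules per-cell EHA mode processing and uses a queue tuned to reduce queueing overhead):

```python
import heapq

class EventQueue:
    """Min-heap of (time, seq, cell, kind) events; the running `seq`
    counter breaks ties so events scheduled for the same time are
    processed in insertion order."""
    def __init__(self):
        self.heap = []
        self.seq = 0

    def schedule(self, time, cell, kind):
        heapq.heappush(self.heap, (time, self.seq, cell, kind))
        self.seq += 1

    def run_until(self, t_end, handler):
        """Process events in time order up to t_end; the handler may
        schedule follow-up events for the same or other cells."""
        while self.heap and self.heap[0][0] <= t_end:
            time, _, cell, kind = heapq.heappop(self.heap)
            handler(time, cell, kind)

q = EventQueue()
log = []
q.schedule(5.0, cell=1, kind="stimulate")
q.schedule(2.0, cell=2, kind="mode-exit")
q.run_until(10.0, lambda t, c, k: log.append((t, c, k)))
# log: [(2.0, 2, "mode-exit"), (5.0, 1, "stimulate")]
```

Because only cells with pending events are touched, cells sitting in protected modes impose no per-time-step cost, which is the source of the speedup the paper reports.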


P. Ye, E. Entcheva, S.A. Smolka, M.R. True and R. Grosu. Hybrid Automata as a Unifying Framework for Modeling Cardiac Cells. Proceedings of 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2006), IEEE Press (Aug. 2006).

We propose hybrid automata (HA) as a unifying framework for computational models of excitable cells. HA, which combine discrete transition graphs with continuous dynamics, can be naturally used to obtain a piecewise, possibly linear, approximation of a nonlinear excitable-cell model. We first show how HA can be used to efficiently capture the action-potential morphology of the dynamic Luo-Rudy model of a guinea-pig ventricular myocyte, as well as reproduce typical excitable-cell characteristics such as refractoriness and restitution. We then recast two well-known computational models, Biktashev's and Fenton-Karma, as HA without any loss of expressiveness. Given that HA possess an intuitive graphical representation and are supported by a rich mathematical theory and numerous analysis tools, we argue that they are well positioned as a computational model for biological processes.


P. Ye, E. Entcheva, S.A. Smolka, M.R. True and R. Grosu. A Cycle-Linear Approach to Modeling Action Potentials. Proceedings of 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBS 2006), IEEE Press (Aug. 2006).

We introduce cycle-linear hybrid automata (CLHA) and show how they can be used to efficiently model dynamical systems that exhibit nonlinear, pseudo-periodic behavior. CLHA are based on the observation that such systems cycle through a fixed set of operating modes, although the dynamics and duration of each cycle may depend on certain computational aspects of past cycles. CLHA are constructed around these modes such that the per-cycle, per-mode dynamics are given by a time-invariant linear system of equations; the parameters of the system are dependent on a deformation coefficient computed at the beginning of each cycle as a function of memory units. Viewed over time, CLHA generate a very intuitive, linear approximation of the entire phase space of the original, nonlinear system. We show how CLHA can be used to efficiently model the action potential of various types of excitable cells and their adaptation to pacing frequency.


S. Callanan, R. Grosu, A. Rai, S.A. Smolka, M. True and E. Zadok. Runtime Verification for High-Confidence Systems: A Monte Carlo Approach. Proceedings of Second International Workshop on Model Based Testing (MBT 2006), (March 2006).

We present a new approach to runtime verification that utilizes classical statistical techniques such as Monte Carlo simulation, hypothesis testing, and confidence interval estimation. Our algorithm, MCM, uses sampling-policy automata to vary its sampling rate dynamically as a function of the current confidence it has in the correctness of the deployed system. We implemented MCM within the Aristotle tool environment, an extensible, GCC-based architecture for instrumenting C programs for the purpose of runtime monitoring. For a case study involving the dynamic allocation and deallocation of objects in the Linux kernel, our experimental results show that Aristotle reduces the runtime overhead due to monitoring, which is initially high when confidence is low, to levels low enough to be acceptable in the long term as confidence in the monitored system grows.
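The confidence-driven sampling idea can be illustrated with a toy update rule (the functions and constants below are invented for illustration; MCM's actual sampling-policy automata and statistical machinery are more elaborate):

```python
def sampling_rate(confidence, max_rate=1.0, min_rate=0.01):
    """Sample every execution when confidence is 0; back off toward
    min_rate as confidence in the monitored system approaches 1."""
    assert 0.0 <= confidence <= 1.0
    return max(min_rate, max_rate * (1.0 - confidence))

def update_confidence(confidence, violation, step=0.05):
    """Raise confidence after each clean observation; reset to zero
    on any observed violation."""
    return 0.0 if violation else min(1.0, confidence + step)

conf = 0.0
rates = []
for _ in range(10):                    # ten clean observations in a row
    rates.append(sampling_rate(conf))
    conf = update_confidence(conf, violation=False)
# rates fall monotonically from 1.0 toward 0.55 as confidence grows
```

This mirrors the behavior reported in the case study: monitoring overhead starts high while confidence is low and decays to an acceptable long-term level as the system keeps behaving correctly.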


S. Callanan, R. Grosu, X. Huang, S.A. Smolka and E. Zadok. Compiler-Assisted Software Verification Using Plug-Ins. Proceedings of NGS’06, Next Generation Software Workshop at IPDPS, Rhodes Island, Greece, pp. 8-15, IEEE Press (April 2006).

We present Protagoras, a new plug-in architecture for the GNU compiler collection that allows one to modify GCC's internal representation of the program under compilation. We illustrate the utility of Protagoras by presenting plug-ins for both compile-time and runtime software verification and monitoring. In the compile-time case, we have developed plug-ins that interpret the GIMPLE intermediate representation to verify properties statically. In the runtime case, we have developed plug-ins for GCC to perform memory leak detection, array bounds checking, and reference-count access monitoring.


R. Grosu and S.A. Smolka. Monte Carlo Methods for Process Algebra. Proceedings of International Workshop on Algebraic Process Calculi: The First Twenty Five Years and Beyond, Bertinoro, Italy, ENTCS Vol. 162, pp. 203-207 (Sep. 2006).

We review the recently developed technique of Monte Carlo model checking and show how it can be applied to the implementation problem for I/O Automata. We then consider some open problems in applying Monte Carlo techniques to other process-algebraic problems, such as simulation and bisimulation.


R. Grosu and S.A. Smolka, Efficient Modeling of Excitable Cells Using Hybrid Automata, Proceedings of Computational Methods in Systems Biology, Lecture Notes in Computer Science, Springer-Verlag (April 2005).

We present an approach to modeling complex biological systems that is based on hybrid automata (HA). HA combine discrete transition graphs with continuous dynamics. Our goal is to efficiently capture the behavior of excitable cells previously modeled by systems of nonlinear differential equations. In particular, we derive HA models from the Hodgkin-Huxley model of the giant squid axon, the Luo-Rudy dynamic model of a guinea pig ventricular cell, and a model of a neonatal rat ventricular myocyte. Our much simpler HA models are able to successfully capture the action-potential morphology of the different cells, as well as reproduce typical excitable-cell characteristics, such as refractoriness (period of non-responsiveness to external stimulation) and restitution (adaptation to pacing rates). To model electrical wave propagation in a cell network, the single-cell HA models are linked to a classical 2D spatial model. The resulting simulation framework exhibits significantly improved computational efficiency in modeling complex wave patterns, such as the spiral waves underlying pathological conditions in the heart.


R. Grosu and S.A. Smolka, Monte Carlo Model Checking, Proceedings of TACAS 2005: Eleventh International Conference on Tools and Algorithms for the Construction and Analysis of Systems, Lecture Notes in Computer Science, Springer-Verlag (April 2005).


E.W. Stark, R. Cleaveland, and S.A. Smolka, A Process-Algebraic Language for Probabilistic I/O Automata. Proceedings of 14th International Conference on Concurrency Theory (CONCUR 2003), Lecture Notes in Computer Science, Springer-Verlag (Aug. 2003).

We present a process-algebraic language for Probabilistic I/O Automata (PIOA). To ensure that PIOA specifications given in our language satisfy the "input-enabled" property, which requires that all input actions be enabled in every state of a PIOA, we augment the language with a set of type inference rules. We also equip our language with a formal operational semantics defined by a set of transition rules. We present a number of results whose thrust is to establish that the typing and transition rules are sensible and interact properly. The central connection between types and transition systems is that if a term is well-typed, then in fact the associated transition system is input-enabled. We also consider two notions of equivalence for our language, weighted bisimulation equivalence and PIOA behavioral equivalence. We show that both equivalences are substitutive with respect to the operators of the language, and note that weighted bisimulation equivalence is a strict refinement of behavioral equivalence.


S. Basu, D. Saha, Y.-J. Lin, and S.A. Smolka, Counter-Example Generation for Push-Down Systems. Proceedings of 23rd IFIP International Conference on Formal Techniques for Networked and Distributed Systems (FORTE 2003), Lecture Notes in Computer Science, Springer-Verlag (Sep. 2003).

We present a new, on-the-fly algorithm that, given a push-down model representing a sequential program with (recursive) procedure calls and an extended finite-state automaton representing (the negation of) a safety property, produces a succinct, symbolic representation of all counter-examples, i.e., traces of system behaviors that violate the property. The class of what we call minimum-recursion loop-free counter-examples can then be generated from this representation on an as-needed basis and presented to the user. Our algorithm is also applicable, without modification, to finite-state system models. Simultaneous consideration of multiple counter-examples can minimize the number of model-checking runs needed to recognize common root causes of property violations. We illustrate the use of our techniques via application to a Java-Tar utility and an FTP-server program, and discuss a prototype tool implementation which offers several abstraction techniques for easy viewing of generated counter-examples.


Y. Dong, C.R. Ramakrishnan, and S.A. Smolka. Evidence Explorer: A Tool for Exploring Model-Checking Proofs. Proceedings of 15th Conference on Computer-Aided Verification (CAV 2003), Lecture Notes in Computer Science, Springer-Verlag (July 2003).

We present the Evidence Explorer, a new tool for assisting users in navigating the proof structure, or evidence, produced by a model checker when attempting to verify a system specification for a temporal-logic property. Due to the sheer size of such evidence, single-step traversal is prohibitive and smarter exploration methods are required. The Evidence Explorer enables users to explore evidence through a collection of orthogonal but coordinated views. These views allow one to quickly form an overall perception of the evidence through consistent visual cues, and easily locate interesting regions by simple drill-down operations. Views are definable in relational graph algebra, a natural extension of relational algebra to graph structures such as model-checking evidence.


F. Moller and S.A. Smolka. On the Computational Complexity of Bisimulation, Redux. Proceedings of the ACM Paris Kanellakis Memorial Workshop (PCK50) in affiliation with the ACM 2003 Federated Computing Research Conference (FCRC 2003), ACM Press (June 2003).

This is a revised and updated version of a 1995 ACM Computing Surveys paper by Moller and Smolka on the computational complexity of bisimulation. Paris Kanellakis and Smolka were among the first to investigate the computational complexity of bisimulation, and Moller has a long-established track record in the field. The authors therefore believe that the proceedings of Principles of Computing and Knowledge: Paris C. Kanellakis Memorial Workshop represent an ideal opportunity for another look at the subject.


Y. Dong, C.R. Ramakrishnan, and S.A. Smolka. Model Checking and Evidence Exploration. Proceedings of Tenth IEEE International Conference on Engineering of Computer Based Systems (ECBS 2003), IEEE Computer Society Press (April 2003).

We present an algebraic framework for evidence exploration: the process of interpreting, manipulating, and navigating the proof structure or evidence produced by a model checker when attempting to verify a system specification for a temporal-logic property. Due to the sheer size of such evidence, single-step traversal is prohibitive and smarter exploration methods are required. Evidence exploration allows users to explore evidence through smaller, manageable views, which are definable in relational graph algebra, a natural extension of relational algebra to graph structures such as model-checking evidence. We illustrate the utility of our approach by applying the Evidence Explorer, our tool implementation of the evidence-exploration framework, to the Java meta-locking algorithm, a highly optimized technique deployed by the Java Virtual Machine to ensure mutually exclusive access to object monitor queues by threads.


P. Yang, C.R. Ramakrishnan, and S.A. Smolka, A Logical Encoding of the pi-Calculus: Model Checking Mobile Processes Using Tabled Resolution. Proceedings of Fourth International Conference on Verification, Model Checking, and Abstract Interpretation (VMCAI 2003), Lecture Notes in Computer Science, Springer-Verlag (Jan. 2003).

We present MMC, a model checker for mobile systems specified in the style of the pi-calculus. MMC's development builds on our experience gained in developing XMC, a model checker for an extension of Milner's value-passing calculus implemented using the XSB tabled logic-programming system. MMC, however, is not simply an extension of XMC; rather it is virtually a complete re-implementation that addresses the salient issues that arise in the pi-calculus, including scope extrusion and intrusion, and dynamic generation of new names to avoid name capture. We show that tabled logic programming is especially suitable as an efficient implementation platform for model checking pi-calculus specifications, and can be used to obtain an exact encoding of the pi-calculus's transitional semantics. Moreover, MMC is easily extended to handle process expressions in the spi-calculus. Our experimental data shows that MMC outperforms other known tools for model checking the pi-calculus.


Y. Dong, B. Sarna-Starosta, C.R. Ramakrishnan, and S.A. Smolka, Vacuity Checking in the Modal Mu-Calculus. Proceedings of Ninth International Conference on Algebraic Methodology and Software Technology (AMAST 2002), Lecture Notes in Computer Science, Springer-Verlag (Sept. 2002).

Vacuity arises when a logical formula is trivially true in a given model due, for example, to antecedent failure. Beer et al. have recently introduced a logic-independent notion of vacuity and shown that certain logics, i.e., those with polarity, admit an efficient decision procedure for vacuity detection. We show that the modal mu-calculus, a very expressive temporal logic, is a logic with polarity and hence the results of Beer et al. are applicable. We also extend the definition of vacuity to achieve a new notion of redundancy in logical formulas. Redundancy captures several forms of antecedent failure that escape traditional vacuity analysis, including vacuous actions in temporal modalities and unnecessarily strong temporal operators. Furthermore, we have implemented an efficient redundancy checker for the modal mu-calculus in the context of the XMC model checker. Our checker generates diagnostic information in the form of all maximal subformulas that are redundant and exploits the fact that XMC can cache intermediate results in memo tables between model-checking runs. We have applied our redundancy checker to a number of previously published case studies, and found instances of redundancy that have gone unnoticed till now. These findings provide compelling evidence of the importance of redundancy detection in the design process.
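The simplest instance of the antecedent failure discussed above can be sketched in a few lines: an invariant G(p -> q) holds vacuously when p is false in every reachable state. The following Python fragment is an illustrative toy, not the paper's mu-calculus checker, and all names in it are invented; it checks the property and the vacuity condition on an explicit set of reachable states.

```python
def holds_invariant(states, prop):
    """Check the invariant 'prop holds in every reachable state'."""
    return all(prop(s) for s in states)

def check_with_vacuity(states, p, q):
    # Model-check G(p -> q), then test for antecedent failure:
    # if the formula holds but p never occurs, the pass is vacuous.
    result = holds_invariant(states, lambda s: (not p(s)) or q(s))
    vacuous = result and not any(p(s) for s in states)
    return result, vacuous

# Toy Kripke structure: states are dicts of atomic propositions.
# 'p' never holds, so G(p -> q) is true, but only vacuously.
states = [{"p": False, "q": False}, {"p": False, "q": True}]
res, vac = check_with_vacuity(states, lambda s: s["p"], lambda s: s["q"])
# res is True and vac is True: antecedent failure detected
```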


R. Grosu, Yanhong A. Liu, S.A. Smolka, S.D. Stoller, and J. Yan, Automated Software Engineering Using Concurrent Class Machines. Proceedings of 16th IEEE International Conference on Automated Software Engineering (ASE 2001), IEEE Press, San Diego, CA (Nov. 2001).

Concurrent Class Machines are a novel state-machine model that directly captures a variety of object-oriented concepts, including classes and inheritance, objects and object creation, methods, method invocation and exceptions, multithreading and abstract collection types. The model can be understood as a precise definition of UML activity diagrams which, at the same time, offers an executable, object-oriented alternative to event-based statecharts. It can also be understood as a visual, combined control and data flow model for multithreaded object-oriented programs. We first introduce a visual notation and tool for Concurrent Class Machines and discuss their benefits in enhancing system design. We then equip this notation with a precise semantics that allows us to define refinement and modular refinement rules. Finally, we summarize our work on generation of optimized code, implementation and experiments, and compare with related work.


D. Hansel, R. Cleaveland, S.A. Smolka, Distributed Prototyping from Validated Specifications. Proceedings of 12th IEEE International Workshop on Rapid System Prototyping (RSP 2001), IEEE Press, Monterey, CA (June 2001).

We present vpl2cxx, a translator that automatically generates efficient, fully distributed C++ code from high-level system models specified in the mathematically well-founded VPL design language. As the Concurrency Workbench of the New Century (CWB-NC) verification tool includes a front-end for VPL, designers may use the full range of automatic verification and simulation checks provided by this tool on their VPL system designs before invoking the translator, thereby generating distributed prototypes from validated specifications. Besides being fully distributed, the code generated by vpl2cxx is highly readable and portable to a host of execution environments and real-time operating systems (RTOSes). This is achieved by encapsulating all generated code dealing with low-level interprocess communication issues in a library for synchronous communication, which in turn is built upon the Adaptive Communication Environment (ACE) client-server network programming interface. Finally, example applications show that the performance of the generated code is very good, especially for prototyping purposes. We discuss two such examples, including the RETHER real-time Ethernet protocol for voice and video applications.


R. Sekar, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, Model-Carrying Code (MCC): A New Paradigm for Mobile-Code Security. Proceedings of the New Security Paradigms Workshop (NSPW 2001), ACM Press, Cloudcroft, New Mexico (Sept. 2001).

A new approach to ensuring the security of mobile code is presented. Our approach enables a mobile-code consumer to understand and formally reason about what a piece of mobile code can do; check if the actions of the code are compatible with his/her security policies; and, if so, execute the code. The compatibility-checking process is automated, but if there are conflicts, consumers have the opportunity to refine their policies, taking into account the functionality provided by the mobile code. Finally, when the code is executed, our framework uses advanced runtime-monitoring techniques to ensure that the code does not violate the consumer's (refined) policies. At the heart of our method, which we call model-carrying code (MCC), is the idea that a piece of mobile code comes equipped with an expressive yet concise model of the code's behavior, the generation of which can be automated. MCC enjoys several advantages over current approaches to mobile-code security. Succinctly put, it protects consumers of mobile code from malicious or faulty code without unduly restricting the code's functionality. Moreover, the MCC approach is applicable to the vast majority of code that exists today, which is written in C or C++. This contrasts with previous approaches such as Java 2 security and proof-carrying code, which are either language-specific or are difficult to extend to these languages. Finally, MCC can be combined with existing techniques such as cryptographic signing and proof-carrying code to yield additional benefits.


M. van Osch, S.A. Smolka, Finite-State Analysis of the CAN Bus Protocol. Proceedings of Sixth IEEE International Symposium on High Assurance Systems Engineering (HASE 2001), IEEE Press, Boca Raton, Florida (Oct. 2001).

We formally specify the data link layer of the Controller Area Network (CAN), a high-speed serial bus system with real-time capabilities, widely used in embedded systems. CAN's primary application domain is automotive, and the physical and data link layers of the CAN architecture were the subject of the ISO 11898 international standard. We checked our specification against 12 important properties of CAN, eight of which are gleaned from the ISO standard; the other four are desirable properties not directly mentioned in the standard. Our results indicate that not all properties can be expected to hold of a CAN implementation and we discuss the implications of these findings. Moreover, we have conducted a number of experiments aimed at determining how the size of the protocol's state space is affected by the introduction of various features of the data link layer, the number of nodes in the network, the number of distinct message types, and other parameters.


X. Du, C.R. Ramakrishnan, S.A. Smolka, Tabled Resolution + Constraints: A Recipe for Model Checking Real-Time Systems. Proceedings of the 21st IEEE Real-Time Systems Symposium (RTSS 2000), IEEE Computer Society Press (Nov. 2000).

We present a computational framework based on tabled resolution and constraint processing for verifying real-time systems. We also discuss the implementation of this framework in the context of the XMC/RT verification tool. For systems specified using timed automata, XMC/RT offers backward and forward reachability analysis, as well as timed modal mu-calculus model checking. It can also handle timed infinite-state systems, such as those with unbounded message buffers, provided the set of reachable states is finite. We illustrate this capability on a real-time version of the leader election protocol. Finally, XMC/RT can function as a model checker for untimed systems. Despite this versatility, preliminary benchmarking experiments indicate that XMC/RT's performance remains competitive with that of other real-time verification tools.


A. Roychoudhury, K. Narayan Kumar, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, Verification of Parameterized Systems Using Logic Program Transformations. Proceedings of the Sixth International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2000), Lecture Notes in Computer Science, Springer-Verlag (April 2000).


R. Cleaveland, X. Du, S.A. Smolka, GCCS: A Graphical Coordination Language for System Specification. Proceedings of Fourth International Conference on Coordination Models and Languages (COORDINATION 2000). Lecture Notes in Computer Science, Springer-Verlag (Sept. 2000).

We present GCCS, a graphical coordination language for hierarchical concurrent systems. GCCS, which is implemented in the Concurrency Factory design environment, represents a coordination model based on process algebra. Its coordination laws, given as a structural operational semantics, allow one to infer atomic system transitions on the basis of transitions taken by system components. We illustrate the language's utility by exhibiting a GCCS-coordinated specification of the Rether real-time ethernet protocol. The specification contains both graphical and textual components.


S. Basu, S.A. Smolka, and O.R. Ward, Model Checking the Java Meta-Locking Algorithm. Proceedings of 7th IEEE International Conference and Workshop on the Engineering of Computer Based Systems (ECBS 2000), IEEE Press. Edinburgh, Scotland (April 2000).

We apply the XMC tabled-logic-programming-based model checker to the Java meta-locking algorithm. Meta-locking is a highly optimized technique for ensuring mutually exclusive access by threads to object monitor queues, and therefore plays an essential role in allowing Java to offer concurrent access to objects. Our abstract specification of the meta-locking algorithm is fully parameterized, both on M, the number of threads, and N, the number of objects. Using XMC, we show that for a variety of values of M and N, the algorithm indeed provides mutual exclusion and freedom from lockout. Collectively, our results provide strong testimony to the correctness of the meta-locking algorithm.


S. Lu and S.A. Smolka, Model Checking the Secure Electronic Transactions (SET) Protocol. Proceedings of Seventh International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS'99), IEEE Computer Society Press (Oct. 1999).

We use model checking to establish five essential correctness properties of the Secure Electronic Transaction (SET) protocol. SET has been developed jointly by Visa and MasterCard as a method to secure payment card transactions over open networks, and industrial interest in the protocol is high. Our main contributions are as follows. First, we create a formal model of the protocol capturing the purchase request, payment authorization, and payment capture transactions; together, these transactions constitute the kernel of the protocol. We then encode our model and the aforementioned correctness properties in the input language of the FDR model checker. Running FDR on this input established that our model of the SET protocol satisfies all five properties even though the cardholder and merchant, two of the participants in the protocol, may try to behave dishonestly in certain ways. To our knowledge, this is the first attempt to formalize the SET protocol for the purpose of model checking.


Y. Dong, X. Du, Y.S. Ramakrishna, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, O. Sokolsky, E.W. Stark, and D.S. Warren, Fighting Livelock in the i-Protocol: A Comparative Study of Verification Tools. Proceedings of the Fifth International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS '99), Lecture Notes in Computer Science, Springer-Verlag (April 1999).

The i-protocol, an optimized sliding-window protocol for GNU UUCP, came to our attention two years ago when we used the Concurrency Factory's local model checker to detect, locate, and correct a non-trivial livelock in version 1.04 of the protocol. Since then, we have repeated this verification effort with five widely used model checkers, namely, Cospan, Murphi, SMV, Spin, and XMC. It is our contention that the i-protocol makes for a particularly compelling case study in protocol verification and for a formidable benchmark of verification-tool performance, for the following reasons: 1) The i-protocol can be used to gauge a tool's ability to detect and diagnose livelock errors. 2) The size of the i-protocol's state space grows exponentially in the window size, and the entirety of this state space must be searched to verify that the protocol, with the livelock error eliminated, is deadlock- or livelock-free. 3) The i-protocol is an asynchronous, low-level software system equipped with a number of optimizations aimed at minimizing control-message and retransmission overhead. It lacks the regular structure that is often present in hardware designs. In this sense, it provides any verification tool with a vigorous test of its analysis capabilities.


K. Narayan Kumar, R. Cleaveland, and S.A. Smolka, Infinite Probabilistic and Non-Probabilistic Testing. Proceedings of the 18th Conference on the Foundations of Software Technology and Theoretical Computer Science (FST&TCS '98), Lecture Notes in Computer Science, Springer-Verlag (Dec. 1998).

We introduce several new notions of infinite testing, for both probabilistic and nonprobabilistic processes, and carefully examine their distinguishing power. We consider three kinds of infinite tests: simple, Buechi, and fair. We show that Buechi tests do not change the distinguishing power of nondeterministic testing. In the probabilistic setting, we show that all three have the same distinguishing power as finite tests. Finally, we show that finite probabilistic tests are stronger than nondeterministic fair tests.


X. Liu and S.A. Smolka, Simple Linear-Time Algorithms for Minimal Fixed Points. Proceedings of the 25th International Conference on Automata, Languages, and Programming (ICALP '98), Lecture Notes in Computer Science, Springer-Verlag (July 1998).

We present global and local algorithms for evaluating minimal fixed points of dependency graphs, a general problem in fixed-point computation and model checking. Our algorithms run in linear time, matching the complexity of the best existing algorithms for similar problems, and are simple to understand. The main novelty of our global algorithm is that it does not use the counter and "reverse list" data structures commonly found in existing linear-time global algorithms. This distinction plays an essential role in allowing us to easily derive our local algorithm from our global one. Our local algorithm is distinguished from existing linear-time local algorithms by a combination of its simplicity and suitability for direct implementation.

We also provide linear-time reductions from the problems of computing minimal and maximal fixed points in Boolean graphs to the problem of minimal fixed-point evaluation in dependency graphs. This establishes dependency graphs as a suitable framework in which to express and compute alternation-free fixed points.

Finally, we relate HORNSAT, the problem of Horn formula satisfiability, to the problem of minimal fixed-point evaluation in dependency graphs. In particular, we present straightforward, linear-time reductions between these problems for both directions of reducibility. As a result, we derive a linear-time local algorithm for HORNSAT, the first of its kind as far as we are aware.
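The minimal fixed point of a dependency graph that these algorithms evaluate can be sketched with the standard counter-based worklist technique, which is precisely the bookkeeping the paper's global algorithm manages to avoid. The following Python sketch (invented names, linear in the total size of the hyperedges) conveys the problem being solved, not the authors' counter-free algorithm.

```python
from collections import deque

def least_fixed_point(hyperedges):
    """Least fixed point of a dependency graph.

    hyperedges: list of (target, sources) pairs; 'target' becomes true
    as soon as some hyperedge's sources are all true. Uses the standard
    counter-based technique: count remaining unsatisfied sources per
    hyperedge and propagate with a worklist."""
    count = []     # unsatisfied sources left per hyperedge
    watch = {}     # vertex -> indices of hyperedges waiting on it
    true_set = set()
    queue = deque()
    for i, (tgt, srcs) in enumerate(hyperedges):
        count.append(len(srcs))
        if not srcs:
            queue.append(tgt)          # axiom: no preconditions
        for u in srcs:
            watch.setdefault(u, []).append(i)
    while queue:
        v = queue.popleft()
        if v in true_set:
            continue
        true_set.add(v)
        for i in watch.get(v, []):
            count[i] -= 1
            if count[i] == 0:
                queue.append(hyperedges[i][0])
    return true_set

edges = [("a", []), ("b", ["a"]), ("c", ["b", "d"]), ("d", ["c"])]
# 'a' holds outright and forces 'b'; 'c' and 'd' only support each
# other, so the least fixed point leaves them false: {'a', 'b'}
```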


X. Liu, C.R. Ramakrishnan, and S.A. Smolka, Fully Local and Efficient Evaluation of Alternating Fixed Points. Proceedings of the Fourth International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS '98), Lecture Notes in Computer Science, Springer-Verlag (April 1998).

We introduce Partitioned Dependency Graphs (PDGs), an abstract framework for the specification and evaluation of arbitrarily nested alternating fixed points. The generality of PDGs subsumes that of similarly proposed models of nested fixed-point computation such as Boolean graphs, Boolean equation systems, and the propositional modal mu-calculus. Our main result is an efficient local algorithm for evaluating PDG fixed points. Our algorithm, which we call LAFP, combines the simplicity of previously proposed induction-based algorithms (such as Winskel's tableau method for nu-calculus model checking) with the efficiency of semantics-based algorithms (such as the bit-vector method of Cleaveland, Klein, and Steffen for the equational mu-calculus). In particular, LAFP is simply specified, we provide a completely rigorous proof of its correctness, and the number of fixed-point iterations required by the algorithm is asymptotically the same as that of the best existing global algorithms. Moreover, preliminary experimental results demonstrate that LAFP performs extremely well in practice. To our knowledge, this makes LAFP the first efficient local algorithm for computing fixed points of arbitrary alternation depth to appear in the literature.


E.W. Stark and S.A. Smolka, Compositional Analysis of Expected Delays in Networks of Probabilistic I/O Automata, Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science (LICS '98), Indianapolis, IN, IEEE Computer Society Press (June 1998).

Probabilistic I/O automata (PIOA) constitute a model for distributed or concurrent systems that incorporates a notion of probabilistic choice. The PIOA model provides a notion of composition, for constructing a PIOA for a composite system from a collection of PIOAs representing the components. We present a method for computing completion probability and expected completion time for PIOAs. Our method is compositional, in the sense that it can be applied to a system of PIOAs, one component at a time, without ever calculating the global state space of the system (i.e., the composite PIOA). The method is based on symbolic calculations with vectors and matrices of rational functions, and it draws upon a theory of observables, which are mappings from delayed traces to real numbers that generalize the classical "formal power series" from algebra and combinatorics. Central to the theory is a notion of representation for an observable, which generalizes the classical notion of a "linear representation" for formal power series. As in the classical case, the representable observables coincide with an abstractly defined class of "rational" observables; this fact forms the foundation of our method.
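The expected-completion-time quantity at the heart of the method can be illustrated, for a single absorbing Markov chain rather than a network of PIOAs, by solving (I - Q)x = 1 exactly over the rationals. The Python sketch below (not the paper's compositional algorithm; all names are invented) uses Fractions to mirror the symbolic, exact arithmetic.

```python
from fractions import Fraction

def expected_completion_times(Q):
    """Solve (I - Q) x = 1 exactly, where Q is the transient-to-
    transient transition matrix of an absorbing Markov chain.
    Gaussian elimination over Fractions keeps every entry exact."""
    n = len(Q)
    # Build the augmented matrix [I - Q | 1].
    A = [[(Fraction(1) if i == j else Fraction(0)) - Fraction(Q[i][j])
          for j in range(n)] + [Fraction(1)] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        A[col] = [x / A[col][col] for x in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][n] for i in range(n)]

# Two transient states that pass work to each other with probability
# 1/2 and otherwise complete: expected completion time is 2 from each.
Q = [[Fraction(0), Fraction(1, 2)],
     [Fraction(1, 2), Fraction(0)]]
# expected_completion_times(Q) == [Fraction(2), Fraction(2)]
```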


S.A. Smolka, B. Cui, Y. Dong, X. Du, K. Narayan Kumar, C.R. Ramakrishnan, I.V. Ramakrishnan, A. Roychoudhury, and D.S. Warren. Logic Programming and Model Checking (Invited Paper), Proceedings of PLILP/ALP'98, Pisa, Italy. Lecture Notes in Computer Science, Vol. 1490, Springer-Verlag (Sept. 1998).

We report on the current status of the LMC project, which seeks to deploy the latest developments in logic-programming technology to advance the state of the art of system specification and verification. In particular, the XMC model checker for value-passing CCS and the modal mu-calculus is discussed, as well as the XSB tabled logic programming system, on which XMC is based. Additionally, several ongoing efforts aimed at extending the LMC approach beyond traditional finite-state model checking are considered, including compositional model checking, the use of explicit induction techniques to model check parameterized systems, and the model checking of real-time systems. Finally, after a brief conclusion, future research directions are identified.


A. Roychoudhury, C.R. Ramakrishnan, I.V. Ramakrishnan, and S.A. Smolka, Tabulation-based Induction Proofs with Application to Automated Verification, Proceedings of the International Workshop on Tabulation in Parsing and Deduction (TAPD'98), Paris, France (April 1998).


A. Philippou, O. Sokolsky, I. Lee, R. Cleaveland, and S.A. Smolka, Probabilistic Resource Failure in Real-Time Process Algebra, Proceedings of the Ninth International Conference on Concurrency Theory (CONCUR '98), Nice, France, Lecture Notes in Computer Science, Springer-Verlag (Aug. 1998).

PACSR, a probabilistic extension of the real-time process algebra ACSR, is presented. The extension is built upon a novel treatment of the notion of a resource. In ACSR, resources are used to model contention in accessing physical devices. Here, resources are invested with the ability to fail and are associated with a probability of failure. The resulting formalism allows one to perform probabilistic analysis of real-time system specifications in the presence of resource failures. A probabilistic variant of Hennessy-Milner logic with until is presented. The logic features an until operator which is parameterized by both a probabilistic constraint and a regular expression over observable actions. This style of parameterization allows the application of probabilistic constraints to complex execution fragments. A model-checking algorithm for the proposed logic is also given. Finally, PACSR and the logic are illustrated with a telecommunications example.


X. Du, K.T. McDonnell, E. Nanos, Y.S. Ramakrishna, and S.A. Smolka, Software Design, Specification, and Verification: Lessons Learned from the Rether Case Study, Proceedings of the Sixth International Conference on Algebraic Methodology and Software Technology (AMAST'97), Sydney, Australia, Lecture Notes in Computer Science, Springer-Verlag (Dec. 1997).

Rether is a software-based real-time ethernet protocol developed at SUNY Stony Brook. The purpose of this protocol is to provide guaranteed bandwidth and deterministic, periodic network access to multimedia applications over commodity ethernet hardware. It has been implemented in the FreeBSD 2.1.0 operating system, and is now being used to support the Stony Brook Video Server (SBVS), a low-cost, ethernet LAN-based server providing real-time delivery of video to end-users from the server's disk subsystem.

Using local model checking, as provided by the Concurrency Factory specification and verification environment, we showed (for a particular network configuration) that Rether indeed makes good on its bandwidth guarantees to real-time nodes without exposing non-real-time nodes to the possibility of starvation. In the course of specifying and verifying Rether, we identified an alternative design of the protocol that warranted further study due to potential efficiency gains. Again using model checking, we showed that this alternative design also possesses the properties of interest.


Y.S. Ramakrishna and S.A. Smolka, Partial-Order Reduction in the Weak Modal Mu-Calculus, Proceedings of the Eighth International Conference on Concurrency Theory (CONCUR '97), Warsaw, Poland, Lecture Notes in Computer Science, Springer-Verlag (July 1997).

We present a partial-order reduction technique for local model checking of hierarchical networks of labeled transition systems in the weak modal mu-calculus. We have implemented our technique in the Concurrency Factory specification and verification environment; experimental results show that partial-order reduction can be highly effective in combatting state explosion in modal mu-calculus model checking.


Y.S. Ramakrishna, C.R. Ramakrishnan, I.V. Ramakrishnan, S.A. Smolka, T. Swift, and D.S. Warren, Efficient Model Checking Using Tabled Resolution, Proceedings of the Ninth International Conference on Computer Aided Verification (CAV '97), Haifa, Israel, Lecture Notes in Computer Science, Vol. 1243, Springer-Verlag (July 1997).

XSB is a logic programming system developed at SUNY Stony Brook that extends Prolog-style SLD resolution with tabled resolution. The principal merits of this extension are that XSB terminates on programs having finite models, avoids redundant subcomputations, and computes the well-founded model of normal logic programs.

We demonstrate the feasibility of using the XSB tabled logic programming system as a programmable fixed-point engine for implementing efficient local model checkers. In particular, we present XMC, an XSB-based local model checker for a CCS-like value-passing language and the alternation-free fragment of the modal mu-calculus. XMC is written in under 200 lines of XSB code, which constitute a declarative specification of CCS and the modal mu-calculus at the level of semantic equations.

In order to gauge the performance of XMC as an algorithmic model checker, we conducted a series of benchmarking experiments designed to compare the performance of XMC with the local model checkers implemented in C/C++ in the Concurrency Factory and SPIN specification and verification environments. After applying certain newly developed logic-programming-based optimizations (along with some standard ones), XMC's performance became extremely competitive with that of the Factory and showed promise in comparison with SPIN.
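The tabling idea underlying XMC can be conveyed with a toy least-fixed-point check: the table lets an in-progress call answer "no new answer yet", which is exactly the least-fixed-point reading of a cycle. Below is a minimal Python analogue (illustrative only; XMC itself is written in XSB Prolog, and these names are invented) for the simplest least-fixed-point property, reachability of a goal proposition.

```python
def models_ef(lts, props, state, goal, _visiting=None):
    """Check state |= mu X. goal \\/ <->X, i.e., 'goal is reachable'.
    The _visiting set plays the role of XSB's call table: revisiting
    an in-progress call contributes nothing to a least fixed point,
    so it answers False, and cycles in the LTS terminate."""
    if _visiting is None:
        _visiting = set()
    if goal in props.get(state, set()):
        return True
    if state in _visiting:
        return False                 # tabled call: no new answer
    _visiting.add(state)
    return any(models_ef(lts, props, s2, goal, _visiting)
               for s2 in lts.get(state, []))

# Toy LTS with a cycle 0 -> 1 -> 2 -> 0; proposition 'done' at state 2.
lts = {0: [1], 1: [2], 2: [0]}
props = {2: {"done"}}
# models_ef(lts, props, 0, "done") is True despite the cycle
```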


R. Cleaveland, I. Lee, P.M. Lewis, and S.A. Smolka, A Theory of Testing for Soft Real-Time Processes, Proceedings of the Eighth International Conference on Software Engineering and Knowledge Engineering, Lake Tahoe, Nevada, pp. 474-479. Published by Knowledge Systems Institute, Skokie, Illinois, ISBN 0-9641699-3-2 (May 1996).

We present a semantic framework for soft real-time processes, i.e., processes that meet their deadlines most of the time. The key components of our framework are: (1) a specification language PDP (for Probabilistic and Discrete-time Processes) that incorporates both probabilistic and timing aspects of process behavior; (2) a formal operational semantics for PDP given as a recursively defined probability distribution function over process terms and atomic actions; and (3) a natural notion of a process passing a test with a certain probability, where a test is a process with the added capability of reporting success. By encoding deadlines as tests, the probability of a process passing a test may now be interpreted as the probability of the process meeting a deadline, thereby capturing the essence of soft real-time. A simple video frame transmission example illustrates our approach.
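The "probability of a process passing a deadline test" can be made concrete with the frame-transmission flavor of the example above. Under an assumed, purely illustrative model (not from the paper) where each slot's transmission succeeds independently with probability p, the deadline is met unless every slot fails:

```python
def prob_meets_deadline(p_success, deadline_slots):
    """Probability that a frame arrives within 'deadline_slots'
    attempts, each an independent Bernoulli trial: a stand-in for
    the probability of passing a deadline-encoding test."""
    return 1 - (1 - p_success) ** deadline_slots

# A frame that succeeds each slot with probability 0.8 and must
# arrive within 3 slots meets its deadline with probability
# 1 - 0.2**3 = 0.992.
```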


R. Cleaveland, P.M. Lewis, S.A. Smolka, and O. Sokolsky, The Concurrency Factory Software Development Environment. In T. Margaria and B. Steffen, editors, Tools and Algorithms for the Construction and Analysis of Systems (TACAS '96), volume 1055 of Lecture Notes in Computer Science, pages 391--395, Passau, Germany, March 1996. Springer-Verlag.

The Concurrency Factory is an integrated toolset for specification, simulation, verification, and implementation of concurrent systems such as communication protocols and process control systems. Two themes central to the project are the following: the use of process algebra, e.g., CCS, ACP, CSP, as the underlying formal model of computation, and the provision of practical support for process algebra. By "practical" we mean that the Factory should be usable by protocol engineers and software developers who are not necessarily familiar with formal verification, and it should be usable on problems of real-life scale, such as those found in the telecommunications industry.

The demo illustrates the following features of the Concurrency Factory: the graphical user interfaces VTView and VTSim, the local model checker for the alternation-free modal mu-calculus, and the graphical compiler that transforms VTView specifications into executable code.


R. Cleaveland, P.M. Lewis, S.A. Smolka, and O. Sokolsky, The Concurrency Factory: A Development Environment for Concurrent Systems. In R. Alur and T. Henzinger, editors, Computer-Aided Verification (CAV '96), volume 1102 of Lecture Notes in Computer Science, pages 398--401, New Brunswick, NJ, July 1996. Springer-Verlag.

The Concurrency Factory supports the specification, simulation, verification, and implementation of real-time concurrent systems such as communication protocols and process control systems. While the system uses process algebra as its underlying design formalism, the primary focus of the project is practical utility: the tools should be usable by engineers who are not familiar with formal models of concurrency, and they should be capable of handling large-scale systems such as those found in the telecommunications industry. This paper serves as a status report for the Factory project and briefly describes a case-study involving the GNU UUCP i-protocol.


O. Sokolsky and S.A. Smolka, Local Model Checking for Real-Time Systems. Computer-Aided Verification '95, American Mathematical Society (July 1995).

We present a local algorithm for model checking in a real-time extension of the modal mu-calculus. Because the algorithm is local, the whole state space of the real-time system under investigation need not be explored; only the portion necessary to determine the truth of the logical formula is examined. To the best of our knowledge, this is the first local algorithm for the verification of real-time systems to appear in the literature.

Like most algorithms dealing with real-time systems, we work with a finite quotient of the inherently infinite state space. For maximal efficiency, we obtain, on-the-fly, a quotient that is as coarse as possible in the following sense: refinements of the quotient are carried out only when necessary to satisfy clock constraints appearing in the logical formula or timed automaton used to represent the system under investigation. In this sense, our data structures are optimal with respect to the given formula and automaton.


S. Zhang, S.A. Smolka, and O. Sokolsky, On the Parallel Complexity of Model Checking in the Modal Mu-Calculus. Proceedings of the Ninth Annual IEEE Symposium on Logic in Computer Science, pp. 154-163, London, England (July 1994).

The modal mu-calculus is an expressive logic that can be used to specify safety and liveness properties of concurrent systems represented as labeled transition systems (LTSs). We show that Model Checking in the Alternation-Free Modal Mu-Calculus (MCAFMMC) -- the problem of checking whether an LTS is a model of a formula of the alternation-free fragment of the propositional modal mu-calculus -- is P-complete even for a very restrictive version of the problem. In particular, MCAFMMC is P-complete even if the formula is fixed and the LTS is deterministic, acyclic, and has fan-in and fan-out bounded by 2. The reduction used is from a restricted version of the circuit value problem (Does a circuit alpha output a 1 on inputs x_1, ..., x_n?) known as Synchronous Alternating Monotone Fanout 2 Circuit Value Problem. Interestingly, the reduction is not valid for the logic CTL, a logic subsumed in expressive power by the alternation-free modal mu-calculus, thereby leaving open the parallel complexity of model checking in CTL.

Our P-completeness result is tight in the sense that placing any further non-trivial restrictions on either the formula or the LTS results in membership in NC for MCAFMMC. In particular, we exhibit efficient NC-algorithms for two potentially useful versions of the problem, both of which involve formulas containing a constant number of fixed point operators: 1) the LTS is a finite tree with bounded fan-out; and 2) the formula is additionally OR-free and the LTS is deterministic and over an action alphabet of bounded size.

In the course of deriving the algorithm for 2), we give a parallel constant-time reduction from the alternation-free modal mu-calculus to Datalog with negation. We also provide a polynomial-time reduction in the other direction thereby establishing a kind of equivalence result regarding the expressive power of the two formalisms.
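The kind of fixed-point evaluation underlying mu-calculus model checking can be sketched in a few lines (a hedged toy example, not the paper's reduction): evaluating the least-fixpoint formula mu X. P \/ <a>X ("some a-path reaches a P-state") over an explicit LTS by Knaster-Tarski iteration from the empty set.

```python
# Minimal sketch of least-fixpoint model checking over an LTS given as a
# set of (source, action, target) triples (hypothetical example).

def diamond(lts, action, states):
    """States with an `action`-transition into `states` (the <a> modality)."""
    return {s for (s, a, t) in lts if a == action and t in states}

def lfp_eventually(lts, action, p_states):
    """Least fixpoint of  X := p_states | <action> X  by iteration."""
    x = set()
    while True:
        nxt = p_states | diamond(lts, action, x)
        if nxt == x:
            return x
        x = nxt

lts = {(0, "a", 1), (1, "a", 2), (3, "b", 0)}
print(sorted(lfp_eventually(lts, "a", {2})))  # [0, 1, 2]
```

The iteration converges in at most |S| rounds; state 3 is excluded because its only transition carries the wrong action, reflecting how the diamond modality is action-indexed.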


R. Cleaveland, J.N. Gada, P.M. Lewis, S.A. Smolka, O. Sokolsky, and S. Zhang, The Concurrency Factory -- Practical Tools for Specification, Simulation, Verification, and Implementation of Concurrent Systems. Proceedings of DIMACS Workshop on Specification of Parallel Algorithms, American Mathematical Society, Princeton, NJ (May 1994).


A. Uselton and S.A. Smolka, A Compositional Semantics for Statecharts using Labeled Transition Systems. Proceedings of CONCUR '94 -- Fifth International Conference on Concurrency Theory, Uppsala, Sweden, Springer-Verlag Lecture Notes in Computer Science (Aug. 1994).


Articles in Books

M. Kifer and S.A. Smolka, "OSP: An Environment for Operating Systems Projects." Appears as Appendix D in Operating Systems - Internals and Design Principles, W. Stallings, pp. 723-730, Prentice-Hall (2001).


E.W. Stark and S.A. Smolka, "A Complete Axiom System for Finite-State Probabilistic Processes," in Proof, Language and Interaction: Essays in Honour of Robin Milner (G. Plotkin, C.P. Stirling, and M. Tofte, eds.), MIT Press (2000).

A complete equational axiomatization of probabilistic bisimulation for finite-state probabilistic processes is presented. It extends Milner's complete axiomatization of regular behaviors, which appeared in Volume 28 of the Journal of Computer and System Sciences (1984).


W.R. Cleaveland and S.A. Smolka, "Process Algebra," in Encyclopedia of Electrical Engineering (J.G. Webster, editor), John Wiley & Sons (1999).

Process algebra represents a mathematically rigorous framework for modeling concurrent systems of interacting processes. The process-algebraic approach relies on equational and inequational reasoning as the basis for analyzing the behavior of such systems. This chapter surveys some of the key results obtained in the area within the setting of a particular process-algebraic notation, the Calculus of Communicating Systems (CCS) of Milner. In particular, the Structural Operational Semantics approach to defining operational behavior of languages is illustrated via CCS, and several operational equivalences and refinement orderings are discussed. Mechanisms are presented for deducing that systems are related by the equivalence relations and refinement orderings, and different process-algebraic modeling formalisms are briefly surveyed.
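One of the equivalences surveyed, strong bisimilarity, can be decided on finite LTSs by partition refinement. The following is a naive sketch under assumed inputs (not the chapter's own presentation): states are split until each block is stable under the transition relation.

```python
# Naive partition refinement for strong bisimilarity on a finite LTS,
# with transitions given as (source, action, target) triples
# (illustrative sketch; production tools use Paige-Tarjan-style algorithms).

def bisimulation_classes(states, trans):
    """Coarsest partition of `states` whose blocks are bisimulation classes."""
    part = [set(states)]
    changed = True
    while changed:
        changed = False

        def signature(s):
            # Which (action, block-index) pairs s can step into,
            # relative to the current partition.
            return frozenset((a, i) for (src, a, t) in trans if src == s
                             for i, blk in enumerate(part) if t in blk)

        new_part = []
        for blk in part:
            groups = {}
            for s in blk:
                groups.setdefault(signature(s), set()).add(s)
            new_part.extend(groups.values())
        if len(new_part) != len(part):
            changed = True
        part = new_part
    return part

# p, q, r each offer action "a" forever (directly or via each other),
# so all three states fall into a single bisimulation class.
classes = bisimulation_classes(
    {"p", "q", "r"},
    {("p", "a", "p"), ("q", "a", "r"), ("r", "a", "q")},
)
print(len(classes))  # 1
```

Adding a deadlocked state with no transitions would split the partition in one refinement step, since its signature is empty.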


A. Giacalone and S.A. Smolka, "Integrated Environments for Formally Based Design and Simulation of Concurrent Systems: A Non-Procedural Approach," in Visual Programming Environments (E.P. Glinert, ed.), IEEE Computer Society Press (1990).


P. Wegner and S.A. Smolka, "Processes, Tasks, and Monitors: A Comparative Study of Concurrent Programming Primitives," in Tutorial: Distributed Software Engineering, S.M. Shatz and J.-P. Wang, eds., IEEE Computer Society Press, pp. 148-164 (1989).


P. Wegner and S.A. Smolka, "Processes, Tasks, and Monitors: A Comparative Study of Concurrent Programming Primitives," in Concurrent Programming (N. Gehani and A. McGettrick, eds.), Addison-Wesley (1987).


S.A. Smolka and P.C. Kanellakis, "Recent Results in the Static Analysis of Communicating Finite State Processes," in Current Advances in Distributed Computing and Communications, Computer Science Press, Rockville, Maryland, pp. 208-233 (1986).


P. Wegner and S.A. Smolka, "Processes, Tasks, and Monitors: A Comparative Study of Concurrent Programming Primitives," in Programming Languages: A Grand Tour, 3rd edition (Ellis Horowitz, ed.), Computer Science Press (1986).

