
Research Interests
Foundations of Data Mining, Data Mining and Knowledge Discovery, Biology and Bioinformatics,
Applications of Automated Theorem Provers, Inductive Learning Systems,
Rough Sets, Generalized Fuzzy and Rough Sets,
Dealing with Uncertain Information, Automated Theorem Proving,
Formal Languages, General Models for Computing Machines,
Automated Theorem Proving in Non-Classical Logics, Experimental Education.

Research Grants
2007-2009
NCIIA (National Collegiate Inventors and Innovators Alliance) Sustainable Vision Grant #4891-07, "Sustainable Technology-based Entrepreneurship for the Senegalese Market", co-PI with Prof. Christelle Scharff (Pace University), $47,198.00.

1993-94
Fulbright Scholar to Poland for the academic year 1993/1994, grant No. 93-68818

1994
Franco-American Commission for Educational Exchange Grant, March-April 1994

1990
Modern Foundation of Computer Science, Undergraduate Faculty Enhancement, NSF 88-33, Co-PI with Dr. P. Henderson, Computer Science Department, SUNY at Stony Brook and Dr. D. L. Fergusson, Technology and Society Department, SUNY at Stony Brook

1984-95
Faculty Summer Research Grants, Lafayette College, PA

1982
FIPSE (Fund for the Improvement of Post-Secondary Education) grant for the development of a Senior Colloquium "Technology, Culture and Values: Soviet Model" at Lafayette College (1982), one of four recipients.

1980
Wesleyan University Research Grant (1980)

1971-72
Belgian Ministry of Higher Education Grant (1971-72) to study Experimental Education Methods with Professor F. Papy, Université Libre de Bruxelles, Belgium.

1969-1980
Various grants from the Bertrand Russell Foundation and the Association for Symbolic Logic covering the total costs of all conferences and summer schools I participated in each year during 1969-1980.

Description of Research Areas

  1. Automated theorem proving in non-classical logics (Cut -free Gentzen type formalizations)
    Papers # (1) - (4), (8), and (12).

    In 1934 G. Gentzen defined formalizations for classical and intuitionistic logic with a very specific property: they allow for the completely automatic generation of a proof of any formula of the given logic. The completeness proofs for his systems were carried out through the so-called cut-elimination theorem (Hauptsatz), and formalizations which allow automatic generation of proofs from the formula itself are hence called cut-free Gentzen type formalizations. The basic feature of a Gentzen system was the use of sequents, i.e. of expressions of the form Γ → Δ, where Γ and Δ are sequences of formulas, instead of plain formulas.
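
    For illustration only, in standard notation not tied to any particular paper cited above, a sequent and a typical cut-free rule can be written as follows; read bottom-up, such a rule decomposes the formula to be proved without ever appealing to the cut rule.

      % A sequent: Gamma and Delta are finite sequences of formulas;
      % it is read: the conjunction of Gamma entails the disjunction of Delta.
      \Gamma \rightarrow \Delta
      % A typical cut-free rule (introduction of conjunction on the right):
      \frac{\Gamma \rightarrow \Delta, A \qquad \Gamma \rightarrow \Delta, B}
           {\Gamma \rightarrow \Delta, A \wedge B}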

    In 1950 Rasiowa and Sikorski presented a new cut-free Gentzen type formalization for classical first order logic together with a constructive proof (the construction of a counter-model on a decomposition tree) of the completeness theorem. It dealt with sequences of formulas instead of Gentzen sequents. The advantage of this formalization over Gentzen's original one (and the difficulty, if one wants to repeat the technique in the case of non-classical logics) is that it produces, for a given formula, its conjunctive normal form directly in clause-list form.
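
    As a rough illustration of the decomposition idea, the toy sketch below (my own encoding, not the formalization of the papers discussed here) decomposes a sequence of propositional formulas, read as a disjunction; the leaves of the decomposition tree are exactly the clauses of a conjunctive normal form of the input, and the formula is provable precisely when every leaf contains a complementary pair of literals.

      # A toy Rasiowa-Sikorski style decomposition (illustrative encoding only).
      # Formulas are nested tuples, e.g. ('imp', ('var', 'p'), ('var', 'q')).
      # A "sequence" is a list of formulas read as a disjunction.

      def is_literal(f):
          return f[0] == 'var' or (f[0] == 'not' and f[1][0] == 'var')

      def decompose(seq):
          """Return the leaves (lists of literals) of the decomposition tree of seq."""
          for i, f in enumerate(seq):
              if is_literal(f):
                  continue
              op = f[0]
              if op == 'or':                   # A v B      ->  A, B          (one child)
                  return decompose(seq[:i] + [f[1], f[2]] + seq[i + 1:])
              if op == 'imp':                  # A -> B     ->  ~A, B         (one child)
                  return decompose(seq[:i] + [('not', f[1]), f[2]] + seq[i + 1:])
              if op == 'and':                  # A & B      ->  two children
                  return (decompose(seq[:i] + [f[1]] + seq[i + 1:]) +
                          decompose(seq[:i] + [f[2]] + seq[i + 1:]))
              g = f[1]                         # f is a negation of a compound formula
              if g[0] == 'not':                # ~~A        ->  A
                  return decompose(seq[:i] + [g[1]] + seq[i + 1:])
              if g[0] == 'and':                # ~(A & B)   ->  ~A, ~B        (one child)
                  return decompose(seq[:i] + [('not', g[1]), ('not', g[2])] + seq[i + 1:])
              if g[0] == 'or':                 # ~(A v B)   ->  two children
                  return (decompose(seq[:i] + [('not', g[1])] + seq[i + 1:]) +
                          decompose(seq[:i] + [('not', g[2])] + seq[i + 1:]))
              if g[0] == 'imp':                # ~(A -> B)  ->  two children
                  return (decompose(seq[:i] + [g[1]] + seq[i + 1:]) +
                          decompose(seq[:i] + [('not', g[2])] + seq[i + 1:]))
          return [seq]                         # only literals left: a leaf clause

      def closed(clause):
          """A leaf is closed if it contains some p together with ~p."""
          pos = {f[1] for f in clause if f[0] == 'var'}
          neg = {f[1][1] for f in clause if f[0] == 'not'}
          return bool(pos & neg)

      # Example: p -> (q -> p). Its single leaf clause is ~p, ~q, p, which is closed,
      # so the formula is a tautology; the leaves are also a CNF of the formula.
      formula = ('imp', ('var', 'p'), ('imp', ('var', 'q'), ('var', 'p')))
      leaves = decompose([formula])
      print(leaves)
      print(all(closed(c) for c in leaves))    # True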

    • In 1950 Rasiowa stated the open problem of providing a cut-free sequence type formalization for the modal propositional calculus S4. I solved this open problem in 1967, and it appeared in print as my first published result (1). The paper also contained an algebraic proof of the completeness theorem, followed by a classical Gentzen cut-elimination theorem. The first modal logic theorem prover was implemented in LISP in collaboration with B. Waligorski. At that time we did not consider it worth a separate publication, and it is only mentioned in (1).
    • In (4) I gave the same kind of sequence formalization for SCI logic (Michaels, 1974), with a constructive proof of its completeness theorem.

    • In (8) I used my version of a sequence formalization for the Post m-valued logic to give a constructive proof of the Craig interpolation lemma for it. An algebraic proof had been given previously by Rasiowa in 1972.

    • In (12) I defined certain classes of cut-free logics whose decidability can be established through the methods of alternative trees and decomposition diagrams, and gave a general algorithm for finding all non-alternative trees of the alternative tree determined by the decomposition diagram.

  2. General models for computing machines
    Papers # (6) and (7).

    These publications contain some of the results that were part of my Ph.D. thesis.

    Generalized computing machines were introduced by Pawlak in 1959 as an algebraic model for computers and were intensively investigated by many authors in the early seventies.

    I used the notion of a deductive system of a computing machine as introduced by Ras in 1971 and, among other things, presented the following.

    • A proof that the proofs of any formalized theory based on an enumerable language can be treated as computations of a certain machine.

    • Definitions of the classes of machines for which the proof searching procedures are effective and for which they are only partially effective.

    • A definition of a class of so-called theorem proving systems and proofs, by the use of proper machines, of the decidability theorems for them.

    • Another proof of the decidability theorems for many non-classical logics and some formal languages, obtained as a corollary.

  3. Formal Languages and Automated theorem proving via Gentzen type formalizations
    Papers # (5), (9), (11) and (30).

    In the early sixties A. Ehrenfeucht conjectured the existence of a certain relationship between formal languages and Gentzen type formalizations. The conjecture was proved by Ras in 1971 and 1972 to be true in the case of a sequence type (Rasiowa, 1950) formalization for classical logic.

    In order to give a full answer to the conjecture I started work on establishing a more general relationship that would cover all possible Gentzen type formalizations, and not only those for a particular logic. As a consequence, I developed a general theory of what could be called Gentzen type formalizations. I defined two classes of relational systems which I called Gentzen type theorem proving systems. The restrictions put on them came from known Gentzen type formalizations and implied a sub-formula property. The two classes corresponded to Gentzen type formalizations for arbitrary propositional and predicate logics, respectively. We assumed that the formalizations were based on an enumerable language and had neither cut nor infinitistic rules of inference. The main results obtained were the following.

    • Each element A of the universe of any system in the propositional class generates a context-free grammar whose language is non-empty if and only if A is a theorem of the system.
    • Each element A of the universe of any system in the predicate class generates a sequence of context-free grammars such that A is a theorem if and only if some grammar in this sequence has a non-empty language.
    • Decidability results are then obtained from the decidability of the emptiness problem for context-free grammars (a sketch of this check appears below).

    These results were later used in (14) as part of a proof of a relationship between programs and certain classes of logics.
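
    The decidability corollary mentioned above rests on the standard fact that emptiness of a context-free language is decidable. The sketch below, with a purely illustrative toy grammar, computes the productive nonterminals by a fixpoint; the language is non-empty exactly when the start symbol is productive.

      # Emptiness test for context-free grammars (illustrative sketch; grammar is invented).
      # productions: dict mapping a nonterminal to a list of right-hand sides (tuples of symbols).
      # A symbol is a nonterminal iff it is a key of `productions`; otherwise it is terminal.

      def language_nonempty(productions, start):
          productive = set()
          changed = True
          while changed:
              changed = False
              for nt, rhss in productions.items():
                  if nt in productive:
                      continue
                  for rhs in rhss:
                      # nt is productive if some right-hand side consists only of terminals
                      # and already-productive nonterminals.
                      if all(sym in productive or sym not in productions for sym in rhs):
                          productive.add(nt)
                          changed = True
                          break
          return start in productive

      # S -> a S b | T and T -> T T generate nothing: neither S nor T can terminate.
      grammar = {'S': [('a', 'S', 'b'), ('T',)], 'T': [('T', 'T')]}
      print(language_nonempty(grammar, 'S'))   # False
      grammar['T'].append(('c',))              # adding T -> c makes T, and hence S, productive
      print(language_nonempty(grammar, 'S'))   # True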

  4. Algebraic Theory of Programs and Cut-Free Gentzen formalizations
    Papers # (10), (13) - (16), (34), and (35).

    There are two main approaches to describing the properties of programs: one is through so-called programming logics or logics of programs, initiated by Dana Scott. The second is called the algebraic theory of programs and arises from early works by de Bakker (1971) linking the theory of programs with mathematical linguistics and automata theory. In 1971-77 Mazurkiewicz and Blikle invented and examined the notions of Push Down (PD) and Finite Control (FC) algorithms. PD and FC algorithms were defined and investigated as algebraic models of programs. It was also proved that the properties of FC-algorithms and PD-algorithms are closely connected with the properties of regular and context-free languages, respectively, which fully justifies their names.

    • In (10) and (13) I defined two classes of theorem proving systems, which were refinements of the classes defined previously in (9), and proved that the automatic proof searching procedures for the systems from the first class can be described by FC-algorithms, while a PD-algorithm is needed in the case of the systems from the second class.

    • The proofs of the existence of proper algorithms in (10) and (13) were not direct; they used the Mazurkiewicz and Blikle results about formal languages and algorithms and my results presented in (9). In (14) a direct construction of the proper algorithms was presented. This, together with the Mazurkiewicz results, established that the difference between programs with recursive procedures (PD-algorithms) and without recursive procedures (FC-algorithms) is of the same kind as the difference between cut-free Gentzen type formalizations of predicate and propositional logics.

    • In (15) I introduced the notion of monadic second order definability and used the results of Büchi and Ladner (1960, 1977) to prove that it provides a common characterization for finite automata, iterative programs and cut-free propositional logics.
    The notions of PD and FC algorithms were proved to describe, among other things, some nondeterministic computational properties which cannot be described by means of Algol-like programs. This led to the definition of deterministic push down (DPD) and deterministic finite control (DFC) algorithms, which were proved to be similar to the De Bakker-Scott (1971) schemes without identity, to Janicki's schemes (1980), and to the simple languages introduced and investigated by Korenjak and Hopcroft in 1967.

    • In (30), which was an invited article for a special volume of Studia Logica dedicated to the memory of R. Suszko, I used DFC-algorithms to give decision procedures for Suszko's SCI logic.

    • In (16) I extended the methods used in (30) to the more general framework of arbitrary one-to-one Gentzen type theorem proving systems.

  5. Dealing with Uncertain Information
    Papers #(17) - (24), (31) and (36) - (40).

    The idea of rough sets is a mathematical concept meant to be used in reasoning about uncertain or vague knowledge. It was introduced by Z. Pawlak in 1982 and has been intensively developed by many authors since then. It was shown by Pawlak in 1985 that the concept of a rough set is different from that of a fuzzy set. However, it was demonstrated by Wong and Ziarko in 1987 that the generalized notion of rough sets based on a probabilistic approximation space can be conveniently described by the concept of fuzzy sets. This generalized model is called the probabilistic rough set model, as opposed to the original deterministic model. Further relationships between these two concepts were extensively studied by Dubois and Prade in 1987-1988. (A short sketch of the basic rough set notions is given at the end of this item.)

    The main advantage of rough set theory seems to be the fact that it does not need any preliminary or additional information about the data, and consequently it is most useful when statistical methods fail. It has been successfully implemented in knowledge based systems in medicine, industry, and engineering design. Wong and Ziarko in 1986 presented an inductive learning algorithm and compared it favorably to Quinlan's learning system ID3 and Michalski's system AQ11. The probabilistic rough set model has also been proved to be a useful mathematical tool for dealing with the problem of generating decision rules from inconsistent training examples, and with database design. The problem of teaching a student by several imperfect teachers has also been handled within the probabilistic rough set framework. The authors (Ras, Zemankova, 1986) deal with the case when the teacher and the student understand each other only partially, and then define a formal system which provides the basis for writing procedures that generate the rules of a knowledge base.

    Rough set approaches usually use purely semantical methods. In my research I focused mainly on purely syntactical methods and the problems which can be solved by them. I established, among others, the following.

    • Defined a general automatic method of generating from a given term its equivalent normal form.

    • Proved that this normal form can be obtained by the use of a certain finite control algorithm.

    • Constructed a deterministic finite control algorithm which produces not only all definable sets but also all of their elementary components.

    • Extended the methods defined above to a general syntactic procedure of verifying the correctness of decision rules and decision algorithms.

    • Showed how to apply this syntactic procedure to the problem of verifying whether the system (student) can learn a classification provided by an expert. (This is called static learning).

    • Described the procedure for static learning in terms of a DFC algorithm.
    • Generalized the concept of static learning to the framework of m-valued logic and gave a deterministic decision procedure for checking whether a student has learned the classification made by an expert according to a certain approximation given within the framework of an approximation space defined by a Post algebra of a given order (Rasiowa, 1986). This procedure reduces to the decision procedure for static learning when the dimension of the approximation space is two.

    • Introduced the notion of linguistic definability and proved that any definable set is linguistically definable and, in addition, that any concept definable in the sense introduced by Orlowska in 1986 is linguistically definable.

    • It was shown by Rauszer in 1988 that functional and multifunctional dependencies can be expressed in terms of an indiscernibility relation, and that there is a relationship between functional and multifunctional dependencies and formulas of proper logics. I used this fact to show that both functional and multifunctional dependencies are linguistically definable.
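
    For readers less familiar with the rough set vocabulary used in this and the following items, the sketch below shows the basic notions of indiscernibility classes and of the lower and upper approximations of a set; the toy attribute table and all names in it are my own illustration, not data from the papers.

      # Lower and upper approximations in the sense of Pawlak (toy data, illustrative only).
      # Objects with identical values on the chosen attributes are indiscernible; a set X is
      # approximated from below by the classes contained in X, from above by those meeting X.

      def indiscernibility_classes(table, attributes):
          classes = {}
          for obj, values in table.items():
              key = tuple(values[a] for a in attributes)
              classes.setdefault(key, set()).add(obj)
          return list(classes.values())

      def approximations(table, attributes, X):
          classes = indiscernibility_classes(table, attributes)
          lower = set().union(*(c for c in classes if c <= X))
          upper = set().union(*(c for c in classes if c & X))
          return lower, upper

      # Toy information system: four patients described by two symptoms.
      table = {
          1: {'fever': 'yes', 'cough': 'yes'},
          2: {'fever': 'yes', 'cough': 'yes'},
          3: {'fever': 'yes', 'cough': 'no'},
          4: {'fever': 'no',  'cough': 'no'},
      }
      X = {1, 3}                                   # e.g. the patients diagnosed as ill
      lower, upper = approximations(table, ['fever', 'cough'], X)
      print(lower)                                 # {3}: certainly in X on these attributes
      print(upper)                                 # {1, 2, 3}: possibly in X, so X is "rough"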

  6. Knowledge Discovery in Data Bases, Inductive Learning Systems, Approximate Classifications and Concept Learning
    Papers #(25) - (28), and (40).

    All papers, except (25), describe joint research with my Ph.D. student Michael Hadjimichael.

    The automated inference of association rules from databases is one of the main subjects of Knowledge Discovery in Databases. The first methods used were those from Machine Learning. The best known systems were Quinlan's ID3 (1983) and Michalski's AQ11 (1978). They used samples to generate rules, but both produced an unwieldy number of rules. In 1986-87 Wong and Ziarko proposed inductive learning algorithms with minimized attribute sets and probabilistic output rules, and compared them favorably to ID3 and AQ11.

    In (25) we generalize the above models by adding conditions to the system. We introduce the notion of a Conditional Knowledge Representation System (CNRS). The extension incorporates some of the ideas presented by Orlowska in 1986. The CNRS was the model for the implementation of a Conditional Decision Support System done by M. Maseda at SUNY Stony Brook.

    The conditions make the system interactive. They are user input which can reduce the size of the rule family, and in (40) we present a generalized algorithm (CPLA) which does so.

    In (26) and (40) we propose an algorithm, the Condition Suggestion Algorithm (CSA), which automatically generates syntactically motivated conditions.

    The interaction of CPLA and CSA and some theoretical investigations about the model are presented in (27), (28), and (40).

  7. Applications of systems for Knowledge Discovery by Inductive Learning
    Papers #(32), (47).

    The papers (joint research with my Ph.D. student Michael Hadjimichael) present an interactive probabilistic inductive learning system and its application to Knowledge Discovery in a set of real data. The data consist of a survey of voter preferences taken during the 1988 presidential election in the U.S.A. Results include an analysis of the predictive accuracy of the generated rules, and an analysis of the semantic content of the rules.

  8. Algebraic Characterization of Rough Sets
    Papers # (29), (42), (48), and (53).

    It is well known that sometimes a theory designated originally as a tool for the study of a practical problem generates a purely mathematical interest. In such cases the theory becomes generalized well beyond the point needed for applications, the generalizations make use of other theories, often in completely unexpected ways, and the subject becomes established as a new part of pure mathematics. Physics is not the only such external source of mathematical theories. Economics, biology and, recently, computer science are known to play a similar role. Another, somewhat surprising, addition is formal logic, and the branch of pure mathematics that it has brought to life is called algebraic logic. Algebraic logic starts from purely logical considerations, abstracts from them, places them into a general algebraic context, and makes use of other branches of mathematics such as topology, set theory, and functional analysis. It provides tools to discuss modern symbolic logic from a different perspective. It serves to clarify its problems and to establish their connection with ordinary mathematics.

    We use algebraic logic techniques to establish a relationship between rough set theory, logic, abstract algebras and topology. In particular, we show that the notion of rough equality of sets introduced by Pawlak in 1982 leads, via logic and the Lindenbaum-Tarski algebra, to the definition of a new class of algebras, called here topological quasi-Boolean algebras. Moreover, we define a new notion of a quasi-topological quasi-field of sets and use it to give a proof of a Representation Theorem for topological quasi-Boolean algebras. From this we immediately obtain a mathematical characterization of the rough algebras introduced by Banerjee in 1993.
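
    For reference, the approximations and the rough equality that this construction starts from can be stated in standard notation (R denotes the indiscernibility equivalence on the universe U):

      \underline{R}X = \{\, x \in U : [x]_R \subseteq X \,\}, \qquad
      \overline{R}X  = \{\, x \in U : [x]_R \cap X \neq \emptyset \,\},
      % Pawlak's rough equality: X and Y are roughly equal when both approximations agree.
      X \approx Y \;\Longleftrightarrow\; \underline{R}X = \underline{R}Y
                  \ \text{and}\ \overline{R}X = \overline{R}Y .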

  9. Applications of Automated Theorem Provers
    Papers #(29), (33), (42), (43), (46), (49), (52) and (56).

    This is joint research with two collaborators, J. Hsiang (National Taiwan University) and L. Vigneron (INRIA, the Institut National de Recherche en Informatique et en Automatique). It builds links between algebraic logic, algebraic methods in rough and fuzzy sets, and automated theorem proving.

    It also provides powerful new applications of automated theorem proving techniques and opens new ways of using theorem provers.

    In our work (#33) with J. Hsiang we present an effective proof theory that allows one to reason within the algebras of algebraic logic in a purely syntactic, algebraic fashion. We demonstrate the effectiveness of the method by discussing our automated proofs of problems and theorems taken from Professor Helena Rasiowa's book An Algebraic Approach to Non-Classical Logics, Studies in Logic and the Foundations of Mathematics, Volume 78, North-Holland Publishing Company, Amsterdam, London - PWN, Warsaw (1974). We include a detailed proof of a problem presented to us by Professor Helena Rasiowa in June 1993. Most of these proofs are the first direct proofs ever discovered, and they were produced by the computer without human assistance.

    In our papers with L. Vigneron we use a newly established relationship between rough sets and certain classes of rough algebras. We then use the theorem prover DATAC (recently developed by L. Vigneron at INRIA, Nancy, France) to discover new theorems in both modal and rough algebras and to examine the relationship between classes of algebras representing certain properties of rough sets.

    DATAC is a new kind of prover. It can prove properties either by refutation, as provers usually do, or by straightforward deduction from axioms. The possibility of generating, in a very sophisticated way, a straightforward deduction from axioms is the distinct and most interesting feature of the prover. In straightforward deduction the prover acts as a deductive system, i.e. the end product is a set of theorems with their formal proofs. In our research we use mainly the second technique, but we also use the first for comparing theorems across different classes of algebras. The role of the prover in finding and proving new theorems or properties of these algebras is hence essential in developing their consistent theory.

  10. LT-Fuzzy and Rough Sets
    Papers # (44) and (45).

    The concept of LT-fuzzy sets was introduced by Rasiowa and Cat Ho in 1992. LT-fuzzy sets are a modification of the L-fuzzy sets introduced by Goguen in 1967. We introduce here a notion of a generalized rough set and show that it can be considered as a particular case of an L-fuzzy set. We also generalize the notion of rough equality of sets, introduced by Pawlak in 1985, to a notion of topological equality of sets, and we prove that LT-fuzzy sets provide a common characterization for all of the considered concepts. We also introduce a notion of a generalized rough and fuzzy set and examine their relationship.
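
    For orientation, and as an illustration rather than the exact construction of (44) and (45): an L-fuzzy set in Goguen's sense is simply a lattice-valued membership function, and a rough set can be read as one over a three-element chain.

      % An L-fuzzy subset of U is a map into a lattice L (ordinary fuzzy sets: L = [0,1]):
      \mu : U \longrightarrow L .
      % A rough set read as a lattice-valued set over the chain L = \{0, 1/2, 1\}:
      \mu_X(x) =
      \begin{cases}
        1   & x \in \underline{R}X, \\
        1/2 & x \in \overline{R}X \setminus \underline{R}X, \\
        0   & x \notin \overline{R}X .
      \end{cases}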

  11. Data Mining and Knowledge Discovery
    Papers # (50), (51), and (54).

    The amount of electronic data available is growing at a startling rate, and this explosive growth in data and databases has generated a need for new techniques and tools that can intelligently and automatically extract implicit, previously unknown, hidden and potentially useful information and knowledge from these data. Such tools and techniques are the subject of the field of Knowledge Discovery in Databases (KDD).

    KDD and Data Mining have become promising topics in the database community in the past few years. Many data mining techniques are available to extract different kinds of knowledge from databases. In an application, the choice of technique depends on the types of questions the users are interested in. For instance, a manager in a supermarket may want to know the buying patterns of customers so that he can rearrange shelves in such a way as to minimize shopping time. This problem can be described in terms of finding association rules between sets of attribute-values in a data set. In my work with Ernestina Menasalvas (Madrid, Spain) and Michael Hadjimichael (Naval Research Laboratory, CA) we present the design and theoretical foundations of a system we are developing as a joint project between the Computer Science Department at Stony Brook and the Facultad de Informática at the Universidad Politécnica de Madrid. KDD-G (KDD-General) extends established techniques and is able to generate fewer and shorter association, generalization and classification rules. Another distinctive feature of KDD-G is that it allows interactivity in the data mining process.
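
    As a toy illustration of the association rules mentioned above (the transactions are invented, and this is only the standard support/confidence reading of such rules, not the KDD-G algorithm itself):

      # Support and confidence of an association rule over market-basket style data.
      # The transactions and the rule below are invented for illustration.

      def support(transactions, itemset):
          """Fraction of transactions containing every item of `itemset`."""
          itemset = set(itemset)
          return sum(itemset <= t for t in transactions) / len(transactions)

      def confidence(transactions, antecedent, consequent):
          """Estimated probability of the consequent given the antecedent."""
          return (support(transactions, set(antecedent) | set(consequent))
                  / support(transactions, antecedent))

      transactions = [
          {'bread', 'butter', 'milk'},
          {'bread', 'butter'},
          {'bread', 'jam'},
          {'butter', 'milk'},
      ]
      # Rule {bread, butter} -> {milk}
      print(support(transactions, {'bread', 'butter', 'milk'}))      # 0.25
      print(confidence(transactions, {'bread', 'butter'}, {'milk'})) # 0.5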

  12. Mathematical Foundations of Data Mining: General Models for Data Mining

    Papers #56-66, 69, book chapters #73-76 and a book (#77). Research conducted with Ernestina Menasalvas and Christelle Scharff.

    We usually view Data Mining results, and present them to the user, in their syntactic descriptive form, as it is the most natural form of communication. But the Data Mining process is deeply semantical in nature: the algorithms process records (semantics), finding similarities which are then presented in a descriptive, i.e. syntactic, form. We build and investigate a General Model for Data Mining that addresses the semantics-syntax duality inherent in any Data Mining process.

    Moreover, our General Model formalizes the definition of Data Mining as a process of information generalization. In the model, Data Mining algorithms are defined as generalization operators. We use our framework to show that only three generalization operators, a classification operator, a clustering operator, and an association operator, are needed to express all Data Mining algorithms for classification, clustering, and association, respectively. We also prove that classification, clustering and association analysis fall into three different generalization categories.

    Finally, we examine, as a particular case, a General Classification Model. In particular, we define the notion of truthfulness, or a degree of truthfulness, of the syntactic descriptions obtained by any classification algorithm, represented within the Model by a classification operator. We use our framework to prove that, for any classification operator (method, algorithm), the set of all discriminant rules that are fully true (partially true) forms semantically the lower approximation (approximate lower approximation) of the class they describe. The set of characteristic rules describes semantically its upper approximation.

    We also present a formal syntax and semantics for descriptive data mining.

    Springer Encyclopedia of Complexity and Systems Science (to be published in 2008): A Granular Model for Data Mining.

    We present here three abstract models: Descriptive, Semantic, and Granular. All of them are abstract structures that allow us to formalize some general properties of the Data Mining process and address the semantics-syntax duality inherent in any Data Mining process.

    Bioinformatics: Protein Secondary Structure Prediction

    Papers # 35, 68, research conducted with Victor Robles.

    Successful secondary structure predictions provide a starting point for direct tertiary structure modeling, and can also significantly improve sequence analysis and sequence-structure threading, aiding in structure and function determination. Hence, improving the predictive accuracy of secondary structure prediction is essential for the future development of the whole field of protein research. In this work we present seven multi-classifiers that combine the predictions of the best current classifiers available on the Internet. Our results show that combining the predictions of a set of classifiers by creating composite classifiers (hence the name multi-classifier) is a fruitful approach. We have created multi-classifiers that are more accurate than any of the component classifiers.
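
    A minimal sketch of the idea behind such composite classifiers, reduced to a per-residue majority vote (the component predictions below are invented; the multi-classifiers of the papers combine the component servers in more elaborate ways):

      # Per-residue majority vote over component secondary structure predictions
      # (H = helix, E = strand, C = coil). The inputs below are invented examples.
      from collections import Counter

      def majority_vote(predictions):
          """predictions: equal-length strings over {H, E, C}; returns the combined prediction."""
          assert len({len(p) for p in predictions}) == 1, "predictions must align residue-by-residue"
          combined = []
          for residues in zip(*predictions):
              label, _ = Counter(residues).most_common(1)[0]   # ties resolved arbitrarily
              combined.append(label)
          return ''.join(combined)

      component_predictions = [
          "CCHHHHECC",   # e.g. output of server 1
          "CCHHHHHCC",   # e.g. output of server 2
          "CCHHHEECC",   # e.g. output of server 3
      ]
      print(majority_vote(component_predictions))   # CCHHHHECC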

    Biology: Emergent Behavior in Networks of Cardiac Myocytes

    Paper # 67, research conducted with Radu Grosu, Ezio Bartocci, Flavio Corradini, Emilia Entcheva, Scott A. Smolka.

    We address the problem of specifying and detecting emergent behavior in networks of cardiac myocytes, spiral electric waves in particular, a precursor to atrial and ventricular fibrillation. To solve this problem we: (1) apply discrete mode-abstraction to the cycle-linear hybrid automata (CLHA) we have recently developed for modeling the behavior of myocyte networks; (2) introduce the new concept of spatial superposition of CLHA modes; (3) develop a new spatial logic, based on spatial superposition, for specifying emergent behavior; (4) devise a new method for learning the formulae of this logic from the spatial patterns under investigation; and (5) apply bounded model checking to detect (within milliseconds) the onset of spiral waves. We have implemented our methodology as the EMERALD tool-suite, a component of our CLHA framework for the specification, simulation, analysis and control of excitable hybrid automata. We illustrate the effectiveness of our approach by applying EMERALD to the scalar electrical fields produced by our simulator.

    A Model for Protein Secondary Structure Prediction Meta-Classifiers, in the Proceedings of the International Conference NAFIPS'08, New York, May 19-22, pp. 200-206.

    We present here a mathematical model for the Protein Secondary Structure Prediction (PSSP) problem and its research. It also represents an effort to build uniform foundations for PSSP research. The model, and hence the paper, is designed to facilitate and speed up understanding of the long-standing PSSP research and its problems, also for people who want to get involved in it. We present an abstract definition of a protein and its structures and discuss the Protein Data Bank and other proteomic databases, as well as three generations of PSSP algorithms and servers (all of them web-accessible). We also discuss the development of the most important results, problems and methods of data preparation for PSSP classifiers. Finally, we describe a model for a Meta-Classifier utilizing all, or a subset, of the PSSP servers.

  13. Book: An Introduction to Classical and Non-Classical Logics - General Description

    The book is being used as a textbook for the course Logic (mat371, cse371).

    Logic is a modern scientific discipline. Some parts of it are created and applied by mathematicians, some by philosophers and some, recently, by computer scientists.

    Logic, like mathematics, is a general name for many branches. There is classical propositional and predicate logic and its extensions, such as second and higher order logics, infinitary logics and the λ-calculus. There are branches of classical logic such as model theory, proof theory, recursion theory and set theory, each of which is a scientific field of its own. There are also non-classical logics, such as intuitionistic logic, as well as a vast array of modal logics and many-valued logics. And there are a number of non-classical logics created recently and extensively investigated by computer scientists, such as temporal logics, dynamic logics, belief logics, knowledge logics, and non-monotonic logics. Finally, programming logic has developed into a branch of computer science and created many non-classical logics describing its semantics.

    Working computer scientists use and create logic in much the same way as physicists use and create modern mathematics. They tend to use, sometimes in a very sophisticated way, the results of different branches of classical logic, but recently non-classical logics have attracted their attention and their contributions to this field have become significant.

    Many excellent introductory and specialized logic books have been published. Computer scientists read and use them and teach from them. But what I have come to realize during my years as a logician in a computer science department is that those books were written by logicians for logicians, not for computer scientists. And therefore computer scientists, especially students, graduate as well as undergraduate, have a very difficult time learning from them. The problem is not the content, but the pace, which is too fast, and the presentation, which is often too abstract and too concise.

    The proposed book will contain long intuitive introductions to each chapter, many detailed examples explaining slowly each of the introduced notions and definitions, and well chosen sets of exercises with carefully written solutions. It will also contain samples of quizzes and tests after each chapter.

    Logic is a well developed and very specialized field. Hence the available books, even introductory ones, usually choose one particular way of presenting the material. This is usually a Hilbert style formalization, but there are proof theory books which use the Gentzen style formalization or natural deduction, and there are also semantic tableaux and resolution approaches. They all have characteristic proof techniques, problems and applications. Logicians are usually aware of all of them; computer scientists may not be.

    One purpose of the proposed book is to provide a fundamental introduction to different techniques of classical and some non-classical logics. This will make it possible for readers to access directly the more advanced, specialized books.

    The book will expose the reader to both Hilbert and Gentzen style formalizations as well as the resolution type formalizations for classical logic. It will also present Gentzen style formalization for the intuitionistic logic and a semantic tableaux method for modal logics as examples of the use of those proof techniques.

    In particular the book will cover the standard Hilbert style treatment of classical propositional and predicate calculus with a detailed proof of the completeness theorem. It will discuss first order theories and introduce Gödel's first and second incompleteness theorems. It will also present the proofs of completeness theorems for Gentzen style and resolution type formalizations.

    The book will include a presentation of Kripke models as a semantics for intuitionistic and modal logics, give a constructive proof of the completeness theorem for a Gentzen style formalization of intuitionistic logic and discuss the semantic tableaux and resolution type formalization for some modal logics.

    In the final chapter it will present a short introduction to Boolean, topological Boolean, pseudo-Boolean (Heyting) and Post algebras and show that they represent alternative algebraic models for classical, modal S4, intuitionistic and m-valued logics, respectively.

    The book is intended to be a text for a senior-level undergraduate logic course for both Computer Science and Mathematics students, but it will also contain enough material to be used as a text for a first-year graduate Foundations of Computer Science course, as side reading for an undergraduate or graduate Artificial Intelligence or Logic Programming course, or as a reference book for working computer scientists.

  14. Experimental Education
    Papers #(58) - (62).

    The papers describe experiments which were designed by me and conducted by my five master's students in four elementary and two high schools in Warsaw over a period of two and a half years (1977-1980). The experiments involved the development of teaching tools for better understanding of basic notions and techniques used in logic and automated theorem proving. We worked with 9-13 year old children (over one hundred in total). The idea behind the design was more of the type "find a method of teaching so comprehensible that it would appeal even to children" than of the type "make it so trivial that even children would understand". The lessons took place outside normal classes, once a week, for one or two hours. All children were volunteers. They got so involved in the class work that at the end they were able to prove, with very little help, the completeness theorem for a Gentzen type formalization of propositional calculus.

    The results were presented at the Paris VII University seminar on Mathematical Education in 1980, during my three-week stay there.