From the Bulletin of EATCS No. 83

Interview with Professor Moshe Y. Vardi

by Cris Calude

Professor Moshe Y. Vardi, the Karen Ostrum George Professor in Computational Engineering and Director of the Computer and Information Technology Institute at Rice University, was briefly in New Zealand and kindly answered a few questions.

Cristian Calude: You have referred to logic as "the calculus of computer science". Are mathematical proofs relevant for computer science? Can you cite a practical application of your results which can be understood by a non-mathematician?

Moshe Vardi: I should point out first that the most fundamental ideas in computer science, such as computability, universal machines that can execute arbitrary programs, the distinction between hardware (machines) and software (programs), and programming languages (formalisms to describe computations), all came out of an investigation in the 1920s and 1930s into the nature of mathematical proofs. Once people started programming in the 1950s and 1960s, they realized that reasoning about the correctness of computer programs closely resembles reasoning about the correctness of mathematical theorems. This led to intensive research in "program verification". This research borrowed heavily from various branches of mathematical logic, such as lattice theory, model theory, and automata theory. My own contribution has been in developing the connection with automata theory. Today there are industrial verification tools in daily use that are based on "esoteric" concepts such as "automata on infinite words".

CC: You have worked in many different subjects, including database theory, finite-model theory, knowledge in multi-agent systems, computer-aided verification and reasoning. Which of your results do you like most?

MV: I happen to like most the results that have been the most influential. As a Ph.D. student I discovered that when analyzing the computational complexity of database query evaluation one has to distinguish between the contribution of the data to that complexity and the contribution of the query. These are known as "data complexity" and "query complexity", and have become the standard way of looking at the complexity of query evaluation. I also discovered that query complexity is usually exponentially higher than data complexity. Fortunately for us, queries are typically short while databases are large. As a postdoc I discovered the automata-theoretic perspective on program verification. At first this did not lead to new algorithms so much as offer a very simple way to understand existing algorithms. (In fact, it was so simple that the paper was at first rejected when we submitted it to a conference.) Over time, the automata-theoretic perspective has been highly useful, leading to many developments in automata-theory-based algorithms. This is still a very active research area.
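The data-versus-query split can be seen even in naive evaluation of a Boolean conjunctive query. The sketch below (illustrative only; the function and its parameters are my own, not from the interview) checks whether a path pattern x1 → x2 → … → xk exists in a binary relation by trying all assignments of the k query variables to the n data elements. The cost is O(n^k): polynomial in the data for any fixed query, but exponential in the size of the query.

```python
from itertools import product

def eval_path_query(edges, domain, k):
    """Does the query exists x1..xk: E(x1,x2) AND ... AND E(x(k-1),xk) hold?

    edges  -- set of pairs, the database relation E
    domain -- the active domain (data elements)
    k      -- number of query variables (query size)
    """
    # Try every assignment of the k query variables to data elements:
    # |domain|**k candidates, so fixed k gives polynomial data complexity,
    # while growing k gives exponential query complexity.
    for assignment in product(domain, repeat=k):
        if all((assignment[i], assignment[i + 1]) in edges
               for i in range(k - 1)):
            return True
    return False
```

With edges {(1, 2), (2, 3)} the query for a path of three variables succeeds (1 → 2 → 3), while the four-variable path query fails.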

CC: You are a co-winner of the 2000 Gödel Prize. Do you see any relevance of incompleteness for mathematics or computer science?

MV: Gödel’s Incompleteness Theorems established limits on the power of the mathematical approach. They showed that mathematics is a human activity (or, as sociologists would say, "a social construct"), with all the limitations that this implies. In a similar way, the Church-Turing Undecidability Theorem established the limits of the computational approach. It showed that computing, which we tend to think of as a mechanical activity, is at heart a human activity, since computers and programs are designed by humans.

CC: Is computer-aided verification "practical"? What about automated proof-checking?

MV: For many years there was a fierce debate over whether computer-aided verification could ever be made practical. See, for example, the article by DeMillo, Lipton, and Perlis, "Social Processes and Proofs of Theorems and Programs", Commun. ACM 22(5): 271-280 (1979). Today, model checkers such as FormalCheck, RuleBase, SPIN, and others are in standard industrial usage, so the philosophical debate has become somewhat moot. Proof checking is by nature much more labor-intensive than algorithmic methods such as model checking, so its industrial applicability is more limited. Nevertheless, it is used to verify certain critical pieces of hardware designs, such as floating-point arithmetic.

CC: Please tell us more about the Computer and Information Technology Institute (CITI) at Rice.

MV: The basic observation is that the Information Revolution is transforming society, creating new careers, new industries, new academic disciplines, and the need for new educational and research programs. Rice University does not see the Information Revolution as a threat but rather as an opportunity: the opportunity for Rice to become a leading institution in the Information Age. For Rice to assume leadership in the Information Revolution, it is imperative to make information technology a key institutional priority and to take bold, coordinated, pervasive steps to enhance Rice’s position in information technology. The information technology initiative at Rice needs to encompass information technology research, the digital library effort, information technology education, and information technology in education.

The role of CITI is to complement the traditional academic structures of the university in order to provide more flexibility to respond to new educational and research needs in the fast-paced Information Age. CITI counteracts the compartmentalization of the university by becoming a focal point for academic activities in information technology. It focuses resources to seed and nurture the development of both existing areas and emerging activities. It serves the broad and strong student interest in information technology and incubates new concentrations, programs, majors, and departments as needs emerge. CITI faculty members tie their home departments to CITI and to one another, providing new channels for cross-disciplinary activity related to information technology. CITI is a home that encourages and facilitates exchanges between information technology and the broadest possible range of disciplines at Rice University.

CC: It seems that hardware evolves faster than software. Do you see in this trend any chance for theoretical computer science?

MV: Theoretical research can play different roles in computer science. On one hand, its role is to clarify the fundamental principles of our discipline. These principles are quite independent of technical developments. Much of complexity theory, for example, falls in this category. On the other hand, theoretical research can follow technical developments, trying to explain experimental observations, as well as to solve technical challenges that practitioners face. For example, much of current research in computer-aided verification is driven by the growing complexity of hardware designs. Thus, Binary Decision Diagrams (BDDs), which are used to represent Boolean functions compactly, came out of theoretical research in automata theory and branching programs, motivated by the need to manipulate very large Boolean functions effectively.
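The compactness of BDDs comes from two reduction rules: skip a node whose two branches are identical, and share identical subgraphs. The toy class below is a sketch of a reduced ordered BDD built by exhaustive enumeration (illustrative only; real packages construct BDDs incrementally with an "apply" operation, and all names here are my own).

```python
class BDD:
    """Toy reduced ordered BDD. Node ids 0 and 1 are the terminals."""

    def __init__(self):
        self.table = {}   # (var, low, high) -> node id, for sharing
        self.info = {}    # node id -> (var, low, high)
        self.next_id = 2  # ids 0 and 1 are reserved for FALSE and TRUE

    def mk(self, var, low, high):
        if low == high:              # rule 1: drop redundant tests
            return low
        key = (var, low, high)
        if key not in self.table:    # rule 2: share identical subgraphs
            self.table[key] = self.next_id
            self.info[self.next_id] = key
            self.next_id += 1
        return self.table[key]

    def build(self, f, order, env=None):
        """Build the BDD of f (a dict-of-bools -> bool function)
        bottom-up over the variable ordering `order`."""
        env = env or {}
        if len(env) == len(order):
            return 1 if f(env) else 0
        var = order[len(env)]
        low = self.build(f, order, {**env, var: False})
        high = self.build(f, order, {**env, var: True})
        return self.mk(var, low, high)
```

Building a XOR b over the order ['a', 'b'] yields three internal nodes, while a AND b yields only two, because the a=False branch of the conjunction collapses directly to the FALSE terminal.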

CC: Improper scientific conduct undermines public trust in the results and methods of science and threatens public funding and support for scientific research. You have organized a "Research Ethics Seminar" at Rice. Please tell us more about it.

MV: Researchers face ethical issues on a daily basis: managing research funds obtained from public sources, supervising graduate students whose educational interests may diverge from those of their supervisors, dealing with intellectual property issues, and more. Even a decision on who should be an author on a paper has ethical dimensions. At the same time, graduate students, who are essentially apprentice researchers, typically receive little training and education in thinking about the ethical aspects of research. The Research Ethics Seminar was an attempt to address that. It was a success and a failure at the same time. On one hand, the students loved it. On the other hand, we did not find enough faculty members with an interest in the topic to make this seminar a regular course offering.

CC: Some public opinion leaders think that theoretical computer science has little relevance for core informatics. Many talented young people don't regard this subject as exciting. Do you agree? Please tell us about the challenges of teaching theoretical computer science.

MV: Twenty and thirty years ago, most computer science students came from mathematics. These students enjoyed learning theoretical computer science as a mathematical theory, so we taught theoretical computer science as a mathematical theory. Today’s computer science students are interested in computing, yet we still often teach theoretical computer science as a mathematical theory. The challenge is on us to make the theory relevant to the practice of computing. For example, I teach "Logic in Computer Science" at Rice. Part of the final project of this course is for the students to implement a Boolean satisfiability solver and use it to solve "Einstein’s Puzzle". This demonstrates to the students that Boolean satisfiability is not some abstract mathematical concept, but rather a very powerful generic problem-solving framework.
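To give a flavor of what the students build, here is a minimal brute-force satisfiability check (my own sketch, nothing like the DPLL-style solvers a course project would produce). A formula is a list of clauses; a clause is a list of integer literals, where +i means variable i and -i means its negation, as in the common DIMACS convention.

```python
from itertools import product

def brute_force_sat(num_vars, clauses):
    """Return a satisfying assignment {var: bool} or None.

    Tries all 2**num_vars assignments; a clause is satisfied when
    at least one of its literals agrees with the assignment.
    """
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assign
    return None
```

Encoding a puzzle then amounts to writing its constraints as clauses: for instance, (x1 OR x2) AND (NOT x1 OR x2) forces x2 to be true in every model, and the pair of clauses (x1), (NOT x1) has no model at all.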

CC: Non-funded research seems to be in decline. Do you agree? Is this a good trend?

MV: I am not sure I agree with that. If you go back to pre-WWII times, you see that the research enterprise was rather small. It grew enormously after the war, supported by research funding, when it became clear that research can contribute to national security as well as to economic prosperity. There is no question that today’s funded research dwarfs non-funded research, but it is not clear to me that non-funded research has actually declined. What clearly has declined is scientifically driven, industrially funded research. No industrial lab today is anything like Bell Labs in its heyday. That is a clear loss to science. At the same time, this was accompanied by much lower communication costs, which is good for science.

CC: Which topics cultivated today in theoretical computer science will survive the end of the century?

MV: If we just knew, we’d all be working on those topics! The most fundamental work will clearly survive. It is hard to imagine that the basic concepts and results of complexity theory will not be used in 100 years (though if it turns out that P=NP, then much of current complexity theory evaporates!), as will fundamental algorithmic techniques such as breadth-first search and depth-first search. Some current theoretical topics, such as quantum computation, will turn out to be either extremely fundamental or completely irrelevant, depending on whether quantum computing becomes a reality.

CC: Thank you very much for this interview.
