!!From the Bulletin of EATCS No. 102
!A Dialogue about ''Computer System and Network Performance Analysis'' with Professor Erol Gelenbe
by __Cris Calude__
\\ \\
''Erol Gelenbe, holder of the Dennis Gabor Chair in the Electrical
and Electronic Engineering Department of Imperial College London
[http://www.ee.ic.ac.uk/gelenbe], is an alumnus of the Middle East Technical University
in Ankara, Turkey; he received a PhD from the Polytechnic Institute of
New York University (Brooklyn Poly) and the Docteur ès Sciences degree from
the University of Pierre et Marie Curie (Paris VI). Professor Gelenbe, one of
the founders of the field of computer system and network performance analysis,
is also well known for his work on Random Neural Networks and G-networks.
Two of his four books were published in English, French, Korean and Japanese.
Elected to the Turkish Academy of Sciences, the Hungarian Academy of Sciences,
l’Académie des Technologies (France) and Academia Europaea, he is a Fellow of
the IEEE (1986) and of the ACM (2001), and has received “honoris causa” doctorates
from the Universities of Liège, Bogaziçi (Istanbul), and Rome II. He has graduated
more than 50 PhDs, including many women computer scientists and engineers, and
was awarded the ACM SIGMETRICS Life-Time Achievement Award (2008), among
other awards.''
\\ \\
__Cristian Calude__: Tell us about your experience of studying and working in so
many countries.
\\ \\
__Erol Gelenbe__: No two countries and no two environments, and indeed no two
institutions, are identical. There is no “best” way to do things and each institution
has evolved or adapted according to the personalities of its leaders and the specific
context within which it is operating. Moving from one institution to another
particularly is interesting and challenging, especially from one country to another,
and can be a source of fun and learning (for me, at least). However, being a foreigner
almost everywhere, I can group countries and institutions into two very
broad categories: those that are open to “allogens” and are willing to be inclusive,
and those which have (sometimes in subtle ways) significant barriers to “foreign”
penetration. It is quite different if you are a visiting professor: you are there temporarily
and do not constitute a threat to others. If you are a permanent addition,
matters are different, more challenging and more interesting. Similar things can
be said about being a foreigner needing to acquire a residence permit and work
permit.
\\ \\
I have held chairs in Belgium, France, the USA and UK. In several countries
there can be non-explicit, but widely practiced illegal barriers to foreigners. One
hears about illegal immigrants, but seldom does one hear about barriers to legally
established foreigners, in matters of promotions, awards, employment, etc. Such
practices can continue even when an individual acquires the nationality of the
country where he/she is working, and can even happen to EU citizens with regard
to another EU country, as we have seen even very recently in countries such as
France. My strangest experience was in Italy with my candidacy, at the request
of Italian colleagues, to become Institute Director at CNR; I am fluent in Italian.
My application was eliminated from consideration because they officially stated
that they could not read my signature: “firma non leggibile”! Yet I have always
used my credit card in Italy without problems about accepting my signature. On
that particular instance, a large number of CNR Institute directors were being
appointed, and they deftly managed to avoid appointing a single foreign candidate.
The CNR was reserving the positions for their local friends, and they succeeded.
Even more strangely, a couple of years ago, the lady at a Metro ticket window (“guichet”)
in Paris refused to sell discount day tickets to two of my Greek and Greek Cypriot PhD
students, simply on the grounds that she would not sell discount tickets to foreigners!
\\ \\
__CC__: As a computer scientist, engineer and applied mathematician you have done
a lot of theory, part of which was used in commercial products. For example, your
early work was incorporated into the QNAP2 software package.
\\ \\
__EG__: I have indeed attended to theory, but my work, except for my early work
on stochastic automata [1], is motivated by “practical” problems or direct physical
inspiration. For instance, I got involved in performance modelling and then
in queueing theory because of two main practical drivers. Shortly after defending
my PhD, I spent two summers at the Philips Research Laboratories in Eindhoven,
where I was asked to work on memory management algorithms for stack oriented
re-entrant programmes. I knew nothing about the subject but was annoyed by the
“ad-hoc” nature of the design choices that were being made. So I felt that some
theory was needed, for instance in the choice of the page and memory segment
sizes, so as to optimise the overhead. Similarly, at my first job at the University
of Michigan (Ann Arbor) as an Assistant Professor, they asked me to teach
Computer Architecture: everyone already there had “taken over” the courses on
automata theory, formal languages, etc. so I (the newcomer) was “stuck” teaching
the subject that others did not wish to teach. Well, there again, I got involved in
developing a more quantitative and seemingly rational (at least to me) approach to
Computer Architecture and Operating Systems, which has given rise to the field of
Computer and Network Performance Analysis and Evaluation. For instance, I was
able to prove results on paging algorithm performance which attracted the attention
of some Hungarian and Russian mathematicians and physicists, as well as on
memory space optimisation which drew theoretical design conclusions from László
Bélády’s earlier measurements at IBM on “life-time functions”. My development
of novel “product form” networks [4, 9], which are also linked to statistical
mechanics and theoretical chemistry, was motivated by listening to presentations
from neuroscientists while visiting the NASA Research Centre in California, but
that is yet another story to be told below.
\\ \\
So yes – much of my work has had a theoretical bent, but it has almost always
been driven by a strong link with engineering requirements or by observations
from nature. Another example inspired by engineering is the research I did on
“optimum checkpointing” [6] in databases which appeared in the ''Journal of the
ACM'', but was motivated by a practical issue that was recounted to me by Claude
Delobel in relation to the automatic storage in a database of “hits” during some
fencing championships that were taking place in Grenoble, when the computer
being used for this was having some intermittent failures! This work gave rise to
a few PhD theses around me, and to much more work around the world. Other
results were motivated by a property observed in a simpler context, and on the
intuition that it actually holds in a much more general framework.
\\ \\
The QNAP2 (then Modline) software tool for performance evaluation was developed
by my group at IRIA (now INRIA), and the specific technique that I
personally contributed was on “diffusion approximations” that I first published in
the ''Journal of the ACM'' [3]. This software tool has generated some 200 million
euros of income over 20 years for the companies that commercialised it (initially
SIMULOG, an INRIA spin-off company). The developers and inventors themselves
hardly got anything; we were naive about such things. Throughout my
career I have been involved with industry, via patents, via tools such as QNAP2,
via consultancies or short-term assignments inside industry, and also via contracts
to my university that are directly funded by industry. Of course, many of my PhD
students have gone on to work for industry, most recently in the financial sector.
\\ \\
__CC__: You are a pioneer in the adaptive control of computer systems.
\\ \\
__EG__: I am a bit like the elephant in the dark room: someone comes into the room,
touches the leg of the elephant and thinks it’s a tree, another person feels the tail
and thinks it’s a rope, and yet another catches the elephant’s nose and thinks it’s
a hose! Some people think that I am a pioneer of computer system performance
evaluation (at least that’s what they say on my ACM Sigmetrics Award, and on
my IEEE and ACM Fellowship Awards). The French Academy in 1996 gave me
the France-Telecom Prize for developing mathematical models of communication
networks. The Hungarian Academy of Sciences, in its recent election, mentions
both my work on system and network performance and on neural networks and
learning. As you indicate, in the last six or seven years I have been involved in
developing ideas on Adaptive Computer Systems and Networks such as the “Cognitive
Packet Network” and this has helped generate research projects in “Autonomic
Communications” in Europe. I was asked to write a paper about this work
in the July 2009 issue of the ''Communications of the ACM'' [28]. The Internet is
largely a legacy system based on principles that find their origin in the computers
and data communication systems of the 1970’s, and it is working pretty well. Thus
it is hard to introduce new concepts and methods. Much of the current networking
research only addresses tweaks to well understood aspects.
\\ \\
__CC__: Tell us about random neural networks. You developed their theory—
mathematics and learning algorithms—as well as some applications to engineering
and biology.
\\ \\
__EG__: Let me first tell you what the Random Neural Network (RNN) is [10], and
then I will get round to telling you how it came about. Consider a system composed
of ''N'' counters, and let them be numbered ''i'', ''j'' = 1, ... , N. Each counter can
have a value which is a natural number. At any instant ''t'', only one of the following
''events'' can occur: the ''i''-th counter increases by one (external arrival of an excitatory
spike to neuron ''i''), or if a counter has a positive value it may decrease by
one (neuron ''i'' fires and the spike is sent out of the network), or a counter ''i'' may decrease
by 1 and simultaneously some other counter increases by 1 (neuron ''i'' fires
an excitatory spike which arrives instantaneously at neuron ''j''), or ''i'' decreases by 1
and so does ''j'', provided both start with a positive value (neuron ''i'' fires an inhibitory spike to
''j''), or finally at time ''t'' nothing happens. What this is modelling is a network of ''N''
neurons which are receiving and exchanging excitatory or inhibitory spikes. The
system operates in continuous time so that ''t'' is a real number, making it a continuous
time network of counters. It is quite extraordinary that this very simple
model has some very powerful properties including the ability to learn [14], and
the ability to approximate continuous and bounded functions and also some very
neat mathematical properties such as “product form”.
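\\ \\
To make these dynamics concrete, here is a minimal continuous-time (Gillespie-style) simulation sketch of the counter process just described; the arrival rates, firing rates, departure probabilities and routing probabilities are illustrative assumptions, not values taken from the papers.
{{{
import random

# Minimal Gillespie-style simulation of the Random Neural Network dynamics
# described above.  Neuron i receives external excitatory spikes at rate
# Lambda[i]; when its counter is positive it fires at rate r[i], and the spike
# then leaves the network (prob. d[i]) or reaches neuron j as an excitatory
# (prob. p_plus[i][j]) or inhibitory (prob. p_minus[i][j]) spike.
# All numerical values are assumptions for illustration only.

N = 3
Lambda = [0.8, 0.5, 0.3]
r = [1.0, 1.2, 0.9]
d = [0.4, 0.4, 0.4]
p_plus = [[0.0, 0.3, 0.3], [0.3, 0.0, 0.3], [0.3, 0.3, 0.0]]
p_minus = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]

k = [0] * N                     # the counters (neuron potentials)
t, t_end = 0.0, 10_000.0
busy_time = [0.0] * N           # time each neuron spends with k[i] > 0

while t < t_end:
    # rates of the possible events in the current state:
    # external excitatory arrivals, then firings of the non-empty neurons
    rates = [Lambda[i] for i in range(N)] + \
            [r[i] if k[i] > 0 else 0.0 for i in range(N)]
    total = sum(rates)
    dt = random.expovariate(total)
    for i in range(N):
        if k[i] > 0:
            busy_time[i] += dt
    t += dt
    # choose which event occurred, proportionally to its rate
    u, e = random.uniform(0, total), 0
    while e < len(rates) - 1 and u > rates[e]:
        u -= rates[e]
        e += 1
    if e < N:                   # external excitatory spike arrives at neuron e
        k[e] += 1
    else:                       # neuron i = e - N fires
        i = e - N
        k[i] -= 1
        u = random.random()
        if u >= d[i]:           # spike does not leave the network: route it
            u -= d[i]
            for j in range(N):
                if u < p_plus[i][j]:
                    k[j] += 1                  # excitatory spike to j
                    break
                u -= p_plus[i][j]
                if u < p_minus[i][j]:
                    if k[j] > 0:
                        k[j] -= 1              # inhibitory spike to j
                    break
                u -= p_minus[i][j]

print("empirical excitation probabilities q_i:",
      [round(b / t, 3) for b in busy_time])
}}}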
\\ \\
It all started when I was visiting the RIACS Research Centre at NASA Ames,
in Moffett Field, California in the summers of 1987 and 1988. This was a lot of
fun because at lunch time, going out from the back of the laboratory one ended
up directly on an airfield where the U2 spy aircraft was taking off. The body of
this airplane is very small and thin, with just enough place for one pilot and his
instruments and commands, but the wings are very long. In fact, the wings have
small wheels at the edges which support them at take-off; if they did not have the
wheels they would be scraping the runway because the wings are long and too
heavy to stay in a horizontal position. From Moffett Field, the job of the U2s was,
officially, to fly along the US Pacific Coast to try to spot and track Russian submarines.
The Director of RIACS at that time was my friend Peter Denning who
had just recently left Purdue University where he had been Department Head.
My official job at RIACS was to work on the performance of parallel processing
systems, since I had just published my small monograph on ''Multiprocessor
Performance''. NASA Ames had some of the largest supercomputers at that time
since they were supposed to eventually replace wind tunnels (another specialty at
NASA Ames) for the testing of aircraft and rockets. It is amusing to note that both
supercomputers and wind tunnels are “energy-vorous”.
\\ \\
Another funny thing about my stay at NASA Ames, and these were the days
before September 11, was that since it was supposed to be a very secure facility,
and I was a “non-resident alien” (... what a funny name for a non-US citizen
without a Green Card!), I was not allowed to enter the airbase officially and had
to work in an external building just at the border of the base. But the funny thing
is that the building’s back door was unlocked, so I could walk onto the tarmac to
observe the U2s, and then simply walk back in through the same door to RIACS.
Anyway, this is just to set the tone about my working environment. Peter Denning
had recruited an interesting man called Pentti Kanerva from Pat Suppes’ entourage
at Stanford. Pentti was originally a forestry engineer from Finland who had done a
PhD in Philosophy with Suppes. He had invented a model of computation called
“sparse distributed memory” (SDM) which is not too dissimilar from the Kohonen
maps that you may know about. SDMs offer a nice adaptation or learning algorithm
for both numerical and non-numerical data, which slowly modifies the stored data
representation as new data is presented. As a result Pentti was also
interested in natural neuronal networks, as were some other people who worked
at RIACS, so they had organised a series of seminars by prominent neuroscientists.
My work on the Random Neural Network started in those circumstances. In
the meantime I published a paper on the learning properties of SDM, which has
remained rather obscure.
\\ \\
Like many people of my generation, I was familiar with the McCulloch-Pitts
model of neurons, and with the Minsky-Papert controversy concerning nonlinear
perceptrons. I knew of John Hopfield’s model and his results concerning
“optimisation through relaxation”, and of the work of the PDP Research Group at
San Diego, and the contributions of Dave Rumelhart and Terry Sejnowski, and
about the backpropagation algorithm. At that time, Françoise Fogelman in Paris
was a strong proponent of these techniques. My former student Andreas Stafylopatis
from Athens was also quite interested in these things and we had tried our
hand at some “collective stochastic models” for large numbers of neurons [11].
But I felt, after listening to several presentations by neuroscientists, that none of
these models actually captured the spiking activity of natural neuronal ensembles,
and furthermore (except for John Hopfield’s work) the PDP Group’s work did not
address the important issue of feedback in natural neuronal systems, or “recurrence”
as people say in that area. Filled with all these interesting neuroscience
lectures, I set to work upon my return to Paris in September of 1987. I also had
the good luck of being hired by ONERA (French Aerospace Research Organisation)
as a consultant in AI which was not my area, and I felt obliged to produce
something significant. In six months I had developed the spiked Random Neural
Network Model, and obtained its analytical solution, but the people at ONERA
could not understand what I was trying to do. The following summer I was back
at RIACS, and met Dave Rumelhart who had moved to the Psychology Department
at Stanford. I dropped into his office abruptly one day, without knowing him
personally, and told him what I had done. He was very friendly and interested,
and invited me to give a seminar the following week. After the seminar he told
me to submit my work to the journal that Terry Sejnowski, Dave and others had
started a few years back, ''Neural Computation'', and the first paper was rapidly accepted
and published in 1989. Several papers followed in the same venue over the
years [16], [21], [26], and since the journal indicates the name of the handling editors
after the papers were published I owe a debt of gratitude to Dave Cowan and
Haim Sompolinsky, neither of whom I know personally. My learning algorithm
[14] came later in 1993: it was the first algorithm that established that learning
for an ''N'' neuron recurrent network is of time complexity ''O''(''N''%%sup 3/%), while it was well
known that the backpropagation algorithm for a feed-forward network is of time
complexity ''O''(''N''%%sup 2/%). In the course of this work, there were applications to imaging
[13], [18], adventures and complications related to non-linear mathematics; there
have been several other applications and extensions, and a return to biology while
I was at Duke [20], but that would lead to an even longer story.
\\ \\
__CC__: Please describe the famous G-networks.
\\ \\
__EG__: Queueing theory has been around for at least as long as telephone systems
have existed. The literature contains many tens of thousands of papers which appear
either in publications related to the application domain (e.g. manufacturing
systems, computer systems, the Internet, road traffic, etc.), or in more mathematical
journals related to probability theory or operations research. It is a theory
based on mathematical probability that considers a dynamical system composed
of “service centres” and customers. The latter move around the service centres
according to a designated probabilistic or deterministic behaviour, and at each
service centre a customer waits in line and is then served according to a service
discipline, e.g., First-in-First-Out, Round-Robin, Last-In-First-Out as in a pushdown
stack, or according to some priority scheme and so on. The service time of
each customer in a service centre is typically given in the form of a probability
density function or probability distribution, and this will typically differ in each
service centre. In addition, customers may belong to “classes” so that the service
time distributions and the routing of customers among different centres, may both
depend on the class to which the customer belongs. Often independence assumptions
are made about the service times and the routing of different customers even
though they may belong to the same class. This is a very useful theory in that it is
widely used in industry to design telecommunication systems, manufacturing systems,
transportation, parts handling and assembly, etc. When such systems have
a steady-state or long term solution in which the joint probability distribution of
the number of customers in each of the queues can be expressed as the product of
the marginal distributions in each of the queues, despite the fact that the distinct
service centre queues are in fact coupled, then we say that the queueing network
has “product form”; examples include the Jackson Networks (that Len Kleinrock
used in his very early work to model packet switching networks), and the Baskett-
Chandy-Muntz-Palacios (BCMP) networks. Product form is a remarkable property
which in general reduces the computational complexity of using queueing
networks, from an enumeration of all possible states, to a polynomial time and
space complexity.
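\\ \\
As a concrete illustration of product form, the small sketch below (my own worked example, not something from the interview) solves the linear traffic equations of an open Jackson network and evaluates the product-form joint distribution; the arrival rates, service rates and routing matrix are invented for the example.
{{{
import numpy as np

# Open Jackson network with three queues.  Solve the linear traffic equations
#   lambda_i = gamma_i + sum_j lambda_j * P[j][i]
# and then apply product form:
#   P(n_1, n_2, n_3) = prod_i (1 - rho_i) * rho_i**n_i,  with rho_i = lambda_i / mu_i.
# All numerical values are illustrative assumptions.

gamma = np.array([1.0, 0.5, 0.0])        # external arrival rates
mu = np.array([4.0, 3.0, 5.0])           # service rates
P = np.array([[0.0, 0.5, 0.3],           # P[j][i]: routing from queue j to queue i
              [0.2, 0.0, 0.6],           # (the remaining probability is departure)
              [0.0, 0.1, 0.0]])

lam = np.linalg.solve(np.eye(3) - P.T, gamma)    # effective arrival rates
rho = lam / mu
assert (rho < 1).all(), "the network must be stable for a steady state to exist"

def prob(state):
    """Stationary probability of the joint state (n_1, n_2, n_3) by product form."""
    return float(np.prod((1 - rho) * rho ** np.array(state)))

print("effective arrival rates:", lam.round(3))
print("utilisations rho_i:     ", rho.round(3))
print("P(n = (0,0,0)) =", round(prob((0, 0, 0)), 4))
print("P(n = (2,1,0)) =", round(prob((2, 1, 0)), 4))
}}}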
\\ \\
G-Networks [12] extend a network of queues, to include certain new types of
customers that can modify the behaviour of others. Thus “negative” customers
[12] destroy other customers; for instance they can represent external decisions
that are made to reduce traffic because of congestion, or to remove packets in a
network that may contain viruses. Triggers are yet another type of customer which
can simply move customers from one queue to another. Multiple class G-Nets are
discussed in [16], [19]. Resets are customers that replenish queues when they are
empty, to represent (for instance) a situation where we wish to keep some part
of the system busy, or when queue length represents the degree of reliability so
that “replenishment” corresponds to repairing a component. Thus you can think
of G-Networks as queueing networks that also incorporate some useful control
functions: for instance the ordinary customers can be packets in a network, while
these special customers can represent control signals that may travel through the
network and affect the ordinary packets at certain specific nodes. The link between
G-Nets and neural networks is discussed in [15]. All of these G-Network models,
and other aspects discussed by my colleagues Jean-Michel Fourneau and Peter
Harrison, lead to product forms. However the solutions obtained, starting with
[12], differ from the earlier Jackson and BCMP networks in that they rely on nonlinear
“traffic equations” which describe the flow of customers of different types
and classes throughout the network. Because of this non-linearity, one also has
to address questions of how and when the solutions one may obtain actually exist
and are unique. My first paper on G-Networks was turned down at an ACM
SIGMETRICS conference because the reviewers did not quite believe that new
models in this area could be found and also solved analytically. Thus I turned to
journals dealing with applied probability... and some of my most cited papers
are in this “strange” area which has attracted much attention over the last twenty
years.
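\\ \\
To show what the non-linear traffic equations look like in practice, here is a small sketch, written in the spirit of the equations of [12], that computes the steady state of a G-network with positive and negative customers by fixed-point iteration; the rates and routing probabilities are invented for the example, and the code is an illustration rather than a faithful reproduction of the paper.
{{{
# G-network with positive and negative customers (cf. [12]):
#   lambda_plus_i  = Lambda[i] + sum_j q_j * r[j] * Pplus[j][i]
#   lambda_minus_i = lam[i]    + sum_j q_j * r[j] * Pminus[j][i]
#   q_i = lambda_plus_i / (r[i] + lambda_minus_i)
# The equations are non-linear in q, so they are solved here by fixed-point
# iteration; product form then gives
#   P(k_1,...,k_N) = prod_i (1 - q_i) * q_i**k_i,  provided every q_i < 1.
# All numerical values are illustrative assumptions.

N = 3
Lambda = [2.0, 0.0, 0.5]     # external positive (ordinary) customer arrival rates
lam = [0.0, 0.3, 0.0]        # external negative customer arrival rates
r = [4.0, 3.0, 5.0]          # service rates
Pplus = [[0.0, 0.6, 0.2],    # a customer leaving i joins j as a positive customer
         [0.0, 0.0, 0.5],
         [0.1, 0.0, 0.0]]
Pminus = [[0.0, 0.0, 0.1],   # a customer leaving i arrives at j as a negative customer
          [0.3, 0.0, 0.0],
          [0.0, 0.2, 0.0]]

q = [0.5] * N
for _ in range(200):         # fixed-point iteration of the traffic equations
    lp = [Lambda[i] + sum(q[j] * r[j] * Pplus[j][i] for j in range(N)) for i in range(N)]
    lm = [lam[i] + sum(q[j] * r[j] * Pminus[j][i] for j in range(N)) for i in range(N)]
    q = [lp[i] / (r[i] + lm[i]) for i in range(N)]

assert all(qi < 1 for qi in q), "q_i < 1 is needed for the product form to hold"

def prob(state):
    """Product-form probability of the joint queue lengths (k_1, ..., k_N)."""
    p = 1.0
    for qi, ki in zip(q, state):
        p *= (1 - qi) * qi ** ki
    return p

print("fixed point q =", [round(qi, 4) for qi in q])
print("P(k = (1,0,2)) =", round(prob((1, 0, 2)), 5))
}}}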
\\ \\
__CC__: Tell us about the design of the first random access fibre-optics local area
network.
\\ \\
__EG__: This was a very interesting experience. In the mid 1970’s, thanks to Louis
Pouzin who is one of the pioneers of the Internet, and an extremely sharp and
amusing individual, I was put in contact with the group developing the Arpanet.
In particular I met Bob Kahn in Washington. At that time, a new packet communication
scheme using satellites had been devised: the ALOHA Network, which
was implemented by Norman Abramson at the University of Hawaii. Of course,
ALOHA is the “father” of the Ethernet. Abramson and Kahn had published papers
that described the scheme and computed its maximum throughput; Leonard
Kleinrock and his students were also studying the problem. I felt that the initial
models were addressing steady-state analysis, in a context where the steady-state
might not exist because the system was intrinsically unstable. Together with my
collaborators Guy Fayolle and Jacques Labetoulle, we obtained a strong result,
which after some delay appeared in the ''Journal of the ACM'' [5], proving
that the slotted random access communication channel (known as “slotted
ALOHA”) was intrinsically unstable due to potential simultaneous transmissions
between uncoordinated transmitters, and that it could be stabilised and even optimised
under a “1/''n''” policy which was to retransmit previously collided packets
at a rate that is inversely proportional to the number of backed-up transmitters.
Strong results sometimes upset your colleagues. But Bob Metcalfe, who implemented
Ethernet, was very positive about this work, as he wrote a few years ago
to Jeff Buzen, his then advisor at Harvard.
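\\ \\
A rough way to see both the instability and the stabilising effect of the 1/''n'' policy is to simulate the slotted channel; the toy model below is my own illustration with made-up parameters (the fixed retransmission probability is deliberately aggressive so that the runaway behaviour appears quickly), not the analysis of [5].
{{{
import numpy as np

# Toy simulation of a slotted random-access ("slotted ALOHA") channel.
# In every slot a new packet arrives with probability ARRIVAL; each backlogged
# station retransmits either with a fixed probability or with probability
# 1/backlog, i.e. the stabilising "1/n" policy discussed above.  A slot is
# successful only if exactly one station transmits.  Parameters are assumptions.

ARRIVAL = 0.3
rng = np.random.default_rng(1)

def simulate(slots, policy, fixed_p=0.5):
    backlog, successes = 0, 0
    for _ in range(slots):
        if rng.random() < ARRIVAL:
            backlog += 1
        if backlog == 0:
            continue
        p = fixed_p if policy == "fixed" else 1.0 / backlog
        transmissions = rng.binomial(backlog, p)
        if transmissions == 1:      # exactly one transmission: a packet gets through
            backlog -= 1
            successes += 1
        # otherwise the slot is idle or a collision, and the backlog is unchanged
    return successes / slots, backlog

for policy in ("fixed", "1/n"):
    throughput, final_backlog = simulate(100_000, policy)
    print(f"{policy:5s} policy: throughput ~ {throughput:.3f}, "
          f"final backlog = {final_backlog}")
}}}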
\\ \\
This work had started while I was at the University of Liège [5], and at INRIA,
and then I moved to Orsay (where I was one of the co-founders of the LRI, Laboratoire
de Recherche en Informatique). At Orsay, I told Wladimir Mercouroff, a
senior member of the university, that this work could have practical applications
to local area communications. He suggested funding via the DGRST (Délégation
Générale à la Recherche Scientifique et Technique) jointly with a company
called La Calhène, to build a fiber optics local area communication system for environments
with strong electromagnetic perturbations. I would have been happier
to use coaxial technology but the funding agency favoured fiber optics. So we
ended up building a prototype called Xanthos, which used DEC-LSI11 processors
as access nodes, and fiber optics for transport with the random access protocol
using our optimal control algorithm with a clever scheme we had devised to estimate
deviation from optimality based on the frequency of the fiber channel’s
“silent” periods. Once the system was up and running, I presented it to the French
Telecommunications authority for possible commercialisation. They told me that
this work was of academic interest, but because we were using random access,
we could only guarantee delivery times on average and in probability, rather than
with fixed maximum delays; being rather naive at the time, I believed them. So
the project was set aside. A couple of years later Ethernet appeared, and I am sure
that some French Telecom people were biting their nails. As I said, Bob Metcalfe
knows this story. As a consolation prize, the French Telecom hired me as a consultant
for a few years and I was able to do several other things for them, but they
(and I) missed out on a major opportunity. What happened to Louis Pouzin and
his team at INRIA, for similar reasons and regarding the Internet as a whole, is a
far more tragicomic story.
\\ \\
__CC__: You patented an admission control technique for ATM networks.
\\ \\
__EG__: This was an application of my earlier theoretical paper on diffusion approximations
for queueing systems, which had been rejected by a prestigious French conference.
My results then appeared in the ''Journal of the ACM'' [3] and ''Acta Informatica'',
motivated by the need to simplify the calculations of queue lengths, server
utilisation and so on, when you have “non-exponential” assumptions. Diffusion
approximations and Brownian Motion are well known, and there is a wonderful
book on the subject by Albert Einstein. This approach had been suggested
to approximate road traffic congestion by G.F. Newell (Berkeley), and then by
Hisashi Kobayashi (IBM) for computer system performance. My original contribution
introduced a mixed discrete-continuous model to address “low traffic”
conditions which were ignored in earlier work. This gave rise to mixed differential
and partial differential equations which I solved in “closed form”. In the mid
1990’s, IBM was designing its N-Way switch for ATM (Asynchronous Transfer
Mode) Networks. The design was carried out at Raleigh (North Carolina) near
Duke University where I was Department Head; the hardware was being designed
at IBM’s La Gaude Laboratory. The fashionable approach to admission
control at that time was to use “large deviations”, whose originator Varadhan, from the
Courant Institute, was elected to the Hungarian Academy of Sciences in 2010, like me;
but large deviations only provide “order of magnitude” estimates of packet
or cell loss, which is the primary metric of interest in ATM. I was awarded a contract
by IBM-Raleigh to look at the problem and we developed an algorithm that
used the predictions of my model [17] to decide whether to admit a new flow into
the network, based on predictions for packet or cell loss. One of my students, Xiaowen
Mang (now at AT&T Labs), performed simulations. Because it all seemed
to work well we patented the technique together with IBM engineers Raif Onvural
and Jerry Marin. The US Patent was awarded in 1998 or 1999. Links to these
ideas can be found in my recent work on packet travel time in networks.
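\\ \\
To give a flavour of how such a model-based admission decision can work, here is a very rough sketch; it uses a generic textbook heavy-traffic diffusion estimate of buffer overflow rather than the mixed discrete-continuous model of [3] or the algorithm of [17], and every number in it is an assumption.
{{{
import math

# Rough admission-control sketch in the spirit described above: approximate the
# queue at an ATM multiplexer by a diffusion with drift beta = lambda - mu and
# instantaneous variance alpha = lambda*Ca2 + mu*Cs2, and use the classical
# heavy-traffic estimate
#     P(content exceeds a buffer of B cells) ~ exp(2 * beta * B / alpha),  beta < 0.
# This is a generic approximation, not the model of [3] or [17]; all numbers
# are illustrative.

def overflow_probability(lam, mu, Ca2, Cs2, B):
    beta = lam - mu
    if beta >= 0:
        return 1.0                      # overloaded link: overflow is certain
    alpha = lam * Ca2 + mu * Cs2
    return math.exp(2.0 * beta * B / alpha)

def admit(current_load, new_flow_rate, mu, Ca2, Cs2, B, target_loss=1e-6):
    """Admit the new flow only if the predicted cell loss stays below the target."""
    predicted = overflow_probability(current_load + new_flow_rate, mu, Ca2, Cs2, B)
    return predicted <= target_loss, predicted

# Example: a link serving mu = 100 cells/ms with a buffer of B = 200 cells,
# 70 cells/ms already admitted, and a new flow requesting 10 cells/ms.
ok, loss = admit(current_load=70.0, new_flow_rate=10.0,
                 mu=100.0, Ca2=1.5, Cs2=1.0, B=200)
print(f"predicted cell loss ~ {loss:.2e} -> {'admit' if ok else 'reject'}")
}}}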

\\ \\
__CC__: What is the “cognitive packet network” routing protocol?
\\ \\
__EG__: Call it CPN [22], [23] to make things simpler. It is an algorithm that runs on
specific network routers within an IP (Internet Protocol) or similar network (including
sensor and ad hoc networks), which adaptively chooses paths with desirable
properties such as better delay or loss characteristics, lower energy consumption,
lower economic cost, greater security, or a combination of such criteria. The
combination of criteria is incorporated into a “goal” or objective function, whose
instantaneous value is established based on measurements collected with the help
of “smart packets”. CPN is based on on-line measurements, and responds to observations
which are being made and which will in general change over time so
that CPN’s choices also change. CPN offers the end user the possibility to make
such choices, although the end user may delegate the decisions to an agent which
manages its access to the network or which manages several end users. CPN uses
two mechanisms: the first is the use of “smart packets” which act as scouts and
collect and bring back measurements. The second is the use of recurrent random
neural networks (RNNs) which are installed in routers that take part in CPN’s decisions
(not all routers need do this) and which act as oracles; the excitatory
and inhibitory connections of these RNNs are updated using the reinforcement
learning rule based on the goal function, as a result of the measurements constantly
collected by the smart packets. The RNNs are used to route the smart
packets (i.e. to inform the ongoing “search”), while all the resulting measurement
information concerning the goal is returned to the end-user or to its decision
agent. The decision agent may then decide to follow, completely or only partially,
the advice it receives, so as to select the best paths in the network. For instance,
the decision agent may wish to reduce the frequency with which it makes changes
in paths so as to avoid needless oscillations; in that case it may only decide to
change a path if the estimated benefit is very high. CPN has been implemented
in several wired and wireless network test-beds. It has also been considered as a
means to direct people in crowded environments, or in emergency situations.
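\\ \\
To illustrate the flavour of the decision mechanism, here is a much-simplified reinforcement-learning sketch for a single CPN node; the goal function, the weight-update rule, the invented link names and all numerical parameters are my own simplifications of the general idea in [22], [23], not the published algorithm.
{{{
import random

# Much-simplified sketch of reinforcement-learning routing at one CPN node:
# each outgoing link is represented by a "neuron" whose excitation level is
# derived from excitatory and inhibitory weights; smart packets follow the most
# excited link, the measured goal (here: delay) is turned into a reward, and the
# weights are reinforced or penalised depending on whether the reward beats a
# smoothed historical threshold.  Everything numerical here is an assumption.

LINKS = ["link_A", "link_B", "link_C"]
w_plus = {l: 1.0 for l in LINKS}      # excitatory weights towards each link-neuron
w_minus = {l: 1.0 for l in LINKS}     # inhibitory weights towards each link-neuron
r = 1.0                               # nominal firing rate of the link-neurons
threshold, a = 0.0, 0.9               # smoothed reward threshold and its memory

def measured_delay(link):
    """Stand-in for a smart-packet measurement: link_B is the best on average."""
    base = {"link_A": 30.0, "link_B": 12.0, "link_C": 20.0}[link]
    return random.uniform(0.8, 1.2) * base

def excitation(link):
    return w_plus[link] / (r + w_minus[link])

for _ in range(2000):
    # occasionally explore at random, otherwise follow the most excited neuron
    if random.random() < 0.1:
        choice = random.choice(LINKS)
    else:
        choice = max(LINKS, key=excitation)
    reward = 1.0 / measured_delay(choice)            # goal: minimise delay
    if reward >= threshold:                          # good outcome: reinforce the choice
        w_plus[choice] += reward
        for l in LINKS:
            if l != choice:
                w_minus[l] += reward / (len(LINKS) - 1)
    else:                                            # poor outcome: penalise the choice
        w_minus[choice] += reward
        for l in LINKS:
            if l != choice:
                w_plus[l] += reward / (len(LINKS) - 1)
    threshold = a * threshold + (1 - a) * reward     # update the smoothed threshold

print({l: round(excitation(l), 3) for l in LINKS})
print("preferred link:", max(LINKS, key=excitation))
}}}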
\\ \\
__CC__: You have collaborated with the telecommunication and computer industry in
various capacities. How useful/relevant are theoretical results for this industry?
\\ \\
__EG__: I think that the value of theory in our field, when it is based on realistic
assumptions and sound evaluation, lies in its ability to provide tremendous shortcuts
that avoid a lot of tedious work based on experimentation and testing. My
first inroads into the telecommunications industry were related to the performance
evaluation of the E10 electronic switch in the late 1970s. The E10 was in fact a
large scale computer, together with electronic equipment, that was going to be
used to establish and automatically manage large numbers of telephone calls. It
was to replace the previous “dumb” electronic switching systems. The French
Telecom research centre CNET had been involved in trying to evaluate whether
the E10 was performing up to specification, and they were relying on simulations
which were taking orders of magnitude longer than the time it took the E10 system
itself to execute the corresponding task. The team studying this was at the end of
their tether, and the team leader finally had a (real) nervous breakdown. Together
with my PhD student Jean Vicard we stepped in and within six months we had a
mathematical model based on queueing networks which was quite accurate and
which could be solved in seconds of computer time, rather than in hours or days
of simulation time. Thanks to this work, I continued being funded by CNET for
twenty years and they hired several of my former PhD students. This also explains
why many of my former PhD students teach in France at schools such as the ''Institut
National des Télécommunications'' and the ''École Nationale des Télécommunications''.
\\ \\
__CC__: In France in 1982 you designed and implemented a national vocational training
programme in computer technology called the “Programme des Volontaires pour la
Formation à l’Informatique”.
\\ \\
__EG__: This was a very interesting experience. At the suggestion of Jacques Gualino,
who was one of INRIA’s managing staff, I started working with the people who
had launched the “Centre Mondial pour l’Informatique” in Paris, namely Jean-
Jacques Servan-Schreiber, Nicholas Negroponte and Seymour Papert. The latter
two wanted to help the third world via personal computers, while Jean-Jacques
was actually (I think) on a mission to transform the French bureaucracy through a
greater use of Information Technology and to attain some form of political power
or political role in the process. The year was 1982, soon after the elections that had
brought François Mitterrand and the Socialist Party to power, so there was an opportunity
to make some changes – but the question was what this Centre could do.
While these people aimed at lofty and global goals, I decided to tackle a relatively
small project. I felt that much of vocational education in France was obsolete,
essentially taking misguided teenagers and turning them into disgruntled unemployed
people, simply because vocational education was largely dispensed
by obsolete technical educators in obsolete machine shops which “trained” young
people to operate obsolete equipment for jobs that did not exist. On the other hand,
both industry and the service sector were looking for people who had some simple
computer education that could be used in technical and service jobs. However,
instructors who were knowledgeable computer scientists and engineers were just
too expensive to provide instruction cheaply. Furthermore computer equipment
was scarce and expensive. In conversations between Jean-Jacques Servan-Schreiber,
Pierre Lafitte (then President of the École des Mines in Paris) and myself, we came
up with the idea of using newly graduated engineers who could do a “vocational
education” service for youngsters instead of their military service where they were
often getting very bored. Though conceptually simple, the whole programme had
to be “engineered”, which I did, so that several hundred young graduate engineers
could do this new form of civilian service instead of going into the military
for one year. Several ministries had to be convinced. Several million francs for
equipment, wiring and room security (against computer theft) were needed to get
started, and the network of training centres for unemployed young people had
also to be incorporated into the task. The long and short of it is that we were
successful: up to seven hundred young graduating engineers and computer scientists
got involved in this programme each year. Personal computers from a variety
of sources were purchased and installed in small groups of ten PCs per training
centre. Tens of thousands of young unemployed people were trained and many entered
the job market successfully. For the first two years I ran the programme and
collected detailed statistics about what was happening. Then the programme was
taken over by existing social and government bodies. It came to an end towards
1989 after Jacques Chirac became Prime Minister and some things introduced by
the Socialist government were abrogated.
\\ \\
__CC__: You also served as Science and Technology Advisor to the French Minister
of Universities, and member of the Executive Board and of the Science and Technology
Board of the Data and Information Fusion Defence Technology Centre in
the UK and chaired the Technical Advisory Board of the US Army Simulation
and Training Command.
\\ \\
__EG__: The VFI Programme that I discussed before attracted the attention of the then
French Minister for Universities, Professor Roger-Gérard Schwartzenberg. Normally
I should never have been in his group of advisers: I was a recent immigrant
who had not studied at a Grande École. All of his other advisors, except the three
political appointees from his party who had been to Sciences Politiques (Sciences
Po in Paris), had either studied at ENA (École Nationale d’Administration) or at
École Polytechnique. There was a Professor of Medicine who advised the Minister
for the medical side of things, and then this strange individual: me. Some of
the other people in his group of advisers and higher civil servants in the ministry
obviously thought I was totally out of place. But I was a young professor from
the best science campus in France, Orsay, and I taught part-time at the École Polytechnique,
so I could not be entirely stupid. I spoke and wrote French well, and the
Minister’s parents had been immigrants, so he was open minded. I was offered
the job because the VFI programme had shown that I was a “go getter”, able to
handle large projects and deal with the arcane administration. Sure enough I did
bring together another large project. The year was 1984, and most students in
French Universities and Grandes Écoles did not have an introductory course in
Computer Science. The barriers were the usual ones: where to find the lecturers,
where to find the computers, where to find the space with appropriate electrical
and network connectivity as well as security, and what to teach. The whole funding
issue needed to be dealt with because we needed to buy eight hundred PCs
and install them in groups of eight or ten, and the installation itself would cost
quite a bit of money. We had to open many junior faculty positions for the teaching.
We set up working groups to design the material that would be taught, grouped
by discipline: math-physics, biology, economics, humanities, and so on.
I will not dwell on the details, but it worked out well; it also made many of
my colleagues unhappy because they thought they could use the equipment funds
much better for CS research, without realising that we could not attract such a
large investment in research alone and that the equipment and new faculty were in
themselves an investment in research as well. In my role as advisor, I had several
other jobs too: monitor and expand the different scientific disciplines that I was
dealing with, monitor the engineering schools, deal with the transformation of the
École Normale Supérieure (a very interesting story to tell separately some day),
the transformations of PhD programmes and so on. An initiative which proved
very useful was the “Arrêté” which has allowed some of the Grandes Écoles to
deliver the PhD degree: it has significantly increased the research activities at
many of the elite schools in France. At the end of two years of doing this job from
8am to 8pm, I was totally exhausted. I was delighted (!) when in the Spring of
1986 the Socialist Government lost the elections and I could leave this exhausting
activity to return to my lab. In those two years I did manage to write one or two
decent papers and to graduate some PhD students, but it was tough.
\\ \\
You mentioned the Technical Advisory Board (TAB) of the US Army Simulation
and Training Command. I spent 1993 to 1998 at Duke University as Head
of Department, and then moved to Orlando, Florida. There I was the Director of
the School of Electrical Engineering and Computer Science (SEECS) from 1998
to 2003, which I founded by merging several programmes at the University of
Central Florida. I was in contact with the neighbouring modelling and simulation
facilities for the US Army where much of the training is heavily computerised.
I was asked first to sit on this TAB for a year, and then to chair it for four years
until I moved to Imperial College. This offered possibilities for interaction with
a sophisticated organisation in areas such as virtual and enhanced reality, games,
simulation, networking and distributed computing. At UCF I was also Associate
Dean of Engineering and headed an organisation with a total of 2200 students,
nearly 100 instructors, three Master’s and three PhD programmes, and four distinct
undergraduate degree programmes in Computer Science, Computer Engineering,
Electrical Engineering and Information Technology. In the five years I
was there, we secured $15 million for a new building, and it was nice to participate in
its design. I visited it in April 2010, and enjoyed seeing my “creation”.
\\ \\
In 2003, when I joined Imperial College, the UK Government had decided
to transfer much of its Defense related research (except for the top secret stuff) to
Universities. I was one of the writers of a proposal to start a research centre around
Imperial College joining with other universities and industry, and a budget of £10
Million per year, with half of it from three companies: BT, General Dynamics UK
Ltd and QinetiQ, and the centre ran until 2009. Since I was involved in the inception,
I became a Member of the Executive and of the Science Board of this Data and
Information Fusion Defence Technology Centre. It meant weekly travel around
the UK, and was a good way for me to “immigrate” more rapidly. Now that it has
ended, I appreciate the opportunity to spend more time on my own research.
\\ \\
__CC__: What gives you the most pleasure in your academic work?
\\ \\
__EG__: I think that we are so lucky to have such a fun job. That’s probably why we
are not good at getting a decent salary in relation to the number of hours that we
put into it. Exercising my curiosity and learning new things, being able to talk to
experts about subjects that are new to me, being a bit of a clown when I lecture, and
enjoying the interest of young people, these are some of the things I really enjoy
in my work.
\\ \\
__CC__: Did you ever miss a target? How did you cope?
\\ \\
__EG__: I miss targets all the time, and one reason is that I am rather dispersed as
to the subjects that interest me. Right now they are Computer Networks, Gene
Regulatory Networks [24], Viruses in nature and in computers, Economics [27],
Synthetic Chemistry [25], and a few other things (!)... I have given up pursuing
conference deadlines. I try to publish papers on my work in the best relevant
journals, and I benefit from serious refereeing and criticism: referees are my best
teachers these days!
\\ \\
__CC__: Many thanks!
\\ \\
__References__
\\ \\
[1|#1] E. Gelenbe “On languages defined by linear probabilistic automata”, ''Information
and Control'', 16: 487-501, 1970.\\
[2|#2] E. Gelenbe “A unified approach to the evaluation of a class of replacement algorithms”,
''IEEE Trans. Computers'', Vol. C-22 (6): 611-618, 1973.\\
[3|#3] E. Gelenbe “On approximate computer system models”, ''Journal ACM'', 22(2): 261-
269, 1975.\\
[4|#4] E. Gelenbe, R. Muntz “Probabilistic models of computer systems - Part I”, ''Acta
Informatica'', 7: 35-60, 1976.\\
[5|#5] G. Fayolle, E. Gelenbe, J. Labetoulle “Stability and optimal control of the packet
switching broadcast channel”, ''Journal ACM'', 24(3): 375-386, 1977.\\
[6|#6] E. Gelenbe “On the optimum check-point interval”, ''Journal ACM'', 26 (2): 259-270,
1979.\\
[7|#7] E. Gelenbe, R. Iasnogorodski “A queue with server of walking type (autonomous
service)”, ''Annales de l’Institut Henri Poincaré'', Série B, XVI (1): 63-73, 1980.\\
[8|#8] E. Gelenbe, G. Hebrail “A probability model of uncertainty in data bases”, ''Proc.
Second International Conference on Data Engineering'', pp. 328-333, Los Angeles,
CA, Feb. 5-7, IEEE Computer Society, 1986, ISBN: 0-8186-0655-X.\\
[9|#9] E. Gelenbe “Réseaux stochastiques ouverts avec clients négatifs et positifs, et
réseaux neuronaux”, ''C.R. Acad. Sci. Paris'', t. 309, Série II, 979-982, 1989.\\
[10|#10] E. Gelenbe “Random neural networks with positive and negative signals and product
form solution”, ''Neural Computation'', 1 (4): 502-510 (1989).\\
[11|#11] E. Gelenbe and A. Stafylopatis “Global behavior of homogeneous random neural
systems,” ''Applied Mathematical Modelling'' 15 (10): 534-541 (1991).\\
[12|#12] E. Gelenbe “Product form queueing networks with negative and positive customers”,
''Journal of Applied Probability'', 28: 656-663 (1991).\\
[13|#13] V. Atalay, E. Gelenbe, N. Yalabik “The random neural network model for texture
generation”, ''International Journal of Pattern Recognition and Artificial Intelligence'',
6 (1): 131-141 (1992).\\
[14|#14] E. Gelenbe “Learning in the recurrent random network”, ''Neural Computation'', 5:
154-164 (1993).\\
[15|#15] E. Gelenbe “G-networks: A unifying model for queueing networks and neural networks,”
''Annals of Operations Research'', 48 (1-4): 433-461 (1994).\\
[16|#16] J.M. Fourneau, E. Gelenbe, R. Suros “G-networks with multiple classes of negative
and positive customers,” ''Theoretical Computer Science'', 155: 141-156 (1996).\\
[17|#17] E. Gelenbe, X. Mang, R. Önvural “Diffusion based Call Admission Control in
ATM”, ''Performance Evaluation'', 27 & 28: 411-436 (1996).\\
[18|#18] E. Gelenbe, M. Sungur, C. Cramer, P. Gelenbe “Traffic and video quality in adaptive
neural compression”, ''Multimedia Systems'', 4: 357-369 (1996).\\
[19|#19] E. Gelenbe, A. Labed “G-networks with multiple classes of signals and positive
customers”, ''European Journal of Operations Research'', 108(2): 293-305 (1998).\\
[20|#20] E. Gelenbe, C. Cramer “Oscillatory corticothalamic response to somatosensory input”,
''Biosystems'', 48 (1-3): 67-75 (1998).\\
[21|#21] E. Gelenbe, J.M. Fourneau “Random neural networks with multiple classes of signals,”
''Neural Computation'', 11 (4):953-963 (1999).\\
[22|#22] E. Gelenbe, “Cognitive Packet Network”, U.S. Patent 6,804,201, Oct. 11, 2004.\\
[23|#23] E. Gelenbe, M. Gellman, R. Lent, P. Su “Autonomous smart routing for network
QoS”, ''Proc. First International Conference on Autonomic Computing'', IEEE Computer
Society, 232-239, Washington D.C., 2004.\\
[24|#24] E. Gelenbe “Steady-state solution of probabilistic gene regulatory networks”, ''Physical
Review E'', 76(1), 031903 (2007).\\
[25|#25] E. Gelenbe “Network of interacting synthetic molecules in equilibrium”, ''Proc.
Royal Society A'' (Mathematical and Physical Sciences) 464:2219–2228, 2008.\\
[26|#26] E. Gelenbe and S. Timotheou “Random neural networks with synchronized interactions”,
''Neural Computation'' 20: 2308–2324, 2008.\\
[27|#27] E. Gelenbe “Analysis of single and networked auctions”, ''ACM Trans. Internet Technology'',
9(2), 2009.\\
[28|#28] E. Gelenbe “Steps toward self-aware networks”, ''Communications ACM'', 52(7):66-
75, 2009.