This section contains no technical details. Impatient readers may wish to skip ahead.
The power of a programming notation derives fundamentally from the ability to flexibly combine many small chunks of code to produce a number of potential computations which is exponential in the length of the program.
The tools which allow us to do this are the abstraction mechanisms provided by that notation, and the character of a given programming notation is largely determined by its abstraction mechanisms.
There are not many fundamentally different abstraction mechanisms in common use: It is worth reviewing them before explaining why we need another one.
The simplest form of computation consists of a fixed sequence of operations on some constant set of operands. When we punch up "93 sin" on our hand calculator, we are performing this sort of computation. Useful, but terribly limited. We rarely bother writing down such computations for later use.
The simplest abstraction mechanism consists of introducing variables into our programming notation. This lets us write down a complex expression and then evaluate it repeatedly for different values of the variables. "How much money will I make if interest rates stay at five percent? How much at five and a half?"
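As a minimal sketch of the idea (in Python, purely for illustration -- the principal, term, and rates are invented), the same expression re-evaluated under different variable bindings answers both questions:

```python
# One fixed formula, evaluated repeatedly for different variable values.
# The figures below are hypothetical.
principal = 1000.00
years = 10

for rate in (0.05, 0.055):  # five percent, then five and a half
    balance = principal * (1 + rate) ** years
    print(f"At {rate:.1%}: {balance:.2f}")
```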
The next abstraction mechanism consists of the conditional statement: An if-then-else of some sort that allows us to perform different computations for different values of selected variables. Suddenly it becomes possible to write vastly more flexible programs!
The next abstraction mechanism usually introduced consists of the procedure: A subcomputation which can be defined once, named, and then used repeatedly. This lets us in essence define a new language as we go along, specialized to our particular needs. Variables, conditionals and procedures together provide us with a universal programming notation which in principle allows us to describe any possible computation. Further abstractions are conveniences rather than necessities, which is why they took much longer to get established.
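As an illustrative sketch (again in Python, with invented names), the three mechanisms combine into a reusable vocabulary: a named procedure over variables, with a conditional selecting between computations:

```python
# A named subcomputation: defined once, then used repeatedly.
def future_value(principal, rate, years, compound=True):
    # A conditional selects between two different computations.
    if compound:
        return principal * (1 + rate) ** years
    return principal * (1 + rate * years)

# The procedure behaves like a new word added to the language,
# reusable for any values of its variables.
print(future_value(1000.00, 0.05, 10))
print(future_value(1000.00, 0.055, 10, compound=False))
```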
Object-oriented programming was the next major abstraction mechanism to achieve mainstream acceptance. The heart of object-oriented programming consists of mechanisms to facilitate the definition and naming of new datastructures together with operations on them -- much as procedures let us define and name new codestructures -- and the construction of code which automatically invokes the operations appropriate to the datastructure in use: To create code that within certain limits doesn't care what kinds of data it operates on. Conventional procedural code is written in terms of fixed, prespecified procedures and datastructures, but object-oriented programming lets us write code in which the procedures and datastructures are, in effect, variables. Once upon a time, this would have been called "meta-programming" ...
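A brief sketch of what "code that doesn't care what kinds of data it operates on" looks like in practice (Python again, with invented class names):

```python
import math

# Two datastructures, each defining its own version of one operation.
class Circle:
    def __init__(self, radius):
        self.radius = radius
    def area(self):
        return math.pi * self.radius ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# This code never asks which kind of shape it has been handed:
# shape.area() automatically invokes the operation appropriate
# to the datastructure in use.
def total_area(shapes):
    return sum(shape.area() for shape in shapes)

print(total_area([Circle(1.0), Square(2.0)]))
```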
(Functional programming provides even more powerful abstraction mechanisms, but is not yet part of mainstream programming practice, so we will ignore it for now.)
Why do we need yet another abstraction mechanism?
Fundamentally, to deal with the problem of very large programs.
You will have noted that each level of abstraction we have discussed has enabled the construction of programs ten or a hundred times larger than before.
But we are entering an era of very large computations involving millions to billions of lines of code. The entire Internet is becoming an integrated computational engine: A user sitting at a WWWeb browser may with a few mouseclicks invoke computations on machines scattered all over the planet. CORBA (the Common Object Request Broker Architecture) is extending the object-oriented programming paradigm to the Internet scale, allowing intricate computations spanning the net.
Each order-of-magnitude change in computation scale has in the past introduced qualitatively new problems requiring qualitatively new abstraction mechanisms: We may be reasonably confident that this trend will continue as we proceed to these even larger computations.
One particular problem introduced by netscale computations is that of reliability. We may at least entertain fantasies of proving large computations correct, eliminating the possibility of surprise failures.
There is no possibility of proving very large computations correct, even in principle: The work required for such proofs grows exponentially with the size of the program, and is simply infeasible in the very large regime. This would be true even if programs in this size range remained unchanged long enough for such a proof to be attempted -- which of course they do not, Internet computational facilities being constantly updated in uncoordinated and decentralized fashion.
Thus, we must accept that the path of very large computations will always be uncertain and exploratory, involving negotiations between unrelated and mismatched chunks of code, and not infrequently consultation with the user.
This class of problem is elegantly handled by the Common Lisp Condition System, on which the Muq event system is closely modelled. The event system provides a mechanism by which code may signal unusual conditions, and by which other code may field and resolve them.
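As a rough approximation of the idea borrowed from the Common Lisp Condition System -- handlers that can repair a problem and let the interrupted computation resume, rather than simply unwinding it -- consider this Python sketch. All of the names here are invented for illustration; they are not Muq's actual interface:

```python
# A minimal sketch of condition-system-style recovery: the signalling
# code offers named "restarts" (recovery strategies) and an outer
# handler picks one, letting the computation continue from the point
# of the problem instead of unwinding the stack.  All names here are
# invented for illustration, not Muq's actual event-system interface.

handlers = []   # a dynamically scoped stack of condition handlers

def signal(condition, restarts):
    """Offer the innermost handler a choice of recovery strategies."""
    for handler in reversed(handlers):
        choice = handler(condition)
        if choice in restarts:
            return restarts[choice]()   # resume via the chosen restart
    raise RuntimeError(condition)       # no handler volunteered: give up

def parse_number(text):
    try:
        return float(text)
    except ValueError:
        # Negotiate: offer two ways for an outer layer to repair this.
        return signal(f"bad number: {text!r}",
                      {"use-zero": lambda: 0.0,
                       "retry-stripped": lambda: parse_number(text.strip("$ "))})

# Policy lives in the outer layer, far from the parsing code.
handlers.append(lambda condition: "retry-stripped")
print(parse_number("$12"))   # the handler repairs it: prints 12.0
```

The "consultation with the user" of the previous paragraph maps naturally onto this scheme: a handler may present the available restarts to a human and let them choose.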