The excellent talk was videoed, and should be online at some point. Here I just want to pick up on the points that particularly resonated with me.
She started off talking about the software crisis of the early 1970s. There have been several software crises: this was probably the first named one. People didn't know how to write large pieces of software, there was no methodology, and the programming languages of the day didn't help. Dijkstra had published his classic paper, Go To Statement Considered Harmful, in Comms ACM in 1968, calling attention to one aspect of the problem. Nowadays everyone has heard of this paper; "considered harmful" in a title has become a CS trope.
What I didn't realise about his paper, however, was the reaction to it. Liskov explained that many programmers were insulted: of course they could write understandable programs with gotos. But more interestingly, there was doubt about whether it was even possible to write all programs without gotos. Today, of course, this problem is solved; we deride "spaghetti programming", and instead use languages that incorporate structured gotos, encapsulated in constructs such as if, for, while, break, continue, and try. (Although OO allows the possibility of spaghetti messaging.)
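To make the contrast concrete, here is a minimal sketch of my own (not from the talk), in Go, one of the few modern languages that still retains a goto. The same loop is written first with explicit jumps, then with the structured construct that encapsulates them:

```go
package main

import "fmt"

// sumToGoto computes 1+2+...+n with explicit jumps, the style Dijkstra
// argued against: the reader must trace arrows back and forth through
// the text to follow the control flow.
func sumToGoto(n int) int {
	i, total := 1, 0
loop:
	if i > n {
		goto done
	}
	total += i
	i++
	goto loop
done:
	return total
}

// sumTo computes the same thing with a structured for loop: the gotos
// are still there underneath, but encapsulated, so the shape of the
// text mirrors the shape of the computation.
func sumTo(n int) int {
	total := 0
	for i := 1; i <= n; i++ {
		total += i
	}
	return total
}

func main() {
	fmt.Println(sumToGoto(10), sumTo(10)) // both print 55
}
```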
Liskov went on to talk about her own contributions to teasing out what was needed to structure code: the abstract data type. She and her team designed and implemented the influential language CLU (short for "cluster", the name of its abstraction mechanism), which included ADTs, static type checking, separate compilation, polymorphism, iterators, and exception handling --- but no goto statement.
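CLU syntax would look unfamiliar today, so purely as an illustration of the idea (my sketch, not CLU code), here is the shape of a cluster rendered in Go: a set-of-integers abstract data type whose representation is hidden behind its operations.

```go
// Package intset sketches a CLU-style cluster in Go: an abstract data
// type whose representation is visible only inside this package.
package intset

// IntSet is the abstraction; its representation is the unexported
// field below, which code outside the package cannot touch.
type IntSet struct {
	elems map[int]bool
}

// New returns an empty set.
func New() *IntSet { return &IntSet{elems: make(map[int]bool)} }

// Insert adds n to the set.
func (s *IntSet) Insert(n int) { s.elems[n] = true }

// Contains reports whether n is in the set.
func (s *IntSet) Contains(n int) bool { return s.elems[n] }

// Size returns the number of elements in the set.
func (s *IntSet) Size() int { return len(s.elems) }
```

Clients can only go through New, Insert, Contains and Size; swap the map for a sorted slice and no client code changes. That separation of interface from representation is the idea CLU made linguistic.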
For this and subsequent work, Liskov won the prestigious Turing Award in 2008. She said that when the award was announced, some student commented: "What did she get this award for? Everyone knows this, anyway!" Precisely: everyone does, now.
Liskov finished off by talking about the present. We have seen Moore's Law take a right-angled turn recently: instead of chips getting smaller and faster, they have gone multi-core. In the past, we have not had to worry too much about parallel processing, except in certain specialised domains, because Moore's Law would provide a single processor with the required power within a few years. Now, for the first time, we have to bite the bullet of parallelism in everyday computing. Liskov is now working on methodologies and language support for parallel computing, not for systems with just 4 or 8 cores, but for systems with hundreds or thousands of cores.
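Her methodology work is well beyond a blog post, but to fix the scale of the problem, here is an everyday-parallelism sketch of mine in Go: fanning a trivial computation out over a thousand concurrent tasks. The ease of writing it is exactly the trap; the language gives little help in visualising the vast number of interleavings it permits.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	const n = 1000 // imagine one task per core
	results := make(chan int, n)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) { // one concurrent task per value
			defer wg.Done()
			results <- i * i
		}(i)
	}
	wg.Wait()
	close(results)
	total := 0
	for r := range results {
		total += r
	}
	fmt.Println(total) // sum of squares 0..999
}
```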
In my mind, this folded neatly back to the beginning of her talk, and a relevant quote from Dijkstra's paper:
Our powers to visualize processes evolving in time are relatively poorly developed. For this reason we should do ... our utmost to shorten the conceptual gap between the static program and the dynamic process, to make the correspondence between the program (spread out in text space) and the process (spread out in time) as trivial as possible. ...
The unbridled use of the go to statement has an immediate consequence that it becomes terribly hard to find a meaningful set of coordinates in which to describe the process progress.
Back in the 1980s I was using occam, a parallel programming language. One thing that struck me forcibly at the time was that the traditional linear textual form of occam code was completely divorced from the underlying static parallel structure (let alone the parallel temporal execution structure). It was very easy to get lost in a spaghetti of communication channels. I was so irritated that I even developed a 2D visualisation of the program structure.
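occam itself is hard to show these days, but Go's channels descend from the same CSP tradition, so a rough modern analogue of a tiny occam pipeline (again my sketch, not occam) looks like this. Even at this scale the static structure, two processes joined by one channel, has to be reconstructed in the reader's head from the linear text; with dozens of channels it became spaghetti.

```go
package main

import "fmt"

// A loose Go rendering of a two-process occam pipeline: producer and
// consumer run in parallel (occam's PAR), communicating over a
// synchronous channel, as occam processes do.
func main() {
	c := make(chan int) // unbuffered: each send is a rendezvous
	go func() {         // producer process
		for i := 0; i < 5; i++ {
			c <- i * i // occam would write: c ! i*i
		}
		close(c)
	}()
	for v := range c { // consumer process; occam: c ? v
		fmt.Println(v)
	}
}
```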
So, what are the right static structures and abstraction principles to help us build dynamic parallel programs? There are already many languages with parallelism, but I suspect that in 40 years' time we will look back on them with the same pity we today reserve for the humble go to. Let's hope Liskov and her successors can find the answer.