[Image: "A quarter past six? Or is it?"]
There's a saying that "even a stopped clock is right twice a day". This is used to mean: even something completely unreliable can (accidentally) sometimes be right. The saying is sometimes cast as a paradox: "a stopped clock is better than a clock an hour slow, because a stopped clock is right twice a day, yet a clock an hour slow is never right!"
I want to explain how, in fact, a stopped clock is *never* right, and a clock an hour slow is *always* right. This requires us to think of a clock as a simple computer, computing the current time, and to ask ourselves: how can we tell the current time from the output of its computation?
A computation has three steps:
- initialisation: set up the computer to perform the task of interest
- operation: the computer does its thing
- finalisation: read off the answer from the computer
(Don't blame me for the step names; I didn't invent them! For those of you who are interested, this terminology comes from computational refinement theory.) For a clock, these steps are instantiated as:
- initialisation: set the clock to the current time
- operation: the clock does its thing, marking off the passing moments
- finalisation: read off the (now later) current time from the clock
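To make these three steps concrete, here is a minimal Haskell sketch of a correctly working clock. The modelling and all the names are my own, invented purely for illustration; times are just minutes since midnight.

```haskell
type Time = Int  -- minutes since midnight, for simplicity

minutesPerDay :: Int
minutesPerDay = 24 * 60

-- initialisation: set the clock to the current time
initialise :: Time -> Time
initialise now = now

-- operation: the clock ticks; after d minutes the display has advanced by d
operate :: Int -> Time -> Time
operate d display = (display + d) `mod` minutesPerDay

-- finalisation: interpret the display as a time (conventionally: just read it)
finalise :: Time -> Time
finalise display = display

-- the whole computation: what time it is, d minutes after setting the clock
tellTime :: Time -> Int -> Time
tellTime now d = finalise (operate d (initialise now))
```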
[Image: "It's nearly ten past ten. Or is it?"]
Notice how the finalisation step is *non-trivial*. The clock doesn't output "the time": it displays an output that requires some effort to be interpreted as the time. To read the time from my analogue wristwatch (yes, I still use a wristwatch, and yes, it has an analogue display), I have to convert the positions of the hands, relative to a standard vertical (12 o'clock!) position, into a time. This takes a (small amount of) skill: I can remember being taught how to "tell the time", that is, to read an analogue clock face, by my aunt when I was about five. Even reading the time from a digital face requires some processing: converting the displayed pattern of LED segments, or of pixels, into characters (ie, reading the displayed pattern as characters), and interpreting those characters ("12:30", say) as a time ("half past twelve").
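That hand-to-time conversion is itself a small computation. Here is a hedged sketch (invented names again) of the conventional finalisation for an analogue face, taking each hand's position as an angle in degrees clockwise from the 12 o'clock position:

```haskell
-- convert hand positions to (hours, minutes); the hour hand sweeps 30
-- degrees per hour, the minute hand 6 degrees per minute
readAnalogue :: Double -> Double -> (Int, Int)
readAnalogue hourAngle minuteAngle = (hours, minutes)
  where
    hours   = floor (hourAngle / 30) `mod` 12
    minutes = floor (minuteAngle / 6) `mod` 60

-- for example: readAnalogue 187.5 90 == (6, 15), a quarter past six
```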
This finalisation step is not the only one that can be applied, however; this is the key step in the argument. I realised it when I was attending a conference in Toulouse in 1999, and my watch was "broken". It hadn't stopped, but I couldn't change the time (I couldn't reinitialise it), so it was an hour slow (stuck on UK time). That is, when I interpreted its output using the conventional finalisation, the time I got was off by one hour.

Although the paradox insists that "a clock an hour slow is never right", I could nevertheless use my watch to tell the correct time. How? (The answer will be obvious to anyone who has used a sundial during daylight saving time.) By applying a different finalisation, one appropriate to the watch's actual initialisation. Here's the setup:
- initialisation: set my watch to the current UK time, so an hour behind the current French time
- operation: the watch does its thing, ticking off the passing moments
- finalisation: read off the (now later) time displayed by my watch, and add one hour
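In the Haskell sketch from earlier, this amounts to swapping in a different finalisation (the function name is mine):

```haskell
-- finalisation for my Toulouse watch: the display shows UK time, an hour
-- behind French time, so interpret it by adding sixty minutes
finaliseFrench :: Time -> Time
finaliseFrench display = (display + 60) `mod` minutesPerDay
```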
Voila! My watch was computing the correct French time, provided I finalised it correctly, that is, provided I correctly interpreted its output. Well, you might say, but how did you know to add the hour? Because I was the one who initialised it: I was the one who set up the computation. Other people looking at my watch would be confused, because they would be applying a different finalisation: the conventional one. But it is *merely a convention* (established to make it convenient to use clocks other than ones you have set yourself). In truth, you cannot tell the time by looking at a clock unless you have some extra information: which finalisation you need to apply to interpret its display as a time. In practice, applying the conventional finalisation works, most of the time.
By using even more powerful finalisations, we can compute the time using even more faulty watches. For example, if I have a watch that loses a minute every hour, I can still use it, by adding the correct number of minutes back on when I read the time. It is the combination of operation and finalisation that gives the resulting computation.
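As a hedged sketch of such a finalisation, assuming the watch runs at exactly 59/60 of true speed and that we know the true time at which it was last set correctly:

```haskell
-- a watch that loses one minute per hour runs at 59/60 of true speed: if it
-- was set correctly when the true time was start, and now displays d, then
-- the true elapsed time is the displayed elapsed time scaled up by 60/59
-- (ignoring wrap-around at midnight, for simplicity)
finaliseLosing :: Time -> Time -> Time
finaliseLosing start d =
  start + round (fromIntegral (d - start) * 60 / 59 :: Double)
```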
So how about the stopped clock? Can you use it by applying an even more powerful finalisation? No. There is *no* finalisation that allows you to read off the correct time. The clock is performing no operation: it is not marking the passing time, so in order to get the desired computation from it, all the work would have to be done in the finalisation alone, which would require using another clock! The stopped clock is *never* right, because there is *no* finalisation: no way to interpret the display. Fortuitously looking at it at the moment its display happens to show the current time does not make it right, not even coincidentally, because you have no way of interpreting its output.
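In the sketch's terms (names mine again), the stopped clock's operation discards the elapsed time, so any would-be finalisation has to be given the current time from elsewhere; that is, from another clock:

```haskell
-- the stopped clock: its operation ignores the elapsed time entirely
operateStopped :: Int -> Time -> Time
operateStopped _ display = display

-- any "finalisation" recovering the current time must be handed the current
-- time as an extra input, ie it needs another clock and does all the work
finaliseStopped :: Time -> Time -> Time
finaliseStopped currentTime _display = currentTime
```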
However, there *is* something that a stopped clock has computed: the time at which it stopped (subject to applying the correct finalisation to its display, the one that would have been used when it was working). This computation is a staple of many a TV cop show, used to establish the time of death of the newly discovered corpse from a conveniently smashed wristwatch (and the conventional finalisation being the *wrong* one is a cunning red herring in several detective novels).