In this post I’ll explore a concept I discovered in a very interesting writeup by Martin Escardo on Andrej Bauer’s blog. The title of this post is an homage to the title used by Martin Escardo: “Seemingly impossible functional programs”.
I have made the following changes from the original post:
- I don’t use point-set topology. Instead I provide a more direct proof for the existence of a modulus of uniform continuity.
- The accompanying implementation is in a non-lazy programming language (C++) which, I believe, makes the algorithms easier to understand.
The Setup and the Punchline
In this post we consider Turing Machines that take an infinite bit sequence as input and produce one bit as output¹. Henceforth in this post “Turing Machine” will be abbreviated as “TM” and the set of all infinite bit sequences will be denoted by 2^ℕ.
An infinite bit sequence, σ, is an oracle that takes as input an index i ∈ ℕ, and returns σ(i), the bit in the bit sequence at index i. Not all infinite bit sequences are computable, so not all can be implemented by a TM².
Furthermore, we only consider TMs that halt for all input bit sequences. That is, we only consider TMs that compute total functions of type 2^ℕ → 2. We’ll call the set of these TMs T.
Let Idx(M, σ) be the set of indices an M ∈ T queries during its execution on input σ. It is fairly obvious that for every σ, Idx(M, σ) is finite: since M(σ) terminates, it makes only finitely many queries, and therefore max(Idx(M, σ)) is a finite natural number.
However, a less obvious fact is that for any M ∈ T there is an input independent upper bound on the indices it will probe its input at! That is, we can compute an N ∈ ℕ such that ∀σ ∈ 2^ℕ, max(Idx(M, σ)) < N.
For the rest of the post, the assertion made above will be referred to as the theorem.
Some Examples and Intuition
It may be helpful to first try to “break” the theorem for intuition.
Consider an M ∈ T that reads an integer k from the first 8 bits of the bit sequence and uses that to compute an index, k, into the bit sequence. In this case the constant upper bound on the probed indices is 2^8 = 256.
We could try to use some variable length encoding for the index that does not “bake in” a bitwidth by e.g. interpreting the prefix 1^k 0 as the integer k. In this case you can also construct a bit sequence, 1 repeated infinitely, for which the TM does not halt. This violates our precondition that the TM must halt for all input bit sequences.
Informally, an M ∈ T needs to “decide” whether to ask for bit k based on the first k bits only. If M does not have an input independent upper bound on the maximum index it inspects, it must read bit k (or greater) for some input bit sequence, and so must ask for bit k for some value of the first k bits. A malicious input oracle should be able to pick this value for the first k bits and force M to ask for bit k. Furthermore the malicious oracle should be able to do this for all k, causing M to never halt. The proof is just a more rigorous version of this argument.
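To make the two attempts above concrete, here is a small C++ sketch (my own illustrative code, not part of the accompanying implementation), modelling a TM’s input as a callback from index to bit:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>

// An infinite bit sequence, modelled as an oracle from index to bit.
using Oracle = std::function<bool(uint64_t)>;

// First attempt: read an 8-bit integer k from bits 0..7, then probe bit k.
// The largest index this can ever touch is 255, so the bound N = 256 holds.
bool FixedWidthProbe(const Oracle &input) {
  uint64_t k = 0;
  for (uint64_t i = 0; i < 8; i++)
    k = (k << 1) | (input(i) ? 1 : 0);
  return input(k);
}

// Second attempt: decode k from the unary prefix 1^k 0, then probe a later
// bit.  This has no input-independent bound -- but it also fails to halt on
// the all-ones sequence, so it is not total and does not break the theorem.
bool UnaryProbe(const Oracle &input) {
  uint64_t k = 0;
  while (input(k))  // loops forever if the input is all ones
    k++;
  return input(k + 1);
}
```

For example, on the oracle that returns 1 at even indices, `FixedWidthProbe` reads the prefix 10101010 (= 170) and probes bit 170, while `UnaryProbe` stops decoding at index 1 and probes bit 2.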
Proof by Contradiction
The proof will proceed by showing that if an M ∈ T does not have an input independent upper bound on the indices it accesses, then there is an input for which M does not halt. This gives us a proof by contradiction because all M ∈ T are supposed to be halting, by assumption.
For a finite bitvector p, let C(p) be the set of infinite bit sequences that have p as a prefix. Let MaxIdx(M, σ) be max(Idx(M, σ)), with M ∈ T and σ ∈ 2^ℕ.
Formally, “M does not have an input independent upper bound on the indices it accesses” can be stated as: ∀N ∈ ℕ, ∃σ ∈ 2^ℕ such that MaxIdx(M, σ) ≥ N.
Given such an M, we construct an infinite bit sequence, NH (for “Not Halting”), that “computes” (read on for why I have scare quotes here) the bit at index i as follows:
- It keeps some state that persists across executions – an integer n (initial value: 0) and a bitvector p (initial value: the empty bitvector). p is a finite prefix of the infinite bit sequence NH represents, and is the portion of the infinite bit sequence that it has already “committed to”; n is the length of p.
- If i < n then it returns p(i) (i.e. it returns the answer that it has already committed to).
- Otherwise it finds a bitvector q of length i + 1 that is an extension of p³ (there are 2^(i+1−n) such q’s), such that the set of indices M probes on C(q) has no upper bound. That is, ∀N ∈ ℕ, ∃σ ∈ C(q), MaxIdx(M, σ) ≥ N. This predicate will later be referred to as Unbounded(q).
- It sets p to q and n to i + 1.
- It returns p(i).
Firstly, note that step (3) is not an algorithm, i.e. it is not obvious that it can be computed by a TM. There may be some clever way around this (or a proof showing that there isn’t), but so far I haven’t found one. So this proof only shows the existence of such a bit sequence in 2^ℕ, but does not prove that it is computable.
Secondly, (3) implies that M(NH) will not halt. This is because if M were to halt when evaluating M(NH), it would do so having probed only indices below n, the length of the committed prefix p; M then behaves identically on every σ ∈ C(p), so we would have ∀σ ∈ C(p), MaxIdx(M, σ) < n, which contradicts the condition Unbounded(p) established in (3).
Finally we need to prove that (3) always succeeds. We can prove this via induction.
Let S(k) be the assertion that “step (3) succeeds when the oracle is called for the k-th time”. Without loss of generality we assume the oracle is called with increasing indices (calls with i < n never reach step (3)).
In the very first call p is the empty bitvector and C(p) is all of 2^ℕ. Thus Unbounded(p), i.e. ∀N ∈ ℕ, ∃σ ∈ 2^ℕ, MaxIdx(M, σ) ≥ N, is just our “M does not have an input independent upper bound on the indices it accesses” precondition and is true by assumption.
Let {0,1}^d be the set of all bitvectors of length d, with d ∈ ℕ. Let a ⋅ b mean “the concatenation of the two bitvectors a and b”.
We will be using the notation from the definition of NH above – p is the cached prefix from the previous call, i is the index of the bit this call has to produce etc. Let d be i + 1 − n. This is the length by which we will have to extend p to get q.
The proof is by contradiction. Let’s assume we are unable to find a d sized extension of p in the k-th invocation of the oracle.
Not being able to find a d sized extension of p means that ∀e ∈ {0,1}^d, ¬Unbounded(p ⋅ e); call this negation Bounded(p ⋅ e). In other words, step (3) fails when there is no way to extend p (of length n) by e (of length d) such that Unbounded(p ⋅ e) holds.
Expanding Bounded, we get ∀e ∈ {0,1}^d, ∃N_e ∈ ℕ, ∀σ ∈ C(p ⋅ e), MaxIdx(M, σ) < N_e. Setting N to be the maximum of all the N_e’s, we get ∀e ∈ {0,1}^d, ∀σ ∈ C(p ⋅ e), MaxIdx(M, σ) < N. This further simplifies to ∀σ ∈ ⋃_{e ∈ {0,1}^d} C(p ⋅ e), MaxIdx(M, σ) < N. But ⋃_{e ∈ {0,1}^d} C(p ⋅ e) is just C(p), so we really have ∀σ ∈ C(p), MaxIdx(M, σ) < N.
But this contradicts Unbounded(p), which holds by the induction hypothesis, so we must be able to find a d sized extension of p!
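Putting the pieces together, NH can be written as C++-flavoured pseudocode. `FindUnboundedExtension` below is exactly step (3): we just proved it always has an answer, but we have no algorithm for it, so this is a description, not a program:

```cpp
// Pseudocode -- not executable, because FindUnboundedExtension (step (3))
// is not known to be computable.
std::vector<bool> p;  // the committed prefix; n is p.size()

bool NH(uint64_t i) {
  if (i < p.size())
    return p[i];  // step (2): an answer we have already committed to

  // Step (3): extend p to a bitvector q of length i + 1 with Unbounded(q),
  // i.e. the indices M probes on inputs in C(q) have no upper bound.
  // Step (4): commit to q.
  p = FindUnboundedExtension(p, i + 1);

  return p[i];  // step (5)
}
```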
Making it Real
Given the theorem, we can write an algorithm that computes what is called the “modulus of uniform continuity” in the original post by Martin Escardo. The modulus of uniform continuity is the smallest N such that σ =_N τ ⟹ M(σ) = M(τ), where σ, τ ∈ 2^ℕ, and σ =_N τ means the first N bits of σ and τ are identical. This modulus is really just the (smallest) N we saw earlier. Since M only looks at the first N bits of its input (at most), it can’t distinguish between two inputs that differ only at indices at or above the modulus.
To compute the modulus of uniform continuity, we first define a way to construct a special oracle, O_p, from a finite bitvector prefix p:
- For indices i < |p|, O_p returns p(i).
- For indices i ≥ |p|, O_p returns a sentinel value.
- We modify M so that the sentinel value causes M to return early with a sentinel value as well. This modification can be done mechanically, e.g. by a compiler.
- O_p keeps track of the indices for which it returned the sentinel value, for later examination.
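Here is a minimal sketch of such an oracle in C++. The class and member names are mine, not necessarily those of the accompanying implementation; an empty `std::optional` plays the role of the sentinel:

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <set>
#include <vector>

// An oracle built from a finite prefix p.  Queries inside the prefix are
// answered from it; queries past the end return the sentinel (std::nullopt)
// and are recorded for later examination.
class PrefixOracle {
 public:
  explicit PrefixOracle(std::vector<bool> prefix) : prefix_(std::move(prefix)) {}

  std::optional<bool> operator()(uint64_t i) {
    if (i < prefix_.size())
      return prefix_[i];
    sentinel_queries_.insert(i);
    return std::nullopt;
  }

  const std::set<uint64_t> &sentinel_queries() const { return sentinel_queries_; }

 private:
  std::vector<bool> prefix_;
  std::set<uint64_t> sentinel_queries_;
};
```

A “sentinel aware” TM then has a type like `std::optional<bool>(PrefixOracle &)`, and returns `std::nullopt` as soon as one of its queries does.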
Using O_p we can compute the modulus of uniform continuity for an M ∈ T as follows:
1. Set n to 0.
2. For each p in {0,1}^n:
   - Let O be O_p.
   - Execute M(O).
   - If M(O) returned the sentinel value:
     - Set n to n + 1 and goto 2.
3. Return n.
This always terminates: if N is the modulus of uniform continuity, M will not query any index ≥ N for any input, so once n reaches N no run of M(O) returns the sentinel value and the algorithm will not increment n any further.
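The loop above can be sketched in C++ as follows. This is illustrative code with made-up names, not the accompanying implementation; a “sentinel aware” TM is modelled as a function that returns `std::nullopt` as soon as its input does:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <optional>

// A "sentinel aware" TM: it reads bits through a callback that may return
// std::nullopt (the sentinel), and in that case gives up and returns
// std::nullopt itself.
using MaybeOracle = std::function<std::optional<bool>(uint64_t)>;
using MaybeTM = std::function<std::optional<bool>(const MaybeOracle &)>;

// Compute the modulus of uniform continuity of m by running it on every
// length-n prefix, for increasing n, until no run queries the sentinel.
uint64_t Modulus(const MaybeTM &m) {
  for (uint64_t n = 0;; n++) {
    bool hit_sentinel = false;
    // Enumerate all 2^n prefixes of length n, encoded in the bits of `bits`.
    for (uint64_t bits = 0; bits < (uint64_t{1} << n); bits++) {
      MaybeOracle oracle = [&](uint64_t i) -> std::optional<bool> {
        if (i < n)
          return (bits >> i) & 1;
        hit_sentinel = true;  // query past the prefix: return the sentinel
        return std::nullopt;
      };
      m(oracle);
    }
    if (!hit_sentinel)
      return n;  // no run needed bits beyond the first n
  }
}

// Example: a TM that XORs bits 1 and 3.  Its modulus is 4, since the first
// four bits of the input determine its answer.
std::optional<bool> Xor13(const MaybeOracle &in) {
  auto a = in(1);
  if (!a) return std::nullopt;
  auto b = in(3);
  if (!b) return std::nullopt;
  return *a != *b;
}
```

Instead of the “goto 2” in the prose, the sketch simply finishes the current batch of prefixes before incrementing n; the result is the same.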
With this algorithm we can implement a variety of things that seem impossible at first glance:
- Given an M ∈ T we can either produce a σ such that M(σ) is 1 or definitely say that there is no such σ.
- We can check if two TMs M₀, M₁ ∈ T compute identical functions. We can do this by checking if the TM σ ↦ (M₀(σ) = M₁(σ)) returns false for any σ.
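Both of these can be built on an exhaustive search over prefixes. Here is an illustrative sketch with made-up names (the `ForSome` in the accompanying implementation, described below, is more optimized); a definite answer computed from a finite prefix holds for every extension of that prefix, which is what makes the search terminate:

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <optional>

using MaybeOracle = std::function<std::optional<bool>(uint64_t)>;
using MaybeTM = std::function<std::optional<bool>(const MaybeOracle &)>;

// Decide whether m(s) == true for some infinite bit sequence s.  We search
// prefixes in breadth-first order: a run that returns a definite answer
// only looked at the prefix, so that answer holds for every extension.
bool ForSome(const MaybeTM &m) {
  for (uint64_t n = 0;; n++) {
    bool all_definite = true;
    for (uint64_t bits = 0; bits < (uint64_t{1} << n); bits++) {
      MaybeOracle oracle = [&](uint64_t i) -> std::optional<bool> {
        if (i < n) return (bits >> i) & 1;
        return std::nullopt;  // sentinel: the run needs more bits
      };
      std::optional<bool> r = m(oracle);
      if (r == std::optional<bool>(true)) return true;  // witness prefix found
      if (!r) all_definite = false;                     // needed more bits
    }
    // By the theorem this eventually happens: once n reaches the modulus,
    // every run is definite.  If none was true, no input makes m true.
    if (all_definite) return false;
  }
}

// Two TMs compute the same function iff they never disagree, i.e. iff
// their pointwise inequality is never true.
bool Equal(const MaybeTM &a, const MaybeTM &b) {
  MaybeTM differ = [&](const MaybeOracle &in) -> std::optional<bool> {
    auto ra = a(in), rb = b(in);
    if (!ra || !rb) return std::nullopt;
    return *ra != *rb;
  };
  return !ForSome(differ);
}
```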
A C++ Implementation
I have implemented the ideas in this blog post in C++. Here is a quick overview in case you want to take a look:
- The `BitSequence` class models an oracle that can be queried. However it is not really an oracle since it only represents computable bit sequences.
- We use `std::optional` as the sentinel value. For convenience we have an `ASSIGN_OR_RETURN` macro (as a poor man’s `Maybe` monad) to reduce boilerplate.
- The fundamental primitive is `ForSome`, which decides whether a function of type 2^ℕ → 2 returns true for any input. This can, in turn, be used to implement the predicate `ForEvery(M)` = ∀σ ∈ 2^ℕ, M(σ), which lets us compute the modulus of uniform continuity as the smallest N such that ∀σ, τ ∈ 2^ℕ, σ =_N τ ⟹ M(σ) = M(τ).
- Finally, the C++ implementation has an optimization in `ForSome` that makes it search only across the bits actually requested by the TM, not across all bits with indices less than the currently largest requested index (the search is still exponential, but only in the number of bits the TM actually requested).
I’d encourage you to implement this in your favorite programming language; it is a fun exercise!
What I Would Like to Understand Better
I’d like to understand the role of computability here. To prove the theorem we exploited the fact that the input bit sequence does not need to be computable. Can the proof be strengthened? Or is the proof “tight”, and do there exist TMs with an unbounded modulus of continuity if we restrict their inputs to computable bit sequences?