this document outlines the main choices when designing recursive control.
each recursive construct can be analyzed along the following axes:
generative vs structural descent (both styles are sketched in code after this list)
structural: descent peels the input's own structure, so termination follows from structural decrease
peel and recomposition must satisfy a coherence condition
generative: descent follows a computed subproblem
requires an explicit well-founded metric over the full call state
termination and base-case coverage are independent obligations
every reachable state must monotonically approach the base surface
reachability may be a strict subset of the totality surface
traversal order, pruning, and pre-order effects become semantically observable
result algebra: how recursive results are combined
identity base value (the unit of the combining operator, e.g. 0 for addition) vs sentinel base value (a distinguished marker such as "not found")
order sensitivity and algebraic laws: associativity and commutativity of the combining operator determine whether traversal order and regrouping can change the result
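a minimal python sketch of these two axes, under assumptions made only for the example: list summation, a first-match search, and interval bisection stand in for the general cases, and the function names are illustrative.

```python
# structural descent: the recursion peels the input's own constructors,
# so termination follows from the shrinking structure; the base value 0
# is the identity of the result algebra (+)
def total(xs):
    if not xs:                        # base case: empty structure
        return 0                      # identity base value
    return xs[0] + total(xs[1:])      # peel one element, recompose with +

# structural descent with a sentinel base value: None marks "not found"
def find_first(xs, pred):
    if not xs:
        return None                   # sentinel base value
    return xs[0] if pred(xs[0]) else find_first(xs[1:], pred)

# generative descent: each call *computes* the next subproblem (an interval),
# so termination needs an explicit well-founded metric: hi - lo halves each
# call, and the base surface is "interval narrower than eps"
def bisect_root(f, lo, hi, eps=1e-9):
    mid = (lo + hi) / 2.0
    if hi - lo < eps:
        return mid
    if f(lo) * f(mid) <= 0:
        return bisect_root(f, lo, mid, eps)
    return bisect_root(f, mid, hi, eps)

# bisect_root(lambda x: x * x - 2, 0.0, 2.0)  ->  approximately 1.41421356
```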
each form is characterized by where work occurs and what state is explicit; several of the forms are sketched in code after this list.
post-order structural recursion
tail-recursive accumulator form
pre-order recursion
multi-branch recursion
mutual recursion
continuation-passing style
suspended recomposition made explicit as a value
converts post-order recursion into tail calls
enables early exit and non-local control
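a python sketch of four of these forms, with names invented for the example; the same trivial computations (a sum, a product, an even/odd test) are used only to make the placement of work and state visible.

```python
# post-order structural recursion: the + happens on the way back up,
# so each call leaves a suspended recomposition on the call stack
def total_post(xs):
    if not xs:
        return 0
    return xs[0] + total_post(xs[1:])

# tail-recursive accumulator form: the work moves to the way down and
# the carried state (acc) is explicit; nothing remains to do on return
def total_acc(xs, acc=0):
    if not xs:
        return acc
    return total_acc(xs[1:], acc + xs[0])

# continuation-passing style: the suspended recomposition is reified as k,
# every call is a tail call, and dropping k gives early, non-local exit
def product_cps(xs, k=lambda r: r):
    if not xs:
        return k(1)
    if xs[0] == 0:
        return 0                      # early exit: all suspended work is discarded
    return product_cps(xs[1:], lambda r: k(xs[0] * r))

# mutual recursion: descent alternates between two definitions
def is_even(n):
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)
```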
recursion factors into descent and ascent.
descent: chooses and orders the subproblems; pre-order work happens here
ascent: a fold over suspended recomposition contexts
governed by a result algebra
defines meaning and output structure
the call stack is an encoding of suspended recomposition, not a semantic primitive.
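a python sketch of this factoring with the call stack replaced by an explicit value; the expression shape (op, left, right) and the frame tags "eval"/"apply" are assumptions made for the example.

```python
import operator

# descent/ascent made explicit: "eval" frames drive descent, "apply" frames
# are the suspended recompositions, and the while loop folds them over the
# value stack exactly as the call stack would have
def evaluate(expr):
    todo = [("eval", expr)]            # control stack: pending descent and ascent steps
    values = []                        # result stack: ascent's partial results
    while todo:
        tag, payload = todo.pop()
        if tag == "eval":
            if isinstance(payload, (int, float)):
                values.append(payload)          # base case: a leaf is its own value
            else:
                op, left, right = payload
                todo.append(("apply", op))      # suspended recomposition, run last
                todo.append(("eval", right))
                todo.append(("eval", left))     # left popped (and evaluated) first
        else:                                   # "apply": one step of the ascent fold
            right, left = values.pop(), values.pop()
            values.append(payload(left, right))
    return values.pop()

if __name__ == "__main__":
    print(evaluate((operator.add, 2, (operator.mul, 3, 4))))   # -> 14
```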
some recursive designs make three roles explicit and independently configurable: how recursion chooses the next subproblems, how results are combined, and what information survives across calls. this treats recursion as a small control language rather than a single fixed pattern.
how recursion chooses the next subproblems:
choose-children: a function that selects which recursive calls to make
prune?: a predicate that stops recursion on a branch
rewrite-before-descent: transform a node before recursing
traversal order: order in which chosen children are visited
examples: left-to-right, right-to-left, depth-first, breadth-first
how results are combined:
rebuild: return a structure of the same shape with transformed elements
summarize: fold child results into a single value
collect: produce a flat list of selected elements
short-circuit: stop combining when a condition is met
examples: stop at the first match, stop when an accumulator reaches a bound
what information survives across calls:
depth: pass the current nesting level
context: pass the path from the root to the current node
accumulators: carry partial results or counters
continuations: decide whether and how to continue recursion
examples: early exit, skipping recomposition, resuming later
this makes explicit what is usually implicit in plain recursion: which subproblems are visited, in what order, with what carried information, and how results are assembled. it also explains why many recursive functions differ only in small changes to these three roles.
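a minimal python sketch of this "small control language" reading; every name here (traverse, choose_children, prune, rewrite, combine, leaf) is a hypothetical parameterization invented for the example, not an existing API.

```python
# the three roles are ordinary parameters: subproblem choice, result
# combination, and carried information (here: the path from the root)
def traverse(root, *, choose_children, combine, leaf,
             prune=lambda node, path: False,
             rewrite=lambda node, path: node):
    def go(node, path):                       # path = context from the root
        node = rewrite(node, path)            # rewrite-before-descent
        if prune(node, path):                 # prune?: cut this branch off
            return leaf(node, path)
        children = choose_children(node)      # which calls to make, in this order
        if not children:
            return leaf(node, path)           # base case
        results = [go(child, path + (node,)) for child in children]
        return combine(node, results)         # result algebra
    return go(root, ())

# two functions that differ only in how the three roles are filled in:
# collect every non-list leaf of a nested list
def flatten(tree):
    return traverse(tree,
                    choose_children=lambda n: n if isinstance(n, list) else [],
                    combine=lambda n, rs: [x for r in rs for x in r],
                    leaf=lambda n, path: [] if isinstance(n, list) else [n])

# summarize the maximum nesting depth, read off the carried path context
def depth(tree):
    return traverse(tree,
                    choose_children=lambda n: n if isinstance(n, list) else [],
                    combine=lambda n, rs: max(rs),
                    leaf=lambda n, path: len(path) + (1 if isinstance(n, list) else 0))

# flatten([1, [2, [3]], 4])  ->  [1, 2, 3, 4]
# depth([1, [2, [3]], 4])    ->  3
```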
recursion and iteration are two surface encodings of the same three invariants: a descent metric, a state vector, and a recomposition rule. conversion consists of relocating these invariants between parameters, local variables, and control structure.
recursion to iteration: eliminate self-calls while preserving descent, state evolution, and recomposition.
descent metric: the quantity that strictly decreases toward the base case
state: what each call carries, accumulates, or threads through
recomposition: the work performed on results after each call returns
if no work occurs after the recursive call (tail recursion): the parameters become loop variables, the self-call becomes reassignment, and the base-case return becomes the loop exit
if work occurs after the recursive call (post-order recursion): first make the suspended recomposition explicit, either by converting to an accumulator or continuation-passing form or by keeping an explicit stack of pending recompositions, then proceed as in the tail case (both moves are sketched after this list)
evaluation polarity
pre-order work remains before the metric step
post-order work remains after base-case discharge
the conversion preserves meaning if and only if the loop realizes the same descent metric, state evolution, and recomposition as the recursive definition.
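a python sketch of both branches of this conversion; gcd and factorial are stand-ins chosen for the example, not taken from the text.

```python
# tail recursion: the recursive call is the last action, so parameters
# become loop variables and the self-call becomes reassignment
def gcd_rec(a, b):
    if b == 0:
        return a
    return gcd_rec(b, a % b)

def gcd_loop(a, b):
    while b != 0:                 # the metric (b) still strictly decreases
        a, b = b, a % b           # the self-call relocated into state updates
    return a                      # base-case value becomes the loop exit value

# post-order recursion: the work after the call (the pending multiplications
# of factorial) survives as an explicit stack of suspended recompositions
def factorial_loop(n):
    pending = []
    while n > 1:                  # descent: record what each return would have done
        pending.append(n)
        n -= 1
    result = 1                    # base-case discharge
    while pending:                # ascent: replay the suspended recomposition
        result *= pending.pop()
    return result

# gcd_loop(12, 18) -> 6, factorial_loop(5) -> 120
```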
iteration to recursion: factor the loop into well-founded descent plus recomposition.
descent metric: the loop's exit test and the quantity that drives it toward the exit
state: the variables the loop reads and updates on each iteration
recomposition: how each iteration's result feeds the next one, plus any post-loop work
if the result is already encoded in loop state: the loop becomes a tail-recursive function whose parameters are the loop variables and whose base case is the loop's exit test
if work occurs after the loop finishes:
post-loop work becomes post-order recomposition
the loop exit value becomes the base-case value
the conversion preserves meaning if and only if the recursive function realizes the same descent metric, state evolution, and recomposition as the loop.
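a python sketch of the two cases above; digit counting and a mean computation are stand-ins chosen for the example, not taken from the text.

```python
# result already encoded in loop state: loop variables become parameters,
# the exit test becomes the base case, the exit value becomes the base value
def count_digits_loop(n):
    count = 1
    while n >= 10:
        n //= 10
        count += 1
    return count

def count_digits_rec(n, count=1):
    if n < 10:                          # loop exit test -> base case
        return count                    # loop exit value -> base-case value
    return count_digits_rec(n // 10, count + 1)

# work after the loop finishes: the post-loop step becomes post-order
# recomposition wrapped around the tail-recursive core
def mean_loop(xs):
    total, count = 0.0, 0
    for x in xs:
        total, count = total + x, count + 1
    return total / count                # post-loop work

def mean_rec(xs):
    def go(rest, total, count):         # the loop body as a tail-recursive core
        if not rest:
            return total, count         # loop exit value -> base-case value
        return go(rest[1:], total + rest[0], count + 1)
    total, count = go(xs, 0.0, 0)
    return total / count                # post-loop work -> post-order recomposition
```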