Function Obtained by Substitution from URM Computable Functions

Theorem
Let the functions $$f: \N^t \to \N, g_1: \N^k \to \N, g_2: \N^k \to \N, \ldots, g_t: \N^k \to \N$$ all be URM computable functions.

Let $$h: \N^k \to \N$$ be defined from $$f, g_1, g_2, \ldots, g_t$$ by substitution.

Then $$h$$ is also URM computable.

Proof
From the definition:
 * $$h \left({n_1, n_2, \ldots, n_k}\right) = f \left({g_1 \left({n_1, n_2, \ldots, n_k}\right), g_2 \left({n_1, n_2, \ldots, n_k}\right), \ldots, g_t \left({n_1, n_2, \ldots, n_k}\right)}\right)$$.
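As a small numerical illustration of the definition (using hypothetical example functions chosen here for concreteness, not URM programs), take $$t = 2$$, $$k = 1$$, $$f \left({a, b}\right) = a + b$$, $$g_1 \left({n}\right) = n^2$$ and $$g_2 \left({n}\right) = n + 1$$:

```python
# Hypothetical example functions illustrating substitution with t = 2, k = 1:
#   f(a, b) = a + b,  g1(n) = n * n,  g2(n) = n + 1.
def f(a, b):
    return a + b

def g1(n):
    return n * n

def g2(n):
    return n + 1

# h is defined from f, g1 and g2 by substitution:
def h(n):
    return f(g1(n), g2(n))

print(h(3))  # f(9, 4) = 13
```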

Let $$P, Q_1, Q_2, \ldots, Q_t$$ be normalized URM programs which compute $$f, g_1, g_2, \ldots, g_t$$ respectively.

Let $$u = \max \left\{{\rho \left({P}\right), \rho \left({Q_1}\right), \rho \left({Q_2}\right), \ldots, \rho \left({Q_t}\right)}\right\}$$.

Let $$s_j = \lambda \left({Q_j}\right)$$ be the number of basic instructions in $$Q_j$$ for $$1 \le j \le t$$.

Hence:
 * registers $$R_{u+1}, R_{u+2}, \ldots, R_{u+k}$$ can be used to hold a copy of the input $$\left({n_1, n_2, \ldots, n_k}\right)$$ so it can be guaranteed not to be accidentally overwritten by any operations performed by any of $$P, Q_1, Q_2, \ldots, Q_t$$;
 * registers $$R_{u+k+1}, R_{u+k+2}, \ldots, R_{u+k+t}$$ can be used to hold copies of the outputs of each of $$Q_1, Q_2, \ldots, Q_t$$ so they also can be guaranteed not to be accidentally overwritten by any operations performed by any of $$P, Q_1, Q_2, \ldots, Q_t$$.

The following algorithm can be followed to create a URM program $$H$$ to compute $$h$$.

It is assumed that:
 * The input is in $$R_1, R_2, \ldots, R_k$$.
 * Each of $$P, Q_1, Q_2, \ldots, Q_t$$ is written so as to start at line $$1$$.
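The construction can be sketched in Python rather than as a formal table of URM instructions. This is an illustrative sketch, not the formal program $$H$$: it assumes the usual URM instruction set ($$Z$$, $$S$$, $$T$$, $$J$$), that a normalized program takes its input in $$R_1, \ldots, R_k$$, leaves its output in $$R_1$$, and halts by jumping to line $$\lambda + 1$$, and that $$u \ge \max \left({k, t}\right)$$.

```python
# Instructions are tuples:
#   ('Z', n)        zero R_n
#   ('S', n)        increment R_n
#   ('T', m, n)     copy R_m into R_n
#   ('J', m, n, q)  jump to line q if R_m = R_n
# Programs are lists of instructions, numbered from line 1; output is in R_1.

def relocate(program, offset):
    """Shift every Jump destination by `offset`, as needed when a
    subprogram is placed starting at line offset + 1 of a larger program."""
    return [('J', m, n, q + offset) if ins[0] == 'J' else ins
            for ins in program
            for (_, m, n, q) in [ins if ins[0] == 'J' else ('J', 0, 0, 0)]]

def copy_block(src, dst, count):
    """Instructions copying R_src .. R_{src+count-1} to R_dst .. ."""
    return [('T', src + i, dst + i) for i in range(count)]

def zero_block(start, count):
    return [('Z', start + i) for i in range(count)]

def substitute(P, Qs, k, u):
    """Build H computing h = f(g_1, ..., g_t) from normalized programs
    P (for f) and Qs = [Q_1, ..., Q_t], where k is the arity of each g_j
    and u bounds the highest register used by any of the programs."""
    t = len(Qs)
    H = []
    H += copy_block(1, u + 1, k)           # save the input in R_{u+1} .. R_{u+k}
    for j, Q in enumerate(Qs, start=1):
        H += copy_block(u + 1, 1, k)       # restore the input
        H += zero_block(k + 1, u - k)      # clear the working registers
        H += relocate(Q, len(H))           # run Q_j
        H.append(('T', 1, u + k + j))      # save its output in R_{u+k+j}
    H += copy_block(u + k + 1, 1, t)       # the outputs become the input of P
    H += zero_block(t + 1, u - t)
    H += relocate(P, len(H))               # run P
    return H

def run(program, inputs):
    """A minimal URM interpreter: returns the value of R_1 on halting."""
    R = {i: v for i, v in enumerate(inputs, start=1)}
    pc = 1
    while 1 <= pc <= len(program):
        ins = program[pc - 1]
        if ins[0] == 'Z':
            R[ins[1]] = 0
        elif ins[0] == 'S':
            R[ins[1]] = R.get(ins[1], 0) + 1
        elif ins[0] == 'T':
            R[ins[2]] = R.get(ins[1], 0)
        elif R.get(ins[1], 0) == R.get(ins[2], 0):   # 'J'
            pc = ins[3]
            continue
        pc += 1
    return R.get(1, 0)

# Example (hypothetical programs, for illustration): with k = 1 and t = 2,
# Q1 computes g_1(n) = n + 1, Q2 computes the constant g_2(n) = 2, and
# P computes f(a, b) = a + b, so h(n) = (n + 1) + 2 = n + 3.
Q1 = [('S', 1)]
Q2 = [('Z', 1), ('S', 1), ('S', 1)]
P = [('J', 2, 3, 5), ('S', 1), ('S', 3), ('J', 1, 1, 1)]   # R1 := R1 + R2
H = substitute(P, [Q1, Q2], k=1, u=3)                      # u = max rho = 3
print(run(H, [4]))  # 7
```

Note that `relocate` is what keeps the exit jumps of each normalized subprogram pointing at the line immediately after that subprogram's block, so control falls through to the next stage of $$H$$.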

It can easily be determined that $$H$$ computes $$h$$.

Hence $$h$$ is URM computable.

Commentary
We start with normalized URM programs so as to avoid having to worry about tidying up the registers and exit jumps. Otherwise there are unnecessary complications.

The object of this exercise is to construct a program which computes, in turn, $$g_1$$ to $$g_t$$. The outputs of these programs are then used as the input of the program which computes $$f$$.

So the program we are building will consist of each of programs $$Q_1, Q_2, \ldots, Q_t$$ run one after another, progressively building up a block of registers containing their outputs.

Once all those programs have been run, the outputs then become the input to the program $$P$$ which computes $$f$$.

All we have to worry about is:
 * We need to keep a copy of the input somewhere safe so that it can be loaded back into the input registers before the start of each $$Q_j$$;
 * After we've run $$Q_j$$, we need to store the output somewhere safe;
 * We need to adjust the destinations of the Jumps in each of $$P, Q_1, Q_2, \ldots, Q_t$$ so that they are consistent relative to where each of those programs starts within $$H$$.

Some approaches to the construction of $$H$$ attempt to compute, at each stage, exactly which lines each of the subprograms will occupy, and to pre-calculate in each case what the individual Jump destinations will be before starting.

However, that approach can obscure the simplicity of what the algorithm defined above is designed to do.