5.1. GENERAL PRELIMINARIES AND ISSUES

Note that for s(n) = Ω(log n), the factor of n can be absorbed by 2^{O(s(n))}, and so we may just write t(n) = 2^{O(s(n))}. Indeed, throughout this chapter (as in most of this book), we will consider only algorithms that halt on every input (see Exercise 5.5 for further discussion).
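The absorption claim is elementary arithmetic: if s(n) ≥ log₂ n then n ≤ 2^{s(n)}, so a factor of n is swallowed by increasing the constant in the exponent. The following quick numeric check is only an illustration, with the illustrative choice s(n) = log₂ n (an assumption, not taken from the text):

```python
# If s(n) >= log2(n) then n <= 2**s(n), so
# n * 2**(c*s(n)) <= 2**((c+1)*s(n)):
# the factor of n is absorbed by raising the constant in the exponent.
import math

c = 3
for n in (16, 1024, 2**20):
    s = math.log2(n)  # illustrative choice satisfying s(n) >= log2(n)
    assert n * 2 ** (c * s) <= 2 ** ((c + 1) * s)
print("factor of n absorbed into 2^{O(s(n))}")
```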

Proof: The proof refers to the notion of an instantaneous configuration (in a computation). Before starting, we warn the reader that this notion may be given different definitions, each tailored to the application at hand. All these definitions share the desire to specify variable information that, together with some fixed information, determines the next step of the computation being analyzed.

In the current proof, we fix an algorithm A and an input x, and consider as variable the contents of the storage device (e.g., the work-tape of a Turing machine as well as its finite state) and the machine's locations on the input device and on the storage device.

Thus, an instantaneous configuration of A(x) consists of the latter three objects (i.e., the contents of the storage device and a pair of locations), and can be encoded by a binary string of length ℓ(|x|) = s(|x|) + log₂|x| + log₂ s(|x|).7 The key observation is that the computation A(x) cannot pass through the same instantaneous configuration twice, because otherwise the computation A(x) passes through this configuration infinitely many times, which means that this computation does not halt. This observation is justified by noting that the instantaneous configuration, together with the fixed information (i.e., A and x), determines the next step of the computation. Thus, whatever happens (i steps) after the first time that the computation A(x) passes through a configuration γ will also happen (i steps) after the second time that the computation A(x) passes through γ.

By the foregoing observation, we infer that the number of steps taken by A on input x is at most 2^{ℓ(|x|)}, because otherwise the same configuration would appear twice in the computation (which contradicts the halting hypothesis). The theorem follows.
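The counting argument in the proof can be illustrated by a toy simulator: a deterministic computation over a finite set of configurations either halts within that many steps or revisits a configuration and hence runs forever. The machine model below (a step function on integer-encoded configurations) is a hypothetical sketch chosen only to make the argument concrete; it is not the text's machine model.

```python
# Toy illustration of the proof's counting argument: a deterministic
# computation over a finite configuration space either halts within
# max_configs steps or revisits a configuration and loops forever.

def run(step, start, halt_config, max_configs):
    """Run the deterministic step function from `start`.

    Returns ("halt", t) if halt_config is reached after t steps, or
    ("loop", t) if step t revisits an earlier configuration, which
    proves the computation never halts.
    """
    seen = {}
    config, t = start, 0
    while config != halt_config:
        if config in seen:
            return ("loop", t)  # same configuration twice => runs forever
        seen[config] = t
        config = step(config)
        t += 1
        assert t <= max_configs  # cannot exceed the number of configurations
    return ("halt", t)

# A machine with 8 configurations (3 bits of storage): counting up
# halts within 2^3 steps; a step function with a cycle never halts.
print(run(lambda c: c + 1, 0, 7, 8))        # ("halt", 7)
print(run(lambda c: (c + 2) % 8, 1, 0, 8))  # ("loop", 4)
```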


3. Subtleties Regarding Space-Bounded Reductions

Lemmas 5.1 and 5.2 suffice for the analysis of the effect of many-to-one reductions in the context of space-bounded computations. (By a many-to-one reduction of the function f to the function g, we mean a mapping π such that for every x it holds that f(x) = g(π(x)).)8

1. (In the spirit of Lemma 5.1:) If f is reducible to g via a many-to-one reduction that can be computed in space s₁, and g is computable in space s₂, then f is computable in space s such that s(n) = max(s₁(n), s₂(ℓ(n))) + ℓ(n) + δ(n), where ℓ(n) denotes the maximum length of the image of the reduction when applied to some n-bit string and δ(n) = O(log(ℓ(n) + s₂(ℓ(n)))) = o(s(n)).

2. (In the spirit of Lemma 5.2:) For f and g as in Item 1, it follows that f is computable in space s such that s(n) = s₁(n) + s₂(ℓ(n)) + O(log ℓ(n)) + δ(n), where δ(n) = O(log(s₁(n) + s₂(ℓ(n)))) = o(s(n)).
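The quantitative gap between the two items can be seen by tabulating both bounds for a log-space reduction whose images have polynomial length. The concrete choices below (s₁(n) = log₂ n, s₂(m) = log₂ m, ℓ(n) = n²) are illustrative assumptions, not taken from the text:

```python
import math

# Illustrative parameters (assumptions, not from the text): a log-space
# reduction with quadratic-length images, composed with a log-space
# algorithm for g.
def s1(n): return math.log2(n)   # space of the reduction
def s2(m): return math.log2(m)   # space of the algorithm for g
def ell(n): return n * n         # max image length l(n)

def item1_bound(n):
    # s(n) = max(s1(n), s2(l(n))) + l(n) + delta(n):
    # the naive composition stores the l(n)-bit image.
    delta = math.log2(ell(n) + s2(ell(n)))
    return max(s1(n), s2(ell(n))) + ell(n) + delta

def item2_bound(n):
    # s(n) = s1(n) + s2(l(n)) + O(log l(n)) + delta(n):
    # bits of the image are recomputed on demand, never stored.
    delta = math.log2(s1(n) + s2(ell(n)))
    return s1(n) + s2(ell(n)) + math.log2(ell(n)) + delta

for n in (2**10, 2**20):
    print(n, round(item1_bound(n)), round(item2_bound(n)))
# Item 1 is dominated by l(n) = n^2, while Item 2 stays O(log n).
```

Under these assumptions the Item 1 bound is polynomial in n (it pays for storing the intermediate image), whereas the Item 2 bound remains logarithmic, which is why the emulation-style composition matters for log-space reductions.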

7 Here we rely on the fact that s is the binary space complexity (and not the standard space complexity); see summary item 3 in Section 5.1.1.

8 This is indeed a special case of the setting of Lemmas 5.1 and 5.2 (obtained by letting f₁ = π and f₂(x, y) = g(y)).

However, the results claimed for this special case are better than those obtained by invoking the corresponding lemma (i.e., s₂ is applied to ℓ(n) rather than to n + ℓ(n)).
