The Search for a Search - Measuring the Information Cost of Higher Level Search
"Intellectual Standards. Are we holding ourselves to high intellectual standards? Are we in the least self-critical about our work? Are we sober or immodest about our work? Do we demand precision and rigor from each other? Do we examine each other's work with intense critical scrutiny and speak our minds freely in assessing it? Or do we try to keep all our interactions civil, gentlemanly, and diplomatic (perhaps so as not to give the appearance of dissension in our ranks)? Does the mood of our movement alternate between the smug and the indignant -- smug when we hold the upper hand, indignant when we are criticized? Do we react to adverse criticism like first-time novelists who are dismayed to discover that their masterpiece has been trashed by the critics? Or do we take adverse criticism as an occasion for tightening and improving our work?"

—William Dembski, 2002[1]
"The Search for a Search - Measuring the Information Cost of Higher Level Search" is an article by William Dembski and Robert Marks which appeared in the Journal of Advanced Computational Intelligence and Intelligent Informatics [2](abstract, full text) The article is a follow-up to Conservation of Information in Search: Measuring the Cost of Success which appeared in Ieee Transactions On Systems, Man, and Cybernetics. [3]
The gist of the article is that Dembski and Marks attempt to demonstrate some difficulties with information searches, building on the existing "no free lunch" (NFL) theorem, which states (colloquially) that no search algorithm is "better" than any other when its performance is averaged over all possible problems. Dembski extends this idea further, and though he does not touch on intelligent design in this paper, he has elsewhere made it clear that he interprets NFL as supporting intelligent design,[4] an idea that has been strongly criticized by no less than David Wolpert, one of the authors of the original NFL paper.[5]
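To illustrate the colloquial NFL statement, here is a minimal sketch (our own illustration, not taken from the paper) using the three-shell game discussed below: averaged over all possible placements of a single pea, every fixed guessing order needs the same expected number of guesses.

```python
from itertools import permutations

SHELLS = [1, 2, 3]

def guesses_needed(order, pea):
    """Number of guesses a fixed guessing order needs to uncover the pea."""
    return order.index(pea) + 1

# Average over all possible targets (pea placements) for every fixed guessing order.
for order in permutations(SHELLS):
    avg = sum(guesses_needed(order, pea) for pea in SHELLS) / len(SHELLS)
    print(order, avg)   # every order averages 2.0 guesses
```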
Original Text | Annotations |
---|---|
Abstract: Needle-in-the-haystack problems look for small targets in large spaces. In such cases, blind search stands no hope of success. Conservation of information dictates any search technique will work, on average, as well as blind search. Success requires an assisted search. But whence the assistance required for a search to be successful? To pose the question this way suggests that successful searches do not emerge spontaneously but need themselves to be discovered via a search. The question then naturally arises whether such a higher-level “search for a search” is any easier than the original search. We prove two results: (1) The Horizontal No Free Lunch Theorem, which shows that average relative performance of searches never exceeds unassisted or blind searches, and (2) The Vertical No Free Lunch Theorem, which shows that the difficulty of searching for a successful search increases exponentially with respect to the minimum allowable active information being sought. |
We will occupy ourselves with the Horizontal No Free Lunch Theorem and show how its assumptions - and its proof - are seriously flawed. |
Introduction
Original Text | Annotations |
---|---|
[...] Endogenous and active information together allow for a precise characterization of the conservation of information. The average active information, or active entropy, of an unassisted search is zero when no assumption is made concerning the search target. If any assumption is made concerning the target, the active entropy becomes negative. This is the Horizontal NFLT presented in Section 3.2. It states that an arbitrary search space structure will, on average, result in a worse search than assuming nothing and simply performing an unassisted search. [...] |
This is a very surprising claim in the light of the conventional No Free Lunch Theorems (NFL): given the usual assumptions of the NFL, we would expect any search to be as good (or as bad) as random search. And an unassisted search seems to be a random search...
To make our ideas clearer, we will introduce a simple example: a Shell Game, using one, two or even three peas. In the language of Dembski and Marks, our search space is the set of the three shells, Ω = {1, 2, 3}. |
Information in Search
Original Text | Annotations |
---|---|
All but the most trivial searches require information about the search environment (e.g., smooth landscapes) or target location (e.g., fitness measures) if they are to be successful. Conservation of information theorems[6][7][8] show that one search algorithm will, on average, perform as well as any other and thus that no search algorithm will, on average, outperform an unassisted, or blind, search. But clearly, many of the searches that arise in practice do outperform blind unassisted search. How, then, do such searches arise and where do they obtain the information that enables them to be successful? |
These are good questions, but unfortunately, Dembski and Marks will not answer them. But let us first formulate our search problem in the language of the No Free Lunch Theorems: these are generally about minimizing a (somewhat) unknown function f over a finite space Ω, say f: Ω → {0, 1}, and they hold for sets of functions which are closed under permutation (c.u.p.), i.e., sets F such that f ∘ σ belongs to F for every f in F and every permutation σ of Ω. Take for instance
F_1 = { f: {1, 2, 3} → {0, 1} | f takes the value 0 exactly once }.
This is the set of functions matching our problem of looking for a single pea under three shells! Now look at a set that singles out one shell, for instance
F_2 = { f: {1, 2, 3} → {0, 1} | f(1) = 0 }.
This set isn't c.u.p.: take the permutation σ which swaps the first and the second shell, and a function f in F_2 with f(1) = 0 and f(2) = 1. Then (f ∘ σ)(1) = f(2) = 1. Clearly not a member of F_2.
Of course, this set of functions is very artificial, but all the interesting minimization problems we find in nature are of the non-c.u.p. kind. |
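A quick way to check closure under permutation is to test, for every function in the set and every permutation of the shells, whether the permuted function stays in the set. The following sketch is our own illustration (the set definitions are the examples used above, not code from the article):

```python
from itertools import permutations, product

SHELLS = (1, 2, 3)

def is_cup(functions):
    """True if the set of functions (as dicts shell -> value) is closed
    under every permutation of the shells."""
    fset = {tuple(sorted(f.items())) for f in functions}
    for f in functions:
        for sigma in permutations(SHELLS):
            # (f o sigma)(s) = f(sigma(s))
            permuted = {s: f[sig] for s, sig in zip(SHELLS, sigma)}
            if tuple(sorted(permuted.items())) not in fset:
                return False
    return True

# All 0/1-valued functions on the three shells.
all_functions = [dict(zip(SHELLS, values)) for values in product((0, 1), repeat=3)]

single_pea = [f for f in all_functions if list(f.values()).count(0) == 1]
first_shell = [f for f in all_functions if f[1] == 0]

print(is_cup(single_pea))   # True  - the plain shell game is c.u.p.
print(is_cup(first_shell))  # False - singling out one shell breaks c.u.p.
```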
Blind and Assisted Queries and Searches
Original Text | Annotations |
---|---|
Let |
Interestingly, in the NFL theorems such a uniformity of probability is assumed - though it is certainly possible to formulate the theorems for a family of distributions, as long as this family is again c.u.p. But keep in mind that assuming that the events have equal probability doesn't make it true. What does this mean for our shell game? If we assume that the pea is hidden under each shell with probability 1/3, it is fine for us to pick a shell at random - or to stick with picking a certain shell. But if the man who hides the pea has a preference for one shell, a random guess will still result in a success rate of 1/3 - while other strategies may have other rates of success: greater than 1/3 if the strategy shares the proclivity of the master of the game, less than 1/3 if it doesn't! |
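As a sanity check on this claim, here is a small Monte Carlo sketch (our own illustration, with a made-up bias of 60% toward the first shell): under uniform hiding every single-guess strategy succeeds about a third of the time, while under biased hiding the fixed guess "always pick shell 1" pulls ahead and "always pick shell 3" falls behind.

```python
import random

SHELLS = [1, 2, 3]
TRIALS = 100_000

def success_rate(hide, guess):
    """Fraction of trials in which the guessed shell hides the pea."""
    return sum(hide() == guess() for _ in range(TRIALS)) / TRIALS

uniform_hide = lambda: random.choice(SHELLS)
biased_hide = lambda: random.choices(SHELLS, weights=[0.6, 0.2, 0.2])[0]  # hypothetical bias

random_guess = lambda: random.choice(SHELLS)
always_first = lambda: 1
always_third = lambda: 3

for name, hide in [("uniform", uniform_hide), ("biased", biased_hide)]:
    print(name,
          round(success_rate(hide, random_guess), 3),   # ~0.333 in both cases
          round(success_rate(hide, always_first), 3),   # ~0.333 vs ~0.6
          round(success_rate(hide, always_third), 3))   # ~0.333 vs ~0.2
```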
Consider Q queries (samples) from
The requirement of sampling without replacement requires
where
From Eqs. (2) and (3), we have
|
These innocuous notations are the main reason for all the trouble to come. Let us first have a look at how a deterministic minimization algorithm is described in the language of the No Free Lunch theorems: at the n-th step, you have evaluated the function f (n-1) times, thus you know the sequence (x_1, f(x_1)), (x_2, f(x_2)), ..., (x_{n-1}, f(x_{n-1})), and the algorithm chooses the next element x_n depending on this sequence, without revisiting an element it has already seen.
If our functions take only the values 0 and 1 - pea or no pea - then the values obtained so far tell us nothing beyond the fact that the pea has not been found yet. Such search strategies are therefore completely determined by the sequence of the elements they visit - and not by the values of the functions. We will call such a sequence of Q distinct elements a deterministic 0-1 strategy of length Q. And so, the clumsy writing of a search as an element of the augmented search space is nothing but a list of the elements a deterministic strategy visits, in order.
These deterministic 0-1 strategies of length Q can easily be randomized by choosing one of them according to a probability measure on the set of all such strategies. Such a measure will be called a randomized 0-1 strategy of length Q - or just a 0-1 strategy of length Q. So, what Dembski and Marks see as the augmented search space, we regard as the set of deterministic search strategies... |
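In code, the deterministic 0-1 strategies of length Q are just the ordered sequences of Q distinct shells, and a randomized 0-1 strategy is a probability distribution over them. A minimal sketch in our own notation (not the paper's):

```python
from itertools import permutations
import random

SHELLS = (1, 2, 3)

def deterministic_strategies(q):
    """All ordered sequences of q distinct shells: the deterministic 0-1 strategies."""
    return list(permutations(SHELLS, q))

def succeeds(strategy, pea):
    """A 0-1 strategy succeeds if one of its guesses uncovers the pea."""
    return pea in strategy

strategies = deterministic_strategies(2)
print(strategies)            # six strategies: (1,2), (1,3), (2,1), (2,3), (3,1), (3,2)

# A randomized 0-1 strategy: a probability measure on the deterministic ones.
# Here: the uniform measure, i.e. pick each of the six with probability 1/6.
def randomized_uniform():
    return random.choice(strategies)

print(succeeds(randomized_uniform(), pea=2))
```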
There appears, at first flush, to be knowledge about the search space |
Nothing wrong with this! |
Keeping track of Q subscripts and exhaustively contracted perturbation search spaces is distracting. As such, let |
Well, we will keep track of the subscripts and the different spaces! For instance, guessing twice in our shell game results in a different space: the ordered pairs of distinct shells, six elements in all, rather than the three shells of the original search space. |
Active Information and No Free Lunch
Original Text | Annotations |
---|---|
Define an assisted query as any choice from
|
Here, |
Let q denote the probability of success of an assisted search. We assume that we can always do at least as well as uniform random sampling. The question is, how much better can we do? Given a small target T in |
So, Dembski's and Marks's uniform random sampling takes an element of |
An instructive way to characterize the effectiveness of assisted search in relation to blind search is with the likelihood ratio q/p. |
Active Information
Original Text | Annotations |
---|---|
Let U denote a uniform distribution on
|
Of course, we can define such a quantity. But does it make sense? Does it even exist? We will see... |
Active information measures the effectiveness of assisted search in relation to blind search using a conventional information measure. It[12] characterizes the amount of information[13] that |
As the NFL theorems are stating that a search which outperforms blind search only exists if there is a non-c.u.p. set of functions, |
The maximum value that active information can attain is |
So, here we find two examples of assisted searches: the perfect search and its evil twin, the search which is doomed to fail. But what do the other (nonuniform) measures on the space of searches stand for? |
[...] |
|
Active information can also be measured with respect to other reference points. We therefore define the active information more generally as:
for |
Again, this can be defined: but what does it mean? |
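To make the definition concrete, here is a small sketch (our own illustration) computing active information I+ = log2(q/p) for the one-pea shell game, where p is the success probability of a single blind uniform guess and q that of an assisted guess. A perfect search (q = 1) attains the maximum, log2(3) ≈ 1.585 bits, and its "evil twin" (q = 0) yields minus infinity.

```python
from math import log2, inf

def active_information(q, p):
    """Active information I+ = log2(q / p); -inf when the assisted search never succeeds."""
    if q == 0:
        return -inf
    return log2(q / p)

p = 1 / 3                                   # blind uniform guess at one of three shells
print(active_information(1.0, p))           # perfect search:  log2(3) ~ 1.585 bits
print(active_information(1 / 3, p))         # no better than blind search: 0 bits
print(active_information(0.0, p))           # the "evil twin": -inf
```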
3.2. Horizontal No Free Lunch
Original Text | Annotations |
---|---|
The following No Free Lunch theorem (NFLT) underscores the parity of average search performance. |
Theorem 1: Horizontal No Free Lunch. Let
Then |
This is as true as it is meaningless:
Generally, for a search of two or more guesses, there is no meaningful way to get the partition
This strategy works well if the pea is under the first shell - or the second one (target |
In our opinion, the problem arises from the elegant - but unfortunately invalid - identification of the search space and the space of the searches: for our set of deterministic 0-1 strategies, each measure introduces a randomized strategy. For each such strategy, we can calculate the probability of finding a target in the original space. Let's go back to our two-guess shell game. The deterministic 0-1 strategies are the six ordered pairs of distinct shells: (1,2), (1,3), (2,1), (2,3), (3,1) and (3,2).
Dembski's and Marks's uniform distribution U leads to choosing one of these searches with probability 1/6. If the pea is hidden without any preference for any of the shells, this strategy has a success rate of 2/3. As does any other randomized 0-1 strategy - because 2/3 is the success rate of each of the deterministic strategies! And that is no surprise at all, as the formulation of the searches introduced a set of c.u.p. functions. If there is a preference for - for instance - the first shell, the picture changes: suddenly, there are preferable strategies: sticking with one of the strategies that look under the first shell - (1,2), (2,1), (1,3) or (3,1) - now does better than 2/3, while (2,3) and (3,2) do worse. |
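The claim is easy to verify exactly. The sketch below (our own, with a made-up bias of 60% toward the first shell) computes the success probability of every deterministic two-guess strategy and of the uniform mixture over them: under uniform hiding everything comes out at 2/3, while under biased hiding the individual strategies split into better and worse ones even though the uniform mixture still averages 2/3.

```python
from itertools import permutations

SHELLS = (1, 2, 3)
STRATEGIES = list(permutations(SHELLS, 2))          # the six deterministic 0-1 strategies

def success_probability(strategy, hiding):
    """Probability that a two-guess strategy uncovers the pea under a hiding distribution."""
    return sum(p for shell, p in hiding.items() if shell in strategy)

uniform_hiding = {1: 1/3, 2: 1/3, 3: 1/3}
biased_hiding = {1: 0.6, 2: 0.2, 3: 0.2}            # hypothetical preference for shell 1

for name, hiding in [("uniform", uniform_hiding), ("biased", biased_hiding)]:
    rates = {s: round(success_probability(s, hiding), 3) for s in STRATEGIES}
    mixture = sum(rates.values()) / len(rates)       # the uniform measure U on the strategies
    print(name, rates, "uniform mixture:", round(mixture, 3))
# uniform: every strategy and the mixture succeed with probability 2/3
# biased:  strategies covering shell 1 reach 0.8, the others drop to 0.4,
#          yet the uniform mixture still averages 2/3
```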
See also
- Conservation of Information in Search - Measuring the Cost of Success
- Efficient Per Query Information Extraction from a Hamming Oracle
- Uncommon Descent
- Barry Arrington
References
- ↑ Becoming a Disciplined Science: Prospects, Pitfalls, and a Reality Check for ID, William Dembski, Keynote address delivered at RAPID Conference (Research and Progress in Intelligent Design), Biola University, La Mirada, California, 25 October 2002
- ↑ Journal of Advanced Computational Intelligence and Intelligent Informatics, Vol.14, No.5, 2010, pp. 475-486
- ↑ IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, Vol. 39, No. 5, September 2009.
- ↑ Dembski, W. A. (2002) No Free Lunch, Rowman & Littlefield
- ↑ Wolpert, D. (2003) "William Dembski's treatment of the No Free Lunch theorems is written in jello".
- ↑ C. Schaffer, “A conservation law for generalization performance,” Proc. Eleventh Int. Conf. on Machine Learning, W. W. Cohen and H. Hirsh (Eds.), San Francisco: Morgan Kaufmann, pp. 259-265, 1994.
- ↑ T. M. English, “Some information theoretic results on evolutionary optimization,” Proc. of the 1999 Congress on Evolutionary Computation, 1999, CEC 99, Vol.1, pp. 6-9, July 1999.
- ↑ D. Wolpert and W. G. Macready, “No free lunch theorems for optimization,” IEEE Trans. Evolutionary Computation, Vol.1, No.1, pp. 67-82, 1997.
- ↑ W. A. Dembski and R. J. Marks II, “Bernoulli’s Principle of Insufficient Reason and Conservation of Information in Computer Search,” Proceedings of the 2009 IEEE Int. Conf. on Systems, Man, and Cybernetics, San Antonio, TX, USA, pp. 2647-2652, October 2009.
- ↑ 10.0 10.1 J. Bernoulli, “Ars Conjectandi (The Art of Conjecturing),” Tractatus De Seriebus Infinitis, 1713.
- ↑ 11.0 11.1 A. Papoulis, “Probability, Random Variables, and Stochastic Processes,” 3rd ed., New York: McGraw-Hill, 1991.
- ↑ 12.0 12.1 W. A. Dembski and R. J. Marks II, “Conservation of Information in Search: Measuring the Cost of Success,” IEEE Trans. on Systems, Man and Cybernetics A, Systems and Humans, Vol.39, No.5, pp. 1051-1061, September 2009.
- ↑ T. M. Cover and J. A. Thomas, “Elements of Information Theory,” 2nd Edition, Wiley-Interscience, 2006.