News & Views | Published:

Mathematical physics

Search research

How does one best search for non-replenishable targets at unknown positions? An optimized search strategy could be applied to situations as diverse as animal foraging and time-sensitive rescue missions.

Operations research — the field that uses mathematical methods to optimize complex real-world structures and processes — grew out of the analysis of military problems during the Second World War. One such question was the optimization problem 'How to hunt a submarine'1, the analysis of which had to take several factors into account. For instance, a figure-of-eight search pattern of an aircraft scouring littoral waters would be different from the pattern for a search of deep-ocean waters. The problem was also complicated by the fact that a negative search might mean only that a submarine was submerged, not that it was absent.

The aim of such searches is to remove the target, and, writing in Physical Review E, Bénichou and colleagues2 bend their minds to minimizing the amount of time needed when searching for such 'non-revisitable' targets. Since the early days of aircraft hunting submarines, many types of search have been investigated in which the target may or may not be destroyed upon contact. One such study3 interpreted data from transmitters attached to the legs of itinerant albatrosses. The radio signal was silenced when an albatross was in the water. What emerged was a fractal on–off signal pattern that was consistent with the bird flying a 'self-similar' pattern — one in which the whole has the same shape as smaller component parts — with the end of each segment punctuated by a water landing. Presumably, the flight pattern reflected a search for food, and perhaps also the lifetime of thermals on which the seabirds ride.

Whatever the exact reason, this research led to several papers demonstrating that fractal patterns of the albatross type, called Lévy flights, are an optimal search strategy. But under what conditions? The theoretical Lévy strategy has a wide distribution of flight segment lengths, and the mean of this distribution is infinite. But in the case of the albatross, the act of catching food cannot be accomplished during a flight (Fig. 1), so too much time is spent in the flight segments for a Lévy strategy to minimize search time.
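
To get a feel for what an infinite-mean step-length distribution implies, here is a minimal numerical sketch (the exponent and sample sizes are illustrative choices, not fitted to the albatross data): step lengths are drawn from a power law p(l) ∝ l^(-μ) with μ = 1.5, and the running sample mean never settles down.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step lengths drawn from a power law p(l) ~ l**(-mu) for l >= 1.
# For mu <= 2 the mean step length is infinite -- the defining feature
# of a Levy-flight search pattern.  The exponent here is illustrative.
mu = 1.5
n = 1_000_000
steps = rng.random(n) ** (-1.0 / (mu - 1.0))   # inverse-CDF sampling of p(l) ~ l**(-mu)

# The running sample mean never converges: rare, enormous flight
# segments keep pushing it upward.
running_mean = np.cumsum(steps) / np.arange(1, n + 1)
for k in (10**3, 10**4, 10**5, 10**6):
    print(f"mean step length after {k:>9,} samples: {running_mean[k - 1]:,.1f}")
```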

Figure 1: Gotcha! An albatross completes a search for a non-replenishable target.

Bénichou and colleagues2 consider optimal strategies for such searches.

At first glance, diffusion might seem to be a satisfactory strategy for minimizing search time. When a particle starting at an arbitrary origin moves randomly, its location is described by a Gaussian probability distribution that spreads out with a variance that grows linearly with time. It might seem that any target will be found, because its location will eventually fall under the spreading Gaussian peak. But this is not so. Even in one dimension, some particle trajectories move to the right while a hidden, static target lies to the left. Averaging over all possible trajectories reveals the startling fact that the mean first-passage time to any particular target is infinite.
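
This counter-intuitive claim is easy to check numerically. The sketch below (the target position, number of walks and time windows are arbitrary choices) estimates first-passage times of a one-dimensional random walk; because the first-passage distribution has a t^(-3/2) tail, the average over the walks that do reach the target keeps growing as the observation window is extended.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage_time(target=10, max_steps=200_000):
    """Steps until a 1-D symmetric random walk from the origin first hits
    `target`, or None if it has not arrived within `max_steps`."""
    path = np.cumsum(rng.choice((-1, 1), size=max_steps))
    hits = np.nonzero(path == target)[0]
    return int(hits[0]) + 1 if hits.size else None

# The t**(-3/2) tail of the first-passage distribution means the average
# over completed searches keeps growing with the observation window:
# the true mean first-passage time is infinite.
for cap in (10_000, 50_000, 200_000):
    times = [first_passage_time(max_steps=cap) for _ in range(300)]
    found = [t for t in times if t is not None]
    print(f"window {cap:>7} steps: found the target in {len(found)/len(times):.0%} of walks, "
          f"mean time among successes {np.mean(found):,.0f}")
```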

But what if there are many targets? Consider a random walker starting on a line of targets on a square lattice. Here, the walker gradually diffuses away from the line, so the targets are visited less and less over time. One can calculate that after a large number of jumps, N, only N^(1/2)/ln N of the targets are visited. In three dimensions, the result is worse: even more targets go unfound, with only ln N sites discovered4.
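
A short simulation of the two-dimensional case, given below, shows how slowly the line of targets is exhausted; the N^(1/2)/ln N prediction is quoted only up to an unspecified constant factor.

```python
import numpy as np

rng = np.random.default_rng(2)

# 2-D nearest-neighbour random walk on a square lattice, started on the
# x-axis, which we take to be the line of targets (one target per site).
moves = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
N = 10**6
path = np.cumsum(moves[rng.integers(0, 4, size=N)], axis=0)

on_line = path[:, 1] == 0                    # steps that land back on the target line
for n in (10**4, 10**5, 10**6):
    visited = np.unique(path[:n][on_line[:n], 0]).size
    predicted = np.sqrt(n) / np.log(n)       # ~ N**0.5 / ln N, up to a constant factor
    print(f"N={n:>8}: distinct targets visited {visited:>5}, "
          f"~N^0.5/lnN = {predicted:.0f}")
```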

Despite this unpromising prognosis, Bénichou and colleagues2 rely, in part, on diffusion in their two-dimensional search model for non-revisitable static targets. They propose a two-state search pattern. In the first, dynamic case, the searching is diffusive, and the target is found immediately when it is within a certain distance, A. In a second, static case, the seeker is stationary and 'reacts' with the target at a certain rate when the target comes within a certain range.

In both cases, the time spent on the search phase is a random variable. When the search phase ends, whether successfully or not, the seeker switches to ballistic relocation: it shoots off in a random direction for a random stretch of time during which, according to the model, discovery of a target is not possible. This phase is followed by a further search phase, and so the cycle continues.
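
A bare-bones simulation of the dynamic variant of this cycle might look like the sketch below. The periodic box standing in for a dilute field of targets, the exponentially distributed phase durations and all parameter values are illustrative assumptions rather than details taken from the paper; the aim is only to show the alternation of diffusive searching and blind ballistic relocation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative parameters (not from the paper): detection radius A, diffusion
# constant D, ballistic speed v, and a periodic box of side L standing in for
# a dilute field of targets (one target per box).
A, D, v, L = 1.0, 1.0, 5.0, 20.0
dt = 0.02                                   # time step for the diffusive phase

def dist(p, q):
    """Minimum-image distance on the periodic box."""
    d = np.abs(p - q)
    return np.hypot(*np.minimum(d, L - d))

def search_time(t_search, t_move, max_time=10_000.0):
    """One realisation of the two-phase search; returns the time to detection."""
    pos = rng.uniform(0, L, size=2)
    target = rng.uniform(0, L, size=2)
    t = 0.0
    while t < max_time:
        # Search phase: diffusion, with immediate detection inside radius A.
        for _ in range(int(rng.exponential(t_search) / dt)):
            pos = (pos + rng.normal(0.0, np.sqrt(2 * D * dt), size=2)) % L
            t += dt
            if dist(pos, target) < A:
                return t
        # Relocation phase: a straight flight at speed v, during which the
        # target cannot be detected.
        tau = rng.exponential(t_move)
        angle = rng.uniform(0.0, 2 * np.pi)
        pos = (pos + v * tau * np.array([np.cos(angle), np.sin(angle)])) % L
        t += tau
    return max_time

# A crude scan over the mean phase durations: as the article notes, spending
# too little or too much time in either phase is wasteful.
for t_search, t_move in [(0.5, 0.5), (2.0, 2.0), (8.0, 8.0)]:
    mean_t = np.mean([search_time(t_search, t_move) for _ in range(30)])
    print(f"mean search phase {t_search}, mean relocation {t_move}: "
          f"average search time {mean_t:7.1f}")
```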

The authors derive2 an equation for the distribution of first-passage times required for the seeker to find the target, for a general model combining the static and dynamic cases and allowing for an arbitrary rate of diffusion, D, and rate of reaction, k. As an exact solution is not available, approximations were necessary until an equation with an analytical solution could be derived.

The approximate formula that emerged agreed well with simulations. In the case of a static seeker, abandoning a search too quickly (within a time t < 1/k) will probably result in missing a target that is present. But spending too much time searching, t >> 1/k, when a target is not present will not be conducive to optimizing the search efficiency either. Similarly, spending too much time diffusing in an area without targets is unproductive, as is spending too little time in a target-rich environment. Optimal mean times for search and relocation are possible.

Bénichou et al. consider a low-density system in which the average distance, B, between two targets is much greater than the distance A from which a target can be discovered. In addition, they stipulate the condition D/v << A, where v is the velocity of the seeker in the ballistic phase. Essentially, this is a requirement that ballistic relocation should be very efficient relative to diffusion. The optimal mean search time is found to be different in the static case, where it depends on k and is proportional to 1/v^(1/2), and in the dynamic case, where it depends on D and is proportional to 1/v^2. Surprisingly, however, for an optimal search in both the static and dynamic cases, the same condition is found for the mean time spent in the ballistic motion phase. This is a function of A and B, proportional to 1/v, and independent of D and k.
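
In symbols, and keeping only the dependences quoted above, the regime and the optimal times can be summarized schematically as follows (the prefactors, written here as unspecified functions f, g and h of the model parameters, are not reproduced from the paper):

```latex
% Low-density regime: targets far apart compared with the detection radius,
% and ballistic relocation much more efficient than diffusion.
B \gg A, \qquad \frac{D}{v} \ll A .

% Optimal mean search time (schematic; prefactors f, g not specified here):
t^{\mathrm{opt}}_{\mathrm{static}} = f(k, A, B)\, v^{-1/2}, \qquad
t^{\mathrm{opt}}_{\mathrm{dynamic}} = g(D, A, B)\, v^{-2} .

% Optimal mean duration of the ballistic relocation phase -- the same in
% both cases, and independent of D and k:
t^{\mathrm{opt}}_{\mathrm{ballistic}} = h(A, B)\, v^{-1} .
```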

In any probability-based search problem, adding information provides values for conditional probabilities. Bénichou and colleagues' analysis holds under strict conditions, especially that the seeker has no knowledge about where the perishable targets are. As soon as that condition changes, the search optimization strategy should also change. For instance, a methodical search would be best if the target must be found regardless of the amount of time involved.

These results for non-replenishable targets might be useful for situations ranging from animals foraging for food to proteins binding on DNA. For other cases — where the target can move or trick the seeker, for example — other optimal strategies are expected. The same would be true if different cost penalties, other than minimizing the search time, are chosen: if, for instance, the ballistic phase were more costly per unit time than the diffusive phase.

References

  1. Morse, P. M. & Kimball, G. E. reprinted in The World of Mathematics Vol. 4 (ed. Newman, J. R.) 2160–2178 (Simon & Schuster, New York, 1956).

  2. Bénichou, O., Loverdo, C., Moreau, M. & Voituriez, R. Phys. Rev. E 74, 020102 (2006).

  3. Viswanathan, G. M. et al. Nature 381, 413–415 (1996).

  4. Weiss, G. H. & Shlesinger, M. F. J. Stat. Phys. 27, 355–363 (1982).
