Abstract
Elementary Motion Detectors (EMD) are well-established models of visual motion estimation in insects. The response of EMDs is tuned to specific temporal and spatial frequencies of the input stimuli, which matches the behavioural response of insects to wide-field image rotation, called the optomotor response. However, other behaviours, such as speed and position control, cannot be fully accounted for by EMDs because these behaviours are largely unaffected by image properties and appear to be controlled by the ratio between the flight speed and the distance to an object, defined here as relative nearness. We present a method that resolves this inconsistency by extracting an unambiguous estimate of relative nearness from the output of an EMD array. Our method is suitable for the estimation of relative nearness in planar scenes, such as when flying above the ground or beside large flat objects. We demonstrate closed-loop control of the lateral position and forward velocity of a simulated agent flying in a corridor. This finding may explain how insects can measure relative nearness and control their flight despite the frequency tuning of EMDs. Our method also provides engineers with a relative nearness estimation technique that benefits from the low computational cost of EMDs.
Introduction
Flying insects like flies, bees, moths and dragonflies are well known for their exquisite flight control capabilities. Despite their tiny brains and relatively crude visual systems, they routinely fly through cluttered environments, navigating over large distances and deftly avoiding obstacles in their path. To control their flight, insects use optic flow, defined as the pattern of apparent motion generated on their retina as they move through a scene^{1}. Granted sufficient image texture, optic flow measures the apparent angular velocity of surrounding objects. For purely translational motion, translational optic flow (TOF) becomes proportional to the relative nearness — noted η — defined here as the ratio between the flight speed and the distance to an object^{2}. Many complex behaviours exhibited by flying insects, such as visual odometry, landing, position, speed and height control are regulated using information extracted from optic flow (for reviews see^{3,4,5}). Similar optic flow based strategies have also been successfully used to generate autonomous behaviour in miniature flying robots^{6,7,8,9,10,11}, and even biohybrid robots^{12}. Optic flow based strategies are interesting for the development of control systems in miniature flying vehicles because they have low computational cost and can be implemented on small platforms where constraints in weight and computational power are important.
The Elementary Motion Detector (EMD), introduced by Hassenstein and Reichardt^{13,14}, is a well-established biological model for visual motion estimation. The model was originally developed to account for the turning responses made by walking beetles – known as the optomotor response – when presented with wide-field yaw image motion^{13} and has since been shown to match the optomotor responses of a wide range of insects^{15}. The EMD performs spatiotemporal correlation of the signals from two adjacent photoreceptors and requires only two low-pass filters, two multiplications and one subtraction to provide an estimate of visual motion. This organisation is thought to exist in the early processing stages of the insect visual system in the form of hundreds of EMD units, each taking input from neighbouring photoreceptors around the panoramic field of view of insect eyes.
Neurophysiological studies^{16,17,18,19,20,21} have provided good evidence for the EMD as a candidate model for motion detection in insect brains, although recent literature shows evidence for both Barlow-Levick^{22} and Hassenstein-Reichardt models^{13,14}, suggesting a hybrid implementation (for reviews see^{15,23}). Indeed, models integrating the output of EMD arrays from a wide field of view – mimicking the tangential cells in the lobula plate of flies^{18} – have been shown to detect the direction and amplitude of ego-rotations^{9,24}, and to perform control of translational motion with simulated agents^{25,26,27,28,29} and robotic agents^{8,30,31,32}.
One of the key features of the EMD model is its dependency on the spatial structure of the scene^{33,34,35,36,37}. Both the angular image speed tuning and the temporal frequency tuning of the EMD form a bell shape, with a maximum response at a frequency defined by its characteristics – namely, its integration time and interommatidial angle. While the frequency tuning of the EMD model mimics that observed in the optomotor response to rotational motion, strong support for the model as a basis for translational motion detection is lacking. Behavioural experiments suggest that insects are able to use translational optic flow to correctly estimate relative nearness independently of the spatial structure of the visual input^{3,38,39,40}. This is something that cannot be derived unambiguously from the raw EMD signals because their bell-shaped tuning to angular speed is not a monotonic function. Limitations in EMD-based control of translational motion due to the drop in EMD response at low distance from a surface, causing collisions into the surface, have also been reported^{28,29}.
Here, we present a novel approach that suggests how the output of EMD arrays could indeed provide the basis for translational motion control in both insects and robotic agents. We show that, although the response of a single EMD does not provide a reliable measurement of angular image speed, comparing responses across an array of EMDs can provide an unambiguous estimate of relative nearness. We study analytically the response of an azimuthally distributed array of EMDs when moving along a planar surface covered by a pattern with a natural distribution of spatial frequencies^{41,42,43,44}. This surface models either large objects on the sides of the agent, or the ground below the agent. We show that, when the ratio between the speed of the agent and its distance to the surface is higher than a threshold we call η_{min}, the angular location of the EMD with maximum response provides an unambiguous estimate of relative nearness. Our estimator performs best at low distance from the surface – in cases where the raw EMD output provides ambiguous estimates of relative nearness. We then discuss how this finding could be used for flight control, and how the model parameters could be dynamically adapted to enhance the relative nearness estimation. Finally, the proposed EMD-based relative nearness estimator is validated in closed-loop control of a simulated agent.
Model
Let us consider an agent — be it biological or artificial — flying in an environment composed of a flat surface (Fig. 1a). This surface could represent the ground below a flying agent, or one of the two vertical walls of the corridors commonly used for behavioural studies of insects and birds (for example^{3,40,45,46,47,48}).
The flying agent moves at speed V and distance d to the surface (Fig. 1a). Let us define the azimuth angle Φ as the angle between the front of the agent and a viewing direction. We will refer to the viewing direction Φ = 90° as the “lateral region” of the field of view, but this could equally be the “ventral region” of the field of view if the surface was below the agent.
In order to mimic the properties of a real world environment, the flat surface is covered with a pattern that contains a natural distribution of spatial frequencies^{41,42,43,44,49}, i.e. its power spectrum follows a distribution of frequencies in 1/f ^{2} (Fig. 1, inset).
The eye of the flying agent is composed of a planar array of photoreceptors (Fig. 1b). This plane is orthogonal to the patterned surface and contains the agent velocity vector \(\overrightarrow{V}\). Each photoreceptor points at a different azimuth angle Φ, and consecutive photoreceptors are separated by an angle ΔΦ. The receptivity function of a photoreceptor is approximated by a Gaussian window centered on Φ with standard deviation σ, as in previous studies^{26,27,50,51,52}. The acceptance angle of a photoreceptor — noted Δρ — is defined as the full width at half maximum of the Gaussian window^{53}.
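As an illustration, the Gaussian receptivity function described above can be sketched in Python; the function name and the discretisation of the image into angle/intensity samples are our assumptions, not the paper's implementation:

```python
import numpy as np

def photoreceptor_response(image_angles, image_intensity, phi, delta_rho):
    """Response of a photoreceptor pointing at azimuth `phi` (radians).

    The receptivity is a Gaussian window centered on `phi` whose full
    width at half maximum equals the acceptance angle `delta_rho`.
    """
    # Convert the FWHM acceptance angle into the Gaussian standard deviation
    sigma = delta_rho / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    weights = np.exp(-0.5 * ((image_angles - phi) / sigma) ** 2)
    # Normalised weighted average of the image intensity under the window
    return np.sum(weights * image_intensity) / np.sum(weights)
```

A uniform image yields the uniform intensity back regardless of Δρ; a textured image is low-pass filtered spatially, which is what removes spatial periods smaller than the acceptance angle.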
A series of EMDs^{14,34,35} takes input from the photoreceptor array. The output \({R}_{{{\rm{\Phi }}}_{i}}\) of the EMD circuit pointed at the direction Φ_{ i } is given by the difference of the results of two multiplications (Fig. 1b). The first multiplication is the product of the low-pass filtered signal of the photoreceptor pointed at \({{\rm{\Phi }}}_{i}-\frac{{\rm{\Delta }}{\rm{\Phi }}}{2}\) and the unfiltered signal of the photoreceptor pointed at \({{\rm{\Phi }}}_{i}+\frac{{\rm{\Delta }}{\rm{\Phi }}}{2}\). The second multiplication is the product of the unfiltered signal from the photoreceptor pointed at \({{\rm{\Phi }}}_{i}-\frac{{\rm{\Delta }}{\rm{\Phi }}}{2}\) and the low-pass filtered signal of the photoreceptor pointed at \({{\rm{\Phi }}}_{i}+\frac{{\rm{\Delta }}{\rm{\Phi }}}{2}\).
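The balanced correlator just described can be sketched as follows, with discrete first-order low-pass filters standing in for the model's delay stages; the class name and the sampling period `dt` are our assumptions:

```python
import numpy as np

class BalancedCorrelatorEMD:
    """One Hassenstein-Reichardt EMD unit (balanced correlator).

    Each update takes the two photoreceptor signals and returns the
    difference of the two mirror-symmetric delay-and-correlate branches.
    The delays are discrete first-order low-pass filters with time
    constant `tau` (seconds), updated at sampling period `dt`.
    """
    def __init__(self, tau, dt):
        self.alpha = dt / (tau + dt)   # first-order low-pass coefficient
        self.lp_left = 0.0             # filtered state, left photoreceptor
        self.lp_right = 0.0            # filtered state, right photoreceptor

    def update(self, s_left, s_right):
        # Low-pass filter (the "delay") applied to each input
        self.lp_left += self.alpha * (s_left - self.lp_left)
        self.lp_right += self.alpha * (s_right - self.lp_right)
        # Delayed-left x direct-right, minus direct-left x delayed-right
        return self.lp_left * s_right - s_left * self.lp_right
```

For a pattern drifting from the left photoreceptor towards the right one, the first branch correlates strongly and the mean output is positive; motion in the opposite direction flips the sign.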
Predicted Steady-State EMD Response
In this section we derive the expression of the EMD output value R as a function of five parameters: the azimuth angle Φ, the agent speed V, the distance between the agent and the surface d, the interommatidial angle ΔΦ, and the time constant τ of the EMD low-pass filter blocks.
The EMD used in this study is a balanced correlator^{14} composed of two linear low-pass filters, two multiplications and one subtraction. The mean EMD response to a moving broadband image can be expressed as the sum of its responses to the individual sinusoidal components of the image, weighted by the power spectrum of the image^{37}. For a pattern containing a naturalistic distribution of frequencies – i.e. a power spectrum in 1/f ^{2} – the mean EMD response is thus given in equation (1).
where \({R}_{{\rm{\Phi }}}^{f}\) is the response of the EMD pointed at the viewing direction Φ for a surface covered with a pattern that contains only one spatial frequency f, i.e. a sinusoidal grating. In equation (1) the integral computes summation across a range of frequencies. Note that this does not imply that frequency summation is implemented in the insect nervous system, and it thus does not require additional neural computation. The frequency summation is however needed in this study to predict the EMD response to a signal that is itself the sum of sinusoidal components of varying frequencies.
The response R_{ sin } of one EMD to a sinusoidal stimulus was derived in a previous study^{35} for the case of a rotating drum patterned on its inner surface, and is shown in equation (2).
where ΔI is the amplitude of the sinusoidal stimulus, ω is the frequency of the stimulus, λ is its angular period, ΔΦ is the interommatidial angle, and τ is the time constant of the low-pass filter.
While R_{ sin } was derived under the assumption that ΔI, λ and ω are constant across the field of view^{35}, in our case (Fig. 1a) they vary depending on the azimuth angle as well as on the position and speed of the agent. Let us introduce the apparent signal amplitude \(\widehat{{\rm{\Delta }}I}={\rm{\Delta }}I(f,{\rm{\Delta }}{\rm{\Phi }},{\rm{\Phi }},d)\), the apparent angular period \(\widehat{\lambda }=\lambda (f,{\rm{\Phi }},d)\), and the apparent angular frequency \(\widehat{\omega }=\omega (f,V)\). For example, the apparent angular period decreases with increasing distance to the wall; it is also maximum for Φ = 90° and tends to 0 for Φ → 0° and Φ → 180°. We can thus reformulate equation (2) for our case as equation (3). The expressions of \(\widehat{{\rm{\Delta }}I}\), \(\widehat{\lambda }\), and \(\widehat{\omega }\) are shown in equation (4); for more details see Supplementary Section S2.
By substituting equation (4) in equation (3), we obtain equation (5).
The complete EMD output given by the integral in equations (1) and (5) is approximated as a discrete sum by considering a finite number \({N}_{f}\gg 1\) of spatial frequencies f_{ k }, as shown in equation (6).
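The discrete sum of equation (6) can be sketched as below; `single_freq_response` is a placeholder for the per-frequency response \({R}_{{\rm{\Phi }}}^{f}\) of equation (3), and the log spacing of the sampled frequencies and the normalisation of the 1/f^{2} weights are our choices, not the paper's:

```python
import numpy as np

def mean_emd_response(single_freq_response, f_min, f_max, n_f=200):
    """Approximate the 1/f^2-weighted frequency integral by a discrete sum.

    `single_freq_response(f)` returns the EMD response to a sinusoidal
    grating of spatial frequency f; the 1/f^2 weights model the
    naturalistic power spectrum of the pattern.
    """
    freqs = np.geomspace(f_min, f_max, n_f)   # log-spaced spatial frequencies
    weights = 1.0 / freqs ** 2                # naturalistic power spectrum
    responses = np.array([single_freq_response(f) for f in freqs])
    return np.sum(weights * responses) / np.sum(weights)
```

With the weights normalised as above, a frequency-independent response is returned unchanged, which is a convenient sanity check.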
The range of spatial frequencies f_{min} and f_{max} (see Supplementary Section S1) was chosen so that they do not interfere with the results of the study. The maximum spatial period is several orders of magnitude larger than the length covered by the agent flying at the maximum speed during an EMD integration time. The minimum spatial period is small compared to the length covered by one acceptance angle Δρ at the smallest considered distance to the wall, and is thus filtered out by the Gaussian acceptance angle convolution, which also avoids potential issues of spatial aliasing^{54}.
Theoretical Results
In this section, we analyse theoretical predictions of the response of an EMD array to a planar surface covered with a natural pattern. We evaluated equation (6) for varying values of the five parameters Φ, V, d, ΔΦ and τ (see Supplementary Table S1). These results are analysed in the following paragraphs.
We first show that the value of the EMD output is not a reliable estimation of relative nearness (i.e. V/d) in that a single value of EMD output cannot be unambiguously associated with a single value of the V/d ratio. Then, we introduce the angle Ψ, which is obtained from the azimuthal location of maximum output in the EMD array. We show that the angle Ψ covaries monotonically – though nonlinearly – with V/d, and thus provides an unambiguous estimate of relative nearness.
EMD Response Across the Visual Field
When a flying agent is moving in a straight line at constant speed, in purely translational motion (Fig. 1a), translational optic flow is proportional to flight speed V and inversely proportional to the distance to an object in the scene^{2,55}: \({\rm{TOF}}({\rm{\Phi }})=\frac{V}{{D}_{{\rm{\Phi }}}}\,\sin ({\rm{\Phi }})\). For the planar surface shown in Fig. 1, which is aligned with the velocity vector and at a distance d from the agent, the distance to the surface in the viewing direction Φ is D_{Φ} = d/sin(Φ). Translational optic flow can then be obtained geometrically with equation (7), which is positively correlated to flight speed V and inversely correlated to distance d. Note that translational optic flow at viewing angle Φ = 90° yields a maximum value — noted TOF_{90} — that is equal to the relative nearness η = V/d.
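Substituting D_Φ = d/sin(Φ) into TOF(Φ) = (V/D_Φ) sin(Φ) gives TOF(Φ) = (V/d) sin²(Φ); a minimal sketch:

```python
import numpy as np

def translational_optic_flow(phi, V, d):
    """Translational optic flow (rad/s) at azimuth `phi` (radians) for an
    agent flying at speed V parallel to a planar surface at distance d.

    With D_phi = d / sin(phi), TOF = (V / D_phi) * sin(phi)
                                   = (V / d) * sin(phi) ** 2.
    """
    return (V / d) * np.sin(phi) ** 2
```

The maximum, at Φ = 90°, equals the relative nearness V/d, and the profile falls to zero towards the front and rear of the field of view.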
For a planar EMD array, the absolute value of the EMD response R increases at all azimuth angles with increasing flight speed in the range of flight speeds considered in our study (Fig. 2a). At higher flight speeds, the EMD response reaches a maximum then decreases with increasing flight speed (see Supplementary Fig. S3). However, R does not always increase with decreasing distance to the surface (Fig. 2c), contrary to optic flow which increases with decreasing distance. For example, with the EMD parameters used for Fig. 2, R increases for decreasing values of d only in the extreme frontal and rear parts of the field of view (in the ranges Φ ∈ [0°, 30°] and Φ ∈ [150°, 180°]). In most of the field of view (Φ ∈ [45°, 135°]), R increases with increasing values of d, which is the opposite of a relative nearness estimator.
Let us define R_{90} as the EMD response at Φ = 90°, and R_{max} as the maximum EMD response, which is located at Φ = Φ_{max} (Fig. 2a). Neither R_{90} nor R_{max} provides a correct estimate of relative nearness. While they both depend on flight speed and distance to the surface (Fig. 3b,c), the isocurves of R_{90} and R_{max} are not at a constant V/d ratio, as is the case for relative nearness (Fig. 3a). This means that, unlike relative nearness, a single V/d ratio can correspond to different R_{90} or R_{max} values. An agent flying at speed V and distance d to the surface should measure the same relative nearness when flying at double speed and double distance because the ratio V/d is the same in both cases. However, this is not the case for R_{90} and R_{max}, which yield two different values when the agent is flying at speed V at distance d, and at speed 2V at distance 2d.
Conversely, a single value of R_{90} or R_{max} can correspond to different V/d ratios. The ambiguity of the estimate provided by R_{90} and R_{max} is clearly visible when they are displayed as a function of V/d, i.e. the relative nearness (Fig. 3f,g). A single value of R_{90} or R_{max} can correspond to a wide range of relative nearness. For instance, for R_{90} = 0.06 on Fig. 3f, the relative nearness can be anywhere between 2 rad.s^{−1} and 16 rad.s^{−1}. Similarly, for R_{max} = 0.06 on Fig. 3g, the relative nearness can be anywhere between 2 rad.s^{−1} and 10 rad.s^{−1}.
Deviation of Maximum EMD Response Ψ as an Estimate of Relative Nearness
It is interesting to note that the maximum EMD response (noted R_{max} and indicated by red dots in Fig. 2) is not always located where the translational optic flow (defined in equation (7) as the image angular velocity) is the highest, i.e. at Φ = 90°. The location of the maximum EMD response (noted Φ_{max}) is thus not equivalent to the location of the maximum translational optic flow. Let us define Ψ, the deviation of the maximum EMD response from the side of the field of view, as \({\rm{\Psi }}=|{{\rm{\Phi }}}_{{\rm{\max }}}-90^\circ |\) (equation (8)).
The fact that the EMD response is not highest at Φ = 90° can be explained by two facts. First, the bell-shaped speed tuning of EMDs when presented with broadband images^{37} has a maximum at a specific angular speed (see Supplementary Fig. S3). Second, the apparent image speed is lower in the frontal and rear parts of the visual field than at Φ = 90°, as shown in equation (7). Thus, at high relative nearness, the EMD may respond with a larger value to the lower angular image speed at Φ = 90° ± Ψ than to the larger angular image speed at Φ = 90°.
With fixed distance to the surface, Ψ increases with increasing flight speed (Fig. 2a,b). With fixed flight speed, Ψ increases with decreasing distance (Fig. 2c,d). Thus, Ψ increases with increasing values of the ratio V/d, which is the relative nearness. As a consequence, we propose to use Ψ — rather than R — to estimate relative nearness.
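In an implementation, extracting Ψ amounts to locating the EMD unit with maximum response and measuring its angular distance from Φ = 90°; a minimal sketch (taking the magnitude of the signed EMD output here is our assumption):

```python
import numpy as np

def psi_from_emd_array(phis_deg, responses):
    """Deviation of the maximum EMD response from the lateral direction.

    `phis_deg` are the azimuth angles (degrees) of the EMD units and
    `responses` their (spatially smoothed) outputs; Psi is the absolute
    angular distance of the response-magnitude maximum from 90 degrees.
    """
    phi_max = phis_deg[np.argmax(np.abs(responses))]
    return abs(phi_max - 90.0)
```

Note that Ψ is zero whenever the response peaks at Φ = 90°, which is exactly the sub-threshold regime (η < η_min) discussed below.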
Contrary to R_{90} and R_{max}, the isocurves of Ψ are at a constant V/d ratio (Fig. 3d), which is also the case for the relative nearness (Fig. 3a). Moreover, a single value of Ψ corresponds to a single V/d ratio (Fig. 3h), like relative nearness (Fig. 3e).
The function \(\eta \mapsto {\rm{\Psi }}\) is monotonically increasing (Fig. 3h). However, this function is not strictly increasing for the lower values of η where Ψ = 0° (left region of Fig. 3h). This means that Ψ can be used to compare relative nearness in different regions of the field of view (as described later in the experimental section) only when relative nearness is higher than a threshold value.
Threshold for an Unambiguous Estimation of Relative Nearness
The deviation of maximum EMD response Ψ is equal to zero (i.e. Φ_{max} = 90°) for all values of η below a threshold η_{min} (lower right corner of Fig. 3d and left region of Fig. 3h). If η < η_{min}, then Ψ is null and provides no useful information on relative nearness. However if η > η_{min}, then Ψ is greater than zero and can be used to estimate relative nearness. In other words, the agent needs to fly sufficiently fast and/or close to the surface to get a relative nearness estimate from Ψ.
For Ψ to be measurable in a practical implementation, the maxima of the EMD response have to be sufficiently separated (Fig. 3i). The higher the relative nearness, the easier it is to detect the maxima, as shown by the relative difference between the maximum EMD response R_{max} and the EMD response between the maxima, R_{90} (Fig. 3j). For example, for a relative nearness of η = 5 rad.s^{−1}, our model predicts approximately an 8% difference between R_{max} and R_{90}. This value increases to approximately 22% for η = 10 rad.s^{−1} (Fig. 3k).
The threshold η_{min} depends on the time constant τ of the EMD low-pass filters and on the interommatidial angle ΔΦ (Fig. 4). η_{min} decreases with increasing time constant τ and increases with increasing interommatidial angle ΔΦ. For example, an agent with an interommatidial angle of ΔΦ = 3.0° and an EMD low-pass filter time constant τ = 10 ms will have a threshold η_{min} = 2 rad.s^{−1} (Fig. 4). To estimate relative nearness from Ψ (i.e. Ψ > 0°), this agent must fly at a speed of V > 2 m/s when it is at a distance d = 1 m from the surface. Similarly, it must remain at a distance of d < 0.5 m when flying at a speed of V = 1 m/s.
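The worked example above reduces to two bounds obtained directly from the condition η = V/d > η_min; a minimal sketch:

```python
def min_speed_for_estimate(eta_min, d):
    """Slowest speed (m/s) at which Psi > 0 at distance d, from V/d > eta_min."""
    return eta_min * d

def max_distance_for_estimate(eta_min, V):
    """Largest distance (m) at which Psi > 0 at speed V, from V/d > eta_min."""
    return V / eta_min
```

With η_min = 2 rad.s^{−1}, these reproduce the V > 2 m/s (at d = 1 m) and d < 0.5 m (at V = 1 m/s) bounds of the example.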
Experimental Results
The proposed relative nearness estimator based on EMD is validated with closedloop control of the lateral position and forward velocity of a simulated agent flying in a corridor with walls patterned by the surface shown in Fig. 1a. The agent can increase and decrease its forward velocity and lateral velocity. We will use the terms “forward command” and “lateral command” to refer to the velocity increments added to the forward and lateral velocity, respectively, in order to stay at equal distance to the two walls and to stabilize forward velocity at a constant value.
It is important to note that, in this section, we do not rely on the theoretical predictions of the EMD response presented in the previous section. We implemented the EMD model shown in Fig. 1b and computed its response to simulated images. The theoretical predictions only considered the steadystate EMD response to a signal with known power spectrum, while the results of this section use the actual response of the EMD model to computergenerated images.
Control Strategy for Lateral Position and Forward Velocity
The control strategy is similar to those presented in previous studies^{6,56,57,58}. As the agent moves forward, translational optic flow (TOF) is computed on its left and right sides. The difference between translational optic flow on each side is used to control the lateral position of the agent. For example, a higher translational optic flow on the right side of the agent will result in a leftward command. For speed control, the average translational optic flow on the left and right sides is compared to a reference value TOF_{ref}. The agent will accelerate when the measured average translational optic flow is lower than the reference value, and decelerate otherwise. This control strategy can be summarised as
where u_{lat} is the lateral command, u_{for} is the forward command, K_{lat} and K_{for} are proportional gains, TOF_{left} and TOF_{right} are respectively the translational optic flow measured on the left and right sides of the agent, and TOF_{ref} is a reference value. As the forward velocity is controlled using a reference translational optic flow value, the resulting forward velocity is expected to increase with increasing width of the corridor to compensate for the decreasing optic flow on the left and right sides.
In our experiments, the translational optic flow values TOF in equation (9) are replaced with the measured Ψ values:
where Ψ_{left} and Ψ_{right} are the deviation angles of the maximum EMD response on the left and right sides, and Ψ_{ref} is a reference value (Fig. 5b).
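The resulting Ψ-based control law can be sketched as follows; the gains and sign conventions (positive lateral command meaning rightward, positive forward command meaning acceleration) are our illustrative assumptions:

```python
def control_commands(psi_left, psi_right, psi_ref, k_lat, k_for):
    """Lateral and forward velocity increments from the Psi measurements,
    following the structure of equation (9) with TOF replaced by Psi."""
    # Higher Psi on the right (nearer right wall) -> negative, i.e. leftward
    u_lat = k_lat * (psi_left - psi_right)
    # Accelerate while the average Psi is below the reference Psi_ref
    u_for = k_for * (psi_ref - 0.5 * (psi_left + psi_right))
    return u_lat, u_for
```

At equilibrium, both Ψ values equal Ψ_ref and the commands vanish, which is the centered, constant-speed flight reported in the simulations.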
Simulation Environment
The simulated environment consists of two vertical walls covered with a “dead leaves” pattern^{43,59} (Fig. 6), which contains a naturalistic distribution of spatial frequencies. The simulation can be divided into four main steps: Image Processing, Control Law, Agent Dynamics and Image Generation (Fig. 5). At each simulation time step, a new panoramic image with a 360° field of view and interpixel angle ΔΦ is generated (Fig. 5d). The array of N EMD units takes input from consecutive pixels of the panoramic image, i.e. with constant interommatidial angle, like our eye model (Fig. 1b). The EMD units are updated and spatially filtered, then Ψ values are computed on the left and right sides from the output of the EMD units (Fig. 5a). Control commands for lateral and forward acceleration are then computed from the Ψ values (Fig. 5b). Finally, the position and velocity of the simulated agent are updated based on its current state and the applied control commands (Fig. 5c). The four simulation steps are repeated until the agent converges to a stable flight speed and lateral position.
Simulation Results
Simulated flights were performed with different initial lateral position, initial forward speed, tunnel width and reference command Ψ_{ref}. The agent state was measured after it stabilised its velocity and lateral position (Fig. 7).
The agent converges towards the center of the corridor (lateral position equal to zero) for each initial lateral position and forward velocity tested (Fig. 7a–d). The final forward speed increases with increasing tunnel width (Fig. 7g). This is an expected behaviour and matches the optic flow-based centering and speed control behaviour observed in flying insects. Indeed, this increase in speed allows the agent to maintain a constant optic flow for all tunnel widths (Fig. 7o). The Ψ angle converges to Ψ_{ref} (Fig. 7i–l), although it does so less reliably for lower Ψ_{ref} values (Fig. 7l, left). Similarly, there is a higher standard deviation of the lateral position for lower Ψ_{ref} (Fig. 7d, left). The relationship between Ψ_{ref} and relative nearness (Fig. 7p) is similar to the one predicted by our analytical model (Fig. 3h). This confirms that Ψ is a correct – though nonlinear – estimate of relative nearness.
An example of the EMD response during a simulation is shown in Fig. 6. At the beginning of this experiment (Fig. 6a), the agent is closer to the right wall and is flying at low speed. The Ψ angles are, on average, lower than the command Ψ_{ref} = 60°, which will push the agent to accelerate. Also, the Ψ angles are larger on the right side than on the left side, which will push the agent towards the left, i.e. closer to the center of the corridor. This is expected because the distance to the right wall is smaller than the distance to the left wall, so the relative nearness is higher on the right wall. Note that the raw EMD response has the inverse property: the EMD response at Φ = +90° (right) is smaller than the EMD response at Φ = −90° (left). Thus, if our controller had used the EMD response R_{90} to compute the lateral command instead of Ψ, the agent would have been pushed even further to the right and would have eventually collided with the surface. At the end of this experiment (Fig. 6b), the agent is flying closer to the center of the corridor at an increased speed. The Ψ angles are all close to the command Ψ_{ref} = 60°. The agent has converged to a stable lateral position and speed.
Discussion
The EMD is a biological model for motion estimation that has received strong experimental support as the foundation of motion detection in insects. Due to its relative simplicity, the EMD model also has good potential as a computationally fast motion estimator for engineering applications. Indeed, an EMD requires two multiplications for each pixel, one subtraction and two time delays, while the Lucas-Kanade algorithm^{60} requires 11 multiplications and 6 subtractions. However, the EMD model output does not provide a perfect estimation of relative nearness as it cannot be unambiguously expressed in angular speed. The response of EMDs for varying angular velocity indeed has a bell shape with a maximum at an angular image velocity that is a function of the EMD parameters as well as the spatial frequency of the input signal. This is problematic for biologists because insects appear to rely on relative nearness for flight control^{4,5} independently of image properties. The ambiguous nature of the EMD output is also problematic for engineers who require measures of angular speed (expressed in pixels or radians per second) for tasks such as ego-motion estimation or mapping. Also, for larger angular velocities, the EMD response decreases in a way that cannot be discriminated from a decrease in angular velocity. For example, as an agent approaches a surface – and thus as the angular image velocity increases – the response of an EMD may start decreasing. This leads to a crash into the surface when the EMD response is used to compute a repulsive force^{28,29}. This case is shown in Fig. 6a, where the absolute value of R_{90} is larger on the left side of the agent than on the right side even though the agent is offset to the right of the corridor – i.e. the angular image speed is smaller on the left side of the agent. Our simulated agent would have crashed into the wall on its right if the EMD response had been used instead of Ψ to compute its lateral command.
In other words, there is an apparent incompatibility between the main neurophysiological model for motion estimation (EMD) and the main behavioural model for insect flight control (optic flow). Several studies have proposed modifications to the EMD in order to correct its output (for example^{33,61,62,63}). Although they demonstrate improved robustness to varying contrast and spatial frequency, these models often require additional computational blocks. Most importantly, these models are less well-supported by electrophysiological recordings from the insect visual system. Here, we have shown that it is indeed possible to use a simple Hassenstein-Reichardt EMD output for estimation of relative nearness with limited additional computational blocks – namely spatial blurring and maximum localisation. These blocks integrate EMD responses across the visual field without modifying the structure of the correlator^{13}. Because our method relies on spatial integration across a wide field of view, it is especially suited to estimation of relative nearness to large obstacles around the agent, or to the ground below the agent.
We introduced the angle Ψ, which is the angle between the viewing direction pointing directly at the patterned surface and the viewing directions with maximal EMD response, and showed that this angle is closely related to relative nearness (Fig. 3) and is therefore suitable for controlling flight (Fig. 6). Our model predicts Ψ in the limited case of straight flights parallel to a planar surface. However, we demonstrated successful flight control based on Ψ in a simulation environment that does not constrain the agent to fly along straight paths (Fig. 6, right), and also for non-planar scenes (see Supplementary Fig. S7). The main novelty of the angle Ψ is that it relies on the relative response of several EMD detectors instead of relying on the absolute value of their output. In other words, we suggest that relative nearness is spatially encoded by the relative response of EMDs rather than by the magnitude of their responses, something that has strong biological plausibility. Indeed, computing Ψ consists mostly of detecting the maximum response in an array, which is easily implemented in neural systems using a Winner-Take-All network^{64} or using differentiation and zero-crossing^{9}.
While there has been much behavioural evidence that honeybees use relative nearness to control their flight^{4,39}, two recent studies have shown that flight control in bumblebees does indeed exhibit some dependency on the spatiotemporal properties of sinusoidal gratings^{65,66}. These apparently conflicting results can nonetheless be explained by the method we propose here, because the maximum output of an array of EMDs would exhibit spatiotemporal dependencies when presented with patterns containing single frequencies (see Supplementary Figs S5 and S8), but not when presented with more complex patterns containing multiple frequencies, such as checkerboards, which contain a series of discrete frequencies that are harmonics of a fundamental spatial frequency related to the size of the checkerboard squares. To avoid the ambiguities created by sinusoidal and checkerboard patterns, and to make our study more relevant to the natural behaviour of insects, we considered the output of the EMD model in response to the dead leaves pattern^{43,59}, which has a spectral content that matches that of natural scenes^{41,42} with a frequency distribution in 1/f ^{2}. The method we propose in this paper for extracting relative nearness from EMD output is a consequence of the EMD dependency on spatial frequency, coupled with the geometry of the environment. The response of the EMD is tuned to a specific ratio between image speed and angular period^{33}. When insects fly above the ground or beside large flat objects, visual features of the environment are seen from a greater distance in the forward and rearward regions of their field of view. Hence, these features subtend a smaller angle in the field of view; that is, they appear to have a smaller angular period, i.e. a higher spatial frequency.
As a consequence, the ratio of image speed to angular period at which the EMD output is maximal is achieved only at specific viewing angles, which then provides an estimate of relative nearness. Our scheme of using the angle Ψ to estimate relative nearness thus explains both the results that find a spatiotemporal dependency of flight control behaviour and those that find an optic flow dependency. It also highlights the influence of the structure of the stimulus pattern on the outcome of behavioural experiments on flight control.
Locating the maximum EMD response provides an estimate of relative nearness only above a threshold value η_{min}. This means that Ψ is a valuable measure only if the agent is flying fast enough and/or close enough to the surface. Our model predicts the value of the relative nearness threshold η_{min} from the interommatidial angle ΔΦ and the time constant τ of the EMD low-pass filters (Fig. 4). We can investigate whether Ψ is a candidate source of relative nearness information in a given insect species by measuring the speed-to-distance ratio V/d at which it flies and testing whether this ratio exceeds the value of η_{min} predicted from its interommatidial angle and time constant. For an insect such as a bee, with an interommatidial angle ΔΦ = 3.0° measured^{50} at an azimuth angle Φ = 90° and an estimated time constant^{67} τ = 10 ms, the predicted threshold is η_{min} = 2.0 rad/s (Fig. 4a). This threshold is indeed lower than the flight speed to distance ratio at which bees flew in previous experiments: lateral relative nearness was recorded between 3.0 rad/s and 3.8 rad/s in Bombus terrestris^{40,48}, and between 3.75 rad/s and 4.96 rad/s in Apis mellifera^{39,45}. This supports the hypothesis that these species may be using the visual angle at which maximal EMD output occurs to estimate relative nearness and control their flight speed. The same test can be replicated for other species using experimental measurements of ΔΦ, τ, and V/d.
Several studies have shown that, in addition to lateral relative nearness, ventral relative nearness may also be used by insects for flight control^{40,48,68,69}. Bumblebees rely primarily on lateral relative nearness cues for speed control when navigating narrow corridors, but prefer ventral cues over lateral cues in wider corridors^{40}. However, the lateral relative nearness in narrower tunnels (approx. 3.5 rad/s) is much smaller than the ventral relative nearness in wider tunnels (approx. 5.7 rad/s). Can this be explained by our model? Insect eyes tend to have reduced resolution in the ventral region^{53}, so Ψ values are expected to be lower in the ventral region than in the lateral region (Fig. 4b). For the narrow corridor case (ΔΦ_{lateral} = 3.0°, τ = 10 ms, η_{lateral} = 3.5 rad/s), our model predicts Ψ_{lateral} = 40° (Fig. 4b). Within our control strategy (Fig. 5b), this corresponds to the bee maintaining Ψ equal to the reference value Ψ_{ref} = 40°. Assuming that τ is uniform across the eye, and that bees use the same reference Ψ_{ref} to control flight speed whether they use lateral or ventral motion cues, we can predict the ventral interommatidial angle that matches the higher relative nearness in the ventral region. For the wide corridor case where η_{ventral} = 5.7 rad/s, Ψ_{ventral} = 40° is obtained with an interommatidial angle ΔΦ_{ventral} = 4.0° (Fig. 4b), which is indeed larger than ΔΦ_{lateral}. In other words, with equal Ψ_{ref} and τ in the ventral and lateral regions, but with a larger ventral interommatidial angle (ΔΦ_{ventral} = 4.0°) than lateral interommatidial angle (ΔΦ_{lateral} = 3.0°), our model correctly predicts the lateral and ventral relative nearness measured in bumblebees^{40}.
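The speed-control loop referred to above (maintaining Ψ at a reference Ψ_{ref}, Fig. 5b) can be illustrated with a deliberately simplified proportional update. This is a hedged sketch only: the gain, update rule, and function name are our own illustrative assumptions, not the controller used in the paper's simulations.

```python
def speed_command(psi_deg, v_current, psi_ref_deg=40.0, gain=0.05):
    """One step of a hypothetical proportional speed controller.

    Psi grows with relative nearness V/d, so Psi > Psi_ref means the
    agent is flying too fast and/or too close and should slow down,
    and vice versa. Gain and update rule are illustrative assumptions,
    not the controller used in the paper."""
    return max(0.0, v_current + gain * (psi_ref_deg - psi_deg))

print(speed_command(50.0, 1.0))  # Psi above reference -> slow down: 0.5
print(speed_command(30.0, 1.0))  # Psi below reference -> speed up: 1.5
```

In a corridor, such a loop naturally reproduces the observed width dependency: a broader corridor lowers V/d, hence lowers Ψ, and the controller responds by speeding up until Ψ returns to Ψ_{ref}.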
The relative nearness estimate provided by Ψ is most precise for η values just above the threshold η_{min}. Indeed, the slope of the function \(\eta \mapsto {\rm{\Psi }}\) is maximal for η values just above the η_{min} threshold, i.e. a small variation in relative nearness leads to a large variation in Ψ (Figs 3h and 4b,c). Below η_{min}, however, Ψ provides no information, as the slope of the function \(\eta \mapsto {\rm{\Psi }}\) is zero for η < η_{min} (Figs 3h and 4b,c). As a consequence, a flying agent should control its flight so as to maintain V/d close to, but not below, the threshold η_{min}.
Conversely, adapting η_{min} to a value just below the currently experienced relative nearness maximizes the precision of the Ψ estimate. Our results show that the value of the threshold can be adapted by varying the interommatidial angle and the EMD time constant (Fig. 4a). A reduced interommatidial angle leads to a reduced threshold (Fig. 4b), which would enable relative nearness estimates at low flight speed and/or faster reactions to obstacles approaching in the direction of flight. Interommatidial angles are fixed by the anatomy of the compound eye and thus cannot be modified during flight. Nonetheless, the distribution of interommatidial angles across the eye in different insect species may reflect adaptations that better enable Ψ estimates in relevant parts of the visual field. Another way in which the EMD output can be adapted is by modifying the time constant τ, which can be varied dynamically during flight^{70}. An increased time constant leads to a decreased threshold, which is desirable at low flight speed, while a decreased time constant leads to an increased threshold, which is desirable at high flight speed (Fig. 4c). We suggest that a flying agent using Ψ for flight control can improve the precision of its relative nearness estimate by increasing the time constant τ at low speed and decreasing it during fast forward flight. Biological evidence for dynamic changes in τ comes from Reber et al.^{71}, who showed that the decrease in flight speed observed in bumblebees in response to decreased light intensity is accompanied by an increased photoreceptor time constant.
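One way to make this suggestion concrete is a speed-dependent update of τ. The inverse-speed form below, its bounds, and the reference speed are all hypothetical choices of ours, intended only to show the qualitative rule (long τ when slow, short τ when fast); the paper does not specify such a law.

```python
def adapt_tau(v, tau_min=0.005, tau_max=0.05, v_ref=1.0):
    """Hypothetical adaptation rule for the EMD time constant: a longer
    tau at low speed lowers the threshold eta_min, a shorter tau at
    high speed raises it. The inverse-speed form, the bounds, and the
    reference speed are illustrative assumptions, not fitted to data."""
    tau = tau_max * v_ref / max(v, 1e-9)       # tau shrinks as speed grows
    return min(tau_max, max(tau_min, tau))     # clamp to plausible range

print(adapt_tau(0.5))   # slow flight -> long time constant (0.05 s)
print(adapt_tau(10.0))  # fast flight -> short time constant (0.005 s)
```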
The response of the EMD array to moving images contains spikes resulting from transient responses (Fig. 6, light grey). Transient EMD responses are present in our simulation but not in our model, which considers only the steady-state EMD response^{35}. These transient spikes represent measurement noise that has to be dealt with. For example, spatial differentiation and zero-crossing, a potential neuronal implementation of maximum detection^{9}, would be strongly affected by such spikes. In the simulation experiments, Ψ angles in the front and rear of the visual field were averaged (Fig. 5a), which lowers measurement noise. In addition, the EMD response was spatially integrated front-to-back with a Gaussian filter in order to remove spikes and facilitate the detection of maxima (Fig. 5a). However, the Gaussian filter also flattens the peaks around the EMD maxima, which makes them difficult to disambiguate when the maxima are close to each other (Fig. 6a, left) and may result in the detection of a single maximum (Ψ ≈ 0°). This is a potential explanation for the low Ψ outlier values that appear at low Ψ_{ref} angles, i.e. when the EMD maxima are close to each other (Fig. 7, left). Whether the EMD maxima can be detected and located in the presence of noise is a fundamental requirement for the applicability of our method in a real-world scenario. Our model predicts the difference between the peak EMD response and the EMD response at Φ = 90°, i.e. the EMD response in the “well” between the two maxima (Fig. 3i–k). We showed that for relative nearness values just above the threshold η_{min}, the EMD maxima are not only close to each other (small Ψ on the left of Fig. 3h), but also separated by a well of similar amplitude (R_{90} close to R_{max} on the left of Fig. 3k). Figure 3k shows the maximum level of measurement noise that still allows the EMD maxima to be distinguished from the EMD response at 90°.
To obtain a reliable estimate of relative nearness using the EMD output, it is not only necessary to keep the V/d value above the η_{min} threshold, but it is also necessary to keep a margin above η_{min} in order to have clearly separated peaks in the EMD response (Fig. 6b). Biological data discussed previously^{39,40,45,48} suggest that bees fly with Ψ_{ref} = 40°, which means that EMD maxima would be separated by a comfortable angle of 80°.
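The smoothing and peak-location steps discussed above can be sketched as follows. This is a simplified stand-in for the front-to-back spatial filter of Fig. 5a, assuming a 1D EMD array at one-degree resolution; the kernel width and the toy data (two peaks plus a transient spike in the well) are our own illustrative choices.

```python
import numpy as np

def smooth_and_locate_peaks(response, sigma=5):
    """Smooth a 1D EMD array with a Gaussian kernel (standing in for the
    front-to-back spatial filter of Fig. 5a), then locate the forward
    and rearward maxima on either side of azimuth 90 deg."""
    n = 6 * sigma + 1
    x = np.arange(n) - n // 2
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    smooth = np.convolve(response, kernel, mode="same")
    mid = len(smooth) // 2                      # index of azimuth 90 deg
    front = int(np.argmax(smooth[:mid]))        # forward maximum
    rear = mid + int(np.argmax(smooth[mid:]))   # rearward maximum
    return smooth, front, rear

# Toy data: peaks at 60 and 120 deg plus a transient spike in the "well".
az = np.arange(181.0)
resp = np.exp(-((az - 60.0) / 10.0) ** 2) + np.exp(-((az - 120.0) / 10.0) ** 2)
resp[90] += 1.0                                 # single-sample transient spike
smooth, front, rear = smooth_and_locate_peaks(resp)
psi = (abs(az[front] - 90.0) + abs(az[rear] - 90.0)) / 2.0  # front/rear average
```

The single-sample spike is attenuated by roughly the peak value of the kernel, while the broad genuine maxima survive, so the two peak locations, and hence Ψ, are recovered correctly.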
How does our method for estimating relative nearness generalize to a two-dimensional field of view? With a 2D spherical field of view, a viewing direction is defined by its elevation angle Θ in addition to the azimuth angle Φ. The present study assumes an elevation angle equal to zero. In equation (5), the apparent temporal frequency \(\widehat{\omega }\) is not affected by varying elevation angle, just as it is not affected by varying azimuth angle. The geometry of the environment is axially symmetric about the viewing direction with azimuth Φ = 90° and elevation Θ = 0° (Fig. 1a), and so are the apparent angular period \(\widehat{\lambda }\) and apparent signal amplitude \(\widehat{{\rm{\Delta }}I}\). As a consequence, the EMD output R is constant over circles centered on the viewing direction \([\begin{array}{c}{\rm{\Phi }}\\ {\rm{\Theta }}\end{array}]=[\begin{array}{c}90\\ 0\end{array}]\). In other words, with a 2D EMD array there are not just two maxima in the EMD response; instead, there is a circle where the EMD response is maximal. This circle is centered on the viewing direction \([\begin{array}{c}{\rm{\Phi }}\\ {\rm{\Theta }}\end{array}]=[\begin{array}{c}90\\ 0\end{array}]\), and the radius Ψ of the circle can be used as an estimate of relative nearness. Note that this circle contains the two viewing directions with maximum EMD response presented in this study \(([\begin{array}{c}{\rm{\Phi }}\\ {\rm{\Theta }}\end{array}]=[\begin{array}{c}90-{\rm{\Psi }}\\ 0\end{array}]\,{\rm{and}}\,[\begin{array}{c}{\rm{\Phi }}\\ {\rm{\Theta }}\end{array}]=[\begin{array}{c}90+{\rm{\Psi }}\\ 0\end{array}])\) and is thus an extension of the 1D case. The immediate benefit of a two-dimensional field of view is the reduction of measurement noise by spatial integration. Instead of performing spatial integration front-to-back (Fig. 5a), it could be performed along the second dimension of the image, i.e. along circular paths, in order to measure the average radius Ψ of the circle where the EMD response is maximal. We expect that with this method, maxima would be easier to distinguish even at low Ψ angles, while still filtering transient EMD response spikes. Whether this two-dimensional spatial integration is used by insects could be tested by checking whether the outputs of motion-detecting neurons are spatially pooled across circular regions.
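The circular pooling idea can be sketched by averaging a 2D EMD response over rings of constant angular distance from the viewing direction (Φ, Θ) = (90°, 0°) and taking the radius of the strongest ring as Ψ. One-degree radial bins and a flat-angle distance are our own simplifying assumptions; on a sphere the angular distance would be computed with the spherical law of cosines.

```python
import numpy as np

def psi_from_circular_pooling(emd_2d, phi_deg, theta_deg):
    """Estimate Psi as the radius of the ring of maximal EMD response
    centred on (Phi, Theta) = (90, 0), by pooling the 2D response over
    rings of constant angular distance from that centre. One-degree
    bins and the flat-angle distance are illustrative simplifications."""
    P, T = np.meshgrid(phi_deg, theta_deg, indexing="ij")
    radius = np.hypot(P - 90.0, T)             # angular distance from centre
    bins = np.round(radius).astype(int).ravel()
    counts = np.bincount(bins)
    sums = np.bincount(bins, weights=emd_2d.ravel())
    ring_mean = np.where(counts > 0, sums / np.maximum(counts, 1), -np.inf)
    return int(np.argmax(ring_mean))           # radius (deg) of strongest ring

# Toy 2D response whose maximum lies on a ring of radius 40 deg.
phi = np.arange(0.0, 181.0)
theta = np.arange(-45.0, 46.0)
P, T = np.meshgrid(phi, theta, indexing="ij")
resp_2d = np.exp(-((np.hypot(P - 90.0, T) - 40.0) / 5.0) ** 2)
print(psi_from_circular_pooling(resp_2d, phi, theta))  # 40
```

Each ring average pools many EMD outputs, which is what gives the 2D scheme its noise advantage over the 1D front-to-back integration.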
Conclusion
In this paper, we modeled the response of an array of EMDs for an agent flying along a flat patterned surface and showed that the raw value of the EMD response is poorly correlated with relative nearness. We showed that the location of the maximum response in the EMD array provides an appropriate estimate of relative nearness when the agent is flying sufficiently fast and/or close to the surface. We introduced the notion of a relative nearness threshold to provide bounds on speed and distance, and showed that these bounds are consistent with data from flight control experiments on Bombus terrestris and Apis mellifera. Finally, we proposed a flight control strategy that uses the location of the maximum EMD response as control input instead of optic flow, and tested it in a 3D simulation where we successfully controlled the forward velocity and lateral position of a simulated agent flying in a corridor. Similar to what is observed in insects, and as expected with optic flow based control, the agent’s forward velocity depends on the corridor width: the broader the corridor, the faster the agent advances.
The method of extracting relative nearness from EMD output described here relies on a standard EMD form^{14} and requires only a few additional computational steps, namely spatial filtering and detection of the maximum EMD response, both of which are easily modeled as neuronal networks. Further studies are needed to investigate whether this scheme is indeed used in biological systems and to identify its neural underpinnings. Nevertheless, our method provides an algorithm for relative nearness estimation with low computational cost that could be readily used in robotics applications.
Data availability
The datasets generated during and/or analysed during the current study are available from the corresponding author.
References
Gibson, J. J. The perception of the visual world. Psychological Bulletin 48, 1–259 (1950).
Koenderink, J. J. & van Doorn, A. J. Facts on optic flow. Biological Cybernetics 56, 247–254 (1987).
Srinivasan, M. V., Zhang, S. W. & Lehrer, M. Honeybee navigation: odometry with monocular input. Animal behaviour 56, 1245–1260 (1998).
Srinivasan, M. V. & Zhang, S. Visual Motor Computations in Insects. Annual Review of Neuroscience 27, 679–696 (2004).
Egelhaaf, M., Boeddeker, N., Kern, R., Kurtz, R. & Lindemann, J. P. Spatial vision in insects is facilitated by shaping the dynamics of visual input through behavioral action. Frontiers in Neural Circuits 6, 108 (2012).
Beyeler, A., Zufferey, J.-C. & Floreano, D. Vision-based control of near-obstacle flight. Autonomous Robots 27, 201–219 (2009).
Briod, A., Zufferey, J. C. & Floreano, D. A method for egomotion estimation in microhovering platforms flying in very cluttered environments. Autonomous Robots 40, 789–803 (2016).
Ruffier, F. & Franceschini, N. Optic flow regulation: The key to aircraft automatic guidance. Robotics and Autonomous Systems 50, 177–194 (2005).
Plett, J., Bahl, A., Buss, M., Kühnlenz, K. & Borst, A. Bio-inspired visual ego-rotation sensor for MAVs. Biological Cybernetics 106, 51–63 (2012).
Floreano, D., Ijspeert, A. J. & Schaal, S. Robotics and neuroscience. Current Biology 24, R910–R920 (2014).
Expert, F. & Ruffier, F. Flying over uneven moving terrain based on optic-flow cues without any need for reference frames or accelerometers. Bioinspiration & Biomimetics 10, 26003 (2015).
Huang, J. V., Wang, Y. & Krapp, H. G. Wall following in a semi-closed-loop fly-robotic interface. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) vol. 9793, 85–96 (2016).
Hassenstein, B. & Reichardt, W. Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Zeitschrift für Naturforschung 11b, 513–524 (1956).
Barlow, H. B. Possible principles underlying the transformations of sensory messages. In Sensory Communication (MIT Press, 2012).
Borst, A. Fly visual course control: behaviour, algorithms and circuits. Nature reviews. Neuroscience 15, 590–599 (2014).
Eichner, H., Joesch, M., Schnell, B., Reiff, D. F. & Borst, A. Internal Structure of the Fly Elementary Motion Detector. Neuron 70, 1155–1164 (2011).
Bahl, A. Object tracking in motionblind flies. Nature Neurosci. 16, 1–11 (2013).
Maisak, M. S. et al. A directional tuning map of Drosophila elementary motion detectors. Nature 500, 212–6 (2013).
Gabbiani, F. et al. Multiplication and stimulus invariance in a loomingsensitive neuron. Journal of Physiology Paris 98, 19–34 (2004).
Higgins, C. M., Douglass, J. K. & Strausfeld, N. J. The computational basis of an identified neuronal circuit for elementary motion detection in dipterous insects. Visual neuroscience 21, 567–586 (2004).
Krapp, H. G. How a fly escapes the reflex trap. Nature Neuroscience 18, 1192–1194 (2015).
Barlow, H. B. & Levick, W. R. The mechanism of directionally selective units in rabbit’s retina. The Journal of Physiology 178, 477–504 (1965).
Borst, A. In search of the holy grail of fly motion vision. European Journal of Neuroscience 40, 3285–3293 (2014).
Borst, A. Neural Circuits for Elementary Motion Detection. Journal of neurogenetics 7063, 1–13 (2014).
Neumann, T. R. & Bülthoff, H. H. Behaviororiented vision for biomimetic flight control. Proceedings of the EPSRC/BBSRC International Workshop on Biologically Inspired Robotics 203, 196–203 (2002).
Lindemann, J. P., Kern, R., van Hateren, J. H., Ritter, H. & Egelhaaf, M. On the Computations Analyzing Natural Optic Flow: Quantitative Model Analysis of the Blowfly Motion Vision Pathway. Journal of Neuroscience 25, 6435–6448 (2005).
Dickson, W. B., Straw, A. D., Poelma, C. & Dickinson, M. H. An integrative model of insect flight control. In 44th AIAA Aerospace Sciences Meeting and Exhibit, Reno, NV, USA, 9–12 Jan, 1–19 (2006).
Lindemann, J. P. & Egelhaaf, M. Texture dependence of motion sensing and free flight behavior in blowflies. Frontiers in behavioral neuroscience 6, 92 (2012).
Bertrand, O. J. N., Lindemann, J. P. & Egelhaaf, M. A Bioinspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes. PLoS Computational Biology 11, 1–28 (2015).
Franceschini, N., Pichon, J. M., Blanes, C. & Brady, J. M. From Insect Vision to Robot Vision [and Discussion] (1992).
Reiser, M. B. & Dickinson, M. H. A test bed for insect-inspired robotic control. Philosophical Transactions. Series A, Mathematical, Physical, and Engineering Sciences 361, 2267–2285 (2003).
Serres, J. R. & Ruffier, F. Biomimetic autopilot based on minimalistic motion vision for navigating along corridors comprising U-shaped and S-shaped turns. Journal of Bionic Engineering 12, 47–60 (2015).
Zanker, J. M., Srinivasan, M. V. & Egelhaaf, M. Speed tuning in elementary motion detectors of the correlation type. Biological cybernetics 80, 109–16 (1999).
Egelhaaf, M. & Reichardt, W. Dynamic response properties of movement detectors: Theoretical analysis and electrophysiological investigation in the visual system of the fly. Biological Cybernetics 56, 69–87 (1987).
Egelhaaf, M. & Borst, A. Transient and steady-state response properties of movement detectors. Journal of the Optical Society of America A: Optics, Image Science, and Vision 6, 116–127 (1989).
Barnett, P. D., Nordström, K. & O’Carroll, D. C. Motion adaptation and the velocity coding of natural scenes. Current Biology 20, 994–999 (2010).
Dror, R. O., O’Carroll, D. C. & Laughlin, S. B. Accuracy of velocity estimation by Reichardt correlators. Journal of the Optical Society of America A 18, 241 (2001).
Srinivasan, M. V., Zhang, S. W., Chahl, J. S., Barth, E. & Venkatesh, S. How honeybees make grazing landings on flat surfaces. Biological cybernetics 83, 171–183 (2000).
Baird, E., Srinivasan, M. V., Zhang, S. & Cowling, A. Visual control of flight speed in honeybees. Journal of experimental biology 208, 3895–3905 (2005).
Linander, N., Baird, E. & Dacke, M. Bumblebee flight performance in environments of different proximity. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology 202, 97–103 (2016).
Van der Schaaf, A. & Van Hateren, J. H. Modelling the power spectra of natural images: Statistics and information. Vision Research 36, 2759–2770 (1996).
Balboa, R. M. & Grzywacz, N. M. Power spectra and distribution of contrasts of natural images from different habitats. Vision Research 43, 2527–2537 (2003).
Zoran, D. & Weiss, Y. Natural Images, Gaussian Mixtures and Dead Leaves. Advances in Neural Information Processing Systems 1736–1744 (2012).
Schwegmann, A., Lindemann, J. P. & Egelhaaf, M. Temporal statistics of natural image sequences generated by movements with insect flight characteristics. PLoS One 9 (2014).
Serres, J. R., Masson, G. P., Ruffier, F. & Franceschini, N. A bee in the corridor: Centering and wallfollowing. Naturwissenschaften 95, 1181–1187 (2008).
Bhagavatula, P. S., Claudianos, C., Ibbotson, M. R. & Srinivasan, M. V. Optic flow cues guide flight in birds. Current Biology 21, 1794–1799 (2011).
Linander, N., Dacke, M. & Baird, E. Bumblebees measure optic flow for position and speed control flexibly within the frontal visual field. Journal of Experimental Biology 1051–1059 (2015).
Baird, E., Kornfeldt, T. & Dacke, M. Minimum viewing angle for visually guided ground speed control in bumblebees. Journal of Experimental Biology 213, 1625–1632 (2010).
Schwegmann, A., Lindemann, J. P. & Egelhaaf, M. Depth information in natural environments derived from optic flow by insect motion detection system: a model analysis. Frontiers in Computational Neuroscience 8, 1–15 (2014).
Spaethe, J. & Chittka, L. Interindividual variation of eye optics and single object resolution in bumblebees. Journal of Experimental Biology 206, 3447–3453 (2003).
Wiederman, S. D., Shoemaker, P. A. & O’Carroll, D. C. A model for the detection of moving targets in visual clutter inspired by insect physiology. PLoS One 3, e2784 (2008).
O’Carroll, D. C., Barnett, P. D. & Nordström, K. Temporal and spatial adaptation of transient responses to local features. Frontiers in Neural Circuits 6, 1–12 (2012).
Land, M. F. Visual Acuity in Insects. Annual Review of Entomology 42, 147–177 (1997).
Buchner, E. Behavioural Analysis of Spatial Vision in Insects. In Photoreception and Vision in Invertebrates, 561–621 (1984).
Zufferey, J.-C. Bio-Inspired Vision-Based Flying Robots. Ph.D. thesis, EPFL (2005).
Portelli, G., Serres, J., Ruffier, F. & Franceschini, N. Modelling honeybee visual guidance in a 3D environment. Journal of Physiology Paris 104, 27–39 (2010).
Neumann, T. & Bülthoff, H. Insect-inspired visual control of translatory flight. In Advances in Artificial Life (ECAL 2001), Lecture Notes in Computer Science 2159, 627–636 (2001).
Hyslop, A., Krapp, H. G. & Humbert, J. S. Control theoretic interpretation of directional motion preferences in optic flow processing interneurons. Biological Cybernetics 103, 353–364 (2010).
Lee, A. B., Mumford, D. & Huang, J. Occlusion models for natural images: A statistical study of a scaleinvariant dead leaves model. International Journal of Computer Vision 41, 35–59 (2001).
Lucas, B. D. & Kanade, T. An Iterative Image Registration Technique with an Application to Stereo Vision. Imaging 130, 674–679 (1981).
Higgins, C. M. Nondirectional motion may underlie insect behavioral dependence on image speed. Biological Cybernetics 91, 326–332 (2004).
Brinkworth, R. S. A. & O’Carroll, D. C. Robust models for optic flow coding in natural scenes inspired by insect biology. PLoS Computational Biology 5 (2009).
Li, J., Lindemann, J. P. & Egelhaaf, M. Peripheral processing facilitates optic flow-based depth perception. Frontiers in Computational Neuroscience 10, 111 (2016).
Rumelhart, D. E. & Zipser, D. Feature discovery by competitive learning. Cognitive Science 9, 75–112 (1985).
Dyhr, J. P. & Higgins, C. M. The spatial frequency tuning of optic-flow-dependent behaviors in the bumblebee Bombus impatiens. The Journal of Experimental Biology 213, 1643–1650 (2010).
Chakravarthi, A., Kelber, A., Baird, E. & Dacke, M. High contrast sensitivity for visually guided flight control in bumblebees. Journal of Comparative Physiology A (2017).
Harris, R. A., O’Carroll, D. C. & Laughlin, S. B. Adaptation and the temporal delay filter of fly motion detectors. Vision Research 39, 2603–2613 (1999).
Linander, N., Baird, E. & Dacke, M. How bumblebees use lateral and ventral optic flow cues for position control in environments of different proximity. Journal of Comparative Physiology A: Neuroethology, Sensory, Neural, and Behavioral Physiology 203, 343–351 (2017).
Portelli, G., Ruffier, F., Roubieu, F. L. & Franceschini, N. Honeybees’ speed depends on dorsal as well as lateral, ventral and frontal optic flows. PLoS One 6, 10 (2011).
Longden, K. D. & Krapp, H. G. Sensory neurophysiology: Motion vision during motor action. Current Biology 21, 1684 (2011).
Reber, T. et al. Effect of light intensity on flight control and temporal properties of photoreceptors in bumblebees. Journal of Experimental Biology 1339–1346 (2015).
Acknowledgements
We thank Basil Huber, Gregoire Heitz and Olivier Bertrand for helpful discussions. J.L. was supported by the Swiss National Science Foundation (200021_155907). E.B. acknowledges support from The Swedish Foundation for Strategic Research (FFL090056) and the Swedish Research Council (20144762). D.F. acknowledges support from the Swiss National Science Foundation.
Author information
Contributions
J.L. conceived the model with inputs from E.B. and D.F. J.L. conceived and conducted the experiments and analysed the results. J.L., E.B. and D.F. wrote the manuscript.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Lecoeur, J., Baird, E. & Floreano, D. Spatial Encoding of Translational Optic Flow in Planar Scenes by Elementary Motion Detector Arrays. Sci Rep 8, 5821 (2018). https://doi.org/10.1038/s4159801824162z