Computational Imaging Prediction of Starburst-Effect Diffraction Spikes

When imaging bright light sources, rays of light emanating from their centres are commonly observed; this ubiquitous phenomenon is known as the starburst effect. The prediction and characterization of starburst patterns formed by extended sources have been neglected to date. In the present study, we propose a novel trichromatic computational framework to calculate the image of a scene viewed through an imaging system with arbitrary focus and aperture geometry. Diffractive light transport, imaging sensor behaviour, and implicit image adjustments typical in modern imaging equipment are modelled. Characterization methods for key optical parameters of imaging systems are also examined. Extensive comparisons between theoretical and experimental results reveal excellent prediction quality for both focused and defocused systems.


A.1 Path-Length Error Calculation
The geometry considered for the calculation of the aberrational phase shift kW(x, y) is illustrated in Figure A.1, where F denotes the in-focus image plane. W(x, y) is the optical path length accumulated between the reference spheres S_F and S_R (centred at the origins of F and R, respectively) by the ideal ray l when traced backward from the ideal image point on R to the point (x, y) within the exit pupil E.
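Given W(x, y), standard Fourier optics obtains the intensity point-spread function as the squared Fraunhofer (Fourier) transform of the pupil function A(x, y) exp(i k W(x, y)). The NumPy sketch below illustrates this step; the quadratic W used here is a hypothetical defocus-like stand-in, not the backward-ray-traced path-length error computed in the paper.

```python
import numpy as np

def psf_from_pupil(aperture, W, wavelength):
    """Far-field intensity PSF of an aperture with path-length error W
    (both sampled on the same grid), via the squared Fourier transform
    of the pupil function A * exp(i k W)."""
    k = 2 * np.pi / wavelength
    pupil = aperture * np.exp(1j * k * W)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()  # normalise to unit total energy

# Example: circular aperture with a small quadratic (defocus-like) W.
# Illustrative only; the paper derives W(x, y) by backward ray tracing.
n = 256
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 1.0).astype(float)
W = 0.5e-6 * (X**2 + Y**2)  # metres of path-length error (assumed value)
psf = psf_from_pupil(aperture, W, wavelength=550e-9)
```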

A.2 Aperture Edge-Spike Ratio
For even-sided apertures, the number of observable diffraction spikes will typically be identical to the number of aperture edges; for odd-sided apertures, the number of diffraction spikes will be twice the aperture edge count. This can be qualitatively understood by considering that the aperture edges are the most significant contributors to diffractive effects. At each edge, diffraction spikes emanate in both normal directions. On an even-sided aperture, the spikes emanating from opposite, parallel edges overlap; such overlaps do not occur on odd-sided apertures, leading to the differing spike-to-edge ratios.
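The parity argument above can be verified directly by enumerating the edge-normal directions of a regular polygon and counting the distinct spike directions; `spike_count` below is a small illustrative helper, not part of the paper's framework.

```python
import numpy as np

def spike_count(n_edges):
    """Count distinct diffraction-spike directions for a regular
    n_edges-sided aperture. Each edge launches spikes along both of its
    normals; for even n, opposite edges have parallel normals, so their
    spikes overlap and the count halves."""
    # Outward normal angle of edge k of a regular polygon
    normals = 2 * np.pi * (np.arange(n_edges) + 0.5) / n_edges
    # Spikes run along +normal and -normal
    directions = np.concatenate([normals, normals + np.pi])
    # Deduplicate directions robustly (mod 2*pi) via rounded unit vectors
    vectors = np.round(np.exp(1j * directions), 6)
    return len(np.unique(vectors))

print(spike_count(6))  # hexagonal aperture -> 6 spikes
print(spike_count(5))  # pentagonal aperture -> 10 spikes
```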

A.3 Gamma Correction (sRGB)
The processing pipeline applied to experimental and theoretical results entailed a gamma compression of the linear sRGB images in accordance with the sRGB standard, given by

C' = 12.92 C,                  C ≤ a
C' = 1.055 C^(1/2.4) − 0.055,  C > a

where a = 0.0031308, and C is R, G, or B.
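The sRGB standard's gamma compression maps each linear channel value C through 12.92 C below the threshold a = 0.0031308 and through 1.055 C^(1/2.4) − 0.055 above it; a minimal per-channel NumPy sketch:

```python
import numpy as np

A_THRESH = 0.0031308  # linear-domain threshold from the sRGB standard

def srgb_compress(c):
    """Gamma-compress linear sRGB channel values in [0, 1]."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= A_THRESH,
                    12.92 * c,                          # linear toe segment
                    1.055 * np.power(c, 1 / 2.4) - 0.055)  # power segment

print(srgb_compress([0.0, 0.0031308, 0.5, 1.0]))
```

Note that the two segments meet continuously at the threshold, which is why the standard uses the exponent 1/2.4 together with the 1.055/0.055 offsets rather than a pure power law.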
Supplementary Information 3

Table B.1 presents the pupil and principal-plane positions of the examined AF Nikkor 50mm f/1.8D prime lens, as extracted from existing literature [1].

B.2 Focusing Offset Calculations
For an object located at a distance r_o from the imaging sensor plane and a focusing distance of r_f, Δz (as defined in Figure A.1) can be deduced from Equation (6), with γ = z_p − r_f. The resulting Δz can then be used in Equation (3) for the calculation of the path-length error.
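As a simplified illustration of how a focusing offset arises, the sketch below uses the Gaussian thin-lens equation with object distances measured from the lens; this is an assumption for illustration only, whereas the paper's Equation (6) measures r_o and r_f from the sensor plane and accounts for the pupil and principal-plane positions of the real lens.

```python
def image_distance(f, s_o):
    """Gaussian thin-lens equation: image distance for an object at
    distance s_o from a thin lens of focal length f (metres)."""
    return 1.0 / (1.0 / f - 1.0 / s_o)

def focusing_offset(f, s_o, s_f):
    """Offset between the image plane of an object at s_o and the plane
    the system is focused on (object at s_f). Thin-lens illustration
    only, not the paper's Equation (6)."""
    return image_distance(f, s_o) - image_distance(f, s_f)

f = 0.050  # 50 mm prime, matching the lens examined in the paper
print(focusing_offset(f, 2.0, 2.0))  # focused on the object -> 0
print(focusing_offset(f, 1.0, 2.0))  # closer object -> positive offset
```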

B.3 Sensor Response Function
In an ideal linear sensor, the recorded pixel value ζ is proportional to the charge liberated at the sensor element until saturation, where ζ = 1:

ζ ∝ ∫ N_ν(λ) dλ,   N_ν(λ) = (A t λ / (h c)) χ(λ) M(λ),   (B.3)

where N_ν(λ) is the number of photons absorbed per unit wavelength, M(λ) is the incident spectral power distribution (SPD), A is the pixel area, t is the exposure time, h is Planck's constant, c is the speed of light, and χ(λ) is the quantum efficiency (QE) of the sensor. Keeping A and the light source identical, a linear response to exposure time can thus be expected for each colour channel, albeit with different gradients. In modern charge-coupled device (CCD) and complementary metal-oxide-semiconductor (CMOS) image sensors, this initially linear response is typically followed by a sharp plateau at the saturation point [2][3][4][5].
For monochromatic incident light of wavelength λ_0 and intensity I_0, Equation (B.3) reduces to

ζ ∝ (A χ(λ_0) λ_0 / (h c)) I_0 t,

which can be normalised as ζ = I_0 t / φ_0(λ_0), where φ_0(λ_0) is the wavelength-specific reference radiant exposure, defined as the minimum exposure necessary to achieve sensor saturation. As the coefficients are solely dependent on sensor properties, they can be encompassed in the sensor response function Z as introduced in Equation (5), yielding Equation (7) as presented.
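The resulting linear-then-plateau behaviour for a single channel can be sketched as follows; the value of phi0 here is an arbitrary illustrative choice, not a characterized sensor parameter.

```python
import numpy as np

def pixel_value(I0, t, phi0):
    """Recorded pixel value for a monochromatic radiant exposure I0 * t,
    given the channel's reference radiant exposure phi0: linear in
    exposure, then a hard plateau at saturation (zeta = 1)."""
    return np.minimum(I0 * t / phi0, 1.0)

# Linear below saturation, clipped above it
t = np.array([0.25, 0.5, 1.0, 2.0, 4.0])  # exposure times (s)
zeta = pixel_value(I0=1.0, t=t, phi0=2.0)  # illustrative phi0
```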
It is thus apparent that, unlike a complete polychromatic approach, the adopted trichromatic approach does not require knowledge of the complete SPD of the source or the QE of the imaging sensor. Instead, only the empirical determination of Z and φ_0 for each colour channel is required.

B.4 Pupil Edge-Detection
A computational edge-detection method was used to characterize the size and geometry of the entrance and exit pupils. Figure B.1 presents the edge-detected pupil profile for the examined lens system.
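A minimal stand-in for such an edge-detection step is sketched below, applied to a synthetic circular pupil; the paper's actual detector and pupil photographs are not reproduced here, and the gradient threshold is an assumed value.

```python
import numpy as np

def detect_edges(img, thresh=0.25):
    """Mark pixels where the finite-difference gradient magnitude
    exceeds a threshold -- a minimal stand-in for the edge detector
    used to trace the pupil profile."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

# Synthetic circular pupil image (the real input is a photograph of the
# lens pupil; the radius in pixels here is arbitrary).
n, r_true = 201, 60.0
y, x = np.mgrid[:n, :n] - n // 2
pupil_img = (np.hypot(x, y) <= r_true).astype(float)

edges = detect_edges(pupil_img)
ys, xs = np.nonzero(edges)
r_est = np.hypot(xs - n // 2, ys - n // 2).mean()  # mean edge radius (px)
```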