Optical brush: Imaging through permuted probes

The combination of computational techniques and ultrafast imaging has enabled sensing in unconventional settings, such as around corners and through diffusive media. We exploit time-of-flight (ToF) measurements to enable a flexible interface for imaging through a permuted set of fibers. The fibers are randomly distributed in the scene and packed on the camera end, forming a brush-like structure. The scene is illuminated by two off-axis optical pulses, and the temporal signatures of the fiber tips in the scene are used to localize each fiber. Finally, by combining the position and measured intensity of each fiber, the original input is reconstructed. Unlike conventional fiber bundles with a packed set of fibers, which are limited by a narrow field of view (FOV), lack of flexibility, and extended coaxial precalibration, the proposed optical brush is flexible and uses an off-axis calibration method based on ToF. The resulting brush form can couple to other types of ToF imaging systems. This can impact probe-based applications such as endoscopy, tomography, and industrial imaging and sensing.

Supplementary Note 1 - Matrix expression for an optical brush
Let I_out be the vectorized 1 × N measured matrix at the end of the brush, captured with the visible-range camera; the matrix expression for an optical brush is then given as

I_out = P D I_in,

where P is the permutation matrix, D is the distribution (convolution) matrix, and M is the size of the kernel. Here K is the number of fibers, which is also equal to the number of ones in the permutation matrix.
For matrix convolution, D is transformed into a Toeplitz matrix with N_XY rows, so that the convolution becomes a matrix product. The kernel matrix in our study is simply a single-entry identity matrix, since each fiber is coupled to a single pixel in the scene (Q_k = 1). I_out is the output of the fibers, vectorized based on their order in xy space.
By measuring the time of flight to localize each fiber in the XY space, we indirectly measure the permutation and distribution matrices. In other words, an entry of P is nonzero only if a pulse is measured at the times corresponding to x = i and y = j. Here p_i and d_i denote the entries of the P and D matrices. Additionally, the xy visible-range camera image measures the entry values of I_out for each fiber.
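The forward model above can be sketched numerically. This is a minimal illustration with our own variable names (not from the paper), assuming the simplest case described in the text: a single-entry identity kernel, so D reduces to the identity and the brush output is a pure permutation of the scene pixels that ToF localization lets us invert.

```python
import numpy as np

rng = np.random.default_rng(0)

N_xy = 16                       # number of scene pixels (vectorized scene length)

I_in = rng.random(N_xy)         # vectorized scene intensities

# Permutation matrix P: one 1 per row and column; the number of ones
# equals the number of fibers K (here K = N_xy, one fiber per pixel).
perm = rng.permutation(N_xy)
P = np.eye(N_xy)[perm]

# With a single-entry identity kernel (Q_k = 1), the Toeplitz
# distribution matrix D reduces to the identity.
D = np.eye(N_xy)

I_out = P @ D @ I_in            # shuffled intensities measured at the brush end

# ToF localization recovers P (assumed exact here), so the scene is
# reconstructed by applying the inverse (transpose) of the permutation.
I_rec = P.T @ I_out
assert np.allclose(I_rec, I_in)
```

Because P is orthogonal, its inverse is its transpose, which is why reconstruction is a single matrix multiplication once the fiber positions are known.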

Supplementary Note 2 - Ambiguity rate expression and performance interpretation
The ambiguity rate is based on the assumption that the X_k and Y_k parameters are independent and identically distributed (i.i.d.) normal variables. This is the fundamental ambiguity rate forced by the randomness of the positions. It is a notable rate because it predicts an upper bound on the performance (the lowest achievable error rate) of a brush system. To obtain this rate, let us start with two fibers. By the normal i.i.d. assumption and the independence of X_k and Y_k, supplementary equation (2) can be derived, since a linear combination of independent normal variables is itself a normal variable.
Here r_T is the time resolution of the camera multiplied by the speed of light, in meters (the depth resolution), t is an auxiliary integration variable, σ is the standard deviation of the fibers' normal distribution, and erf is the error function. Extending the same approach to all fibers with positive X_k and Y_k yields supplementary equation (3); the inequality in supplementary equation (3) is based on the symmetry of the normal distribution. An upper-bound estimate of the intrinsic ambiguity rate of the brush, p(m), is thus obtained by finding the average distance to the m neighboring fibers and the probability of that distance being smaller than the depth resolution of the camera; this is expressed in supplementary equation (3), where N is the total number of fibers. Supp. Fig. 1d shows the accuracy of this fit compared to Monte Carlo simulations as m varies. A larger inaccuracy is induced as m increases; however, supplementary equation (4) holds for m < 5, meaning the fit is a good indication of the ambiguity rate for up to five neighboring fibers.
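The two-fiber case can be checked numerically. Under the i.i.d. normal assumption, the difference of two fiber coordinates is itself normal with variance 2σ², so the probability that two fibers fall within one depth-resolution bin is erf(r_T / (2σ)). The sketch below (our own illustrative σ and r_T values, not from the paper) compares this closed form against a Monte Carlo estimate:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)
sigma, r_T = 1.0, 0.2      # illustrative std of positions and depth resolution
n = 1_000_000              # Monte Carlo samples

# Sample pairs of i.i.d. normal fiber coordinates and count how often
# they are closer than the depth resolution r_T.
x1 = rng.normal(0.0, sigma, n)
x2 = rng.normal(0.0, sigma, n)
mc = np.mean(np.abs(x1 - x2) < r_T)

# X1 - X2 ~ N(0, 2*sigma^2), hence P(|X1 - X2| < r_T) = erf(r_T / (2*sigma)).
analytic = erf(r_T / (2.0 * sigma))
print(f"Monte Carlo: {mc:.4f}, analytic: {analytic:.4f}")
```

The two estimates agree to within Monte Carlo noise, which is the same kind of cross-check the Supp. Fig. 1d comparison performs for the m-neighbor fit.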
If the positions were not independent and the X and Y positions had a joint Gaussian distribution, a multivariate Gaussian analysis would be required. Such an analysis can be cumbersome, since the covariance matrix of all the fiber locations would have to be found. A relaxing assumption such as i.i.d. variables is therefore very useful.

Supplementary Note 3 - Working distance of an optical brush
Since the optical brush is not a lens-based system, the definitions of an "in-focus image" and hence of a "working distance" are rather different. In lens-based systems, the sensor pixels are packed next to each other and the working distance is forced by the 3D point spread function (PSF) of the system. For an optical brush, not only is the PSF probabilistic, but the pixels are also randomly positioned with large gaps between them. The working distance can therefore be defined as the distance at which a certain percentage (a%) of the illumination power from the FoV of a certain percentage (b%) of the bristles leaks into the m-th adjacent fiber. In addition to these thresholds, the optical brush working distance is a function of the m, N, and σ parameters. Unlike a lens-based system, where the depth of field starts far from the lens plane, for an optical brush the depth of field can start right at the brush plane and continue up to the point where the shuffled output is considered out of focus based on the chosen a, b, and m parameters. For example, if a single point source is moved further and further from the brush plane, at some distance more than one fiber can see that point, and beyond a certain distance all the fibers partially receive some illumination from it. To eliminate this complexity in our demonstration, we used a synthetic 2D scene projected onto the brush plane, which mimics the case where a scattering scene is within the working distance and thus in focus.
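The point-source behavior described above can be illustrated with a toy geometric model. This is a hedged sketch under our own assumptions (fibers pointing perpendicular to the brush plane with a fixed acceptance half-angle; all numerical values are illustrative, not from the paper): as the source moves away, it enters the acceptance cone of more and more bristles, which is the sense in which the shuffled output gradually goes "out of focus".

```python
import numpy as np

rng = np.random.default_rng(2)

N, sigma = 100, 5e-3            # number of fibers, std of tip positions (m)
half_angle = np.deg2rad(10)     # assumed fiber acceptance half-angle

tips = rng.normal(0.0, sigma, (N, 2))   # fiber tip positions in the brush plane
source = np.array([0.0, 0.0])           # on-axis point source

fractions = []
for z in (1e-3, 1e-2, 1e-1):    # source distance from the brush plane (m)
    r = np.linalg.norm(tips - source, axis=1)
    # A fiber "sees" the source if the direction to the source lies
    # inside its acceptance cone, i.e. arctan(r / z) < half_angle.
    frac = float(np.mean(np.arctan2(r, z) < half_angle))
    fractions.append(frac)
    print(f"z = {z:5.3f} m: {100 * frac:5.1f}% of fibers see the source")
```

The fraction of bristles illuminated grows monotonically with distance, consistent with the qualitative description: near the brush plane at most one fiber sees the point, and far enough away essentially all of them do.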