Motion-compensation approach delivers sharper single-pixel imaging for dynamic scenes
Researchers have developed a motion-compensation method that allows single-pixel imaging to capture sharp images of complex dynamic scenes. The new approach could expand the practical utility of this computational imaging method by enabling clearer images of moving targets and improving the quality of surveillance images.
Single-pixel imaging uses a single detector, rather than the traditional array of pixels, to acquire images. Although it offers several advantages, such as high sensitivity and low cost, it can be slow, and moving scenes often lead to blurry or distorted images.
“The ability of our motion-compensated single-pixel imaging method to correct for motion and maintain image quality can significantly improve the clarity of real-time video feeds and reduce blur”, said research team leader Yuanjin Yu from Beijing Institute of Technology in China. “This makes it easier to identify objects or people in dark or obscured environments.”
In the journal Optics Express, the researchers describe their unique motion-compensation technique and show that it improved image quality and video smoothness in several different motion scenarios.
“This work represents the first motion-compensation framework specifically designed for complex scenes in single-pixel imaging, and it can be used for a broad range of real-world scenarios”, said Yu.
“This new method makes it possible to use single-pixel imaging for monitoring applications in challenging environments such as underwater scenes or through fog. It could also eventually enable more precise imaging in fields such as medical diagnostics and remote sensing.”

Merging motion solutions
Single-pixel imaging typically involves illuminating a scene with a sequence of light patterns—often created using a digital micromirror device (DMD). The corresponding intensity values are then measured with a single-pixel detector and used to computationally reconstruct an image.
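As a point of reference, the measurement-and-reconstruction cycle can be sketched in a few lines of Python. The Hadamard patterns, 32 x 32 resolution, and noiseless detector below are illustrative assumptions rather than details of the team's setup.

```python
# A minimal sketch of single-pixel imaging: structured patterns illuminate
# the scene, a single detector records one intensity per pattern, and the
# image is recovered computationally.
import numpy as np
from scipy.linalg import hadamard

N = 32                                   # assumed image side length
H = hadamard(N * N)                      # one +/-1 illumination pattern per row
scene = np.random.rand(N, N)             # stand-in for the unknown scene

# Forward model: the DMD displays each pattern in turn and the single-pixel
# detector records one total-intensity value per pattern. (In practice the
# +/-1 patterns would be realized with pairs of binary DMD patterns.)
measurements = H @ scene.ravel()

# Reconstruction: Hadamard patterns are orthogonal, so the image follows
# from a single rescaled transpose multiplication.
reconstruction = (H.T @ measurements / (N * N)).reshape(N, N)

print(np.allclose(reconstruction, scene))  # True in this noiseless sketch
```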
Most existing approaches to motion compensation in single-pixel imaging are not well suited to dynamic scenes, especially those with complex backgrounds and unknown moving objects. They typically work either by increasing the imaging frame rate, achieved by reducing the number of measurements per frame, or by compensating for scene motion through motion prediction.
The researchers took a different approach by combining both motion-compensation strategies. They first use sliding-window sampling, which moves a fixed-size “window” across the image to identify overlapping segments, thus increasing the frame rate. Then, they apply optical flow estimation to predict the pixel motion using two sets of measurements.
Finally, the high- and low-frequency measurements are temporally aligned within the sliding window, resulting in significantly reduced motion-induced artifacts.
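The interplay of the two strategies can be sketched roughly as follows. The window length, the Farneback optical-flow estimator, and the warping step are stand-ins chosen for illustration; the authors' alignment of high- and low-frequency measurements within each window is more involved.

```python
# Rough sketch: overlapping sliding windows raise the effective frame rate,
# and optical flow between coarse previews aligns content inside each window.
import numpy as np
import cv2

def sliding_windows(measurements, window, step):
    """Group a long 1-D measurement stream into overlapping windows;
    each window is later reconstructed into one video frame, so a small
    step size raises the effective frame rate."""
    return [measurements[i:i + window]
            for i in range(0, len(measurements) - window + 1, step)]

def align_previews(prev_preview, curr_preview):
    """Estimate per-pixel motion between two coarse (low-frequency)
    previews and warp the current preview back into alignment with the
    previous one. Previews are assumed to be 8-bit grayscale images."""
    flow = cv2.calcOpticalFlowFarneback(prev_preview, curr_preview, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_preview.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(curr_preview, map_x, map_y, cv2.INTER_LINEAR)

# Example: a stream of 4096 detector readings split into windows of 1024
# samples that advance by 256, yielding 13 overlapping frames.
stream = np.random.rand(4096)
frames = sliding_windows(stream, window=1024, step=256)
print(len(frames))  # 13
```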
“This work was possible thanks to recent innovations in optical flow models, particularly improvements in computational efficiency, robustness and prediction accuracy”, said Yu.
“Additionally, advancements in imaging hardware provide a solid foundation for our method’s effectiveness. For example, improved DMD technology and high-sensitivity single-pixel detectors significantly improve the signal-to-noise ratio of the measurements, enhancing the quality of the low-frequency single-pixel images used for motion estimation.”

Making movements look sharp
The researchers evaluated their method by simulating scenes with motion, such as a bus moving down a street, using high-frame-rate videos from the publicly available REDS dataset, a collection of real-world video sequences for training and benchmarking computer vision models.
They also carried out real-world imaging experiments with moving objects, such as an image of a small dog moving at different speeds against a black background.
The researchers found that the motion-compensation method significantly improved image quality and video smoothness across all the scenarios tested.
They do note, however, that due to the relatively low quality of the low-frequency images used for optical flow estimation, some edge artifacts, such as mild stretching, did occur in certain regions where the motion estimation was inaccurate.
They plan to build on this work by developing an end-to-end single-pixel motion imaging model that reduces redundant computations during motion compensation. This would enable faster imaging of dynamic scenes.