Apple is looking at ways of reducing how much processing an iPhone or other device has to do on the image sensor itself in order to produce high quality video, with the aim of saving both file size and battery power.
Currently, cameras in iPhones, iPads, and other devices have an image sensor that registers all the light it receives. That video data is then passed to a processor which creates the image. But that processing on the main device CPU can be intensive and make the device's battery run down quickly.
"Generating static images with an event camera," a newly revealed
">US patent application
that proposes only processing the parts of a video image that have changed since the previous frame.
"Traditional cameras use clocked image sensors to acquire visual information from a scene," says the application. "Each frame of data that is recorded is typically post-processed in some manner [and in] many traditional cameras, each frame of data carries information from all pixels."
"Carrying information from all pixels often leads to redundancy, and the redundancy typically increases as the amount of dynamic content in a scene increases," it continues. "As image sensors utilize higher spatial and/or temporal resolution, the amount of data included in a frame will most likely increase."