The invention of a flexible, transparent imager may lead to touch-free user interface devices and smaller, cheaper CT scanners.

Imagine if you could turn any object into a motion tracking device by simply wrapping a transparent interface around the object as if it were cellophane. It might sound crazy, but this is exactly what Austrian researchers have in mind for their new imaging device, which resembles flexible plastic film, according to a paper published in the Optical Society’s open-access journal Optics Express.

“To our knowledge, we are the first to present an image sensor that is fully transparent—no integrated microstructures, such as circuits—and is flexible and scalable at the same time,” said study author Oliver Bimber of the Johannes Kepler University Linz in Austria in a press release.

This novel image sensor not only flexes and bends, but responds to simple gestures rather than touch. According to the study, the device is based on a luminescent concentrator (LC), a polymer film that absorbs light and transports it to the film's edges by total internal reflection. Line scan cameras bordering the film measure this transported light, and from those measurements the researchers reconstruct the image focused onto the LC surface.

“Thus, [the] image sensor is fully transparent, flexible, scalable and, due to its low cost, potentially disposable,” the study authors wrote.

Study co-author Alexander Koppelhuber said that Bimber came up with the idea for a transparent image sensor more than two years ago. “The project then started off with my master’s thesis,” Koppelhuber said in an interview with Healthline. “It is now funded by Microsoft and will be continued for the next three years.”

Because the project is still in the basic research phase, Koppelhuber said it’s difficult to say when this technology will be available to the public. The team is in the process of improving the image sensor and has already overcome several major obstacles.

One technical challenge the team encountered was determining where light fell across the surface of the film. This proved difficult because the polymer sheet cannot be divided into individual pixels like the CCD camera inside a smartphone.

“Calculating where each bit of light entered the imager [was] like determining where along a subway line a passenger got on after the train reached its final destination and all the passengers exited at once,” the researchers said.

They solved this problem by measuring how much the light attenuates, or dims, as it travels through the polymer. By comparing the relative brightness of the light reaching the sensor array, they could calculate precisely where it entered the film.
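The idea can be sketched with a toy one-dimensional model (our own illustration, not the team's code): if light dims exponentially with a known attenuation coefficient as it travels through the film, then the ratio of the intensities arriving at the strip's two ends pins down the entry point.

```python
import math

def locate_entry(i_left, i_right, length, mu):
    """Recover where light entered a 1-D strip from the intensities
    measured at its two ends.

    Toy model: intensity decays as I = I0 * exp(-mu * d) over travel
    distance d. Taking the log of the left/right ratio cancels the
    unknown source intensity I0 and isolates the entry position.
    """
    log_ratio = math.log(i_left / i_right)  # equals -mu * (2x - length)
    return (length - log_ratio / mu) / 2

# Simulate light entering at x = 0.3 on a strip of length 1.0 (mu = 2.0).
mu, length, x_true = 2.0, 1.0, 0.3
i_left = math.exp(-mu * x_true)             # light reaching the left edge
i_right = math.exp(-mu * (length - x_true))  # light reaching the right edge
print(locate_entry(i_left, i_right, length, mu))  # 0.3
```

The key point the researchers exploit is that absolute brightness alone is ambiguous, but relative brightness at multiple edges is not.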

The team is currently working to improve the image sensor’s resolution by reconstructing multiple images at different positions on the film. “The more images we combine, the higher the final resolution is, up to a certain limit,” said Bimber.
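The intuition here resembles classic multi-frame super-resolution: several low-resolution captures taken at slightly shifted positions can be merged into one higher-resolution result. A minimal 1-D sketch of that principle (an assumption on our part, not the team's reconstruction method):

```python
def combine(shots):
    """Interleave k low-resolution shots, each offset by one sample,
    into a single signal with k times as many samples."""
    out = []
    for samples in zip(*shots):  # walk the shots in lockstep
        out.extend(samples)
    return out

# A "scene" sampled twice at half resolution, offset by one position:
signal = [3, 1, 4, 1, 5, 9, 2, 6]
shot_a = signal[0::2]  # even positions -> [3, 4, 5, 2]
shot_b = signal[1::2]  # odd positions  -> [1, 1, 9, 6]
print(combine([shot_a, shot_b]))  # [3, 1, 4, 1, 5, 9, 2, 6]
```

As Bimber notes, the gain saturates at some limit; in practice noise and blur cap how much detail extra captures can add.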

Koppelhuber and Bimber have a few ideas about where their technology may lead.

One possibility is to create a touch-free interface that captures and reconstructs the shadow of objects, such as a person’s hand. However, Koppelhuber said the interpretation of these shadow images presents a new challenge.

“For example, the image of the shadow of two extended fingers must be recognized and then associated with an action (e.g. ‘move canvas’),” he said. “If the shadow of the fingers gets bigger as you move your hand away from the image sensor this could be associated with an action ‘zoom out of the canvas’.”
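The mapping Koppelhuber describes, from a change in shadow size to a zoom action, could look something like this toy rule (a hypothetical sketch; the function name, threshold, and actions are our own, not the project's API):

```python
def interpret_shadow(prev_area, curr_area, threshold=0.1):
    """Map the relative change in a shadow's area to a zoom action.

    Per the article's example: the shadow grows as the hand moves away
    from the sensor, which is interpreted as zooming out of the canvas.
    """
    change = (curr_area - prev_area) / prev_area
    if change > threshold:
        return "zoom out"  # shadow grew: hand moved away
    if change < -threshold:
        return "zoom in"   # shadow shrank: hand moved closer
    return "no action"

print(interpret_shadow(100, 130))  # zoom out
print(interpret_shadow(100, 85))   # zoom in
```

The hard part, as Koppelhuber notes, is the step this sketch takes for granted: recognizing which gesture a raw shadow image actually shows.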

Koppelhuber and Bimber also speculate that this technology could provide high-dynamic-range or multi-spectral extensions for conventional cameras, perhaps by mounting a stack of LC layers on top of high-resolution CMOS or CCD sensors.

But the most promising potential applications may lie in the field of medical imaging.

“In CT technology, it’s impossible to reconstruct an image from a single measurement of X-ray attenuation along one scanning direction alone,” Bimber said. “With a multiple of these measurements taken at different positions and directions, however, this becomes possible. Our system works in the same way, but where CT uses X-rays, our technique uses visible light.”
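Bimber's CT analogy can be illustrated with a toy example (our own sketch, not the team's reconstruction code): projections along one direction cannot uniquely determine an image, but scanning along additional directions resolves the ambiguity.

```python
def row_sums(img):
    """Project the image along rows (one scanning direction)."""
    return [sum(row) for row in img]

def col_sums(img):
    """Project the image along columns (a second direction)."""
    return [sum(col) for col in zip(*img)]

def diag_sums(img):
    """Project the image along diagonals (a third direction)."""
    n = len(img)
    sums = [0] * (2 * n - 1)
    for i, row in enumerate(img):
        for j, v in enumerate(row):
            sums[i - j + n - 1] += v
    return sums

# Two different 2x2 "images" ...
a = [[1, 0],
     [0, 1]]
b = [[0, 1],
     [1, 0]]

# ... that single-direction scans cannot tell apart:
print(row_sums(a) == row_sums(b), col_sums(a) == col_sums(b))  # True True
# A scan from a third direction distinguishes them:
print(diag_sums(a) == diag_sums(b))  # False
```

Real CT (and the team's visible-light analogue) combines many such projections, taken at many positions and angles, to reconstruct the full image.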

Before Koppelhuber and his colleagues can begin working on this type of application, several technical hurdles must be overcome.

“At the moment we are working on the capability of real time image reconstruction,” he said. “Previously, the reconstruction of an image took several minutes. However, we were already able to reduce the time to less than a second.”