Image Credits: Texas A&M University

Deep Learning Algorithm Makes Cleaning Images Possible By Revealing Details Hidden in Noise

Researchers at Texas A&M University have come up with a new machine learning-based algorithm capable of reducing graininess in low-resolution images. Details of the study are published in ‘Nature Machine Intelligence’.

Video Credits: Department of Computer Science and Engineering, YouTube

This machine learning-based algorithm could prove very important, as it addresses a long-standing trade-off in imaging. Strong beams produce clear images but can damage the specimen, while weak beams yield images that are noisy and low in resolution.

The algorithm is called GVTNets (Global Voxel Transformer Networks). The technology is capable of revealing details that would otherwise stay buried in the noise.

Shuiwang Ji, an associate professor in the Department of Computer Science and Engineering, said, “Images taken with low-powered beams can be noisy, which can hide interesting and valuable visual details of biological specimens.”

He added, “To solve this problem, we use a pure computational approach to create higher-resolution images, and we have shown in this study that we can improve the resolution up to an extent very similar to what you might obtain using a high beam.”

In traditional deep learning-based image processing techniques, the number of pixels in the input image that contribute to a single pixel in the output image is fixed, determined by the number of layers and the connections between them.
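How this fixed contribution arises can be seen with a short sketch: for a stack of convolution layers, the receptive field follows directly from the kernel sizes and strides. The layer sizes below are illustrative only, not those of GVTNets.

```python
# Sketch: the receptive field of stacked convolutions is fixed by the
# architecture. Each layer is given as (kernel_size, stride); the values
# here are illustrative examples, not the GVTNets configuration.

def receptive_field(layers):
    """Return how many input pixels influence a single output pixel."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump   # each kernel widens the field by its reach
        jump *= stride              # stride spreads out where samples are taken
    return rf

# Three 3x3 convolutions with stride 1: each layer adds 2 pixels of context.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # -> 7
```

No matter how large the image is, an output pixel of this network can only ever "see" those 7 input pixels, which is the limitation Ji describes next.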


According to Ji, fixing the number of contributing input pixels, technically called the receptive field, limits the performance of the algorithm. He explains, “Imagine a piece of specimen having a repeating motif, like a honeycomb pattern. Most deep-learning algorithms only use local information to fill in the gaps in the image created by the noise.”

He added, “But this is inefficient because the algorithm is, in essence, blind to the repeating pattern within the image since the receptive field is fixed. Instead, deep-learning algorithms need to have adaptive receptive fields that can capture the information in the overall image structure.”
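To make the contrast concrete, here is a toy non-local (attention-style) filter in which every output pixel is a weighted average of all input pixels, weighted by feature similarity. This is only a sketch of the adaptive receptive field idea, not the GVTNets architecture; the temperature parameter `tau` is an assumption for illustration.

```python
import numpy as np

def global_attention_denoise(image, tau=1.0):
    """Toy non-local filter: each output pixel is a softmax-weighted
    average of ALL pixels, weighted by similarity of pixel values,
    giving an effectively global receptive field."""
    x = image.reshape(-1, 1).astype(float)            # (N, 1) pixel features
    sim = -((x - x.T) ** 2) / tau                     # similarity of every pair
    w = np.exp(sim - sim.max(axis=1, keepdims=True))  # stable softmax
    w /= w.sum(axis=1, keepdims=True)                 # rows sum to 1
    return (w @ x).reshape(image.shape)
```

Because the weights come from pixel similarity rather than spatial distance, repeated motifs anywhere in the image can reinforce one another, which is the behavior Ji describes.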

The researchers also found the new algorithm to be highly adaptable. Beyond denoising, it could easily be adapted to other applications, such as 3D-to-2D computer graphics conversions.

Other denoising algorithms currently available can only use information from a small patch of pixels within a low-resolution image. In contrast, the algorithm developed by this team can identify pixel patterns that may be spread across the entire noisy image, which makes it a more effective denoising tool.

Shuiwang Ji said, “Our research contributes to the emerging area of smart microscopy, where artificial intelligence is seamlessly integrated into the microscope.”

He also added, “Deep-learning algorithms such as ours will allow us to potentially transcend the physical limit posed by light that was not possible before. This can be extremely valuable for a myriad of applications, including clinical ones, like estimating the stage of cancer progression and distinguishing between cell types for disease prognosis.”