AI can read your mind! Researchers use AI to generate images based on people’s brain activity
The researchers claimed that this study was the first to offer a mathematical interpretation of a diffusion model from a biological perspective.

Would you believe that an AI could analyze your thoughts and render your mental images into reality? This may sound like a plot point from a cyberpunk novel, but according to a recent study, scientists have actually managed to do it.

According to a paper released in December, researchers found that they could reconstruct high-resolution, highly accurate images from brain activity using the well-known Stable Diffusion image-generation model.

In contrast to prior studies, the researchers said they did not need to train or fine-tune the AI models in order to generate these images.

The researchers, from the Graduate School of Frontier Biosciences at Osaka University, reported that they used fMRI signals to first estimate a latent representation, or model, of an image's contents.

After that, the diffusion process was used to add noise to that representation. To produce the final generated image, the scientists fed in text representations decoded from the fMRI signals in the higher visual cortex.
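The overall pipeline the article describes can be sketched in a few lines of code. This is only a toy illustration: the shapes, the random stand-in decoding weights, and the `toy_denoise_step` helper are all hypothetical, not the paper's actual decoders or a real U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: fMRI voxel count, image-latent dim, text-latent dim.
N_VOXELS, LATENT_DIM, TEXT_DIM = 500, 64, 32

# Step 1: linearly decode two latents from one fMRI scan — an image
# latent z (early visual cortex) and a semantic/text latent c (higher
# visual cortex). The weights here are random placeholders.
W_z = rng.normal(size=(N_VOXELS, LATENT_DIM)) / np.sqrt(N_VOXELS)
W_c = rng.normal(size=(N_VOXELS, TEXT_DIM)) / np.sqrt(N_VOXELS)

fmri = rng.normal(size=N_VOXELS)   # one scan's voxel responses
z = fmri @ W_z                     # decoded image latent
c = fmri @ W_c                     # decoded semantic latent

# Step 2: forward diffusion — add Gaussian noise to the decoded latent.
noise = rng.normal(size=LATENT_DIM)
z_noisy = np.sqrt(0.1) * z + np.sqrt(0.9) * noise

# Step 3: reverse diffusion — iteratively denoise, conditioned on c.
# A real model would call a text-conditioned U-Net; this toy step just
# nudges the sample back toward the decoded latent plus a small
# conditioning term.
def toy_denoise_step(x, z_hat, cond, step_size=0.05):
    guidance = 0.01 * cond.mean()  # stand-in for text conditioning
    return x + step_size * (z_hat - x) + guidance

x = z_noisy
for _ in range(50):
    x = toy_denoise_step(x, z, c)
```

After the loop, `x` has been pulled from the noisy sample back toward the decoded latent, mimicking how the conditioned reverse-diffusion process recovers an image from noise.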

A few earlier studies, the researchers noted, had achieved high-resolution image reconstructions, but only after training and fine-tuning generative models.

This imposed limitations, because training sophisticated models is complex and sampled data in neuroscience is scarce. No other researchers had attempted to use diffusion models for visual reconstruction prior to this study.

The researchers claimed that this study was the very first to offer a mathematical interpretation of a diffusion model from a biological perspective, shedding light on the inner workings of such models.

One of the diagrams the researchers produced, for example, illustrates the relationship between noise levels in the brain and external stimuli: as the level of stimulation rose, both the amount of noise and the image quality increased. The researchers also depict how the brain might denoise an image in order to reconstruct it by activating different neural networks.

“These results suggest that, at the beginning of the reverse diffusion process, image information is compressed within the bottleneck layer. As denoising progresses, a functional dissociation among U-Net layers emerges within the visual cortex: i.e., the first layer tends to represent fine-scale details in early visual areas, while the bottleneck layer corresponds to higher-order information in more ventral, semantic areas,” the researchers wrote.

Screenshot from research paper. Source: Twitter

As generative AI develops, researchers are exploring the interactions between AI models and the human brain. In a study published in January 2022, scientists at Radboud University in the Netherlands trained a generative AI network, a forerunner of Stable Diffusion, on fMRI data from 1,050 distinct faces in order to translate brain-imaging results into actual images.

According to that research, the AI was capable of reconstructing unique stimuli. The most recent research, published in December, indicates that high-resolution visual reconstruction has now become achievable with current diffusion models.