Researchers in Japan show a way to decode thoughts


“The team has created a first-of-its-kind algorithm that can interpret and accurately reproduce images seen or imagined by a person,” wrote Alexandru Micu in ZME Science.

Their paper, “Deep image reconstruction from human brain activity,” is on bioRxiv. The authors are Guohua Shen, Tomoyasu Horikawa, Kei Majima, and Yukiyasu Kamitani.

Vanessa Ramirez, associate editor of Singularity Hub, was one of several writers on tech-watching sites who reported on the study. The writers noted that this approach differs from earlier research that reconstructed images from pixels and basic shapes.

“Trying to tame a computer to decode mental images isn’t a new idea,” said Micu. “However, all previous systems have been limited in scope and ability. Some can only handle narrow domains like facial shape, while others can only rebuild images from preprogrammed images or categories.”

What is special here, Micu said, is that “their new algorithm can generate new, recognizable images from scratch.”

The study team has been exploring deep image reconstruction. Micu quoted the senior author of the study. “We believe that a deep neural network is a good proxy for the brain’s hierarchical processing,” said Yukiyasu Kamitani.

An overview of deep image reconstruction is shown. The pixel values of the input image are optimized so that the DNN features of the image are similar to those decoded from fMRI activity. A deep generator network (DGN) is optionally combined with the DNN to produce natural-looking images; in that case, optimization is performed in the input space of the DGN. Credit: bioRxiv (2017). DOI: 10.1101/240317


The paper’s abstract reads:

Machine learning-based analysis of human functional magnetic resonance imaging (fMRI) patterns has enabled the visualization of perceptual content. However, it has been limited to the reconstruction with low-level image bases (Miyawaki et al., 2008; Wen et al., 2016) or to the matching to exemplars (Naselaris et al., 2009; Nishimoto et al., 2011). Recent work showed that visual cortical activity can be decoded (translated) into hierarchical features of a deep neural network (DNN) for the same input image, providing a way to make use of the information from hierarchical visual features (Horikawa & Kamitani, 2017). Here, we present a novel image reconstruction method, in which the pixel values of an image are optimized to make its DNN features similar to those decoded from human brain activity at multiple layers. We found that the generated images resembled the stimulus images (both natural images and artificial shapes) and the subjective visual content during imagery. While our model was solely trained with natural images, our method successfully generalized the reconstruction to artificial shapes, indicating that our model indeed reconstructs or generates images from brain activity, not simply matches to exemplars. A natural image prior introduced by another deep neural network effectively rendered semantically meaningful details to reconstructions by constraining reconstructed images to be similar to natural images. Furthermore, human judgment of reconstructions suggests the effectiveness of combining multiple DNN layers to enhance visual quality of generated images. The results suggest that hierarchical visual information in the brain can be effectively combined to reconstruct perceptual and subjective images.
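The central idea in the abstract — optimizing an image’s pixels until its features match those decoded from brain activity — can be sketched as a toy gradient descent. This is only an illustration, not the authors’ actual model: a fixed random linear map stands in for the DNN feature extractor, and a random vector stands in for the features decoded from fMRI.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "DNN": a fixed random linear feature extractor (assumption for illustration).
n_pixels, n_features = 64, 16
W = rng.normal(size=(n_features, n_pixels))

def features(img):
    return W @ img

# Stand-in for the feature vector decoded from fMRI activity (assumption).
target_features = rng.normal(size=n_features)

# Reconstruction: adjust pixel values so features(img) approaches target_features,
# minimizing the squared feature-space error by gradient descent.
img = np.zeros(n_pixels)
lr = 0.005
for _ in range(500):
    err = features(img) - target_features   # error in feature space
    grad = W.T @ err                        # gradient of 0.5*||err||^2 w.r.t. pixels
    img -= lr * grad

final_loss = 0.5 * np.sum((features(img) - target_features) ** 2)
print(final_loss)
```

In the real method the feature extractor is a deep network, the targets come from decoded brain activity at multiple layers, and a deep generator network can constrain the result toward natural-looking images; the optimization loop, however, has this same shape.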

We neither begin nor end at the borders of our skin…

The brain has a weak electrical field around it that transmits information:

“Researchers in the US have recorded neural spikes travelling too slowly in the brain to be explained by conventional signalling mechanisms. In the absence of other plausible explanations, the scientists believe these brain waves are being transmitted by a weak electrical field, and they’ve been able to detect one of these in mice.

“Researchers have thought that the brain’s endogenous electrical fields are too weak to propagate wave transmission,” said Dominique Durand, a biomedical engineer at Case Western Reserve University. “But it appears the brain may be using the fields to communicate without synaptic transmissions, gap junctions or diffusion.”

Running computer simulations to model their hypothesis, the researchers found that electrical fields can mediate propagation across layers of neurons. While the field is of low amplitude (approximately 2–6 mV/mm), it’s able to excite and activate immediate neighbours, which subsequently activate more neurons, travelling across the brain at about 10 centimetres per second.
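The ~10 cm/s figure follows from neighbour-to-neighbour relay: if each group of neurons activates the next after a short delay, the propagation speed is just spacing divided by delay. The spacing and delay below are assumed values chosen only to reproduce that order of magnitude, not parameters from the study.

```python
# Toy relay model of slow activity propagation (assumed numbers:
# neuron groups 100 micrometres apart, 1 ms to activate the next group).
spacing_mm = 0.1   # distance between adjacent neuron groups (mm)
delay_ms = 1.0     # delay for one group to activate its neighbour (ms)

n_groups = 50
# Group i becomes active at time i * delay_ms.
activation_time_ms = [i * delay_ms for i in range(n_groups)]

total_distance_cm = (n_groups - 1) * spacing_mm / 10.0   # mm -> cm
total_time_s = activation_time_ms[-1] / 1000.0           # ms -> s
speed_cm_per_s = total_distance_cm / total_time_s
print(round(speed_cm_per_s, 6))  # → 10.0
```

At 0.1 mm per millisecond, the wave covers 10 cm per second — far slower than axonal conduction, which is what made conventional signalling mechanisms an implausible explanation.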

Welcome to my world 😉 Science cannot even pinpoint “thinking” to the brain. I wrote this article in 2012.

Posted by How to make a spacecraft / Uzay gemisi nasıl yapılır on Friday, January 15, 2016


(It is also available among my writings here.)