Article
Multidisciplinary Sciences
Akshay V. Jagadeesh, Justin L. Gardner
Summary: The human category-selective visual cortex provides a set of texture-like features that can be flexibly reconfigured to learn and identify new object categories. These representations do not explicitly encode objects; rather, they capture complex visual features that support object perception.
PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA
(2022)
Article
Neurosciences
Sarah K. Wandelt, Spencer Kellis, David A. Bjanes, Kelsie Pejsa, Brian Lee, Charles Liu, Richard A. Andersen
Summary: This study found that neural signals from high-level areas of the human cortex can be used for grasping and speech brain-machine interface applications. The supramarginal gyrus and ventral premotor cortex in the cortical grasp network can encode the planning and execution of grasps, as well as process aspects of spoken and written language.
Article
Behavioral Sciences
Pietro Caggiano, Giordana Grossi, Lucilla C. De Mattia, José van Velzen, Gianna Cocchini
Summary: Recent findings suggest an action-specific link between objects and body parts, indicating that the mental representation of an object contains crucial information about relevant motor interactions. This study investigated the relationship between objects and body parts through two experiments, showing that recognition of specific body parts can be facilitated by functionally related objects, and that this functional relationship modulates brain responses.
Article
Computer Science, Interdisciplinary Applications
Zhongyu Huang, Changde Du, Yingheng Wang, Kaicheng Fu, Huiguang He
Summary: Brain signal-based emotion recognition has gained attention for its potential in human-computer interaction. Researchers have attempted to decode human emotions from brain imaging data using emotion and brain representations. However, the relationship between emotions and brain regions is not explicitly incorporated into representation learning, leading to insufficiently informative representations for specific tasks such as emotion decoding. This work proposes a graph-enhanced emotion neural decoding approach that integrates emotion-brain-region relationships into that process, demonstrating its effectiveness and superiority through comprehensive experiments on visually evoked emotion datasets.
IEEE TRANSACTIONS ON MEDICAL IMAGING
(2023)
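The graph-enhanced idea in the record above can be illustrated with a minimal, hypothetical sketch (not the paper's model): brain-region features are propagated over a region-relation graph via one step of normalized-adjacency message passing before any decoder is applied. The graph, region count, and feature dimensions below are illustrative assumptions.

```python
import numpy as np

def graph_smooth(features, adjacency):
    """One round of feature propagation over the region graph."""
    A = adjacency + np.eye(adjacency.shape[0])   # add self-loops
    deg = A.sum(axis=1)
    A_norm = A / np.sqrt(np.outer(deg, deg))     # symmetric normalization
    return A_norm @ features

rng = np.random.default_rng(3)
n_regions, n_dims = 6, 8

# Random symmetric region graph with no self-loops (stand-in for a real
# emotion/brain-region relation graph).
adj = (rng.uniform(size=(n_regions, n_regions)) > 0.5).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T

region_feats = rng.normal(size=(n_regions, n_dims))
smoothed = graph_smooth(region_feats, adj)       # same shape, graph-mixed
```

Each region's smoothed feature vector is now a degree-weighted mixture of its neighbours' features, which is the basic mechanism graph-enhanced decoders use to inject relational structure.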
Article
Computer Science, Artificial Intelligence
Naoko Koide-Majima, Shinji Nishimoto, Kei Majima
Summary: Visual images observed by humans can be reconstructed from brain activity, and an improved method now enables the visualization of arbitrary natural images from mental imagery. This study provides a unique tool for directly investigating the subjective contents of the brain.
Article
Neurosciences
Ke Bo, Lihan Cui, Siyang Yin, Zhenhong Hu, Xiangfei Hong, Sungkean Kim, Andreas Keil, Mingzhou Ding
Summary: This study investigates the temporal dynamics of affective scene processing in the brain using simultaneous EEG-fMRI recordings. The results show that perceptual processing of complex scenes begins in early visual cortex within 80 ms, followed by the ventral visual cortex at 100 ms. Affect-specific neural representations start to form between 200 and 300 ms, supported mainly by occipital and temporal cortices. These representations are stable and last up to 2 seconds, indicating the involvement of distributed brain areas in sustaining affective scene processing.
Article
Neurosciences
Diego Vidaurre, Radoslaw M. Cichy, Mark W. Woolrich
Summary: Decoding methods often conflate multiple, distributed neural processes, making it unclear which specific aspects of the neural computations involved in perception are reflected in the data. By decomposing MEG data, researchers identified at least three dissociable stimulus-specific aspects in brain data: a slow, non-oscillatory component, a global phase shift of the oscillation, and differential patterns of phase across channels. Recognizing the multicomponent nature of the signal is important for the cognitive interpretation of decoding analyses in the study of perception.
Article
Neurosciences
Gabrielle Aude Zbaren, Sarah Nadine Meissner, Manu Kapur, Nicole Wenderoth
Summary: This study used fMRI to reveal that humans engage in visual imagery when making physical predictions, and that the frontoparietal areas of the brain are involved in this process.
HUMAN BRAIN MAPPING
(2023)
Article
Neurosciences
Zahraa Sabra, Ali Alawieh, Leonardo Bonilha, Thomas Naselaris, Nicholas AuYong
Summary: This study investigated the regional and cross-regional cortical activity underlying the cognition of visual narrative using intracranial stereoelectroencephalography (sEEG) recordings. The results showed that the frontal and temporal lobes encode the difference between visual narratives and random image sequences. Additionally, the frontal lobe is more engaged when contextually novel stimuli are presented.
FRONTIERS IN HUMAN NEUROSCIENCE
(2022)
Article
Robotics
Shuaifeng Zhi, Edgar Sucar, Andre Mouton, Iain Haughton, Tristan Laidlow, Andrew J. Davison
Summary: A neural field trained with self-supervision efficiently represents the geometry and colour of a 3D scene and automatically decomposes it into coherent and accurate object-like regions. By using sparse labelling interactions, a 3D semantic scene segmentation can be produced. Our real-time iLabel system, which takes input from a hand-held RGB-D camera, requires no prior training data and works in an 'open-set' manner, allowing users to define semantic classes on the fly. The underlying model of iLabel is a simple multilayer perceptron (MLP), trained from scratch to learn a neural representation of a single 3D scene.
IEEE ROBOTICS AND AUTOMATION LETTERS
(2023)
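The MLP neural field described in the record above can be sketched minimally (this is not the iLabel implementation): a small fully connected network maps a 3D coordinate to a colour and per-class semantic logits. Layer sizes and the class count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(sizes):
    """Random weights and zero biases for a fully connected network."""
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, xyz):
    """Map N x 3 coordinates to (N x 3 colour, N x C class logits)."""
    h = xyz
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)           # ReLU hidden layers
    W, b = params[-1]
    out = h @ W + b
    return out[:, :3], out[:, 3:]                # split colour / semantics

n_classes = 4                                    # user-defined, "open set"
mlp = init_mlp([3, 64, 64, 3 + n_classes])
points = rng.uniform(-1, 1, (5, 3))              # five query points in the scene
colour, logits = forward(mlp, points)
```

In the real system this network would be trained from scratch against RGB-D observations and a handful of user clicks; the point of the sketch is only that a single coordinate-in, attributes-out MLP can carry both appearance and semantics.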
Article
Computer Science, Artificial Intelligence
Simone Palazzo, Concetto Spampinato, Isaak Kavasidis, Daniela Giordano, Joseph Schmidt, Mubarak Shah
Summary: This work introduces a novel method for replicating human visual representations in machines, learning computationally and biologically plausible representations by correlating human neural activity with natural images. Experimental results show that the proposed approach successfully decodes visual information from neural signals and improves the performance of deep models.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
(2021)
Article
Computer Science, Artificial Intelligence
Rami Alazrai, Motaz Abuhijleh, Mostafa Z. Ali, Mohammad I. Daoud
Summary: This paper presents a two-phase approach for decoding visually imagined digits and letters using EEG signals. The first phase constructs a joint time, frequency, and spatial representation of the EEG signals. The second phase utilizes a deep learning framework to automatically extract features and decode the imagined digits and letters. The proposed approach outperforms alternative techniques and achieves high accuracy.
EXPERT SYSTEMS WITH APPLICATIONS
(2022)
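The first phase of the two-phase approach above can be sketched as follows. This is a hedged illustration only: it builds a joint time-frequency-spatial tensor from multichannel EEG via a short-time Fourier transform, with window length, hop size, and channel count chosen for illustration rather than taken from the paper.

```python
import numpy as np

def stft_power(signal, win=64, hop=32):
    """Power spectrogram of one channel: (n_windows, win // 2 + 1)."""
    frames = [signal[s:s + win] * np.hanning(win)
              for s in range(0, len(signal) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1)) ** 2

rng = np.random.default_rng(1)
eeg = rng.normal(size=(8, 512))                    # 8 channels x 512 samples

# Stack per-channel spectrograms: channels (space) x windows (time) x bins (freq)
tensor = np.stack([stft_power(ch) for ch in eeg])
```

A tensor of this shape is the kind of input the paper's second phase would hand to a deep network for automatic feature extraction.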
Article
Audiology & Speech-Language Pathology
Min Xu, Duo Li, Ping Li
Summary: Cross-language brain decoding applies models trained on one language to decode stimuli in another, providing insights into how the brain represents multiple languages. While the overall success of this approach remains to be tested, it is expected to continue progressing in the domain of language processing.
BRAIN AND LANGUAGE
(2021)
Article
Neurosciences
Yaoda Xu, Maryam Vaziri-Pashkam
Summary: This study examined the coding strength of object identity and four types of nonidentity features along the human ventral visual processing pathway and compared brain responses with those of 14 convolutional neural networks (CNNs) pretrained to perform object categorization. Overall, identity representation increased and nonidentity feature representation decreased along the ventral visual pathway, with some notable differences among the different nonidentity features. CNNs differed from the brain in a number of aspects in their representations of identity and nonidentity features over the course of visual processing. Our approach provides a new tool for characterizing feature coding in the human brain and the correspondence between the brain and CNNs.
JOURNAL OF NEUROSCIENCE
(2021)
Article
Engineering, Biomedical
Miyoung Chung, Taehyung Kim, Eunju Jeong, Chun Kee Chung, June Sic Kim, Oh-Sang Kwon, Sung-Phil Kim
Summary: This study evaluated the feasibility of decoding pitch imagery directly from human EEG and achieved the best classification performance for seven pitches using a support vector machine. It demonstrated for the first time the potential of decoding imagined musical pitch from human EEG.
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING
(2023)
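The classifier family named in the record above can be illustrated with a small, hypothetical sketch: a multiclass SVM over feature vectors for seven pitch classes. The synthetic Gaussian features below merely stand in for real EEG-derived features; none of the values come from the study.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_per_class, n_features, n_pitches = 40, 16, 7

# Synthetic stand-in features: one Gaussian cluster per pitch class.
X = np.concatenate([rng.normal(loc=k, scale=2.0, size=(n_per_class, n_features))
                    for k in range(n_pitches)])
y = np.repeat(np.arange(n_pitches), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Standardize features, then fit an RBF-kernel SVM (one-vs-one multiclass).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

With real data, the features would come from the EEG preprocessing pipeline and the accuracy would be estimated with cross-validation rather than a single split.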