Publications of year 2022
-
Michèle Mazeau,
Ghislaine Dehaene-Lambertz,
Hervé Glasel,
and Caroline Huron.
Les Troubles dys avant 7 ans: Les clés pour dépister et assurer le suivi en médecine de ville [Dys disorders before age 7: keys to screening and follow-up in community practice].
Elsevier Health Sciences,
2022.
[bibtex-entry]
-
Tiffany Bounmy.
Neural coding of uncertainty during statistical learning in humans: a study using ultra-high field functional magnetic resonance imaging.
PhD Thesis,
Paris Descartes,
2022.
[bibtex-entry]
-
Lorenzo Ciccione.
Les bases cognitives et neurales de la perception et de la compréhension des graphes [The cognitive and neural bases of graph perception and comprehension].
PhD Thesis,
PSL,
2022.
[bibtex-entry]
-
Théo Desbordes.
A search for the neural bases of compositionality.
PhD Thesis,
Sorbonne,
2022.
[bibtex-entry]
-
Shrutiben Naik.
Growing a noisy brain: development of structured variability in neural responses and its implications.
PhD Thesis,
UPMC,
2022.
[bibtex-entry]
-
Mathias Sablé-Meyer.
Human cognition of geometric shapes, a window into the mental representation of abstract concepts.
PhD Thesis,
PSL,
2022.
[bibtex-entry]
-
Christos Zacharopoulos.
On the dissociation of structural and linear operations in sentence processing.
PhD Thesis,
Sorbonne Université,
2022.
[bibtex-entry]
-
Joachim Bellet,
Marion Gay,
Abhilash Dwarakanath,
Bechir Jarraya,
Timo van Kerkoerle,
Stanislas Dehaene,
and Theofanis I Panagiotaropoulos.
Decoding rapidly presented visual stimuli from prefrontal ensembles without report nor post-perceptual processing.
Neuroscience of Consciousness,
2022(1):niac005,
2022.
[WWW] [bibtex-entry]
-
Elisa Castaldi,
Marco Turi,
Guido Marco Cicchini,
Sahawanatou Gassama,
and Evelyn Eger.
Reduced 2D form coherence and 3D structure from motion sensitivity in developmental dyscalculia.
Neuropsychologia,
166:108140,
2022.
[WWW] [bibtex-entry]
-
Maximilien Chaumon,
Pier-Alexandre Rioux,
Sophie K Herbst,
Ignacio Spiousas,
Sebastian L Kübel,
Elisa M Gallego Hiroyasu,
Serife Leman Runyun,
Luigi Micillo,
Vassilis Thanopoulos,
Esteban Mendoza-Duran,
and others.
The Blursday database as a resource to study subjective temporalities during COVID-19.
Nature Human Behaviour,
pages 1--13,
2022.
[WWW] [bibtex-entry]
-
Lorenzo Ciccione,
Mathias Sablé-Meyer,
Esther Boissin,
Mathilde Josserand,
Cassandra Potier-Watkins,
Serge Caparos,
and Stanislas Dehaene.
Graphicacy across age, education, and culture: a new tool to assess intuitive graphics skills.
bioRxiv,
2022.
[bibtex-entry]
-
Lorenzo Ciccione,
Mathias Sablé-Meyer,
and Stanislas Dehaene.
Analyzing the misperception of exponential growth in graphs.
Cognition,
225,
2022.
[WWW] [bibtex-entry]
-
Stanislas Dehaene,
Fosca Al Roumi,
Yair Lakretz,
Samuel Planton,
and Mathias Sablé-Meyer.
Symbols and mental programs: a hypothesis about human singularity.
Trends in Cognitive Sciences,
2022.
[PDF] [bibtex-entry]
-
Dror Dotan and Stanislas Dehaene.
Tracking priors and their replacement: Mental dynamics of decision making in the number-line task.
Cognition,
224:105069,
2022.
[WWW] [bibtex-entry]
-
Donald Dunagan,
Shulin Zhang,
Jixing Li,
Shohini Bhattasali,
Christophe Pallier,
John Whitman,
Yiming Yang,
and John Hale.
Neural correlates of semantic number: A cross-linguistic investigation.
Brain and Language,
229:105110,
June 2022.
[PDF]
Abstract:
One aspect of natural language comprehension is understanding how many of what or whom a speaker is referring to. While previous work has documented the neural correlates of number comprehension and quantity comparison, this study investigates semantic number from a cross-linguistic perspective with the goal of identifying cortical regions involved in distinguishing plural from singular nouns. Three fMRI datasets are used in which Chinese, French, and English native speakers listen to an audiobook of a children's story in their native language. These languages are selected because they differ in their number semantics. Across these languages, several well-known language regions manifest a contrast between plural and singular, including the pars orbitalis, pars triangularis, posterior temporal lobe, and dorsomedial prefrontal cortex. This is consistent with a common brain network supporting comprehension across languages with overt as well as covert number-marking.
[bibtex-entry]
-
Xiaoxia Feng,
Karla Monzalvo,
Stanislas Dehaene,
and Ghislaine Dehaene-Lambertz.
Evolution of reading and face circuits during the first three years of reading acquisition.
NeuroImage,
259:119394,
2022.
[WWW] [bibtex-entry]
-
Ana Fló,
Lucas Benjamin,
Marie Palu,
and Ghislaine Dehaene-Lambertz.
Sleeping neonates track transitional probabilities in speech but only retain the first syllable of words.
Scientific Reports,
12(1):1--13,
2022.
[WWW] [bibtex-entry]
-
Ana Fló,
Giulia Gennari,
Lucas Benjamin,
and Ghislaine Dehaene-Lambertz.
Automated Pipeline for Infants Continuous EEG (APICE): A flexible pipeline for developmental cognitive studies.
Developmental Cognitive Neuroscience,
54:101077,
2022.
[WWW] [bibtex-entry]
-
Sophie K Herbst,
Jonas Obleser,
and Virginie van Wassenhove.
Implicit versus explicit timing--separate or shared mechanisms?
Journal of Cognitive Neuroscience,
34(8):1447--1466,
2022.
[bibtex-entry]
-
Sophie K Herbst,
Gabor Stefanics,
and Jonas Obleser.
Endogenous modulation of delta phase by expectation--A replication of Stefanics et al., 2010.
Cortex,
2022.
[bibtex-entry]
-
Vishal Kapoor,
Abhilash Dwarakanath,
Shervin Safavi,
Joachim Werner,
Michel Besserve,
Theofanis I Panagiotaropoulos,
and Nikos K Logothetis.
Decoding internally generated transitions of conscious contents in the prefrontal cortex without subjective reports.
Nature Communications,
13(1):1--16,
2022.
[WWW] [bibtex-entry]
-
Ulysse Klatzmann,
Sean Froudist-Walsh,
Daniel P Bliss,
Panagiota Theodoni,
Jorge Mejías,
Meiqi Niu,
Lucija Rapan,
Nicola Palomero-Gallagher,
Claire Sergent,
Stanislas Dehaene,
and others.
A connectome-based model of conscious access in monkey cortex.
bioRxiv,
2022.
[WWW] [bibtex-entry]
-
Tadeusz Wladyslaw Kononowicz,
Virginie van Wassenhove,
and Valérie Doyère.
Rodents monitor their error in self-generated duration on a single trial basis.
Proceedings of the National Academy of Sciences,
119(9),
2022.
[WWW] [bibtex-entry]
-
Maxime Maheu,
Florent Meyniel,
and Stanislas Dehaene.
Rational arbitration between statistics and rules in human sequence processing.
Nature Human Behaviour,
2022.
[WWW] [PDF] [bibtex-entry]
-
Juliette Millet,
Charlotte Caucheteux,
Pierre Orhan,
Yves Boubenec,
Alexandre Gramfort,
Ewan Dunbar,
Christophe Pallier,
and Jean-Remi King.
Toward a realistic model of speech processing in the brain with self-supervised learning.
NeurIPS,
2022.
[WWW] [PDF]
Abstract:
Several deep neural networks have recently been shown to generate activations similar to those of the brain in response to the same input. These algorithms, however, remain largely implausible: they require (1) extraordinarily large amounts of data, (2) unobtainable supervised labels, (3) textual rather than raw sensory input, and/or (4) implausibly large memory (e.g. thousands of contextual words). These elements highlight the need to identify algorithms that, under these limitations, would suffice to account for both behavioral and brain responses. Focusing on the issue of speech processing, we here hypothesize that self-supervised algorithms trained on the raw waveform constitute a promising candidate. Specifically, we compare a recent self-supervised architecture, Wav2Vec 2.0, to the brain activity of 412 English, French, and Mandarin individuals recorded with functional Magnetic Resonance Imaging (fMRI), while they listened to ~1h of audio books. Our results are four-fold. First, we show that this algorithm learns brain-like representations with as little as 600 hours of unlabelled speech - a quantity comparable to what infants can be exposed to during language acquisition. Second, its functional hierarchy aligns with the cortical hierarchy of speech processing. Third, different training regimes reveal a functional specialization akin to the cortex: Wav2Vec 2.0 learns sound-generic, speech-specific and language-specific representations similar to those of the prefrontal and temporal cortices. Fourth, we confirm the similarity of this specialization with the behavior of 386 additional participants. These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain.
[bibtex-entry]
-
Jacques Pesnot Lerousseau,
Cesare V Parise,
Marc O Ernst,
and Virginie van Wassenhove.
Multisensory correlation computations in the human brain identified by a time-resolved encoding model.
Nature Communications,
13(1):1--12,
2022.
[WWW] [bibtex-entry]
-
Samuel Planton,
Fosca Al Roumi,
Liping Wang,
and Stanislas Dehaene.
Compression of binary sound sequences in human working memory.
bioRxiv,
2022.
[WWW] [bibtex-entry]
-
Mathias Sablé-Meyer,
Kevin Ellis,
Josh Tenenbaum,
and Stanislas Dehaene.
A language of thought for the mental representation of geometric shapes.
Cognitive Psychology,
139:101527,
2022.
[WWW] [bibtex-entry]
-
Jordy Tasserie,
Lynn Uhrig,
Jacobo D Sitt,
Dragana Manasova,
Morgan Dupont,
Stanislas Dehaene,
and Béchir Jarraya.
Deep brain stimulation of the thalamus restores signatures of consciousness in a nonhuman primate model.
Science Advances,
8(11):eabl5547,
2022.
[WWW] [bibtex-entry]
-
Jordy Tasserie,
Lynn Uhrig,
Jacobo Sitt,
Dragana Manasova,
Morgan Dupont,
Stanislas Dehaene,
and Bechir Jarraya.
Restaurer la conscience grâce à la stimulation cérébrale profonde [Restoring consciousness through deep brain stimulation].
Société des Neurosciences,
63(1),
2022.
[WWW] [bibtex-entry]
-
Alexis Thual,
Quang Huy Tran,
Tatiana Zemskova,
Nicolas Courty,
Rémi Flamary,
Stanislas Dehaene,
and Bertrand Thirion.
Aligning individual brains with fused unbalanced Gromov Wasserstein.
Advances in Neural Information Processing Systems,
35:21792--21804,
2022.
[WWW] [bibtex-entry]
-
Oscar Woolnough,
Cristian Donos,
Aidan Curtis,
Patrick S Rollo,
Zachary J Roccaforte,
Stanislas Dehaene,
Simon Fischer-Baum,
and Nitin Tandon.
A spatiotemporal map of reading aloud.
Journal of Neuroscience,
2022.
[WWW] [bibtex-entry]
-
Yang Xie,
Peiyao Hu,
Junru Li,
Jingwen Chen,
Weibin Song,
Xiao-Jing Wang,
Tianming Yang,
Stanislas Dehaene,
Shiming Tang,
Bin Min,
and others.
Geometry of sequence working memory in macaque prefrontal cortex.
Science,
375(6581):632--639,
2022.
[WWW] [bibtex-entry]
-
Christos-Nikolaos Zacharopoulos,
Stanislas Dehaene,
and Yair Lakretz.
Disentangling Hierarchical and Sequential Computations during Sentence Processing.
bioRxiv,
2022.
[WWW] [bibtex-entry]
-
He Zhang,
Yanfen Zhen,
Shijing Yu,
Tenghai Long,
Bingqian Zhang,
Xinjian Jiang,
Junru Li,
Wen Fang,
Mariano Sigman,
Stanislas Dehaene,
and others.
Working memory for spatial sequences: Developmental and evolutionary factors in encoding ordinal and relational structures.
Journal of Neuroscience,
42(5):850--864,
2022.
[WWW] [bibtex-entry]
-
Yair Lakretz,
Théo Desbordes,
Dieuwke Hupkes,
and Stanislas Dehaene.
Can Transformers Process Recursive Nested Constructions, Like Humans?
In Proceedings of the 29th International Conference on Computational Linguistics,
pages 3226--3232,
2022.
[WWW] [bibtex-entry]
-
Alexandre Pasquiou,
Yair Lakretz,
John T. Hale,
Bertrand Thirion,
and Christophe Pallier.
Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps.
In Proceedings of the 39th International Conference on Machine Learning,
pages 17499--17516,
June 2022.
PMLR.
[WWW] [PDF]
Abstract:
Neural Language Models (NLMs) have made tremendous advances during the last years, achieving impressive performance on various linguistic tasks. Capitalizing on this, studies in neuroscience have started to use NLMs to study neural activity in the human brain during language processing. However, many questions remain unanswered regarding which factors determine the ability of a neural language model to capture brain activity (aka its 'brain score'). Here, we make first steps in this direction and examine the impact of test loss, training corpus and model architecture (comparing GloVe, LSTM, GPT-2 and BERT), on the prediction of functional Magnetic Resonance Imaging time-courses of participants listening to an audiobook. We find that (1) untrained versions of each model already explain significant amount of signal in the brain by capturing similarity in brain responses across identical words, with the untrained LSTM outperforming the transformer-based models, being less impacted by the effect of context; (2) that training NLP models improves brain scores in the same brain regions irrespective of the model's architecture; (3) that Perplexity (test loss) is not a good predictor of brain score; (4) that training data have a strong influence on the outcome and, notably, that off-the-shelf models may lack statistical power to detect brain activations. Overall, we outline the impact of model-training choices, and suggest good practices for future studies aiming at explaining the human language system using neural language models.
[bibtex-entry]
-
Siobhan Roberts.
Is Geometry a Language That Only Humans Know?,
The New York Times,
March 2022.
[PDF]
Abstract:
Neuroscientists are exploring whether shapes like squares and rectangles -- and our ability to recognize them -- are part of what makes our species special.
[bibtex-entry]
Disclaimer:
This material is presented to ensure timely dissemination of
scholarly and technical work. Copyright and all rights therein
are retained by authors or by other copyright holders.
All persons copying this information are expected to adhere to
the terms and constraints invoked by each author's copyright.
In most cases, these works may not be reposted
without the explicit permission of the copyright holder.
Note that this is not an exhaustive list of publications, but only a selection. Contact the individual authors for complete lists of references.
Last modified: Sat Jan 4 12:12:18 2025
Author: cp983411.
This document was translated from BibTeX by
bibtex2html