Study Reveals Our Minds May Process Language Like Chatbots

17 November 2024

A recent study suggests that the human brain might process language in a way similar to advanced AI language models, by using flexible, context-aware patterns instead of fixed rules. Researchers studied brain activity in the inferior frontal gyrus as participants listened to a podcast, finding geometric patterns in brain activity that closely matched those in AI language models. This similarity allowed researchers to predict how the brain would respond to new words, showing that the brain represents language in a dynamic and context-driven way. These insights could deepen our understanding of how language works in the brain and inspire future improvements in language-processing AI.

A recent study led by Dr. Ariel Goldstein of the Department of Cognitive and Brain Sciences and the Business School at the Hebrew University of Jerusalem, in close collaboration with Google Research in Israel and the New York University School of Medicine, found fascinating similarities in how the human brain and artificial intelligence models process language. The research suggests that the brain, like AI systems such as GPT-2, may use a continuous, context-sensitive embedding space to derive meaning from language, a breakthrough that could reshape our understanding of neural language processing.

Unlike traditional language models based on fixed rules, deep language models like GPT-2 employ neural networks to create “embedding spaces”—high-dimensional vector representations that capture relationships between words in various contexts. This approach allows these models to interpret the same word differently based on surrounding text, offering a more nuanced understanding. Dr. Goldstein’s team sought to explore whether the brain might employ similar methods in its processing of language.
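To make the idea of a contextual embedding concrete, the sketch below (not the study's code) uses the open-source Hugging Face transformers library to extract GPT-2's internal representation of the same word in two different sentences. The sentences and the single-token assumption are illustrative only.

```python
# Minimal sketch: extracting contextual GPT-2 embeddings for the same word
# in two different contexts (illustrative only; not the study's code).
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Return the hidden-state vector GPT-2 assigns to `word` in `sentence`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # shape: (tokens, 768)
    # Locate the token for the word (assumes the word is a single token).
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    idx = next(i for i, t in enumerate(tokens) if t.lstrip("Ġ") == word)
    return hidden[idx]

a = word_embedding("The river bank was muddy", "bank")
b = word_embedding("She deposited cash at the bank", "bank")

# The same word receives a different vector depending on its context.
sim = torch.cosine_similarity(a, b, dim=0)
print(f"cosine similarity between the two 'bank' vectors: {sim:.3f}")
```

Because the vector for "bank" shifts with its surroundings, the model captures sense distinctions that a fixed, one-vector-per-word lexicon cannot.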

To investigate, the researchers recorded neural activity in the inferior frontal gyrus—a region known for language processing—of participants as they listened to a 30-minute podcast. By mapping each word to a “brain embedding” in this area, they found that these brain-based embeddings displayed geometric patterns similar to the contextual embedding spaces of deep language models. Remarkably, this shared geometry enabled the researchers to predict brain responses to previously unencountered words, an approach called zero-shot inference. This implies that the brain may rely on contextual relationships rather than fixed word meanings, reflecting the adaptive nature of deep learning systems.
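The logic of zero-shot inference can be illustrated with a simple encoding-model sketch: fit a linear map from contextual embeddings to neural activity on one set of words, then test it on words withheld entirely from training. This is a toy with synthetic data standing in for real recordings, not the published analysis pipeline; the array sizes and regularization choice are assumptions.

```python
# Toy sketch of a zero-shot encoding analysis (synthetic data; not the
# study's pipeline): learn a linear map from contextual embeddings to
# per-word neural activity, then predict activity for held-out words.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, dim_embed, n_electrodes = 500, 768, 64  # assumed sizes

# Stand-ins for GPT-2 contextual embeddings and IFG activity per word.
X = rng.normal(size=(n_words, dim_embed))
true_map = rng.normal(size=(dim_embed, n_electrodes))
Y = X @ true_map + rng.normal(scale=5.0, size=(n_words, n_electrodes))

# Hold out whole words, so test words were never seen during fitting.
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Correlate predicted and observed activity per electrode: above-chance
# values would indicate the shared geometry generalizes to unseen words.
corr = [np.corrcoef(Y_pred[:, e], Y_test[:, e])[0, 1]
        for e in range(n_electrodes)]
print(f"mean prediction correlation across electrodes: {np.mean(corr):.3f}")
```

The key design point is that generalization is measured on words absent from training, so successful prediction cannot come from memorizing word-specific responses; it must come from the shared geometry of the two embedding spaces.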

“Our findings suggest a shift from symbolic, rule-based representations in the brain to a continuous, context-driven system,” explains Dr. Goldstein. “We observed that contextual embeddings, akin to those in deep language models, align more closely with neural activity than static representations, advancing our understanding of the brain’s language processing.”

This study indicates that the brain dynamically updates its representation of language based on context rather than depending solely on memorized word forms, challenging traditional psycholinguistic theories that emphasized rule-based processing. Dr. Goldstein’s work aligns with recent advancements in artificial intelligence, hinting at the potential for AI-inspired models to deepen our understanding of the neural basis of language comprehension.

The team plans to expand this research with larger samples and more detailed neural recordings to validate and extend these findings. By drawing connections between artificial intelligence and brain function, this work could shape the future of both neuroscience and language-processing technology, opening doors to innovations in AI that better reflect human cognition.

The research paper titled “Alignment of brain embeddings and artificial contextual embeddings in natural language points to common geometric patterns” is now available in Nature Communications and can be accessed at https://doi.org/10.1038/s41467-024-46631-y.  

Researchers:

Ariel Goldstein1,2, Avigail Grinstein-Dabush2,8, Mariano Schain2,8, Haocheng Wang3, Zhuoqiao Hong3, Bobbi Aubrey3,4, Samuel A. Nastase3, Zaid Zada3, Eric Ham3, Amir Feder2, Harshvardhan Gazula3, Eliav Buchnik2, Werner Doyle4, Sasha Devore4, Patricia Dugan4, Roi Reichart5, Daniel Friedman4, Michael Brenner2,6, Avinatan Hassidim2, Orrin Devinsky4, Adeen Flinker4,7, Uri Hasson2,3

Institutions:

  1. Business School, Data Science Department and Cognitive Department, Hebrew University
  2. Google Research, Tel Aviv
  3. Department of Psychology and the Neuroscience Institute, Princeton University
  4. New York University Grossman School of Medicine, New York
  5. Faculty of Industrial Engineering and Management, Technion, Israel Institute of Technology
  6. School of Engineering and Applied Science, Harvard University
  7. New York University Tandon School of Engineering


The Hebrew University of Jerusalem is Israel’s premier academic and research institution. With over 23,000 students from 90 countries, it is a hub for advancing scientific knowledge, producing nearly 40% of Israel’s civilian scientific research output and registering over 11,000 patents. The university’s faculty and alumni have earned eight Nobel Prizes, two Turing Awards, and a Fields Medal, underscoring their contributions to ground-breaking discoveries. The Hebrew University ranks 81st globally in the Shanghai Ranking. To learn more about the university’s academic programs, research initiatives, and achievements, visit the official website at http://new.huji.ac.il/en