Structurally Sound, Cognitively Distinct: A Comparative Analysis of Language Computation in Artificial and Biological Neural Networks
Abstract
Artificial neural networks (ANNs), particularly large language models such as GPT-2 and BERT, have demonstrated remarkable capabilities in language comprehension and generation, prompting questions about the extent to which their computational processes resemble those of biological neural networks (BNNs), the dynamic, adaptive systems of the human brain. This thesis investigates whether ANNs and BNNs share fundamental computational strategies in language tasks, distinguishing task-general computations (predictive coding, context-sensitive representations, hierarchical processing) from task-specific ones (task flexibility and latent cause inference). Using methodologies such as encoding models, representational similarity analysis (RSA), attention-head mapping, and zero-shot generalization tests, the thesis identifies areas of alignment, though primarily under restricted-modality conditions. Notably, predictive coding shows strong surface-level similarity between the two systems; deeper analysis, however, reveals significant divergences in context sensitivity, hierarchical processing, and especially task flexibility, where ANNs fail to replicate human-like interpretation. These divergences underscore crucial limitations of current ANN architectures, highlighting their lack of the cognitive scaffolding necessary for genuinely flexible comprehension. The thesis concludes by suggesting future research directions, including integrating multimodal inputs and embedding dynamic memory structures into ANN training, to better assess and narrow the cognitive gap between artificial and biological systems.
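As a concrete illustration of one methodology named above, the sketch below shows the core comparison RSA performs: build a representational dissimilarity matrix (RDM) for each system over the same stimulus set, then rank-correlate the two RDMs. This is a minimal sketch, not the thesis's actual pipeline: the arrays here are random stand-ins, and all names and dimensions (ann_activations, brain_responses, 40 stimuli, 768 model features, 200 voxels) are hypothetical. In practice the first matrix would hold a model layer's hidden states (e.g., from GPT-2) and the second would hold fMRI or electrode responses to the same stimuli.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)

    # Hypothetical stand-ins: rows are stimuli (e.g., sentences), columns are
    # features. Real analyses would substitute ANN layer activations and
    # recorded brain responses for the same stimuli.
    n_stimuli = 40
    ann_activations = rng.normal(size=(n_stimuli, 768))   # e.g., GPT-2 hidden states
    brain_responses = rng.normal(size=(n_stimuli, 200))   # e.g., fMRI voxels

    # Step 1: one RDM per system, here using correlation distance between
    # stimulus representations; pdist returns the condensed upper triangle.
    ann_rdm = pdist(ann_activations, metric="correlation")
    brain_rdm = pdist(brain_responses, metric="correlation")

    # Step 2: the RSA score is the rank correlation between the two RDMs;
    # a higher rho means more similar representational geometry.
    rho, p_value = spearmanr(ann_rdm, brain_rdm)
    print(f"RSA (Spearman rho) = {rho:.3f}, p = {p_value:.3f}")

Because RSA compares distance structure rather than raw activations, it sidesteps the mismatch in dimensionality between model features and brain measurements, which is why it is a standard tool for the kind of ANN-BNN alignment tests the abstract describes.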