

The Tensor Brain (TB) has been introduced as a computational model for perception and memory. This paper provides an overview of the TB model, incorporating recent developments and insights into its functionality. The TB is composed of two primary layers: the representation layer and the index layer. The representation layer serves as a model for the subsymbolic global workspace, a concept derived from consciousness research. Its state represents the cognitive brain state, capturing the dynamic interplay of sensory and cognitive processes. The index layer, in contrast, contains symbolic representations for concepts, time instances, and predicates. In a bottom-up operation, sensory input activates the representation layer, which then triggers associated symbolic labels in the index layer. Conversely, in a top-down operation, symbols in the index layer activate the representation layer, which in turn influences earlier processing layers through embodiment. This top-down mechanism underpins semantic memory, enabling the integration of abstract knowledge into perceptual and cognitive processes.

A key feature of the TB is its use of concept embeddings, which function as connection weights linking the index layer to the representation layer. As a concept’s “DNA,” these embeddings consolidate knowledge from diverse experiences, sensory modalities, and symbolic representations, providing a unified framework for learning and memory.

Although the TB is primarily a computational model, it has been hypothesized to reflect certain aspects of actual brain function. Notably, the sequential generation of symbols in the TB may represent a precursor to the development of natural language. The model incorporates an attention mechanism and supports multitasking through multiplexing, simulating the brain’s ability to rapidly switch between mental states.
Additionally, the TB emphasizes multimodality, with the representation layer integrating inputs across multiple sensory and cognitive dimensions.
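The interplay described above can be illustrated with a minimal sketch. It is not the paper's implementation; it merely assumes a linear mapping in which a single embedding matrix serves as the connection weights in both directions, with hypothetical dimensions and function names. Bottom-up, the representation-layer state is scored against every concept embedding to select a symbol; top-down, the selected symbol writes its embedding back into the representation layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: d = size of the representation layer
# (the subsymbolic workspace), n = number of symbols in the index layer.
d, n = 16, 5

# Concept embeddings: one d-dimensional vector per symbol, acting as
# the connection weights between the index and representation layers.
A = rng.normal(size=(n, d))

def bottom_up(h):
    """A sensory-driven representation state h activates a symbolic label:
    each symbol scores its embedding against the workspace state."""
    scores = A @ h                  # inner products with all embeddings
    return int(np.argmax(scores))   # index of the winning symbol

def top_down(k):
    """An activated symbol k writes its embedding back into the
    representation layer, shaping subsequent processing."""
    return np.tanh(A[k])            # squashed embedding as the new state

# Round trip: a sensory state triggers a symbol, and the symbol's
# embedding then reshapes the workspace state.
h = rng.normal(size=d)
k = bottom_up(h)
h_new = top_down(k)
```

Sequential symbol generation, as discussed in the text, would correspond to iterating this loop: each top-down state becomes the input for the next bottom-up step.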