

We propose a basic computational framework that can be used for modeling hierarchical cognitive architectures. The system is built from at least two layers. The first layer is a semantic network that stores semiotic triads, i.e. objects, categories, and the symbols that represent them. This network is constructed from an ensemble of machine learning or clustering methods, each trained on a set of prototypes for a given category. The semiotic triads are linked by weights representing the cognitive similarity between the represented symbols. The number of symbols that can be stored in such a network scales linearly with the number of stored datasets of prototypical objects, as assumed in the symbol grounding paradigm. The second layer is the communication, or interaction, network, which allows agents to communicate with each other, i.e. to speak about symbols and/or the underlying objects (categories). An agent can pass an object (a perceptual item) together with its name to another agent, either to modify the categorical system of that agent or to communicate the symbolic name of a given category. Both cognitive layers are simulated within the Agent-Based Modeling (ABM) approach. A typical cognitive task can therefore be implemented as a dynamical process of equilibration of the set of semiotic networks in a population of interacting agents endowed with language.
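The following Python sketch is only meant to illustrate how the two layers could be wired together under strong simplifying assumptions: a nearest-centroid classifier stands in for the full ensemble of machine learning or clustering methods, and a single naming interaction stands in for the communication layer. The Agent class, the interaction function, and all names below are illustrative and not part of the framework's specification.

import numpy as np

class Agent:
    def __init__(self):
        # Semantic layer: symbol -> list of prototype vectors
        # (one dataset of prototypical objects per category).
        self.prototypes = {}

    def learn(self, symbol, obj):
        # Add a perceptual item to the prototype set grounding `symbol`.
        self.prototypes.setdefault(symbol, []).append(np.asarray(obj, dtype=float))

    def categorize(self, obj):
        # Return the symbol whose prototype centroid is closest to `obj`.
        obj = np.asarray(obj, dtype=float)
        return min(
            self.prototypes,
            key=lambda s: np.linalg.norm(np.mean(self.prototypes[s], axis=0) - obj),
        )

def interaction(speaker, hearer, obj):
    # Communication layer: the speaker names an object; the hearer either
    # already categorizes it under the same symbol, or updates its
    # categorical system with the passed object and name (equilibration step).
    name = speaker.categorize(obj)
    if hearer.prototypes and hearer.categorize(obj) == name:
        return True
    hearer.learn(name, obj)
    return False

# Example: one agent teaches another the names of two simple categories.
a, b = Agent(), Agent()
for x in ([0.0, 0.1], [0.1, 0.0]):
    a.learn("small", x)
for x in ([1.0, 0.9], [0.9, 1.1]):
    a.learn("large", x)
interaction(a, b, [0.05, 0.05])        # b adopts "small" for this region
print(interaction(a, b, [0.02, 0.08])) # True: the symbol is now shared

In a full simulation this pairwise interaction would be iterated over a population of agents connected by the interaction network, so that repeated exchanges drive the individual semiotic networks toward a shared, equilibrated categorical system.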