

The progress of artificial intelligence is reaching a point where some research questions that were once only relevant for humans and other animals are becoming relevant for artificial agents as well. One of these questions comes from human intelligence research and is known as Spearman's Law of Diminishing Returns (SLODR). Charles Spearman, the father of factor analysis and of the g factor (a dominant factor explaining most of the variance in cognitive tests across human populations), observed that when the analysis was restricted to the subpopulation of the most able subjects, the relevance of this dominant factor diminished, as if the power of general intelligence were saturated, or not fully used, by the most able individuals. In the century since, numerous theoretical explanations and experiments have tried to confirm or refute Spearman's hypothesis, but all of them have been based on human or animal populations. In this paper, we analyse for the first time whether SLODR makes sense for artificial agents and what its role should be in the analysis of general-purpose AI. We use a synthetic scenario based on modified elementary cellular automata (ECA), where the ECA rules work as tasks and the population of agents is generated with an agent policy language. Different slices of the population by ability, and of the tasks by difficulty, are analysed, showing that SLODR does not really appear. Indeed, we find a slight effect in the opposite direction: correlations are higher for the more able subpopulations, a pattern we conjecture as a Universal Law of Augmenting Returns (ULOAR).
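
As a point of reference for the setup summarised above, the following is a minimal sketch in Python (with NumPy) of the two ingredients the abstract mentions: applying a standard ECA rule in the Wolfram sense, and comparing inter-task correlations across ability slices of a population of agents. All names, the latent-factor score model, and the quantile thresholds are hypothetical illustrations; the paper's actual modified ECA and agent policy language are not reproduced here.

```python
import numpy as np

def eca_step(cells, rule):
    """One synchronous update of a standard elementary cellular automaton:
    each cell's next state is the bit of the 8-bit rule number indexed by
    its (left, self, right) neighbourhood, with wrap-around boundaries."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def mean_intertask_correlation(scores):
    """Mean off-diagonal pairwise Pearson correlation between the task
    columns of an (agents x tasks) score matrix."""
    corr = np.corrcoef(scores, rowvar=False)  # tasks x tasks
    return float(corr[~np.eye(corr.shape[0], dtype=bool)].mean())

def ability_slice(scores, lower_q, upper_q):
    """Keep the agents whose overall ability (mean score across tasks)
    falls between two quantiles of the population."""
    ability = scores.mean(axis=1)
    lo, hi = np.quantile(ability, [lower_q, upper_q])
    return scores[(ability >= lo) & (ability <= hi)]

print("rule 110 step:", eca_step([0, 0, 1, 0, 0], 110))

# Hypothetical population: 1000 agents on 8 tasks, with scores driven by a
# latent general factor plus task-specific noise (for illustration only).
rng = np.random.default_rng(0)
g = rng.normal(size=(1000, 1))
scores = g + 0.8 * rng.normal(size=(1000, 8))

print("less able half:", mean_intertask_correlation(ability_slice(scores, 0.0, 0.5)))
print("more able half:", mean_intertask_correlation(ability_slice(scores, 0.5, 1.0)))
# SLODR would predict a lower correlation in the more able half;
# the abstract reports a slight effect in the opposite direction (ULOAR).
```

One caveat worth noting about such comparisons: slicing on the same scores used for the correlations restricts the range of ability in both halves, which by itself attenuates correlations, and this is one of the methodological subtleties the SLODR literature has to control for.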