Driscoll et al. (2024) investigated how recurrent neural networks (RNNs) perform multiple tasks flexibly. The authors trained RNNs on 15 different cognitive tasks and analyzed the networks' dynamics during task execution. They found that the networks learned to accomplish these tasks by reusing and recombining a set of “dynamic motifs”: recurring patterns of neural activity that implement specific computations.
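To make the setup concrete, here is a minimal sketch of this kind of multitask training: a single recurrent network receives the stimulus together with a one-hot “rule” input identifying the current task. All dimensions, names, and the random placeholder data below are illustrative assumptions, not the paper's exact protocol.

```python
import torch
import torch.nn as nn

N_TASKS, N_STIM, N_HIDDEN, N_OUT, T = 15, 4, 256, 3, 50

class MultitaskRNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.RNN(N_STIM + N_TASKS, N_HIDDEN,
                          nonlinearity="tanh", batch_first=True)
        self.readout = nn.Linear(N_HIDDEN, N_OUT)

    def forward(self, stim, rule):
        # Broadcast the one-hot rule over time and append it to the stimulus,
        # so the same network can be told which task to perform.
        rule_t = rule[:, None, :].expand(-1, stim.shape[1], -1)
        h, _ = self.rnn(torch.cat([stim, rule_t], dim=-1))
        return self.readout(h), h  # outputs, plus hidden states to analyze

model = MultitaskRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on random placeholder trials (real tasks would supply
# structured stimuli and targets for each task).
stim = torch.randn(32, T, N_STIM)
rule = nn.functional.one_hot(torch.randint(N_TASKS, (32,)), N_TASKS).float()
target = torch.randn(32, T, N_OUT)
out, hidden = model(stim, rule)
loss = nn.functional.mse_loss(out, target)
opt.zero_grad(); loss.backward(); opt.step()
```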
These dynamic motifs were shared across tasks that required similar computational elements, such as memory, categorization, and delayed response. For example, tasks that required remembering a continuous circular variable reused the same ring attractor, a pattern of sustained neural activity that maintains the representation of that variable over time.
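To illustrate what a ring attractor is, the sketch below hand-builds a classic cosine-connectivity ring model from theoretical neuroscience; this is an assumed textbook construction for illustration, not the mechanism extracted from the paper's trained networks. A bump of activity cued at one angle persists after the input is removed, which is what lets a network remember a circular variable.

```python
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)  # preferred angles

# Cosine recurrent connectivity: units with nearby preferred angles excite
# each other; units with opposite preferred angles inhibit each other.
J1 = 3.0
W = J1 * np.cos(theta[:, None] - theta[None, :]) / N

def step(r, dt=0.1, tau=1.0):
    """One Euler step of tau * dr/dt = -r + tanh(W @ r)."""
    return r + (dt / tau) * (-r + np.tanh(W @ r))

# Briefly cue a small bump at 90 degrees, then run with no external input.
r = 0.1 * np.cos(theta - np.pi / 2)
for _ in range(1000):
    r = step(r)

# The bump settles to a stable shape but stays at the cued angle: the ring
# has a continuous family of such states, one for every storable angle.
decoded = np.degrees(np.angle(np.sum(r * np.exp(1j * theta))))
print(f"stored angle ~ {decoded:.1f} deg")  # ~90.0
```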
The authors also found that dynamic motifs were implemented by clusters of neural units, and that lesioning these clusters caused modular performance deficits: only the tasks that depended on the lesioned motif were impaired.
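A lesion probe of this kind can be sketched by silencing a candidate cluster of hidden units in the toy model above and re-scoring every task; a modular deficit shows up as a performance drop confined to the tasks that use that motif. The clustering criterion and the `evaluate` helper below are hypothetical placeholders.

```python
import torch

def lesion_units(model, unit_idx):
    """Silence the given hidden units by zeroing their weights, removing
    them from the recurrent dynamics and from the readout."""
    with torch.no_grad():
        model.rnn.weight_ih_l0[unit_idx, :] = 0.0  # input -> lesioned units
        model.rnn.weight_hh_l0[unit_idx, :] = 0.0  # recurrent, incoming
        model.rnn.weight_hh_l0[:, unit_idx] = 0.0  # recurrent, outgoing
        model.readout.weight[:, unit_idx] = 0.0    # contribution to output

# Hypothetical usage: `cluster` would come from grouping units by their
# task-related activity, and `evaluate` would return per-task accuracy.
# lesion_units(model, cluster)
# for task in range(N_TASKS):
#     print(task, evaluate(model, task))
```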
This study sheds light on how artificial neural networks can achieve cognitive flexibility, a hallmark of intelligent behavior. The discovery of shared dynamic motifs suggests that modularity and reuse of computational components may be fundamental principles of neural computation, both in biological brains and in artificial networks.
Reference:
Driscoll, L. N. et al. Flexible multitask computation in recurrent networks uses shared dynamic motifs. Nature Neuroscience 27, 1118–1133 (2024).
Photo by Moritz Kindler on Unsplash