Plastix

Inference of Synaptic Plasticity Rules

Overview of Plastix (Mehta et al., 2024)

Inferring the synaptic plasticity rules that govern learning in the brain is a key challenge in neuroscience. We developed a novel computational method to infer these rules from experimental data, applicable to both neural and behavioral recordings. Our approach approximates the plasticity rule with a parameterized function, employing either a truncated Taylor series for theoretical interpretability or a multilayer perceptron for greater expressivity. The plasticity parameters are optimized via gradient descent over entire trajectories, so that the simulated dynamics closely match the observed neural activity or behavioral learning dynamics. The method can uncover complex rules that induce long, nonlinear temporal dependencies, in particular rules involving factors such as postsynaptic activity and the current synaptic weights. We validate our approach through simulations, successfully recovering established rules such as Oja's rule, as well as more intricate plasticity rules with reward-modulated terms. We assess the robustness of the technique to noise and apply it to behavioral data from Drosophila in a probabilistic reward-learning experiment. Notably, our findings reveal an active-forgetting component of reward learning in flies, which improves predictive accuracy over previous models. This modeling framework offers a promising new avenue for elucidating the computational principles of synaptic plasticity and learning in the brain.
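To make the recipe concrete, below is a minimal JAX sketch of the core idea, not the authors' released implementation: a plasticity rule is parameterized as a truncated Taylor series in presynaptic activity x, postsynaptic activity y, and synaptic weight w; a full trajectory is simulated with `jax.lax.scan`; and the Taylor coefficients are fit by gradient descent through the entire trajectory so the simulated postsynaptic activity matches the observed one. All names (`delta_w`, `simulate`, `loss`), the network shape, and the hyperparameters are illustrative assumptions; Oja's rule is used as the ground truth only because the overview mentions recovering it.

```python
# Minimal sketch: fit Taylor-series plasticity coefficients by gradient
# descent through a whole simulated trajectory. Illustrative, not the
# authors' code; hyperparameters are assumptions.
import jax
import jax.numpy as jnp

def delta_w(theta, x, y, w):
    """Taylor-series rule: dw_i = sum_{a,b,c<=2} theta[a,b,c] x_i^a y^b w_i^c."""
    xs = jnp.stack([jnp.ones_like(x), x, x**2])            # x^0, x^1, x^2
    ys = jnp.stack([jnp.ones_like(w), jnp.full_like(w, y), jnp.full_like(w, y**2)])
    ws = jnp.stack([jnp.ones_like(w), w, w**2])
    return jnp.einsum('abc,ai,bi,ci->i', theta, xs, ys, ws)

def simulate(theta, w0, inputs):
    """Run the plastic synapses over a full input trajectory."""
    def step(w, x):
        y = jnp.dot(w, x)                                  # linear postsynaptic response
        w = w + delta_w(theta, x, y, w)                    # apply the candidate rule
        return w, y
    _, ys = jax.lax.scan(step, w0, inputs)
    return ys                                              # postsynaptic activity over time

def loss(theta, w0, inputs, observed):
    """MSE between simulated and observed postsynaptic activity."""
    return jnp.mean((simulate(theta, w0, inputs) - observed) ** 2)

# Ground truth: Oja's rule, dw = eta * (x * y - y^2 * w).
# In Taylor coordinates: theta[1,1,0] = eta (x*y term), theta[0,2,1] = -eta (y^2*w term).
eta = 0.05
theta_true = jnp.zeros((3, 3, 3)).at[1, 1, 0].set(eta).at[0, 2, 1].set(-eta)

key = jax.random.PRNGKey(0)
inputs = jax.random.normal(key, (500, 10))                 # T time steps, N synapses
w0 = jnp.full(10, 0.1)
observed = simulate(theta_true, w0, inputs)                # stand-in for recorded activity

# Fit: plain gradient descent through the full trajectory.
# The learning rate and step count are illustrative, not tuned values.
theta = jnp.zeros((3, 3, 3))
grad_fn = jax.jit(jax.grad(loss))
for _ in range(2000):
    theta = theta - 0.5 * grad_fn(theta, w0, inputs, observed)

print("recovered x*y coefficient:", theta[1, 1, 0])        # should approach +eta
print("recovered y^2*w coefficient:", theta[0, 2, 1])      # should approach -eta
```

Because the whole trajectory lives inside `jax.lax.scan`, automatic differentiation propagates the loss gradient back through every weight update, which is what lets the fit capture rules whose effects only show up over long stretches of the trajectory.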

References

  1. plastix
     Yash Mehta, Danil Tyulmankov, Adithya E. Rajagopalan, Glenn C. Turner, James E. Fitzgerald, and Jan Funke
     November 2024