SUMMARY
- Catastrophic forgetting, or interference, is the tendency of an artificial neural network (ANN) to lose accuracy on previously learned tasks as it learns new ones: training on a new task disrupts connection weights that were important for solving earlier tasks. Current solutions either provide limited gains in sequential-task accuracy or require high computational overhead.
- Synaptic stabilization weights the importance of each synapse to determine which may be overwritten when learning a new task, while context-dependent gating activates a random, largely non-overlapping subset of nodes for each learned task. When the two methods are combined, they act synergistically, retaining higher task accuracy over more learned tasks than either method applied alone.
- The invention is a novel algorithm combining synaptic stabilization and context-dependent gating that enables neural networks to “remember” previous tasks while being trained on new ones. The algorithm can be applied to both feedforward and recurrent network architectures and is compatible with both supervised and reinforcement learning.
- A recurrent neural network (RNN) was trained using both supervised and reinforcement-based learning on a battery of 20 sequential tasks commonly used in neuroscience experiments to test capabilities such as decision making, working memory, and categorization. With supervised learning, implementing the algorithm boosted average task accuracy from 80.0% to 98.2% compared with synaptic stabilization and a context signal alone. With reinforcement learning, the algorithm achieved an average task accuracy of 96.4%. Similar results were reported for feedforward neural networks trained on image classification tasks.
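The two components described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the patented implementation: the gating fraction, the quadratic-penalty form, and all variable names (`gates`, `stabilization_penalty`, `gate_fraction`, `c`) are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_tasks = 100, 20
gate_fraction = 0.8  # assumed fraction of hidden units silenced per task

# Context-dependent gating: a fixed random binary mask per task, so each
# task uses a largely non-overlapping subset of hidden units.
gates = (rng.random((n_tasks, n_hidden)) > gate_fraction).astype(float)

def gated_hidden(h, task_id):
    """Apply the task-specific gate to a hidden activation vector."""
    return h * gates[task_id]

# Synaptic stabilization: a quadratic penalty (in the style of EWC or
# synaptic intelligence) that discourages changing weights marked as
# important for previously learned tasks.
def stabilization_penalty(w, w_prev, importance, c=1.0):
    """Penalty added to the loss; `importance` is per-weight."""
    return c * np.sum(importance * (w - w_prev) ** 2)
```

In training, the gate for the current task is applied to the hidden layer on every forward pass, and the stabilization penalty is added to the task loss so gradient descent trades off new-task performance against preserving important weights.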
FIGURE

Performance of an RNN implementing the algorithm (magenta and black data points) compared with an RNN using synaptic stabilization and a context signal alone (green data points) on a battery of cognitive tasks. The X-axis denotes the specific task; the Y-axis denotes accuracy reported over 6,000 test batches.
ADVANTAGES
- Higher accuracy than industry-standard methods, particularly beyond 10 sequential tasks
- Easy to implement in pre-existing neural networks
- Lower computational overhead
APPLICATIONS
- Artificial general intelligence (AGI)
- Computer vision
PUBLICATIONS
Apr 3, 2019
Licensing, Investment, Co-development
Patent Issued
Prototypes (both feedforward and recurrent neural networks)
David Freedman