A research team from London, U.K.-based DeepMind, Google's artificial intelligence arm, has developed a method to train computer systems to process information more like humans do, according to a study published in Proceedings of the National Academy of Sciences.
While deep neural networks are a common machine learning technique used to train models on tasks like language translation and image classification, they have drawbacks, according to the study authors.
"One weakness of such models is that, unlike humans, they are unable to learn multiple tasks sequentially," the authors note. "The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence."
For the study, the researchers — led by James Kirkpatrick, a research scientist at DeepMind — trained a model on a sequence of tasks while weighting how important each of the network's connections was to previously learned tasks, so that changes to the most important weights were slowed. The researchers found the trained model could retain knowledge even if it did not encounter a task for a period of time.
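The paper calls this technique elastic weight consolidation: a quadratic penalty anchors each weight to its old value, scaled by an estimate of how important that weight was to the earlier task. A minimal numpy sketch of that penalty follows; the function names and toy numbers here are illustrative, not from the paper.

```python
import numpy as np

def ewc_penalty(theta, theta_star, importance, lam=1.0):
    """Quadratic penalty anchoring parameters to the values they had
    after an earlier task, scaled per-parameter by importance."""
    return 0.5 * lam * np.sum(importance * (theta - theta_star) ** 2)

def ewc_gradient(theta, theta_star, importance, lam=1.0):
    """Gradient of the penalty: important weights are pulled back toward
    their old values, while unimportant ones (importance ~ 0) move freely."""
    return lam * importance * (theta - theta_star)

# Toy illustration: two parameters; only the first mattered for task A.
theta_star = np.array([1.0, 1.0])     # weights learned on task A
importance = np.array([10.0, 0.0])    # per-weight importance estimates
theta = np.array([0.5, 5.0])          # weights drifting during task B

# The penalty only notices drift in the important parameter.
print(ewc_penalty(theta, theta_star, importance))   # 1.25
print(ewc_gradient(theta, theta_star, importance))  # [-5.  0.]
```

Adding this penalty to the new task's loss lets the network keep learning where it can afford to change, which is how sequential training avoids overwriting earlier skills.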
"We hope that this research represents a step toward programs that can learn in a more flexible and efficient way," the authors write in a DeepMind blog post. "We have shown that catastrophic forgetting is not an insurmountable challenge for neural networks."