View this email in your browser


Hi everyone. We're getting close to our 1,000th member: only 15 to go. Who might it be?! Stay tuned for the festivities and other big news to come! Soon, we'll have 1,000 people, from academia to industry and beyond, who are excited about building smarter machines. As a small part of that mission, we're happy to share what ContinualAI has been working on over the last month to accelerate research on continual learning: a necessary step in the direction of strong AI. Passionate about our mission? Join us on Slack if you haven't already, and feel free to donate if you are able.


It's that time of year when it makes sense to revisit the Tao of Programming

A Few Recent Announcements

  • If you have not had a chance to watch our CVPR 2020 workshop, remember that you can find the entire workshop on our YouTube account, along with lots of other great content!

  • Our last online meetup, "Continual Learning: in the Cloud, at the Edge, or Both?", was a success! Don't forget to read the discussions in our forum!

  • If you missed that meetup, or any of our others, such as "Continual Learning with Sequential Streaming Data", never fear: you can find the recordings here.

  • The ContinualAI Wiki has received some TLC over the last couple of weeks, and looks GREAT. We're working hard to push more changes, and we're actively looking for members to help out. If you would like to help or contribute a short section, please reach out on Slack! A big thank you to Andrea for heading the project's next phase.

  • Reminder: Every Friday, join us for our reading group! Visit the reading-group slack channel for updates, and see the past papers here. You can also watch the recordings.
Not on our mailing list? Join now!

ContinualAI Sponsored Programs


Top paper picks: 

A paper we think you should read, if you have not yet. This is one of the classic CL papers, presenting the earliest systematic account of the catastrophic interference problem:

Catastrophic Interference in Connectionist Networks: The Sequential Learning Problem 
(Michael McCloskey & Neal J. Cohen, 1989)

Connectionist networks in which information is stored in weights on connections among simple processing units have attracted considerable interest in cognitive science. Much of the interest centers around two characteristics of these networks. First, the weights on connections between units need not be prewired by the model builder but rather may be established through training in which items to be learned are presented repeatedly to the network and the connection weights are adjusted in small increments according to a learning algorithm. Second, the networks may represent information in a distributed fashion.

This chapter discusses the catastrophic interference in connectionist networks. Distributed representations established through the application of learning algorithms have several properties that are claimed to be desirable from the standpoint of modeling human cognition. These properties include content-addressable memory and so-called automatic generalization in which a network trained on a set of items responds correctly to other untrained items within the same domain.

New learning may interfere catastrophically with old learning when networks are trained sequentially. The analysis of the causes of interference implies that at least some interference will occur whenever new learning may alter weights involved in representing old learning, and the simulation results demonstrate only that interference is catastrophic in some specific networks.
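The interference the abstract describes is easy to reproduce in a few lines. Below is a minimal sketch (our own illustration, not code from the paper; the tasks, model, and hyperparameters are all made up for the demo): a linear model is trained by gradient descent on task A, then sequentially on task B only, and its error on task A climbs back up because the shared weights are overwritten.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy "tasks": each has its own ground-truth weight vector.
w_a = rng.normal(size=(10, 1))
w_b = rng.normal(size=(10, 1))
X_a = rng.normal(size=(20, 10)); y_a = X_a @ w_a
X_b = rng.normal(size=(20, 10)); y_b = X_b @ w_b

def train(X, y, W, lr=0.1, steps=2000):
    """Plain gradient descent on mean squared error."""
    for _ in range(steps):
        W = W - lr * X.T @ (X @ W - y) / len(X)
    return W

def mse(X, y, W):
    return float(np.mean((X @ W - y) ** 2))

W = train(X_a, y_a, np.zeros((10, 1)))   # learn task A first
loss_a_before = mse(X_a, y_a, W)         # near zero: task A is learned
W = train(X_b, y_b, W)                   # then train only on task B
loss_a_after = mse(X_a, y_a, W)          # much larger: task A is "forgotten"
```

Because every weight serves both tasks, fitting B freely moves the solution away from A's; McCloskey & Cohen's analysis says some such interference occurs whenever new learning can alter weights involved in old learning.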

Other Useful Links: 


Copyright © 2020 ContinualAI, All rights reserved.

Our mailing address is:

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

This email was sent to <<Email Address>>
why did I get this?    unsubscribe from this list    update subscription preferences
ContinualAI · Via Caduti della Via Fani, 7 · Bologna, Bo 40121 · Italy

Email Marketing Powered by Mailchimp