

Hello everyone! The final months of 2020 are upon us, and times are still strange. We took a month-long hiatus from this letter to give those in academia starting the new academic year, and those in industry entering the third-quarter push, one less email to sift through. Meanwhile, ContinualAI has kept up its work, which we would like to share with you all.

We are continuing to grow, with well over 1,000 members from around the world working toward a shared vision of continually learning AI. We thank each and every one of you who has been along for the ride, for however long you have been with us. Pushing this branch of AI forward has been a silver lining to one roller coaster of a year.

Find a good review of AI applications aiding the pandemic efforts here. If you have ideas for how we can help, let us know!


We're happy to share what the community has been working on toward accelerating research on continually learning AI: a necessary step in the direction of strong AI. Passionate about our mission? Join us on Slack if you haven't already, and feel free to donate to support this goal.

A Few Recent Announcements

  • We are excited to announce the next ContinualAI Online Meetup (this Friday, 5:30 PM CET) on "Generalization and Robustness in Continual Learning"! Check out the exceptional schedule we have planned for the meetup above and prepare your questions; we will be sure to hold a strong panel discussion at the end. To join, the Eventbrite link is here and the MS Teams link is here!

  • The ContinualAI Wiki has received some TLC over the last couple of weeks, and it looks GREAT. We recently held a meetup all about it, which you can watch here to learn how to use it or even get involved, and you can check out our discussion here. A big thank you to Andrea Cossu for heading up this project.

  • We've maintained a great reading group lineup. Can't make this week's reading group? No worries! See the past papers here, and watch the recordings of all our previous events.

  • The ContinualAI Research (CLAIR) collaborative team is always looking for contributors to the many open-source projects we have under development. Contact me if you want to learn more about them and join us! We are always looking for motivated people willing to give back to this awesome community!
Not on our mailing list? Join now!

ContinualAI Sponsored Programs

  • Please reach out if you would be interested in us sponsoring your program!

ContinualAI has been an open community from the beginning. From the start, we have strived to make it a more diverse, equitable, and inclusive organization, in support of our mission of making research in continual learning and AI more fair, open, and collaborative. This is what we have always hoped ContinualAI would embody!

Toward this goal, we are excited to announce the creation of an Inclusion & Diversity committee within ContinualAI to improve the quality of our community, with particular attention to fulfilling this mission. If you have ideas for how to further this goal, please feel free to contact us; we would love to hear them.

Top paper pick: 

A paper we think you should read, if you have not yet, as chosen by the community:

Engineering a Less Artificial Intelligence

Fabian H. Sinz, Xaq Pitkow, Jacob Reimer, Matthias Bethge, Andreas S. Tolias 

Despite enormous progress in machine learning, artificial neural networks still lag behind brains in their ability to generalize to new situations. Given identical training data, differences in generalization are caused by many defining features of a learning algorithm, such as network architecture and learning rule. Their joint effect, called “inductive bias,” determines how well any learning algorithm—or brain—generalizes: robust generalization needs good inductive biases. Artificial networks use rather nonspecific biases and often latch onto patterns that are only informative about the statistics of the training data but may not generalize to different scenarios. Brains, on the other hand, generalize across comparatively drastic changes in the sensory input all the time. We highlight some shortcomings of state-of-the-art learning algorithms compared to biological brains and discuss several ideas about how neuroscience can guide the quest for better inductive biases by providing useful constraints on representations and network architecture.

Copyright © 2020 ContinualAI, All rights reserved.

Our mailing address is:

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

This email was sent to <<Email Address>>
ContinualAI · Via Caduti della Via Fani, 7 · Bologna, Bo 40121 · Italy

Email Marketing Powered by Mailchimp