

Hi all. As we reach roughly four months since the first news of the pandemic, and almost three million cases, needless to say, times are still strange. Despite this, we have all coped in our own ways and stayed diligent as we adjust to this new normal. Zoom calls, working from home, and food delivery have replaced last year's routines. However, thanks to the awesome work of health-care workers, industry professionals, scientists, and everyone else involved, the curve is flattening in many countries. While it is too soon to tell how long the tunnel may be, we can see a glimmer of light at the end of it. If there is anything more ContinualAI can do to help, we encourage you to reach out and let us know.

Perhaps a short read about artificial intelligence work will be a welcome distraction from everything else going on in the world. As such, we're happy to share what ContinualAI has been working on over the last month as we foster and grow the continual learning community, help where we can, and accelerate research on continual learning AI: a major step in the direction of strong AI. Passionate about our mission? Join us on Slack if you haven't already, and feel free to donate even a small amount if you are able. It goes a long way!

Please continue to stay diligent and safe through these strange times. 

A Few Recent Announcements

  • This past September, the largest known Eisenstein prime was discovered. Yet more interesting to us is the Eisenstein prime 809. Why? Because this month we passed 809 Slack members! We look forward to all the diverse ideas and opportunities that will come from such a large community from around the world!

  • On the 29th of April at 5 p.m. CEST, we will host the 3rd Monthly Online Meetup: "Real-World Applications of Continual Learning". Join us here to check out awesome presentations on how these algorithms are currently being used: from the edge to the cloud.

  • To further aid communication, last month we announced a new ContinualAI platform for CL discussions, based on Discourse. We are building a great assortment of knowledge there. Please do check it out!

  • Every Friday, join us for our reading group! Visit the reading-group Slack channel for updates, and see the past papers here.

  • Speaking of 800+ members and social distancing, we thought now would be a good time to get to know each other better. So, we're launching the Virtual Coffee program! Join the virtual-coffee channel on Slack for more information on how our random matching algorithm works, and how, every two weeks, you and another member can get to know each other, fostering valuable connections within our community!
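If you are curious what "random matching" looks like in practice, here is a minimal sketch of how a biweekly pairing round could be drawn. This is an illustrative example only, not the actual Virtual Coffee bot: the function name, the seeding, and the odd-member handling are all assumptions for the sake of the sketch.

```python
import random

def make_coffee_pairs(members, seed=None):
    """Hypothetical sketch: randomly pair members for one virtual-coffee round.

    Shuffles the member list, then groups adjacent entries into pairs.
    If the count is odd, the leftover member joins the last pair as a trio.
    """
    rng = random.Random(seed)  # seedable for reproducible rounds
    shuffled = list(members)
    rng.shuffle(shuffled)
    # Take members two at a time from the shuffled order.
    pairs = [shuffled[i:i + 2] for i in range(0, len(shuffled) - 1, 2)]
    # With an odd number of members, attach the last person to the final pair.
    if len(shuffled) % 2 == 1 and pairs:
        pairs[-1].append(shuffled[-1])
    return pairs
```

Re-running every two weeks with a fresh seed gives everyone a new random coffee partner each round.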
Not on our mailing list? Join now!

ContinualAI Sponsored Programs

  • We are sponsoring the Continual Learning workshop "CLVision" at CVPR 2020, June 13–19, 2020 in Seattle, USA. We're excited that over 50 teams have decided to join the challenge!

  • We are sponsoring the Special Session "New Trends in Continual Learning with Deep Architectures" at IEEE EAIS 2020, May 27–29, 2020 in Bari, Italy.

Top paper picks: 

A paper we think you should read, as suggested by our community:

Continual Learning with Gated Incremental Memories for sequential data processing (Cossu et al. 2020).
The ability to learn in dynamic, nonstationary environments without forgetting previous knowledge, also known as Continual Learning (CL), is a key enabler for scalable and trustworthy deployments of adaptive solutions. While the importance of continual learning is largely acknowledged in machine vision and reinforcement learning problems, this is mostly under-documented for sequence processing tasks. This work proposes a Recurrent Neural Network (RNN) model for CL that is able to deal with concept drift in input distribution without forgetting previously acquired knowledge. We also implement and test a popular CL approach, Elastic Weight Consolidation (EWC), on top of two different types of RNNs. Finally, we compare the performances of our enhanced architecture against EWC and RNNs on a set of standard CL benchmarks, adapted to the sequential data processing scenario. Results show the superior performance of our architecture and highlight the need for special solutions designed to address CL in RNNs.


Copyright © 2020 ContinualAI, All rights reserved.

Our mailing address is:

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

ContinualAI · Via Caduti della Via Fani, 7 · Bologna, Bo 40121 · Italy

Email Marketing Powered by Mailchimp