
 

Hey everyone. Summer is upon us, bringing a slight change of backdrop to an already one-of-a-kind year. Yet, as history has shown, if we stay together and draw on the best aspects of our humanity, few obstacles will break our stride.

It's these great aspects of humanity that we hope to instill in the algorithms we build: to make AI a bit more human, to emulate and improve upon the best of ourselves. As a small part of that mission, we're happy to share what ContinualAI has been working on over the last month to accelerate research on continual learning, a necessary step toward strong AI. Passionate about our mission? Join us on Slack if you haven't already, and feel free to donate, even a small amount, if you are able.

Keep in mind, though, that alongside the highlights of our humanity there is also the infamous. Should we instill those into AI as well? A key example is our own implicit biases, which have found their way into AI systems once thought to be agnostic: see this example, or this one, or this one. Whether research-grade or rolled out to thousands of devices at the edge, these faulty systems have the potential to cause harm. Identifying and addressing these shortcomings is a necessary step toward building AI for good, and the world we all want to live in. For more information on these issues, and advice on how to mitigate them, check out this corner of the web, or this one, or even this one, or this one.

[Image] From an article on how quickly a publicly trained AI can go wrong.
(We all remember the Microsoft Tay incident...)

A Few Recent Announcements

 
  • A HUGE thank you to everyone who was a part of our first CVPR 2020 workshop, Continual Learning in Computer Vision. If you missed it, or want to revisit the great discussions, you can find the entire workshop on our YouTube channel.

  • Speaking of the workshop, a huge thank you to the 79 registered teams who participated in the CLVision challenge. We thank our co-sponsors Intel, Element AI, and NVIDIA, and congratulate the overall winners, Zheda Mai, Hyunwoo Kim, Jihwan Jeong, and Scott Sanner, for their entry Batch-level Experience Replay for Continual Learning!

  • Our last online meetup, the 4th ContinualAI Meetup on "Continual Learning with Sequential Streaming Data", was a success! Click the link for the full talk and for links to all of the past meetups. And don't forget to read the discussions in our forum!

  • And speaking of meetups, join us this Friday at 5 PM CEST for the next one, "Continual Learning: in the Cloud, at the Edge or Both?", where we will explore where continual learning should take place. Sign up, save the date, and find the Google Meet link here.

  • The ContinualAI Wiki has received some TLC over the last couple of weeks, and we're working hard to push more changes. We're also actively looking for members to help out. If you would like to pitch in or contribute a short section, please reach out on Slack! A big thank you to Andrea for heading the project's next phase.

  • Reminder: every Friday, join us for our reading group! Visit the reading-group Slack channel for updates, and see the past papers here.
Not on our mailing list? Join now!

ContinualAI Sponsored Programs

 

Top paper pick:

A paper we think you should read, as suggested by our community:

Lifelong Machine Learning With Deep Streaming Linear Discriminant Analysis (Hayes et al. 2020). 
When an agent acquires new information, ideally it would immediately be capable of using that information to understand its environment. This is not possible using conventional deep neural networks, which suffer from catastrophic forgetting when they are incrementally updated, with new knowledge overwriting established representations. A variety of approaches have been developed that attempt to mitigate catastrophic forgetting in the incremental batch learning scenario, where a model learns from a series of large collections of labeled samples. However, in this setting, inference is only possible after a batch has been accumulated, which prohibits many applications. An alternative paradigm is online learning in a single pass through the training dataset on a resource constrained budget, which is known as streaming learning. Streaming learning has been much less studied in the deep learning community. In streaming learning, an agent learns instances one-by-one and can be tested at any time, rather than only after learning a large batch. Here, we revisit streaming linear discriminant analysis, which has been widely used in the data mining research community. By combining streaming linear discriminant analysis with deep learning, we are able to outperform both incremental batch learning and streaming learning algorithms on both ImageNet ILSVRC-2012 and CORe50, a dataset that involves learning to classify from temporally ordered samples.
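
To make the streaming update concrete, here is a minimal NumPy sketch of the core idea: per-class running means plus a shared running covariance, updated one example at a time, so the model can be queried at any point. This is an illustrative simplification, not the authors' reference implementation; it assumes each feature vector z comes from a frozen pretrained backbone, and the class name and shrinkage parameter are our own choices.

import numpy as np

class StreamingLDA:
    """Sketch of streaming LDA over fixed deep features:
    per-class running means and a shared running covariance."""

    def __init__(self, feature_dim, num_classes, shrinkage=1e-4):
        self.means = np.zeros((num_classes, feature_dim))
        self.counts = np.zeros(num_classes)
        self.cov = np.zeros((feature_dim, feature_dim))
        self.total = 0  # examples seen so far
        self.shrinkage = shrinkage

    def fit_one(self, z, y):
        """Update the statistics with one feature vector z and label y."""
        if self.total > 0:
            # Online update of the shared covariance around the current class mean.
            delta = z - self.means[y]
            self.cov = (self.total * self.cov
                        + (self.total / (self.total + 1.0)) * np.outer(delta, delta)
                        ) / (self.total + 1.0)
        # Running mean for class y.
        self.means[y] = (self.counts[y] * self.means[y] + z) / (self.counts[y] + 1.0)
        self.counts[y] += 1
        self.total += 1

    def predict(self, z):
        """Linear discriminant scores; inference works at any point in the stream."""
        d = self.cov.shape[0]
        # Shrinkage keeps the covariance invertible early in the stream.
        precision = np.linalg.inv((1.0 - self.shrinkage) * self.cov
                                  + self.shrinkage * np.eye(d))
        W = self.means @ precision                    # one weight row per class
        b = -0.5 * np.sum(W * self.means, axis=1)     # per-class bias
        return int(np.argmax(W @ z + b))

# Hypothetical usage: stream features from a frozen backbone one at a time.
# slda = StreamingLDA(feature_dim=512, num_classes=10)
# for z, y in stream: slda.fit_one(z, y)
# slda.predict(z_query)  # test at any time, no batch accumulation needed

Only these lightweight statistics are learned online; the deep feature extractor itself stays fixed, which is what keeps this kind of approach viable on a resource-constrained streaming budget.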

Other Useful Links: 

 

Twitter
Website
Medium
GitHub
YouTube
Copyright © 2020 ContinualAI, All rights reserved.

Our mailing address is:
contact@continualai.org

Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.
 






ContinualAI · Via Caduti della Via Fani, 7 · Bologna, Bo 40121 · Italy
