Thesis title: Latent Alignment Techniques Enable Modular Policies in the Context of Reinforcement Learning
Visual Reinforcement Learning is a popular and powerful framework that builds directly on the breakthroughs of Deep Learning. It is known that variations in the input domain (e.g., different scenery colors due to seasonal changes) or in the task domain (e.g., altering the target speed of a car) can disrupt agent performance, necessitating new training for each variation. Recent advancements in representation learning have demonstrated that components from different neural networks can be combined to create new models in a zero-shot fashion. In this dissertation, we build upon such advancements and adapt them to the Visual Reinforcement Learning setting, so that it becomes possible to combine agent components into new agents capable of effectively handling novel visual-task pairs not encountered during training, by enabling communication between encoders and controllers drawn from models trained under different variations.
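To make the stitching idea concrete, the following is a minimal sketch, not the thesis implementation, of combining an encoder from one agent with a controller from another through a shared relative latent space built from anchor observations. All module and variable names (Encoder, Controller, to_relative, anchors) are hypothetical; we assume both models had access to the same anchor set during training, so that expressing each encoder's latents as cosine similarities to its own anchor embeddings yields a space the other model's controller can consume.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps raw observations to an absolute latent space."""
    def __init__(self, obs_dim: int, latent_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                 nn.Linear(128, latent_dim))
    def forward(self, obs):
        return self.net(obs)

class Controller(nn.Module):
    """Maps relative latents (anchor similarities) to actions."""
    def __init__(self, num_anchors: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(num_anchors, 128), nn.ReLU(),
                                 nn.Linear(128, action_dim))
    def forward(self, rel_latent):
        return self.net(rel_latent)

def to_relative(latent, anchor_latents):
    """Cosine similarity of each latent against each anchor embedding."""
    latent = F.normalize(latent, dim=-1)
    anchor_latents = F.normalize(anchor_latents, dim=-1)
    return latent @ anchor_latents.T

# Two independently trained agents: encoder_a saw one visual variation,
# controller_b was trained under another variation (random weights below
# stand in for pretrained parameters).
obs_dim, latent_dim, action_dim, num_anchors = 64, 32, 4, 16
encoder_a = Encoder(obs_dim, latent_dim)
controller_b = Controller(num_anchors, action_dim)

anchors = torch.randn(num_anchors, obs_dim)  # shared anchor observations
obs = torch.randn(1, obs_dim)                # a new observation

# Stitch: encode with model A, express the latent relative to the shared
# anchors, then act with model B's controller -- no retraining involved.
rel = to_relative(encoder_a(obs), encoder_a(anchors))
action = controller_b(rel)
```

Because the relative projection discards the encoder-specific coordinate system, the controller only ever sees anchor similarities, which is what allows the two halves to be swapped without retraining.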
Our findings highlight the potential for model reuse, significantly reducing the need for retraining and, consequently, the time and computational resources required.