
Multi-Task Reinforcement Learning with Soft Modularization

Published in: NeurIPS

Ruihan Yang¹, Huazhe Xu², Yi Wu³,⁴, Xiaolong Wang¹

¹UC San Diego  ²UC Berkeley  ³IIIS, Tsinghua  ⁴Shanghai Qi Zhi Institute

Abstract:

Multi-task learning is a very challenging problem in reinforcement learning. While training multiple tasks jointly allows the policies to share parameters across different tasks, the optimization problem becomes non-trivial: it remains unclear which parameters in the network should be reused across tasks, and how the gradients from different tasks may interfere with each other. Thus, instead of naively sharing parameters across tasks, we introduce an explicit modularization technique on the policy representation to alleviate this optimization issue. Given a base policy network, we design a routing network which estimates different routing strategies to reconfigure the base network for each task. Instead of directly selecting routes for each task, our task-specific policy uses a method called soft modularization to softly combine all the possible routes, which makes it suitable for sequential tasks. We experiment with various robotic manipulation tasks in simulation and show that our method improves both sample efficiency and performance over strong baselines by a large margin. Our project page with code is at https://rchalyang.github.io/SoftModule/.
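To make the routing idea concrete, the sketch below is an illustrative PyTorch-style implementation, not the authors' released code; the module count, layer sizes, and the einsum-based mixing are assumptions. It builds a base network of L layers with M parallel modules each, plus a routing head per layer that turns a task-conditioned embedding into softmax weights for softly combining the module outputs.

# Minimal sketch of soft modularization (illustrative only; sizes and
# architecture details are placeholders, not the paper's exact design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftModularPolicy(nn.Module):
    def __init__(self, obs_dim, task_dim, act_dim, n_layers=2, n_modules=4, hidden=128):
        super().__init__()
        self.n_layers, self.n_modules = n_layers, n_modules
        self.obs_enc = nn.Linear(obs_dim, hidden)
        self.task_enc = nn.Linear(task_dim, hidden)
        # Base network: n_layers layers, each with n_modules parallel modules.
        self.modules_ = nn.ModuleList([
            nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(n_modules)])
            for _ in range(n_layers)
        ])
        # Routing network: one head per layer transition, emitting M x M connection logits.
        self.route = nn.ModuleList([
            nn.Linear(hidden, n_modules * n_modules) for _ in range(n_layers - 1)
        ])
        self.head = nn.Linear(hidden, act_dim)

    def forward(self, obs, task_onehot):
        h = F.relu(self.obs_enc(obs))
        z = F.relu(self.task_enc(task_onehot)) * h   # task-conditioned routing input
        # First layer: every module sees the shared observation embedding.
        outs = [F.relu(m(h)) for m in self.modules_[0]]
        for l in range(1, self.n_layers):
            # Soft routing weights: softmax over source modules for each target module.
            logits = self.route[l - 1](z).view(-1, self.n_modules, self.n_modules)
            p = F.softmax(logits, dim=-1)                      # (B, M_target, M_source)
            stacked = torch.stack(outs, dim=1)                 # (B, M_source, hidden)
            mixed = torch.einsum('bts,bsh->bth', p, stacked)   # softly combine all routes
            outs = [F.relu(self.modules_[l][j](mixed[:, j])) for j in range(self.n_modules)]
        return self.head(torch.stack(outs, dim=1).mean(dim=1))

# Example usage (dimensions are placeholders):
policy = SoftModularPolicy(obs_dim=39, task_dim=10, act_dim=4)
actions = policy(torch.randn(8, 39), F.one_hot(torch.randint(10, (8,)), num_classes=10).float())

Because the combination is a weighted sum over all possible routes rather than a discrete selection, the routing stays differentiable, so the base network and the routing network can be trained jointly end to end.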
