Tianjin University
Deep Reinforcement Learning Laboratory
Our lab has several open Ph.D. and Master's positions. If you are interested in our research, please send us your CV (jianye.hao@tju.edu.cn / yanzheng@tju.edu.cn).

The lab continually welcomes outstanding students to visit for exchange study or to join us for a Master's/Ph.D. degree. Students interested in the school/faculty summer camp program are also welcome to contact us by email!
News
Feb 28, 2024 - Two papers accepted by CVPR 2024:
"Generate Subgoal Images before Act: Unlocking the Chain-of-Thought Reasoning in Diffusion Model for Robot Manipulation with Multimodal Prompts","Improving Unsupervised Hierarchical Representation with Reinforcement Learning"
Jan 15, 2024 - Four papers accepted by ICLR 2024:
"Uni-RLHF: Universal Platform and Benchmark Suite for Reinforcement Learning with Diverse Human Feedback","Sample-Efficient Quality-Diversity by Cooperative Coevolution","AlignDiff: Aligning Diverse Human Preferences via Behavior-Customisable Diffusion Model","Rethinking Branching on Exact Combinatorial Optimization Solver: The First Deep Symbolic Discovery Framework"
Dec 9, 2023 - Five papers accepted by AAAI 2024:
"PORTAL: Automatic Curricula Generation for Multiagent Reinforcement Learning","Multiagent Gumbel MuZero: Efficient Planning in Combinatorial Action Spaces","OVD-Explorer: Optimism should not be the Sole Pursuit of Exploration in Noisy Environments","A Transfer Approach Using Graph Neural Networks in Deep Reinforcement Learning","PreRoutGNN for Timing Prediction with Order Preserving Partition: Global Circuit Pre-training, Local Signal Delay Learning and Attentional Cell Modeling"
May 5, 2023 - Three papers accepted by ICML 2023:
"ChiPFormer: Transferable Chip Placement via Offline Decision Transformer","MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL","RACE: Improve Multi-Agent Reinforcement Learning with Representation Asymmetry and Collaborative Evolution"
Recent Research
What About Inputting Policy in Value Function: Policy Representation and Policy-extended Value Function Approximator
2023-10-21: We study the Policy-extended Value Function Approximator (PeVFA) in Reinforcement Learning (RL), which extends the conventional value function approximator (VFA) to take as input not only the state (and action) but also an explicit policy representation. Such an extension enables a PeVFA to preserve the values of multiple policies at the same time and brings an appealing characteristic, i.e., value generalization among policies.
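To make the idea concrete, below is a minimal PyTorch sketch of a value network that conditions on both the state and an explicit policy representation. The layer sizes and the random-projection surrogate for the policy representation are illustrative assumptions, not the paper's architecture (the paper learns the representation).

```python
import torch
import torch.nn as nn

class PeVFA(nn.Module):
    """Value network conditioned on a state AND an explicit policy representation."""
    def __init__(self, state_dim, policy_repr_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + policy_repr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, policy_repr):
        # Conventional VFA: V(s). PeVFA: V(s, chi(pi)), where chi(pi) represents
        # the policy being evaluated, so one network can preserve the values of
        # many policies and generalize among them.
        return self.net(torch.cat([state, policy_repr], dim=-1))

def random_projection_repr(policy: nn.Module, repr_dim: int = 64) -> torch.Tensor:
    # Hypothetical surrogate chi(pi): a fixed random projection of the flattened
    # policy parameters (the paper instead learns the policy representation).
    flat = torch.cat([p.detach().flatten() for p in policy.parameters()])
    proj = torch.randn(repr_dim, flat.numel()) / flat.numel() ** 0.5
    return proj @ flat
```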
ERL-Re2: Efficient Evolutionary Reinforcement Learning with Shared State Representation and Individual Policy Representation
2023-09-10: Deep Reinforcement Learning (Deep RL) and Evolutionary Algorithms (EA) are two major paradigms of policy optimization with distinct learning principles, i.e., gradient-based vs. gradient-free. An appealing research direction is to integrate Deep RL and EA to devise new methods that fuse their complementary advantages. However, existing works on combining Deep RL and EA have two common drawbacks: 1) the RL agent and EA agents learn their policies individually, neglecting efficient sharing of useful common knowledge; 2) parameter-level policy optimization gives no guarantee of semantic-level behavior evolution on the EA side. In this paper, we propose Evolutionary Reinforcement Learning with Two-scale State Representation and Policy Representation (ERL-Re^2), a novel solution to the two drawbacks above. The key idea of ERL-Re^2 is two-scale representation: all EA and RL policies share the same nonlinear state representation while maintaining individual linear policy representations. The state representation conveys expressive common features of the environment learned collectively by all the agents; the linear policy representation provides a favorable space for efficient policy optimization, where novel behavior-level crossover and mutation operations can be performed. Moreover, the linear policy representation allows convenient generalization of policy fitness with the help of the Policy-extended Value Function Approximator (PeVFA), further improving the sample efficiency of fitness estimation. Experiments on a range of continuous control tasks show that ERL-Re^2 consistently outperforms advanced baselines and achieves state-of-the-art (SOTA) performance.
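The sketch below illustrates the two-scale representation idea: a shared nonlinear state encoder plus per-individual linear policy heads on which crossover acts. Network sizes and the row-wise crossover operator are assumptions made for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SharedStateEncoder(nn.Module):
    # One nonlinear state representation shared by the RL agent and all EA individuals.
    def __init__(self, state_dim, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU(),
                                 nn.Linear(256, feat_dim), nn.ReLU())

    def forward(self, state):
        return self.net(state)

class LinearPolicy(nn.Module):
    # Each individual keeps only a linear head on the shared features, so
    # variation operators act in a low-dimensional, behavior-aligned space.
    def __init__(self, feat_dim, action_dim):
        super().__init__()
        self.head = nn.Linear(feat_dim, action_dim)

    def forward(self, features):
        return torch.tanh(self.head(features))

def behavior_crossover(parent_a: LinearPolicy, parent_b: LinearPolicy) -> LinearPolicy:
    # Illustrative crossover: each action dimension (one row of the linear head)
    # is inherited wholesale from one of the two parents.
    child = LinearPolicy(parent_a.head.in_features, parent_a.head.out_features)
    with torch.no_grad():
        mask = torch.rand(parent_a.head.out_features) < 0.5
        child.head.weight.copy_(
            torch.where(mask[:, None], parent_a.head.weight, parent_b.head.weight))
        child.head.bias.copy_(torch.where(mask, parent_a.head.bias, parent_b.head.bias))
    return child
```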
Boosting Multiagent Reinforcement Learning via Permutation Invariant and Permutation Equivariant Networks
2023-09-09: The state space in Multiagent Reinforcement Learning (MARL) grows exponentially with the number of agents. This curse of dimensionality results in poor scalability and low sample efficiency and has inhibited MARL for decades. To break this curse, we propose a unified agent permutation framework that exploits the permutation invariance (PI) and permutation equivariance (PE) inductive biases to reduce the multiagent state space. Our insight is that permuting the order of entities in the factored multiagent state space does not change the information.
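A minimal sketch of permutation-invariant and permutation-equivariant modules over per-entity features is shown below; the layer sizes and pooling choice are illustrative assumptions rather than the paper's exact networks.

```python
import torch
import torch.nn as nn

class PIEncoder(nn.Module):
    # Permutation invariant: a shared per-entity embedding followed by sum pooling,
    # so reordering the entities does not change the output at all.
    def __init__(self, entity_dim, out_dim=128):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(entity_dim, 128), nn.ReLU())
        self.rho = nn.Sequential(nn.Linear(128, out_dim), nn.ReLU())

    def forward(self, entities):  # entities: [batch, n_entities, entity_dim]
        return self.rho(self.phi(entities).sum(dim=1))

class PEHead(nn.Module):
    # Permutation equivariant: per-entity outputs are permuted exactly as the
    # inputs are, useful when each entity-related action must follow its entity.
    def __init__(self, entity_dim, out_dim):
        super().__init__()
        self.psi = nn.Linear(entity_dim, out_dim)

    def forward(self, entities):  # -> [batch, n_entities, out_dim]
        return self.psi(entities)
```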
MetaDiffuser: Diffusion Model as Conditional Planner for Offline Meta-RL
2023-08-01: Recently, the diffusion model has emerged as a promising backbone for the sequence modeling paradigm in offline reinforcement learning. However, these works mostly lack the ability to generalize across tasks whose rewards or dynamics change. To tackle this challenge, in this paper we propose a task-oriented conditioned diffusion planner for offline meta-RL (MetaDiffuser), which treats the generalization problem as a conditional trajectory generation task with contextual representation. The key is to learn a context-conditioned diffusion model that can generate task-oriented trajectories for planning across diverse tasks. To enhance the dynamics consistency of the generated trajectories while encouraging them to achieve high returns, we further design a dual-guided module in the sampling process of the diffusion model. The proposed framework is robust to the quality of the warm-start data collected from the test task and is flexible enough to incorporate different task representation methods. Experimental results on MuJoCo benchmarks show that MetaDiffuser outperforms other strong offline meta-RL baselines, demonstrating the outstanding conditional generation ability of the diffusion architecture.
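To illustrate what "dual-guided" sampling can look like, here is a simplified guidance step on a noisy trajectory: the denoiser is conditioned on the task context, and two guidance terms (dynamics consistency and predicted return) nudge the sample. The callables `denoiser`, `dynamics_consistency`, and `return_estimate`, the scale parameters, and the single-step update rule are all hypothetical simplifications, not MetaDiffuser's actual sampler.

```python
import torch

def dual_guided_step(traj, t, context, denoiser, dynamics_consistency, return_estimate,
                     lam_dyn: float = 1.0, lam_ret: float = 1.0):
    # traj: noisy trajectory tensor at diffusion step t; context: task representation.
    traj = traj.detach().requires_grad_(True)
    # Two guidance signals: keep transitions consistent with the dynamics and
    # encourage high predicted return (both assumed differentiable scalars).
    guidance = lam_dyn * dynamics_consistency(traj) + lam_ret * return_estimate(traj)
    grad = torch.autograd.grad(guidance.sum(), traj)[0]
    with torch.no_grad():
        denoised = denoiser(traj, t, context)  # context-conditioned denoising
        return denoised + grad                 # guided sample passed to the next step
```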