Boosting Sample Efficiency and Generalization in Multi-agent Reinforcement Learning via Equivariance
Highlights
- In a nutshell, and between all the fluff, the only key difference between EGNN and E2G is the addition of the extra MLP term in the coordinate update. It takes some careful reading to realize this. I believe the authors fluffed and obscured this a bit so a lazy reviewer wouldn't reject it with the reasoning that it was only a minor incremental improvement to EGNN. However, the paper still addresses a fundamental challenge in a fairly rigorous way. I would have accepted the paper regardless.
Summary
Rotational and reflection symmetries (of the O(n) symmetry group) are fairly common in Reinforcement Learning scenarios. While solutions such as EGNN have been proposed to exploit equivariance, EGNN still suffers from both early exploration bias and sub-optimal generalization. The authors introduce E2GN2, a modified version of EGNN which seeks to eliminate early exploration bias and further improve generalization. The authors show empirically that E2GN2 outperforms SOTA on standard MARL benchmarks, and also provide rigorous theoretical proofs.
Key Contributions
- E2GN2 - Exploration-enhanced Equivariant Graph Neural Networks, which exploits environmental symmetries in the form of either equivariance or invariance.
- The authors show that E2GN2 has no early exploration bias and is equivariant to both rotations and reflections.
- The authors show that E2GN2 generalizes well to test scenarios over baselines, given its equivariance guarantees.
Strengths
- I enjoyed the experiments section. The generalization section in particular was very intuitive.
- Figure 1, showing precisely the problem domain they were targeting, did a great job setting everything up right away.
- The key title and description the authors try to push is "sample efficiency". Their contributions sum up to sample efficiency (i.e. the agent learns a policy for a given scenario and is able to generalize by design to O(n) configurations) and improved generalization. Overall, I think the paper hits those points well.
Weaknesses / Questions
- Some of the terms and terminology could have been repeated, interpreted better, or simply put in a table. It was difficult for a reader, even one versed in reinforcement learning and group theory, to keep track of everything. For this reason, I'm dedicating some of this section to what some of the terms mean in case I re-read this paper. Here's typical EGNN:
- $h_i^l$ = the invariant features (do not transform under group actions, like type)
- $x_i^l$ = the equivariant features (do transform under group actions, like rotations)
- $m_{ij} = \phi_e(h_i^l, h_j^l, \|x_i^l - x_j^l\|^2)$ = a multi-layer perceptron over invariant features and the squared distance of equivariant features (coordinates).
- $x_i^{l+1} = x_i^l + \frac{1}{M-1}\sum_{j \neq i}(x_i^l - x_j^l)\,\phi_x(m_{ij})$ = updates coordinate embeddings in an equivariant manner from the messages $m_{ij}$
- $h_i^{l+1} = \phi_h(h_i^l, \sum_{j \neq i} m_{ij})$ = invariant feature updates, I assume?
- E2GN2 (the contribution by the authors) modifies the coordinate update above to include another MLP, $\phi_{x'}$, applied to $x_i^l$.
- Before: $x_i^{l+1} = x_i^l + \frac{1}{M-1}\sum_{j \neq i}(x_i^l - x_j^l)\,\phi_x(m_{ij})$
- After: $x_i^{l+1} = x_i^l\,\phi_{x'}(m_i) + \frac{1}{M-1}\sum_{j \neq i}(x_i^l - x_j^l)\,\phi_x(m_{ij})$
- In the authors' words, this extra MLP serves to "offset the bias from the previous layer and solve the early exploration problem".
- I read over Section 4.3 a few times and I still don't feel it holds together well. I would have suggested tying it into the rest of the paper, or working the section in better.
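To make the EGNN/E2GN2 difference concrete for future me, here is a minimal numpy sketch of an EGNN-style layer with the E2GN2-style extra MLP term. The dimensions, the stand-in "MLPs" (fixed random linear maps with tanh), and the names `phi_e`, `phi_x`, `phi_x2` are all my own toy assumptions, not the authors' implementation; the point is only to show where the extra term sits and that the layer stays O(n)-equivariant.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(in_dim, out_dim):
    # stand-in "MLP": a fixed random linear map with tanh (toy assumption)
    W = rng.standard_normal((in_dim, out_dim)) / np.sqrt(in_dim)
    return lambda z: np.tanh(z @ W)

M, D, H = 4, 3, 8            # agents, spatial dims, hidden size
phi_e  = mlp(2 * H + 1, H)   # messages from (h_i, h_j, ||x_i - x_j||^2)
phi_x  = mlp(H, 1)           # scalar weight on each (x_i - x_j)
phi_x2 = mlp(H, 1)           # E2GN2's extra scalar gate on x_i itself

def layer(h, x, e2gn2=False):
    x_new = np.zeros_like(x)
    for i in range(M):
        m_i = np.zeros(H)    # aggregated messages for agent i
        agg = np.zeros(D)    # aggregated coordinate update
        for j in range(M):
            if j == i:
                continue
            d2 = np.sum((x[i] - x[j]) ** 2)   # invariant: squared distance
            m_ij = phi_e(np.concatenate([h[i], h[j], [d2]]))
            m_i += m_ij
            agg += (x[i] - x[j]) * phi_x(m_ij)[0]
        agg /= M - 1
        if e2gn2:
            # the extra MLP term: gate x_i so the layer can cancel the
            # bias toward the input position (the early exploration bias)
            x_new[i] = x[i] * phi_x2(m_i)[0] + agg
        else:
            x_new[i] = x[i] + agg
    return x_new

# Equivariance check: rotating/reflecting the inputs transforms the
# outputs identically, since only invariants feed the MLPs.
h = rng.standard_normal((M, H))
x = rng.standard_normal((M, D))
Q, _ = np.linalg.qr(rng.standard_normal((D, D)))  # random orthogonal matrix
print(np.allclose(layer(h, x @ Q.T, e2gn2=True),
                  layer(h, x, e2gn2=True) @ Q.T))  # True
```

Note the gated `x[i] * phi_x2(m_i)` output can shrink toward zero, whereas plain EGNN's `x[i] + agg` always carries the input position through, which is exactly the bias the authors complain about.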
Appendix: Equivariant Graph Neural Networks in 3D
This is a quick framework I wrote while studying equivariant graph neural networks and refreshing myself on group theory. I'm putting it here for future reference, and using 3D since that's what I imagine I may use it for in the future.
- We define the orthogonal group in three dimensions as $O(3) = \{R \in \mathbb{R}^{3 \times 3} : R^\top R = I\}$
- Each element $R \in O(3)$ can be a 3D rotation ($\det R = +1$) or reflection ($\det R = -1$)
- We can define a unit sphere in $\mathbb{R}^3$ as $S^2 = \{x \in \mathbb{R}^3 : \|x\| = 1\}$
- We can define the group action of $O(3)$ on $S^2$ as the following matrix-vector multiplication: $R \cdot x = Rx$
- We can define the transitivity of the action on $S^2$ like the following: for all $x, y \in S^2$ there exists $R \in O(3)$ such that $Rx = y$
- Intuitively, this means for any two points on the sphere, you can always find a 3D rotation or reflection that transforms one point to the other.
- A neural network $f$ such as EGNN/E2GN2 is equivariant under $O(3)$ if $f(Rx) = Rf(x)$ for all $R \in O(3)$
- Thus, the neural network acts as the intertwiner, i.e. the equivariant map between representations
- The really cool thing about this, and the reason I wrote all this preamble, is that you can essentially train a neural network to select the best equivariant map.
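The transitivity and equivariance claims above can be checked numerically. As the explicit O(3) element I use a Householder reflection (my own construction, not from the paper): for unit vectors $a \neq b$, $R = I - 2vv^\top/(v^\top v)$ with $v = a - b$ is orthogonal, has determinant $-1$, and maps $a$ to $b$. The toy equivariant map `f` just scales $x$ by a function of its (invariant) norm.

```python
import numpy as np

rng = np.random.default_rng(1)

def householder(a, b):
    # reflection through the plane bisecting a and b; swaps a and b
    v = (a - b).reshape(3, 1)
    return np.eye(3) - 2 * (v @ v.T) / (v.T @ v)

# two random points on the unit sphere S^2
a = rng.standard_normal(3); a /= np.linalg.norm(a)
b = rng.standard_normal(3); b /= np.linalg.norm(b)
R = householder(a, b)

print(np.allclose(R.T @ R, np.eye(3)))   # R is in O(3)
print(np.isclose(np.linalg.det(R), -1))  # it's a reflection (det = -1)
print(np.allclose(R @ a, b))             # transitivity: R sends a to b

# A simple equivariant map: scale x by a function of its invariant norm.
# Then f(Rx) = R f(x) for every R in O(3).
f = lambda x: np.tanh(np.linalg.norm(x)) * x
x = rng.standard_normal(3)
print(np.allclose(f(R @ x), R @ f(x)))   # equivariance holds
```

All four checks print `True`; the last one is the $f(Rx) = Rf(x)$ condition from the bullet above, with `f` playing the role of the intertwiner.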
Related Work
- Equivariant Graph Neural Networks
- Reinforcement Learning
- Graph Neural Networks