
Quad-Swarm-RL Integrations


Clone into your home directory
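A minimal clone step might look like the following; the repository URL is an assumption based on the public quad-swarm-rl project and may differ for your fork.

```shell
# Clone quad-swarm-rl into the home directory
# (repository URL is an assumption -- substitute your fork if needed)
cd ~
git clone https://github.com/Zhehui-Huang/quad-swarm-rl.git
```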

Install dependencies in your conda environment

cd ~/quad-swarm-rl
pip install -e .

Note: if you encounter an error installing bezier, run:

BEZIER_NO_EXTENSION=true pip install bezier==2020.5.19
pip install -e .

Running Experiments

The environments are run from the quad_swarm_rl folder inside the cloned quad-swarm-rl directory, rather than from sample-factory directly.

Experiments can be run with the train script and viewed with the enjoy script. If you are running custom experiments, it is recommended to start from the quad_multi_mix_baseline runner script and make any modifications as needed. See the sf2_single_drone and sf2_multi_drone runner scripts for examples.
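As a rough sketch, launching training and then replaying a checkpoint might look like the following; the module paths (swarm_rl.train, swarm_rl.enjoy) and the experiment name are assumptions and may differ in your checkout.

```shell
# Train a multi-drone policy (module path and flags are assumptions)
python -m swarm_rl.train --env=quadrotor_multi --train_dir=./train_dir \
    --experiment=quad_mix_test

# Replay the trained policy with rendering (assumed module path)
python -m swarm_rl.enjoy --env=quadrotor_multi --train_dir=./train_dir \
    --experiment=quad_mix_test
```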

The quadrotor environments have many unique parameters. Some relevant parameters for rendering results include --quads_view_mode, which can be set to local or global for viewing multi-drone experiments, and --quads_mode, which determines which scenario(s) to train on, with mix using all scenarios.
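For example, a hypothetical enjoy invocation combining these rendering flags might look like this (the swarm_rl.enjoy module path is an assumption):

```shell
# View a multi-drone experiment with a global camera,
# using the mix scenario setting (all scenarios)
python -m swarm_rl.enjoy --env=quadrotor_multi \
    --quads_view_mode=global --quads_mode=mix
```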



  1. Comparison using a single drone between normalized (input and return normalization) and un-normalized experiments. Normalization helped the drones learn in around half the number of steps.
  2. Experiments with 8 drones in scenarios with and without obstacles. All experiments used input and return normalization. Research and development are still being done on multi-drone scenarios to reduce the number of collisions.


| Description | HuggingFace Hub Models | Evaluation Metrics |
| --- | --- | --- |
| Single drone with normalization | | 0.03 ± 1.86 |
| Multi drone without obstacles | | -0.40 ± 4.47 |
| Multi drone with obstacles | | -2.84 ± 3.71 |


Single drone with normalization flying between dynamic goals.