
Recurrent-VLN-BERT

Code for the Recurrent-VLN-BERT paper: A Recurrent Vision-and-Language BERT for Navigation
Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, Stephen Gould

[Paper & Appendices | GitHub]

Prerequisites

Installation

Install the Matterport3D Simulator. The versions of the packages in our environment are listed here.

Install Pytorch-Transformers. In particular, our experiments use this version (the same as OSCAR).

Data Preparation

Follow the instructions below to prepare the data in the corresponding directories:

Trained Network Weights

R2R Navigation

Please read Peter Anderson's VLN paper for details of the R2R Navigation task.

Our code follows the structure of the EnvDrop codebase.

Reproduce Testing Results

To replicate the performance reported in our paper, load the trained network weights and run validation:

bash run/agent.bash
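For validation, run/agent.bash needs to point --load at the downloaded weights. A minimal sketch of what that invocation might contain is below; only the --load flag and the run/agent.bash script come from this README, while the entry point, snapshot path, and remaining flag values are assumptions for illustration:

```shell
# Hypothetical contents of run/agent.bash for validation.
# Only --load is named in this README; other flags and paths are assumptions.
python r2r_src/train.py \
    --train validlistener \
    --load snap/VLN-BERT/state_dict/best_val_unseen \
    --name VLN-BERT-test
```

Adjust the snapshot path to wherever you saved the trained network weights.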

Training

Navigator

To train the network from scratch, first train a Navigator on the R2R training split:

Modify run/agent.bash: remove the --load argument and set --train listener. Then run

bash run/agent.bash

The trained Navigator will be saved under snap/.
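Concretely, the edit to run/agent.bash amounts to something like the following. Only --load and --train listener are named in this README; the entry point and the --name value are placeholder assumptions:

```shell
# Before (validation): --train validlistener with --load pointing at saved weights.
# After (training from scratch): drop --load and set --train listener.
python r2r_src/train.py \
    --train listener \
    --name VLN-BERT-train   # run name (an assumption); checkpoints land under snap/
```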

Speaker

You also need to train a Speaker for augmented training:

bash run/speak.bash

The trained Speaker will be saved under snap/.

Augmented Navigator

Finally, continue training the Navigator on a mixture of the original and augmented data:

bash run/bt_envdrop.bash

We apply a one-step learning rate decay to 1e-5 when training saturates.
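The one-step decay can be sketched as a tiny scheduler: when validation performance stops improving, the learning rate is dropped once to a fixed value rather than annealed gradually. The sketch below assumes a PyTorch-style optimizer exposing a param_groups list of dicts (torch itself is not required to run it); the saturation check is done manually in practice:

```python
def one_step_decay(optimizer, new_lr=1e-5):
    """Apply a one-step learning-rate decay: set every param group's lr to new_lr.

    `optimizer` is any object with a `param_groups` list of dicts, the same
    shape exposed by torch.optim optimizers.
    """
    for group in optimizer.param_groups:
        group["lr"] = new_lr


class FakeOptimizer:
    """Stand-in with the same param_groups shape as a torch.optim optimizer."""
    def __init__(self, lr):
        self.param_groups = [{"lr": lr}]
```

When training saturates, a single call to one_step_decay(optimizer) switches the run to the lower learning rate for the remainder of training.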

Citation

If you use or discuss our Entity Relationship Graph, please cite our paper:

@article{hong2020language,
  title={Language and Visual Entity Relationship Graph for Agent Navigation},
  author={Hong, Yicong and Rodriguez, Cristian and Qi, Yuankai and Wu, Qi and Gould, Stephen},
  journal={Advances in Neural Information Processing Systems},
  volume={33},
  year={2020}
}