
# Finetuning Airbert on Downstream VLN Tasks
This repository contains the code for finetuning [Airbert](https://github.com/airbert-vln/airbert) on downstream VLN tasks, including R2R and REVERIE. The code is based on [Recurrent-VLN-BERT](https://github.com/YicongHong/Recurrent-VLN-BERT); we thank [Yicong Hong](https://github.com/YicongHong) for releasing it.
## Prerequisites
1. Follow instructions in [Recurrent-VLN-BERT](https://github.com/YicongHong/Recurrent-VLN-BERT#prerequisites) to setup the environment and download data.
2. Download the [trained models]().
## REVERIE
### Inference
To replicate the performance reported in our paper, load the trained models and run validation:
```bash
bash scripts/valid_reverie_agent.sh 0
```
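The positional argument (`0` above) is assumed to be the GPU index, a common convention in launch scripts of this kind; check `scripts/valid_reverie_agent.sh` to confirm. A minimal sketch of how such a wrapper typically forwards the argument (the variable names here are illustrative, not taken from the repository):

```shell
#!/usr/bin/env bash
# Hypothetical sketch: pin the run to one GPU via the first positional argument.
# Defaults to GPU 0 when no argument is given.
GPU_ID="${1:-0}"
export CUDA_VISIBLE_DEVICES="$GPU_ID"
echo "Running validation on GPU $GPU_ID"
# ...the actual script would invoke the Python entry point here...
```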
### Training
To train the model, simply run:
```bash
bash scripts/train_reverie_agent.sh 0
```
## R2R
### Inference
To replicate the performance reported in our paper, load the trained models and run validation:
```bash
bash scripts/valid_r2r_agent.sh 0
```
### Training
To train the model, simply run:
```bash
bash scripts/train_r2r_agent.sh 0
```
## Citation
Please cite our paper if you find this repository useful:
```bibtex
@inproceedings{guhur2021airbert,
  title={{Airbert: In-domain Pretraining for Vision-and-Language Navigation}},
  author={Pierre-Louis Guhur and Makarand Tapaswi and Shizhe Chen and Ivan Laptev and Cordelia Schmid},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021},
}
```