[chore] Add updated citations for pretrain VL paper (facebookresearch#111)

Summary:
Pull Request resolved: https://github.com/fairinternal/mmf-internal/pull/111

Pull Request resolved: https://github.com/fairinternal/pythia-internal/pull/111

Update citations for Pretrain VL paper

Reviewed By: apsdehal

Differential Revision: D21368998

fbshipit-source-id: 2736d673fafc54837ef833151ca29e51f7f88d1e
vedanuj authored and apsdehal committed May 8, 2020
1 parent a9d63ab commit b28b6e5
Showing 3 changed files with 33 additions and 5 deletions.
10 changes: 7 additions & 3 deletions projects/pretrain_vl_right/README.md
@@ -2,10 +2,14 @@

This repository contains the code for the modified implementations of VisualBERT and ViLBERT used in the following paper. Please cite this paper if you are using these models:

* Singh, A., Goswami, V., & Parikh, D. (2019). *Are we pretraining it right? Digging deeper into visio-linguistic pretraining*.

TODO: Update citation bibtex once uploaded to arXiv.
* Singh, A., Goswami, V., & Parikh, D. (2020). *Are we pretraining it right? Digging deeper into visio-linguistic pretraining*. arXiv preprint arXiv:2004.08744. ([arXiv](https://arxiv.org/abs/2004.08744))
```
@article{singh2020we,
title={Are we pretraining it right? Digging deeper into visio-linguistic pretraining},
author={Singh, Amanpreet and Goswami, Vedanuj and Parikh, Devi},
journal={arXiv preprint arXiv:2004.08744},
year={2020}
}
```

## Installation
14 changes: 13 additions & 1 deletion projects/vilbert/README.md
@@ -1,6 +1,6 @@
# ViLBERT

This repository contains the code for the ViLBERT model, originally released under this ([repo](https://github.com/jiasenlu/vilbert_beta)). Please cite the following paper if you are using the ViLBERT model from mmf:
This repository contains the code for the ViLBERT model, originally released under this ([repo](https://github.com/jiasenlu/vilbert_beta)). Please cite the following papers if you are using the ViLBERT model from mmf:

* Lu, J., Batra, D., Parikh, D. and Lee, S., 2019. *Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks.* In Advances in Neural Information Processing Systems (pp. 13-23). ([arXiv](https://arxiv.org/abs/1908.02265))
```
Expand All @@ -13,6 +13,18 @@ This repository contains the code for ViLBERT model, released originally under t
}
```

and

* Singh, A., Goswami, V., & Parikh, D. (2020). *Are we pretraining it right? Digging deeper into visio-linguistic pretraining*. arXiv preprint arXiv:2004.08744. ([arXiv](https://arxiv.org/abs/2004.08744))
```
@article{singh2020we,
title={Are we pretraining it right? Digging deeper into visio-linguistic pretraining},
author={Singh, Amanpreet and Goswami, Vedanuj and Parikh, Devi},
journal={arXiv preprint arXiv:2004.08744},
year={2020}
}
```

## Installation

Clone this repository, and build it with the following command.
14 changes: 13 additions & 1 deletion projects/visual_bert/README.md
@@ -1,6 +1,6 @@
# VisualBERT

This repository contains the code for the PyTorch implementation of the VisualBERT model, originally released under this ([repo](https://github.com/uclanlp/visualbert)). Please cite the following paper if you are using the VisualBERT model from mmf:
This repository contains the code for the PyTorch implementation of the VisualBERT model, originally released under this ([repo](https://github.com/uclanlp/visualbert)). Please cite the following papers if you are using the VisualBERT model from mmf:

* Li, L. H., Yatskar, M., Yin, D., Hsieh, C. J., & Chang, K. W. (2019). *Visualbert: A simple and performant baseline for vision and language*. arXiv preprint arXiv:1908.03557. ([arXiv](https://arxiv.org/abs/1908.03557))
```
Expand All @@ -12,6 +12,18 @@ This repository contains the code for pytorch implementation of VisualBERT model
}
```

and

* Singh, A., Goswami, V., & Parikh, D. (2020). *Are we pretraining it right? Digging deeper into visio-linguistic pretraining*. arXiv preprint arXiv:2004.08744. ([arXiv](https://arxiv.org/abs/2004.08744))
```
@article{singh2020we,
title={Are we pretraining it right? Digging deeper into visio-linguistic pretraining},
author={Singh, Amanpreet and Goswami, Vedanuj and Parikh, Devi},
journal={arXiv preprint arXiv:2004.08744},
year={2020}
}
```

## Installation

Clone this repository, and build it with the following command.
