LLaVA: Large Language and Vision Assistant
LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding
cedricvidal authored Mar 27, 2024
1 parent 1b3c4cc commit 5151dd1
Showing 1 changed file with 1 addition and 0 deletions.
README.md (1 addition, 0 deletions)
@@ -274,6 +274,7 @@ The above tables could be better summarized by this wonderful visualization from
- [UltraLM](https://github.com/thunlp/UltraChat) - Large-scale, Informative, and Diverse Multi-round Chat Models.
- [Guanaco](https://github.com/artidoro/qlora) - QLoRA tuned LLaMA
- [ChiMed-GPT](https://github.com/synlp/ChiMed-GPT) - A Chinese medical large language model.
- [LLaVA](https://github.com/haotian-liu/LLaVA) - LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding.
- [BLOOM](https://huggingface.co/bigscience/bloom) - BigScience Large Open-science Open-access Multilingual Language Model [BLOOM-LoRA](https://github.com/linhduongtuan/BLOOM-LORA)
- [BLOOMZ&mT0](https://huggingface.co/bigscience/bloomz) - a family of models capable of following human instructions in dozens of languages zero-shot.
- [Phoenix](https://github.com/FreedomIntelligence/LLMZoo)