Instruction tuning large language models (LLMs) using machine-generated instruction-following data has improved zero-shot capabilities on new tasks, but the idea is less explored in the multimodal field. In this paper, we present the first attempt to use language-only GPT-4 to generate multimodal language-image instruction-following data. By instruction tuning on such generated data, we introduce LLaVA: Large Language and Vision Assistant, an end-to-end trained large multimodal model that connects a vision encoder and LLM for general-purpose visual and language understanding. Our early experiments show that LLaVA demonstrates impressive multimodal chat abilities, sometimes exhibiting the behaviors of multimodal GPT-4 on unseen images/instructions, and yields an 85.1% relative score compared with GPT-4 on a synthetic multimodal instruction-following dataset. When fine-tuned on Science QA, the synergy of LLaVA and GPT-4 achieves a new state-of-the-art accuracy of 92.53%. We make the GPT-4-generated visual instruction tuning data, our model, and code base publicly available.
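The abstract describes connecting a frozen vision encoder to an LLM; in the LLaVA design this is done by projecting image features into the language model's word-embedding space. The sketch below is illustrative only, not the released LLaVA code: the dimensions, module name, and use of PyTorch are assumptions for demonstration. A trainable linear layer maps patch features from the vision encoder into the LLM embedding space, and the resulting visual tokens are concatenated with the embedded instruction tokens before being fed to the LLM.

# Minimal sketch (assumed dimensions, not the released LLaVA implementation):
# project vision-encoder features into the LLM's token-embedding space.
import torch
import torch.nn as nn

class VisionLanguageConnector(nn.Module):
    """Maps vision-encoder patch features into the LLM embedding space."""
    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # A single trainable linear projection connects the two modalities.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, vision_dim) from a frozen encoder
        return self.proj(patch_features)  # (batch, num_patches, llm_dim)

# Usage: visual tokens are prepended to the text-token embeddings for the LLM.
connector = VisionLanguageConnector()
dummy_patches = torch.randn(1, 256, 1024)    # e.g. features from a ViT-style encoder
visual_tokens = connector(dummy_patches)     # (1, 256, 4096)
text_embeddings = torch.randn(1, 32, 4096)   # embedded instruction tokens
llm_input = torch.cat([visual_tokens, text_embeddings], dim=1)
print(llm_input.shape)  # torch.Size([1, 288, 4096])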
Source
Visual Instruction Tuning
http://arxiv.org/abs/2304.08485