In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
Source
Llama 2: Open Foundation and Fine-Tuned Chat Models
http://arxiv.org/abs/2307.09288