We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.
Source
LLaMA: Open and Efficient Foundation Language Models
http://arxiv.org/abs/2302.13971
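Since the weights are released to the research community, a converted checkpoint can be loaded with standard tooling. Below is a minimal sketch (not from the paper), assuming the LLaMA weights have already been converted to the Hugging Face `transformers` format; the model path is a placeholder for wherever your converted checkpoint lives.

```python
# Minimal sketch: load a converted LLaMA checkpoint and generate text.
# The path below is hypothetical -- point it at your own converted weights.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "path/to/converted-llama-7b"  # placeholder, not an official repo ID

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Simple greedy generation from a short prompt.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```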