Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
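As an illustration of using the publicly released Flan-T5 checkpoints mentioned above, below is a minimal zero-shot inference sketch. It assumes the Hugging Face transformers library and the hosted google/flan-t5-large checkpoint; the checkpoint name and prompt are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: zero-shot inference with a released Flan-T5 checkpoint.
# Assumes the Hugging Face `transformers` library is installed and that the
# `google/flan-t5-large` checkpoint is used (checkpoint choice is an assumption).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Phrase the task as a natural-language instruction, mirroring the
# instruction-finetuning setup described in the abstract.
prompt = "Answer the following question. What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern extends to few-shot or chain-of-thought prompting by prepending exemplars (or "Let's think step by step"-style cues) to the prompt string; no model changes are required.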
Source
Scaling Instruction-Finetuned Language Models
http://arxiv.org/abs/2210.11416