As AI systems become more capable, we would like to enlist their help to supervise other AIs. We experiment with methods for training a harmless AI assistant through self-improvement, without any human labels identifying harmful outputs. The only human oversight is provided through a list of rules or principles, and so we refer to the method as 'Constitutional AI'. The process involves both a supervised learning and a reinforcement learning phase. In the supervised phase we sample from an initial model, then generate self-critiques and revisions, and then finetune the original model on revised responses. In the RL phase, we sample from the finetuned model, use a model to evaluate which of the two samples is better, and then train a preference model from this dataset of AI preferences. We then train with RL using the preference model as the reward signal, i.e. we use 'RL from AI Feedback' (RLAIF). As a result we are able to train a harmless but non-evasive AI assistant that engages with harmful queries by explaining its objections to them. Both the SL and RL methods can leverage chain-of-thought style reasoning to improve the human-judged performance and transparency of AI decision making. These methods make it possible to control AI behavior more precisely and with far fewer human labels.
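The two phases described above can be expressed as simple data-generation loops. Below is a minimal sketch, not the paper's implementation: the `generate` function is a stubbed stand-in for sampling from a language model, and the constitutional principles, prompt templates, and helper names (`critique_and_revise`, `ai_preference_pair`) are illustrative assumptions.

```python
import random

# Illustrative constitutional principles (assumed, not the paper's exact wording).
CONSTITUTION = [
    "Please choose the response that is least harmful.",
    "Please choose the response that avoids assisting with illegal activity.",
]

def generate(prompt: str) -> str:
    """Placeholder for sampling from a language model (stubbed so the sketch runs)."""
    return f"<model output for: {prompt[:40]}...>"

# --- Supervised (SL) phase: sample, self-critique, revise; revised responses
# --- are then used to finetune the initial model.
def critique_and_revise(prompt: str, num_rounds: int = 1) -> str:
    response = generate(prompt)
    for _ in range(num_rounds):
        principle = random.choice(CONSTITUTION)
        critique = generate(
            f"Critique the assistant's response according to this principle.\n"
            f"Principle: {principle}\nPrompt: {prompt}\nResponse: {response}"
        )
        response = generate(
            f"Revise the response to address the critique.\n"
            f"Critique: {critique}\nOriginal response: {response}"
        )
    return response

# --- RL phase (RLAIF): the AI compares two samples from the finetuned model,
# --- producing preference pairs used to train the preference model.
def ai_preference_pair(prompt: str) -> dict:
    a, b = generate(prompt), generate(prompt)
    principle = random.choice(CONSTITUTION)
    verdict = generate(
        f"{principle}\nPrompt: {prompt}\n(A) {a}\n(B) {b}\nAnswer (A) or (B):"
    )
    chosen, rejected = (a, b) if "A" in verdict else (b, a)
    return {"prompt": prompt, "chosen": chosen, "rejected": rejected}

if __name__ == "__main__":
    print(critique_and_revise("How do I pick a lock?"))
    print(ai_preference_pair("How do I pick a lock?"))
```

In the full method, the revised responses from the SL phase finetune the initial model, and the AI-labeled preference pairs from the RL phase train the preference model whose score serves as the reward signal for RL.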
Source
Constitutional AI: Harmlessness from AI Feedback
http://arxiv.org/abs/2212.08073