Nature. 2025 Sep;645(8081):633-638. doi: 10.1038/s41586-025-09422-z. Epub 2025 Sep 17.

DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning

Daya Guo  1 Dejian Yang  1 Haowei Zhang  1 Junxiao Song  1 Peiyi Wang  1 Qihao Zhu  1 Runxin Xu  1 Ruoyu Zhang  1 Shirong Ma  1 Xiao Bi  1 Xiaokang Zhang  1 Xingkai Yu  1 Yu Wu  1 Z F Wu  1 Zhibin Gou  1 Zhihong Shao  1 Zhuoshu Li  1 Ziyi Gao  1 Aixin Liu  1 Bing Xue  1 Bingxuan Wang  1 Bochao Wu  1 Bei Feng  1 Chengda Lu  1 Chenggang Zhao  1 Chengqi Deng  1 Chong Ruan  1 Damai Dai  1 Deli Chen  1 Dongjie Ji  1 Erhang Li  1 Fangyun Lin  1 Fucong Dai  1 Fuli Luo  1   2 Guangbo Hao  1 Guanting Chen  1 Guowei Li  1 H Zhang  1 Hanwei Xu  1 Honghui Ding  1 Huazuo Gao  1 Hui Qu  1 Hui Li  1 Jianzhong Guo  1 Jiashi Li  1 Jingchang Chen  1 Jingyang Yuan  1 Jinhao Tu  1   3 Junjie Qiu  1 Junlong Li  1 J L Cai  1 Jiaqi Ni  1 Jian Liang  1 Jin Chen  1 Kai Dong  1 Kai Hu  1   4 Kaichao You  1 Kaige Gao  1 Kang Guan  1 Kexin Huang  1   5 Kuai Yu  1 Lean Wang  1 Lecong Zhang  1 Liang Zhao  1 Litong Wang  1 Liyue Zhang  1 Lei Xu  1 Leyi Xia  1 Mingchuan Zhang  1 Minghua Zhang  1 Minghui Tang  1 Mingxu Zhou  1 Meng Li  1 Miaojun Wang  1 Mingming Li  1 Ning Tian  1 Panpan Huang  1 Peng Zhang  1 Qiancheng Wang  1 Qinyu Chen  1 Qiushi Du  1 Ruiqi Ge  1 Ruisong Zhang  1 Ruizhe Pan  1 Runji Wang  1 R J Chen  1 R L Jin  1 Ruyi Chen  1 Shanghao Lu  1 Shangyan Zhou  1 Shanhuang Chen  1 Shengfeng Ye  1 Shiyu Wang  1 Shuiping Yu  1 Shunfeng Zhou  1 Shuting Pan  1 S S Li  1 Shuang Zhou  1 Shaoqing Wu  1 Tao Yun  1 Tian Pei  1 Tianyu Sun  1 T Wang  1 Wangding Zeng  1 Wen Liu  1 Wenfeng Liang  6 Wenjun Gao  1 Wenqin Yu  1   5 Wentao Zhang  1 W L Xiao  1 Wei An  1 Xiaodong Liu  1 Xiaohan Wang  1 Xiaokang Chen  1 Xiaotao Nie  1 Xin Cheng  1 Xin Liu  1 Xin Xie  1 Xingchao Liu  1 Xinyu Yang  1 Xinyuan Li  1   5 Xuecheng Su  1 Xuheng Lin  1 X Q Li  1 Xiangyue Jin  1 Xiaojin Shen  1 Xiaosha Chen  1 Xiaowen Sun  1 Xiaoxiang Wang  1 Xinnan Song  1 Xinyi Zhou  1 Xianzu Wang  1 Xinxia Shan  1 Y K Li  1 Y Q Wang  1 Y X Wei  1 Yang Zhang  1 Yanhong Xu  1 Yao Li  1 Yao Zhao  1 Yaofeng Sun  1 Yaohui Wang  1 Yi Yu  1 Yichao Zhang  1 Yifan Shi  1 Yiliang Xiong  1 Ying He  1 Yishi Piao  1 Yisong Wang  1 Yixuan Tan  1 Yiyang Ma  1 Yiyuan Liu  1 Yongqiang Guo  1 Yuan Ou  1 Yuduan Wang  1 Yue Gong  1   5 Yuheng Zou  1 Yujia He  1   5 Yunfan Xiong  1 Yuxiang Luo  1 Yuxiang You  1 Yuxuan Liu  1 Yuyang Zhou  1 Y X Zhu  1 Yanping Huang  1 Yaohui Li  1 Yi Zheng  1 Yuchen Zhu  1 Yunxian Ma  1 Ying Tang  1 Yukun Zha  1 Yuting Yan  1 Z Z Ren  1 Zehui Ren  1 Zhangli Sha  1 Zhe Fu  1 Zhean Xu  1 Zhenda Xie  1 Zhengyan Zhang  1 Zhewen Hao  1 Zhicheng Ma  1 Zhigang Yan  1 Zhiyu Wu  1 Zihui Gu  1 Zijia Zhu  1 Zijun Liu  1   7 Zilin Li  1 Ziwei Xie  1 Ziyang Song  1   8 Zizheng Pan  1 Zhen Huang  1 Zhipeng Xu  1 Zhongyu Zhang  1 Zhen Zhang  1

Abstract

General reasoning represents a long-standing and formidable challenge in artificial intelligence (AI). Recent breakthroughs, exemplified by large language models (LLMs)1,2 and chain-of-thought (CoT) prompting3, have achieved considerable success on foundational reasoning tasks. However, this success depends heavily on extensive human-annotated demonstrations, and model capabilities remain insufficient for more complex problems. Here we show that the reasoning abilities of LLMs can be incentivized through pure reinforcement learning (RL), obviating the need for human-labelled reasoning trajectories. The proposed RL framework facilitates the emergent development of advanced reasoning patterns, such as self-reflection, verification and dynamic strategy adaptation. Consequently, the trained model achieves superior performance on verifiable tasks such as mathematics, coding competitions and STEM fields, surpassing its counterparts trained through conventional supervised learning on human demonstrations. Moreover, the emergent reasoning patterns exhibited by these large-scale models can be systematically used to guide and enhance the reasoning capabilities of smaller models.
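
The key ingredient behind removing human-labelled reasoning trajectories is that the training tasks are verifiable, so a simple rule can assign the reward. The following minimal sketch shows what such a rule-based reward might look like for a mathematics problem; the answer-extraction convention and the binary 0/1 reward are illustrative assumptions, not the authors' implementation.

    import re

    def extract_final_answer(response: str) -> str | None:
        """Pull the final \\boxed{...} answer out of a model response
        (assumes the prompt asks the model to box its final answer)."""
        matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
        return matches[-1].strip() if matches else None

    def verifiable_reward(response: str, ground_truth: str) -> float:
        """Rule-based reward: 1.0 if the extracted answer matches the
        reference exactly, else 0.0. No learned reward model and no
        human-labelled reasoning trajectory is needed."""
        answer = extract_final_answer(response)
        return 1.0 if answer is not None and answer == ground_truth.strip() else 0.0

    # The reward depends only on the final answer, not on the chain of thought.
    print(verifiable_reward("... so the result is \\boxed{42}", "42"))  # 1.0
    print(verifiable_reward("... hence \\boxed{41}", "42"))             # 0.0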


Conflict of interest statement

Competing interests: The authors declare no competing interests and will not file patents related to the content of this manuscript.

Figures

Fig. 1
Fig. 1. Accuracy and output length of DeepSeek-R1-Zero throughout the training process.
a, AIME accuracy of DeepSeek-R1-Zero during training. Each AIME problem takes a mathematical question as input and requires a single numerical answer as output, as illustrated in Extended Data Table 1. pass@1 and cons@16 are described in Supplementary Information, section 4.1. The baseline is the average score achieved by human participants in the AIME competition. b, The average response length of DeepSeek-R1-Zero on the training set during the RL process. DeepSeek-R1-Zero naturally learns to solve reasoning tasks with more thinking time. Note that a training step refers to a single policy update operation.
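
The precise definitions of pass@1 and cons@16 are given in the Supplementary Information and are not reproduced here. The sketch below reflects one common reading of such metrics (average single-sample accuracy and 16-sample majority vote) and is an assumption about the standard definitions rather than a transcription of the paper's formulas.

    from collections import Counter

    def pass_at_1(sampled_answers: list[str], ground_truth: str) -> float:
        """pass@1 estimated from k samples: the fraction of independently
        sampled final answers that match the reference answer."""
        return sum(a == ground_truth for a in sampled_answers) / len(sampled_answers)

    def cons_at_k(sampled_answers: list[str], ground_truth: str) -> float:
        """cons@k (self-consistency): majority-vote answer over the k samples,
        scored 1.0 if it matches the reference and 0.0 otherwise."""
        majority_answer, _ = Counter(sampled_answers).most_common(1)[0]
        return 1.0 if majority_answer == ground_truth else 0.0

    # 16 sampled answers for one AIME problem (hypothetical values).
    samples = ["204"] * 10 + ["210"] * 4 + ["96"] * 2
    print(pass_at_1(samples, "204"))  # 0.625
    print(cons_at_k(samples, "204"))  # 1.0: the majority vote is correct
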
Fig. 2
Fig. 2. The multistage pipeline of DeepSeek-R1.
A detailed background on DeepSeek-V3 Base and DeepSeek-V3 is provided in Supplementary Information, section 1.1. The models DeepSeek-R1 Dev1, Dev2 and Dev3 represent intermediate checkpoints in this pipeline.
Extended Data Fig. 1
Extended Data Fig. 1. Evolution of reasoning-related linguistic features in model outputs across training steps.
a, Frequency of representative reflective terms in model-generated outputs throughout the training process. Reflective terms—including ‘wait’, ‘mistake’, ‘however’, ‘but’, ‘retry’, ‘error’, ‘verify’, ‘wrong’, ‘evaluate’ and ‘check’—were identified and curated by a panel of three human experts. Each expert independently proposed a set of words indicative of reflective reasoning, which were subsequently consolidated through consensus into a final vocabulary list. b, Frequency of the term ‘wait’ in model outputs over the course of training. This term was virtually absent during the initial training stages, appeared sporadically between steps 4,000 and 7,000, and exhibited a marked increase in frequency after step 8,000. These trends suggest the emergence of temporal reasoning or self-monitoring behaviour as training progresses.
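
A measurement of this kind can be approximated by tallying the curated reflective terms in outputs sampled at each training step. The sketch below is an illustrative tally under the assumption of simple case-insensitive whole-word matching; it is not the authors' analysis script.

    import re
    from collections import Counter

    # Reflective vocabulary curated by the three-expert panel, as listed in the caption.
    REFLECTIVE_TERMS = {"wait", "mistake", "however", "but", "retry",
                        "error", "verify", "wrong", "evaluate", "check"}

    def reflective_term_counts(outputs: list[str]) -> Counter:
        """Count case-insensitive whole-word occurrences of each reflective term
        across a batch of outputs sampled at one training step."""
        counts = Counter()
        for text in outputs:
            for word in re.findall(r"[a-z]+", text.lower()):
                if word in REFLECTIVE_TERMS:
                    counts[word] += 1
        return counts

    # Hypothetical outputs sampled at a late training step.
    batch = ["Wait, let me verify that step again.",
             "The answer is 7. But check the edge case first."]
    print(reflective_term_counts(batch))
    # Counter({'wait': 1, 'verify': 1, 'but': 1, 'check': 1})
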
Extended Data Fig. 2
Extended Data Fig. 2. Illustration of the proposed GRPO for RL-based training.
In the proposed framework, an LLM is used as a policy model to generate responses {o1, o2,…, oG} conditioned on a given query q. Each response within the group is evaluated by a reward model—either learned (model-based) or manually specified (rule-based)—to assign a scalar reward signal. Subsequently, GRPO computes the relative advantage of each group member based on the assigned rewards. Rather than relying on an explicit value function, as in PPO, GRPO directly estimates advantages from the intra-group reward distribution. The policy parameters are then updated to maximize the expected reward while simultaneously minimizing divergence from a reference policy, typically quantified through the KL divergence. By eliminating the need for a separate value network, GRPO offers a simplified yet effective alternative to traditional actor-critic methods such as PPO.
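
The caption states that GRPO estimates advantages from the intra-group reward distribution rather than from a learned value function. The sketch below shows a group-standardized advantage together with a clipped policy-gradient loss and a KL penalty towards the reference policy; the specific normalization, clipping and KL weighting are assumptions about a typical GRPO-style implementation, not the authors' code.

    import torch

    def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """rewards: shape (G,), scalar rewards for the G responses sampled for one
        query. Each response's advantage is its reward standardized within the
        group, so no separate value network is required."""
        return (rewards - rewards.mean()) / (rewards.std() + eps)

    def grpo_loss(logp_new, logp_old, logp_ref, advantages, clip_eps=0.2, kl_coef=0.04):
        """Clipped policy-gradient loss plus a KL penalty towards the reference
        policy. All log-probability tensors have shape (G,): sequence log-probs of
        the G responses under the current, old (sampling) and reference policies."""
        ratio = torch.exp(logp_new - logp_old)                    # importance ratio
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps)
        policy_term = torch.min(ratio * advantages, clipped * advantages)
        kl_term = logp_new - logp_ref                             # crude per-sequence KL estimate
        return -(policy_term - kl_coef * kl_term).mean()

    # Toy group of G = 4 responses: two correct (reward 1) and two incorrect (reward 0).
    rewards = torch.tensor([1.0, 0.0, 1.0, 0.0])
    print(group_relative_advantages(rewards))  # positive for correct, negative for incorrect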

References

    1. Brown, T. B. et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33 (eds Larochelle, H. et al.) (ACM, 2020).
    2. OpenAI et al. GPT-4 technical report. Preprint at https://doi.org/10.48550/arXiv.2303.08774 (2024).
    3. Wei, J. et al. Chain-of-thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems 35 (eds Koyejo, S. et al.) 24824–24837 (ACM, 2022).
    4. Wei, J. et al. Emergent abilities of large language models. Transactions on Machine Learning Research (eds Kamath, G. et al.) (2022).
    5. Kaplan, J. et al. Scaling laws for neural language models. Preprint at https://doi.org/10.48550/arXiv.2001.08361 (2020).
