Behind the Scenes: The Team Behind DPO

All authors contributed to the design, analysis, and writing of the DPO paper. RR derived the DPO objective and proved its theoretical properties, AS led experimental design and implementation, EM ran the large-scale experiments and the human study, and CF, CM, and SE supervised the project.


This content originally appeared on HackerNoon and was authored by Writings, Papers and Blogs on Text Models

:::info Authors:

(1) Rafael Rafailov, Stanford University (equal contribution; more junior authors listed earlier);

(2) Archit Sharma, Stanford University (equal contribution; more junior authors listed earlier);

(3) Eric Mitchell, Stanford University (equal contribution; more junior authors listed earlier);

(4) Stefano Ermon, CZ Biohub;

(5) Christopher D. Manning, Stanford University;

(6) Chelsea Finn, Stanford University.

:::

Abstract and 1. Introduction

2 Related Work

3 Preliminaries

4 Direct Preference Optimization

5 Theoretical Analysis of DPO

6 Experiments

7 Discussion, Acknowledgements, and References

Author Contributions

A Mathematical Derivations

A.1 Deriving the Optimum of the KL-Constrained Reward Maximization Objective

A.2 Deriving the DPO Objective Under the Bradley-Terry Model

A.3 Deriving the DPO Objective Under the Plackett-Luce Model

A.4 Deriving the Gradient of the DPO Objective and A.5 Proof of Lemma 1 and 2

A.6 Proof of Theorem 1

B DPO Implementation Details and Hyperparameters

C Further Details on the Experimental Set-Up and C.1 IMDb Sentiment Experiment and Baseline Details

C.2 GPT-4 prompts for computing summarization and dialogue win rates

C.3 Unlikelihood baseline

D Additional Empirical Results

D.1 Performance of Best of N baseline for Various N and D.2 Sample Responses and GPT-4 Judgments

D.3 Human study details

Author Contributions

All authors provided valuable contributions to designing, analyzing, and iterating on experiments, writing and editing the paper, and generally managing the project’s progress.

RR proposed using autoregressive reward models in discussions with EM; derived the DPO objective (a minimal sketch follows this list); proved the theoretical properties of the algorithm; and wrote the relevant sections and appendices. He also suggested and helped with organizing the experiments and contributed some of the PPO and reward-learning baselines.

AS initiated the discussion on using weighted regression methods as an alternative to PPO and handled project-related organization. He wrote the initial analysis connecting DPO with weighted regression and unlikelihood training; designed and iterated on the DPO and baseline implementations and ran the initial exploratory DPO experiments; contributed substantially to experiment organization and design (datasets, baselines, evaluation); led model training and evaluation for controlled sentiment generation and summarization; iterated on the design of the GPT-4 evaluations (particularly for summarization); made substantial writing contributions to the abstract, preliminaries/method, and experiments sections; and contributed edits to the other sections.

EM provided input in early discussions on learning autoregressive reward functions; wrote the first implementation of DPO and ran the first DPO experiments; trained the large-scale (summarization and dialogue) DPO models used in the paper's experiments; conducted the initial GPT-4 win-rate evaluations and set up the related infrastructure; recruited participants for, conducted, and analyzed the results of the human study; wrote the abstract, introduction, related work, discussion, and most of the experiments section; and assisted with editing the rest of the paper.

CF, CM, and SE supervised the research, suggested ideas and experiments, and assisted in writing the paper.
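For readers who want a concrete picture of the objective RR derived (Section 4 and Appendix A.2 in the outline above), here is a minimal, illustrative sketch of the DPO loss under the Bradley-Terry model. The function and argument names are ours, not the paper's reference implementation, and the batching and per-token log-probability computation are assumed to happen upstream.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Bradley-Terry DPO loss for a batch of preference pairs.

    Each *_logps tensor holds the summed token log-probabilities of the
    preferred (chosen) or dispreferred (rejected) response under the
    trainable policy or the frozen reference model; beta scales the
    implicit KL penalty. Illustrative sketch, not the authors' code.
    """
    # Log-ratio of chosen to rejected response under each model.
    policy_logratio = policy_chosen_logps - policy_rejected_logps
    ref_logratio = ref_chosen_logps - ref_rejected_logps
    # Loss: -log sigmoid(beta * (policy margin minus reference margin)).
    return -F.logsigmoid(beta * (policy_logratio - ref_logratio)).mean()
```

Minimizing this loss increases the policy's relative likelihood of chosen over rejected responses, with the frozen reference model's log-ratio anchoring the update; this is what lets DPO optimize preferences directly without explicit reward modeling or PPO.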


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.

:::
