Human Study Validates GPT-4 Win Rates for TL;DR Summarization

A human study was conducted to validate GPT-4’s win rate computations for TL;DR summarization. Human raters compared summaries generated by different algorithms, and their judgments were compared with GPT-4’s assessments. The results showed a high correlation between human and GPT-4 win rates.


This content originally appeared on HackerNoon and was authored by Writings, Papers and Blogs on Text Models

:::info Authors:

(1) Rafael Rafailov, Stanford University (equal contribution; more junior authors listed earlier);

(2) Archit Sharma, Stanford University (equal contribution; more junior authors listed earlier);

(3) Eric Mitchell, Stanford University (equal contribution; more junior authors listed earlier);

(4) Stefano Ermon, CZ Biohub;

(5) Christopher D. Manning, Stanford University;

(6) Chelsea Finn, Stanford University.

:::

Abstract and 1. Introduction

2 Related Work

3 Preliminaries

4 Direct Preference Optimization

5 Theoretical Analysis of DPO

6 Experiments

7 Discussion, Acknowledgements, and References

Author Contributions

A Mathematical Derivations

A.1 Deriving the Optimum of the KL-Constrained Reward Maximization Objective

A.2 Deriving the DPO Objective Under the Bradley-Terry Model

A.3 Deriving the DPO Objective Under the Plackett-Luce Model

A.4 Deriving the Gradient of the DPO Objective and A.5 Proof of Lemma 1 and 2

A.6 Proof of Theorem 1

B DPO Implementation Details and Hyperparameters

C Further Details on the Experimental Set-Up and C.1 IMDb Sentiment Experiment and Baseline Details

C.2 GPT-4 prompts for computing summarization and dialogue win rates

C.3 Unlikelihood baseline

D Additional Empirical Results

D.1 Performance of Best of N baseline for Various N and D.2 Sample Responses and GPT-4 Judgments

D.3 Human study details

D.3 Human study details

To validate the use of GPT-4 for computing win rates, our human study collects human preference data for several matchups in the TL;DR summarization setting. We select three algorithmic matchups, evaluating DPO (temp. 0.25), SFT (temp. 0.25), and PPO (temp. 1.0) against the reference algorithm PPO (temp. 0). By selecting matchups covering three distinct algorithms as well as a wide range of win rates vs. the reference, we capture the similarity of human and GPT-4 win rates across the response quality spectrum. We sample 150 random comparisons of DPO vs. PPO-0 and 100 random comparisons of PPO-1 vs. PPO-0, assigning two humans to each comparison, producing 275 judgments for DPO-PPO[7] and 200 judgments for PPO-PPO. We sample 125 SFT comparisons, assigning a single human to each. We ignore judgments that humans labeled as ties (which amount to only about 1% of judgments), and measure the raw agreement percentage between human A and human B (for comparisons with two human annotators, i.e., not SFT) as well as between each human and GPT-4.
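For concreteness, the minimal sketch below shows one way the win-rate and raw-agreement numbers described above could be computed from per-comparison judgments. The data layout (a dict mapping comparison IDs to a "challenger"/"reference"/"tie" label) and the function names are illustrative assumptions, not the authors' released code.

```python
from math import nan

def win_rate(judgments):
    """Fraction of non-tie judgments in which the challenger beats the reference.

    `judgments` maps a comparison ID to "challenger", "reference", or "tie".
    """
    votes = [v for v in judgments.values() if v != "tie"]
    return sum(v == "challenger" for v in votes) / len(votes) if votes else nan

def raw_agreement(rater_x, rater_y):
    """Raw agreement percentage between two raters on their shared comparisons.

    Each argument maps a comparison ID to "challenger" or "reference"
    (tie votes are assumed to have been dropped already).
    """
    shared = rater_x.keys() & rater_y.keys()
    agree = sum(rater_x[c] == rater_y[c] for c in shared)
    return 100.0 * agree / len(shared) if shared else nan

# Toy example: three comparisons judged by two humans and GPT-4.
human_a = {1: "challenger", 2: "reference", 3: "challenger"}
human_b = {1: "challenger", 2: "challenger", 3: "challenger"}
gpt4    = {1: "challenger", 2: "reference", 3: "reference"}

print(f"win rate (human A):    {win_rate(human_a):.2f}")                 # 0.67
print(f"agreement A vs. B:     {raw_agreement(human_a, human_b):.1f}%")  # 66.7%
print(f"agreement A vs. GPT-4: {raw_agreement(human_a, gpt4):.1f}%")     # 66.7%
```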

Figure 5: Layout of the survey in SurveyMonkey. Each respondent completed 25 similarly formatted judgments.

Participants. We have 25 volunteer human raters in total, each comparing 25 summaries (one volunteer completed the survey late and was not included in the final analysis, but is listed here). The raters were Stanford students (from undergrad through Ph.D.), or recent Stanford graduates or visitors, with a STEM (mainly CS) focus. See Figure 5 for a screenshot of the survey interface. We gratefully acknowledge the contribution of each of our volunteers.


:::info This paper is available on arxiv under CC BY-NC-ND 4.0 DEED license.

:::


[7] One volunteer did not respond for the DPO-PPO comparison.

