Anchor-based Large Language Models: More Experimental Results

Anchor-based caching reduces the keys/values cache of LLMs and accelerates inference by up to 3.5x, edging out even full-caching methods. The gains hold at larger scale: a 13B AnLLM-AC model improves accuracy across question-answering benchmarks, and a translation case study illustrates how the cache is compressed during real-time inference.


This content originally appeared on HackerNoon and was authored by Anchoring

:::info Authors:

(1) Jianhui Pang, University of Macau; work was done when Jianhui Pang and Fanghua Ye were interning at Tencent AI Lab (nlp2ct.pangjh3@gmail.com);

(2) Fanghua Ye, University College London; work was done when Jianhui Pang and Fanghua Ye were interning at Tencent AI Lab (fanghua.ye.19@ucl.ac.uk);

(3) Derek F. Wong, University of Macau;

(4) Longyue Wang, Tencent AI Lab (corresponding author).

:::

Abstract and 1 Introduction

2 Related Work

3 Anchor-based Large Language Models

3.1 Background

3.2 Anchor-based Self-Attention Networks

3.3 Anchor-based Inference

4 Experiments and 4.1 Our Implementation

4.2 Data and Training Procedure

4.3 Evaluation

5 Results

6 Analysis

7 Conclusion, Limitations, Ethics Statement, and References

A More Experimental Results

B Data Settings

A More Experimental Results

A.1 Testing Acceleration Ratio to Full-Caching Method

In Section 5, we report the testing acceleration ratio following the setting of Wang et al. (2023), comparing inference time with and without caching. Although our method reduces the keys/values caches, requiring less space to store prefix information and accelerating testing by up to 3.5x, we are also curious whether it improves inference efficiency over conventional methods that use full caches, i.e., that save the keys/values of all prefix tokens. As a supplement to Table 1, we present the acceleration ratio of anchor-caching over full-caching inference in Table 3. AnLLM-EP-AnSAN and AnLLM-AC-AnSAN achieve their largest improvements on tasks such as HS, SCIQ, and BoolQ, with an average acceleration ratio of 1.03 for both models. These results demonstrate that our anchor-based caching approach, which saves only the keys/values caches of anchor tokens, can accelerate inference on lengthy prefix texts even relative to conventional full-caching methods.
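To make the measurement concrete, the sketch below shows one way such an acceleration ratio can be computed by timing the two inference modes over the same prompts. The `generate_full` and `generate_anchor` callables are hypothetical stand-ins for full-caching and anchor-caching inference, not the paper's released code.

```python
import time

def acceleration_ratio(model, prompts, generate_full, generate_anchor):
    """Wall-clock speedup of anchor-caching over full-caching inference.

    `generate_full` keeps the keys/values of every prefix token;
    `generate_anchor` keeps only the keys/values of anchor tokens.
    Both are hypothetical helpers standing in for the two modes.
    """
    t0 = time.perf_counter()
    for p in prompts:
        generate_full(model, p)        # conventional full-caching inference
    full_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for p in prompts:
        generate_anchor(model, p)      # anchor-based caching inference
    anchor_time = time.perf_counter() - t0

    return full_time / anchor_time     # e.g., ~1.03 on average in Table 3
```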

A.2 Model Scalability Assessment

To examine the scalability of our approach, we extend the AnLLM-AC model to 13B parameters and assess its performance on eight question-answering benchmarks using the same evaluation strategy as before. Compared with the 7B AnLLM models in Table 1, the results in Table 4 show that as model size grows, the AnLLM-AC model achieves accuracies of 67.5% and 70.0% in 0-shot and 5-shot testing, respectively, an improvement of up to 2.4%. Moreover, with anchor-based attention, the AnLLM-AC-AnSAN model reaches an average accuracy of 69.5%, a 2.0% increase. These gains underscore the effectiveness of our methods at larger model capacities: the consistent improvements of AnLLM-AC across scenarios highlight its robustness and adaptability, while the added gains of AnLLM-AC-AnSAN emphasize the potential of anchor-based attention for optimizing LLMs. Collectively, these findings point to promising avenues for future work on maximizing the utility and efficiency of AnLLM.

A.3 Case Study in Real-Time Inference

To elaborate on how AnLLM-EP and AnLLM-AC optimize the keys/values caches during real-time inference, we reference examples from the translation task in Section 6.2. As shown in Table 5, AnLLM-EP and AnLLM-AC use endpoints (".") and "<AC>" tokens as anchor tokens, respectively. During inference, both models generate outputs auto-regressively, token by token. Upon generating an anchor token (Line 16, Algorithm 1), the Reduction function (defined in Line 1) is triggered, preserving the relevant caches and discarding the rest. As a result, the keys/values cache lengths are reduced to roughly the length of a single sequence, averaging around 50 for AnLLM-EP and 35 for AnLLM-AC, as shown in Table 2.
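As a rough illustration of this reduction step, here is a minimal sketch of compressing the cache whenever an anchor token is emitted. It assumes a simple `[seq_len, num_heads, head_dim]` cache layout, and `reduce_kv_cache` and `decode_step` are hypothetical helpers, not the paper's Algorithm 1 verbatim.

```python
import torch

def reduce_kv_cache(keys, values, token_ids, anchor_ids):
    """Keep only the keys/values at anchor-token positions, dropping the rest.

    keys, values: [seq_len, num_heads, head_dim] cached tensors (assumed layout)
    token_ids:    [seq_len] token ids aligned with the cache positions
    anchor_ids:   set of anchor-token ids (e.g., the id of "." for AnLLM-EP)
    """
    mask = torch.tensor([t in anchor_ids for t in token_ids.tolist()])
    return keys[mask], values[mask], token_ids[mask]

def decode_step(new_token, keys, values, token_ids, anchor_ids):
    # After a new token's keys/values have been appended to the cache,
    # check whether it is an anchor; if so, the current sequence is complete
    # and its information is aggregated into the anchor, so compress the cache.
    if new_token in anchor_ids:
        keys, values, token_ids = reduce_kv_cache(keys, values, token_ids, anchor_ids)
    return keys, values, token_ids
```

Between anchors, the cache grows with the in-progress sequence; each anchor collapses the completed sequence down to a single cached position, which is why the observed cache lengths stay near the length of one sequence.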


:::info This paper is available on arxiv under CC BY 4.0 DEED license.

:::
