Hawk and Griffin Models: Superior Latency and Throughput in AI Inference
Post date: January 14, 2025 | Author: Gating | Categories: ai-inference, deep-learning, efficient-ai, griffin-model, hawk-model, high-throughput, low-latency, transformers

Recurrent Models: Decoding Faster with Lower Latency and Higher Throughput
Post date: January 14, 2025 | Author: Gating | Categories: ai-inference, decoding-efficiency, deep-learning, high-throughput, language-models, low-latency, recurrent-models, transformers