Textbooks Are All You Need (Microsoft phi-1) series — posted September 12, 2024 by Knapsack
Categories: ai-model-running-locally, generative-ai, lightweight-ai-model, local-language-model-for-ai, microsoft-phi, microsoft-phi-whitepaper, phi-1, small-language-model

- Textbooks Are All You Need: Examples for Section 5
- Textbooks Are All You Need: Limitation of Phi-1
- Textbooks Are All You Need: Additional Examples for Section 3
- Textbooks Are All You Need: Conclusion and References
- Textbooks Are All You Need: N-gram Overlap and Embedding and Syntax-based Similarity Analysis
- Textbooks Are All You Need: Data Pruning for Unbiased Performance Evaluation
- Textbooks Are All You Need: Evaluation on Unconventional Problems With LLM Grading
- Textbooks Are All You Need: Spikes of Model Capability After Finetuning on CodeExercises
- Textbooks Are All You Need: Model Architecture and Training
- Textbooks Are All You Need: Creation of Synthetic Textbook-quality Datasets
- Textbooks Are All You Need: Filtering of Existing Code Datasets Using a Transformer-based Classifier
- Textbooks Are All You Need: Training Details and the Importance of High-quality Data
- Textbooks Are All You Need: Abstract and Introduction
Large Language Models on Memory-Constrained Devices Using Flash Memory series — posted July 31, 2024 by Knapsack
Categories: data-transfer-efficiency, dram-optimization, flash-memory, hardware-aware-design, large-language-models, memory-constrained-devices, model-acceleration, model-inference

- Large Language Models on Memory-Constrained Devices Using Flash Memory: Conclusion & Discussion
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Related Works
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Results for OPT 6.7B Model
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Results for Falcon 7B Model
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Results
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Optimized Data in DRAM
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Improving Throughput
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Load From Flash
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Read Throughput
- Large Language Models on Memory-Constrained Devices Using Flash Memory: Flash Memory & LLM Inference