Increased LLM Vulnerabilities from Fine-tuning and Quantization: Appendix
  Posted October 17, 2024 by Quantization
  Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities

Increased LLM Vulnerabilities from Fine-tuning and Quantization: Conclusion and References
  Posted October 17, 2024 by Quantization
  Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities

Increased LLM Vulnerabilities from Fine-tuning and Quantization: Experiment Set-up & Results
  Posted October 17, 2024 by Quantization
  Categories: adversarial-attacks, alignment-training, fine-tuning, guardrails, jailbreaking, large-language-models-(llms), quantization, vulnerabilities

Fine-Tuning LLMs with Hugging Face!
  Posted October 3, 2024 by Pavan Belagatti
  Categories: fine-tuning, hugging-face, hugging-face-tutorials, large-language-models, retrieval-augmented

Improving Text-to-SQL with a Fine-Tuned 7B LLM for DB Interactions
  Posted October 2, 2024 by Yi Ai
  Categories: fine-tuned-7b-llm, fine-tuning, generative-ai, langchain, llm-for-db-interactions, llms, lora, text-to-sql

Unleashing AI Power: Introducing Snowflake AI — Cortex Analyst
  Posted July 13, 2024 by Rany ElHousieny
  Categories: ai, artificial-intelligence, fine-tuning, llm, snowflake

Fine-Tuning Tiny Llama: Custom Data Preprocessing for Writing Style Mimicry
  Posted June 23, 2024 by M Muneeb Ur Rehman
  Categories: data-preprocessing, fine-tuning, llm, python, tinyllama

How to Fine-Tune an NLP Classification Model with OpenAI
  Posted April 16, 2023 by George Pipis
  Categories: chatgpt, fine-tuning, openai, python