This content originally appeared on DEV Community and was authored by Durvesh Danve
Ever wondered how easy it could be to harness the power of cutting-edge AI models in your projects?
With just 8 lines of Python code, you can start using a powerful Large Language Model (LLM) without diving into the complexities of training one from scratch.
Let’s see how!
Tools we'll be using:
1. Hugging Face pretrained model (in this case, Falcon)
2. Python
3. LangChain
4. Google Colab
First, open Google Colab and create a new notebook.
Let's start coding:
Step 1:
Install the necessary libraries:
!pip install langchain huggingface_hub langchain_community
Step 2:
Set up your Hugging Face API token as an environment variable:
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "YOUR_TOKEN"
To get your token:
- Visit Hugging Face and sign in or create an account.
- Navigate to the settings page and select the Access Tokens tab.
- Create a token and replace "YOUR_TOKEN" with your actual token.
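If you'd rather not paste the token into the notebook itself (useful if you plan to share the Colab file), a small sketch using Python's standard getpass module prompts for it at runtime instead:

import os
from getpass import getpass

# Prompt for the token so it never appears in the notebook source.
os.environ["HUGGINGFACEHUB_API_TOKEN"] = getpass("Hugging Face token: ")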
Step 3:
Import HuggingFaceHub (in recent LangChain versions it lives in the langchain_community package we installed above):
from langchain_community.llms import HuggingFaceHub
Initialize your Large Language Model (LLM):
llm = HuggingFaceHub(repo_id="tiiuae/falcon-7b-instruct", model_kwargs={"temperature":0.6})
I’m using the tiiuae/falcon-7b-instruct model here, but there are plenty of models available. You can explore them on the Hugging Face Model Hub: https://huggingface.co/models
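Swapping models is just a matter of changing repo_id, and you can pass extra generation settings through model_kwargs. As an illustrative sketch (google/flan-t5-large is one such instruction-tuned model, though availability on the free hosted Inference API can vary):

# Hypothetical alternative setup; check that the model is served
# by the hosted Inference API before relying on it.
llm = HuggingFaceHub(
    repo_id="google/flan-t5-large",
    model_kwargs={"temperature": 0.6, "max_new_tokens": 200},
)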
Let’s test the model:
prompt = 'Generate a Python function to print the Fibonacci series. Ensure the code is optimized for efficiency and has minimal time complexity.'
response = llm.invoke(prompt)  # calling llm(prompt) directly also works, but is deprecated in newer LangChain versions
print(response)
This produces output like the following:
def fibonacci(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
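Notice that, despite the prompt, the model returned the naive recursive version, which runs in exponential time; a good reminder to review generated code before using it. For comparison, here is a hand-written O(n) iterative version (my own sketch, not model output):

def fibonacci(n):
    # Iterative version: O(n) time, O(1) extra space.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a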
And just like that, with only 8 lines of code, we’ve set up our own version of ChatGPT! 🎉💻
Complete Code
# Install necessary libraries
!pip install langchain huggingface_hub langchain_community
import os
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "YOUR_TOKEN"
from langchain_community.llms import HuggingFaceHub
# Initialize the model
llm = HuggingFaceHub(repo_id="tiiuae/falcon-7b-instruct", model_kwargs={"temperature":0.6})
# Use the model to generate a response
prompt = 'Generate a Python function to print the Fibonacci series. Ensure the code is optimized for efficiency and has minimal time complexity'
response = llm.invoke(prompt)
print(response)
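From here you can lean on the rest of LangChain. For example, a PromptTemplate lets you reuse the same model with different inputs. This is a minimal sketch assuming a recent LangChain version, where the template and the LLM can be piped together:

from langchain.prompts import PromptTemplate

# A reusable prompt with one input variable.
template = PromptTemplate.from_template("Explain {topic} in one short paragraph.")

# Pipe the formatted prompt into the LLM and run it.
chain = template | llm
print(chain.invoke({"topic": "recursion"}))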