Introduction
In this blog post, we'll walk through the process of creating a simple chat application that talks to Ollama's Llama 3 model. We'll use HTML, CSS, and JavaScript for the frontend and Node.js with Express for the backend. By the end, you'll have a working chat application that sends user messages to the AI model and displays the responses in real time.
Prerequisites
Before you begin, ensure you have the following installed on your machine:
- Node.js
- npm (Node Package Manager)
- Ollama, running locally with the Llama 3 model pulled (ollama pull llama3), since the backend will talk to it in Step 2
Step 1: Setting Up the Frontend
HTML
First, create an HTML file named index.html that defines the structure of our chat application.
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Chat with Ollama's Llama 3</title>
    <link rel="stylesheet" href="styles.css">
</head>
<body>
    <div id="chat-container">
        <div id="chat-window">
            <div id="messages"></div>
        </div>
        <input type="text" id="user-input" placeholder="Type your message here...">
        <button id="send-button">Send</button>
    </div>
    <script src="script.js"></script>
</body>
</html>
This HTML file includes a container for the chat messages, an input field for user messages, and a send button.
CSS
Next, create a CSS file named styles.css to style the chat application.
body {
    font-family: Arial, sans-serif;
    display: flex;
    justify-content: center;
    align-items: center;
    height: 100vh;
    background-color: #f0f0f0;
    margin: 0;
}

#chat-container {
    width: 400px;
    border: 1px solid #ccc;
    background-color: #fff;
    border-radius: 8px;
    box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1);
    overflow: hidden;
}

#chat-window {
    height: 300px;
    padding: 10px;
    overflow-y: auto;
    border-bottom: 1px solid #ccc;
}

#messages {
    display: flex;
    flex-direction: column;
}

.message {
    padding: 8px;
    margin: 4px 0;
    border-radius: 4px;
}

.user-message {
    align-self: flex-end;
    background-color: #007bff;
    color: #fff;
}

.ai-message {
    align-self: flex-start;
    background-color: #e0e0e0;
    color: #000;
}

#user-input {
    width: calc(100% - 60px);
    padding: 10px;
    border: none;
    border-radius: 0;
    outline: none;
}

#send-button {
    width: 60px;
    padding: 10px;
    border: none;
    background-color: #007bff;
    color: #fff;
    cursor: pointer;
}
This CSS file ensures the chat application looks clean and modern.
JavaScript
Create a JavaScript file named script.js to handle the frontend functionality.
document.getElementById('send-button').addEventListener('click', sendMessage);
document.getElementById('user-input').addEventListener('keydown', function (e) {
    if (e.key === 'Enter') {
        sendMessage();
    }
});

function sendMessage() {
    const userInput = document.getElementById('user-input');
    const messageText = userInput.value.trim();
    if (messageText === '') return;

    displayMessage(messageText, 'user-message');
    userInput.value = '';

    // Send the message to the local AI and display the response
    getAIResponse(messageText).then(aiResponse => {
        displayMessage(aiResponse, 'ai-message');
    }).catch(error => {
        console.error('Error:', error);
        displayMessage('Sorry, something went wrong.', 'ai-message');
    });
}

function displayMessage(text, className) {
    const messageElement = document.createElement('div');
    messageElement.textContent = text;
    messageElement.className = `message ${className}`;
    document.getElementById('messages').appendChild(messageElement);

    // Scroll the chat window (the element with overflow-y: auto) to the bottom
    const chatWindow = document.getElementById('chat-window');
    chatWindow.scrollTop = chatWindow.scrollHeight;
}

async function getAIResponse(userMessage) {
    // Call the local server that relays messages to Ollama's Llama 3
    const response = await fetch('http://localhost:5000/ollama', {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
        },
        body: JSON.stringify({ message: userMessage }),
    });
    if (!response.ok) {
        throw new Error('Network response was not ok');
    }
    const data = await response.json();
    return data.response; // Adjust this based on your server's response structure
}
This JavaScript file adds event listeners to the send button and input field, sends user messages to the backend, and displays both the user's messages and the AI's responses.
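One practical caveat: the fetch call targets http://localhost:5000, so if you open index.html straight from disk, the browser treats the request as cross-origin and will block it. A simple workaround, assuming you keep the three frontend files in the same directory as the server.js we'll create in Step 2, is to let Express serve them from the same origin:

// Add to server.js: serve index.html, styles.css, and script.js from the
// same origin as the API so the browser's same-origin policy doesn't
// block the fetch call. Assumes the frontend files sit alongside server.js.
app.use(express.static(__dirname));

With that line in place, you can open the app at http://localhost:5000 once the server is running, instead of double-clicking index.html.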
Step 2: Setting Up the Backend
Node.js and Express
Ensure you have Node.js installed. Then, create a server.js file for the backend.
- Install Express and body-parser (see the note after these steps for a built-in alternative):
npm install express body-parser
- Create the server.js file:
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
const port = 5000;

app.use(bodyParser.json());

app.post('/ollama', async (req, res) => {
    const userMessage = req.body.message;

    // Replace this with actual interaction with Ollama's Llama 3
    // This is a placeholder for demonstration purposes
    const aiResponse = await getLlama3Response(userMessage);

    res.json({ response: aiResponse });
});

// Placeholder function to simulate AI response
async function getLlama3Response(userMessage) {
    // Replace this with actual API call to Ollama's Llama 3
    return `Llama 3 says: ${userMessage}`;
}

app.listen(port, () => {
    console.log(`Server running at http://localhost:${port}`);
});
- Run the server:
node server.js
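A quick note on the dependencies: body-parser works fine here, but if you're on Express 4.16 or newer, the same JSON parsing ships with Express itself, so you could drop the extra package:

// Optional: built-in alternative to body-parser on Express 4.16+
app.use(express.json());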
In this setup, your Node.js server will handle incoming requests, interact with Ollama's Llama 3 model, and return responses.
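The getLlama3Response function above is only an echo placeholder. Below is a minimal sketch of what the real integration could look like, calling Ollama's local REST API; it assumes Ollama is running on its default port (11434), that the llama3 model has been pulled (ollama pull llama3), and that you're on Node 18 or newer so fetch is available without extra packages:

// A sketch of getLlama3Response that calls Ollama's local REST API.
// Assumes Ollama is listening on http://localhost:11434 and the llama3
// model has been pulled. Requires Node 18+ for the built-in fetch.
async function getLlama3Response(userMessage) {
    const response = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            model: 'llama3',
            prompt: userMessage,
            stream: false, // request a single JSON reply instead of a token stream
        }),
    });
    if (!response.ok) {
        throw new Error(`Ollama request failed with status ${response.status}`);
    }
    const data = await response.json();
    return data.response; // Ollama returns the generated text in `response`
}

Swap this in for the placeholder, and the /ollama route will return real model output instead of an echo.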
Conclusion
By following these steps, you've created a chat application that sends user messages to Ollama's Llama 3 model and displays the responses. This setup can be extended and customized based on your specific requirements and the features offered by the Llama 3 model.
Feel free to explore and enhance the functionality of your chat application. Happy coding!