This content originally appeared on DEV Community and was authored by Mubarak Mohamed
Artificial intelligence (AI) is ubiquitous in our daily lives, from product recommendations on e-commerce websites to virtual assistants on our smartphones. But behind these sophisticated technologies lies a fundamental structure: the artificial neuron. Understanding and developing an artificial neuron is a crucial step for anyone looking to dive into the fascinating world of AI. In this article, we will guide you step-by-step through the process of creating your own artificial neuron, breaking down complex concepts into simple terms and providing concrete examples. Whether you're a curious beginner or a technology enthusiast, this practical guide will open the doors to a new dimension of innovation. Get ready to transform your understanding of AI and discover the limitless potential of this rapidly growing field.
To develop our artificial neuron program, we will start with a dataset of 100 rows and 2 columns. Each row can be thought of as a plant described by two features: the width and the length of its leaves. Our goal is to train the program to distinguish toxic plants from non-toxic ones using this data. To achieve this, we will follow the steps below, starting with the necessary imports.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
Step 1: Data Acquisition (X, y)
X: The input data
This is the raw information that the model will process.
Example: For an image recognition model, X could be an array of pixels representing an image. For a house price prediction model, X could include variables such as area, number of rooms, location, etc.
y: The labels
These are the correct answers associated with the input data.
Example: For image recognition, y would be the digit represented in the image. For house price prediction, y would be the actual price of the house.
X, y = make_blobs(n_samples=100, n_features=2, centers=2, random_state=0)
y = y.reshape((y.shape[0], 1))  # reshape y into a column vector of shape (100, 1)
print('dimensions of X:', X.shape)
print('dimensions of y:', y.shape)
plt.scatter(X[:, 0], X[:, 1], c=y.ravel(), cmap='summer')
plt.show()
Step 2: Initialization
initialisation(X):
This step initializes the parameters W and b with random values.
def initialisation(X):
    # one weight per input feature, drawn from a standard normal distribution
    W = np.random.randn(X.shape[1], 1)
    b = np.random.randn(1)
    return (W, b)
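A note on this design choice: for a single neuron, initializing W and b to zeros would train just as well, since there is no symmetry between neurons to break. Random initialization only becomes essential in multi-layer networks, but it is a good habit to adopt from the start.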
Step 3: Model Construction
model(X, W, b):
The model is a mathematical function that takes the input data X and the model parameters (W and b) to produce a prediction.
W: Weight vector
Determines the relative importance of each input feature; here, a column vector with one weight per feature.
b: Bias
Shifts the model output independently of the inputs; for a single neuron, it is a scalar.
Activation function:
Transforms the linear output of the model into a non-linear one. We use the sigmoid, which squashes any real value into the interval (0, 1), so the output can be read as a probability of belonging to one of the two classes.
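Concretely, the neuron first computes a weighted sum of its inputs and then squashes it with the sigmoid:
Z = X·W + b
A = σ(Z) = 1 / (1 + e^(−Z))
These two formulas are exactly what the function below implements.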
def model(X, W, b):
    Z = X.dot(W) + b          # linear combination of the inputs
    A = 1 / (1 + np.exp(-Z))  # sigmoid activation
    return A
Step 4: Error Calculation
Cost(A, y):
The cost function measures the discrepancy between the model's predictions (A) and the true labels (y).
Examples of cost functions:
Mean Squared Error (MSE): Used for regression problems.
Cross-entropy: Used for classification problems; its binary form, the log loss, is what we use below.
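For m training examples, the log loss averages a penalty over the dataset:
L(A, y) = −(1/m) · Σ [ y·log(A) + (1 − y)·log(1 − A) ]
When y = 1 only the first term is active, so the model is penalized for predicting A close to 0; the second term plays the symmetric role when y = 0.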
def log_loss(A, y):
    # average binary cross-entropy over the dataset
    # (in practice, clipping A to [eps, 1 - eps] avoids log(0) warnings)
    return 1 / len(y) * np.sum(-y * np.log(A) - (1 - y) * np.log(1 - A))
Step 5: Model Optimization
Gradients(A, X, y):
The gradients indicate the direction in which the parameters W and b should be modified to minimize the cost function.
update(dW, db, W, b, learning_rate):
The parameters are iteratively updated by following the opposite direction of the gradient.
Optimization algorithms:
Stochastic Gradient Descent (SGD): updates the parameters after each individual training example.
Batch Gradient Descent: updates the parameters once per pass over the entire dataset.
Mini-batch Gradient Descent: updates the parameters after each small batch of examples, a compromise between the two.
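For the sigmoid paired with the log loss, the gradients take a remarkably simple form (m being the number of examples):
dW = (1/m) · Xᵀ·(A − y)
db = (1/m) · Σ (A − y)
The update rule with learning rate α is then W ← W − α·dW and b ← b − α·db. The two functions below compute exactly these quantities; note that, since they use the full dataset at every step, our training loop performs batch gradient descent.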
def gradients(A, X, y):
    dW = 1 / len(y) * np.dot(X.T, A - y)  # gradient of the loss w.r.t. W
    db = 1 / len(y) * np.sum(A - y)       # gradient of the loss w.r.t. b
    return (dW, db)

def update(dW, db, W, b, learning_rate):
    # move W and b a small step against the gradient
    W = W - learning_rate * dW
    b = b - learning_rate * db
    return (W, b)
Step 6: Model Evaluation
from sklearn.metrics import accuracy_score
def predict(X, W, b):
    A = model(X, W, b)
    # A is a probability; threshold it at 0.5 to get a boolean class prediction
    return A >= 0.5
def artificial_neuron(X, y, learning_rate=0.1, n_iter=100):
    # initialise W, b
    W, b = initialisation(X)
    Loss = []
    for i in range(n_iter):
        A = model(X, W, b)           # forward pass
        Loss.append(log_loss(A, y))  # record the cost at this iteration
        dW, db = gradients(A, X, y)
        W, b = update(dW, db, W, b, learning_rate)
    y_pred = predict(X, W, b)
    print(accuracy_score(y, y_pred))
    plt.plot(Loss)
    plt.show()
    return (W, b)
W, b = artificial_neuron(X, y)
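With the trained parameters in hand, you can classify a new plant. The measurements below are hypothetical values chosen purely for illustration; substitute any leaf width and length you like:
new_plant = np.array([[2.0, 1.0]])  # hypothetical leaf width and length
print(predict(new_plant, W, b))     # True means class 1, False means class 0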
Decision boundary
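The boundary between the two predicted classes is the set of points where the model is undecided, i.e. where A = 0.5. The sigmoid outputs 0.5 exactly when Z = X·W + b = 0, which in two dimensions is the line w1·x1 + w2·x2 + b = 0. Solving for x2 gives x2 = (−w1·x1 − b) / w2, which is the straight line plotted below.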
fig, ax = plt.subplots(figsize=(9, 6))
ax.scatter(X[:, 0], X[:, 1], c=y.ravel(), cmap='summer')
x1 = np.linspace(-1, 4, 100)
x2 = (-W[0] * x1 - b) / W[1]  # solve w1*x1 + w2*x2 + b = 0 for x2
ax.plot(x1, x2, c='orange', lw=3)
plt.show()