LeakyReLU() and PReLU() in PyTorch


*My post explains Step function, ReLU, Leaky ReLU and PReLU.

LeakyReLU() can take a 0D or higher-dimensional tensor of zero or more elements and return a tensor of the same shape whose values are computed by the Leaky ReLU function, as shown below:



*Memos:

  • The 1st argument for initialization is negative_slope (Optional-Default:0.01-Type:float).
  • The 2nd argument for initialization is inplace (Optional-Default:False-Type:bool). *Keep it False, because in-place modification can cause errors with autograd.
  • The 1st argument when calling it is input (Required-Type:tensor of float).

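Leaky ReLU is defined as LeakyReLU(x) = x if x >= 0, otherwise negative_slope * x. The sketch below recomputes it by hand with torch.where, only to make the formula concrete; nn.LeakyReLU is what you would actually use:

import torch

x = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

# Leaky ReLU by hand: keep non-negative values, scale negative ones.
negative_slope = 0.01
torch.where(x >= 0, x, negative_slope * x)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])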

import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

lrelu = nn.LeakyReLU()
lrelu(input=my_tensor)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])

lrelu
# LeakyReLU(negative_slope=0.01)

lrelu.negative_slope
# 0.01

lrelu = nn.LeakyReLU(negative_slope=0.01, inplace=True)
lrelu(input=my_tensor) # With inplace=True, my_tensor itself is modified.
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., 0., -1., 4.]])
lrelu = nn.LeakyReLU()
lrelu(input=my_tensor)
# tensor([[8.0000, -0.0300, 0.0000, 1.0000],
#         [5.0000, 0.0000, -0.0100, 4.0000]])

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., 0.], [-1., 4.]]])
lrelu = nn.LeakyReLU()
lrelu(input=my_tensor)
# tensor([[[8.0000, -0.0300], [0.0000, 1.0000]],
#         [[5.0000, 0.0000], [-0.0100, 4.0000]]])
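The same computation is also available as a function through torch.nn.functional.leaky_relu, which takes the tensor and the slope directly; a minimal sketch:

import torch
import torch.nn.functional as F

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

F.leaky_relu(my_tensor, negative_slope=0.01)
# tensor([8.0000, -0.0300, 0.0000, 1.0000, 5.0000, -0.0200, -0.0100, 4.0000])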

PReLU() can take a 1D or higher-dimensional tensor of zero or more elements and return a tensor of the same shape whose values are computed by the PReLU function, as shown below:

*Memos:

  • The 1st argument for initialization is num_parameters (Optional-Default:1-Type:int). *It must be 1 <= x, and in practice it should be 1 or the number of channels of the input.
  • The 2nd argument for initialization is init (Optional-Default:0.25-Type:float). *It's the initial value of the learnable weight.
  • The 3rd argument for initialization is device (Optional-Type:str, int or device()).
  • The 4th argument for initialization is dtype (Optional-Type:dtype).
  • The 1st argument when calling it is input (Required-Type:tensor of float).

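PReLU is defined as PReLU(x) = x if x >= 0, otherwise a * x, where a is a learnable weight initialized to init. The sketch below recomputes it by hand, only to make the formula concrete:

import torch

x = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

# PReLU by hand with the default initial weight a=0.25.
a = torch.tensor(0.25)
torch.where(x >= 0, x, a * x)
# tensor([8.0000, -0.7500, 0.0000, 1.0000, 5.0000, -0.5000, -0.2500, 4.0000])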

import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([8.0000, -0.7500, 0.0000, 1.0000,
#         5.0000, -0.5000, -0.2500, 4.0000],
#        grad_fn=<PreluKernelBackward0>)

prelu
# PReLU(num_parameters=1)

prelu.num_parameters
# 1

prelu.init
# 0.25

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

prelu = nn.PReLU(num_parameters=1, init=0.25)
prelu(input=my_tensor)
# tensor([8.0000, -0.7500, 0.0000, 1.0000,
#         5.0000, -0.5000, -0.2500, 4.0000],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

my_tensor = torch.tensor([[8., -3., 0., 1.],
                          [5., 0., -1., 4.]])
prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([[8.0000, -0.7500, 0.0000, 1.0000],
#         [5.0000, 0.0000, -0.2500, 4.0000]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

prelu = nn.PReLU(num_parameters=4, init=0.25)
prelu(input=my_tensor)
# tensor([[8.0000, -0.7500, 0.0000, 1.0000],
#         [5.0000, 0.0000, -0.2500, 4.0000]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500, 0.2500, 0.2500, 0.2500], requires_grad=True)
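With num_parameters=4, each of the 4 channels (dim 1 of this 2D input) gets its own weight. Setting distinct weights by hand, purely to make the per-channel effect visible:

prelu = nn.PReLU(num_parameters=4)
with torch.no_grad(): # Hand-set weights, for illustration only.
    prelu.weight.copy_(torch.tensor([0.1, 0.2, 0.3, 0.4]))

prelu(input=my_tensor)
# tensor([[8.0000, -0.6000, 0.0000, 1.0000],
#         [5.0000, 0.0000, -0.3000, 4.0000]],
#        grad_fn=<PreluKernelBackward0>)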

my_tensor = torch.tensor([[[8., -3.], [0., 1.]],
                          [[5., 0.], [-1., 4.]]])
prelu = nn.PReLU()
prelu(input=my_tensor)
# tensor([[[8.0000, -0.7500], [0.0000, 1.0000]],
#         [[5.0000, 0.0000],[-0.2500, 4.0000]]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500], requires_grad=True)

prelu = nn.PReLU(num_parameters=2, init=0.25)
prelu(input=my_tensor)
# tensor([[[8.0000, -0.7500], [0.0000, 1.0000]],
#         [[5.0000, 0.0000], [-0.2500, 4.0000]]],
#        grad_fn=<PreluKernelBackward0>)

prelu.weight
# Parameter containing:
# tensor([0.2500, 0.2500], requires_grad=True)
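Unlike LeakyReLU's fixed negative_slope, prelu.weight is a learnable Parameter, so it gets updated during training like any other weight. A minimal, hypothetical training step (the loss is arbitrary, chosen only to show the weight moving away from 0.25):

import torch
from torch import nn

my_tensor = torch.tensor([8., -3., 0., 1., 5., -2., -1., 4.])

prelu = nn.PReLU()
optimizer = torch.optim.SGD(prelu.parameters(), lr=0.1)

loss = prelu(input=my_tensor).sum() # Arbitrary loss, for illustration only.
loss.backward() # d(loss)/d(weight) = sum of the negative inputs = -6.
optimizer.step() # weight = 0.25 - 0.1 * (-6) = 0.85

prelu.weight
# Parameter containing:
# tensor([0.8500], requires_grad=True)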


This content originally appeared on DEV Community and was authored by Super Kai (Kazuya Ito)

