ConvTranspose3d() in PyTorch


*Memos:

My post explains Transposed Convolutional Layer.

My post explains ConvTranspose1d().

My post explains ConvTranspose2d().

My post explains manual_seed().

My post explains requires_grad.


ConvTranspose3d() can get the 4D or 5D tensor of one or more values computed by 3D transposed convolution from a 4D or 5D tensor of one or more elements, as shown below:

*Memos:

  • The 1st argument for initialization is in_channels(Required-Type:int). *It must be 1 <= x.
  • The 2nd argument for initialization is out_channels(Required-Type:int). *It must be 1 <= x.
  • The 3rd argument for initialization is kernel_size(Required-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 4th argument for initialization is stride(Optional-Default:1-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 5th argument for initialization is padding(Optional-Default:0-Type:int or tuple or list of int). *It must be 0 <= x.
  • The 6th argument for initialization is output_padding(Optional-Default:0-Type:int or tuple or list of int). *It must be 0 <= x and smaller than stride or dilation. *The sketch right after this list shows how it affects the output size.
  • The 7th argument for initialization is groups(Optional-Default:1-Type:int). *It must be 1 <= x.
  • The 8th argument for initialization is bias(Optional-Default:True-Type:bool). *If it's False, the bias attribute is None.
  • The 9th argument for initialization is dilation(Optional-Default:1-Type:int or tuple or list of int). *It must be 1 <= x.
  • The 10th argument for initialization is padding_mode(Optional-Default:'zeros'-Type:str). *Only 'zeros' can be selected.
  • The 11th argument for initialization is device(Optional-Default:None-Type:str, int or device()).
  • The 12th argument for initialization is dtype(Optional-Default:None-Type:dtype).
  • The 1st argument is input(Required-Type:tensor of float or complex). *To use a complex input tensor, a complex dtype must be set for ConvTranspose3d().
  • The input tensor's device and dtype must be the same as ConvTranspose3d()'s device and dtype respectively.
  • Even if the input tensor's requires_grad is False (the default), the output tensor's requires_grad is set to True by ConvTranspose3d().
  • convtran3d1.device and convtran3d1.dtype in the example below don't work (a ConvTranspose3d instance has no device or dtype attributes).
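
Putting the size-related arguments together, each spatial dimension (depth, height and width) of the output is (in_size - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1, per the ConvTranspose3d documentation. Below is a minimal sketch of that formula; the helper transposed_out_size is just for illustration and not part of PyTorch:

import torch
from torch import nn

def transposed_out_size(in_size, kernel_size, stride=1, padding=0,
                        output_padding=0, dilation=1):
    # Output size of one spatial dimension of a transposed convolution.
    return ((in_size - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

my_tensor = torch.randn(1, 1, 4, 5, 6) # (N, C, D, H, W)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3, kernel_size=3,
                                stride=2, padding=1, output_padding=1)
convtran3d(input=my_tensor).shape
# torch.Size([1, 3, 8, 10, 12])

transposed_out_size(4, kernel_size=3, stride=2, padding=1, output_padding=1)
# 8
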
import torch
from torch import nn

tensor1 = torch.tensor([[[[8., -3., 0., 1., 5., -2.]]]])

tensor1.requires_grad
# False

torch.manual_seed(42)

convtran3d1 = nn.ConvTranspose3d(in_channels=1, out_channels=3, kernel_size=1)
tensor2 = convtran3d1(input=tensor1)
tensor2
# tensor([[[[-2.6722, 0.4197, -0.4236, -0.7046, -1.8290, 0.1386]]],
#         [[[3.2144, -0.5154, 0.5018, 0.8409, 2.1972, -0.1763]]],
#         [[[4.1797, -1.4188, 0.1081, 0.6170, 2.6529, -0.9099]]]],
#        grad_fn=<SqueezeBackward1>)

tensor2.requires_grad
# True
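
# tensor1 is 4D, so it's treated as an unbatched (C, D, H, W) input;
# with kernel_size=1 the output keeps the spatial size and only the
# channel dimension changes from in_channels=1 to out_channels=3.
tensor1.shape
# torch.Size([1, 1, 1, 6])

tensor2.shape
# torch.Size([3, 1, 1, 6])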

convtran3d1
# ConvTranspose3d(1, 3, kernel_size=(1, 1, 1), stride=(1, 1, 1))

convtran3d1.in_channels
# 1

convtran3d1.out_channels
# 3

convtran3d1.kernel_size
# (1, 1, 1)

convtran3d1.stride
# (1, 1, 1)

convtran3d1.padding
# (0, 0, 0)

convtran3d1.output_padding
# (0, 0, 0)

convtran3d1.groups
# 1

convtran3d1.bias
# Parameter containing:
# tensor([0.5304, -0.1265, 0.1165], requires_grad=True)

convtran3d1.dilation
# (1, 1, 1)

convtran3d1.padding_mode
# 'zeros'

convtran3d1.weight
# Parameter containing:
# tensor([[[[[0.4414]]], [[[0.4792]]], [[[-0.1353]]]]], requires_grad=True)

torch.manual_seed(42)

convtran3d2 = nn.ConvTranspose3d(in_channels=3, out_channels=3, kernel_size=1)
convtran3d2(input=tensor2)
# tensor([[[[3.6068, -1.7503, -0.2893, 0.1977, 2.1458, -1.2633]]],
#         [[[1.6518, 0.4964, 0.8115, 0.9165, 1.3367, 0.6014]]],
#         [[[-0.5008, 0.2990, 0.0809, 0.0082, -0.2827, 0.2263]]]],
#        grad_fn=<SqueezeBackward1>)

torch.manual_seed(42)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3,
                                kernel_size=1, stride=1, padding=0,
                                output_padding=0, groups=1, bias=True,
                                dilation=1, padding_mode='zeros',
                                device=None, dtype=None)
convtran3d(input=tensor1)
# tensor([[[[4.0616, -0.7939, 0.5304, 0.9718, 2.7374, -0.3525]]],
#         [[[3.7071, -1.5641, -0.1265, 0.3527, 2.2695, -1.0849]]],
#         [[[-0.9656, 0.5223, 0.1165, -0.0188, -0.5598, 0.3870]]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[[8., -3., 0.],
                            [1., 5., -2.]]]])
torch.manual_seed(42)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3,
                                kernel_size=1)
convtran3d(input=my_tensor)
# tensor([[[[4.0616, -0.7939, 0.5304], [0.9718, 2.7374, -0.3525]]],
#         [[[3.7071, -1.5641, -0.1265], [0.3527, 2.2695, -1.0849]]],
#         [[[-0.9656, 0.5223, 0.1165], [-0.0188, -0.5598, 0.3870]]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[[8.], [-3.], [0.],
                            [1.], [5.], [-2.]]]])
torch.manual_seed(42)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3, kernel_size=1)
convtran3d(input=my_tensor)
# tensor([[[[4.0616], [-0.7939], [0.5304], [0.9718], [2.7374], [-0.3525]]],
#         [[[3.7071], [-1.5641], [-0.1265], [0.3527], [2.2695], [-1.0849]]],
#         [[[-0.9656], [0.5223], [0.1165], [-0.0188], [-0.5598], [0.3870]]]],
#        grad_fn=<SqueezeBackward1>)

my_tensor = torch.tensor([[[[[8.], [-3.], [0.]],
                            [[1.], [5.], [-2.]]]]])
torch.manual_seed(42)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3, kernel_size=1)
convtran3d(input=my_tensor)
# tensor([[[[[4.0616], [-0.7939], [0.5304]],
#           [[0.9718], [2.7374], [-0.3525]]],
#          [[[3.7071], [-1.5641], [-0.1265]],
#           [[0.3527], [2.2695], [-1.0849]]],
#          [[[-0.9656], [0.5223], [0.1165]],
#           [[-0.0188], [-0.5598], [0.3870]]]]],
#        grad_fn=<ConvolutionBackward0>)
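
Every example above uses kernel_size=1, so the output keeps the input's spatial size. Below is a minimal sketch with kernel_size=2 on a batched 5D input, where the depth, height and width each grow by 1; only the shape is shown because the values depend on the random input:

torch.manual_seed(42)

my_tensor = torch.randn(1, 1, 2, 3, 4) # (N, C, D, H, W)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3, kernel_size=2)
convtran3d(input=my_tensor).shape
# torch.Size([1, 3, 3, 4, 5])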

my_tensor = torch.tensor([[[[[8.+0.j], [-3.+0.j], [0.+0.j]],
                            [[1.+0.j], [5.+0.j], [-2.+0.j]]]]])
torch.manual_seed(42)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3,
                                kernel_size=1, dtype=torch.complex64)
convtran3d(input=my_tensor)
# tensor([[[[[3.2502+4.1727j], [-1.6053-1.0985j], [-0.2811+0.3391j]],
#           [[0.1603+0.8183j], [1.9259+2.7351j], [-1.1639-0.6193j]]],
#          [[[-0.5731+3.8193j], [0.9147-2.0146j], [0.5090-0.4236j]],
#           [[0.3737+0.1068j], [-0.1673+2.2282j], [0.7795-1.4843j]]],
#          [[[-0.5102+1.0401j], [0.8813-0.2415j], [0.5018+0.1081j]],
#           [[0.3753+0.2246j], [-0.1307+0.6906j], [0.7548-0.1250j]]]]],
#        grad_fn=<AddBackward0>)
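
The device and dtype arguments set where the layer's parameters are placed and which dtype they use; as noted in the memos above, the input tensor must match them. A minimal sketch, falling back to CPU when CUDA isn't available:

my_device = 'cuda' if torch.cuda.is_available() else 'cpu'

my_tensor = torch.randn(1, 1, 2, 3, 4, dtype=torch.float64, device=my_device)
torch.manual_seed(42)

convtran3d = nn.ConvTranspose3d(in_channels=1, out_channels=3, kernel_size=1,
                                device=my_device, dtype=torch.float64)
convtran3d(input=my_tensor).shape
# torch.Size([1, 3, 2, 3, 4])

convtran3d.weight.dtype
# torch.float64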


This content originally appeared on DEV Community and was authored by Super Kai (Kazuya Ito)

