This content originally appeared on DEV Community and was authored by Super Kai (Kazuya Ito)
stack() can get a tensor with one additional dimension by stacking one or more 0D or more D tensors of the same size along a new dimension, as shown below:

*Memos:
- stack() can be used with torch but not with a tensor.
- The 1st argument with torch is tensors (Required-Type: tuple or list of tensor of int, float, complex or bool). *The size of the tensors must be the same.
- The 2nd argument with torch is dim (Optional-Default: 0-Type: int).
- There is an out argument with torch (Optional-Type: tensor). *Memos:
  - out= must be used (it is keyword-only). A short out= sketch follows the examples below.
  - My post explains the out argument.
- A tensor with one more dimension than the input tensors is returned.
import torch
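# Stacking three 0D tensors returns a 1D tensor: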
tensor1 = torch.tensor(2)
tensor2 = torch.tensor(7)
tensor3 = torch.tensor(4)
torch.stack(tensors=(tensor1, tensor2, tensor3))
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=0)
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=-1)
# tensor([2, 7, 4])
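
# Stacking three 1D tensors returns a 2D tensor: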
tensor1 = torch.tensor([2, 7, 4])
tensor2 = torch.tensor([8, 3, 2])
tensor3 = torch.tensor([5, 0, 8])
torch.stack(tensors=(tensor1, tensor2, tensor3))
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=0)
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=-2)
# tensor([[2, 7, 4], [8, 3, 2], [5, 0, 8]])
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=1)
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=-1)
# tensor([[2, 8, 5], [7, 3, 0], [4, 2, 8]])
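
# Stacking three 2D tensors returns a 3D tensor: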
tensor1 = torch.tensor([[2, 7, 4], [8, 3, 2]])
tensor2 = torch.tensor([[5, 0, 8], [3, 6, 1]])
tensor3 = torch.tensor([[9, 4, 7], [1, 0, 5]])
torch.stack(tensors=(tensor1, tensor2, tensor3))
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=0)
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=-3)
# tensor([[[2, 7, 4], [8, 3, 2]],
# [[5, 0, 8], [3, 6, 1]],
# [[9, 4, 7], [1, 0, 5]]])
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=1)
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=-2)
# tensor([[[2, 7, 4], [5, 0, 8], [9, 4, 7]],
# [[8, 3, 2], [3, 6, 1], [1, 0, 5]]])
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=-1)
torch.stack(tensors=(tensor1, tensor2, tensor3), dim=2)
# tensor([[[2, 5, 9], [7, 0, 4], [4, 8, 7]],
# [[8, 3, 1], [3, 6, 0], [2, 1, 5]]])
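
# float tensors work the same way: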
tensor1 = torch.tensor([[2., 7., 4.], [8., 3., 2.]])
tensor2 = torch.tensor([[5., 0., 8.], [3., 6., 1.]])
tensor3 = torch.tensor([[9., 4., 7.], [1., 0., 5.]])
torch.stack(tensors=(tensor1, tensor2, tensor3))
# tensor([[[2., 7., 4.], [8., 3., 2.]],
# [[5., 0., 8.], [3., 6., 1.]],
# [[9., 4., 7.], [1., 0., 5.]]])
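
# complex tensors work the same way: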
tensor1 = torch.tensor([[2.+0.j, 7.+0.j, 4.+0.j],
[8.+0.j, 3.+0.j, 2.+0.j]])
tensor2 = torch.tensor([[5.+0.j, 0.+0.j, 8.+0.j],
[3.+0.j, 6.+0.j, 1.+0.j]])
tensor3 = torch.tensor([[9.+0.j, 4.+0.j, 7.+0.j],
[1.+0.j, 0.+0.j, 5.+0.j]])
torch.stack(tensors=(tensor1, tensor2, tensor3))
# tensor([[[2.+0.j, 7.+0.j, 4.+0.j],
# [8.+0.j, 3.+0.j, 2.+0.j]],
# [[5.+0.j, 0.+0.j, 8.+0.j],
# [3.+0.j, 6.+0.j, 1.+0.j]],
# [[9.+0.j, 4.+0.j, 7.+0.j],
# [1.+0.j, 0.+0.j, 5.+0.j]]])
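
# bool tensors work the same way: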
tensor1 = torch.tensor([[True, False, True], [False, True, False]])
tensor2 = torch.tensor([[False, True, False], [True, False, True]])
tensor3 = torch.tensor([[True, False, True], [False, True, False]])
torch.stack(tensors=(tensor1, tensor2, tensor3))
# tensor([[[True, False, True], [False, True, False]],
# [[False, True, False], [True, False, True]],
# [[True, False, True], [False, True, False]]])
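
Below is a minimal sketch (not from the original post) of the out= argument mentioned in the memos above, plus the same-size requirement. The pre-allocated tensor out and the mismatched 1D tensors are just illustrative assumptions.

import torch

tensor1 = torch.tensor(2)
tensor2 = torch.tensor(7)
tensor3 = torch.tensor(4)

# Pre-allocate a destination tensor with the expected shape and dtype,
# then pass it with the keyword-only out= argument:
out = torch.empty(3, dtype=torch.int64)
torch.stack(tensors=(tensor1, tensor2, tensor3), out=out)
out
# tensor([2, 7, 4])

# The tensors must be the same size, otherwise a RuntimeError is raised:
try:
    torch.stack(tensors=(torch.tensor([2, 7]), torch.tensor([4, 8, 3])))
except RuntimeError as e:
    print(type(e).__name__)
# RuntimeError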