[NN] A Collection of PyTorch Functions

  • torch.jit.is_scripting: returns True while the code is running as TorchScript (e.g. inside a function compiled with torch.jit.script) and False in eager mode, regardless of whether it runs on GPU or CPU. A minimal usage sketch:
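    (The function name double and the tensor values below are only for illustration.)
    >>> import torch
    >>> def double(x):
    ...     # True only while this code runs as TorchScript (e.g. after torch.jit.script)
    ...     if torch.jit.is_scripting():
    ...         return x * 2
    ...     return x + x
    >>> torch.jit.is_scripting()  # eager mode
    False
    >>> scripted = torch.jit.script(double)
    >>> scripted(torch.ones(2))  # the scripted call takes the True branch
    tensor([2., 2.])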
  • torch.nn.Conv1d: Applies a 1D convolution over an input signal composed of several input planes.
  • torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0,
        dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
    >>> m = nn.Conv1d(16, 33, 3, stride=2)
    >>> input = torch.randn(20, 16, 50)
    >>> output = m(input)
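    For the snippet above, the documented formula L_out = floor((L_in + 2*padding - dilation*(kernel_size - 1) - 1) / stride + 1) = (50 - 2 - 1) // 2 + 1 = 24, so the output shape should be:
    >>> output.size()
    torch.Size([20, 33, 24])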
  • torch.nn.Conv2d: Applies a 2D convolution over an input signal composed of several input planes.
  • torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0,
        dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)
  • >>> # With square kernels and equal stride
    >>> m = nn.Conv2d(16, 33, 3, stride=2)
    >>> # non-square kernels and unequal stride and with padding
    >>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
    >>> # non-square kernels and unequal stride and with padding and dilation
    >>> m = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))
    >>> input = torch.randn(20, 16, 50, 100)
    >>> output = m(input)
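    For the last m above (kernel (3, 5), stride (2, 1), padding (4, 2), dilation (3, 1)), applying the same formula per dimension gives H_out = (50 + 8 - 6 - 1) // 2 + 1 = 26 and W_out = (100 + 4 - 4 - 1) // 1 + 1 = 100, so the output shape should be:
    >>> output.size()
    torch.Size([20, 33, 26, 100])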
  • torch.nn.ConvTranspose2d: Applies a 2D transposed convolution operator over an input image composed of several input planes.
    torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0,
        output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros',
        device=None, dtype=None)
  • >>> # With square kernels and equal stride
    >>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)
    >>> # non-square kernels and unequal stride and with padding
    >>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))
    >>> input = torch.randn(20, 16, 50, 100)
    >>> output = m(input)
    >>> # exact output size can be also specified as an argument
    >>> input = torch.randn(1, 16, 12, 12)
    >>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)
    >>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)
    >>> h = downsample(input)
    >>> h.size()
    torch.Size([1, 16, 6, 6])
    >>> output = upsample(h, output_size=input.size())
    >>> output.size()
    torch.Size([1, 16, 12, 12])
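    Without output_size the same layer would return an 11×11 map, since for a transposed convolution H_out = (H_in - 1)*stride - 2*padding + dilation*(kernel_size - 1) + output_padding + 1 = (6 - 1)*2 - 2 + 2 + 0 + 1 = 11; passing output_size picks the output_padding needed to recover 12:
    >>> upsample(h).size()
    torch.Size([1, 16, 11, 11])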
  • torch.linspace: creates a 1D tensor of type dtype containing exactly steps values evenly spaced from start to end, with both endpoints included.
    torch.linspace(start, end, steps=100, out=None,
        dtype=None, layout=torch.strided, device=None, requires_grad=False) → Tensor
    In torch.arange, step means the spacing between values; in torch.linspace, steps means the number of values (see the comparison after the examples below).
  • >>> torch.linspace(-10, 10, steps=5)
    tensor([-10.,  -5.,   0.,   5.,  10.])
    >>> torch.linspace(0, 10, steps=10)
    tensor([ 0.0000,  1.1111,  2.2222,  3.3333,  4.4444,  
             5.5556,  6.6667,  7.7778,  8.8889, 10.0000])
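  • For comparison, torch.arange treats its third argument as the spacing and excludes the end point (a quick check in the same doctest style as above):
    >>> torch.arange(0, 10, 2.5)
    tensor([0.0000, 2.5000, 5.0000, 7.5000])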