zamba.pytorch.transforms
Attributes
imagenet_normalization_values = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
module-attribute
Classes
ConvertHWCtoCHW
Bases: torch.nn.Module
Convert tensor from (0:H, 1:W, 2:C) to (2:C, 0:H, 1:W)
Source code in zamba/pytorch/transforms.py, lines 30–34
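The collapsed source block is not rendered above. Based on the docstring, the transform is most likely a single `permute`; a minimal sketch (the argument name is illustrative):

```python
import torch

class ConvertHWCtoCHW(torch.nn.Module):
    """Convert tensor from (0:H, 1:W, 2:C) to (2:C, 0:H, 1:W)"""

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        # Move channels (axis 2) in front of height and width.
        return img.permute(2, 0, 1)
```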
ConvertTCHWtoCTHW
Bases: torch.nn.Module
Convert tensor from (T, C, H, W) to (C, T, H, W)
Source code in zamba/pytorch/transforms.py, lines 23–27
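Again the collapsed source is not shown; a plausible one-line implementation that swaps the time and channel axes:

```python
import torch

class ConvertTCHWtoCTHW(torch.nn.Module):
    """Convert tensor from (T, C, H, W) to (C, T, H, W)"""

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # Swap time (axis 0) and channels (axis 1).
        return vid.permute(1, 0, 2, 3)
```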
ConvertTHWCtoCTHW
Bases: torch.nn.Module
Convert tensor from (0:T, 1:H, 2:W, 3:C) to (3:C, 0:T, 1:H, 2:W)
Source code in zamba/pytorch/transforms.py, lines 9–13
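As above, a hedged sketch consistent with the docstring: channels move to the front while time, height, and width keep their relative order:

```python
import torch

class ConvertTHWCtoCTHW(torch.nn.Module):
    """Convert tensor from (0:T, 1:H, 2:W, 3:C) to (3:C, 0:T, 1:H, 2:W)"""

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # Channels-last video to channels-first video.
        return vid.permute(3, 0, 1, 2)
```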
ConvertTHWCtoTCHW
Bases: torch.nn.Module
Convert tensor from (T, H, W, C) to (T, C, H, W)
Source code in zamba/pytorch/transforms.py, lines 16–20
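And the per-frame variant, likely:

```python
import torch

class ConvertTHWCtoTCHW(torch.nn.Module):
    """Convert tensor from (T, H, W, C) to (T, C, H, W)"""

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # Keep the time axis first; make each individual frame channels-first.
        return vid.permute(0, 3, 1, 2)
```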
PackSlowFastPathways
Bases: torch.nn.Module
Creates the slow and fast pathway inputs for the slowfast model.
Source code in zamba/pytorch/transforms.py, lines 88–104
Attributes
alpha = alpha
instance-attribute
Functions
__init__(alpha: int = 4)
Source code in zamba/pytorch/transforms.py, lines 91–93
forward(frames: torch.Tensor)
Source code in zamba/pytorch/transforms.py, lines 95–104
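Neither collapsed source block is rendered above. The sketch below follows the standard SlowFast input packing, where the slow pathway samples every `alpha`-th frame along the time axis; it assumes the input is a channels-first video `(C, T, H, W)`, and zamba's actual implementation may differ in details:

```python
import torch

class PackSlowFastPathways(torch.nn.Module):
    """Creates the slow and fast pathway inputs for the slowfast model."""

    def __init__(self, alpha: int = 4):
        super().__init__()
        self.alpha = alpha

    def forward(self, frames: torch.Tensor):
        # The fast pathway sees every frame.
        fast_pathway = frames
        # The slow pathway keeps 1/alpha of the frames, sampled evenly in time.
        slow_indices = torch.linspace(
            0, frames.shape[1] - 1, frames.shape[1] // self.alpha
        ).long()
        slow_pathway = torch.index_select(frames, 1, slow_indices)
        return [slow_pathway, fast_pathway]
```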
PadDimensions
Bases: torch.nn.Module
Pads a tensor to ensure a fixed output dimension for a given axis.
Attributes:
Name | Type | Description
---|---|---
dimension_sizes | Tuple[Optional[int]] | A tuple of int or None values, the same length as the number of dimensions in the input tensor. If int, pad that dimension to at least that size; if None, do not pad.
Source code in zamba/pytorch/transforms.py, lines 47–85
Attributes
dimension_sizes = dimension_sizes
instance-attribute
Functions
__init__(dimension_sizes: Tuple[Optional[int]])
Source code in zamba/pytorch/transforms.py, lines 55–57
compute_left_and_right_pad(original_size: int, padded_size: int) -> Tuple[int, int]
staticmethod
Computes left and right pad size.
Parameters:

Name | Type | Description | Default
---|---|---|---
original_size | int | The original tensor size | required
padded_size | int | The desired tensor size | required

Returns:

Type | Description
---|---
Tuple[int, int] | Pad sizes for the left and right. For an odd total pad, right = left + 1.
Source code in zamba/pytorch/transforms.py, lines 59–74
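A likely implementation plus a worked example; the early return for axes that are already large enough is an assumption consistent with "pad that dimension to at least that size":

```python
from typing import Tuple

def compute_left_and_right_pad(original_size: int, padded_size: int) -> Tuple[int, int]:
    if original_size >= padded_size:
        return (0, 0)  # already at least the target size: no padding
    # Split the total pad evenly; any odd remainder goes to the right.
    left, remainder = divmod(padded_size - original_size, 2)
    return (left, left + remainder)

assert compute_left_and_right_pad(5, 8) == (1, 2)  # odd total pad: right = left + 1
assert compute_left_and_right_pad(4, 8) == (2, 2)
```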
forward(vid: torch.Tensor) -> torch.Tensor
Source code in zamba/pytorch/transforms.py, lines 76–85
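Putting the pieces together, a hedged sketch of the whole module. The real forward may flatten the pad tuple differently; the key constraint is that `torch.nn.functional.pad` expects pads listed from the last axis backwards:

```python
import itertools
from typing import Optional, Tuple

import torch

class PadDimensions(torch.nn.Module):
    """Pads a tensor to ensure a fixed output dimension for a given axis."""

    def __init__(self, dimension_sizes: Tuple[Optional[int], ...]):
        super().__init__()
        self.dimension_sizes = dimension_sizes

    @staticmethod
    def compute_left_and_right_pad(original_size: int, padded_size: int) -> Tuple[int, int]:
        if original_size >= padded_size:
            return (0, 0)
        left, remainder = divmod(padded_size - original_size, 2)
        return (left, left + remainder)

    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # One (left, right) pair per axis; None leaves that axis untouched.
        pairs = [
            (0, 0) if size is None else self.compute_left_and_right_pad(n, size)
            for n, size in zip(vid.shape, self.dimension_sizes)
        ]
        # F.pad lists pads starting from the last axis, so reverse the pairs.
        flat = tuple(itertools.chain.from_iterable(reversed(pairs)))
        return torch.nn.functional.pad(vid, flat)

# e.g. pad only H and W of a (C, T, H, W) video up to 224:
# PadDimensions((None, None, 224, 224))(torch.zeros(3, 16, 200, 220)).shape
# -> torch.Size([3, 16, 224, 224])
```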
Uint8ToFloat
Bases: torch.nn.Module
Source code in zamba/pytorch/transforms.py, lines 37–39
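There is no docstring, but by its name this transform presumably rescales uint8 pixel values into unit-range floats, roughly:

```python
import torch

class Uint8ToFloat(torch.nn.Module):
    def forward(self, tensor: torch.Tensor) -> torch.Tensor:
        # Map uint8 pixels [0, 255] to floats in [0, 1].
        return tensor / 255.0
```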
VideotoImg
Bases: torch.nn.Module
Source code in zamba/pytorch/transforms.py, lines 42–44
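Also undocumented; the name suggests it collapses a single-frame video into an image, presumably by dropping the singleton time axis:

```python
import torch

class VideotoImg(torch.nn.Module):
    def forward(self, vid: torch.Tensor) -> torch.Tensor:
        # (1, H, W, C) -> (H, W, C): drop the singleton time axis.
        return vid.squeeze(0)
```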
Functions
slowfast_transforms()
Source code in zamba/pytorch/transforms.py, lines 128–138
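The collapsed source is not shown; a hedged sketch of how such a pipeline could compose the modules above (the exact steps, ordering, and any padding sizes in zamba may differ):

```python
from torchvision import transforms

def slowfast_transforms():
    return transforms.Compose([
        ConvertTHWCtoTCHW(),     # decoded frames arrive channels-last: (T, H, W, C)
        Uint8ToFloat(),          # rescale pixels to [0, 1]
        ConvertTCHWtoCTHW(),     # slowfast expects channels-first video: (C, T, H, W)
        PackSlowFastPathways(),  # -> [slow_pathway, fast_pathway]
    ])
```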
zamba_image_model_transforms(single_frame=False, normalization_values=imagenet_normalization_values, channels_first=False)
Source code in zamba/pytorch/transforms.py, lines 110–125
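A sketch of the likely shape of this factory, inferred only from its signature and the modules above; the branch logic here is an assumption, not zamba's verified implementation:

```python
from torchvision import transforms

def zamba_image_model_transforms(
    single_frame=False,
    normalization_values=imagenet_normalization_values,
    channels_first=False,
):
    steps = [
        # Single images are assumed (H, W, C); videos (T, H, W, C).
        ConvertHWCtoCHW() if single_frame else ConvertTHWCtoTCHW(),
        Uint8ToFloat(),
        transforms.Normalize(**normalization_values),
    ]
    if channels_first and not single_frame:
        steps.append(ConvertTCHWtoCTHW())  # (T, C, H, W) -> (C, T, H, W)
    return transforms.Compose(steps)
```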