generator
ConstantInput
ConstantInput(channels: int, size: Resolution)
Bases: nn.Module
Constant input image
Source code in stylegan2_torch/generator/__init__.py, lines 20–22
__call__
class-attribute
__call__ = proxy(forward)
input
instance-attribute
input = Parameter(torch.randn(1, channels, size, size))
forward
forward(input: Tensor) -> Tensor
Source code in stylegan2_torch/generator/__init__.py, lines 24–26
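A minimal sketch of what the documented attributes imply, assuming the forward pass simply broadcasts the learned constant across the batch:

```python
import torch
from torch import Tensor, nn

class ConstantInput(nn.Module):
    """Learned constant input, repeated for every sample in the batch."""

    def __init__(self, channels: int, size: int):
        super().__init__()
        self.input = nn.Parameter(torch.randn(1, channels, size, size))

    def forward(self, input: Tensor) -> Tensor:
        # Only the batch dimension of `input` is used; the constant itself
        # carries all learned content.
        return self.input.repeat(input.shape[0], 1, 1, 1)
```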
Generator
Generator(
resolution: Resolution,
latent_dim: int = 512,
n_mlp: int = 8,
lr_mlp_mult: float = 0.01,
channels: Dict[Resolution, int] = default_channels,
blur_kernel: List[int] = [1, 3, 3, 1],
)
Bases: nn.Module
Generator module
Source code in stylegan2_torch/generator/__init__.py, lines 36–96
__call__
class-attribute
__call__ = proxy(forward)
convs
instance-attribute
convs = nn.ModuleList()
input
instance-attribute
input = ConstantInput(channels[4], 4)
latent_dim
instance-attribute
latent_dim = latent_dim
mapping
instance-attribute
mapping = MappingNetwork(latent_dim, n_mlp, lr_mlp_mult)
n_layers
instance-attribute
n_layers = int(math.log(resolution, 2))
n_w_plus
instance-attribute
n_w_plus = self.n_layers * 2 - 2
to_rgbs
instance-attribute
to_rgbs = nn.ModuleList()
up_convs
instance-attribute
up_convs = nn.ModuleList()
forward
forward(
input: Sequence[Tensor],
*,
return_latents: bool = False,
input_type: Literal["z", "w", "w_plus"] = "z",
trunc_option: Optional[Tuple[float, Tensor]] = None,
mix_index: Optional[int] = None,
noises: Optional[List[Optional[Tensor]]] = None
)
Source code in stylegan2_torch/generator/__init__.py, lines 131–202
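A usage sketch, assuming the top-level import path and that `forward` returns the synthesized images when `return_latents=False` (batch size and output channel count are illustrative):

```python
import torch
from stylegan2_torch import Generator  # assumed import path

generator = Generator(resolution=256)

# Sample latents z; the mapping network converts them to w internally.
z = torch.randn(4, 512)
images = generator([z], input_type="z")  # (4, C, 256, 256)

# Style mixing: two latents, crossing over at w+ index `mix_index`.
z2 = torch.randn(4, 512)
mixed = generator([z, z2], input_type="z", mix_index=4)
```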
mean_latent
mean_latent(n_sample: int, device: str) -> Tensor
Source code in stylegan2_torch/generator/__init__.py, lines 98–103
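Continuing the sketch above, `mean_latent` pairs naturally with the `trunc_option` argument of `forward`; the truncation factor and sample count below are illustrative:

```python
# Truncation trick: pull each w towards the mean latent, trading
# sample diversity for fidelity.
w_mean = generator.mean_latent(n_sample=4096, device="cpu")
truncated = generator(
    [torch.randn(4, 512)],
    input_type="z",
    trunc_option=(0.7, w_mean),  # (truncation factor, truncation centre)
)
```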
conv_block
AddNoise
AddNoise()
Bases: nn.Module
Inject white noise scaled by a learnable scalar (the same noise is used for the whole batch)
Source code in stylegan2_torch/generator/conv_block.py, lines 74–78
__call__
class-attribute
__call__ = proxy(forward)
weight
instance-attribute
weight = Parameter(torch.zeros(1))
forward
forward(input: Tensor, noise: Optional[Tensor]) -> Tensor
Source code in stylegan2_torch/generator/conv_block.py, lines 80–85
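A sketch consistent with the documented attribute and docstring; the noise-generation branch is an assumption based on the `Optional` noise argument:

```python
import torch
from typing import Optional
from torch import Tensor, nn

class AddNoise(nn.Module):
    """Add per-pixel white noise scaled by one learned scalar."""

    def __init__(self):
        super().__init__()
        self.weight = nn.Parameter(torch.zeros(1))

    def forward(self, input: Tensor, noise: Optional[Tensor]) -> Tensor:
        if noise is None:
            # One noise map shared by the whole batch, per the docstring.
            _, _, height, width = input.shape
            noise = input.new_empty(1, 1, height, width).normal_()
        return input + self.weight * noise
```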
ModConvBlock
ModConvBlock(
in_channel: int,
out_channel: int,
kernel_size: int,
latent_dim: int,
)
Bases: nn.Module
Modulated convolution block
- disentangled latent vector (w) => affine transformation => style vector
- style vector => modulate + demodulate convolution weights => new conv weights
- new conv weights & input features => group convolution => output features
- output features => add noise & leaky ReLU => final output features
Source code in stylegan2_torch/generator/conv_block.py, lines 100–116
__call__
class-attribute
__call__ = proxy(forward)
add_noise
instance-attribute
add_noise = AddNoise()
affine
instance-attribute
affine = EqualLinear(latent_dim, in_channel, bias_init=1)
leaky_relu
instance-attribute
leaky_relu = FusedLeakyReLU(out_channel)
scale
instance-attribute
scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
weight
instance-attribute
weight = Parameter(
torch.randn(
1, out_channel, in_channel, kernel_size, kernel_size
)
)
forward
forward(
input: Tensor, w: Tensor, noise: Optional[Tensor]
) -> Tensor
Source code in stylegan2_torch/generator/conv_block.py, lines 118–137
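A sketch of how the documented attributes and the `mod`, `demod`, and `group_conv` helpers below plausibly compose in the method body (the placement of the scale factor is an assumption; being multiplicative, it commutes with modulation):

```python
from typing import Optional
from torch import Tensor

def forward(self, input: Tensor, w: Tensor, noise: Optional[Tensor]) -> Tensor:
    # (N, latent_dim) -> (N, 1, C_in, 1, 1) per-sample style vector
    style = self.affine(w).view(-1, 1, input.shape[1], 1, 1)
    # Modulate + demodulate the shared weight into per-sample weights
    weight = demod(mod(self.weight * self.scale, style))
    out = group_conv(input, weight)  # one convolution per sample
    return self.leaky_relu(self.add_noise(out, noise))
```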
UpModConvBlock
UpModConvBlock(
in_channel: int,
out_channel: int,
kernel_size: int,
latent_dim: int,
up: int,
blur_kernel: List[int],
)
Bases: nn.Module
Modulated convolution block with upsampling
- disentangled latent vector (w) => affine transformation => style vector
- style vector => modulate + demodulate convolution weights => new conv weights
- new conv weights & input features => group convolution and upsampling => output features
- output features => add noise & leaky ReLU => final output features
Source code in stylegan2_torch/generator/conv_block.py, lines 177–203
__call__
class-attribute
__call__ = proxy(forward)
add_noise
instance-attribute
add_noise = AddNoise()
affine
instance-attribute
affine = EqualLinear(latent_dim, in_channel, bias_init=1)
blur
instance-attribute
blur = Blur(blur_kernel, up, kernel_size)
leaky_relu
instance-attribute
leaky_relu = FusedLeakyReLU(out_channel)
scale
instance-attribute
scale = 1 / math.sqrt(in_channel * kernel_size ** 2)
up
instance-attribute
up = up
weight
instance-attribute
weight = Parameter(
torch.randn(
1, out_channel, in_channel, kernel_size, kernel_size
)
)
forward
forward(
input: Tensor, w: Tensor, noise: Optional[Tensor]
) -> Tensor
Source code in stylegan2_torch/generator/conv_block.py, lines 205–227
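The same composition as ModConvBlock, sketched with `group_conv_up` and the `blur` attribute; the ordering is an assumption, with the blur applied after the transposed convolution to suppress upsampling artifacts:

```python
from typing import Optional
from torch import Tensor

def forward(self, input: Tensor, w: Tensor, noise: Optional[Tensor]) -> Tensor:
    style = self.affine(w).view(-1, 1, input.shape[1], 1, 1)
    weight = demod(mod(self.weight * self.scale, style))
    out = self.blur(group_conv_up(input, weight, self.up))  # upsample, then anti-alias
    return self.leaky_relu(self.add_noise(out, noise))
```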
demod
demod(weight: Tensor) -> Tensor
Demodulate convolution weights (a normalization that statistically restores each output feature map to unit standard deviation)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| weight | Tensor | (N, C_out, C_in, K_h, K_w) | required |

Returns:

| Name | Type | Description |
|---|---|---|
| Tensor | Tensor | (N, C_out, C_in, K_h, K_w) |

Source code in stylegan2_torch/generator/conv_block.py, lines 29–44
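A sketch of the standard StyleGAN2 demodulation this function describes (the epsilon is an assumed numerical-stability constant):

```python
import torch
from torch import Tensor

def demod(weight: Tensor) -> Tensor:
    # weight: (N, C_out, C_in, K_h, K_w)
    # Per-sample, per-output-channel inverse L2 norm over (C_in, K_h, K_w)
    sigma_inv = torch.rsqrt(weight.pow(2).sum(dim=(2, 3, 4), keepdim=True) + 1e-8)
    return weight * sigma_inv  # (N, C_out, C_in, K_h, K_w)
```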
group_conv
group_conv(input: Tensor, weight: Tensor) -> Tensor
Efficiently perform modulated convolution (implemented as a grouped convolution)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | (N, C_in, H, W) | required |
| weight | Tensor | (N, C_out, C_in, K, K) | required |

Returns:

| Name | Type | Description |
|---|---|---|
| Tensor | Tensor | (N, C, H + K - 1, W + K - 1) |

Source code in stylegan2_torch/generator/conv_block.py, lines 47–66
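A sketch of the grouped-convolution trick (the padding of K - 1 is inferred from the documented output shape):

```python
import torch.nn.functional as F
from torch import Tensor

def group_conv(input: Tensor, weight: Tensor) -> Tensor:
    n, c_in, h, w = input.shape
    _, c_out, _, k, _ = weight.shape
    # Fold the batch into the channel axis so groups=N applies a
    # different (per-sample modulated) filter to every sample.
    input = input.reshape(1, n * c_in, h, w)
    weight = weight.reshape(n * c_out, c_in, k, k)
    out = F.conv2d(input, weight, padding=k - 1, groups=n)
    return out.reshape(n, c_out, out.shape[2], out.shape[3])
```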
group_conv_up
group_conv_up(
input: Tensor, weight: Tensor, up: int = 2
) -> Tensor
Efficiently perform upsampling + modulated convolution (implemented as a grouped transposed convolution)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | Tensor | (N, C_in, H, W) | required |
| weight | Tensor | (N, C_out, C_in, K, K) | required |
| up | int | upsampling factor U. Defaults to 2. | 2 |

Returns:

| Name | Type | Description |
|---|---|---|
| Tensor | Tensor | (N, C, (H - 1) * U + K, (W - 1) * U + K) |

Source code in stylegan2_torch/generator/conv_block.py, lines 142–164
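A sketch using a grouped transposed convolution; the zero padding follows from the documented output shape (H - 1) * U + K:

```python
import torch.nn.functional as F
from torch import Tensor

def group_conv_up(input: Tensor, weight: Tensor, up: int = 2) -> Tensor:
    n, c_in, h, w = input.shape
    _, c_out, _, k, _ = weight.shape
    input = input.reshape(1, n * c_in, h, w)
    # conv_transpose2d expects (in_channels, out_channels // groups, K, K),
    # so swap the channel axes before flattening the batch into groups.
    weight = weight.transpose(1, 2).reshape(n * c_in, c_out, k, k)
    out = F.conv_transpose2d(input, weight, stride=up, padding=0, groups=n)
    return out.reshape(n, c_out, out.shape[2], out.shape[3])
```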
mod
mod(weight: Tensor, style: Tensor) -> Tensor
Modulate convolution weights with the style vector (equivalent to scaling each input feature map before convolution)
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| weight | Tensor | (1, C_out, C_in, K_h, K_w) | required |
| style | Tensor | (N, 1, C_in, 1, 1) | required |

Returns:

| Name | Type | Description |
|---|---|---|
| Tensor | Tensor | (N, C_out, C_in, K_h, K_w) |

Source code in stylegan2_torch/generator/conv_block.py, lines 14–26
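Given the documented shapes, modulation reduces to a broadcast multiply; a sketch:

```python
from torch import Tensor

def mod(weight: Tensor, style: Tensor) -> Tensor:
    # (1, C_out, C_in, K_h, K_w) * (N, 1, C_in, 1, 1)
    # broadcasts to per-sample weights of shape (N, C_out, C_in, K_h, K_w).
    return weight * style
```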
mapping
MappingNetwork
MappingNetwork(
latent_dim: int, n_mlp: int, lr_mlp_mult: float
)
Bases: nn.Sequential
Mapping network from sampling space (z) to disentangled latent space (w)
Source code in stylegan2_torch/generator/mapping.py, lines 23–34
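A sketch of the documented structure: per-sample normalization followed by `n_mlp` equalized-learning-rate linear layers. The `EqualLinear` keyword names below (`lr_mult`, `activation`) are hypothetical, chosen only to show where `lr_mlp_mult` plugs in:

```python
from torch import nn

class MappingNetwork(nn.Sequential):
    def __init__(self, latent_dim: int, n_mlp: int, lr_mlp_mult: float):
        # Normalize is documented below; EqualLinear is the package's
        # equalized-LR linear layer.
        layers = [Normalize()]
        for _ in range(n_mlp):
            layers.append(
                EqualLinear(latent_dim, latent_dim,
                            lr_mult=lr_mlp_mult,  # hypothetical kwarg
                            activation=True)      # hypothetical kwarg
            )
        super().__init__(*layers)
```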
Normalize
Bases: nn.Module
Normalize latent vector for each sample
forward
forward(input: Tensor) -> Tensor
Source code in stylegan2_torch/generator/mapping.py, lines 12–15
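The four-line source span is consistent with PixelNorm-style normalization over the channel dimension; a sketch (the epsilon is assumed):

```python
import torch
from torch import Tensor, nn

class Normalize(nn.Module):
    def forward(self, input: Tensor) -> Tensor:
        # Rescale each latent so its channel dimension has unit RMS.
        return input * torch.rsqrt(input.pow(2).mean(dim=1, keepdim=True) + 1e-8)
```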
rgb
ToRGB
ToRGB(
in_channel: int,
latent_dim: int,
up: int,
blur_kernel: List[int],
)
Bases: nn.Module
Source code in stylegan2_torch/generator/rgb.py, lines 67–85
affine
instance-attribute
affine = EqualLinear(latent_dim, in_channel, bias_init=1)
bias
instance-attribute
bias = Parameter(torch.zeros(1, 1, 1, 1))
scale
instance-attribute
scale = 1 / math.sqrt(in_channel)
upsample
instance-attribute
upsample = Upsample(blur_kernel, up)
weight
instance-attribute
weight = Parameter(torch.randn(1, 1, in_channel, 1, 1))
forward
forward(
input: Tensor,
w: Tensor,
prev_output: Optional[Tensor] = None,
) -> Tensor
Source code in stylegan2_torch/generator/rgb.py, lines 87–105
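A sketch of the method body built from the documented attributes and the conv_block helpers. Note that ToRGB modulates but does not demodulate, and the documented weight shape (1, 1, in_channel, 1, 1) implies a single output channel:

```python
from typing import Optional
from torch import Tensor

def forward(self, input: Tensor, w: Tensor,
            prev_output: Optional[Tensor] = None) -> Tensor:
    style = self.affine(w).view(-1, 1, input.shape[1], 1, 1)
    weight = mod(self.weight * self.scale, style)  # modulation only, no demod
    out = group_conv(input, weight) + self.bias    # 1x1 conv to image space
    if prev_output is not None:
        # Accumulate the skip connection from the previous resolution.
        out = out + self.upsample(prev_output)
    return out
```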
Upsample
Upsample(blur_kernel: List[int], factor: int)
Bases: nn.Module
Upsample, then apply a blurring FIR filter
Source code in stylegan2_torch/generator/rgb.py, lines 19–55
factor
instance-attribute
factor = factor
kernel
instance-attribute
kernel: Tensor = None
pad
instance-attribute
pad = (pad0, pad1)
forward
forward(input: Tensor) -> Tensor
Source code in stylegan2_torch/generator/rgb.py, lines 57–62
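A sketch of the upsample-and-blur operation: zero-stuffing followed by a depthwise FIR convolution. The padding arithmetic mirrors the common upfirdn2d formulation and is an assumption here:

```python
import torch
import torch.nn.functional as F
from torch import Tensor, nn

class Upsample(nn.Module):
    def __init__(self, blur_kernel: list, factor: int):
        super().__init__()
        self.factor = factor
        k = torch.tensor(blur_kernel, dtype=torch.float32)
        k = torch.outer(k, k)            # separable 1D taps -> 2D kernel
        k = k / k.sum() * factor ** 2    # compensate for inserted zeros
        self.register_buffer("kernel", k)
        p = k.shape[0] - factor
        self.pad = ((p + 1) // 2 + factor - 1, p // 2)

    def forward(self, input: Tensor) -> Tensor:
        n, c, h, w = input.shape
        # Zero-stuff: place samples on a grid `factor` pixels apart.
        out = input.new_zeros(n, c, h * self.factor, w * self.factor)
        out[:, :, :: self.factor, :: self.factor] = input
        out = F.pad(out, (self.pad[0], self.pad[1], self.pad[0], self.pad[1]))
        # Depthwise FIR blur, same kernel for every channel.
        weight = self.kernel.expand(c, 1, -1, -1)
        return F.conv2d(out, weight, groups=c)
```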