relbers / ada-conv-pytorch
License: MIT License
Hello, I was glad to see your article and found it very inspiring, but when I run the code you provided, I get:
ValueError: test_size=8 should be either positive and smaller than the number of samples 0 or a float in the (0, 1) range
Can you help me solve this problem?
Thanks for publishing this code! I've been tinkering around with AdaConv for months, and it's super helpful seeing somebody else's interpretation!
Something I've implemented in my code that you might consider is applying the kernel convolutions in one pass, by stacking all the channels together. Example:
def forward(self, style_encoding: torch.Tensor, predicted: torch.Tensor, thumb_stats=None):
    N, C, h, w = predicted.size()
    depthwise = self.depthwise_kernel_conv(style_encoding)
    depthwise = depthwise.view(N * self.c_out, self.c_in // self.n_groups, 5, 5)
    s_d = self.pointwise_avg_pool(style_encoding)
    pointwise_kn = self.pw_cn_kn(s_d).view(N * self.c_out, self.c_out // self.n_groups, 1, 1)
    pointwise_bias = self.pw_cn_bias(s_d).view(N * self.c_out)
    if self.norm:
        predicted = F.instance_norm(predicted)
    # Stack the batch along the channel axis so one grouped conv call
    # applies every sample's predicted kernels at once.
    predicted = predicted.view(1, N * C, h, w)
    content_out = nn.functional.conv2d(
        nn.functional.conv2d(self.pad(predicted),
                             weight=depthwise,
                             stride=1,
                             groups=self.batch_groups),
        stride=1,
        weight=pointwise_kn,
        bias=pointwise_bias,
        groups=self.batch_groups)
    content_out = content_out.permute([1, 0, 2, 3]).view(N, C, h, w)
    return content_out
I believe this achieves the same results without iteration.
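To convince yourself of the equivalence, here is a minimal standalone sketch (not from the repo; the sizes N, C, n_groups are illustrative assumptions) showing that folding the batch into the group dimension of a single conv2d call matches a per-sample loop of grouped convolutions:

```python
import torch
import torch.nn.functional as F

# Illustrative sizes; the per-sample kernels stand in for the
# style-predicted kernels in AdaConv.
torch.manual_seed(0)
N, C, H, W, n_groups = 2, 8, 6, 6, 4

x = torch.randn(N, C, H, W)
# One grouped kernel per sample: each has shape (C_out, C_in // n_groups, 3, 3)
kernels = torch.randn(N, C, C // n_groups, 3, 3)

# Reference: loop over the batch, one grouped conv per sample.
ref = torch.cat([
    F.conv2d(x[i:i + 1], kernels[i], padding=1, groups=n_groups)
    for i in range(N)
])

# Batched: stack samples along the channel axis and scale up the group count.
batched = F.conv2d(
    x.view(1, N * C, H, W),
    kernels.view(N * C, C // n_groups, 3, 3),
    padding=1,
    groups=N * n_groups,
).view(N, C, H, W)

assert torch.allclose(ref, batched, atol=1e-5)
```

The trick works because grouped convolution already partitions channels into independent slices, so giving each sample its own block of groups keeps the samples from mixing.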
In the paper the bias is added after the pointwise conv. Is there a reason why we do it before here?
TypeError: transfer_batch_to_device() takes 3 positional arguments but 4 were given
Thanks to the authors of this repo for their hard work.
I have a question about the implementation of StyleGAN2 with AdaConv. I hope somebody can help me understand how it is supposed to be implemented. StyleGAN2 takes the style as a 1-D vector, but AdaConv requires the style to be an image (a 2-D tensor). Is the reshape from the 1-D style to an image done once at the beginning, by replacing the Linear layers with Conv2d, or are the layers left unchanged except for the demodulation part, with the reshape done every time before it?
Thanks to the authors of this code repository for their hard work.
I have noticed that in the current implementation, the pointwise convolution is also generated and applied as a group convolution. For example, with input channels C_in and output channels C_out, the generated pointwise kernel has size C_out x C_in/N_g x 1 x 1. However, according to my understanding, I wonder if the pointwise convolution should instead have the same size as a normal convolution kernel, i.e. C_out x C_in x 1 x 1? Looking forward to your response. Thanks a lot.