Unofficial implementations of MLP-Mixer, gMLP, resMLP, Vision Permutator, S2MLP, S2MLPv2, RaftMLP, HireMLP, ConvMLP, AS-MLP, SparseMLP, ConvMixer, SwinMLP, RepMLPNet, WaveMLP, MorphMLP, DynaMixer, MS-MLP, and Sequencer2D in Jittor and PyTorch! Rearrange and Reduce in einops.layers.jittor are now supported! trunc_normal_ is supported for Jittor! The MLP paper is uploaded!
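The models listed above share one core idea, alternating an MLP over the token axis with an MLP over the channel axis. A minimal NumPy sketch of that token-/channel-mixing pattern (shapes and weight names here are illustrative assumptions, not the repo's actual layers):

```python
import numpy as np

# Hedged sketch of MLP-Mixer-style mixing: one matrix multiply mixes
# information across tokens (patches), another across channels.
tokens, dim = 16, 8
x = np.random.randn(tokens, dim)          # (tokens, dim) patch embeddings

W_token = np.random.randn(tokens, tokens)  # token-mixing weights (assumed shape)
W_chan = np.random.randn(dim, dim)         # channel-mixing weights (assumed shape)

token_mixed = W_token @ x                  # each output token sees every token
chan_mixed = token_mixed @ W_chan          # each output channel sees every channel
assert chan_mixed.shape == (tokens, dim)
```

The real blocks add layer norm, nonlinearities, and residual connections around these two multiplies; this sketch only shows the data flow.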
License: MIT License
Jupyter Notebook 3.18%
Python 96.82%
jittor-mlp's Introduction
Hi, I'm Ruiyang Liu
Things about me:
Master's student in computer vision.
Interested in both traditional computer vision and deep learning.
Fields including edge detection, basic image features, what a neural network learns, etc.
Currently focusing on discrete signal gradient calculation.
π Blog: https://liuruiyang98.github.io/
π CSDN: https://blog.csdn.net/baidu_36913330
jittor-mlp's People
Contributors
jittor-mlp's Issues
Hi, good to find your implementation!
Here, I wonder whether a padding=patch_size//2 is missing:
nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
Best
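Whether that padding matters depends on whether the input size is divisible by the patch size. A small sketch of the standard convolution output-size formula makes the difference concrete (sizes here are illustrative):

```python
# Conv output length: out = (in + 2*pad - kernel) // stride + 1
def conv_out(in_size, kernel, stride, pad):
    return (in_size + 2 * pad - kernel) // stride + 1

# With H divisible by patch_size, padding=patch_size//2 changes nothing:
assert conv_out(224, 7, 7, 0) == 32
assert conv_out(224, 7, 7, 3) == 32

# With H not divisible, the padded version keeps one extra patch of pixels:
assert conv_out(225, 7, 7, 0) == 32
assert conv_out(225, 7, 7, 3) == 33
```

So the question in this issue comes down to whether the paper's patch embedding is meant to cover edge pixels when the image size is not a multiple of the patch size.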
Thank you for coding this.
Have you compared against the reported results in the papers?
Hi, thank you for your great work! I think the value of dim should be 2.
self.cross_region_restoreH = CrossRegion(step=-cross_region_step, dim=3)
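The reasoning behind the suggested dim=2: in NCHW layout, H is axis 2 and W is axis 3, so a restore "along H" would shift axis 2. A hedged NumPy sketch, assuming CrossRegion behaves like a roll along one axis (the actual class may differ):

```python
import numpy as np

# Assumed CrossRegion-like behavior: a cyclic shift along one spatial axis.
x = np.arange(2 * 3 * 4 * 4).reshape(2, 3, 4, 4)  # (N, C, H, W)
step = 1

shifted_h = np.roll(x, shift=step, axis=2)        # shift along H (axis/dim 2)
restored = np.roll(shifted_h, shift=-step, axis=2)  # -step undoes the shift
assert np.array_equal(restored, x)
```

If the shift was applied along H (dim=2), restoring it along dim=3 would roll the wrong axis and not invert the operation, which is what this issue points out.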
Thank you for coding this!
I would like to know why this code is written this way and what the padding operation acts on.
class MorphFC(nn.Module):
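A common reason for padding in MorphFC-style layers is to make the spatial length divisible by the chunk length before reshaping into chunks. A minimal sketch of that pad computation (chunk_len and the sizes are illustrative assumptions, not taken from the repo):

```python
# Pad so `length` becomes a multiple of `chunk_len`; zero pad when it already is.
def pad_amount(length, chunk_len):
    return (chunk_len - length % chunk_len) % chunk_len

assert pad_amount(14, 4) == 2   # 14 -> 16, now divisible by 4
assert pad_amount(16, 4) == 0   # already divisible, no padding needed
```

If the actual MorphFC code follows this pattern, the padding acts on the spatial dimension being chunked, and it is removed again after the fully connected mixing.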