
DLGSANet: Lightweight Dynamic Local and Global Self-Attention Networks for Image Super-Resolution

This repository is the official PyTorch implementation of our paper, DLGSANet: Lightweight Dynamic Local and Global Self-Attention Networks for Image Super-Resolution.

Xiang Li¹, Jinshan Pan¹, Jinhui Tang¹, Jiangxin Dong¹

¹IMAG Lab, Nanjing University of Science and Technology

Abstract: We propose an effective lightweight dynamic local and global self-attention network (DLGSANet) to solve image super-resolution. Our method explores the properties of Transformers while having low computational costs. Motivated by the network designs of Transformers, we develop a simple yet effective multi-head dynamic local self-attention (MHDLSA) module to extract local features efficiently. In addition, we note that existing Transformers usually explore all similarities of the tokens between the queries and keys for the feature aggregation. However, not all the tokens from the queries are relevant to those in the keys, so using all the similarities does not effectively facilitate high-resolution image reconstruction. To overcome this problem, we develop a sparse global self-attention (SparseGSA) module to select the most useful similarity values so that the most useful global features can be better utilized for the high-resolution image reconstruction. We develop a hybrid dynamic-Transformer block (HDTB) that integrates the MHDLSA and SparseGSA for both local and global feature exploration. To ease the network training, we formulate the HDTBs into a residual hybrid dynamic-Transformer group (RHDTG). By embedding the RHDTGs into an end-to-end trainable network, we show that our proposed method has fewer network parameters and lower computational costs while achieving competitive performance against state-of-the-art methods in terms of accuracy.
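To make the SparseGSA idea above concrete, the following is a simplified PyTorch sketch, not the repository's exact module: channel-wise ("transposed") self-attention in the style of Restormer, with the softmax replaced by a ReLU so that non-positive query-key similarities are discarded, which is one way to realize the sparsity described above. The class name is ours, and the channel/head counts are illustrative.

```python
# Simplified sketch of a sparse global self-attention block (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseGSASketch(nn.Module):
    def __init__(self, dim: int, num_heads: int = 6):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Conv2d(dim, dim * 3, kernel_size=1, bias=False)
        self.dwconv = nn.Conv2d(dim * 3, dim * 3, kernel_size=3, padding=1,
                                groups=dim * 3, bias=False)
        self.project_out = nn.Conv2d(dim, dim, kernel_size=1, bias=False)
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q, k, v = self.dwconv(self.qkv(x)).chunk(3, dim=1)

        # Reshape to (batch, heads, channels-per-head, spatial) for channel attention.
        q = q.reshape(b, self.num_heads, c // self.num_heads, h * w)
        k = k.reshape(b, self.num_heads, c // self.num_heads, h * w)
        v = v.reshape(b, self.num_heads, c // self.num_heads, h * w)

        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)

        # Sparse attention: ReLU keeps only positive similarities instead of softmax.
        attn = F.relu((q @ k.transpose(-2, -1)) * self.temperature)
        out = (attn @ v).reshape(b, c, h, w)
        return self.project_out(out)


if __name__ == "__main__":
    x = torch.randn(1, 90, 48, 48)  # 90 channels and a 48x48 patch, chosen for illustration
    print(SparseGSASketch(dim=90)(x).shape)  # torch.Size([1, 90, 48, 48])
```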

Framework


Contents

The contents of this repository are as follows:

  1. Dependencies
  2. Train
  3. Test

Dependencies

  • Python
  • PyTorch (1.11 or 1.13)
  • basicsr
  • cupy-cuda

For more details on the dependencies, please refer to requirements.txt. A quick environment sanity check is sketched below.
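A minimal check (illustrative only; not part of the repository) that the packages listed above import correctly:

```python
# Quick environment sanity check; the cupy-cuda wheel must match your local CUDA toolkit.
import basicsr
import cupy
import torch

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("BasicSR:", basicsr.__version__)
print("CuPy   :", cupy.__version__)
```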

Train

# For X2
sh ./demo_sbatch_file/SISR_ClassicDIV2K/train_SISR_ClassicDIV2K_Large_90C6G4B_DLGSANet_SRx2_scratch_img_size_48_lr5e_4.sh

# For X3
sh ./demo_sbatch_file/SISR_ClassicDIV2K/train_SISR_ClassicDIV2K_Large_90C6G4B_DLGSANet_SRx3_scratch_img_size_48_lr5e_4.sh

# For X4
sh ./demo_sbatch_file/SISR_ClassicDIV2K/train_SISR_ClassicDIV2K_Large_90C6G4B_DLGSANet_SRx4_scratch_img_size_48_lr5e_4.sh

Test

# For X2
sh ./demo_sbatch_file/SISR_ClassicDIV2K/test_SISR_ClassicDIV2K_Large_90C6G4B_DLGSANet_SRx2_scratch_img_size_48_lr5e_4.sh

# For X3
sh ./demo_sbatch_file/SISR_ClassicDIV2K/test_SISR_ClassicDIV2K_Large_90C6G4B_DLGSANet_SRx3_scratch_img_size_48_lr5e_4.sh

# For X4
sh ./demo_sbatch_file/SISR_ClassicDIV2K/test_SISR_ClassicDIV2K_Large_90C6G4B_DLGSANet_SRx4_scratch_img_size_48_lr5e_4.sh
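Outside of the provided scripts, a single image can also be super-resolved directly from a released checkpoint. The sketch below is assumption-laden: the `DLGSANet` import path, constructor arguments, and checkpoint keys follow common BasicSR conventions and must be adapted to the actual code and option files in this repository.

```python
# Hypothetical single-image x4 inference sketch (adjust names/paths to this repo).
import torch
from torchvision.io import read_image
from torchvision.utils import save_image

from dlgsanet_arch import DLGSANet  # hypothetical module/class name

device = "cuda" if torch.cuda.is_available() else "cpu"
model = DLGSANet(upscale=4).to(device).eval()  # constructor arguments are assumptions

# BasicSR checkpoints usually store weights under the 'params' or 'params_ema' key.
ckpt = torch.load("DLGSANet_SRx4.pth", map_location=device)
model.load_state_dict(ckpt.get("params_ema", ckpt.get("params", ckpt)))

lr = read_image("input_LR.png").float().unsqueeze(0).to(device) / 255.0
with torch.no_grad():
    sr = model(lr).clamp(0, 1)
save_image(sr, "output_SRx4.png")
```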


Results

Pretrained models and visual results:

| Degradation | Model Zoo | Visual Results |
| --- | --- | --- |
| BI-Efficient SR | To-Do | To-Do |
| BI-Classic SR | To-Do | To-Do |
| BI-Classic SR (x4) | Google Drive / Baidu Netdisk (code: IMAG) | Google Drive / Baidu Netdisk (code: IMAG) |

Visual Results


To Do

  • Release the pre-trained models of the regular models

  • Release the visual results of BI super-resolution

Citation

If this work is helpful for your research, please consider citing the following BibTeX entry.

@article{li2023dlgsanet,
  title={DLGSANet: Lightweight Dynamic Local and Global Self-Attention Networks for Image Super-Resolution},
  author={Li, Xiang and Pan, Jinshan and Tang, Jinhui and Dong, Jiangxin},
  journal={arXiv preprint arXiv:2301.02031},
  year={2023}
}

Acknowledgement

The training code is built on BasicSR; we thank XPixelGroup for their outstanding contributions.

The following research forms the foundation for the MHDLSA implementation:

  • On the Connection between Local Attention and Dynamic Depth-wise Convolution (paper | GitHub)

And the following research forms the foundation for the SparseGSA implementation:

  • Restormer: Efficient Transformer for High-Resolution Image Restoration (paper | GitHub)

  • Improving Image Restoration by Revisiting Global Information Aggregation (paper | GitHub)

Contact

This repo is currently maintained by Xiang Li (@neonleexiang) and is for academic research use only.
