Face-parsing-pytorch (work in progress: description and implementation are not yet consistent)

PyTorch implementation of semantic segmentation models.

This repository aims to implement semantic segmentation models with PyTorch.

Table of Contents

  • Requirements
  • Models
  • Datasets
  • How-to-use
  • Module-description
  • Credits
  • Contribution
  • License

Requirements

  • Hardware (development environment)
    • CPU: Intel Core i7-9700
    • RAM: 32 GiB
    • GPU: NVIDIA GeForce RTX 3090
    • Storage: Samsung SSD 970 PRO 512 GB
  • Software
    • OS: Ubuntu (Primary), Windows (Secondary)
    • Miniconda (Python 3.8)
    • PyTorch 1.8.1 (CUDA 11.1)
  • Dependent packages
    • Matplotlib
    • PyYAML
    • Scikit-learn
    • Tensorboard
    • Tqdm
  • Useful packages

Models

This repository supports the following semantic segmentation models:

  • (U-Net) Convolutional Networks for Biomedical Image Segmentation [Paper]
  • (AR U-Net) Atrous Residual U-Net for Semantic Segmentation in Urban Street Scenes [Paper]
  • (DeepLab V3) Rethinking Atrous Convolution for Semantic Image Segmentation [Paper]
  • (DeepLab V3+) Encoder-Decoder with Atrous Separable Convolution for Semantic Segmentation [Paper]

Datasets

This repository supports the following datasets:

  • Cityscapes

    1. Download the dataset files (leftImg8bit_trainvaltest.zip and gtFine_trainvaltest.zip).

    2. Extract the downloaded files. The dataset structure is as follows:

      |-- data
      |  |-- cityscapes
      |  |  |-- gtFine
      |  |  |  |-- test
      |  |  |  |-- train
      |  |  |  |-- val
      |  |  |-- leftImg8bit
      |  |  |  |-- test
      |  |  |  |-- train
      |  |  |  |-- val
      |  |  |-- license.txt
      |  |  |-- README
      
    3. Download cityscapesScripts (or clone its repository) for inspection, preparation, and evaluation.

    4. Edit the script labels.py to specify the label numbers (train IDs).

    5. Edit the script createTrainIdLabelImgs.py to set the Cityscapes dataset path.

    6. Run the script createTrainIdLabelImgs.py to create annotations based on the training labels (see the example command after this list).

  • Pascal VOC 2012
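
  For the Cityscapes preparation (step 6 above), a typical invocation from a cloned cityscapesScripts checkout is sketched below; the dataset path is only an example, and recent versions of the script also read the CITYSCAPES_DATASET environment variable instead of a path edited into the file:

    export CITYSCAPES_DATASET=/path/to/data/cityscapes
    python cityscapesscripts/preparation/createTrainIdLabelImgs.py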

How-to-use

  1. Clone this repository.

    git clone https://github.com/synml/pytorch-semantic-segmentation
  2. Create and activate a new virtual environment with Miniconda.

    conda create -n [env_name, ex: torch] python=3.8
    conda activate [env_name, ex: torch]
  3. Install PyTorch.
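
     For the PyTorch 1.8.1 / CUDA 11.1 combination listed under Requirements, a conda command along the following lines has been used; verify the exact command for your CUDA version with the install selector on pytorch.org:

    conda install pytorch==1.8.1 torchvision==0.9.1 torchaudio==0.8.1 cudatoolkit=11.1 -c pytorch -c conda-forge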

  4. Install the dependent packages mentioned above.

    conda install matplotlib pyyaml scikit-learn tensorboard tqdm
  5. Prepare datasets.

  6. Customize the configuration file (config.yaml).

    dataset:
      image_size: 400x800	# rows x cols
      num_classes: 20	# 19 + 1 (background)
      num_workers: 8	# number of CPU cores
      root: ../../data/cityscapes	# dataset path
    
    model: UNet		# options [UNet, AR_UNet, DeepLabV3, DeepLabV3plus]
    amp_enabled: True	# Automatic Mixed Precision
    
    UNet:	# Match model name
      batch_size: 16
      epoch: 100
      optimizer:
        name: Adam	# options [SGD, Adam]
        lr: 0.001
        weight_decay: 0.00001
        <optimizer_kwarg1>: <value>	# any additional optimizer keyword arguments (optional)
      scheduler:
        name: ReduceLROnPlateau
        factor: 0.5
        patience: 5
        min_lr: 0.00005
      pretrained_weights: weights/UNet_best.pth
    
    Backbone:
      batch_size: 16
      epoch: 100
      optimizer:
        name: Adam
        lr: 0.001
        weight_decay: 0.00001
      scheduler:
        name: ReduceLROnPlateau
        factor: 0.5
        patience: 5
        min_lr: 0.00005
      pretrained_weights: weights/Backbone_val_best.pth
    
    Proposed:
      batch_size: 8
      epoch: 100
      optimizer:
        name: Adam
        lr: 0.0005
        weight_decay: 0.00001
      scheduler:
        name: ReduceLROnPlateau
        factor: 0.5
        patience: 5
        min_lr: 0.00005
      pretrained_weights: weights/Proposed_best.pth
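
  The YAML above is consumed by the training code. As a rough, hypothetical sketch (not the repository's actual code, and assuming placeholder lines have been removed from config.yaml), the per-model section could be mapped to PyTorch objects like this, with a placeholder Conv2d standing in for whichever model is selected:

    # Hypothetical sketch of reading config.yaml; the real train.py may differ.
    import yaml
    import torch

    with open('config.yaml') as f:
        cfg = yaml.safe_load(f)

    model_name = cfg['model']        # e.g. 'UNet'
    model_cfg = cfg[model_name]      # per-model section: batch_size, optimizer, scheduler, ...

    # Placeholder module for illustration only; the repository builds the selected model here.
    model = torch.nn.Conv2d(3, cfg['dataset']['num_classes'], kernel_size=1)

    opt_cfg = dict(model_cfg['optimizer'])
    opt_name = opt_cfg.pop('name')      # 'SGD' or 'Adam'
    optimizer = getattr(torch.optim, opt_name)(model.parameters(), **opt_cfg)

    sched_cfg = dict(model_cfg['scheduler'])
    sched_name = sched_cfg.pop('name')  # e.g. 'ReduceLROnPlateau'
    scheduler = getattr(torch.optim.lr_scheduler, sched_name)(optimizer, **sched_cfg)

  After config.yaml is edited, training is presumably launched with train.py (listed under Module-description below); check that script for the exact invocation and arguments.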
    

Module-description

  • backup.py

  • clean.py

  • demo.py

  • eval.py

  • exec_tensorboard.py

  • featurte_visualizer.py

  • train.py

  • train_interupter.ini

Credits

Contribution

  1. Fork this repository.
  2. Create a new branch or use the master branch.
  3. Commit modifications.
  4. Push on the selected branch.
  5. Please send a pull request.

License

You can find more information in LICENSE.
