
Satellite Image Land Cover Segmentation using U-net

This GitHub repository was developed by Srimannarayana Baratam and Georgios Apostolides as part of the Computer Vision by Deep Learning (CS4245) course offered at TU Delft. The code is implemented in PyTorch and uses the U-net architecture to perform multi-class semantic segmentation of satellite images. The repository from which our implementation is derived can be found [here]. The authors have also written a well-articulated blog post about the project: https://baratam-tarunkumar.medium.com/land-cover-classification-with-u-net-aa618ea64a1b

Google Colab Wrapper

To test the repository, a Google Colab wrapper is provided that explains in detail how to execute the code, along with insights. Download the "colab_wrapper.ipynb" file from the repository and open it in Colab; the instructions there explain how to clone this repository directly to your Drive and train using a GPU runtime.

Dataset

The dataset we use is taken from the 2018 DeepGlobe Land Cover Segmentation Challenge [DeepGlobeChallenges]. However, the challenge server is no longer available for submission and evaluation of solutions [DeepGlobe2018 Server], and the validation and test sets are not accompanied by labels. For this reason we use only the training set of the challenge and further split it into validation and test sets so that we can evaluate our solution. The original dataset can be downloaded from Kaggle [DeepGLobe2018 Original Dataset], and the version we use, already separated into the training/validation and test splits used for our model, can be downloaded from [Link].
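The split described above can be sketched as a simple shuffled partition of the image IDs. This is an illustrative helper, not the repository's actual splitting code; the fractions and seed are assumptions:

```python
import random

def split_dataset(image_ids, val_frac=0.1, test_frac=0.1, seed=42):
    """Shuffle the DeepGlobe training IDs and carve out val/test subsets."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for reproducibility
    n_test = int(len(ids) * test_frac)
    n_val = int(len(ids) * val_frac)
    test_ids = ids[:n_test]
    val_ids = ids[n_test:n_test + n_val]
    train_ids = ids[n_test + n_val:]
    return train_ids, val_ids, test_ids
```

Fixing the seed keeps the three subsets disjoint and stable across runs, which matters when comparing models trained at different times.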

Files Explanation

This section presents the different files in the repository and explains their functions.

| File Name | Explanation / Function |
| --- | --- |
| U-net_markdown.ipynb | Wrapper notebook used to run the train and predict scripts. |
| train.py | Trains the network. |
| predict.py | Runs prediction on the test dataset. |
| diceloss.py | Implements the Dice loss calculation. |
| classcount.py | Computes the weights for the weighted cross-entropy loss by counting the number of pixels of each class in the training set. |
| distribution.py | Computes the per-class pixel counts of the training and validation sets and visualizes them as a bar chart. |
| dataset.py | Data loader used during the training phase. |
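As context for diceloss.py, a minimal multi-class soft Dice loss can be sketched as follows. This is an illustrative reimplementation (in NumPy, for clarity), not the repository's exact code:

```python
import numpy as np

def dice_loss(probs, targets, eps=1e-6):
    """Soft Dice loss averaged over classes.

    probs:   (N, C, H, W) predicted class probabilities (e.g. after softmax)
    targets: (N, C, H, W) one-hot ground-truth masks
    """
    axes = (0, 2, 3)  # sum over batch and spatial dims, keeping the class axis
    intersection = np.sum(probs * targets, axis=axes)
    cardinality = np.sum(probs + targets, axis=axes)
    dice = (2.0 * intersection + eps) / (cardinality + eps)  # per-class Dice score
    return 1.0 - dice.mean()  # loss is 0 for a perfect prediction
```

The epsilon term keeps the ratio defined for classes absent from both prediction and ground truth; in that case the class scores a Dice of 1 and does not penalize the loss.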

Training

The following flags can be used while training the model.

Guidelines

-f : Used to load a previously saved model.
-e : Used to specify the number of training epochs.
-l : Used to specify the learning rate to be used for training.
-b : Used to specify the batch size.
-v : Used to specify the percentage of the validation split (1-100).
-s : Used to specify the scale of the image to be used for training.

Example:

Train the model for 100 epochs using 20% of the data as the validation split, a learning rate of 4x10^-5, a batch size of 2, and an image scale of 20%:

!python3 train.py -e 100 -v 20.0 -l 4e-5 -b 2 -s 0.2
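The flags above suggest an argparse setup along the following lines. The short flags come from the table; the long option names, dest names, and defaults are assumptions for illustration, not the repository's exact code:

```python
import argparse

def get_train_args(argv=None):
    parser = argparse.ArgumentParser(description='Train U-net for land cover segmentation')
    parser.add_argument('-f', '--load', type=str, default=None,
                        help='Path to a previously saved model to load')
    parser.add_argument('-e', '--epochs', type=int, default=5,
                        help='Number of training epochs')
    parser.add_argument('-l', '--learning-rate', type=float, default=1e-4, dest='lr',
                        help='Learning rate')
    parser.add_argument('-b', '--batch-size', type=int, default=1,
                        help='Batch size')
    parser.add_argument('-v', '--validation', type=float, default=10.0,
                        help='Validation split as a percentage (1-100)')
    parser.add_argument('-s', '--scale', type=float, default=0.5,
                        help='Downscaling factor applied to the images')
    return parser.parse_args(argv)
```

With this setup, the example command from the text parses into epochs=100, validation=20.0, lr=4e-5, batch_size=2, and scale=0.2.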

Prediction

Guidelines

-m : Used to specify the directory to the model.
-i : Used to specify the directory of the images to test the model on.
-o : Used to specify the directory in which the predictions will be outputted.
-s : Used to specify the scale of the images to be used for predictions. (For best results, use the same scale you used when training the model.)
--viz : Saves the predictions in the form of an image.

Note: Inference of scale 0.2 takes approximately 10 minutes.

Example

Make a prediction on the full test set using a model trained for 30 epochs on the full data at 20% scale. The script also outputs the model's IoU score.

%%time
!python predict.py -m data/checkpoints/model_ep30_full_data.pth -i data/<test_set_directory>/* -o predictions/ -s 0.2 --viz
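Regarding the IoU score the prediction script reports: a per-class intersection-over-union can be computed as below. This is an illustrative sketch assuming integer label maps, not the repository's exact evaluation code:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes, skipping classes absent from both maps.

    pred, target: integer label maps of the same shape.
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:  # class appears in neither map; excluded from the mean
            continue
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))
```

Skipping classes that are absent from both prediction and ground truth avoids rewarding or penalizing the model for classes that do not occur in a given tile.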
