
advaitsave / multiclass-semantic-segmentation-camvid
82 stars · 4 watchers · 41 forks · 230.72 MB

TensorFlow 2 implementation of a complete pipeline for multiclass image semantic segmentation using UNet, SegNet, and FCN32 architectures on the Cambridge-driving Labeled Video Database (CamVid) dataset.

Jupyter Notebook 100.00%
semantic-segmentation image-segmentation image-processing python segmentation camvid camvid-dataset unet unet-image-segmentation tensorflow tensorflow-2 keras segnet fcn

multiclass-semantic-segmentation-camvid's Introduction

Multiclass Semantic Segmentation using TensorFlow 2 GPU on the Cambridge-driving Labeled Video Database (CamVid)

This repository contains implementations of multiple deep learning models (U-Net, FCN32, and SegNet) for multiclass semantic segmentation of the CamVid dataset.

  • Implemented with the TensorFlow 2.0 Alpha GPU package
  • Contains a generalized computer vision project directory structure and an image processing pipeline for image classification/detection/segmentation

multiclass-semantic-segmentation-camvid's People

Contributors

advaitsave


multiclass-semantic-segmentation-camvid's Issues

mask and image not necessarily synchronized

def read_images(img_dir):
   ...
   file_list = [f for f in os.listdir(img_dir) if os.path.isfile(os.path.join(img_dir, f))]
   frames_list = [file for file in file_list if ('_L' not in file) and ('txt' not in file)]
   masks_list = [file for file in file_list if ('_L' in file) and ('txt' not in file)]

Adding

for i in range(10):
    print(f"{frames_list[i]} and {masks_list[i]}")

will show that the items can arrive out of order, as they did on my system:

0016E5_08115.png and 0016E5_08031_L.png
0016E5_08075.png and 0016E5_08007_L.png
0016E5_08001.png and 0016E5_08147_L.png
0016E5_07991.png and 0016E5_08107_L.png
0016E5_08157.png and 0016E5_08013_L.png
0016E5_08023.png and 0016E5_08111_L.png
0016E5_07963.png and 0016E5_07975_L.png
0016E5_08015.png and 0016E5_08067_L.png
0016E5_08139.png and 0016E5_07961_L.png
0016E5_08061.png and 0016E5_08149_L.png

A brute-force fix:

    masks_list = []
    frames_list = []
    for file in file_list:
        if ('_L' in file) and ('txt' not in file):
            # Derive the matching frame filename by stripping the "_L" suffix
            # just before the extension, e.g. 0016E5_08115_L.png -> 0016E5_08115.png
            index = file.rfind(".")
            frame_file = file[:index - 2] + file[index:]
            # Keep the pair only if the corresponding frame file actually exists
            if os.path.isfile(os.path.join(img_dir, frame_file)):
                masks_list.append(file)
                frames_list.append(frame_file)
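
A lighter-weight alternative (a sketch, not code from the repository) would be to sort both lists so that frame i and mask i refer to the same image; this assumes every frame has exactly one "_L" mask and vice versa:

# Hypothetical alternative: sort both lists so frame i pairs with mask i.
# Assumes every frame has exactly one "<name>_L.<ext>" mask and vice versa.
frames_list = sorted(f for f in file_list if ('_L' not in f) and ('txt' not in f))
masks_list = sorted(f for f in file_list if ('_L' in f) and ('txt' not in f))

# Optional sanity check that the pairing really is consistent
for frame, mask in zip(frames_list, masks_list):
    assert mask.replace('_L.', '.') == frame, (frame, mask)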

IoU score

I am just wondering how iou_score appears in the metrics during training when it is never declared anywhere in the notebook.
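
For reference, such a metric normally has to be defined (or imported, e.g. from the segmentation_models package) and passed to model.compile explicitly. A minimal hand-rolled sketch, assuming TF 2.x/Keras and one-hot encoded masks (this iou_score is a hypothetical helper, not necessarily the notebook's own definition):

from tensorflow.keras import backend as K

def iou_score(y_true, y_pred, smooth=1e-6):
    # Intersection-over-union computed on flattened one-hot masks
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    union = K.sum(y_true_f) + K.sum(y_pred_f) - intersection
    return (intersection + smooth) / (union + smooth)

# model.compile(optimizer='adam', loss='categorical_crossentropy',
#               metrics=['accuracy', iou_score])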

AttributeError: 'Iterator' object has no attribute 'next'

frame_batches = tf.compat.v1.data.make_one_shot_iterator(frame_tensors)  # outside of TF Eager, we would use make_one_shot_iterator
mask_batches = tf.compat.v1.data.make_one_shot_iterator(masks_tensors)

n_images_to_show = 5

for i in range(n_images_to_show):
    # Get the next image from iterator
    frame = frame_batches.next().numpy().astype(np.uint8)
    mask = mask_batches.next().numpy().astype(np.uint8)

For the above I get the following error:

AttributeError                            Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/tensorflow/python/keras/api/_v1/keras/models/__init__.py in
      4
      5 # Get the next image from iterator
----> 6 frame = frame_batches.next().numpy().astype(np.uint8)
      7 mask = mask_batches.next().numpy().astype(np.uint8)
      8

AttributeError: 'Iterator' object has no attribute 'next'
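
A likely cause is a TF version difference: the v1-style iterator returned by make_one_shot_iterator does not expose a .next() method in all releases. A hedged workaround, assuming TF 2.x with eager execution and that frame_tensors / masks_tensors are tf.data.Dataset objects, is to use Python's built-in iteration instead:

# Sketch of a workaround using standard Python iteration over tf.data.Datasets
frame_batches = iter(frame_tensors)
mask_batches = iter(masks_tensors)

for _ in range(n_images_to_show):
    frame = next(frame_batches).numpy().astype(np.uint8)
    mask = next(mask_batches).numpy().astype(np.uint8)

Alternatively, keeping the v1 iterators and calling frame_batches.get_next() instead of .next() should also work.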

TypeError: An op outside of the function building code is being passed a "Graph" tensor

I changed this model to 3 classes (filters=3, batch=1), trained on those 3 classes, and got the following error.

Train model

batch_size = 1
result = model.fit_generator(TrainAugmentGenerator(), steps_per_epoch=18,
                             validation_data=ValAugmentGenerator(),
                             validation_steps=validation_steps,
                             epochs=num_epochs, callbacks=callbacks)
model.save_weights("camvid_model_150_epochs.h5", overwrite=True)

output:
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: Shape_1:0
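
One commonly reported workaround for this class of error on TF 2.0/2.1 (not verified against this exact notebook) is to disable the experimental tf.function wrapping when compiling, so that the custom loss and metrics run eagerly:

# Hedged sketch: recompile with experimental_run_tf_function=False (a TF 2.0/2.1 compile flag).
# The loss and metric names here are assumptions, not necessarily the notebook's exact ones.
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'],
              experimental_run_tf_function=False)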

Preparing training data annotation

How do you recommend preparing training data in my own context, which is histology imaging (cells in a tissue)? Is there an easily accessible tool to annotate and prepare the masks required for running the code here?
Is there a way to incorporate hierarchy (groups of cells defining a compartment, thereby segmenting the compartment as well as the cells)?
Thanks

Trying to reproduce the notebook with Unet model

Hello!
I got an error at the beginning of model training, right after the following messages:

Found 81 images belonging to 1 classes.
Found 81 images belonging to 1 classes.
Epoch 1/100
16/18 [=========================>....] - ETA: 0s - loss: 0.3651 - tversky_loss: 31.2060 - dice_coef: 0.9260 - acc: 0.9066
InvalidArgumentError: 2 root error(s) found.

(0) Invalid argument: Input to reshape is a tensor with 65536 values, but the requested shape has 327680
[[{{node Adam/gradients/conv2d_18/Max_grad/Reshape}}]]

(1) Invalid argument: Input to reshape is a tensor with 65536 values, but the requested shape has 327680
[[{{node Adam/gradients/conv2d_18/Max_grad/Reshape}}]]

[[Adam/gradients/batch_normalization_17/cond_grad/If/then/_468/gradients/zeros_like_1/OptionalGetValue/_819]]

0 successful operations.
0 derived errors ignored. [Op:__inference_keras_scratch_graph_11182]

Nothing was changed in the code besides executing the tf.enable_eager_execution() function at the beginning of the notebook.

Using tensorflow-gpu 1.14.0

Can you help to resolve this error?

Error when reducing the number of classes from 32 to 8

Hey there,
Did anyone try to reduce the number of classes? In my case it is about reducing from 32 to 8. Is that possible at all with this implementation?

When I adjust the label_codes and label_names dictionaries to 8 entries and also change
"model = get_small_unet(n_filters = 32)"
to "model = get_small_unet(n_filters = 8)",
I get the error shown below.

I hope you understand my problem. If more information is required from my side, please tell me and I will provide it. I'm new to posting questions on GitHub ;)

The error I get when training the network:

Epoch 1/2
Found 3312 images belonging to 1 classes.
Found 3312 images belonging to 1 classes.

ValueError Traceback (most recent call last)
in ()
6 #result = model.fit_generator(TrainAugmentGenerator(), steps_per_epoch=18 ,
7 validation_data = ValAugmentGenerator(),
----> 8 validation_steps = validation_steps, epochs=num_epochs, callbacks=callbacks)
9 model.save_weights("camvid_model_150_epochs.h5", overwrite=True)

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training.pyc in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
1295 shuffle=shuffle,
1296 initial_epoch=initial_epoch,
-> 1297 steps_name='steps_per_epoch')
1298
1299 def evaluate_generator(self,

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training_generator.pyc in model_iteration(model, data, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, validation_freq, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch, mode, batch_size, steps_name, **kwargs)
263
264 is_deferred = not model._is_compiled
--> 265 batch_outs = batch_function(*batch_data)
266 if not isinstance(batch_outs, list):
267 batch_outs = [batch_outs]

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training.pyc in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
971 outputs = training_v2_utils.train_on_batch(
972 self, x, y=y, sample_weight=sample_weight,
--> 973 class_weight=class_weight, reset_metrics=reset_metrics)
974 outputs = (outputs['total_loss'] + outputs['output_losses'] +
975 outputs['metrics'])

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.pyc in train_on_batch(model, x, y, sample_weight, class_weight, reset_metrics)
251 x, y, sample_weights = model._standardize_user_data(
252 x, y, sample_weight=sample_weight, class_weight=class_weight,
--> 253 extract_tensors_from_dataset=True)
254 batch_size = array_ops.shape(nest.flatten(x, expand_composites=True)[0])[0]
255 # If model._distribution_strategy is True, then we are in a replica context

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training.pyc in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2536 # Additional checks to avoid users mistakenly using improper loss fns.
2537 training_utils.check_loss_and_target_compatibility(
-> 2538 y, self._feed_loss_fns, feed_output_shapes)
2539
2540 # If sample weight mode has not been set and weights are None for all the

/net/store/nbp/projects/affordance_prediction/alimberg/tf2/local/lib/python2.7/site-packages/tensorflow_core/python/keras/engine/training_utils.pyc in check_loss_and_target_compatibility(targets, loss_fns, output_shapes)
741 raise ValueError('A target array with shape ' + str(y.shape) +
742 ' was passed for an output of shape ' + str(shape) +
--> 743 ' while using as loss ' + loss_name + '. '
744 'This loss expects targets to have the same shape '
745 'as the output.')

ValueError: A target array with shape (5, 256, 256, 8) was passed for an output of shape (5, 256, 256, 32) while using as loss categorical_crossentropy. This loss expects targets to have the same shape as the output.
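
If it helps, the final error points at the model's output layer rather than at n_filters: n_filters only sets the width of the convolution blocks, while the last 1x1 convolution fixes the number of predicted classes. A hedged sketch of the adjustment, assuming get_small_unet ends in a softmax Conv2D layer (decoder_output and n_classes are hypothetical names, not the notebook's):

from tensorflow.keras.layers import Conv2D

# Hypothetical sketch: the final layer's channel count must match the
# one-hot depth of the target masks, i.e. 8 instead of 32.
n_classes = 8
outputs = Conv2D(n_classes, (1, 1), activation='softmax')(decoder_output)

# n_filters can stay at its original value:
# model = get_small_unet(n_filters=32)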

batch_shape causes multi gpu to fail

If you change

inputs = Input(batch_shape=(5, 256, 256, 3))

to

inputs = Input(shape=(256, 256, 3))

the multi-GPU setups can run. Also, the generator is then the only place where the batch size is specified (see the sketch below).
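
For illustration, a hedged sketch of how the shape-only Input could be combined with a multi-GPU setup via tf.distribute.MirroredStrategy (get_small_unet and the generators are the notebook's; the compile arguments are assumptions):

import tensorflow as tf

# Build and compile the model inside the strategy scope so it is replicated across GPUs
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = get_small_unet(n_filters=32)  # model now built with Input(shape=(256, 256, 3))
    model.compile(optimizer='adam',
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])

# The batch size is then controlled only by the data generators,
# e.g. TrainAugmentGenerator() / ValAugmentGenerator().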

This cannot work with tf 2.1

I tried to train this code with tf 2.1, but it did not work.
Why? I have no idea.
Could you please tell me how to fix it?
Thanks.

Custom dataset generation

Hi,
I want to generate masks like those in the CamVid dataset. Can you suggest some links or software with which such masks can be generated, i.e. masks containing colour labels for multiple classes? Please share your suggestions.
Thanks
