
erfnet's Issues

Bug when restoring training using "loadEpoch" or "loadLastEpoch"

I've found a bug that made the code overwrite the previous learning rate and the "best model" flags when resuming training with either of the "loadEpoch" or "loadLastEpoch" flags after a run was stopped. So if you had a training run that was stopped and resumed, its learning rate may have been corrupted (you can check this in the automated_log.txt generated for that run). This has been fixed in the latest commit.
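
For anyone affected, a minimal sketch (my illustration, not the repository's actual code) of the resume pattern the fix amounts to: persist the optimizer state and the best-IoU value inside the checkpoint, and restore both when resuming.

import torch

# Hypothetical helpers; names and checkpoint keys are my own.
def save_checkpoint(path, model, optimizer, epoch, best_iou):
    torch.save({
        'epoch': epoch,
        'state_dict': model.state_dict(),
        'optimizer': optimizer.state_dict(),  # carries the current learning rate
        'best_iou': best_iou,                 # so the "best model" flag survives
    }, path)

def resume(path, model, optimizer):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt['state_dict'])
    optimizer.load_state_dict(ckpt['optimizer'])  # restores the LR rather than overwriting it
    return ckpt['epoch'] + 1, ckpt['best_iou']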

About class weight training

Hi, I used ENet's calculate_class_weighting.py to generate the loss weights for my training,
and I found some problems.

First, the generated weights are:

0.0819, 0.4754, 0.1324, 1.5224, 1.5190, 2.4730, 8.1865, 5.2286, 0.1870, 1.4695, 0.6893, 1.9814, 7.8091, 0.4164, 1.3809, 1.1982, 0.6273, 5.3535, 4.0939

The spread of this distribution seems too large.

Second, when I use these weights in the loss for training, the mean IoU drops by 7%.
Could you give me some tips to help?
Thanks a lot!
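
One thing worth trying (my suggestion, not code from this repo): clamp and renormalize the weight vector before passing it to the loss, so the roughly 100x spread between the smallest and largest weights is reduced. A minimal PyTorch sketch, with illustrative clamp thresholds:

import torch

w = torch.tensor([0.0819, 0.4754, 0.1324, 1.5224, 1.5190, 2.4730, 8.1865,
                  5.2286, 0.1870, 1.4695, 0.6893, 1.9814, 7.8091, 0.4164,
                  1.3809, 1.1982, 0.6273, 5.3535, 4.0939])
w = w.clamp(min=0.25, max=4.0)  # bound the dynamic range (thresholds are not tuned)
w = w / w.mean()                # keep the overall loss magnitude comparable
criterion = torch.nn.CrossEntropyLoss(weight=w)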

Processing speed on CPU

I'm looking for a faster semantic segmentation architecture and have been comparing ENet and ERFNet.
In my understanding, ERFNet should run around two times faster than ENet according to the paper.
However, it runs about 7 to 8 times slower in my environment: about 2000 ms for ENet versus 14500 ms for ERFNet to process a Cityscapes image.
I know ERFNet itself is not likely to be the main cause, because I'm loading the model from OpenCV for Unity without a GPU, but I would appreciate any hints on running ERFNet at full speed.
Are the model (erfnet_scratch.net) and ERFNet optimized for GPUs such that, in theory, they can't run at full speed on a CPU alone?
Thank you in advance.
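
For what it's worth, here is a minimal CPU timing sketch using OpenCV's dnn module in Python (the file path and input size are just examples); it isolates the network's forward time from the rest of the pipeline:

import time
import cv2
import numpy as np

net = cv2.dnn.readNetFromTorch('erfnet_scratch.net')  # example path
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)

# Dummy half-resolution Cityscapes input; preprocessing values are placeholders.
blob = cv2.dnn.blobFromImage(np.zeros((512, 1024, 3), np.uint8),
                             scalefactor=1.0 / 255, size=(1024, 512))
net.setInput(blob)
net.forward()  # warm-up, excluded from timing

t0 = time.time()
for _ in range(10):
    net.forward()
print('mean forward time: %.1f ms' % ((time.time() - t0) / 10 * 1000))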

About the class weight?

Hi,
I see that you use the class weighting technique w_class = 1 / ln(c + p_class). How do you calculate p_class? Or could you provide the resulting w_class values?
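
For reference, a minimal sketch of that weighting, where p_class is each class's pixel frequency over the training set and c = 1.02 is the value used in the ENet paper (the counts below are toy numbers):

import numpy as np

def enet_class_weights(pixel_counts, c=1.02):
    p = pixel_counts / pixel_counts.sum()  # p_class: per-class pixel probability
    return 1.0 / np.log(c + p)             # w_class = 1 / ln(c + p_class)

counts = np.array([5e8, 1e8, 3e8, 1e6, 2e6])  # toy per-class pixel counts
print(enet_class_weights(counts))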

Question about parameter "res"

Hi,
I have tried the code with different relative resolutions ("res"). I found that with res = 0.5, the scores are as follows:

  • average row correct: 82.642014403092%
  • average rowUcol correct (VOC measure): 71.458449646046%

But when I tried res = 0.25 and res = 1.0, the scores differ to a large extent.

res = 0.25:

  • average row correct: 73.630133584926%
  • average rowUcol correct (VOC measure): 59.288728394006%

res = 1.0:

  • average row correct: 66.633851277201%
  • average rowUcol correct (VOC measure): 52.937617231356%

Why is there such a large difference between these results?

Thanks.
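
Not necessarily the cause here, but one general pitfall worth ruling out when changing the evaluation resolution: ground-truth label maps must be rescaled with nearest-neighbor interpolation, because any smoothing interpolation creates invalid class IDs along object boundaries. A sketch with a hypothetical file name:

from PIL import Image

label = Image.open('gt_labelIds.png')  # hypothetical ground-truth label map
w, h = label.size
label_half = label.resize((w // 2, h // 2), Image.NEAREST)  # never bilinear for labels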

Error when running eval_cityscapes_server.lua

Hi,
When I try to run eval_cityscapes_server.lua to test on the VAL subset of the Cityscapes dataset (with CUDA 8.0.61, cuDNN 5.1.1 and Torch7), the following error occurs:

THCudaCheck FAIL file=/home/ted/dl_framework/torch/extra/cunn/lib/THCUNN/generic/SpatialDilatedMaxPooling.cu line=152 error=8 : invalid device function
/home/ted/dl_framework/torch/install/bin/luajit: ...l_framework/torch/install/share/lua/5.1/nn/Container.lua:67:
In 1 module of nn.Sequential:
In 1 module of nn.Sequential:
In 2 module of nn.ConcatTable:
In 1 module of nn.Sequential:
...ted/dl_framework/torch/install/share/lua/5.1/nn/THNN.lua:110: cuda runtime error (8) : invalid device function at /home/ted/dl_framework/torch/extra/cunn/lib/THCUNN/generic/SpatialDilatedMaxPooling.cu:152
stack traceback:
[C]: in function 'v'
...ted/dl_framework/torch/install/share/lua/5.1/nn/THNN.lua:110: in function 'SpatialMaxPooling_updateOutput'
...ork/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:47: in function <...ork/torch/install/share/lua/5.1/nn/SpatialMaxPooling.lua:31>
[C]: in function 'xpcall'
...l_framework/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
..._framework/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function <..._framework/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
...l_framework/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
...framework/torch/install/share/lua/5.1/nn/ConcatTable.lua:11: in function <...framework/torch/install/share/lua/5.1/nn/ConcatTable.lua:9>
[C]: in function 'xpcall'
...l_framework/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
..._framework/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function <..._framework/torch/install/share/lua/5.1/nn/Sequential.lua:41>
[C]: in function 'xpcall'
...l_framework/torch/install/share/lua/5.1/nn/Container.lua:63: in function 'rethrowErrors'
..._framework/torch/install/share/lua/5.1/nn/Sequential.lua:44: in function 'forward'
eval_cityscapes_server.lua:83: in main chunk
[C]: in function 'dofile'
...work/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00405d50

How can I solve this problem? Thanks, best regards.

About these models' prototxts

Hi all,
I want to reproduce these two experiments on Cityscapes. Could you provide the Caffe prototxts of the models?

About the IoU?

I compared your code with LinkNet (https://github.com/e-lab/LinkNet) and found that the IoU metric is computed differently.
In your code, teconfusion.averageUnionValid is taken as the IoU.
In the LinkNet code, teconfusion.averageValid is taken as the IoU.
So which one is right?

For reference, I modified the code as follows:

local IoU = teconfusion.averageValid * 100
local iIoU = torch.sum(teconfusion.unionvalids)/#opt.dataconClasses * 100
local GAcc = teconfusion.totalValid * 100
print(string.format('\nIoU: %2.2f%% | iIoU : %2.2f%% | AvgAccuracy: %2.2f%%', IoU, iIoU, GAcc))

Then I get (for erfnet_pretrained.net):
IoU: 82.56% | iIoU : 71.33% | AvgAccuracy: 95.04%
And with your metric:
test_acc= (teconfusion.totalValid~=nil and teconfusion.totalValid * 100.0 or -1)
test_iou= (teconfusion.averageUnionValid~=nil and teconfusion.averageUnionValid * 100.0 or -1)
print (string.format("[test-acc, test-IoU]: [\27[33m%.2f%%, \27[31m%.2f%%]", test_acc, test_iou))

Output:
[test-acc, test-IoU]: [95.04%, 71.33%]
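
For reference: averageValid is the mean per-class accuracy (diagonal over row sums), whereas averageUnionValid is the mean intersection-over-union, which is what the Cityscapes benchmark reports as IoU. A toy numpy sketch of the difference (not the Torch optim.ConfusionMatrix API; rows are targets, columns are predictions):

import numpy as np

conf = np.array([[50.0,  5.0,  0.0],
                 [10.0, 30.0,  5.0],
                 [ 0.0,  5.0, 45.0]])  # toy 3-class confusion matrix

tp = np.diag(conf)
acc = tp / conf.sum(axis=1)                            # per class; mean ~ averageValid
iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)  # per class; mean ~ averageUnionValid
print('mean class accuracy: %.2f%%' % (100 * acc.mean()))
print('mean IoU:            %.2f%%' % (100 * iou.mean()))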

CPU evaluation error when loading a GPU-trained model

[screenshot]
I used the CPU to evaluate the trained model, but the result was wrong; I checked and found that the model was not loading the weights correctly. However, evaluation in CPU mode succeeded once I changed the code.

pretrainedEnc = next(pretrainedEnc.children()).features.encoder

Hello,
what does this line do?

pretrainedEnc = next(pretrainedEnc.children()).features.encoder

It is in main.py in the train folder:

def main(args):
    savedir = f'../save/{args.savedir}'

    if not os.path.exists(savedir):
        os.makedirs(savedir)

    with open(savedir + '/opts.txt', "w") as myfile:
        myfile.write(str(args))

    #Load Model
    assert os.path.exists(args.model + ".py"), "Error: model definition not found"
    model_file = importlib.import_module(args.model)
    model = model_file.Net(NUM_CLASSES)
    copyfile(args.model + ".py", savedir + '/' + args.model + ".py")

    if args.cuda:
        model = torch.nn.DataParallel(model).cuda()

    if args.state:
        #if args.state is provided then load this state for training
        #Note: this only loads initialized weights. If you want to resume a training use "--resume" option!!
        """
        try:
            model.load_state_dict(torch.load(args.state))
        except AssertionError:
            model.load_state_dict(torch.load(args.state,
                map_location=lambda storage, loc: storage))
        #When model is saved as DataParallel it adds a model. to each key. To remove:
        #state_dict = {k.partition('model.')[2]: v for k,v in state_dict}
        #https://discuss.pytorch.org/t/prefix-parameter-names-in-saved-model-if-trained-by-multi-gpu/494
        """
        def load_my_state_dict(model, state_dict):  #custom function to load model when not all dict keys are there
            own_state = model.state_dict()
            for name, param in state_dict.items():
                if name not in own_state:
                    continue
                own_state[name].copy_(param)
            return model

        #print(torch.load(args.state))
        model = load_my_state_dict(model, torch.load(args.state))

    """
    def weights_init(m):
        classname = m.__class__.__name__
        if classname.find('Conv') != -1:
            #m.weight.data.normal_(0.0, 0.02)
            n = m.kernel_size[0] * m.kernel_size[1] * m.out_channels
            m.weight.data.normal_(0, math.sqrt(2. / n))
        elif classname.find('BatchNorm') != -1:
            #m.weight.data.normal_(1.0, 0.02)
            m.weight.data.fill_(1)
            m.bias.data.fill_(0)

    #TO ACCESS MODEL IN DataParallel: next(model.children())
    #next(model.children()).decoder.apply(weights_init)
    #Reinitialize weights for decoder

    next(model.children()).decoder.layers.apply(weights_init)
    next(model.children()).decoder.output_conv.apply(weights_init)

    #print(model.state_dict())
    f = open('weights5.txt', 'w')
    f.write(str(model.state_dict()))
    f.close()
    """

    #train(args, model)
    if (not args.decoder):
        print("========== ENCODER TRAINING ===========")
        model = train(args, model, True) #Train encoder
    #CAREFUL: for some reason, after training encoder alone, the decoder gets weights=0.
    #We must reinit decoder weights or reload network passing only encoder in order to train decoder
    print("========== DECODER TRAINING ===========")
    if (not args.state):
        if args.pretrainedEncoder:
            print("Loading encoder pretrained in imagenet")
            from erfnet_imagenet import ERFNet as ERFNet_imagenet
            pretrainedEnc = torch.nn.DataParallel(ERFNet_imagenet(1000))
            pretrainedEnc.load_state_dict(torch.load(args.pretrainedEncoder)['state_dict'])
            pretrainedEnc = next(pretrainedEnc.children()).features.encoder
            if (not args.cuda):
                pretrainedEnc = pretrainedEnc.cpu()  #because loaded encoder is probably saved in cuda
        else:
            pretrainedEnc = next(model.children()).encoder
        model = model_file.Net(NUM_CLASSES, encoder=pretrainedEnc)  #Add decoder to encoder
        if args.cuda:
            model = torch.nn.DataParallel(model).cuda()
        #When loading encoder reinitialize weights for decoder because they are set to 0 when training dec
    model = train(args, model, False)  #Train decoder
    print("========== TRAINING FINISHED ===========")

Error in the decoder training part

Hello,
thanks for your great work. I trained the encoder for 150 epochs (I had to stop the training once and resume it to finish the 150 epochs), but then I got an error immediately after the training switched to the decoder part:

resume option was used but checkpoint was not found in folder

I'd appreciate your help.
