xy-guo / gwcnet
Group-wise Correlation Stereo Network, CVPR 2019
License: MIT License
I cloned your repository, downloaded the KITTI 2012/2015 checkpoints, and the following script gives me an error:
import torch
from models import gwcnet
save = torch.load("./checkpoints/kitti15/gwcnet-g/best.ckpt")
model = gwcnet.GwcNet_G(192)
model.load_state_dict(save['model'])
A similar error occurs for gwcnet-gc:
save = torch.load("./checkpoints/kitti12/gwcnet-gc/best.ckpt")
model = gwcnet.GwcNet_GC(192)
model.load_state_dict(save['model'])
Here is the error:
Unexpected key(s) in state_dict: "module.feature_extraction.firstconv.0.0.weight", "module.feature_extraction.firstconv.0.1.weight", "module.feature_extraction.firstconv.0.1.bias", "module.feature_extraction.firstconv.0.1.running_mean", "module.feature_extraction.firstconv.0.1.running_var", [...], "module.classif3.0.1.running_mean", "module.classif3.0.1.running_var", "module.classif3.2.weight".
(The full key list is elided here; every unexpected key begins with the "module." prefix.)
My PyTorch version: 1.7.1; Python: 3.8.4.
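Worth noting: every unexpected key starts with "module.", the prefix that torch.nn.DataParallel adds when a wrapped model is saved. A minimal sketch of one possible fix, stripping the prefix before loading (the helper name is mine, not from the repo):

```python
def strip_module_prefix(state_dict):
    """Remove the leading 'module.' that nn.DataParallel adds to every key."""
    return {
        (k[len("module."):] if k.startswith("module.") else k): v
        for k, v in state_dict.items()
    }

# Usage, following the snippet above:
#   save = torch.load("./checkpoints/kitti15/gwcnet-g/best.ckpt", map_location="cpu")
#   model = gwcnet.GwcNet_G(192)
#   model.load_state_dict(strip_module_prefix(save["model"]))
```

Alternatively, wrapping the model in torch.nn.DataParallel before calling load_state_dict should also accept the prefixed keys.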
Hello, I loaded one of my own images, and at runtime I hit a dimension mismatch inside the forward function of the hourglass class in gwcnet.py:
"RuntimeError: The size of tensor a (82) must match the size of tensor b (81) at non-singleton dimension 4"
I printed the sizes of every tensor in the forward function and found the problem:
def forward(self, x):
    conv1 = self.conv1(x)      # torch.Size([1, 64, 24, 70, 81])
    conv2 = self.conv2(conv1)  # torch.Size([1, 64, 24, 70, 81])
    conv3 = self.conv3(conv2)  # torch.Size([1, 128, 12, 35, 41])
    conv4 = self.conv4(conv3)  # torch.Size([1, 128, 12, 35, 41])
    # print(self.conv5(conv4).size())   # torch.Size([1, 64, 24, 70, 82]) <- from [1, 128, 12, 35, 41]
    # print(self.redir2(conv2).size())  # torch.Size([1, 64, 24, 70, 81])
    conv5 = F.relu(self.conv5(conv4) + self.redir2(conv2), inplace=True)
    conv6 = F.relu(self.conv6(conv5) + self.redir1(x), inplace=True)
    return conv6
The last dimension of self.conv5(conv4) and self.redir2(conv2) differ; how should I fix this?
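One likely cause (my assumption, not confirmed by the author): the input size is not divisible enough for the downsample/upsample path. Features enter the hourglass at 1/4 resolution and are halved again inside it, so an odd intermediate width (here 81, halved to 41, then deconvolved back to 82) produces the off-by-one. Padding the input so both dimensions are multiples of 32, as the repo's KITTI loader effectively does with its 1248x384 target, avoids this. A small sketch of the required padding:

```python
def pad_amounts(h, w, multiple=32):
    """Extra rows/cols needed to round (h, w) up to the next multiple."""
    return ((multiple - h % multiple) % multiple,
            (multiple - w % multiple) % multiple)

# Usage (hypothetical): pad top and right like the KITTI loader does, e.g.
#   top_pad, right_pad = pad_amounts(h, w)
#   img = F.pad(img, (0, right_pad, top_pad, 0))  # (left, right, top, bottom)
```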
I just finished running my modified PSMNet code, but my GPU is limited, so batch_size = 1 and the results are not very good.
Is it expected that batch_size = 1 gives poor results?
Also, with batch_size = 1, does BatchNormalization still have any effect?
One more question: why not also pretrain on KITTI 2012 and then train on KITTI 2015? I plan to try this.
Sorry for the many questions; thank you very much for taking the time to answer.
I would like to ask the author: the images used for network training are 960x540, but the photos from my camera have a much higher resolution, such as 4608x3456. When I test on such an image directly, the disparity map is very bad. When I downscale the original image to 960x540, the disparity map looks good, but I don't know how to recover the disparity values at the original resolution. Alternatively, how can I use your network to test higher-resolution images without enough similar data to train on? I hope the author can offer some guidance; thank you very much!
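Not an official answer, but disparity is measured in pixels, so it scales linearly with image width: if the network predicts on a 960-wide resize of a 4608-wide photo, the predicted values can be multiplied by the width ratio and the map upsampled back to the original size. A sketch of the value scaling (the spatial upsampling itself, e.g. bilinear interpolation of the map, is omitted):

```python
def rescale_disparity_value(disp_value, orig_w, resized_w):
    """Scale a disparity predicted at resized_w back to orig_w pixel units."""
    return disp_value * (orig_w / resized_w)

# e.g. a disparity of 10 px predicted at width 960 corresponds to
# 10 * 4608 / 960 = 48 px at the original 4608-wide resolution.
```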
Hi,
I am getting the following error when using the KITTI 2015 checkpoint:
RuntimeError: Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.feature_extraction.lastconv.0.0.weight", "module.feature_extraction.lastconv.0.1.weight", "module.feature_extraction.lastconv.0.1.bias", "module.feature_extraction.lastconv.0.1.running_mean", "module.feature_extraction.lastconv.0.1.running_var", "module.feature_extraction.lastconv.2.weight".
size mismatch for module.dres0.0.0.weight: copying a param with shape torch.Size([32, 40, 3, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 64, 3, 3, 3]).
Firstly, thank you for publishing your code.
I used your pre-trained KITTI 2012 model (best.ckpt) to evaluate the KITTI 2012 validation set (14 image pairs) and obtained the following results for gwcnet-gc:
avg_test_scalars {'D1': [0.024143276503309608], 'EPE': [1.3547629032816206], 'Thres1': [0.6347695120743343], 'Thres2': [0.06658247805067471], 'Thres3': [0.02868282133048134]}
However, this is different from the data reported in your paper.
In your paper, the results are:
Gwc40-Cat24: for kitti2012, EPE(px):0.659, D1-all(%):2.10
The results of this experiment confused me. Am I doing something wrong?
Dear Guo,
Is there a pretrained model provided?
Could you please provide a pre-trained model?
Thanks.
I am trying to use GwcNet to test other KITTI datasets, but the image size is 1392*512.
When I run the program, I get the error: assert top_pad > 0 and right_pad > 0.
I found that:
top_pad = 384 - h
right_pad = 1248 - w
assert top_pad > 0 and right_pad > 0
Must the image size be smaller than 1248*384?
What will happen if I comment out the assert, like this:
#assert top_pad > 0 and right_pad > 0
Thank you very much.
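If the assert is simply commented out, top_pad and right_pad become negative for images wider than 1248 or taller than 384, and the subsequent padding/cropping breaks. A sketch of one possible workaround (my own helper, not from the repo): compute the pad target as the next multiple of 32 at or above the actual size instead of the hard-coded 1248x384:

```python
def pad_target(h, w, multiple=32):
    """top_pad, right_pad rounding (h, w) up to the next multiple of `multiple`."""
    top_pad = (multiple - h % multiple) % multiple
    right_pad = (multiple - w % multiple) % multiple
    return top_pad, right_pad

# For a 1392x512 image: 512 is already a multiple of 32, and 1392 rounds up
# to 1408, so the pads are (0, 16) and are never negative.
```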
I would like to test this network on the Middlebury dataset; can I also use it to train on Middlebury?
I submitted my result to stereo 2015 and then received an email from "KITTI Evaluation Benchmark" with a link to check the result. But when I click the link, it shows the error: ERROR: Result key 3708644dc4ab21dc7e6405e2d0c2d31f835 not registered with user fe4d4b5364cb33b1b410d3d27e2aae1. I don't know how to solve this. Could you give me some advice? Thank you!
PS: my zip only includes the disp_0 folder, because I only want to submit the stereo 2015 test-set result.
Thank you for releasing the code. How do you save the disparity maps generated on the SceneFlow dataset?
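Not the author's answer, but one common convention (the one the KITTI devkit uses) is to store predicted disparities as 16-bit PNGs with values scaled by 256; SceneFlow's own ground truth ships as .pfm floats. A sketch of the scaling round trip (the image I/O itself, e.g. writing a uint16 array with skimage.io.imsave, is omitted):

```python
def disp_to_uint16(d):
    """KITTI-style encoding: disparity in pixels -> 16-bit value (d * 256)."""
    return int(round(d * 256.0))

def uint16_to_disp(v):
    """Decode back to pixels; the round trip is exact to within 1/256 px."""
    return v / 256.0
```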
I tried to train on a TITAN (12 GB total), but it always runs out of memory, even with batch_size = 1. Is that normal?
For the KITTI 2015 evaluation metrics, does All (%) correspond to the ground truth disp_occ_0 (and disp_occ_1), and Noc (%) to the ground truth disp_noc_0 (and disp_noc_1)?
Your code already provides the function for the D1 metric, but how are D1-bg, D1-fg, and D1-all computed? Is there a corresponding method? I really don't understand; I'd appreciate your guidance!
Thanks!
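For reference, my paraphrase of the published KITTI 2015 definition (worth double-checking against the official devkit): a pixel counts as a D1 outlier when its disparity error exceeds both 3 px and 5% of the ground-truth disparity; D1-bg, D1-fg, and D1-all are this outlier rate restricted to background pixels, foreground (vehicle) pixels, and all valid pixels, using the object masks the devkit provides. A minimal sketch:

```python
def d1_outlier_rate(disp_est, disp_gt):
    """Fraction of valid pixels whose error is > 3 px AND > 5% of the GT value."""
    errs = [abs(e - g) for e, g in zip(disp_est, disp_gt)]
    outliers = sum(1 for err, g in zip(errs, disp_gt)
                   if err > 3.0 and err > 0.05 * g)
    return outliers / len(errs)

# Running this over bg-masked pixels gives D1-bg, over fg pixels D1-fg, etc.
```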
Hello, is the pre-trained model provided in the README trained on the SceneFlow dataset?
Hello Mr. Guo, could you provide the model you trained on SceneFlow? I found that the model finetuned on KITTI tends to produce blurry predictions, for example when tested on the Middlebury dataset, while the results from training only on SceneFlow seem more accurate on some real-world datasets.
Hi, we use the proposed model loss to optimize, but it stops converging at about 0.21 in the second epoch. Is this a bug, or does it simply converge slowly at this stage of training? Thanks.
I used your save_disp.py to run the KITTI 2012 dataset, but when I submit the results to the KITTI benchmark, the website keeps reporting a problem with the data format. Have you had a similar problem?
Hello, and please forgive me for asking in Chinese (my English is poor).
1. For the SceneFlow dataset, EPE (i.e., MAE) is widely used as the evaluation metric, and the evaluation function can be implemented in the code.
2. For KITTI 2012, the metrics include the >2px, >3px, >4px, >5px error rates and error pixel counts, plus Mean Error, for both Noc and Occ (All). Do all of these need to be implemented as functions in my own code, or must they be generated by submitting to the official KITTI website?
3. For KITTI 2015, the metrics include the D1-bg, D1-fg, and D1-all error rates for All (Occ) and Noc. Do I implement these myself, or must I submit to the official KITTI website?
4. For KITTI 2012, all of the metrics can be implemented in my own code, but for KITTI 2015 I cannot work out the evaluation code myself. How are D1-bg, D1-fg, and D1-all computed?
5. Also, for publishing a paper, must the KITTI 2012/2015 numbers come from the official KITTI website?
I am still quite confused about the above. The KITTI website apparently cannot be used for debugging: each person can only submit once within a fixed period, and multiple accounts are not allowed.
So for these evaluation questions, I hope the author can spare some time to advise. Many thanks in advance!
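As a point of reference for question 1, EPE is indeed just the mean absolute disparity error over valid pixels, so it can be computed locally; a one-function sketch:

```python
def epe(disp_est, disp_gt):
    """End-point error: mean |est - gt| over valid (ground-truth) pixels."""
    return sum(abs(e - g) for e, g in zip(disp_est, disp_gt)) / len(disp_gt)
```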
Hi,
Could you please provide the pretrained SceneFlow checkpoint file "pretrained.ckpt"?
I will be very thankful to you.
Thanks in advance.
Hello, and thank you very much for sharing your code! I'm very excited!
For the SceneFlow dataset you use Finalpass. I would like to use Cleanpass, but I don't know how to set up the directory structure; the script keeps reporting that the directory cannot be found.
I hope you can answer this. Thank you!
Welcome to join our deep-learning binocular stereo vision group; QQ group number: 1018698420.
Hello, thank you for your excellent work. I'm a beginner and have run into the problem below; do you know how to fix it?
result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'right'
I am a freshman.
I use PyTorch 1.8.0 but got this error and do not know how to fix it:
Traceback (most recent call last):
  File "main.py", line 201, in <module>
    train()
  File "main.py", line 101, in train
    loss, scalar_outputs, image_outputs = train_sample(sample, compute_metrics=do_summary)
  File "main.py", line 159, in train_sample
    image_outputs["errormap"] = [disp_error_image_func()(disp_est, disp_gt) for disp_est in disp_ests]
  File "main.py", line 159, in <listcomp>
    image_outputs["errormap"] = [disp_error_image_func()(disp_est, disp_gt) for disp_est in disp_ests]
  File "/home/rc/anaconda3/envs/DL/lib/python3.7/site-packages/torch/autograd/function.py", line 262, in __call__
    "Legacy autograd function with non-static forward method is deprecated. "
RuntimeError: Legacy autograd function with non-static forward method is deprecated. Please use new-style autograd function with static forward method. (Example: https://pytorch.org/docs/stable/autograd.html#torch.autograd.Function)
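The message comes from newer PyTorch versions dropping "legacy" autograd functions: disp_error_image_func is instantiated and called like an old-style Function. The fix is to give the Function static forward/backward methods and call .apply(...). A minimal sketch of the pattern (the body here is a stand-in, not the repo's actual error-map logic):

```python
import torch

class DispErrorImage(torch.autograd.Function):
    @staticmethod
    def forward(ctx, disp_est, disp_gt):
        # Stand-in body: the real function renders a colored error map.
        return (disp_est - disp_gt).abs()

    @staticmethod
    def backward(ctx, grad_output):
        # Visualization only: no gradients flow back to either input.
        return None, None

# old call site: disp_error_image_func()(disp_est, disp_gt)
# new call site: DispErrorImage.apply(disp_est, disp_gt)
```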
Thank you for your code!
In your paper, you mention that the KITTI 2012 and KITTI 2015 experiments are finetuned from the model pre-trained on SceneFlow. Downloading the SceneFlow dataset is very slow, so I want to train the network on KITTI 2012 directly.
However, I could not find the pre-trained SceneFlow model; could you please provide it? Thank you very much!
Hello, when I run inference on the SceneFlow dataset using your pretrained model, the result differs from your paper.
The EPE I get is 0.823, while the paper reports 0.765. I ran inference on both a 2080 Ti and a 3090 and got the same result (0.823). Could you please tell me what might be wrong?
The weights I used are "./checkpoint_sceneflow/sceneflow/gwcnet-gc/checkpoint_000015.ckpt".
My training setup: batch size 12 on a 2080 Ti, trained for 16 epochs, but all my results are slightly worse than the paper's. The paper's numbers versus mine (in parentheses) are: >1px = 8.03 (8.42), >2px = 4.47 (4.75), >3px = 3.30 (3.56), EPE = 0.765 (0.864).
Why might this be? I would appreciate your advice; thank you!
The training time does not decrease when I increase the batch size. When I remove the last two hourglass modules and set batch size = 4, one epoch on the SceneFlow dataset takes about 3 hours on one 3090 GPU. When I set batch size = 8, the total training time per epoch does not decrease. Why?