I followed all the steps in the document, but while running the command for the `module_training` phase I get this error:
`Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same`
on the line `output_dict = self._nmn(batch["image"], pg_output_dict["predictions"], batch["answer"])` in `module_training_trainer.py`.
When I change `batch["image"]` to `batch["image"].float()`, I get a different error: `TypeError: zip argument #1 must support iteration`.
I tried several ways to convert `batch["image"]` to `torch.cuda.FloatTensor`, but each time I get an error saying it is not iterable. I also tried changing the dataset's `__getitem__` to return the image as float, but it still does not work.
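For reference, here is a minimal sketch (hypothetical, outside the repo) reproducing what I understand the dtype mismatch to be: a `Conv2d`'s weights are float32 by default, so feeding it a float64 tensor raises the same kind of error, and casting the input to float32 before the forward pass resolves it.

```python
import torch

# A Conv2d's weights are float32 by default.
conv = torch.nn.Conv2d(3, 8, kernel_size=3)

# A float64 (double) input, like batch["image"] in my case.
image = torch.rand(1, 3, 16, 16, dtype=torch.float64)

try:
    conv(image)  # raises the input/weight dtype mismatch error
except RuntimeError as e:
    print("mismatch error:", e)

# Casting the input to float32 before the forward pass resolves the mismatch.
out = conv(image.float())
print(out.dtype)  # torch.float32
```

So casting at some point before the forward pass should in principle work, which is why the `zip argument #1 must support iteration` error after `.float()` confuses me.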
I am pasting the complete error below for reference.
`batch["image"]` type: `torch.cuda.DoubleTensor`
> Traceback (most recent call last):
> File "scripts/train.py", line 138, in <module>
> trainer.step(iteration)
> File "/data/vivek/neural-symbolic/probnmn-clevr/probnmn/trainers/_trainer.py", line 147, in step
> output_dict = self._do_iteration(batch)
> File "/data/vivek/neural-symbolic/probnmn-clevr/probnmn/trainers/module_training_trainer.py", line 96, in _do_iteration
> output_dict = self._nmn(batch["image"], pg_output_dict["predictions"], batch["answer"])
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
> result = self.forward(*input, **kwargs)
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward
> outputs = self.parallel_apply(replicas, inputs, kwargs)
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in parallel_apply
> return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
> raise output
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
> output = module(*input, **kwargs)
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
> result = self.forward(*input, **kwargs)
> File "/data/vivek/neural-symbolic/probnmn-clevr/probnmn/models/nmn.py", line 183, in forward
> feat_input_volume = self.stem(features)
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
> result = self.forward(*input, **kwargs)
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
> input = module(input)
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
> result = self.forward(*input, **kwargs)
> File "/data/vivek/anaconda3/envs/probnmn/lib/python3.6/site-packages/torch/nn/modules/conv.py", line 320, in forward
> self.padding, self.dilation, self.groups)
> RuntimeError: Input type (torch.cuda.DoubleTensor) and weight type (torch.cuda.FloatTensor) should be the same