
anilbatra2185 / road_connectivity

Improved Road Connectivity by Joint Learning of Orientation and Segmentation (CVPR2019)

License: MIT License

Python 2.65% Jupyter Notebook 95.45% Shell 0.06% Java 1.72% HTML 0.12%

road_connectivity's People

Contributors

anilbatra2185


road_connectivity's Issues

Question about SpaceNet dataset processing

Hi,
I followed your Data Preparation process up to the Split Datasets step and did get the full folder with 2780 images. However, your split code produced 1999 images in the train folder and 566 images in the val folder, which differs from the '2213 images for training and 567 for testing' reported in your CVPR 2019 paper.
Should the 566 val images be used for testing? Should I move some images out of the train folder to make a val set?
I really want to use the same split as you to evaluate the performance of our network.
Thanks a lot!

share .pth.tar file

Thank you very much for your work! I trained on the DeepGlobe dataset with your train_mtl.py, but the predictions are not very good, even worse than segmentation networks like UNet or LinkNet. Could you share your trained model file? Another question: how does the orientation task enhance the final result, by combining the outputs of the two tasks? Thanks a lot!

AttributeError: 'MultiGraph' object has no attribute 'node'

Good evening,

When I try to start training the network, I get the following error:

File "C:\Users...\road_connectivity-master\data_utils\graph_utils.py", line 53,
in simplify_graph
full_segments = np.row_stack([graph.node[s]["o"], ps, graph.node[e]["o"]])
AttributeError: 'MultiGraph' object has no attribute 'node'
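This error typically comes from a networkx version mismatch: the `Graph.node` attribute view was removed in networkx 2.4 in favor of `Graph.nodes`. A minimal compatibility sketch (the `node_view` helper name is hypothetical, and `np.row_stack` is swapped for `np.vstack`, which newer NumPy still provides):

```python
import numpy as np
import networkx as nx

def node_view(graph):
    # networkx < 2.4 exposes .node; 2.4+ only exposes .nodes
    return graph.node if hasattr(graph, "node") else graph.nodes

# Toy graph mirroring the failing line in simplify_graph
graph = nx.MultiGraph()
graph.add_node("s", o=np.array([0.0, 0.0]))
graph.add_node("e", o=np.array([2.0, 2.0]))
ps = np.array([[1.0, 1.0]])  # intermediate points of the segment

nodes = node_view(graph)
full_segments = np.vstack([nodes["s"]["o"], ps, nodes["e"]["o"]])
print(full_segments.shape)  # (3, 2)
```

Alternatively, pinning networkx below 2.4 avoids touching the code at all.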

how to infer on the test set

Thanks for your great work! I've trained the model successfully, and now I want to run inference on the test set and submit the predicted images to the DeepGlobe website for evaluation. However, I can't find the inference code in the repository. Can you give me some advice?

Details of junction learning

Thank you very much for your work, it has helped me a lot. I want to know more about junction learning; where can I find its details or implementation? Thanks again.

precision, recall and f1 metrics

Hi Anil Batra, I read your paper and found it helpful. However, I have a question: my IoU can reach your reported level, but my precision, recall, and F1 are a lot worse on DeepGlobe. Are the precision, recall, and F1 metrics in your paper the relaxed versions? Can you tell me how these metrics are calculated in your paper? Looking forward to your answer, thank you!
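For context, "relaxed" road metrics tolerate small spatial offsets: a predicted road pixel counts as correct if it lies within ρ pixels of the ground truth, and symmetrically for recall. A minimal sketch of relaxed precision/recall/F1 with the paper's buffer of ρ = 4, which may differ from the repository's exact relaxed_f1 implementation:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def relaxed_f1(pred, gt, rho=4):
    """Relaxed precision/recall/F1 for binary road masks.

    A predicted pixel is a true positive for precision if it lies within
    `rho` pixels of a ground-truth pixel, and vice versa for recall.
    """
    # Distance from every pixel to the nearest foreground pixel
    dist_to_gt = distance_transform_edt(gt == 0)
    dist_to_pred = distance_transform_edt(pred == 0)

    pred_pos = pred == 1
    gt_pos = gt == 1
    precision = float((dist_to_gt[pred_pos] <= rho).mean()) if pred_pos.any() else 0.0
    recall = float((dist_to_pred[gt_pos] <= rho).mean()) if gt_pos.any() else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: a prediction shifted by 2 pixels still scores perfectly
gt = np.zeros((16, 16), dtype=np.uint8); gt[8, :] = 1
pred = np.zeros_like(gt); pred[10, :] = 1
print(relaxed_f1(pred, gt))  # (1.0, 1.0, 1.0)
```

With a strict buffer (rho=0 or 1) the same shifted prediction scores zero, which would explain a large gap between relaxed and strict numbers.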

How to evaluate DeepGlobe graph with APLS metric?

Hi, thank you for your response.
I wonder how to evaluate the inferred graph with the APLS metric.
What data structure do you use for evaluation, and which key attributes must it have?
I ask because, according to https://github.com/CosmiQ/apls, the evaluation metric seems tied to lon/lat or UTM coordinates.
Source code would help even more, thanks!
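One common approach: APLS tooling consumes networkx graphs whose nodes carry projected coordinates, so a pixel-space road graph is mapped to geo coordinates via the image tile's geotransform. A sketch with a hypothetical GDAL-style geotransform (the 'x'/'y' attribute names follow the osmnx convention the CosmiQ tooling uses; verify against the apls documentation):

```python
import networkx as nx

# Hypothetical geotransform (GDAL order): origin_x, pixel_w, row_rot, origin_y, col_rot, -pixel_h
GT = (500000.0, 0.5, 0.0, 4100000.0, 0.0, -0.5)  # e.g. UTM metres, 0.5 m/pixel

def pixel_to_geo(row, col, gt=GT):
    """Map a pixel (row, col) to projected (x, y) via a GDAL-style geotransform."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

# Attach the coordinates as node attributes on the inferred graph
g = nx.Graph()
g.add_edge((0, 0), (0, 10))  # nodes keyed by (row, col)
for node in g.nodes():
    x, y = pixel_to_geo(*node)
    g.nodes[node]["x"], g.nodes[node]["y"] = x, y

print(g.nodes[(0, 10)]["x"])  # 500005.0
```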

how can we produce training dataset with DeepGlobe?

In the paper, I see you use linestrings (this information is included in the SpaceNet dataset). However, there is no linestring data in DeepGlobe; there are just two kinds of images, binary ground-truth masks and satellite images. So how can we produce a training dataset with DeepGlobe? Thank you very much!
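Since DeepGlobe ships only binary masks, centerline linestrings are typically recovered by skeletonizing the mask and tracing the one-pixel-wide skeleton into segments. A minimal sketch with scikit-image, not the repository's exact pipeline:

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary road mask: a thick horizontal bar
mask = np.zeros((32, 32), dtype=bool)
mask[14:18, 4:28] = True

# Skeletonize down to a one-pixel-wide centerline, then collect its pixels;
# a full pipeline would trace these pixels into ordered linestrings
skeleton = skeletonize(mask)
centerline_pixels = np.argwhere(skeleton)
print(skeleton.sum() < mask.sum())  # True: the centerline is thinner than the mask
```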

add: Assertion `pos >= 0 && pos < buffer.size()` failed

Thanks for your fantastic work! When I tried to train this model, I met this problem:
Traceback (most recent call last):
File "train_mtl.py", line 427, in
train(epoch)
File "train_mtl.py", line 264, in train
torch.autograd.backward([loss1, loss2])
File "/home/sy/anaconda3/envs/connect/lib/python2.7/site-packages/torch/autograd/__init__.py", line 99, in backward
variables, grad_variables, retain_graph)
RuntimeError: torch/csrc/autograd/input_buffer.cpp:14: add: Assertion pos >= 0 && pos < buffer.size() failed.
Hope you can give me some instruction to solve this problem.

Some questions about the evaluation metrics

Hi,
I notice that you used a buffer of 4 pixels in the evaluations in your paper, and that there is a function called "relaxed_f1". However, I find that you do not use the "relaxed_f1" function in the main training or test stage (i.e., train_mtl.py).

So my question is: do the results reported in your paper use the buffered evaluation or not? It confuses me.

Producing DeepGlobe training dataset

Thank you for your great work!!! I have seen the closed issue asking about the DeepGlobe training dataset, but I can't find the code for computing the road linestring ground truths. Where can I find that code?

utils.loss

The utils package contains no loss module, which leads to:
ModuleNotFoundError: No module named 'utils.loss'

split_data.sh: line 43: /data/deepglobe/train.txt: No such file or directory split_data.sh: line 51: /data/deepglobe/val.txt: No such file or directory

I have put the DeepGlobe data in the right path:

--data/deepglobe/ contains train.txt and val.txt
--data/deepglobe/train/ contains the *_sat.jpg and *_mask.png files

but when I run:

bash split_data.sh /deepglobe/train /data/deepglobe _sat.jpg _mask.png

it still tells me:
split_data.sh: line 43: /data/deepglobe/train.txt: No such file or directory
split_data.sh: line 51: /data/deepglobe/val.txt: No such file or directory

Update: this was solved when I used the full path. Thank you! You can close this issue, since it is not a common problem.

how to get

train_crops.txt # created by script
I don't know which script creates it.

Actual model code

Hi,

I was wondering when the actual model code will become available?

Thanks!

direction of the direction vector

It's great work, but I have some questions about a few details in the code.

  1. In the getKeypoints function (in affinity_utils.py), when the points list is updated, the keypoint list is also updated, so the final keypoint list records the same road line repeatedly. What is the purpose of this? Is it that, because the later getVectorMapsAngles function iterates through each repeatedly recorded road line, the later records of the same road line overwrite the previous results, so there is no impact on the generated angles?

  2. For the keypoints generated from a road line, are the points closest to the (0,0) origin at the top of the list?

  3. The direction vectors generated in the code are based on the pixel coordinate system. If point 2 is right, I think the vectors point from right to left and from bottom to top in an image.

  4. In visualize_task.ipynb, you use the plotVecMap function to visualize the arrows. To the best of my knowledge, the plt.quiver() function you use plots arrows in the Cartesian coordinate system rather than the pixel coordinate system. The positive x direction is the same in both, while the y axes point in opposite directions, so I think it should be V*-1 instead of U*-1 in the plotVecMap function.
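Regarding point 4: a quick way to test the orientation question is to draw one vector over an imshow axis (which flips the y axis) and inspect the components quiver actually stores. A minimal sketch, not the repository's plotVecMap, showing the V-negation the question describes:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt

# One direction vector in pixel coordinates (x right, y DOWN the image)
U = np.array([1.0])  # +x component
V = np.array([1.0])  # +y component, i.e. pointing down the image

fig, ax = plt.subplots()
ax.imshow(np.zeros((8, 8)))  # imshow puts the origin at the top-left (y axis flipped)
# quiver draws in the axes' Cartesian frame; negating V keeps a pixel-space
# "down" vector pointing down on the flipped axis
q = ax.quiver([4.0], [4.0], U, -V, angles="xy")
print(q.U[0], q.V[0])
```

Rendering this with and without the negation makes it easy to compare against the notebook's arrows.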

Post-training

Thank you for your great work. Has the post-training (fine-tuning) file been released yet? Or could you tell me whether the input for fine-tuning is the output of the multi-task network on the test set? Thanks.

Cannot train the module

I run the following command in my terminal and get the following message:

$ set CUDA_VISIBLE_DEVICES=0 & python train_mtl.py --config config.json --dataset deepglobe --model_name "LinkNet34MTL" --exp dg_L34_mtl --multi_scale_pred false
Random Seed:  7
Single Cuda Node is avaiable
Training with dataset => DeepGlobeDataset
****************************************************************************************************
Trainable parameters for Model LinkNet34MTL : 22.004903 M
****************************************************************************************************
...
...
Some numba warnings and runtime warnings
...
...


[>....... 1/1 ............]S:9s492ms|T:0ms|Loss: -0.284238 | VecLoss: 4.207048 | road miou: 28.0047%(18.9634%) | angle miou: 0.2155%

Traceback (most recent call last):
  File "train_mtl.py", line 443, in <module>
    main()
  File "train_mtl.py", line 434, in main
    train(epoch)
  File "train_mtl.py", line 294, in train
    write=True,
  File "C:\Users\...\road_connectivity-master\utils\util.py", line 147, in performMetrics
    100 * fwavacc,
TypeError: a bytes-like object is required, not 'str'

Do you know how to solve this issue?
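This TypeError usually means a log file handle was opened in binary mode ("wb"/"ab") and then written with a Python 3 str, a common symptom when running this Python 2 codebase under Python 3. Two equivalent fixes, sketched with a hypothetical log path and line:

```python
log_path = "train_metrics.log"
line = "epoch 1 | miou 28.00%\n"

# Fix 1: open the log in text mode so str writes are accepted
with open(log_path, "a") as f:
    f.write(line)

# Fix 2: keep binary mode but encode the str explicitly
with open(log_path, "ab") as f:
    f.write(line.encode("utf-8"))
```

Checking how `performMetrics` in utils/util.py opens its output file should show which variant applies.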

requirements.txt

Hi,

Thanks for making the code available. For some reason I am unable to run this code on my system due to unmet dependencies. Could you please share a detailed requirements.txt file?
