
raster-vision-examples's Issues

Add versioning to this repo

With each minor and major release, we should add a version branch to this repo. Each version branch will use that version of the RV Docker image, and the examples should work with that version. The master branch should work with the develop branch of RV. Currently, the master branch uses the 0.8 version of the image, which is incorrect.

Add example for using shapefiles

Problem Statement

The Esri shapefile format is a popular vector data format. As such, I believe it would be worthwhile to construct an example illuminating how shapefile labels can be used in raster-vision.

References

A good example, IMHO, would be importing the dataset used in WaterNet, which consists of TIFF base images and shapefile labels (sometimes multiple shapefiles correspond to the same base image). In WaterNet, the shapefiles are "burned" into raster images consisting of 0's and 1's for Not Water and Water. This conversion, however, is not memory efficient.

Some Preliminary Work

Ideally, raster-vision should be able to read the shapefile directly, but I am unsure whether this is yet possible or feasible. A workaround would be to convert shapefiles into a format recognized by raster-vision, such as GeoJSON or vector tiles.

For conversion into GeoJSON, something like the following code snippet can be used (which was designed for the dataset used in WaterNet):

import os.path

# ogr2ogr.py is the GDAL/OGR Python sample script, vendored alongside this module.
from . import ogr2ogr


def get_file_name(path):
    '''
    Return the file name without its extension, used as the VRT layer name.
    '''
    return os.path.splitext(os.path.basename(path))[0]


def create_merge_vrt(sources, vrt_uri):
    '''
    Write a VRT file for use with ogr2ogr that merges the source files together.
    '''
    with open(vrt_uri, 'w') as vrtfile:
        # Create wrapper.
        vrtfile.write("""<OGRVRTDataSource>
    <OGRVRTUnionLayer name="unionLayer">
""")

        # Add one layer per source file.
        for source in sources:
            print('Adding source: {}'.format(source))
            vrtfile.write("""        <OGRVRTLayer name="{}">
            <SrcDataSource relativeToVRT="0">{}</SrcDataSource>
        </OGRVRTLayer>
""".format(get_file_name(source), source))

        # Conclude file.
        vrtfile.write("""    </OGRVRTUnionLayer>
</OGRVRTDataSource>
""")

    print('Created vrt file at {}.'.format(vrt_uri))


def convert_to_geojson(sources, save_uri):
    '''
    Convert a list of shapefiles to a single GeoJSON file.
    '''
    print('Converting sources to GeoJSON at {}.geojson...'.format(save_uri))

    # Create a VRT file that lists the included sources.
    vrt_uri = save_uri + '.vrt'
    create_merge_vrt(sources, vrt_uri)

    # Convert the merged VRT layer to GeoJSON.
    ogr2ogr.main(['', '-f', 'GeoJSON', save_uri + '.geojson', vrt_uri])
    print('Created GeoJSON file ({}).'.format(save_uri + '.geojson'))
Though this code is hacky, it is quite memory efficient (thanks to ogr2ogr).

Help Required on semantic segmentation example of Potsdam

Hi,

I just started out with deep learning and found this to be an excellent resource, thanks a lot. However, being an absolute beginner, I do have some doubts. I started with the Potsdam semantic segmentation example. I set it up locally and it executes; however, I do not see any outputs in the folder that I have defined. I have defined the following:

  1. DATA_URI = points to my S3 bucket with the source satellite images
  2. ROOT_URI = points to a folder on my local machine for storing output

rastervision run local -e potsdam.semantic_segmentation \
    -a test_run True -a root_uri ${ROOT_URI} -a data_uri ${DATA_URI}

I am trying to figure out whether the setup ran as expected and, if so, why I am not able to see the output in the folder. Attached is the terminal output. Many thanks for your help!

/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
TensorBoard 1.10.0 at http://932c17caf67d:6006 (Press CTRL+C to quit)
INFO:tensorflow:Training on train set
INFO:tensorflow:Initializing model from path: /tmp/tmpzyxwc5qv/tmp61jrqise/http/download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug/model.ckpt-30000
WARNING:tensorflow:From /usr/local/lib/python3.5/dist-packages/tensorflow/contrib/slim/python/slim/learning.py:737: Supervisor.__init__ (from tensorflow.python.training.supervisor) is deprecated and will be removed in a future version.
Instructions for updating:
Please switch to tf.train.MonitoredTrainingSession
2019-03-29 08:57:45.278047: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO:tensorflow:Restoring parameters from /tmp/tmpzyxwc5qv/tmp61jrqise/http/download.tensorflow.org/models/deeplabv3_mnv2_pascal_train_aug/model.ckpt-30000
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
INFO:tensorflow:Starting Session.
INFO:tensorflow:Saving checkpoint to path Users/gunjanthakuria/Desktop/projects/data_project/Satellite/postdam_contest/raster-vision-examples-master/output/train/potsdam-seg/model.ckpt
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global_step/sec: 0
2019-03-29 08:57:58.398151: W tensorflow/core/framework/allocator.cc:108] Allocation of 12582912 exceeds 10% of system memory.
2019-03-29 08:57:59.072704: W tensorflow/core/framework/allocator.cc:108] Allocation of 67108864 exceeds 10% of system memory.
2019-03-29 08:57:59.297481: W tensorflow/core/framework/allocator.cc:108] Allocation of 67108864 exceeds 10% of system memory.
2019-03-29 08:57:59.380239: W tensorflow/core/framework/allocator.cc:108] Allocation of 79691776 exceeds 10% of system memory.
2019-03-29 08:57:59.518586: W tensorflow/core/framework/allocator.cc:108] Allocation of 79691776 exceeds 10% of system memory.
INFO:tensorflow:Recording summary at step 0.
INFO:tensorflow:Stopping Training.
INFO:tensorflow:Finished training! Saving model to disk.
2019-03-29 08:58:16:rastervision.backend.tf_deeplab: INFO - Exporting frozen graph (Users/gunjanthakuria/Desktop/projects/data_project/Satellite/postdam_contest/raster-vision-examples-master/output/train/potsdam-seg/model)
/usr/local/lib/python3.5/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
from ._conv import register_converters as _register_converters
INFO:tensorflow:Prepare to export model to: Users/gunjanthakuria/Desktop/projects/data_project/Satellite/postdam_contest/raster-vision-examples-master/output/train/potsdam-seg/model
INFO:tensorflow:Exported model performs single-scale inference.
2019-03-29 08:58:33.755165: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

Terminal Saved Output.pdf
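
For anyone debugging a run like the one above, one quick check (a hedged suggestion, assuming the run finished) is to list what was actually written under the root URI; each pipeline stage normally gets its own subdirectory (analyze, chip, train, predict, eval, bundle). Note also that the checkpoint path in the log has no leading slash, so output may have been written relative to the container's working directory rather than to the intended folder.

find ${ROOT_URI} -maxdepth 2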

About the Spacenet Vegas Roads model

Hi

Can I ask how the SpaceNet Vegas Roads model was pretrained? Is it pretrained on COCO, VOC 2012, or another dataset? Also, I see that the network backbone is MobileNet-v2; what is the depth multiplier? Does the model use the ASPP and decoder modules?

Thanks

Investigate using SpaceNet examples with no AWS account set up

A user ran the SpaceNet example notebook without having an AWS CLI configuration on their machine. This caused a ProfileNotFound: The config profile (default) could not be found error on the s3 = boto3.client('s3') line.

Modify the example notebook and/or the Docker container setup so that users without the AWS CLI configured on the host machine can run the notebook.
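
One possible fix (a sketch, assuming the notebook only needs to read from the public SpaceNet bucket): create an anonymous S3 client so no AWS profile or credentials file is required.

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unsigned (anonymous) requests are enough for reading public buckets, so no
# ~/.aws credentials or 'default' profile is needed.
s3 = boto3.client('s3', config=Config(signature_version=UNSIGNED))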

predict not showing any output/error

Hi,

I installed RV 0.9 using pip install and everything appears to be in order. However, when I run the predict command on the Potsdam predict package from the model zoo with an image from the same dataset, it runs (taking 2-3 seconds) but shows no error and produces no output in the desired directory.

The command is run from the same folder the files are located in:

rastervision predict ./predict_package.zip ./sample.tif ./predictions.tif

What could be the issue?

Add model zoo

There is one for Potsdam seg at s3://azavea-research-public-data/raster-vision/examples/potsdam/semseg-deeplab-mobilenet-predict-package.zip
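
For reference, a predict package like this can be exercised with the predict CLI, as used elsewhere in these issues (the input and output paths below are illustrative):

rastervision predict \
    s3://azavea-research-public-data/raster-vision/examples/potsdam/semseg-deeplab-mobilenet-predict-package.zip \
    ./sample.tif ./predictions.tif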

Standardize examples

All examples should have the test option that Vegas has. This is to make it easier to test that all examples run.

Ideally we should also add use_remote_data and provide a tiny subset of the dataset that is compatible with the test option to make running tests faster.
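
A hedged sketch of the pattern (names are illustrative; the Vegas example does something along these lines): each experiment function accepts test and use_remote_data arguments, remembering that -a flags arrive as strings on the command line.

import rastervision as rv


def str_to_bool(x):
    # -a arguments are passed in as strings such as 'True' / 'False'.
    if isinstance(x, str):
        return x.lower() == 'true'
    return x


class ExampleExperiments(rv.ExperimentSet):
    def exp_main(self, root_uri, test=False, use_remote_data=True):
        test = str_to_bool(test)
        use_remote_data = str_to_bool(use_remote_data)
        # When test is True, limit the scenes and training steps so the full
        # pipeline can be exercised quickly on a tiny subset of the data.
        ...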

Use replace_model=False

replace_model=True appears several times in the examples such as here

It seems like the examples should use the default settings unless there's a reason not to. The default value of replace_model is False, so I would like to remove the lines that set replace_model to True. I think this may have caused a problem for a user who was trying to resume training and had their checkpoint deleted.
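
# Hedged illustration of the proposed change (the builder call below is as
# recalled from the 0.9-era examples; treat the method name as an assumption):
#
#   before:  .with_train_options(do_monitoring=True, replace_model=True)
#   after:   .with_train_options(do_monitoring=True)  # replace_model defaults to False
#
# Leaving replace_model at its default means an existing checkpoint in the
# training directory is reused rather than deleted when a run is resumed.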

AttributeError: module 'rastervision' has no attribute 'RASTERIZED_SOURCE'

When I run

rastervision run local -e spacenet.vegas -a test True -a root_uri /opt/data/spacenet-vegas -a target roads -a task_type semantic_segmentation -a use_remote_data False

I get an error
AttributeError: module 'rastervision' has no attribute 'RASTERIZED_SOURCE'

Am I missing something trivial?

Here's the trace:

root@31c4ef4f2499:/opt/src# rastervision run local -e spacenet.vegas -a test True -a root_uri /opt/data/spacenet-vegas -a target roads -a task_type semantic_segmentation -a use_remote_data False
None
<module 'rastervision' from '/opt/src/rastervision/__init__.py'>
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision/__main__.py", line 17, in <module>
    rv.main()
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision/cli/main.py", line 136, in run
    experiments = loader.load_from_module(experiment_module)
  File "/opt/src/rastervision/experiment/experiment_loader.py", line 54, in load_from_module
    result += self.load_from_set(experiment_set)
  File "/opt/src/rastervision/experiment/experiment_loader.py", line 58, in load_from_set
    return self.load_from_sets([experiment_set])
  File "/opt/src/rastervision/experiment/experiment_loader.py", line 81, in load_from_sets
    es = self.load_from_experiment(exp_func, full_name)
  File "/opt/src/rastervision/experiment/experiment_loader.py", line 112, in load_from_experiment
    exp = exp_func(**kwargs)
  File "/opt/src/spacenet/vegas.py", line 369, in exp_main
    vector_tile_options=vector_tile_options)
  File "/opt/src/spacenet/vegas.py", line 175, in build_dataset
    for id in train_ids]
  File "/opt/src/spacenet/vegas.py", line 175, in <listcomp>
    for id in train_ids]
  File "/opt/src/spacenet/vegas.py", line 121, in build_scene
    label_raster_source = rv.RasterSourceConfig.builder(rv.RASTERIZED_SOURCE) \
AttributeError: module 'rastervision' has no attribute 'RASTERIZED_SOURCE'
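
A quick sanity check (a hedged suggestion: the assumption is that rv.RASTERIZED_SOURCE only exists in newer Raster Vision builds, so this error usually means the examples checkout is ahead of the RV code inside the container):

python -c "import rastervision as rv; print(hasattr(rv, 'RASTERIZED_SOURCE'))"

If this prints False, check which RV image or source checkout the container is using (see the versioning issue above).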

Training stage error: Keras validation_steps=None

I encountered this error when trying to run the SpaceNet Rio example in Google Colab:

Ensuring input files exist [####################################] 100%
Checking for existing output [####################################] 100%
Saving command configuration to data/examples/spacenet/rio/remote-output/train/spacenet-rio-chip-classification-test/command-config-0.json...
Saving command configuration to data/examples/spacenet/rio/remote-output/bundle/spacenet-rio-chip-classification-test/command-config-0.json...
Saving command configuration to data/examples/spacenet/rio/remote-output/predict/spacenet-rio-chip-classification-test/command-config-0.json...
Saving command configuration to data/examples/spacenet/rio/remote-output/eval/spacenet-rio-chip-classification-test/command-config-0.json...
python -m rastervision run_command data/examples/spacenet/rio/remote-output/train/spacenet-rio-chip-classification-test/command-config-0.json
Training model...
/usr/local/lib/python3.6/dist-packages/pluginbase.py:439: FutureWarning: Conversion of the second argument of issubdtype from float to np.floating is deprecated. In future, it will be treated as np.float64 == np.dtype(float).type.
fromlist, level)
Using TensorFlow backend.
2019-06-06 14:27:00:rastervision.utils.files: INFO - Downloading https://github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5 to /tmp/tmp7n3lkztl/tmpd83op6nh/http/github.com/fchollet/deep-learning-models/releases/download/v0.2/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2019-06-06 14:27:03.794498: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2200000000 Hz
2019-06-06 14:27:03.794813: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x14cf4a0 executing computations on platform Host. Devices:
2019-06-06 14:27:03.794849: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
2019-06-06 14:27:04.065789: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:998] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2019-06-06 14:27:04.066325: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x14cf080 executing computations on platform CUDA. Devices:
2019-06-06 14:27:04.066371: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): Tesla T4, Compute Capability 7.5
2019-06-06 14:27:04.066753: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1433] Found device 0 with properties:
name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59
pciBusID: 0000:00:04.0
totalMemory: 14.73GiB freeMemory: 14.60GiB
2019-06-06 14:27:04.066777: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-06-06 14:27:05.517909: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-06-06 14:27:05.517977: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-06-06 14:27:05.517990: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-06-06 14:27:05.518287: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:42] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.
2019-06-06 14:27:05.518385: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 14115 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)
Found 0 images belonging to 2 classes.
Found 0 images belonging to 2 classes.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
TensorBoard 1.13.1 at http://fea6b3f9897c:6006 (Press CTRL+C to quit)
Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/__main__.py", line 17, in <module>
    rv.main()
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.6/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/cli/main.py", line 292, in run_command
    rv.runner.CommandRunner.run(command_config_uri)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/runner/command_runner.py", line 11, in run
    CommandRunner.run_from_proto(msg)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/runner/command_runner.py", line 17, in run_from_proto
    command.run()
  File "/usr/local/lib/python3.6/dist-packages/rastervision/command/train_command.py", line 21, in run
    task.train(tmp_dir)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/task/task.py", line 138, in train
    self.backend.train(tmp_dir)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/backend/keras_classification/backend.py", line 263, in train
    _train(backend_config_path, pretrained_model_path, do_monitoring)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/backend/keras_classification/commands/train.py", line 15, in _train
    trainer.train(do_monitoring)
  File "/usr/local/lib/python3.6/dist-packages/rastervision/backend/keras_classification/core/trainer.py", line 150, in train
    callbacks=callbacks)
  File "/usr/local/lib/python3.6/dist-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training.py", line 1418, in fit_generator
    initial_epoch=initial_epoch)
  File "/usr/local/lib/python3.6/dist-packages/keras/engine/training_generator.py", line 68, in fit_generator
    raise ValueError('validation_steps=None is only valid for a'
ValueError: validation_steps=None is only valid for a generator based on the keras.utils.Sequence class. Please specify validation_steps or use the keras.utils.Sequence class.
TensorBoard caught SIGTERM; exiting...
/tmp/tmpkna9hjv6/tmpfdeg5814/Makefile:6: recipe for target '2' failed
make: *** [2] Error 1
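
The "Found 0 images belonging to 2 classes." lines above suggest that the training and validation chip directories were empty, so Keras could not infer how many validation batches to run. For reference, this is the plain Keras behavior the error refers to (an illustrative fragment, not Raster Vision code; model and generator names are assumed):

# fit_generator only accepts validation_steps=None when the validation data is
# a keras.utils.Sequence (whose length is known); for a plain generator the
# number of validation batches must be given explicitly.
model.fit_generator(
    train_generator,
    steps_per_epoch=100,
    epochs=5,
    validation_data=validation_generator,
    validation_steps=20)  # required for plain generators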

Add support for "developer mode"

This will be instructions (and potentially an alternative set of scripts) to mount local RV into the container and inherit from the local containers. This will make it easier to put print statements in RV and debug examples.
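
A hedged sketch of what this could look like (the image name and host paths are illustrative): bind-mount a local Raster Vision checkout over the copy baked into the image so edits take effect inside the container without rebuilding it.

docker run --rm -it \
    -v ${HOME}/raster-vision/rastervision:/opt/src/rastervision \
    -v ${HOME}/raster-vision-examples:/opt/src/examples \
    <raster-vision-examples-image> /bin/bash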

ImportError: No module named 'spacenet.semantic_segmentation'

When I run
rastervision run local -e spacenet.semantic_segmentation -a root_uri /opt/src/rastervision True -a /opt/src/spacenet -a target s3://spacenet-dataset/SpaceNet_Roads_Competition/AOI_2_Vegas_Roads_Train.tar.gz

in my examples container, I get an "ImportError"...

Call Stack:

Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision/__main__.py", line 17, in <module>
    rv.main()
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision/cli/main.py", line 136, in run
    experiments = loader.load_from_module(experiment_module)
  File "/opt/src/rastervision/experiment/experiment_loader.py", line 50, in load_from_module
    module = import_module(name)
  File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 986, in _gcd_import
  File "<frozen importlib._bootstrap>", line 969, in _find_and_load
  File "<frozen importlib._bootstrap>", line 956, in _find_and_load_unlocked
ImportError: No module named 'spacenet.semantic_segmentation'

minimum system required to execute the examples

Hello,
I can run the examples, but they take a long time to execute (around 8 hours).
What would be a reasonable time to execute the xView object detection example?
What is the minimum system required to execute the examples in a reasonable time?
Which GPU? How much RAM? etc.
Thank you

Install fails on Windows with Python 3.7 64-bit

I get the following error when trying to install rastervision from Visual Studio or the command line on Windows 10:

Usuario

/c/Program Files (x86)/Microsoft Visual Studio/Shared/Python37_64
$ ./python -m pip install rastervision --user --no-cache-dir
Collecting rastervision
Downloading https://files.pythonhosted.org/packages/08/e7/353df3b3cb7a8caab68702d8d72b2ae4c826b54d5e4a4213212d61fb0d3e/rastervision-0.10.0-py3-none-any.whl (396kB)
ERROR: Could not install packages due to an EnvironmentError: [WinError 267] El nombre del directorio no es válido: 'C:\Users\Usuario\AppData\Local\Temp\pip-install-fpxxp1qz\rastervision\rastervision/command/aux'

I have tried different things, but I have not been able to install it on Windows (the error message translates to "The directory name is not valid"). I have also tried version 0.9.0, with no luck.
I know that Windows support is not a priority for rastervision, but it is important for me to be able to do small tests locally.
Thanks

Question regarding the pretrained models...

These models look like TensorFlow exported object detection models (with scores, classes, etc. tensors).

If I run rastervision predict https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo/xview-vehicle-od/predict_package.zip https://s3.amazonaws.com/azavea-research-public-data/raster-vision/examples/model-zoo/xview-vehicle-od/1677.tif s3://mybucket/1047-raster.tiff

The output is a GeoJSON file containing all of the detected objects. It's quite accurate.

If I take the same model and use it in TensorFlow natively, it detects nothing on that picture.

Why?

Most of the instructions are outdated

Almost every example's instructions are outdated.
The Docker/run script is also outdated; it can't run the proper Docker images.
The SpaceNet dataset on S3 changed its directory hierarchy, but the examples here have not been updated.

Vegas image 1000 is missing

We should skip this example because the image has been replaced with a folder on S3. We should also contact the contest organizers about this.

(Screenshot attached: "screen shot 2019-01-29 at 10 41 57 am".)

Both roads semantic segmentation and buildings object detection don't work.

Buildings chip classification and buildings semantic segmentation work, but buildings object detection and roads semantic segmentation do not.

Buildings object detection task fails immediately:

root@61c432694331:/opt/src# rastervision run local -e spacenet.vegas -a test False -a root_uri spacenet/Local_GPU/ -a target buildings -a task_type object_detection -a use_remote_data False -x
None
Checking for existing output [####################################] 100%
Saving command configuration to spacenet/Local_GPU/analyze/buildings_object_detection/command-config.json...
/opt/tf-models/object_detection/utils/np_box_ops.py:97: RuntimeWarning: invalid value encountered in true_divide
return intersect / areas
/opt/tf-models/object_detection/utils/np_box_list_ops.py:385: RuntimeWarning: invalid value encountered in greater_equal
keep_bool = np.greater_equal(intersection_over_area, np.array(minoverlap))
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision/__main__.py", line 17, in <module>
    rv.main()
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision/cli/main.py", line 253, in run_command
    rv.runner.CommandRunner.run(command_config_uri)
  File "/opt/src/rastervision/runner/command_runner.py", line 11, in run
    CommandRunner.run_from_proto(msg)
  File "/opt/src/rastervision/runner/command_runner.py", line 17, in run_from_proto
    command.run()
  File "/opt/src/rastervision/command/analyze_command.py", line 18, in run
    map(lambda s: s.create_scene(cc.task, tmp_dir), cc.scenes))
  File "/opt/src/rastervision/command/analyze_command.py", line 18, in <lambda>
    map(lambda s: s.create_scene(cc.task, tmp_dir), cc.scenes))
  File "/opt/src/rastervision/data/scene_config.py", line 44, in create_scene
    task_config, extent, crs_transformer, tmp_dir)
  File "/opt/src/rastervision/data/label_source/object_detection_label_source_config.py", line 28, in create_source
    task_config.class_map, extent)
  File "/opt/src/rastervision/data/label_source/object_detection_label_source.py", line 30, in __init__
    geojson, crs_transformer, extent=extent)
  File "/opt/src/rastervision/data/label_source/utils.py", line 68, in geojson_to_object_detection_labels
    'Geometries of type {} are not supported in object detection labels.'.format(geom_type))
Exception: Geometries of type Point are not supported in object detection labels.

Roads semantic segmentation task crashes during "Making training chips"

2019-01-03 13:23:45:rastervision.task.task: INFO - Making train chips for scene: 463
2019-01-03 13:23:45:rastervision.backend.tf_deeplab: INFO - Creating TFRecord
Traceback (most recent call last):
  File "/usr/lib/python3.5/runpy.py", line 184, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.5/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/opt/src/rastervision/__main__.py", line 17, in <module>
    rv.main()
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 722, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 697, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 1066, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 895, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.5/dist-packages/click/core.py", line 535, in invoke
    return callback(*args, **kwargs)
  File "/opt/src/rastervision/cli/main.py", line 253, in run_command
    rv.runner.CommandRunner.run(command_config_uri)
  File "/opt/src/rastervision/runner/command_runner.py", line 11, in run
    CommandRunner.run_from_proto(msg)
  File "/opt/src/rastervision/runner/command_runner.py", line 17, in run_from_proto
    command.run()
  File "/opt/src/rastervision/command/chip_command.py", line 29, in run
    task.make_chips(train_scenes, val_scenes, augmentors, tmp_dir)
  File "/opt/src/rastervision/task/task.py", line 128, in make_chips
    train_scenes, TRAIN, augment=True)
  File "/opt/src/rastervision/task/task.py", line 124, in _process_scenes
    return [process_scene(scene, type, augment) for scene in scenes]
  File "/opt/src/rastervision/task/task.py", line 124, in <listcomp>
    return [process_scene(scene, type, augment) for scene in scenes]
  File "/opt/src/rastervision/task/task.py", line 103, in _process_scene
    with scene.activate():
  File "/opt/src/rastervision/data/activate_mixin.py", line 40, in __enter__
    manager.__enter__()
  File "/opt/src/rastervision/data/activate_mixin.py", line 40, in __enter__
    manager.__enter__()
  File "/opt/src/rastervision/data/activate_mixin.py", line 40, in __enter__
    manager.__enter__()
  File "/opt/src/rastervision/data/activate_mixin.py", line 40, in __enter__
    manager.__enter__()
  File "/opt/src/rastervision/data/activate_mixin.py", line 21, in __enter__
    self.activate()
  File "/opt/src/rastervision/data/activate_mixin.py", line 54, in do_activate
    self._activate()
  File "/opt/src/rastervision/data/raster_source/rasterized_source.py", line 110, in _activate
    geojson = self.vector_source.get_geojson()
  File "/opt/src/rastervision/data/vector_source/vector_source.py", line 27, in get_geojson
    self._get_geojson())
  File "/opt/src/rastervision/data/vector_source/geojson_vector_source.py", line 19, in _get_geojson
    return json.loads(file_to_str(self.uri))
  File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.5/json/decoder.py", line 339, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.5/json/decoder.py", line 357, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
