
Plan2Scene

Official repository of the paper:

Plan2Scene: Converting floorplans to 3D scenes

Madhawa Vidanapathirana, Qirui Wu, Yasutaka Furukawa, Angel X. Chang, Manolis Savva

[Paper, Project Page]

Task Overview

In the Plan2Scene task, we produce a textured 3D mesh of a residence from a floorplan and a set of photos.

Dependencies

  1. We use a conda environment initialized as follows.
    # Python 3.6 and PyTorch 1.6
    conda create -n plan2scene python=3.6
    conda activate plan2scene
    conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
    pip install -r code/requirements.txt
    
    # Install the cuda_noise package, which we have copied from the neural texture project: https://github.com/henzler/neuraltexture.
    cd code/src/plan2scene/texture_gen/custom_ops/noise_kernel
    python setup.py install
    cd ../../../../../../
    
    # Install PyTorch Geometric
    export CUDA=cu102 # Specify CUDA version used by PyTorch. Refer https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html for details.
    export TORCH=1.6.0
    pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html --no-cache
    pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html --no-cache
    pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html --no-cache
    pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html --no-cache
    pip install torch-geometric --no-cache
  2. Set up the command-line tool from Embark Studios' texture-synthesis project.
    1. You can download a pre-built binary here. Alternatively, you can build it from source.
    2. Download the seam mask available here.
    3. Rename ./conf/plan2scene/seam_correct-example.json to seam_correct.json and update the paths to the texture-synthesis binary and the seam mask (a sketch of this config follows this list).
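After renaming, the file could look like the sketch below. The key names texture_synthesis_path and seam_mask_path are illustrative assumptions, not the confirmed schema; check the contents of seam_correct-example.json for the actual field names.

    # Hypothetical layout of seam_correct.json -- verify the keys against seam_correct-example.json.
    cat > ./conf/plan2scene/seam_correct.json <<'EOF'
    {
      "texture_synthesis_path": "/path/to/texture-synthesis/binary",
      "seam_mask_path": "/path/to/seam_mask.png"
    }
    EOF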

Use code/src as the source root when running Python scripts:

export PYTHONPATH=./code/src
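An optional sanity check, assuming the plan2scene package under code/src is directly importable (the package name is inferred from the repository layout):

    # Confirm PyTorch, its CUDA build, and PyTorch Geometric resolve correctly.
    python -c "import torch, torch_geometric; print(torch.__version__, torch.version.cuda)"
    # Confirm the source root is on PYTHONPATH.
    python -c "import plan2scene"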

Data

  1. Rent3D++ dataset

    1. Download and copy the Rent3D++ dataset to the [PROJECT_ROOT]/data directory. The data organization is described here.

    2. [Optional] We have provided 3D scenes pre-populated with CAD models of objects. If you wish to re-populate these scenes using the Object Placement approach we use, follow the instructions here.

    3. To replicate our results, use the pre-extracted crops we provide. These crops ship with the Rent3D++ dataset and are copied to the ./data/processed/surface_crops directory.

      [Optional] If you wish to extract new crops instead of using the provided ones, follow these instructions.

    4. Select ground truth reference crops and populate photo-room assignment lists. (A consolidated loop version of these commands is sketched after this list.)

      # Select ground truth reference crops.
      python code/scripts/plan2scene/preprocessing/generate_reference_crops.py ./data/processed/gt_reference/train ./data/input/photo_assignments/train train
      python code/scripts/plan2scene/preprocessing/generate_reference_crops.py ./data/processed/gt_reference/val ./data/input/photo_assignments/val val
      python code/scripts/plan2scene/preprocessing/generate_reference_crops.py ./data/processed/gt_reference/test ./data/input/photo_assignments/test test 
      
      # We evaluate Plan2Scene by simulating photo unobservations, i.e. withholding a fraction of the photos.
      # Generate photoroom.csv files for different photo unobservation ratios.
      python code/scripts/plan2scene/preprocessing/generate_unobserved_photo_assignments.py ./data/processed/photo_assignments/train ./data/input/photo_assignments/train ./data/input/unobserved_photos.json train
      python code/scripts/plan2scene/preprocessing/generate_unobserved_photo_assignments.py ./data/processed/photo_assignments/val ./data/input/photo_assignments/val ./data/input/unobserved_photos.json val
      python code/scripts/plan2scene/preprocessing/generate_unobserved_photo_assignments.py ./data/processed/photo_assignments/test ./data/input/photo_assignments/test ./data/input/unobserved_photos.json test  
  2. [Optional] Stationary Textures Dataset - We use one of the following datasets to train the texture synthesis model. Not required if you are using pre-trained models.

    • Version 1: We use this dataset in our CVPR paper. Details are available here.
    • Version 2: Updated textures dataset which provides improved results on the Rent3D++ dataset. Details are available here.
  3. [Optional] Substance Mapped Textures dataset. Only used by the retrieve baseline.
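The per-split commands in step 1.4 follow a single pattern, so they can equivalently be run as one loop (paths exactly as above):

    # Generate reference crops and unobserved-photo assignments for all three splits.
    for split in train val test; do
        python code/scripts/plan2scene/preprocessing/generate_reference_crops.py \
            ./data/processed/gt_reference/$split ./data/input/photo_assignments/$split $split
        python code/scripts/plan2scene/preprocessing/generate_unobserved_photo_assignments.py \
            ./data/processed/photo_assignments/$split ./data/input/photo_assignments/$split \
            ./data/input/unobserved_photos.json $split
    done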

Pretrained models

Pretrained models are available here.

Inference

  1. Download and pre-process the Rent3D++ dataset as described in the data section.

  2. Set up a pretrained model or train a new Plan2Scene network.

  3. Synthesize textures for observed surfaces using the VGG textureness score.

    # For test data without simulated photo unobservations (drop = 0.0).
    python code/scripts/plan2scene/preprocessing/fill_room_embeddings.py ./data/processed/texture_gen/test/drop_0.0 test --drop 0.0
    python code/scripts/plan2scene/crop_select/vgg_crop_selector.py ./data/processed/vgg_crop_select/test/drop_0.0 ./data/processed/texture_gen/test/drop_0.0 test --drop 0.0
    # Results are stored at ./data/processed/vgg_crop_select/test/drop_0.0
  4. Propagate textures to unobserved surfaces using our texture propagation network.

    python code/scripts/plan2scene/texture_prop/gnn_texture_prop.py ./data/processed/gnn_prop/test/drop_0.0 ./data/processed/vgg_crop_select/test/drop_0.0 test GNN_PROP_CONF_PATH GNN_PROP_CHECKPOINT_PATH --keep-existing-predictions --drop 0.0

    To preview results, follow the instructions below.
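The inference commands above differ only in the drop value embedded in the paths and arguments, and the Test section below reruns them with drop 0.6. A small helper sketch, keeping the GNN_PROP_CONF_PATH and GNN_PROP_CHECKPOINT_PATH placeholders from above:

    # Texture synthesis for observed surfaces + GNN propagation at a given drop fraction.
    run_inference () {
        local drop=$1  # simulated photo unobservation ratio, e.g. 0.0 or 0.6
        python code/scripts/plan2scene/preprocessing/fill_room_embeddings.py \
            ./data/processed/texture_gen/test/drop_${drop} test --drop ${drop}
        python code/scripts/plan2scene/crop_select/vgg_crop_selector.py \
            ./data/processed/vgg_crop_select/test/drop_${drop} \
            ./data/processed/texture_gen/test/drop_${drop} test --drop ${drop}
        python code/scripts/plan2scene/texture_prop/gnn_texture_prop.py \
            ./data/processed/gnn_prop/test/drop_${drop} \
            ./data/processed/vgg_crop_select/test/drop_${drop} test \
            GNN_PROP_CONF_PATH GNN_PROP_CHECKPOINT_PATH --keep-existing-predictions --drop ${drop}
    }
    run_inference 0.0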

Previewing Outputs

  1. Complete the inference steps above.
  2. Correct seams of predicted textures and make them tileable.
    # For test data without simulating photo unobservations.
    python code/scripts/plan2scene/postprocessing/seam_correct_textures.py ./data/processed/gnn_prop/test/drop_0.0/tileable_texture_crops ./data/processed/gnn_prop/test/drop_0.0/texture_crops test --drop 0.0
  3. Generate .scene.json files with embedded textures using embed_textures.py. A .scene.json file describes the 3D geometry of a house; it can be previewed in a browser using the scene-viewer of the SmartScenesToolkit (you will have to clone and build the toolkit).
    # For test data without simulating photo unobservations.
    python code/scripts/plan2scene/postprocessing/embed_textures.py ./data/processed/gnn_prop/test/drop_0.0/archs ./data/processed/gnn_prop/test/drop_0.0/tileable_texture_crops test --drop 0.0
    # scene.json files are created in the ./data/processed/gnn_prop/test/drop_0.0/archs directory.
  4. Render .scene.json files as .pngs using render_house_jsons.py.
    • Download and build the SmartScenesToolkit.
    • Rename ./conf/render-example.json to ./conf/render.json and update its fields to point to the SmartScenesToolkit.
    • Run the following command to generate previews.
      CUDA_VISIBLE_DEVICES=0 python code/scripts/plan2scene/render_house_jsons.py ./data/processed/gnn_prop/test/drop_0.0/archs --scene-json
      # A .png file is created for each .scene.json file in the ./data/processed/gnn_prop/test/drop_0.0/archs directory.
  5. Generate qualitative result pages with previews using preview_houses.py.
    python code/scripts/plan2scene/preview_houses.py ./data/processed/gnn_prop/test/drop_0.0/previews ./data/processed/gnn_prop/test/drop_0.0/archs ./data/input/photos test --textures-path ./data/processed/gnn_prop/test/drop_0.0/tileable_texture_crops 0.0
    # Open ./data/processed/gnn_prop/test/drop_0.0/previews/preview.html
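For convenience, steps 2 to 5 for the drop = 0.0 test split chain into a single script (commands exactly as above, with the shared output root factored into a variable):

    # End-to-end preview generation for test data at drop 0.0.
    OUT=./data/processed/gnn_prop/test/drop_0.0
    python code/scripts/plan2scene/postprocessing/seam_correct_textures.py \
        $OUT/tileable_texture_crops $OUT/texture_crops test --drop 0.0
    python code/scripts/plan2scene/postprocessing/embed_textures.py \
        $OUT/archs $OUT/tileable_texture_crops test --drop 0.0
    CUDA_VISIBLE_DEVICES=0 python code/scripts/plan2scene/render_house_jsons.py \
        $OUT/archs --scene-json
    python code/scripts/plan2scene/preview_houses.py \
        $OUT/previews $OUT/archs ./data/input/photos test \
        --textures-path $OUT/tileable_texture_crops 0.0
    # Open $OUT/previews/preview.html to browse the results.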

Test

  1. [Optional] Download a pre-trained model or train the substance classifier used by the Subs metric. Training instructions are available here. Pre-trained weights are available here. Skip this step to omit the Subs metric.

  2. Generate the overall evaluation report at 60% photo unobservations. We used this setting in the paper evaluations.

    # Synthesize textures for observed surfaces using the VGG textureness score.
    # For the case: 60% (i.e. 0.6) of the photos unobserved. 
    python code/scripts/plan2scene/preprocessing/fill_room_embeddings.py ./data/processed/texture_gen/test/drop_0.6 test --drop 0.6
    python code/scripts/plan2scene/crop_select/vgg_crop_selector.py ./data/processed/vgg_crop_select/test/drop_0.6 ./data/processed/texture_gen/test/drop_0.6 test --drop 0.6
    
    # Propagate textures to unobserved surfaces using our GNN.
    # For the case: 60% (i.e. 0.6) of the photos unobserved.
    python code/scripts/plan2scene/texture_prop/gnn_texture_prop.py ./data/processed/gnn_prop/test/drop_0.6 ./data/processed/vgg_crop_select/test/drop_0.6 test GNN_PROP_CONF_PATH GNN_PROP_CHECKPOINT_PATH --keep-existing-predictions --drop 0.6
    
    # Correct seams of texture crops and make them tileable.
    # For test data where 60% of photos are unobserved.
    python code/scripts/plan2scene/postprocessing/seam_correct_textures.py ./data/processed/gnn_prop/test/drop_0.6/tileable_texture_crops ./data/processed/gnn_prop/test/drop_0.6/texture_crops test --drop 0.6
    
    # Generate overall results at 60% simulated photo unobservations.
    python code/scripts/plan2scene/test.py ./data/processed/gnn_prop/test/drop_0.6/tileable_texture_crops ./data/processed/gt_reference/test/texture_crops test
  3. Generate the evaluation report for observed surfaces, with no simulated photo unobservations. We used this setting in the paper evaluations.

    # Run inference using drop=0.0.
    python code/scripts/plan2scene/preprocessing/fill_room_embeddings.py ./data/processed/texture_gen/test/drop_0.0 test --drop 0.0
    python code/scripts/plan2scene/crop_select/vgg_crop_selector.py ./data/processed/vgg_crop_select/test/drop_0.0 ./data/processed/texture_gen/test/drop_0.0 test --drop 0.0
    
    # Correct seams of texture crops and make them tileable by running seam_correct_textures.py.
    python code/scripts/plan2scene/postprocessing/seam_correct_textures.py ./data/processed/vgg_crop_select/test/drop_0.0/tileable_texture_crops ./data/processed/vgg_crop_select/test/drop_0.0/texture_crops test --drop 0.0
    
    # Generate evaluation results for observed surfaces.
    python code/scripts/plan2scene/test.py ./data/processed/vgg_crop_select/test/drop_0.0/tileable_texture_crops ./data/processed/gt_reference/test/texture_crops test
  4. Generate the evaluation report for unobserved surfaces at 60% photo unobservations. We used this setting in the paper evaluations.

    # Assumes the overall report at drop fraction 0.6 (step 2) has already been generated.
    
    # Generate results on unobserved surfaces at 60% simulated photo unobservations.
    python code/scripts/plan2scene/test.py ./data/processed/gnn_prop/test/drop_0.6/tileable_texture_crops ./data/processed/gt_reference/test/texture_crops test --exclude-prior-predictions ./data/processed/vgg_crop_select/test/drop_0.6/texture_crops
  5. Generate the evaluation report for the FID metric as described here.

Training a new Plan2Scene network

Plan2Scene consists of two trainable components: (1) the texture synthesis stage and (2) the texture propagation stage. Each stage is trained separately, as follows.

  1. Train the texture synthesis stage as described here.
  2. Train the texture propagation stage as described here.

Baseline Models

The baseline models are available here.
