Gengchen Mai, Krzysztof Janowicz, Bo Yan, Rui Zhu, Ling Cai, Ni Lao. Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells. In: Proceedings of ICLR 2020, Apr. 26-30, 2020, Addis Ababa, Ethiopia.

License: Apache License 2.0



Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells

Code for recreating the results in our ICLR 2020 paper.

Related Links

  1. OpenReview Paper
  2. Arxiv Paper
  3. ICLR 2020 Presentation
  4. ICLR 2020 Presentation Slides

Please visit my Homepage for more information.

Motivation

[Figure: intro]

This repository contains two parts, corresponding to the two tasks in our paper:

Point of Interest (POI) Type Classification Task

The spacegraph/ folder contains the code for recreating the evaluation results of the POI type classification task: both location modeling and context modeling.

Dependencies

  • Python 2.7+
  • Torch 1.0.1+
  • Other required packages are summarized in spacegraph/requirements.txt.

To set up the space2vec code for POI type classification, run python spacegraph/setup.py.

Data

You can find the POI type classification dataset in spacegraph/data_collection/Place2Vec_center/.

  1. spacegraph/data_collection/Place2Vec_center/poi_type.json: maps each POI type ID to its POI type name;
  2. spacegraph/data_collection/Place2Vec_center/pointset.pkl: the full list of POIs being considered, each in the form (id, (X, Y), (Type1, Type2, ..., TypeM), training/validation/test), where (X, Y) are the projected coordinates of the POI;
  3. spacegraph/data_collection/Place2Vec_center/neighborgraphs_train.pkl (neighborgraphs_validation.pkl or neighborgraphs_test.pkl): the neighboring-POI graph structure for training/validation/test, each entry in the form [CenterPT, [ContextPT1, ContextPT2, ..., ContextPTN], training/validation/test]. CenterPT is the id of the center POI, and the ContextPTs are the ids of its nearby POIs. Note that for the validation and test datasets, the center point corresponds to a POI marked validation/test in pointset.pkl.
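The record layouts above can be sketched in plain Python. The records below are made-up examples matching the documented format, not data from the actual pickles:

```python
# Hypothetical records matching the documented formats; the real data would be
# loaded with pickle from spacegraph/data_collection/Place2Vec_center/, e.g.
#   pointset = pickle.load(open("pointset.pkl", "rb"))
pointset = [
    # (id, (X, Y), (Type1, ..., TypeM), split)
    (0, (512.3, 204.8), (3,), "training"),
    (1, (515.1, 206.2), (3, 7), "validation"),
]
neighborgraphs = [
    # [CenterPT, [ContextPT1, ..., ContextPTN], split]
    [0, [1], "training"],
]

def split_points(pointset, split):
    """Return the POI ids that belong to one split (training/validation/test)."""
    return [pid for pid, _, _, s in pointset if s == split]

def context_of(neighborgraphs, center_id):
    """Return the context POI ids for a given center POI id."""
    for center, contexts, _ in neighborgraphs:
        if center == center_id:
            return contexts
    return []

print(split_points(pointset, "training"))  # [0]
print(context_of(neighborgraphs, 0))       # [1]
```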

Code Usage

This code is implemented in Python 2.7. All code for the POI type classification task is in spacegraph/spacegraph_codebase/.

Location Modeling (See Section 5.1.1 and 5.1.2 in our ICLR 2020 paper)

You can train the different models using the bash scripts in spacegraph/ (see Appendix A.1 in our ICLR 2020 paper):

  1. direct: run bash Place2Vec_2_enc_dec_global_naive.sh.
  2. tile: run bash Place2Vec_2_enc_dec_global_gridlookup.sh.
  3. wrap: run bash Place2Vec_2_enc_dec_global_aodha.sh.
  4. rbf: run bash Place2Vec_2_enc_dec_global_rbf.sh.
  5. grid: run bash Place2Vec_2_enc_dec_global_grid.sh.
  6. theory: run bash Place2Vec_2_enc_dec_global_theory.sh.
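The grid and theory variants above are multi-scale sinusoidal location encoders. A minimal NumPy sketch of the idea behind the theory encoder, with made-up scale settings (three unit directions at 120 degrees, geometrically spaced wavelengths); this is an illustration of the construction, not the repository's implementation:

```python
import numpy as np

def theory_grid_encode(xy, n_scales=4, lambda_min=1.0, lambda_max=1000.0):
    """Encode a 2D point with sinusoids at multiple scales along three unit
    vectors 120 degrees apart, in the spirit of the paper's theory encoder.
    Output dimension: n_scales * 3 directions * 2 (sin and cos)."""
    xy = np.asarray(xy, dtype=float)
    angles = np.array([0.0, 2 * np.pi / 3, 4 * np.pi / 3])
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # shape (3, 2)
    g = lambda_max / lambda_min
    feats = []
    for s in range(n_scales):
        # geometric progression of wavelengths from lambda_min to lambda_max
        lam = lambda_min * g ** (s / max(n_scales - 1, 1))
        proj = dirs @ xy / lam  # projections onto the 3 directions, rescaled
        feats.extend([np.sin(proj), np.cos(proj)])
    return np.concatenate(feats)

emb = theory_grid_encode((512.3, 204.8))
print(emb.shape)  # (24,)
```

In the full model, a feed-forward network is applied on top of such multi-scale features to produce the final location embedding.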

Results:

  1. The trained model will be available in the directory you specified for --model_dir.
  2. The log including the evaluation results will be available in the directory you specified for --log_dir.

The Comparison Among the Response Maps of Different Models

[Figure: location_modeling]

Spatial Context Modeling (See Section 5.1.3 in our ICLR 2020 paper)

You can train the different models using the bash scripts in spacegraph/ (see Appendix A.1 in our ICLR 2020 paper):

  1. none: run bash Place2Vec_2_enc_dec_none.sh.
  2. direct: run bash Place2Vec_2_enc_dec_naive.sh.
  3. polar: run bash Place2Vec_2_enc_dec_polar.sh.
  4. tile: run bash Place2Vec_2_enc_dec_gridlookup.sh.
  5. polar_tile: run bash Place2Vec_2_enc_dec_polargridlookup.sh.
  6. wrap: run bash Place2Vec_2_enc_dec_aodha.sh.
  7. rbf: run bash Place2Vec_2_enc_dec_rbf.sh.
  8. scaled_rbf: run bash Place2Vec_2_enc_dec_scaledrbf.sh.
  9. grid: run bash Place2Vec_2_enc_dec_grid.sh.
  10. theory: run bash Place2Vec_2_enc_dec_theory.sh.
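As a toy illustration of the context-to-center setup (not the repository's learned model, which encodes context POI locations with the encoders listed above), one can score candidate center POI types against the mean embedding of the context types, using made-up random embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_types, dim = 5, 8
# Made-up POI type embeddings; in the real model these are learned.
type_emb = rng.normal(size=(n_types, dim))

def predict_center_type(context_types):
    """Score each candidate center type against the mean context embedding.
    A toy stand-in for the paper's learned context encoder/decoder."""
    ctx = type_emb[context_types].mean(axis=0)
    scores = type_emb @ ctx  # dot-product similarity per candidate type
    return int(np.argmax(scores))

pred = predict_center_type([1, 1, 2])
```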

Results:

  1. The trained model will be available in the directory you specified for --model_dir.
  2. The log including the evaluation results will be available in the directory you specified for --log_dir.

The Comparison Among the Response Maps of Different Models

[Figure: context_modeling]

Geo-Aware Fine-Grained Image Classification Task

The geo_prior/ folder contains the code for recreating the evaluation results of the geo-aware fine-grained image classification task in our ICLR 2020 paper.

This code is modified from Mac Aodha et al.'s GitHub codebase, to which we add multiple Space2Vec location encoder modules to capture geographic prior information about images.

Dependencies

  • Python 3.6+
  • Torch 1.3.0+
  • Other required packages are summarized in geo_prior/requirements.txt.

Data

To obtain the data, please visit Mac Aodha et al.'s project website.

Code Usage

This code is implemented in Python 3.

geo_prior/geo_prior/ contains the main code for training and evaluating models (we use the BirdSnap† dataset as an example):

  1. geo_prior/geo_prior/train_geo_net.py is used to train the location encoder model. Run python3 train_geo_net.py.
  2. geo_prior/geo_prior/run_evaluation.py is used to evaluate the location encoder model by combining the pre-trained CNN features. Run python3 run_evaluation.py.
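The evaluation step combines image-classifier scores with a location prior. One common way to apply such a prior is elementwise multiplication followed by renormalization; a minimal sketch with made-up probabilities (see run_evaluation.py for the actual procedure):

```python
import numpy as np

def combine_with_geo_prior(image_probs, location_probs):
    """Re-rank image-classifier class probabilities with a location prior
    by elementwise multiplication, then renormalize to sum to 1."""
    scores = image_probs * location_probs
    return scores / scores.sum()

image_probs = np.array([0.5, 0.3, 0.2])     # made-up CNN class probabilities
location_probs = np.array([0.1, 0.8, 0.1])  # made-up location-prior probabilities
combined = combine_with_geo_prior(image_probs, location_probs)
print(combined.argmax())  # class 1 now ranks first
```

The location prior here is what the Space2Vec location encoder produces: for a class that is implausible at the photo's location, the prior suppresses the image classifier's score.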

Reference

If you find our work useful in your research, please consider citing our paper.

@inproceedings{space2vec_iclr2020,
	title={Multi-Scale Representation Learning for Spatial Feature Distributions using Grid Cells},
	author={Mai, Gengchen and Janowicz, Krzysztof and Yan, Bo and Zhu, Rui and Cai, Ling and Lao, Ni},
	booktitle={The Eighth International Conference on Learning Representations},
	year={2020},
	organization={openreview}
}

If you want to use our code for image classification, please cite both our ICLR 2020 paper and Mac Aodha et al.'s ICCV paper:

@inproceedings{geo_priors_iccv19,
  title     = {{Presence-Only Geographical Priors for Fine-Grained Image Classification}},
  author    = {Mac Aodha, Oisin and Cole, Elijah and Perona, Pietro},
  booktitle = {ICCV},
  year = {2019}
}
