association-rosia / flair-2

Engage in a semantic segmentation challenge for land cover description using multimodal remote sensing earth observation data, delving into real-world scenarios with a dataset comprising 70,000+ aerial imagery patches and 50,000 Sentinel-2 satellite acquisitions.

Home Page: https://codalab.lisn.upsaclay.fr/competitions/13447

License: MIT License

Python 0.18% Jupyter Notebook 99.82%
computer-vision deep-learning multiclass-segmentation cookiecutter-template deeplearning image-processing lightning multimodal multimodal-deep-learning pytorch

flair-2's Introduction

🛰️ FLAIR #2

The challenge involves a semantic segmentation task focusing on land cover description using multimodal remote sensing earth observation data. Participants will explore heterogeneous data fusion methods in a real-world scenario. Upon registration, access is granted to a dataset containing 70,000+ aerial imagery patches with pixel-based annotations and 50,000 Sentinel-2 satellite acquisitions.

This project was made possible by our compute partners 2CRSi and NVIDIA.

🏆 Challenge ranking

The challenge was scored with the mean Intersection over Union (mIoU).
Our solution ranked 8th out of 30 teams, with a mIoU of 0.62610 🎉.

The podium:
🥇 strakajk - 0.64130
🥈 Breizhchess - 0.63550
🥉 qwerty64 - 0.63510

🖼️ Result example

[Result example: aerial input image | multi-class label | multi-class prediction]

View more results on the WandB project.

🏛️ Model architecture

#️⃣ Command lines

Launch a training

python src/models/train_model.py <hyperparams args>

Create a submission

python src/models/predict_model.py -n {model.ckpt}

🔬 References

Chen, L. C., Papandreou, G., Schroff, F., & Adam, H. (2017). Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.

Garioud, A., De Wit, A., Poupée, M., Valette, M., Giordano, S., & Wattrelos, B. (2023). FLAIR #2: textural and temporal information for semantic segmentation from multi-source optical imagery. arXiv preprint arXiv:2305.14467.

Xie, E., Wang, W., Yu, Z., Anandkumar, A., Alvarez, J. M., & Luo, P. (2021). SegFormer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34, 12077-12090.

📝 Citing

@misc{RebergaUrgell:2023,
  Author = {Louis Reberga and Baptiste Urgell},
  Title = {FLAIR #2},
  Year = {2023},
  Publisher = {GitHub},
  Journal = {GitHub repository},
  Howpublished = {\url{https://github.com/association-rosia/flair-2}}
}

🛡️ License

This project is distributed under the MIT License.

👨🏻‍💻 Contributors

Louis REBERGA

Baptiste URGELL


flair-2's Issues

Sentinel: dealing with saturated pixels

Sentinel-2 values nominally range from 1 to 10,000. This ceiling is sometimes exceeded; in that case the pixel is said to be saturated, and we need to decide how to handle it. A value of 0 can also occur: it means there is no data, so such pixels should be excluded from the mean.

Useful links:
👉 https://gis.stackexchange.com/questions/233874/what-is-the-range-of-values-of-sentinel-2-level-2a-images
👉 https://docs.digitalearthafrica.org/en/latest/data_specs/Sentinel-2_Level-2A_specs.html
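A minimal NumPy sketch of the handling described above (function names are hypothetical): clip saturated values to the nominal ceiling and exclude nodata zeros from the mean.

```python
import numpy as np

def clip_saturated(patch: np.ndarray) -> np.ndarray:
    """Clip saturated Sentinel-2 reflectances to the nominal ceiling.

    Values nominally lie in [1, 10000]; anything above is treated as
    saturated and clipped, while 0 is kept as the nodata marker.
    """
    return np.where(patch == 0, 0, np.clip(patch, 1, 10_000))

def mean_ignoring_nodata(patch: np.ndarray) -> float:
    """Mean reflectance over valid pixels only (0 = no data)."""
    valid = patch[patch > 0]
    return float(valid.mean()) if valid.size else 0.0
```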

Test Dataset

The dataset does not work in test mode because no labels exist for the test split.

ref: rasterio.errors.RasterioIOError: data/raw/test/labels/D061_2020/Z7_AU/msk/MSK_085636.tif: No such file or directory
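A minimal sketch of one possible fix (class and field names are hypothetical, in-memory arrays stand in for the raster files): only attempt to load a mask when the split actually has labels.

```python
import numpy as np

class FlairDataset:
    """Minimal sketch: skip the label rasters entirely on the test split."""

    def __init__(self, images, masks=None, is_test=False):
        self.images = images   # aerial patches (arrays or paths)
        self.masks = masks     # label rasters; None for the test split
        self.is_test = is_test

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        image = np.asarray(self.images[idx])
        if self.is_test or self.masks is None:
            # The test split ships no label .tif files, so never try to open them.
            return {"image": image}
        return {"image": image, "labels": np.asarray(self.masks[idx])}
```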

Dataloader Test

When the dataset is set to is_test=True and passed to a dataloader, an error is raised. This is because a sample cannot return None.

ref:

Traceback (most recent call last):
File "/Users/titou/Documents/flair-2/src/data/make_dataset.py", line 199, in <module>
for image_id, aerial, sen, labels in dataloader:
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
data = self._next_data()
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 54, in fetch
return self.collate_fn(data)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py", line 265, in default_collate
return collate(batch, collate_fn_map=default_collate_fn_map)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py", line 142, in collate
return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed] # Backwards compatibility.
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py", line 142, in <listcomp>
return [collate(samples, collate_fn_map=collate_fn_map) for samples in transposed] # Backwards compatibility.
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/utils/data/_utils/collate.py", line 150, in collate
raise TypeError(default_collate_err_msg_format.format(elem_type))
TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'NoneType'>
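One way around this collate error, sketched below with a toy dataset (names are illustrative): drop the None fields from each sample before default_collate sees them, so test-split samples without labels can still be batched.

```python
import torch
from torch.utils.data import DataLoader, Dataset, default_collate

def collate_skip_none(batch):
    """Drop None fields from each sample, then delegate to default_collate."""
    cleaned = [tuple(f for f in sample if f is not None) for sample in batch]
    return default_collate(cleaned)

class ToyTestSet(Dataset):
    """Toy stand-in for the test split: the label slot is None."""
    def __len__(self):
        return 4
    def __getitem__(self, idx):
        return f"id_{idx}", torch.zeros(3, 4, 4), None

loader = DataLoader(ToyTestSet(), batch_size=2, collate_fn=collate_skip_none)
```

Note that dropping fields changes the tuple arity, so the consuming loop must unpack only the fields that remain (here: ids and aerial patches).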

Submission request

To create the submission file, the aerial image name is mandatory. Modify the dataset so that it returns the image name.
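A small illustrative helper for deriving a prediction file name from an aerial image path. The 'IMG_*' to 'PRED_*' mapping is an assumed convention for illustration, not taken from the repository.

```python
from pathlib import Path

def submission_name(aerial_path: str) -> str:
    """Derive a prediction file name from an aerial image path.

    Assumes (hypothetically) that submissions mirror the input name,
    e.g. 'IMG_085636.tif' -> 'PRED_085636.tif'.
    """
    stem = Path(aerial_path).stem  # e.g. 'IMG_085636'
    return stem.replace("IMG", "PRED") + ".tif"
```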

TTA & Batch

The TTA wrapper doesn't work with a batch of data:

File "/Users/titou/Documents/flair-2/src/models/make_train.py", line 82, in <module>
main()
File "/Users/titou/Documents/flair-2/src/models/make_train.py", line 77, in main
trainer.fit(model=lightning_model)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 529, in fit
call._call_and_handle_interrupt(
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 568, in _fit_impl
self._run(model, ckpt_path=ckpt_path)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 973, in _run
results = self._run_stage()
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 1016, in _run_stage
self.fit_loop.run()
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 201, in run
self.advance()
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/fit_loop.py", line 354, in advance
self.epoch_loop.run(self._data_fetcher)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 133, in run
self.advance(data_fetcher)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/training_epoch_loop.py", line 218, in advance
batch_output = self.automatic_optimization.run(trainer.optimizers[0], kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 185, in run
self._optimizer_step(kwargs.get("batch_idx", 0), closure)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 260, in _optimizer_step
call._call_lightning_module_hook(
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 144, in _call_lightning_module_hook
output = fn(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/core/module.py", line 1256, in optimizer_step
optimizer.step(closure=optimizer_closure)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/core/optimizer.py", line 155, in step
step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 225, in optimizer_step
return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 114, in optimizer_step
return optimizer.step(closure=closure, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 280, in wrapper
out = func(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/optim/optimizer.py", line 33, in _use_grad
ret = func(self, *args, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/optim/adamw.py", line 148, in step
loss = closure()
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/plugins/precision/precision_plugin.py", line 101, in _wrap_closure
closure_result = closure()
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 140, in __call__
self._result = self.closure(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 126, in closure
step_output = self._step_fn()
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/loops/optimization/automatic.py", line 307, in _training_step
training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values())
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 291, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 367, in training_step
return self.model.training_step(*args, **kwargs)
File "/Users/titou/Documents/flair-2/./src/models/lightning.py", line 94, in training_step
outputs = self.forward(inputs={'aerial': aerial, 'sen': sen})
File "/Users/titou/Documents/flair-2/./src/models/lightning.py", line 86, in forward
x = self.model(inputs=inputs, step=self.step, batch_size=self.batch_size)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/titou/Documents/flair-2/./src/data/tta/wrappers.py", line 23, in forward
inputs = augmentation.augment(inputs, param)
File "/Users/titou/Documents/flair-2/./src/data/tta/augmentations.py", line 77, in augment
inputs[key] = F.rotate(inputs[key], angle=angle)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torchvision/transforms/functional.py", line 1140, in rotate
return F_t.rotate(img, matrix=matrix, interpolation=interpolation.value, expand=expand, fill=fill)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torchvision/transforms/_functional_tensor.py", line 669, in rotate
return _apply_grid_transform(img, grid, interpolation, fill=fill)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torchvision/transforms/_functional_tensor.py", line 560, in _apply_grid_transform
img = grid_sample(img, grid, mode=mode, padding_mode="zeros", align_corners=False)
File "/opt/homebrew/Caskroom/miniconda/base/envs/flair-2-env/lib/python3.10/site-packages/torch/nn/functional.py", line 4244, in grid_sample
return torch.grid_sampler(input, grid, mode_enum, padding_mode_enum, align_corners)
RuntimeError: grid_sampler(): expected grid to have size 3 in last dimension, but got grid with sizes [16, 40, 40, 2]

The file "data\\raw\\labels-statistics-12.csv" is missing!

Dear all, thanks for making the module available.

I have been trying to reproduce your pixel-wise classification results, but I think the "labels-statistics-12.csv" file is not uploaded in this repository; could you please share it with me?

I am really interested in how you merged the feature maps for both modalities (aerial and Sentinel time series).
Here is the error message I am getting when trying to train the model:

handle = open(
FileNotFoundError: [Errno 2] No such file or directory: 'data\\raw\\labels-statistics-12.csv'
wandb: Waiting for W&B process to finish... (failed 1). Press Ctrl-C to abort syncing.
wandb: View run daily-forest-1 at: https://wandb.ai/hubert10/flair-2/runs/1sn7i49o
wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
wandb: Find logs at: .\wandb\run-20240130_194359-1sn7i49o\logs

Thanks in advance!

Comment make_dataset

Please add comments to your code; I just spent quite a while figuring out how the Sentinel transformations work. Use more explicit variable names ("channels" is ambiguous, for example: is it the bands or the dates?). Type the variables so we know what we are working with, and add docstrings (using ChatGPT, for example).
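As an illustration of the requested style, a hypothetical helper with explicit axis names, type hints, and a docstring (the function and its convention are examples, not code from the repository):

```python
import numpy as np

def select_bands(sen_patch: np.ndarray, band_indices: list) -> np.ndarray:
    """Keep only the requested spectral bands of a Sentinel-2 patch.

    Args:
        sen_patch: array of shape (dates, bands, height, width); the two
            leading axes are named explicitly instead of the ambiguous
            'channels'.
        band_indices: positions of the bands to keep, e.g. [2, 1, 0] for RGB.

    Returns:
        Array of shape (dates, len(band_indices), height, width).
    """
    return sen_patch[:, band_indices, :, :]
```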
