fpichi / gca-rom

GCA-ROM is a library which implements a graph convolutional autoencoder architecture as a nonlinear model order reduction strategy.

Home Page: https://fpichi.github.io/gca-rom/

License: GNU General Public License v3.0

Python 0.32% Jupyter Notebook 99.68%
deep-learning graph-neural-networks machine-learning model-order-reduction neural-networks parametric-pdes pytorch torch-geometric

gca-rom's Introduction

GCA-ROM

GCA-ROM is a library which implements a graph convolutional autoencoder architecture as a nonlinear model order reduction strategy.

Installation

GCA-ROM requires pytorch, pyg, matplotlib, scipy and h5py. They can be easily installed via pip or conda.
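For reference, a minimal pip-based setup might look like the line below (a sketch assuming a recent torch_geometric release that ships as a pure-Python wheel; adjust the torch installation for your CUDA setup):

pip install torch torch_geometric matplotlib scipy h5py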

Open In Colab

In the notebook folder, one can find the *.ipynb files corresponding to the tutorials to run the models in Google Colab without installing the package.

MacOS

The latest version of pyg is currently not available on conda. The required dependencies, exported in utils/gca_rom.yml, can be automatically installed in a new environment via

conda env create -f gca_rom.yml

Linux

conda create -n 'gca_rom' python=3.10
conda activate gca_rom
conda install pytorch -c pytorch 
conda install pyg -c pyg
conda install matplotlib pandas scipy jupyter h5py

The official distribution is on GitHub, and you can clone the repository using

git clone git@github.com:fpichi/gca-rom.git

Summary of GCA-ROM Features

- OFFLINE PHASE

- ONLINE PHASE

The proposed modular architecture, namely Graph Convolutional Autoencoder for Reduced Order Modelling (GCA-ROM), subsequently exploits:
  1. a graph-based layer to express an unstructured dataset;
  2. an encoder module compressing the information through:
    1. spatial convolutional layers based on MoNet to identify patterns between geometrically close regions;
    2. a skip-connection operation, to keep track of the original information and help the learning procedure;
    3. a pooling operation, to down-sample the data to obtain smaller networks;
  3. a bottleneck, connected to the encoder by means of a dense layer, which contains the latent behavior in a vector;
  4. a decoder module, recovering the original data by applying the same operations as in the encoder, but in reverse order.
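As a rough illustration of these four building blocks, the following is a minimal sketch using PyTorch Geometric; the layer sizes, kernel sizes, fixed number of mesh nodes, and the omission of the pooling step are simplifications for readability, not the choices made in the library.

import torch
import torch.nn as nn
from torch_geometric.nn import GMMConv  # MoNet-style spatial convolution

class GCASketch(nn.Module):
    def __init__(self, n_nodes=100, in_channels=1, hidden=16, latent=4, edge_dim=2):
        super().__init__()
        # 2. encoder: spatial (MoNet) convolutions over the mesh graph
        self.enc1 = GMMConv(in_channels, hidden, dim=edge_dim, kernel_size=5)
        self.enc2 = GMMConv(hidden, hidden, dim=edge_dim, kernel_size=5)
        # 3. bottleneck: dense layers to and from the latent vector
        self.to_latent = nn.Linear(n_nodes * hidden, latent)
        self.from_latent = nn.Linear(latent, n_nodes * hidden)
        # 4. decoder: the same operations in reverse order
        self.dec1 = GMMConv(hidden, hidden, dim=edge_dim, kernel_size=5)
        self.dec2 = GMMConv(hidden, in_channels, dim=edge_dim, kernel_size=5)
        self.n_nodes, self.hidden = n_nodes, hidden

    def forward(self, x, edge_index, edge_attr):
        # 1. the unstructured dataset is expressed as a graph (x, edge_index, edge_attr)
        h = torch.relu(self.enc1(x, edge_index, edge_attr))
        h = torch.relu(self.enc2(h, edge_index, edge_attr)) + h   # skip connection
        z = self.to_latent(h.reshape(1, -1))                      # latent vector
        h = self.from_latent(z).reshape(self.n_nodes, self.hidden)
        h = torch.relu(self.dec1(h, edge_index, edge_attr))
        return self.dec2(h, edge_index, edge_attr), z

# Usage on a random graph with 100 nodes and 2D pseudo-coordinates on the edges
x = torch.randn(100, 1)
edge_index = torch.randint(0, 100, (2, 400))
edge_attr = torch.rand(400, 2)
u_hat, z = GCASketch()(x, edge_index, edge_attr)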

Tutorials

The nonlinear ROM methodology has been tested on 14 different benchmarks, including:

  • scalar/vector and linear/nonlinear equations (01_poisson.ipynb)
  • advection-dominated regime (02_advection.ipynb)
  • physical and geometrical parametrizations (03_graetz.ipynb)
  • bifurcating phenomena (04_navier_stokes_vx.ipynb, 05_navier_stokes_vy.ipynb, 06_navier_stokes_p.ipynb)
  • time-dependent models (07_diffusion.ipynb, 08_poiseuille.ipynb)
  • a 3D elastic problem (09_elasticity.ipynb)
  • high-dimensional parametric applications (10_stokes.ipynb)
  • complex time-dependent problems (11_holed_advection.ipynb, 12_lid_driven_cavity.ipynb, 13_moving_hole_advection.ipynb)

To run a benchmark, navigate to the tutorial folder and run the corresponding *.ipynb file. If available, a GUI will open with preset values for the hyperparameter configuration of the network. Once the window is closed, the code starts the training phase, unless a trained model with the same configuration already exists.

After the GCA-ROM is evaluated, many plots are automatically generated, ranging from training losses and latent evolution to relative errors, solution and error fields, and GIFs of the dynamics. Below are some snapshots of the approximated solutions for the available benchmarks:

Cite GCA-ROM

[1] Pichi, F., Moya, B. and Hesthaven, J.S. (2024) 'A graph convolutional autoencoder approach to model order reduction for parametrized PDEs', Journal of Computational Physics, 501, 112762. Available at: arXiv, Journal of Computational Physics.

If you use GCA-ROM for academic research, you are encouraged to cite the paper using:

@article{PichiGraphConvolutionalAutoencoder2024,
  title = {A Graph Convolutional Autoencoder Approach to Model Order Reduction for Parametrized {{PDEs}}},
  author = {Pichi, Federico and Moya, Beatriz and Hesthaven, Jan S.},
  year = {2024},
  journal = {Journal of Computational Physics},
  volume = {501},
  pages = {112762},
  doi = {10.1016/j.jcp.2024.112762},
  urldate = {2024-01-18}
}

Authors and contributors

Federico Pichi, in collaboration with the MCSS group of Prof. Jan S. Hesthaven at EPFL.

With contributions from:

gca-rom's People

Contributors

beatrizmoya, fpichi, francpp, oisin-m


gca-rom's Issues

The version of all Python packages

Could you provide a requirements.txt file indicating the versions of the packages? The code does not seem to work when I use the latest Python packages.

[screenshot of the error raised when running with the latest packages]

I think the reason is that the abstract methods len and get of the parent class are not defined in the child class. I guess that if I use the same package versions as you, the problem will be solved.
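For context, recent torch_geometric versions do require Dataset subclasses to implement the abstract len and get methods; a minimal sketch of what that looks like (the class below is a hypothetical example, not the repository's code):

from torch_geometric.data import Data, Dataset

class GraphListDataset(Dataset):  # hypothetical example class
    def __init__(self, data_list):
        super().__init__()
        self.data_list = data_list

    def len(self) -> int:             # abstract method of the parent class
        return len(self.data_list)

    def get(self, idx: int) -> Data:  # abstract method of the parent class
        return self.data_list[idx]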

Remove Case and Match Statements

The Python 3.10 requirement exists only because match and case statements are used. These can be replaced with standard if statements to remove this constraint.
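For illustration (the function name and case values below are hypothetical), a Python 3.10 match statement and an equivalent if/elif chain that also runs on older Python versions:

def scaling_name_match(code: int) -> str:
    match code:
        case 1:
            return "standard"
        case 2:
            return "minmax"
        case _:
            return "none"

def scaling_name_if(code: int) -> str:
    if code == 1:
        return "standard"
    elif code == 2:
        return "minmax"
    else:
        return "none"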

Predicts u,v,p using single GCA-ROM

Thank you for this valuable code, which is very helpful to me.

In the JCP paper, you wrote the following:

Given the nature of the model, with velocity and the pressure fields as unknowns, we show the adaptability of the architecture to vector problems. Thus, we consider a monolithic approach, where the three fields are concatenated as features for each node, rather than a partitioned one, where we recover independently each field.

As I understand it, the GCA-ROM in your paper predicts u, v, and p at the same time with a single GCA-ROM model.
However, in the NS codes in the tutorial folder, a separate GCA-ROM has to be trained to predict each of u, v, and p.
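For reference, a small sketch of the two layouts described in the quoted passage (u, v and p below are random placeholder arrays, not the repository's data):

import torch

n_nodes = 5
u = torch.randn(n_nodes)   # x-velocity at each node
v = torch.randn(n_nodes)   # y-velocity at each node
p = torch.randn(n_nodes)   # pressure at each node

# monolithic approach: one feature vector [u_i, v_i, p_i] per node -> shape (n_nodes, 3)
x_monolithic = torch.stack([u, v, p], dim=1)

# partitioned approach: three separate single-feature datasets, one per field
x_u, x_v, x_p = u.unsqueeze(1), v.unsqueeze(1), p.unsqueeze(1)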

  1. Could you please share the original code that predicts u, v, and p all at once with a single GCA-ROM?

Also, an additional question:

  1. The current GCA-ROM framework seems to be trainable and testable only on a fixed number of nodes (N_h in the paper), due to its convolutional nature. Do you have any plans to extend the framework so that it can be applied to graphs with different numbers of nodes?

Load Dataset in Chunks to Avoid Memory Issues

In some cases, the datasets are large enough that they can cause memory issues.
However, each sample can fit comfortably in memory and the preprocessing step can also run without issues.

Therefore, since the memory issues only arise when loading all the data in training, a possible solution would be to load the dataset in chunks instead of trying to run on the entire dataset all at once.
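As a sketch of this idea (the file name, dataset key, and chunk size below are hypothetical), an HDF5 dataset could be streamed in blocks instead of loaded at once:

import h5py
import numpy as np

def iterate_chunks(path="dataset.h5", key="snapshots", chunk_size=64):
    # Yield blocks of samples instead of loading the full array into memory.
    with h5py.File(path, "r") as f:
        dset = f[key]  # e.g. shape (n_samples, n_nodes, n_features)
        for start in range(0, dset.shape[0], chunk_size):
            yield np.asarray(dset[start:start + chunk_size])

# for chunk in iterate_chunks():
#     ...  # preprocess or train on one chunk at a time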

About Pooling Operation

I have read the code but found that no pooling operation is applied. Are you going to upload the version of the code with pooling?

Add GBM

Implement the option to use GBM and provide an easy example tutorial notebook.

Add Choice of Activation Functions

Allowing the user to specify activation functions would be desirable, similar to #10.

Additionally, the HyperParams.act function currently only gets passed to the mapper module, and the default activation functions differ between the mapper and the autoencoder parts of the code. It would thus be better to have the option to specify the activations for the autoencoder and mapper modules separately.
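A minimal sketch of the kind of user-selectable activations suggested here (the mapping and variable names are hypothetical simplifications, not the current HyperParams API):

import torch.nn as nn

ACTIVATIONS = {"relu": nn.ReLU, "tanh": nn.Tanh, "elu": nn.ELU}

def build_activation(name: str) -> nn.Module:
    # Return the activation module selected by the user for a given submodule.
    return ACTIVATIONS[name]()

act_autoencoder = build_activation("elu")   # e.g. activation for the autoencoder
act_mapper = build_activation("tanh")       # e.g. a different one for the mapper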

Minibatch Optimisation

Currently, optimisation is done using the entire dataset, but it would be desirable to have the option to optimise over minibatches.

This could also help with memory issues (see #4).
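A self-contained sketch of minibatch optimisation with PyTorch Geometric's DataLoader (the synthetic data, model, and loss below are placeholders, not the repository's training loop):

import torch
from torch_geometric.data import Data
from torch_geometric.loader import DataLoader
from torch_geometric.nn import GCNConv

# synthetic dataset: 100 small graphs with random node features and targets
dataset = [Data(x=torch.randn(10, 3),
                edge_index=torch.randint(0, 10, (2, 20)),
                y=torch.randn(10, 1)) for _ in range(100)]
loader = DataLoader(dataset, batch_size=16, shuffle=True)

model = GCNConv(3, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(5):
    for batch in loader:  # each batch merges 16 graphs into one disjoint graph
        optimizer.zero_grad()
        out = model(batch.x, batch.edge_index)
        loss = torch.nn.functional.mse_loss(out, batch.y)
        loss.backward()
        optimizer.step()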
