mathlab / pina

Physics-Informed Neural networks for Advanced modeling

Home Page: https://mathlab.github.io/PINA/

License: MIT License

Python 97.58% Shell 0.18% TeX 2.24%
physics-informed-neural-networks modeling machine-learning deep-learning python pytorch differential-equations ode pde physics-informed

pina's Issues

Documentation

  • add docstrings
  • add sphinx configuration
  • add rst file

Plotter class

Is your feature request related to a problem? Please describe.
The Plotter class is limited in v0.1. Specifically:

  • We can only plot on rectangles in 2D.

Describe the solution you'd like
Add or update the following methods:

  • plot : should work in 2D and in 3D with arbitrarily shaped geometries. Using a scatter plot is easier, but the results are not great looking, so an alternative should be considered.
  • plot_loss : should plot the loss history (or histories, in case there is more than one model); see the sketch after this list.
  • plot_geometry_discretisation : should plot the geometry discretisation used for training, with different point colors for different conditions.
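
A minimal sketch of what plot_loss could look like, assuming the loss history is available as a dict mapping a condition or model name to a list of per-epoch values (the argument name and data layout are assumptions, not the current API):

import matplotlib.pyplot as plt

def plot_loss(loss_history, logy=True):
    """Plot one curve per entry of a {name: [loss_0, loss_1, ...]} dict."""
    fig, ax = plt.subplots()
    for name, values in loss_history.items():
        ax.plot(range(len(values)), values, label=name)
    if logy:
        ax.set_yscale('log')  # losses usually span several orders of magnitude
    ax.set_xlabel('epoch')
    ax.set_ylabel('loss')
    ax.legend()
    return fig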

Question: How to use extra input variables from data in the pinn that do not occur in the equations

The objective
I was wondering if and how I can use my input_variables from my input_tensor in the pinn. Or does it already do this?

class Heat2D(TimeDependentProblem, SpatialProblem):
    
    # Define these yourself
    LENGTH_X = 82
    LENGTH_Y = 70
    DURATION = 2284
    
    output_variables = ['u']
    spatial_domain = Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y]})
    temporal_domain = Span({'t': [0, DURATION]})

    def heat_equation_2D(input_, output_):
        '''1'''
        # c is the thermal diffusivity; it varies for different materials, so look it up
        c = (0.01/torch.pi) ** 0.5
        
        du = grad(output_, input_)
        ddu = grad(du, input_, components=['dudx','dudy'])
        return (
            du.extract(['dudt']) -
            (c**2)*(ddu.extract(['ddudxdx']) + ddu.extract(['ddudydy']))
        )

    def nil_dirichlet_x(input_, output_):
        '''2 and 3'''
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudx']) - u_expected_boundary
    
    def nil_dirichlet_yL(input_, output_):
        '''4'''
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudy']) - u_expected_boundary
    
    def nil_dirichlet_y0(input_, output_):
        '''5'''
        # TODO: make this conditionally on if door is open
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudy']) - u_expected_boundary

    def initial_condition(input_, output_):
        u_expected_initial = torch.sin(torch.pi*input_.extract(['x']))
        return output_.extract(['u']) - u_expected_initial
    

    conditions = {
        'boundx0': Condition(location=Span({'x': 0, 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=nil_dirichlet_x),
        'boundxL': Condition(location=Span({'x': LENGTH_X, 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=nil_dirichlet_x),
        'boundy0': Condition(location=Span({'x': [0, LENGTH_X], 'y': 0, 't': [0, DURATION]}), function=nil_dirichlet_y0),
        'boundyL': Condition(location=Span({'x': [0, LENGTH_X], 'y': LENGTH_Y, 't': [0, DURATION]}), function=nil_dirichlet_yL),
        'initial': Condition(location=Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y], 't': 0}), function=initial_condition),
        'heat_eq': Condition(location=Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=heat_equation_2D),
        'data': Condition(input_points=input_tensor , output_points=output_tensor),
        }

I was wondering all this since

heat_problem = Heat2D()
print(heat_problem.input_variables)

gives

['x', 'y', 't']

instead of

['Hot gas pipe', 'Liquid pipe', 'Suction pipe',
       'Energie meter (kwh / pulse) Watt', 'Door', 'Fridge', 'Water-1',
       'Water-2', 'Water-3', 'x', 'y', 't']

Training different devices

Describe the bug
When training on GPU, the user needs to specify where specific input/output points will be moved. This is inconsistent, since the sampled points are moved automatically (without asking the user).

Additional context
It would be better to move all the points (whether sampled or inserted on CPU) to the GPU together, ideally when building the Trainer.

Batching

Is your feature request related to a problem? Please describe.
Add the possibility of using batching within a PINN training.

Describe the solution you'd like
A new argument in the constructor to select the batch size, which is then used in the optimization cycle.

Additional context
Mandatory for long trainings, especially when using a GPU.
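
A rough sketch of the batched optimization cycle described above, using a plain DataLoader over the sampled collocation points (the function and argument names are illustrative, not the actual PINA interface):

import torch
from torch.utils.data import DataLoader, TensorDataset

def train_batched(model, points, residual_fn, batch_size=128, epochs=1000, lr=1e-3):
    """Minimize the mean squared residual over mini-batches of collocation points."""
    loader = DataLoader(TensorDataset(points), batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (batch,) in loader:
            batch = batch.requires_grad_(True)  # needed for derivatives w.r.t. inputs
            optimizer.zero_grad()
            loss = residual_fn(batch, model(batch)).pow(2).mean()
            loss.backward()
            optimizer.step()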

run_stokes example uses undefined attribute history

Describe the bug
The example run_stokes.py attempts to save out history on line 43:

for i, losses in enumerate(pinn.history):

The PINN class has no attribute history, so this line results in an AttributeError.

To Reproduce
python examples/run_stokes.py -s 1

Expected behavior
A file to be saved with the loss history for the run.

Output

Traceback (most recent call last):
  File "/Users/eliprater/Documents/Projects/PINA/examples/run_stokes.py", line 43, in <module>
    for i, losses in enumerate(pinn.history):
AttributeError: 'PINN' object has no attribute 'history'

Additional context
Was this line meant to reference the PINN attribute history_loss?

In that case the line could be changed to:

for i, losses in pinn.history_loss.items():
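
For completeness, a small sketch of how the corrected loop could actually save the history to a file, assuming history_loss maps an epoch/iteration key to the list of per-condition losses (which is what the attribute name suggests, but is an assumption here):

# hypothetical fix for examples/run_stokes.py: dump the recorded losses to a text file
with open('stokes_loss_history.txt', 'w') as f:
    for epoch, losses in pinn.history_loss.items():
        f.write('{} {}\n'.format(epoch, ' '.join(str(float(l)) for l in losses)))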

Add logging support

Is your feature request related to a problem? Please describe.
At the moment, the only output of a typical PINA application is the loss trend during the training. Better output (with configurable verbosity) would be a nice feature, especially for debugging purposes.

Describe the solution you'd like
The standard logging library (https://docs.python.org/3/library/logging.html#module-logging) can be used to let the user set the desired verbosity.

Describe alternatives you've considered
loguru, but it would add a further dependency.
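
A minimal sketch of how the standard logging module could be wired in, with the verbosity chosen by the user (logger names and messages are only illustrative):

import logging

# library side: one logger per module, no handlers forced on the user
logger = logging.getLogger('pina.pinn')

def log_epoch(epoch, loss):
    logger.info('epoch %05d - loss %.6e', epoch, loss)
    logger.debug('per-condition losses and other diagnostics could go here')

# user side: pick the desired verbosity
logging.basicConfig(level=logging.INFO)  # or logging.DEBUG for more output
log_epoch(0, 1.23e-2)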

Dropout/BatchNorm added

Is your feature request related to a problem? Please describe.
In the FeedForward class it would be a nice feature to also add the possibility of dropout and/or batchnorm for regularising the training. Is this something you intend to add?
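
A sketch of how optional dropout/batch-norm layers could be interleaved in a feed-forward stack; this only illustrates the idea and is not the current FeedForward API:

import torch.nn as nn

def hidden_block(in_dim, out_dim, p_dropout=0.0, batchnorm=False, activation=nn.Tanh):
    """One hidden block: Linear -> (BatchNorm1d) -> activation -> (Dropout)."""
    layers = [nn.Linear(in_dim, out_dim)]
    if batchnorm:
        layers.append(nn.BatchNorm1d(out_dim))
    layers.append(activation())
    if p_dropout > 0.0:
        layers.append(nn.Dropout(p_dropout))
    return nn.Sequential(*layers)

# e.g. a 2-input, 1-output network regularised with dropout
net = nn.Sequential(
    hidden_block(2, 20, p_dropout=0.1),
    hidden_block(20, 20, p_dropout=0.1),
    nn.Linear(20, 1),
)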

Review of the State of the Field for the JOSS paper

Is your feature request related to a problem? Please describe.
In the paper outline's statement of need section, some of the existing libraries are mentioned, but their comparison with PINA seems a little underdeveloped.

Describe the solution you'd like
Some additional description should be included detailing what novelty PINA brings. Stating that "PINA wants to emerge for its easiness of usage, allowing the users to quickly formulate the problem at hand and solve it, resulting in an intuitive frameworks designed by researchers for researchers." is inadequate. To some extent, the following points should be addressed:

  • Is the codebase more transparent or intuitive to make it generalizable?
  • If it indeed is intended for researchers, an important focus should be the addition of novel features (architectures, loss functions, regularizations, numerically difficult-to-solve problems) in new research projects. How easy or hard is it to add novel features compared to some of the existing software?
  • Reusability of trained models for future inference (I think the repo could also benefit from some examples/tutorials showing this)
  • Sustainability of the project (how will it be maintained down the line? what new features might we expect soon?)

Describe alternatives you've considered
I understand addressing all these points might be difficult and lengthy, so I won't be very rigorous about all of them, but some improvement is expected.

Additional context
Related to openjournals/joss-reviews#5352

Enhance Geometry Operations Docs

The geometry operations docstrings are too vague. The docstring in the __init__ should explain, using set notation, what each operation does.
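
For instance, a union operation could document itself along these lines (wording illustrative, not the current docstring):

class Union:
    """Union of domains.

    Given domains D_1, ..., D_n, the resulting domain is
        D = D_1 ∪ ... ∪ D_n = {x : x ∈ D_i for at least one i}.
    `is_inside(x)` is True iff x belongs to at least one D_i, and
    sampling draws points from all the component domains.
    """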

Add 'closure' function to the optimization function

In order to use optimizers like LBFGS, the training procedure needs to provide a closure function that performs a forward pass, zero_grad, and backward pass on the model. While this function may be optional for certain optimizers, it is recommended to use it for every optimizer.

Here is an example without closure:

def training_step(self, batch, batch_idx):
    opt = self.optimizers()
    opt.zero_grad()
    loss = self.compute_loss(batch)
    self.manual_backward(loss)
    opt.step()

And this one with closure:

def training_step(self, batch, batch_idx):
    opt = self.optimizers()

    def closure():
        loss = self.compute_loss(batch)
        opt.zero_grad()
        self.manual_backward(loss)
        return loss

    opt.step(closure=closure)

`Span.sample()` raises an exception when `variables != 'all'`

Describe the bug
Span.sample() raises an exception when variables != 'all'

To Reproduce

>>> from pina import Span
>>> s = Span({'x': [-1, 1], 'y': [-1, 1], 'a': [-1, 1], 'b': [-1, 1]})
>>> s.sample(100, 'random', ['x', 'y'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File ".../PINA/pina/span.py", line 111, in sample
    return _Nd_sampler(n, mode, variables)
  File ".../PINA/pina/span.py", line 92, in _Nd_sampler
    result.labels = list(self.range_.keys())
  File ".../PINA/pina/label_tensor.py", line 83, in labels
    raise ValueError(
ValueError: the tensor has not the same number of columns of the passed labels.

Expected behavior
I expected the call to return the result of a sample on the given variables.

Adding Comments to the Examples

Is your feature request related to a problem? Please describe.
The package contains a good set of examples. But their comprehensibility is not up to the mark, especially for someone who is just starting with the package and not yet familiar with the different modules and their utilities.

Describe the solution you'd like
In the examples, comments should be added to make it clear what problem is being solved and what the domains, constraints, etc. are. Additional comments should be added to clearly illustrate what the different parts of the code are accomplishing.

Additional context
openjournals/joss-reviews#5352

Loss/Residual calculation using the equation not the condition

Hello, it's me again! :)

I have reviewed the training_step function and noticed that it iterates through the conditions rather than the corresponding functions. While this approach is acceptable when there is no system of equations associated with those conditions, when working with a system of equations it is typically desirable to handle the equations individually. For instance, one might want to declare different condition.data_weights for each equation in the system. Currently, if you wish to handle one of the equations in the system differently during training, you have to define the same domain twice, which makes the SystemEquation class obsolete.
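
A sketch of the kind of loop meant here, iterating over the equations of a condition and weighting each residual separately; the attribute and method names (equations, residual, per-equation weights) are hypothetical, not the current API:

def condition_loss(condition, samples, model, weights):
    """Accumulate one separately weighted loss term per equation of a condition
    (condition.equation.equations, equation.residual and weights are hypothetical names)."""
    loss = 0.0
    predicted = model(samples)
    for equation, weight in zip(condition.equation.equations, weights):
        residual = equation.residual(samples, predicted)
        loss = loss + weight * residual.pow(2).mean()
    return loss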

Condition takes only the following keyword BUG

Upon running the Tutorial 1 Code (copy-pasted), an error message appeared instantly:

Traceback (most recent call last):
File "C:\Users\PycharmProjects\pythonProject\venv\lib\site-packages\IPython\core\interactiveshell.py", line 3508, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
runfile('C:\Users\PycharmProjects\pythonProject\test.py', wdir='C:\Users\PycharmProjects\pythonProject')
File "C:\Program Files\JetBrains\PyCharm 2023.1.1\plugins\python\helpers\pydev_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Program Files\JetBrains\PyCharm 2023.1.1\plugins\python\helpers\pydev_pydev_imps_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\Users\PycharmProjects\pythonProject\test.py", line 82, in
class SimpleODE(SpatialProblem):
File "C:\Users\PycharmProjects\pythonProject\test.py", line 110, in SimpleODE
'x0': Condition(Span({'x': 0.}), initial_condition),
File "C:\Users\PycharmProjects\pythonProject\venv\lib\site-packages\pina\condition.py", line 69, in init
raise ValueError('Condition takes only the following keyword arguments: {input_points, output_points, location, function, data_weight}.')
ValueError: Condition takes only the following keyword arguments: {input_points, output_points, location, function, data_weight}.

too complex methods

Is your feature request related to a problem? Please describe.
Some methods have high complexity and poor readability. Refactoring them is required.

Some of the methods are:

  • PINN.train
  • PINN.span_pts

Structure of residuals

Hi,
currently, I'm experimenting with version 0.1 and I have a question regarding stacking two residuals. Instead of using two different equations that essentially perform the same task, I would like to stack them together. I've provided an example below:

def eq1(input_, output_):
    u_grad = grad(output_, input_)
    u1_xx = grad(u_grad, input_, components=['du1dx'], d=['x'])
    u2_xx = grad(u_grad, input_, components=['du2dx'], d=['x'])
    return torch.stack([u1_xx - 0.05, u2_xx - 0.01], dim=1)  # minimize u1_xx - 0.05 and u2_xx - 0.01

class Mechanics(SpatialProblem):
    output_variables = ['u1', 'u2']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})

    conditions = {
        'D': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': [0, 1]}),
            equation=SystemEquation([eq1])),
    }

My specific query pertains to the "dim" parameter used in the torch.stack() function. I would like to confirm whether it is correct to set it as "1" or if it should be set as "0". I appreciate any clarification you can provide.

Thank you!
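
Not an answer on what the trainer expects internally, but a small standalone check of what the two layouts look like for residuals of shape (N, 1) may help:

import torch

n = 5
r1 = torch.zeros(n, 1)  # residual of the first equation, shape (N, 1)
r2 = torch.ones(n, 1)   # residual of the second equation, shape (N, 1)

print(torch.stack([r1, r2], dim=1).shape)  # torch.Size([5, 2, 1]) - new axis inserted
print(torch.stack([r1, r2], dim=0).shape)  # torch.Size([2, 5, 1]) - residuals first
print(torch.cat([r1, r2], dim=1).shape)    # torch.Size([5, 2])   - one column per residual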

Review of the State of the Field for the JOSS paper

Is your feature request related to a problem? Please describe.
In the statement of need section, some PyTorch and TensorFlow based libraries are mentioned. One notable omission in the PyTorch section is Modulus. In addition, the description of the unique value proposition of PINA is a little too high level.

Describe the solution you'd like
In addition to adding Modulus to the list of existing works, it would be good to add a couple of examples or descriptions of specific features to support the claim: "PINA wants to emerge for its easiness of usage, allowing the users to quickly formulate the problem at hand and solve it, resulting in an intuitive frameworks designed by researchers for researchers."
The other thing that is missing is a description of the inference workflow for how the final trained model is expected to be used.

Additional context
For JOSS review: openjournals/joss-reviews#5352

Better names for methods and class

Is your feature request related to a problem? Please describe.
Currently, the class Span and some of the methods (e.g. span_pts) are quite misleading, and they should be changed in the next releases.

Describe the solution you'd like
Some options:
Span -> HyperCubeDomain, CartesianDomain, SquareDomain
PINN.span_pts -> sample_location, generate_samples, collocate_samples

Lightning MacOS Py37

Describe the bug
There is a bug in Lightning when using macOS with Python 3.7.

To Reproduce
Run pytest.

Output

ImportError while importing test module '/Users/runner/work/PINA/PINA/tests/test_cartesian.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
tests/test_cartesian.py:4: in <module>
    from pina import LabelTensor, Condition, CartesianDomain, PINN
pina/__init__.py:13: in <module>
    from .pinn import PINN
pina/pinn.py:10: in <module>
    from .solver import SolverInterface
pina/solver.py:5: in <module>
    import lightning.pytorch as pl
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/__init__.py:32: in <module>
    from lightning.app import storage  # noqa: E402
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/__init__.py:25: in <module>
    from lightning.app import components  # noqa: E402, F401
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/components/__init__.py:1: in <module>
    from lightning.app.components.database.client import DatabaseClient
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/components/database/__init__.py:2: in <module>
    from lightning.app.components.database.server import Database
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/components/database/server.py:29: in <module>
    from lightning.app.core.work import LightningWork
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/core/__init__.py:1: in <module>
    from lightning.app.core.app import LightningApp
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/core/app.py:40: in <module>
    from lightning.app.core.work import LightningWork
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/core/work.py:25: in <module>
    from lightning.app.storage import Path
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/storage/__init__.py:1: in <module>
    from lightning.app.storage.drive import Drive  # noqa: F401
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/storage/drive.py:23: in <module>
    from lightning.app.storage.path import _filesystem, _shared_storage_path, LocalFileSystem
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/lightning/app/storage/path.py:23: in <module>
    from fsspec import AbstractFileSystem
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/fsspec/__init__.py:12: in <module>
    from .compression import available_compressions
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/site-packages/fsspec/compression.py:2: in <module>
    from bz2 import BZ2File
../../../hostedtoolcache/Python/3.7.17/x64/lib/python3.7/bz2.py:19: in <module>
    from _bz2 import BZ2Compressor, BZ2Decompressor
E   ModuleNotFoundError: No module named '_bz2'

DeepONet Example

Is your feature request related to a problem? Please describe.
I'm currently trying to use physics-informed DeepONets, and I'm struggling to get them working.

Describe the solution you'd like
It would be really nice if there were a small example that demonstrates how to use them.

SolverInterface changes

Is your feature request related to a problem? Please describe.
I would like to implement a Solver for the new v0.1 version with multiple neural networks as models (for example a GAN) and multiple optimisers. However, the solver interface takes only one model. If I want to define two models, I need to use the Network class in the GAN solver to do all the checks; the same goes for optimisers. This could lead to high code redundancy.

Describe the solution you'd like
We could make SolverInterface accept a list of models and a list of optimisers. The __init__ would then become:

class SolverInterface(pl.LightningModule, metaclass=ABCMeta):
    """ Solver base class. """
    def __init__(self, models, optimizers, problem, extra_features=None):

In this way, when creating a new solver, we can perform all the checks in the SolverInterface class, avoiding code repetition in each solver.
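
A minimal sketch of the kind of normalisation and checks meant above (illustrative only, not the actual implementation):

from abc import ABCMeta
import torch
import lightning.pytorch as pl

class SolverInterface(pl.LightningModule, metaclass=ABCMeta):
    """Solver base class accepting one or several models and optimisers."""

    def __init__(self, models, optimizers, problem, extra_features=None):
        super().__init__()
        # accept both a single object and a list of objects
        if not isinstance(models, (list, tuple)):
            models = [models]
        if not isinstance(optimizers, (list, tuple)):
            optimizers = [optimizers]
        if len(models) != len(optimizers):
            raise ValueError('Expected one optimizer per model.')
        self._models = torch.nn.ModuleList(models)
        self._optimizers = list(optimizers)
        self.problem = problem
        self.extra_features = extra_features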


Sampling for single point condition

Describe the bug
pinn.span_pts does not work when the condition applies to a single point (for instance, boundary conditions of 1D problems).

To Reproduce
With the pinn defined in examples/first_order_ode.py running:

  1. pinn.span_pts(1, 'grid', locations=['bc'])
  2. pinn.span_pts(1, 'random', locations=['bc'])

Output

  1. returns error in line 69 of span.py, in _1d_sampler:
    result = tmp[0]
    IndexError: list index out of range
  2. returns error in line 88 of span.py, in _Nd_sampler:
    keys, values = map(list, zip(*pairs))
    ValueError: not enough values to unpack (expected 2, got 0)

Plotter does not plot the correct (sum) of all losses

I observed that the Plotter is not accurately plotting the total sum of losses; instead, it only displays the first local loss, i.e. the loss of the first defined condition. This issue is caused by the following code segment.

    loss = np.array(list(pinn.history_loss.values()))
    if loss.ndim != 1:
        loss = loss[:, 0]

As the 'pinn.history_loss.values()' does not contain the cumulative sum of all losses at index 0, a solution could be to include this value or modify the above code as follows:

    loss = np.array(list(pinn.history_loss.values()))
    if loss.ndim != 1:
        loss = np.sum(loss, axis=1)

nil_dirichlet condition variable on input tensor.

The objective
I am trying to accurately model the heat system of a fridge, and I was wondering if it is at all possible to make the boundary condition act conditionally on whether the fridge door is open or closed, since these would of course be different boundary conditions.

class Heat2D(TimeDependentProblem, SpatialProblem):
    
    # Define these yourself
    LENGTH_X = 82
    LENGTH_Y = 70
    DURATION = 2284
    
    output_variables = ['u']
    spatial_domain = Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y]})
    temporal_domain = Span({'t': [0, DURATION]})

    def heat_equation_2D(input_, output_):
        '''1'''
        # c is the thermal diffusivity; it varies for different materials, so look it up
        c = (0.01/torch.pi) ** 0.5
        
        du = grad(output_, input_)
        ddu = grad(du, input_, components=['dudx','dudy'])
        return (
            du.extract(['dudt']) -
            (c**2)*(ddu.extract(['ddudxdx']) + ddu.extract(['ddudydy']))
        )

    def nil_dirichlet_x(input_, output_):
        '''2 and 3'''
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudx']) - u_expected_boundary
    
    def nil_dirichlet_yL(input_, output_):
        '''4'''
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudy']) - u_expected_boundary
    
    def nil_dirichlet_y0(input_, output_):
        '''5'''
        # TODO: make this conditionally on if door is open
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudy']) - u_expected_boundary

    def initial_condition(input_, output_):
        u_expected_initial = torch.sin(torch.pi*input_.extract(['x']))
        return output_.extract(['u']) - u_expected_initial
    

    conditions = {
        'boundx0': Condition(location=Span({'x': 0, 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=nil_dirichlet_x),
        'boundxL': Condition(location=Span({'x': LENGTH_X, 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=nil_dirichlet_x),
        'boundy0': Condition(location=Span({'x': [0, LENGTH_X], 'y': 0, 't': [0, DURATION]}), function=nil_dirichlet_y0),
        'boundyL': Condition(location=Span({'x': [0, LENGTH_X], 'y': LENGTH_Y, 't': [0, DURATION]}), function=nil_dirichlet_yL),
        'initial': Condition(location=Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y], 't': 0}), function=initial_condition),
        'heat_eq': Condition(location=Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=heat_equation_2D),
        'data': Condition(input_points=input_tensor , output_points=output_tensor),
        }

This is the code I am currently running, and nil_dirichlet_y0 is the side with the door. In the input_tensor that I use in the data condition, I have a column Door that is either 0 or 1 (closed and open, respectively) and a column Environment with the temperature outside the fridge.

I was thinking something like this

    def nil_dirichlet_y0(input_, output_):
        '''5'''
        if input_tensor.extract(['Door']) == 0:
            du = grad(output_, input_)
            u_expected_boundary = 0.0
            return du.extract(['dudy']) - u_expected_boundary
        elif input_tensor.extract(['Door']) == 1:
            u_expected_boundary = input_tensor.extract(['Environment'])
            return input_.extract(['u']) - u_expected_boundary

But after trying to train the model I of course get an error, which occurs because I am trying to compare the entire Door tensor at once instead of the values one at a time.

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Cell In[60], line 22
     17 pinn.span_pts(
     18     {'n': 200, 'mode': 'grid', 'variables': 't'},
     19     {'n': 20, 'mode': 'grid', 'variables': ['x', 'y']},
     20     locations=['heat_eq'])
     21 pinn.span_pts(150, 'random', locations=['boundx0', 'boundxL', 'initial', 'boundyL', 'boundy0'])
---> 22 pinn.train(10, 1)

File ~/opt/anaconda3/lib/python3.8/site-packages/pina/pinn.py:252, in PINN.train(self, stop, frequency_print, save_loss, trial)
    250 predicted = self.model(pts)
    251 for function in condition.function:
--> 252     residuals = function(pts, predicted)
    253     local_loss = (
    254         condition.data_weight*self._compute_norm(
    255             residuals))
    256     single_loss.append(local_loss)

Cell In[58], line 38, in Heat2D.nil_dirichlet_y0(input_, output_)
     36 def nil_dirichlet_y0(input_, output_):
     37     '''5'''
---> 38     if input_tensor.extract(['Door']) == 0:
     39         du = grad(output_, input_)
     40         u_expected_boundary = 0.0

File ~/opt/anaconda3/lib/python3.8/site-packages/torch/_tensor.py:1295, in Tensor.__torch_function__(cls, func, types, args, kwargs)
   1292     return NotImplemented
   1294 with _C.DisableTorchFunctionSubclass():
-> 1295     ret = func(*args, **kwargs)
   1296     if func in get_default_nowrap_functions():
   1297         return ret

RuntimeError: Boolean value of Tensor with more than one value is ambiguous

Is there any way to do this?
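
One possible way around the ambiguous-boolean error is to avoid the Python if on a whole tensor and write the residual element-wise with torch.where. A hedged sketch of such a drop-in replacement for the method above, assuming the 'Door' and 'Environment' columns are available in the same input_ tensor passed to the boundary function (which is exactly the open point of the previous issue):

    def nil_dirichlet_y0(input_, output_):
        '''5: closed door -> zero flux, open door -> match the outside temperature'''
        du = grad(output_, input_)
        door = input_.extract(['Door'])  # 0 = closed, 1 = open, one value per point
        closed_residual = du.extract(['dudy']) - 0.0
        open_residual = output_.extract(['u']) - input_.extract(['Environment'])
        # element-wise selection instead of a Python `if` on the whole tensor
        return torch.where(door == 0, closed_residual, open_residual)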

Is it possible to provide an example or tutorial about solving the 1D diffusion equation on a heterogeneous domain? For example, dividing the domain into two sub-domains and assigning a different value of the diffusion coefficient (D) to each?
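
No such tutorial exists yet, but here is a short sketch of how a piecewise diffusion coefficient could be written inside a residual, splitting the 1D domain at an assumed interface x = 0.5; names and values are illustrative and reuse the grad/extract helpers from the other examples on this page:

import torch

D_LEFT, D_RIGHT, X_INTERFACE = 0.1, 1.0, 0.5  # illustrative values

def diffusion_residual(input_, output_):
    """Residual of u_t - D(x) * u_xx = 0 with a piecewise-constant D(x)."""
    du = grad(output_, input_)
    ddu = grad(du, input_, components=['dudx'], d=['x'])
    x = input_.extract(['x'])
    # two sub-domains, two diffusion coefficients
    D = torch.where(x < X_INTERFACE, torch.full_like(x, D_LEFT), torch.full_like(x, D_RIGHT))
    return du.extract(['dudt']) - D * ddu.extract(['ddudxdx'])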


plot_samples errors for Condition(input_points=..., output_points=...) in pinn

Describe the bug
First off, this is not an urgent matter, since after the error the plot is still shown correctly.
The function plotter.plot_samples() results in an error when your problem definition includes a Condition using data.

Problem definition

class Heat2D(TimeDependentProblem, SpatialProblem):
    
    # Define these yourself
    LENGTH_X = 82
    LENGTH_Y = 70
    DURATION = 2284
    
    output_variables = ['u']
    spatial_domain = Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y]})
    temporal_domain = Span({'t': [0, DURATION]})

    def heat_equation_2D(input_, output_):
        '''1'''
        # c is the thermal diffusivity; it varies for different materials, so look it up
        c = (0.01/torch.pi) ** 0.5
        
        du = grad(output_, input_)
        ddu = grad(du, input_, components=['dudx','dudy'])
        return (
            du.extract(['dudt']) -
            (c**2)*(ddu.extract(['ddudxdx']) + ddu.extract(['ddudydy']))
        )

    def nil_dirichlet_x(input_, output_):
        '''2 and 3'''
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudx']) - u_expected_boundary
    
    def nil_dirichlet_yL(input_, output_):
        '''4'''
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudy']) - u_expected_boundary
    
    def nil_dirichlet_y0(input_, output_):
        '''5'''
        # TODO: make this conditionally on if door is open
        du = grad(output_, input_)
        u_expected_boundary = 0.0
        return du.extract(['dudy']) - u_expected_boundary

    def initial_condition(input_, output_):
        '''6'''
        u_expected_initial = torch.sin(torch.pi*input_.extract(['x']))
        return output_.extract(['u']) - u_expected_initial
    

    conditions = {
        'boundx0': Condition(location=Span({'x': 0, 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=nil_dirichlet_x),
        'boundxL': Condition(location=Span({'x': LENGTH_X, 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=nil_dirichlet_x),
        'boundy0': Condition(location=Span({'x': [0, LENGTH_X], 'y': 0, 't': [0, DURATION]}), function=nil_dirichlet_y0),
        'boundyL': Condition(location=Span({'x': [0, LENGTH_X], 'y': LENGTH_Y, 't': [0, DURATION]}), function=nil_dirichlet_yL),
        'initial': Condition(location=Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y], 't': 0}), function=initial_condition),
        'heat_eq': Condition(location=Span({'x': [0, LENGTH_X], 'y': [0, LENGTH_Y], 't': [0, DURATION]}), function=heat_equation_2D),
        'data': Condition(input_points=X_input_tensor , output_points=X_output_tensor),
        }

Pinn:

class myFeature(torch.nn.Module):
    #TODO
    """
    Feature: sin(pi*x)
    """
    def __init__(self, idx):
        super(myFeature, self).__init__()
        self.idx = idx

    def forward(self, x):
        return LabelTensor(torch.sin(torch.pi * x.extract(['x'])), ['sin(x)'])

heat_problem = Heat2D()
model = FeedForward(
    layers=[30, 20, 10, 5],
    output_variables=heat_problem.output_variables,
    input_variables=heat_problem.input_variables,
    func=Softplus,
    extra_features=[myFeature(0)],
)

pinn = PINN(
    heat_problem,
    model,
    lr=0.01,
    error_norm='mse',
    regularizer=0)

pinn.span_pts(
    {'n': 10, 'mode': 'grid', 'variables': 't'},
    {'n': 10, 'mode': 'grid', 'variables': ['x', 'y']},
    locations=['heat_eq'])
pinn.span_pts(20, 'random', locations=['boundx0', 'boundxL', 'initial', 'boundyL', 'boundy0'])
pinn.train(1000, 100)

and

print(pinn.input_pts.keys())

gives

dict_keys(['heat_eq', 'boundx0', 'boundxL', 'initial', 'boundyL', 'boundy0', 'data'])

and when plotting

# plot samples
plotter = Plotter()
plotter.plot_samples(pinn=pinn)

I get

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[58], line 3
      1 # plot samples
      2 plotter = Plotter()
----> 3 plotter.plot_samples(pinn=pinn)

File ~/opt/anaconda3/lib/python3.8/site-packages/pina/plotter.py:46, in Plotter.plot_samples(self, pinn, variables)
     44 ax = fig.add_subplot(projection=proj)
     45 for location in pinn.input_pts:
---> 46     coords = pinn.input_pts[location].extract(variables).T.detach()
     47     if coords.shape[0] == 1:  # 1D samples
     48         ax.plot(coords[0], torch.zeros(coords[0].shape), '.',
     49                 label=location)

AttributeError: 'Condition' object has no attribute 'extract'

followed by a plot showing all the function samples correctly.

Expected behavior
I think the plot_samples method should check whether the location is an input-output (data) condition before trying to plot it.
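
For reference, a minimal sketch of such a guard in the loop shown in the traceback (the hasattr check is just one possible way to skip data conditions):

for location in pinn.input_pts:
    pts = pinn.input_pts[location]
    if not hasattr(pts, 'extract'):
        # data conditions store a Condition object rather than sampled points: skip them
        continue
    coords = pts.extract(variables).T.detach()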

Error training

The objective
I'm trying to train my model on both my problem and my data. But as soon as I run pinn.train it gives an error that the Condition class does not have an attribute 'to'.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[38], line 22
     17 pinn.span_pts(
     18     {'n': 200, 'mode': 'grid', 'variables': 't'},
     19     {'n': 20, 'mode': 'grid', 'variables': 'x'},
     20     locations=['heat_eq'])
     21 pinn.span_pts(150, 'random', locations=['boundx0', 'boundxL', 'initial'])
---> 22 pinn.train(500, 100)

File ~/opt/anaconda3/lib/python3.8/site-packages/pina/pinn.py:246, in PINN.train(self, stop, frequency_print, save_loss, trial)
    244 if hasattr(condition, 'function'):
    245     pts = batch[condition_name]
--> 246     pts = pts.to(dtype=self.dtype, device=self.device)
    247     pts.requires_grad_(True)
    248     pts.retain_grad()

AttributeError: 'Condition' object has no attribute 'to'

Already tried tests
At first I thought it might have something to do with the condition using input_points and output_points, since everything seemed to work just fine before I added that condition a few weeks ago. But after commenting out that condition I still ran into the same problem. I have also tried to re-install PINA, since I thought the library might have been updated, but Jupyter Notebook doesn't allow me to uninstall it:
WARNING: Skipping pina as it is not installed. Note: you may need to restart the kernel to use updated packages.

Weird behaviour of local losses

I have tested tutorial 1 and it has been functioning well so far. I modified the code by employing multiple functions in 'Conditions' on the same domain to observe the training's behavior.

Here are the modified conditions:

(screenshot: the modified conditions)

I had expected that the local loss of 'x0' would be equivalent to the loss of 'x01' and that the local loss of 'D' would be equivalent to 'D1'. Additionally, I wanted to verify that the loss of 'D2' would be equal to the sum of the loss of 'D' and 'D1', but this was not the case. Is this considered normal behavior?

Here is the result I got:

(screenshot: the resulting local losses)

GPU not working lightning

Describe the bug
GPU training with Lightning doesn't work at the moment.

To Reproduce
Run pytest, but set accelerator to True in test_pinn.py.

Expected behavior
Should run.

TypeError in tutorial-1.ipynb

Tried the Jupyter notebook file "tutorial-1.ipynb".

It produced the error "TypeError: unsupported format string passed to LabelTensor.__format__" after the first epoch iteration.


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In [3], line 10
      8 pinn.span_pts(20, 'grid', ['D'])
      9 pinn.span_pts(20, 'grid', ['gamma1', 'gamma2', 'gamma3', 'gamma4'])
---> 10 pinn.train(5000, 100)

File ~\PycharmProjects\pythonProject\venv\lib\site-packages\pina\pinn.py:263, in PINN.train(self, stop, frequency_print, trial)
    261         print('[epoch {:05d}] {:.6e} '.format(self.trained_epoch, sum(losses).item()), end='')
    262         for loss in losses:
--> 263             print('{:.6e} '.format(loss), end='')
    264         print()
    266 return sum(losses).item()

File ~\PycharmProjects\pythonProject\venv\lib\site-packages\torch\_tensor.py:855, in Tensor.__format__(self, format_spec)
    853 def __format__(self, format_spec):
    854     if has_torch_function_unary(self):
--> 855         return handle_torch_function(Tensor.__format__, (self,), self, format_spec)
    856     if self.dim() == 0 and not self.is_meta and type(self) is Tensor:
    857         return self.item().__format__(format_spec)

File ~\PycharmProjects\pythonProject\venv\lib\site-packages\torch\overrides.py:1534, in handle_torch_function(public_api, relevant_args, *args, **kwargs)
   1528     warnings.warn("Defining your `__torch_function__ as a plain method is deprecated and "
   1529                   "will be an error in future, please define it as a classmethod.",
   1530                   DeprecationWarning)
   1532 # Use `public_api` instead of `implementation` so __torch_function__
   1533 # implementations can do equality/identity comparisons.
-> 1534 result = torch_func_method(public_api, types, args, kwargs)
   1536 if result is not NotImplemented:
   1537     return result

File ~\PycharmProjects\pythonProject\venv\lib\site-packages\torch\_tensor.py:1278, in Tensor.__torch_function__(cls, func, types, args, kwargs)
   1275     return NotImplemented
   1277 with _C.DisableTorchFunction():
-> 1278     ret = func(*args, **kwargs)
   1279     if func in get_default_nowrap_functions():
   1280         return ret

File ~\PycharmProjects\pythonProject\venv\lib\site-packages\torch\_tensor.py:858, in Tensor.__format__(self, format_spec)
    856 if self.dim() == 0 and not self.is_meta and type(self) is Tensor:
    857     return self.item().__format__(format_spec)
--> 858 return object.__format__(self, format_spec)

TypeError: unsupported format string passed to LabelTensor.__format__

latin hypercube sampling

Describe the bug
When using pinn.span_pts, the Latin hypercube sampling mode does not work.

To Reproduce
With the pinn defined in tutorial 2.

  1. pinn.span_pts(20, 'lhs', locations=['gamma1'])
  2. pinn.span_pts(20, 'lh', locations=['gamma1'])
  3. pinn.span_pts(20, 'latin', locations=['gamma1'])

Output

  1. UnboundLocalError: local variable 'pts' referenced before assignment
  2. ValueError: mode=lh is not valid.
  3. ValueError: mode=latin is not valid.

Simple Tutorial

Is your feature request related to a problem? Please describe.
I'm new to PINA and I would like a simple tutorial explaining how to define problems and do a simple training.

Describe the solution you'd like
A simple tutorial explaining the basic usage of PINA.

Variables Sampling V0.1

Describe the bug
In v0.1, when the variables kwarg is passed to discretise_domain, it is ignored and all problem.input_variables are used for sampling.

To Reproduce

from pina.problem import SpatialProblem, TimeDependentProblem
from pina.geometry import CartesianDomain, EllipsoidDomain
from pina.equation.equation_factory import FixedValue
from pina import Condition


class FooProblem(SpatialProblem, TimeDependentProblem):
    output_variables = ['u1', 'u2']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    temporal_domain =  CartesianDomain({'t': [0, 1]})
    conditions = {
        'D': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}),
            equation=FixedValue(0.))
    }


foo_problem = FooProblem()
foo_problem.discretise_domain(n=10, mode='grid', variables=['x', 'y'])

print(foo_problem.input_pts['D'].shape)
print(foo_problem.input_pts['D'].labels)

Expected behavior

torch.Size([100, 2])
['x', 'y']

Output

torch.Size([1000, 3])
['x', 'y', 't']

Tutorial in the documentation fails when run

Describe the bug
The first tutorial example fails because the Condition arguments are wrong
https://mathlab.github.io/PINA/_rst/tutorial1/tutorial.html

To Reproduce
The code in https://mathlab.github.io/PINA/_rst/tutorial1/tutorial.html

Expected behavior
The example should run fine

Output

Traceback (most recent call last):
  File "example.py", line 8, in <module>
    class SimpleODE(SpatialProblem):
  File "example.py", line 39, in SimpleODE
    'x0': Condition(Span({'x': 0.}), initial_condition),
  File "/code/pina/condition.py", line 69, in __init__
    raise ValueError('Condition takes only the following keyword arguments: {`input_points`, `output_points`, `location`, `function`, `data_weight`}.')
ValueError: Condition takes only the following keyword arguments: {`input_points`, `output_points`, `location`, `function`, `data_weight`}.

Additional context
openjournals/joss-reviews#5352

Time Dependency

Hello,

I would like to inquire about how to implement an increasing force on one of the boundaries using the TimeDependentProblem class. I am currently facing difficulties in achieving this objective.

Here is the relevant code snippet:

def gamma_right(input_, output_):
    u1 = output_.extract(["u1"])
    return u1 - 0.05 * input_.extract(["t"])   # for t=0 I would have 0 and for t=1 the maximum of 0.05

class Mechanics(SpatialProblem, TimeDependentProblem):
    output_variables = ['u1', 'u2']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    temporal_domain =  CartesianDomain({'t': [0, 1]})
    conditions = {
        'D': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}),
            equation=Equation(equilibrium)),
        'gamma_right': Condition(
                  location=CartesianDomain({'x': 1, 'y': [0, 1], 't': [0, 1]}),
                  equation=Equation(equilibrium))
    }

Let me know if there's anything else I can provide.

Solver and Network save parameters

Is your feature request related to a problem? Please describe.
There should be a feature to easily save a Solver and Network instance.

Describe the solution you'd like
Use torch or lightning built-in functions to save the models/solvers.
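
A minimal sketch with the built-in mechanisms, assuming model is a plain torch module and the solver is trained through a Lightning Trainer:

import torch
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint

# plain torch: persist and reload the network weights
torch.save(model.state_dict(), 'network.pt')
model.load_state_dict(torch.load('network.pt'))

# lightning: checkpoint the whole solver during training
trainer = Trainer(callbacks=[ModelCheckpoint(dirpath='checkpoints/', save_last=True)])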

Loggers Lightning

Is your feature request related to a problem? Please describe.
Loggers and console output are not implemented for v0.1.

Describe the solution you'd like
Use lightning loggers (link1, link2)

Multiple functions for Condition to handle in the same domain.

Although my demonstration involved PDEs, I want to clarify that my point applies to a broader range of scenarios. For instance, restrictions such as the positivity of 'u' over the domain Omega can be effectively addressed with this approach. However, at present, I must define a new condition for each constraint, even if it pertains to the same domain.

Additionally, if I understand correctly, PINA allows for minimizing 'u' given specific input and output data. However, what about handling input and output data that correspond to derivatives of the function 'u'?
In mechanics, there is another approach to consider where multiple functions correspond to each other through a mapping but need to solve different PDEs over the same domain Omega. For instance, the functions "strain," "stress," and "displacement" may need to be determined. Stresses are subject to the PDE 'div(stress)=f,' while strains must satisfy the PDE 'grad(displacement)=strain.' These two functions, strains and stresses, are related through a mapping 'C(strain)=stress,' which may be quite complex.

In this scenario, three conditions would be required over the same domain. It's worth noting that this approach has applications in various fields, including engineering and physics.

Originally posted by @LoveFrootLoops in #90 (comment)

Adding Regularization to the PINN Loss function

Is your feature request related to a problem? Please describe.
The condition module is the only place where you can define your physics-informed loss functions. However, there seems to be no way of adding additional regularizers, like an L1 or L2 regularizer on the model's weights. Such regularization can be very useful to limit overfitting, especially when training with noisy initial conditions.

Describe the solution you'd like
It would be good to have an extension of the condition module (or something similar) to allow regularization.
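
In the meantime, a minimal sketch of an explicit L2 penalty on the weights added to the training loss (the optimizer's built-in weight_decay is the other obvious route):

import torch

def l2_penalty(model, lam=1e-4):
    """Sum of squared weights, scaled by lam."""
    return lam * sum(p.pow(2).sum() for p in model.parameters())

# inside the training loop (illustrative): loss = physics_loss + data_loss + l2_penalty(model)
# or simply: torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)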

Additional context
openjournals/joss-reviews#5352

Label Tensor slicing

Describe the bug
Label Tensor when extracting only one column doesn't work.

To Reproduce

>>> x = torch.rand((10,2))
>>> labels = ['x', 'y']
>>> l_x = LabelTensor(x, labels)
>>> l_x
LabelTensor([[0.2713, 0.7692],
             [0.4961, 0.9652],
             [0.4201, 0.5002],
             [0.6814, 0.6618],
             [0.0545, 0.9571],
             [0.6296, 0.1529],
             [0.9426, 0.7315],
             [0.5458, 0.0875],
             [0.9378, 0.7342],
             [0.2751, 0.8039]])
>>> l_x[:, 0]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/dariocoscia/Desktop/PINA/pina/label_tensor.py", line 201, in __getitem__
    selected_lt.labels = self.labels
  File "/Users/dariocoscia/Desktop/PINA/pina/label_tensor.py", line 85, in labels
    raise ValueError(
ValueError: the tensor has not the same number of columns of the passed labels.

Expected behavior

LabelTensor([[0.2713],
             [0.4961],
             [0.4201],
             [0.6814],
             [0.0545],
             [0.6296],
             [0.9426],
             [0.5458],
             [0.9378],
             [0.2751]])

with ['x'] as labels.
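
As a possible workaround until slicing is fixed, extract should give the same single-column tensor with the label kept, assuming extract itself is unaffected by the bug:

>>> l_x.extract(['x'])  # same first column, with ['x'] as labels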

API documentation

Is your feature request related to a problem? Please describe.
I didn't see any formal API documentation page in the repo. For example, what are the assumptions and constraints of the various API functions?
One thing I'm wondering about in particular: does the API for specifying boundary conditions allow for irregular (i.e., non-rectangular) boundaries? All the examples seem to rely on rectangular boundaries, and I don't see the constraints documented anywhere.

Additional context
openjournals/joss-reviews#5352

EllipsoidDomain is_inside method

Describe the bug
The is_inside method of the EllipsoidDomain class doesn't seem to work.

To Reproduce

ellipsoid1 = EllipsoidDomain({'x': [0, 1], 'y': [0, 1]})
pt_1 = LabelTensor(torch.tensor([[0.5, 0.5]]), ['x', 'y'])
print(ellipsoid1.is_inside(pt_1))

Expected behavior
True

Output
False
