
pylbm_ui's Introduction

pylbm

Badges: Binder · GitHub Actions · Documentation · Gitter chat (https://gitter.im/pylbm/pylbm)

pylbm is an all-in-one package for numerical simulations using Lattice Boltzmann solvers.

This package gives all the tools to describe your lattice Boltzmann scheme in 1D, 2D and 3D problems.

We use the D'Humières formalism to describe the problem. Complex geometries can be built from a set of simple shapes such as circles, spheres, ...

pylbm generates the numerical scheme using Cython, NumPy, or Loo.py from the scheme and the domain given by the user. Pythran and Numba will be available soon. pylbm has MPI support through mpi4py.
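To give an idea of what a moment-based (D'Humières) lattice Boltzmann scheme does, here is a toy pure-Python D1Q2 scheme for the 1D advection equation du/dt + c du/dx = 0. This is an illustrative sketch only, NOT pylbm's API; every name and parameter value below is invented for the demonstration.

```python
# Toy D1Q2 lattice Boltzmann scheme in the moment (D'Humieres) formalism.
import math

n, la, c, s = 200, 1.0, 0.5, 1.5   # cells, lattice velocity, advection speed, relaxation rate
dx = 1.0 / n                        # space step on the periodic domain [0, 1)
dt = dx / la                        # acoustic scaling: dt = dx / lambda

# initial condition: a smooth bump centered at x = 0.5
u0 = [math.exp(-100.0 * (i * dx - 0.5) ** 2) for i in range(n)]
fp = [0.5 * (ui + c * ui / la) for ui in u0]   # right-moving population
fm = [0.5 * (ui - c * ui / la) for ui in u0]   # left-moving population

for _ in range(100):
    # relaxation in moment space: m0 is conserved, m1 relaxes toward c * m0
    for i in range(n):
        m0 = fp[i] + fm[i]
        m1 = la * (fp[i] - fm[i])
        m1 += s * (c * m0 - m1)
        fp[i] = 0.5 * (m0 + m1 / la)
        fm[i] = 0.5 * (m0 - m1 / la)
    # transport: shift each population by one cell (periodic boundaries)
    fp = fp[-1:] + fp[:-1]
    fm = fm[1:] + fm[:1]

u = [fp[i] + fm[i] for i in range(n)]  # the bump has advected by c * 100 * dt = 0.25
```

The conserved moment m0 is preserved exactly by both the relaxation and the transport steps, so the total mass of the solution does not change.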

Installation

You can install pylbm in several ways.

With mamba or conda

mamba install pylbm -c conda-forge
conda install pylbm -c conda-forge

With Pypi

pip install pylbm

or

pip install pylbm --user

From source

You can also clone the project and install the latest version

git clone https://github.com/pylbm/pylbm

To install pylbm from source, we encourage you to create a fresh environment using conda.

conda create -n pylbm_env python

As mentioned at the end of the environment creation, you can activate it from the command line

conda activate pylbm_env

Now, you just have to go into the pylbm directory that you cloned and install the dependencies

conda install --file requirements-dev.txt -c conda-forge

and then, install pylbm

pip install .

For more information about what you can achieve with pylbm, take a look at the documentation

http://pylbm.readthedocs.io

pylbm_ui's People

Contributors: bgraille, gouarin
Forkers: gouarin, mtulow

pylbm_ui's Issues

Put the 'RUN PARAMETRIC STUDY' button in the analysis area

The 'RUN PARAMETRIC STUDY' button is currently in the left panel, below the PS definition widget. Moving it to the analysis area (the right panel, at the top of the PCP if any) would:

  • put the button in a more visible default position (currently it is not visible by default on my screen; I need to scroll)
  • be more consistent with the LBM SIMULATION tab, where the similar 'START' button is already in the analysis area

Issue with the la value

We tried the implosion test case with the default parameter values and the following process:

  1. run the simulation directly to obtain the reference results
  2. change la from 15 (default) to 150
  3. change la back from 150 to 15 (default)
  4. run the simulation again and observe a different result -> very strange

Text description bug on local install

The scheme description is broken when pylbm_ui is used locally. See the screenshot in the attachment for more details. The test was performed using a local installation following the procedure provided in the README.
pdfIssue.pdf

Download results do not work online

All is in the title. Note that a window does open when launched locally.

It would probably be more flexible for the user analysis to allow downloading the whole directory of the current simulation or parametric study -> move the "download" widget from the POST-TREATMENT tab to both the PARAMETRIC STUDY and the LBM SIMULATION tabs.

Improve design space for relaxation parameters

For now, even if we choose the sigma notation and log scale, the min and max of the relaxation parameters are first computed from these inputs and the sampling is done afterwards.

It would be better to sample between the min and max given by the user and then recompute the relaxation parameters.
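A minimal sketch of the proposed order of operations, assuming the usual LBM relation s = 1/(sigma + 1/2) between a relaxation rate s and its Hénon parameter sigma (the bounds and sample count below are illustrative):

```python
# Sample in the user's (log sigma) space first, then convert each sample
# to a relaxation rate, instead of converting the bounds and sampling after.
import random

log_sigma_min, log_sigma_max = -5.0, 0.0   # bounds given by the user
random.seed(0)

samples = []
for _ in range(10):
    log_sigma = random.uniform(log_sigma_min, log_sigma_max)
    sigma = 10.0 ** log_sigma
    s = 1.0 / (sigma + 0.5)       # relaxation rate recomputed per sample
    samples.append(s)
```

Sampling in the user's space keeps the design uniform in log sigma, which is the quantity the user actually bounded; every resulting rate automatically lies in the stable interval (0, 2).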

Improve the figure in POST_TREATMENT

The figure could be improved with the following:

  • add a legend (needed for readability when multiple data sets share the same plot)
  • automatic configuration of the figure: adapt the axis names, the figure title, and the legend text according to the selected data (see the Cenaero prototype as an example). Such small features tend to improve user productivity (less time configuring the plot -> more time to analyze)

Changing the simulation name leads to a freeze

The following sequence freezes the simulation interface:

  1. define and run an LBM simulation
  2. click START again and select NO in the popup window asking whether to replace the files
  3. change the simulation name
  4. click START again -> freeze

Field output request

In LBM SIMULATION, the 'Field output request' widget proposes only the mass ('all' is available, but it also yields only the mass), even for the Euler model...

Issues with the run_simulation.py script

Different errors are obtained depending on the directory from which the script is executed:

from the pylbm_ui/Outputs directory, try the command: python ../../scripts/run_simulation.py ../simu_0/simu_config.json
leads to the following error:
Traceback (most recent call last):
File "../../scripts/run_simulation.py", line 10, in
from pylbm_ui.simulation import simulation
ModuleNotFoundError: No module named 'pylbm_ui'

from the pylbm_ui/scripts directory, try the command: python run_simulation.py ../Outputs/simu_0/simu_config.json
leads to the following error:
Traceback (most recent call last):
File "run_simulation.py", line 10, in
from pylbm_ui.simulation import simulation
File "../pylbm_ui/simulation.py", line 21, in
from .widgets.pylbmwidget import out
File "../pylbm_ui/widgets/__init__.py", line 11, in
from .simulation import SimulationWidget
File "../pylbm_ui/widgets/simulation.py", line 18, in
from ..simulation import simulation, Plot
ImportError: cannot import name 'simulation' from partially initialized module 'pylbm_ui.simulation' (most likely due to a circular import) (../pylbm_ui/simulation.py)
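A possible workaround for the first error (a sketch, not the project's official fix): have run_simulation.py put the repository root on sys.path before importing pylbm_ui, so the script works from any directory. The path below is a hypothetical stand-in for what `__file__` would resolve to inside the real script:

```python
import sys
from pathlib import Path

# In run_simulation.py itself this would be Path(__file__); here we use an
# explicit hypothetical path to demonstrate the idea.
script_path = Path("pylbm_ui/scripts/run_simulation.py")
repo_root = script_path.resolve().parent.parent   # scripts/ -> repository root
if str(repo_root) not in sys.path:
    sys.path.insert(0, str(repo_root))
```

This only addresses the ModuleNotFoundError; the second traceback shows a circular import between pylbm_ui.simulation and pylbm_ui.widgets, which would need a separate fix (e.g. a deferred import).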

The reference results are not available for post treatment

Accuracy assessment requires comparing the LBM fields with a reference (when available, as for the Toro cases). It requires to:

  • allow the user to save the reference field during the computations (at the same times as the LBM fields?)
  • make the reference fields available in the post-treatment widget.

Names of the responses too long / wrong

The names of the responses are too long, especially when used 'as is' in the PCP, which becomes totally unreadable with more than 4 axes. Moreover, they have to correspond to the names used in the LBMHYPE final report for coherence.

The simplest fix is to use the proposed ResponsesWidget class in the "fixPCP" branch.

In practice:

  • 'log of error avg on mass' -> 'errAvg_Mass' (same for all fields)

  • 'log of error std on mass' -> 'errStd_Mass' (same for all fields)

  • 'log of error on mass' -> 'errEnd_Mass' (same for all fields), not used in the report but always interesting to have

  • 'log of relative error on mass' -> 'errRel_Mass' (same for all fields), not used in the report but always interesting to have

  • 'sigma for s_rho' -> 'Sig_rho'

  • 'log of sigma for s_rho' -> 'LogSig_rho'

  • 'diff for s_rho' -> 'Diff_rho'

  • 'log of diff for s_rho' -> 'LogDiff_rho'

• 'diff with dx=1 for s_rho' -> 'DiffOdx_rho'

  • 'log of diff with dx=1 for s_rho' -> 'LogDiffOdx_rho'
    NOTE: the expression 'log of diff with dx=1 for s_rho' is confusing, since diff := sigma dx**2/dt = sigma la dx; using the first or the second expression with dx = 1 leads to different results! The real targeted response (associated with the onset of spurious oscillations) is diff/dx, called 'DiffOdx' (for 'diff over dx') in the reports.
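The ambiguity in the note above can be checked numerically (the values below are arbitrary, chosen only for illustration):

```python
# diff := sigma * dx**2 / dt = sigma * la * dx under acoustic scaling dt = dx / la.
# Substituting dx = 1 into the first or the second expression gives different
# numbers, while diff / dx ('DiffOdx') is unambiguous.
sigma, la, dx = 0.1, 10.0, 0.01
dt = dx / la

diff = sigma * dx**2 / dt             # = sigma * la * dx
diff_dx1_first = sigma * 1.0**2 / dt  # first expression with dx = 1
diff_dx1_second = sigma * la * 1.0    # second expression with dx = 1
diff_over_dx = diff / dx              # the 'DiffOdx' response: sigma * la
```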

AttributeError: 'PosixPath' object has no attribute 'startswith'

OS: Ubuntu 20.04

Installed pylbm_ui as per the instructions on the GitHub page.
Installation OK.

voila --debug voila.ipynb
or
voila --debug /media/l1nux/SAN240/conda_envs/envs/pylbm_ui/voila.ipynb

Stops with an AttributeError.
Traceback output below:

Thanks for any help to run "voila voila.ipynb"


KeyError Traceback (most recent call last)
/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/lib/python3.8/pkgutil.py in get_importer(path_item)
414 try:
--> 415 importer = sys.path_importer_cache[path_item]
416 except KeyError:

KeyError: PosixPath('/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/schema')

During handling of the above exception, another exception occurred:

AttributeError Traceback (most recent call last)
in
----> 1 from pylbm_ui.voila_main import main

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/pylbm_ui/__init__.py in
8 from . import simulation
9 from . import responses
---> 10 from . import widgets

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/pylbm_ui/widgets/__init__.py in
11 from .stability import StabilityWidget
12 from .simulation import SimulationWidget
---> 13 from .parametric_study import ParametricStudyWidget
14 from .post_treatment import PostTreatmentWidget
15 from .pylbmwidget import out

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/pylbm_ui/widgets/parametric_study.py in
22
23 from .debug import debug, debug_func
---> 24 from .design_space import DesignWidget, DesignItem
25 from .dialog_path import DialogPath
26 from .discretization import dx_validity

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/pylbm_ui/widgets/design_space.py in
14 from traitlets import Unicode, Float, List, Bool
15
---> 16 from schema.utils import SchemeVelocity, RelaxationParameterFinal
17
18 from .debug import debug

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/schema/__init__.py in
8 from .utils import define_cases
9
---> 10 cases = define_cases(__file__, __name__)

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/schema/utils.py in define_cases(filename, modulename)
42 gbl = globals()
43 package_dir = Path(filename).resolve().parent
---> 44 for _, module_name, ispkg in iter_modules([package_dir]):
45 if ispkg:
46 module = f"{modulename}.{module_name}"

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/lib/python3.8/pkgutil.py in iter_modules(path, prefix)
127
128 yielded = {}
--> 129 for i in importers:
130 for name, ispkg in iter_importer_modules(i, prefix):
131 if name not in yielded:

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/lib/python3.8/pkgutil.py in get_importer(path_item)
417 for path_hook in sys.path_hooks:
418 try:
--> 419 importer = path_hook(path_item)
420 sys.path_importer_cache.setdefault(path_item, importer)
421 break

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/lib/python3.8/importlib/_bootstrap_external.py in path_hook_for_FileFinder(path)

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/lib/python3.8/importlib/_bootstrap_external.py in __init__(self, path, *loader_details)

/media/l1nux/SAN240/conda_envs/envs/pylbm_ui/lib/python3.8/importlib/_bootstrap_external.py in _path_isabs(path)

AttributeError: 'PosixPath' object has no attribute 'startswith'
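A plausible fix, inferred from the traceback (an assumption, not a confirmed patch): on Python 3.8, pkgutil.iter_modules ultimately applies string operations to the path entries, so define_cases in schema/utils.py should pass str(package_dir) instead of the PosixPath itself:

```python
from pathlib import Path
from pkgutil import iter_modules

# stand-in for the package directory computed in define_cases (here, the
# current directory, for demonstration only)
package_dir = Path(".").resolve()
# passing str(package_dir) instead of the Path object avoids the
# "'PosixPath' object has no attribute 'startswith'" failure on Python 3.8
modules = [name for _, name, ispkg in iter_modules([str(package_dir)])]
```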

Param study: add relaxation parameters

In the popup window to add a relaxation parameter, the second widget is also called 'relaxation parameter'. Could you please rename it 'relaxation rates', which is clearer.

Linear stability tool is unstable

The linear stability tool gives different results for the same states and scheme parameters during a test campaign.
Using Toro1 and D1Q333_0:

  1. running the stability analysis as is leads to all representative states being stable, as expected
  2. changing lambda to 8 leads to instability for the Star 2 state, as expected too
  3. changing lambda back to 10, as in the first trial, now reports all the states as linearly unstable!

first trial with default values:
Screenshot from 2021-06-18 11-23-12

second trial with modified lambda=8:
Screenshot from 2021-06-18 11-23-27

third trial, back to default lambda value = 10:
Screenshot from 2021-06-18 11-23-35

Change addvisc to 0.25 in D1Q333_0

The current value at line 156 of the D1Q333 file is addvisc = 0.5. Please change it to 0.25 for consistency with the tests performed in the LBMHYPE reports.

Note that it would be nice to expose such internal scheme parameters in the interface to let the user:

  1. know that they exist and may affect the results
  2. play with the parameter value

but for the moment, please just change the hard-coded value.

Field output request: last step not saved

Tests performed using the implosion case:

  • add a field output request with Fields=All; When=Frequency; 'When save the field?'=1 -> the fields are saved at step 0 but not at the last step as expected
  • add a field output request with Fields=All; When=list of steps; 'When save the field?'=Number of time steps (from the discretization widget) -> nothing is saved
  • add a field output request with Fields=All; When=list of steps; 'When save the field?'=Number of time steps - 1 (from the discretization widget) -> results are saved
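All three observations are consistent with the saving loop iterating over steps 0 .. nt-1 and never reaching the final step nt. The following is a hypothetical reconstruction of that logic, for illustration only (the real widget code may differ):

```python
nt, freq = 100, 1   # number of time steps and saving frequency (arbitrary values)

# suspected behaviour: the loop stops before step nt
saved = [step for step in range(nt) if step % freq == 0]

# expected behaviour would include the final step as well
expected = [step for step in range(nt + 1) if step % freq == 0]
```

With this reconstruction, requesting step nt in a list of steps saves nothing (nt is never reached), while nt - 1 works, matching the report.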

response average and std errors are wrong

A PS using the Sod case + D1Q333 (the corrected version with la_ instead of la), dx = 0.01, and parameters la = [2:100] and logSig = [-5:0] leads to unrealistic values of the average and std errors. Normally, the average and std errors evolve smoothly with logDiff, which is not the case here.

To track the error, the value of errStd_mass was replaced by len(self.error) in response.py. Using startTime = duration*0.92 and dx = 0.01, errStd_mass = len(self.error) must be twice the lattice velocity ("la"). The following figure shows that it is not the case: the values of errStd_mass = len(self.error) are uncorrelated with "la", highlighting an issue in the saving of the self.error list.

[figure: newplot]

response 'plot XXX' in h5 format instead of png image

The current 'plot XXX' response creates a .png output that cannot be used in the POST TREATMENT tab.
Could you please save the requested fields in .h5 format for further analysis/comparison of the simulations created during the parametric study using the POST TREATMENT tab? (That was a major goal of the POST TREATMENT tool in the prototype.)

dx value issue

We tried to define an implosion test case with 400 cells in both directions, corresponding to a space step of 0.00075, but the automatic correction of the space step value does not allow for that value.
The best it offers is 0.0006993006993006993, leading to 430 cells.

Improve data selection in POST-TREATMENT

The current data selection procedure is not easy to use with a large amount of data on disk. A filtering approach is more user-friendly in such cases (see the LBMHYPE prototype).

Moreover, the selection tables are shown in the main window while the plot itself sits below, often off screen. It would be more convenient to reserve the main window for the plot and put the filtering tools in the left part of the window.

Parametric Study (PS) functionality does not work properly

Several tests show strange results from the PS functionality. Some of them are solved in the 'fixPCP' branch, but obviously wrong results are still observed. As an example, a PS is performed for the Toro 1 case + D1Q333 scheme (addVisc = 0.25) with design space la [1,100] and relaxation rates SRT + sigma + log [-5,0]. The PS performed with the LBMHYPE prototype predicts that:

  • all points with la < ~10 are linearly stable; with pylbm_ui, many points in that region are computed as linearly unstable.
  • if la < ~10 and logSigma > 3, the LBM simulations are stable, while many unstable simulations are observed in this region with pylbm_ui.
    Note that running one of these falsely unstable simulations with the LBM simulation tab in pylbm_ui leads to the expected stable simulation. This is quite a severe issue, since the current PS chain cannot be trusted.

Besides this issue, performing the tests presented above highlights strong limitations of the current pylbm_ui PS functionality:

  1. The reference simulation of a PS is created 'on the fly' from the other application tabs (scheme, test case, simulation): one cannot know exactly which reference simulation is used for the PS, especially if the other tabs change during the PS setup. -> It would be far more convenient to load a preconfigured LBM simulation file, typically a .json previously created by the LBM simulation tab.
  2. The setup of the PS is not saved: one cannot rerun a previous PS or look at the details of the current PS setup. -> Save the design space, responses, and sampling input parameters in a .json file, and allow reloading that file together with the reference simulation .json file so that the whole study can easily be rerun in a couple of 'clicks'.
  3. The generated database is not saved: one cannot analyse a preexisting PS database using the PCP tool. The whole PS needs to be redefined (see the two previous points) and the samples re-evaluated to plot the PCP, which is quite a huge waste of time. -> Save the database in a .json file (the one used to save the PS definition is perfect) and allow reloading that file to plot the PCP.

Note that, in order to ease the coupling with the Cenaero software Minamo, it is proposed to use the Minamo format for the PS and database .json files. Such coupling further requires an efficient standalone LBM simulation script able to read the simulation .json file, including field output and response requests -> see the pylbm_ui issue about run_simulation.py.
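A sketch of what points 2 and 3 could look like in practice. All field names below are invented for illustration; the actual Minamo .json format may differ:

```python
import json

# save the PS definition (point 2) together with its generated database (point 3)
ps = {
    "reference_simulation": "simu_config.json",        # point 1: a preconfigured file
    "design_space": {"la": [1, 100], "log_sigma": [-5, 0]},
    "responses": ["errAvg_Mass", "errStd_Mass"],
    "database": [],                                    # one entry per evaluated sample
}
with open("ps_setup.json", "w") as f:
    json.dump(ps, f, indent=2)

# reloading the file restores the whole study, ready to rerun or to plot the PCP
with open("ps_setup.json") as f:
    reloaded = json.load(f)
```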

Code generator useless in Parametric Studies

The 'Code generator' option of the Parametric Study tab is useless, since the code generator is already defined in the simulation tab.
Could you remove this option from the Parametric Study tab and use only the code generator defined in the simulation tab?
