stfc / psyclone

Domain-specific compiler and code transformation system for Finite Difference/Volume/Element Earth-system models in Fortran

License: BSD 3-Clause "New" or "Revised" License

Languages: Python 69.42%, Fortran 27.74%, Shell 0.07%, Makefile 1.31%, Jupyter Notebook 0.31%, Jinja 1.13%, C 0.01%, BitBake 0.02%

Topics: python, fortran, compiler, finite-elements, finite-difference, finite-volume, optimization, high-performance-computing, parallel-computing, hacktoberfest

psyclone's People

Contributors

adamvoysey, aidanchalk, andrewcoughtrie, arporter, bhfock, christophermaynard, dennissergeev, drewsilcock, hiker, julienremy, kinow, lonelycat124, matthewrmshin, mcjamieson, mo-lottieturner, nmnobre, oakleybrunt, pelson, rupertford, schreiberx, scottwales, sergisiso, svalat, teranivy

psyclone's Issues

Add support for logging

PSyclone currently has no mechanism for issuing messages other than when an exception is raised. However, there are many cases where we might want to generate a warning to the user but allow execution of PSyclone to continue.

In this ticket we will add an interface to a logging mechanism that will allow us to generate warning/debug messages.
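
A minimal sketch of what such an interface might look like, using Python's standard logging module (the helper name and the handler configuration are assumptions, not decisions made in this issue):

    import logging

    # A single, shared logger for all of PSyclone.
    LOG = logging.getLogger("psyclone")

    def warn(message):
        """Issue a warning to the user without halting PSyclone."""
        LOG.warning(message)

    # Example: configure output, emit a warning, then carry on.
    logging.basicConfig(level=logging.DEBUG)
    warn("kernel meta-data uses a deprecated access descriptor")
    LOG.debug("continuing with code generation")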

Failing tests on master branch

Running py.test in src/psyclone/tests on master branch results in 3 failing tests in dynamo0p3_test.py:

  • test_kernel_stub_usage
  • test_kernel_stub_gen_cmd_line
  • test_vn_generation
All failures are related to missing files, e.g. E OSError: [Errno 2] No such file or directory. The full error report is attached (Master_test_errors.txt).

Crash when printing Invoke class

A minor issue: while adding some debug prints to study some code, I caused a crash when printing an instance of psygen::Invoke().
The reason is that __str__ is:

    def __str__(self):
        return self._name+"("+str(self.unique_args)+")"

And self.unique_args is not defined. While I have a test case and fix ready for this, I realised that self.unique_args is never used or set anywhere in PSyclone - it looks like a leftover. Unfortunately I am not sure what should be used instead. Maybe just return self._name from __str__() (but even then it needs a minor fix for the case where Invoke(None, None, None) is used, since self._name is then still undefined ... perhaps set self._name to "Undefined" in this special case?).
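
For reference, a minimal sketch of that suggestion (whether "Undefined" is the right fallback is exactly the open question above):

    class Invoke(object):
        # ... rest of the class unchanged ...

        def __str__(self):
            # self.unique_args is never set anywhere, so drop it and just
            # report the invoke's name, guarding against the
            # Invoke(None, None, None) case in which _name is undefined.
            name = getattr(self, "_name", None)
            return name if name is not None else "Undefined"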

PSYKE: PSyclone Kernel extractor

Build PSy-layer support such that, when the KernelDumpTrans transformation is applied to a Kern object, the generated code calls proflib_io to write out all the scalars and arrays necessary to call the kernel. It should moreover generate driver code which reads the dump back in with proflib_io and calls the kernel. A usage sketch follows.
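
A usage sketch under this issue's assumptions: KernelDumpTrans does not exist yet, kernels() stands in for however the Kern nodes are obtained from a schedule, and the transformation is assumed to follow the existing apply() convention:

    from psyclone.parse import parse
    from psyclone.psyGen import PSyFactory

    _, invoke_info = parse("algorithm.x90", api="dynamo0.3")
    psy = PSyFactory("dynamo0.3").create(invoke_info)
    schedule = psy.invokes.invoke_list[0].schedule

    dump_trans = KernelDumpTrans()   # hypothetical, to be added by this issue
    for kern in schedule.kernels():  # placeholder accessor for Kern objects
        # After apply(), the generated PSy layer will call proflib_io to
        # dump every scalar and array the kernel needs.
        schedule, _ = dump_trans.apply(kern)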

Check the initialisation of pointers in generated code

At the moment PSyclone initializes pointers using => null in the PSy layer. However, this will only set the pointers to null the first time the routine is called. There was some talk that adding => null gives the variable the SAVE attribute; I'd like to verify that. (The Fortran standard does say that explicit initialisation of a local variable implies the SAVE attribute.)

Anyway, there may be cases where we want the variable to be null but it is not, due to the parent routine being called more than once. If this is the case then we might want to replace the => null initialisation with an executable nullify(var) statement.

Built-ins do not like arithmetic operations on scalars

Not a major issue, but I noticed that

call invoke( scale_field (alpha/beta, x) )

throws an exception (see below). The obvious workaround is this:

alpha_divided_by_beta = alpha/beta
call invoke( scale_field (alpha_divided_by_beta, x) )

This is the exception that is thrown:

psyclone -api dynamo0.3 -l -d kernel \
	            -opsy ../../build/dynamo/psy/psy_cma_test_mod.f90 \
	            -oalg  ../../build/dynamo/algorithm/cma_test_mod.f90 \
	            algorithm/cma_test_mod.x90
Error, unexpected exception, please report to the authors:
Description ...
'BinaryOperator' object has no attribute 'name'
Type ...
<type 'exceptions.AttributeError'>
Stacktrace ...
  File "/home/n/em459/scratch/library//psyclone_cma_support//bin/psyclone", line 198, in main
    distributed_memory=args.dist_mem)
  File "/home/n/em459/scratch/library//psyclone_cma_support//bin/psyclone", line 89, in generate
    line_length=line_length)
  File "/home/n/em459/scratch/library/psyclone_cma_support/psyclone/parse.py", line 927, in parse
    variableName = a.name
make[2]: *** [../../build/dynamo/psy/psy_cma_test_mod.f90] Error 1
make[2]: Leaving directory `/beegfs/scratch/user/n/em459/svn_workspace/lfric/r9601_additional_cma_tests/src/dynamo'
make[1]: *** [applications] Error 2
make[1]: Leaving directory `/beegfs/scratch/user/n/em459/svn_workspace/lfric/r9601_additional_cma_tests/src/dynamo'
make: *** [build] Error 2

Modify PSyclone contributions/install script

The way the installation of PSyclone works now is (script contributions/install):

  1. Copy everything from unpacked PSyclone-<version>/src to <installroot>/psyclone directory except tests and generator.py,
  2. Copy PSyclone-<version>/src/generator.py to <installroot>/bin/psyclone,
  3. Change <installroot>/bin/psyclone permissions to "-rwxr-xr-x".

This works well for compiling the LFRic code, but not for running PSyclone itself, which complains about a missing generator.py.

The proposed modifications are:

  1. Copy everything from unpacked PSyclone-<version>/src to <installroot>/psyclone directory except tests,

  2. Make relative symbolic link from <installroot>/bin/psyclone to <installroot>/psyclone/generator.py,

  3. Change <installroot>/psyclone/generator.py permissions to "-rwxr-xr-x".

Manually creating the Linux symlink works well with our current version of PSyclone (1.4.1) for both LFRic and PSyclone code generation and for PSyclone's py.test suite.
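
For reference, a relative symlink equivalent to step 2 can be created like this (a sketch; the install root path is a placeholder):

    import os

    install_root = "/path/to/installroot"  # placeholder install location
    target = os.path.join(install_root, "psyclone", "generator.py")
    link = os.path.join(install_root, "bin", "psyclone")

    # Create a *relative* link so the installed tree can be relocated.
    os.symlink(os.path.relpath(target, os.path.dirname(link)), link)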

The modified install script is attached as install.txt, due to GitHub attachment rules. It was tested in the Met Office Linux environment with Python 2.6.6 and 2.7.6 and produces the correct directory structure. PSyclone works normally with it.

Code simplification for testing the grid-index offsets in gocean1p0

PSyclone only supports one type of grid offset (see https://github.com/stfc/PSyclone/blob/master/src/psyclone/gocean1p0.py#L111), which is tested in the GOInvokes constructor.

I found that code rather confusing, since index_offsets is a list that (as far as I can tell) will be filled with identical offset values, which are then looped over in line 126 (https://github.com/stfc/PSyclone/blob/master/src/psyclone/gocean1p0.py#L126). My suggestion would be to make index_offset a string (so only one value is stored). I will provide a link to a pull request ... once I find the right VM with this patch ;)

Build generated code when testing

At the moment we do not build any code when performing PSyclone tests. Therefore we do not know whether the generated code will compile or not; we just trust this by inspection. Building it will help us catch mistakes before shipping new versions to users. In this ticket we will look at adding the building of code as part of our testing system, i.e. within py.test, along the lines sketched below. This was previously 469 on SRS.
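
One possible shape for such a check, as a py.test helper (a sketch: the compiler choice and file layout are assumptions):

    import os
    import shutil
    import subprocess
    import tempfile

    def compiles(fortran_source, compiler="gfortran"):
        """Return True if the given generated Fortran source compiles."""
        tmpdir = tempfile.mkdtemp()
        try:
            src = os.path.join(tmpdir, "psy.f90")
            with open(src, "w") as ffile:
                ffile.write(fortran_source)
            return subprocess.call(
                [compiler, "-c", src,
                 "-o", os.path.join(tmpdir, "psy.o")]) == 0
        finally:
            shutil.rmtree(tmpdir)

    def test_psy_layer_compiles():
        # 'generated_code' would come from PSyclone in a real test.
        assert compiles("module psy_mod\nend module psy_mod\n")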

Release 1.3.3 of PSyclone

Tom has requested a new release to take advantage of the issues fixed in fparser that are now available in PSyclone.

Support gh_shape=gh_evaluator

Currently PSyclone only supports gh_shape = gh_quadrature_xyoz. In this issue we will add support for
gh_shape = gh_evaluator. This was originally ticket 942 on the SRS.

Coded version.py rather than a generated one

At the moment we generate a version.py file after we have run setup.py. This then acts as a central place for the documentation etc. to pull the correct version from.

I would prefer there to be a pre-existing version.py file which is read by setup.py, as well as by other parts of the distribution, as we would then not need to run setup.py before anything else. This issue shows up when trying to build the documentation.

Andy has pointed out that there is a bit of a chicken-and-egg problem here, as setup.py will not be able to see version.py until the distribution is installed, which is done by setup.py.

However, he also said that you could play with the Python path within setup.py to point to the correct place (the relative location is known). I would prefer to do this.
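
An alternative sketch: rather than manipulating the Python path, setup.py could read the version string straight out of the static file (the attribute name and file location below are assumptions):

    # In setup.py: extract the version from a hand-written version.py
    # without importing the (possibly not-yet-installed) package.
    import os
    import re

    def get_version():
        here = os.path.dirname(os.path.abspath(__file__))
        # Assumed location and attribute name of the static version file.
        path = os.path.join(here, "src", "psyclone", "version.py")
        with open(path) as vfile:
            text = vfile.read()
        return re.search(r"__VERSION__\s*=\s*['\"]([^'\"]+)['\"]",
                         text).group(1)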

Support for operator boundary conditions

Boundary conditions are applied to fields through calls to enforce_bc_kernel that are either written explicitly by the algorithm writer or applied automatically after calls to matrix_vector_kernel for the W1 and W2 function spaces.

It has been decided to include the boundary conditions directly inside operators through the use of an enforce_operator_bc_kernel that modifies an operator type. LFRic ticket https://code.metoffice.gov.uk/trac/lfric/ticket/999 implements this new kernel and this issue is to replace the PSyKAl-lite call with PSyclone support.

Additionally, with the boundary conditions placed in the appropriate operators there will no longer be any need for the automatic generation of enforce_bc calls after matrix_vector calls, so this can be removed.

Complete the dependence analysis

Completing the dependence analysis in PSyclone will allow various transformations and optimisations to be implemented safely, for example the re-ordering of statements in the PSy layer, including the movement of halo-exchange calls.

Support for CMA meta-data

CMA operators require an extension to PSyclone to generate calls in the PSy layer into the operator infrastructure. New metadata to describe this has been added. The associated LFRic ticket is 536.

This ticket will add support for parsing the new metadata.

Determine and check an argument's function space

Now that dependence analysis has been implemented (well, it is currently pull request #20) we can use this to try to determine the function space of an argument in a kernel. For example:

kern1(a)
kern2(a)

kern1's first argument is specified as being on w3 but kern2's argument is specified as being on any_space. Therefore we can infer that kern2's first argument is, in this particular case, on w3. We already have a couple of xfailing tests for this sort of analysis.

This sort of logic can be used to check for correctness as well: if we take the above example again, but this time kern2's argument is specified as being on w0, then we know that the user has made a mistake and we can raise an exception. A sketch of both behaviours follows.
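
A minimal sketch of both the inference and the consistency check (the data structures are invented for illustration):

    def infer_spaces(calls):
        """calls: list of (kernel_name, [(arg_name, declared_space), ...])."""
        known = {}  # argument name -> concrete function space
        for kernel, args in calls:
            for var, space in args:
                if space.startswith("any_space"):
                    if var in known:
                        print("%s: argument '%s' inferred to be on %s"
                              % (kernel, var, known[var]))
                elif var in known and known[var] != space:
                    # e.g. declared w0 here but already established as w3.
                    raise ValueError(
                        "'%s' passed to %s as %s but previously on %s"
                        % (var, kernel, space, known[var]))
                else:
                    known[var] = space

    # The example above: kern1 declares w3, kern2 declares any_space.
    infer_spaces([("kern1", [("a", "w3")]),
                  ("kern2", [("a", "any_space_1")])])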

Add runtime checks for the depth of a stencil when passed via the algorithm layer

PSyclone supports a stencil depth passed in from the algorithm layer. If the value of this depth is not a literal then its size is unknown at compile time. At the moment no checks are performed on the size of such variables, so the user could make a mistake. It has been agreed that the depth should be a positive integer, i.e. > 0. PSyclone should add code in the PSy layer that checks the size of such variables at run time, along the lines sketched below.

I don't think there is a need for a maximum-value check as there are (or will be) run-time checks that ensure a stencil does not access data beyond a field's halo.
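
A sketch of how the PSy layer might emit such a check (the Fortran produced and the log_event error routine are assumptions):

    def stencil_depth_check(depth_var):
        """Return Fortran (as text) that aborts at run time if the
        algorithm-supplied stencil depth is not > 0."""
        return ("IF ({0} < 1) THEN\n"
                "  CALL log_event('stencil depth {0} must be > 0', "
                "LOG_LEVEL_ERROR)\n"
                "END IF\n").format(depth_var)

    print(stencil_depth_check("extent"))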

modify dynamo example3 to use builtins and multiple calls per invoke

PSyclone's dynamo example 3 was taken from LFRic before built-ins were supported. It therefore calls the PSyKAl-lite (manually written) built-ins. Now that PSyclone supports built-ins we should make use of them so that people can see how PSyclone should be used. In some cases the manually written built-ins are called one after another. As PSyclone allows multiple kernel/built-in calls to be contained within a single invoke, this should also be done where possible.

Obtain list of supported built-ins for Dynamo 0.3 by parsing Fortran meta-data

Currently the list of built-ins supported for the Dynamo 0.3 API is specified in the dynamo0p3 module. However, this information is also available from the Fortran file containing the meta-data for the built-in operations. In this ticket we will remove this duplication by generating the list of supported built-ins from the parsed content of the meta-data file. Was previously 588 on SRS but no work was done there.
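
A rough sketch of the idea (a simple regex stands in for the real meta-data parse; the type-definition convention is an assumption):

    import re

    def supported_builtins(metadata_path):
        """Collect built-in names from the Fortran meta-data file by
        picking out the names of its type definitions."""
        with open(metadata_path) as mfile:
            text = mfile.read()
        return re.findall(r"^\s*type.*::\s*(\w+)", text,
                          flags=re.MULTILINE | re.IGNORECASE)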

Allow function spaces to know what cell to loop to

Currently PSyclone (and PSyKAl-lite) obtains the horizontal looping limit from the mesh, depending on whether the field being written to is continuous (or unknown) or discontinuous:

If the field is continuous or unknown (such as for general kernels) then mesh%get_last_halo_cell(1) is used.
If the field is discontinuous then mesh%get_last_edge_cell() is used.

It is proposed to put this information into the function spaces so that PSyclone just needs to generate

do cell = 1, output_field%vspace%get_last_cell() ... end do

where vspace%get_last_cell() returns either mesh%get_last_halo_cell(1) or mesh%get_last_edge_cell().

The corresponding LFRic ticket is
https://code.metoffice.gov.uk/trac/lfric/ticket/985#ticket

Add options for run-time checks in generated PSy layer

Currently the code generated by PSyclone contains no correctness checking.
In order to aid code development and debugging, this ticket will add the option of having PSyclone generate checks for correctness. One example is to check that the supplied fields are on the correct function spaces. This was previously 543 on SRS but no work was done there.

Methods of ArgOrdering should be 'private'

With the exception of generate, all of the methods of ArgOrdering must only be called from within the class (from generate, in fact). They should therefore have _ prepended to their names, as sketched below.
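
A sketch of the convention (the two method names shown are illustrative):

    class ArgOrdering(object):
        """Only generate() remains public; the steps it drives are
        renamed with a leading underscore."""

        def generate(self):
            self._cell_position()   # previously the public cell_position()
            self._mesh_height()     # previously the public mesh_height()

        def _cell_position(self):
            pass

        def _mesh_height(self):
            pass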

remove f2pygen and use the fgenerator package instead

The f2pygen.py code has been placed in its own GitHub repository, where it is now maintained under the name fgenerator. We will therefore remove f2pygen.py and its tests, and change the documentation appropriately so that PSyclone now uses fgenerator.

Upgrade to fparser 0.0.3 for CLASS bug-fix

fparser 0.0.2 has a bug whereby the Fortran class(my_class) :: a gets turned into class(kind=my_class) a, which is incorrect due to the addition of kind=. This is being fixed under stfc/fparser#27. Once a new release of fparser has been made we need to upgrade PSyclone to use it (i.e. change the documentation).

Update built-ins support

Iva's work in Ticket 907 on the SRS has revealed a lot of cases where would-be built-ins are called with
one or more duplicated arguments, e.g.:

call invoke( axpby(a, fld1, b, fld1, fld3) )

Currently PSyclone rejects such kernel calls because they break Fortran rules on argument aliasing.
Tom has suggested that PSyclone automatically identify such cases and 'call' an alternative, suitably specialised implementation of the built-in. This then prevents the need for a proliferation of built-ins according to the various ways in which their arguments may or may not be duplicated.

We will investigate, and if possible, implement this functionality in this Issue. Although I'm 'assigning' this to both Iva and Rupert, that is really FYI as I propose to do it. (Unless anyone else feels keen :-)
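
A sketch of the duplicate-argument detection (the naming scheme for the specialised variants is invented here for illustration):

    def specialise_builtin(name, args):
        """If any argument is duplicated, return the name of an assumed
        specialised variant of the built-in; otherwise the original name."""
        seen = {}
        aliased = []
        for pos, arg in enumerate(args):
            if arg in seen:
                aliased.append((seen[arg], pos))
            else:
                seen[arg] = pos
        if aliased:
            suffix = "_".join("%d_%d" % pair for pair in aliased)
            return "%s_aliased_%s" % (name, suffix)
        return name

    # The example above: fld1 appears as both the 2nd and 4th argument.
    print(specialise_builtin("axpby", ["a", "fld1", "b", "fld1", "fld3"]))
    # -> axpby_aliased_1_3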

W2any support in psyclone

LFRic ticket #975 (https://code.metoffice.gov.uk/trac/lfric/ticket/975) implements mass matrices in the horizontal (W2h) and vertical (W2v) components of the W2 space. However, a new kernel had to be created as the concept of W2any does not exist in LFRic or PSyclone.

It would be useful to support kernels written for W2any function spaces, where W2any could be W2, W2v or W2h. In the PSy layer these would all appear the same.

LFRic ticket #807 (https://code.metoffice.gov.uk/trac/lfric/ticket/807) will implement the corresponding metadata support in LFRic.

Introduce GH_READWRITE access (for discontinuous quantities)

This was previously ticket 831 on SRS.

During discussions with Tom on SRS 576 it has become clear that there is a requirement to have kernels that accumulate a result into a W3 (i.e. discontinuous) field. Since we use GH_INC to mean that multiple cells contribute to a shared quantity (which then requires some sort of reduction operation in parallel), we will re-introduce the GH_READWRITE access descriptor. This can then be used to describe operations that accumulate into a discontinuous quantity and thus do not require a reduction when performed in parallel.

The same situation arises when applying boundary conditions to an operator (#22) - the operator needs to be INTENT(inout) because it is both read and written. However, since the operator is discontinuous this access can safely be performed in parallel.

Add runtime checks for accesses beyond the maximum depth of a halo

It is possible to specify a stencil extent, or redundant computation in the halo, or a combination of the two, that causes the code to try to access data that is beyond the maximum halo depth.

The halo exchange code has an internal check for this issue which covers all reads to fields. However, writes to fields are not covered. Therefore PSyclone should add appropriate checking code (re-using or copying the code implemented in the halo_exchange routine) that ensures that write accesses do not go beyond the maximum halo depth.

The simplest approach would be to add the check before any loop that performs redundant computation. A more sophisticated solution would attempt to minimise these checks in any multi-loop PSy-layer subroutines. The simplest option is probably what we should start with. However, looking longer term, we may need the latter solution if we are working towards a two-tiered PSy-layer (to help support PGI + OpenACC as well as, potentially, improving performance on other compilers).

[LFRic] Kernel meta data for mesh information

LFRic ticket https://code.metoffice.gov.uk/trac/lfric/ticket/983 introduced adjacency information into the mesh that is required for certain boundary-integral kernels.

Currently this information is extracted from the mesh in the PSy layer and passed into the kernels.
At the moment these kernels are called through PSyKAl-lite, partly because the metadata for passing the adjacency information in does not exist.

It seems that this is part of a larger issue: how to pass mesh information into kernels?

It is proposed to introduce a new metadata type, mesh_data_type, that would specify what mesh data is required in the kernel.

This ticket is to implement the PSyclone changes to support the proposed changes in the related LFRic ticket https://code.metoffice.gov.uk/trac/lfric/ticket/986.

Correct documentation to refer to github rather than svn

The getting-going and system-specific set-up sections of the documentation both still talk about getting PSyclone from the Met Office SRS repository using Subversion. This needs to be updated to talk about GitHub instead.

create new 1.4.0 release

CMA support has been added to PSyclone and the Met Office are waiting for this functionality so we are making a new release. As CMA support extends the PSyclone API and is quite a significant addition we are moving from 1.3.* to 1.4.0.

new release 1.4.1

1.4.0 has a bug in it that stops its use by the Met Office. The latest commit to master happens to remove the code that causes the bug in 1.4.0. Therefore we are making a quick 1.4.1 release that should work correctly.

Support halo exchanges with complex sizes

Since halo exchange placement has been improved (in #50), it is now possible for more than one read to depend on the same halo exchange.

For example

halo_exchange(a,depth)
k1(a)
k2(a)

i.e. if two or more kernels read field a then only one halo exchange is required.

However the depth of this halo exchange now depends on how the field a is accessed in more than one place and must be the maximum of these.

If both k1 and k2 iterate over a continuous function space then they both require the halo to be of size 1. If k1 has been specified as performing redundant computation to depth 3 and k2 is left unchanged then the halo should be of size 3. PSyclone currently works correctly in these cases (at least it does in #50) as the values can be converted to integers (they are currently held as strings) and their sizes compared (and the maximum taken).

However, if redundant computation has been specified as being the maximum possible value, e.g. for k1, then the value stored here is currently mesh%get_last_halo_depth(). Clearly a simple max of this and the value 1 will fail.

Further, if a stencil is specified with a fixed literal value (e.g. stencil depth 2) when iterating over a continuous function space then the depth is stored as 2+1. This could be evaluated in theory but currently is not.

Also, if a stencil is specified with a variable value (passed from the algorithm layer) when iterating over a continuous function space then the depth is stored as extent+1 where extent is the variable name. This value can't be evaluated. The halo exchange logic in this case can get complicated as we don't know the size of extent at compile time.

For example, if field a in kernel k1 needed to be of size extent_1 and field a in kernel k2 needed to be of size extent_2 and both loops iterated over a continuous function space then the size of the halo exchange is max(extent_1+1, extent_2+1).

To be able to create the required code the halo_exchange class needs to be modified. At the moment a size is kept as a string and a stencil is determined from the field associated with the halo_exchange.

I think that what needs to happen is that the halo_exchange class keeps known sizes as integers, the max-halo requirement as a boolean and an extent as a string. If there is more than one extent then a list of these, and any associated integer values, must be kept. This should allow code generation to create appropriate code; a sketch follows.
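
A sketch of such a representation (class and method names are invented for illustration):

    class HaloDepth(object):
        """Depth of a halo exchange: the maximum of a known integer part,
        an optional whole-halo flag and any symbolic extents."""

        def __init__(self):
            self.literal = 0        # largest known integer depth
            self.max_depth = False  # True if the full halo depth is needed
            self.extents = []       # symbolic terms, e.g. "extent_1+1"

        def add(self, literal=0, max_depth=False, extent=None):
            self.literal = max(self.literal, literal)
            self.max_depth = self.max_depth or max_depth
            if extent:
                self.extents.append(extent)

        def fortran(self):
            if self.max_depth:
                # The whole halo subsumes every other requirement.
                return "mesh%get_last_halo_depth()"
            terms = ([str(self.literal)] if self.literal else []) + self.extents
            return terms[0] if len(terms) == 1 else "max(%s)" % ", ".join(terms)

    # The example above: two variable-sized stencil reads of field 'a'.
    depth = HaloDepth()
    depth.add(extent="extent_1+1")
    depth.add(extent="extent_2+1")
    print(depth.fortran())  # -> max(extent_1+1, extent_2+1)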

Redundant computation transformation

By default PSyclone will generate loop bounds which correctly compute locally owned elements.

A potential optimisation would be to redundantly compute values in the halos. The potential advantage of this is that we might avoid (halo exchange) communication if we do so.

The Met Office have identified support for redundant computation as a priority.

This issue will discuss and then implement a redundant-computation transformation for LFRic.

Inter-grid kernels

Inter-grid kernels for the multigrid solver and physics/dynamics require an extension to the PSyKAl API.
This is documented in MORS Ticket 1037. The rules are repeated here, but the example is in a branch of LFRic.

  • An inter-grid kernel (and its metadata) must have at least one field on both the fine and coarse meshes. Specifying all fields as coarse or all as fine is forbidden.
  • All fields must have a halo depth of at least 2. This could be a run-time check.
  • The horizontal looping is always done over the coarse mesh, requiring only depth-1 halo looping.
  • Colouring of the coarse mesh for shared-memory parallelism will be correct.
  • Fields on different meshes must always live on different function spaces.
  • For DG fields (W3 and W_theta), metadata for any_space_dg_n has been introduced so that DG looping can be employed if the fine and the coarse fields are both (for example) W3. This will allow optimisation of the horizontal looping to be developed later.
  • Only fields are allowed in inter-grid kernels. This can be re-visited later.
  • Looping over the coarse cells implies that the whole_dof_map (not just a column slice) for the fine dof_map must be passed in, along with the accompanying ncell_f scalar.

Prototype new API for NEMO

Discussions with NEMO developers have highlighted the fact that they really do not want to change their source code. In this issue we will investigate the feasibility (or otherwise) of adapting PSyclone to process 'raw' Fortran conforming to the NEMO coding standards.

builtins do not allow arguments with loop indexing

It has been reported that arguments to built-ins which have an index, e.g. a(k), cause the parser to fail. This should not be the case as kernel arguments support this feature, so it sounds like there is a bug somewhere. This issue is raised to establish what the problem is and to fix it.

Documentation for Using OpenSUSE

I've started to document the installation process on OpenSUSE, which has a few issues (e.g. the default pip only installs for python3). The current state is in: https://github.com/hiker/PSyclone/blob/opensuse-doc/doc/system_specific_setup.rst

My main suggestion, which needs some discussion:
quite a few packages that the current docs install using apt-get can be installed using pip. The advantage is that there is then more in common between the distros, and imho a bit of restructuring might be useful in order to reduce duplicated documentation (and therefore reduce work when things change). My proposed (not yet done in my branch) structure would be:

  1. User Installation
    • Ubuntu specific (apt-get) installations - mostly pip I think
    • OpenSUSE specific (zypper) installations (pip again)
    • Common (pip) installation
      • Including details about --user and how to uninstall
  2. Developer installation
    • Ubuntu specific (apt-get) installations (latex)
    • OpenSUSE specific (zypper) installations (latex)
    • Common (pip) installation (no more --user and uninstall instructions)

Am happy to follow any other approach, e.g.:

  1. Distro specific installation
    • Ubuntu
      • User
      • Dev
    • OpenSUSE
      • User
      • Dev
  2. User (pip-based) installation, common to all distros
    • including --user and how to uninstall
  3. Developer (pip-based) installation, common to all distros

Or we can leave it the way it is, but I would say that around 90% of the content is identical between the distros (just apt-get vs zypper, and a few package names are different).

And could you let me know when would be a good time to do this? It appears that you are currently working on the installation process and the docs, so let me know when you are finished before I start on it.

Feedback welcome!

Re-structure for pypi

In order to distribute PSyclone using the Python Package Index we will need to re-structure things.

No Halo exchange when reading from W3 field

PSyclone is unnecessarily generating halo exchanges when reading from a W3 (discontinuous) field, as shown in the example below from advection_alg_mod.x90.
The first halo exchange, on the mf field, is correct. The first loop writes to the W3 field r_rho and loops to the last edge cell, not the last halo cell. The code then calls set_dirty() on the W3 field, which is correct because the field is dirty. Then comes the second loop, which reads from and writes to a W3 field; however, no halo exchange is required because the loop only goes to the last edge cell, given that we are reading from a W3 field.

  IF (mf_proxy%is_dirty(depth=1)) THEN
    CALL mf_proxy%halo_exchange(depth=1)
  END IF
  !
  DO cell=1,mesh%get_last_edge_cell()
    !
    CALL dg_matrix_vector_code(cell, nlayers, r_rho_proxy%data, mf_proxy%data, &
        div_proxy%ncell_3d, div_proxy%local_stencil, &
        ndf_w3, undf_w3, map_w3(:,cell), ndf_any_space_1_mf, &
        undf_any_space_1_mf, map_any_space_1_mf(:,cell))
  END DO
  !
  ! Set halos dirty for fields modified in the above loop
  !
  CALL r_rho_proxy%set_dirty()
  !
  IF (r_rho_proxy%is_dirty(depth=1)) THEN
    CALL r_rho_proxy%halo_exchange(depth=1)
  END IF
  !
  DO cell=1,mesh%get_last_edge_cell()
    !
    CALL dg_matrix_vector_code(cell, nlayers, rrho_prediction_proxy%data, &
        r_rho_proxy%data, mm_w3_inv_proxy%ncell_3d, &
        mm_w3_inv_proxy%local_stencil, ndf_w3, undf_w3, map_w3(:,cell), &
        ndf_any_space_1_r_rho, undf_any_space_1_r_rho, &
        map_any_space_1_r_rho(:,cell))
  END DO
  !
  ! Set halos dirty for fields modified in the above loop
  !
  CALL rrho_prediction_proxy%set_dirty()
  !
