Comments (4)
Is it possible to apply other, more complicated kernels here?
No, only kernels that act on the boundaries and do not depend on the inner-point computations of the primary compute statement (update_T here). These are requirements to guarantee correct results. If we allowed for a second full computation kernel, then we would get wrong results if, for example, its boundary point computations depended on the inner computations of the first kernel. The issue is that the order of execution within @hide_communication is not what you would expect if you had multiple full kernels.
That said, the rules could be weakened a bit in certain cases. However, it is key that the rules are very easy to understand, as otherwise users might get wrong results without even noticing...
Now, in your case, you should be able to do all that you describe (using the @parallel_async construct), but outside of a @hide_communication statement. This might be just fine if you can still apply @hide_communication on one of your heavier kernels.
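For reference, the pattern being discussed can be sketched as follows. This is a hedged illustration, not code from this thread: the kernel `update_T!`, the array names, and the boundary-width tuple are assumptions modeled on the package's 2-D diffusion miniapp. Inside @hide_communication, the single full-domain @parallel call is split internally into boundary and inner regions so the halo exchange can overlap the inner-point computation.

```julia
using ParallelStencil
using ParallelStencil.FiniteDifferences2D
@init_parallel_stencil(Threads, Float64, 2)
# using ImplicitGlobalGrid  # provides update_halo! in a multi-process setup

# Hypothetical full-domain diffusion step (the "primary compute statement").
@parallel function update_T!(T2, T, Ci, lam, dt, _dx, _dy)
    @inn(T2) = @inn(T) + dt*(lam*@inn(Ci)*(@d2_xi(T)*_dx^2 + @d2_yi(T)*_dy^2))
    return
end

# One full kernel plus the halo update: the only pattern the construct permits.
# (16, 4) is an illustrative boundary width, tuned per application in practice.
@hide_communication (16, 4) begin
    @parallel update_T!(T2, T, Ci, lam, dt, _dx, _dy)
    update_halo!(T2)
end
```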
from parallelstencil.jl.
Thanks for the quick reply, @omlins
These are requirements to guarantee correct results. If we allowed for a second full computation kernel, then we would get wrong results if, for example, its boundary point computations depended on the inner computations of the first kernel.
What if we were to (1) assume all kernel domains are disjoint (such that only boundaries of kernels talk) and (2) explicitly apply a halo around each kernel domain, even if that kernel was on the same device? This way we don't have to worry about "clobbering" each other's boundary points.
If we did this, we could place all "kernels" within the same halo update block, right?
The @parallel_async option looks promising. I'm just wondering whether we can leverage the best of both worlds (as asynchronous halo updates seem to be what helps ensure consistent weak scaling).
What if we were to (1) assume all kernel domains are disjoint (such that only boundaries of kernels talk) and (2) explicitly apply a halo around each kernel domain, even if that kernel was on the same device? This way we don't have to worry about "clobbering" each other's boundary points.
If we did this, we could place all "kernels" within the same halo update block, right?
Sorry, I'm not sure I get what you mean.
The @parallel_async option looks promising. I'm just wondering whether we can leverage the best of both worlds (as asynchronous halo updates seem to be what helps ensure consistent weak scaling).
If you split your domain yourself in multiple kernels, then you might also be able to split off the boundary regions yourself explicitly; if so, you can yourself overlap the communication with computation using @parallel_async to launch the kernels.
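A hedged sketch of that manual overlap (the kernel name, array names, and ranges are hypothetical; the sub-range corners are computed more than once here, which is harmless for a pointwise update like this one). The idea: compute the boundary slabs first, launch the inner-point computation asynchronously, do the halo exchange while it runs, then synchronize.

```julia
nx, ny = size(T)

# 1. Compute the boundary slabs synchronously; they are needed for the exchange.
@parallel (1:2, 1:ny)     update_T!(T2, T, Ci, lam, dt, _dx, _dy)  # left
@parallel (nx-1:nx, 1:ny) update_T!(T2, T, Ci, lam, dt, _dx, _dy)  # right
@parallel (1:nx, 1:2)     update_T!(T2, T, Ci, lam, dt, _dx, _dy)  # bottom
@parallel (1:nx, ny-1:ny) update_T!(T2, T, Ci, lam, dt, _dx, _dy)  # top

# 2. Launch the inner-point computation asynchronously (returns immediately
#    on GPU backends).
@parallel_async (3:nx-2, 3:ny-2) update_T!(T2, T, Ci, lam, dt, _dx, _dy)

# 3. While the inner kernel runs, exchange the halos (boundaries are ready).
update_halo!(T2)

# 4. Wait for the asynchronous inner computation before the next time step.
@synchronize
```

Whether steps 2 and 3 truly overlap depends on the backend (on GPUs this requires the halo exchange not to serialize on the same stream as the kernel), so this should be treated as a starting point, not a guaranteed-overlapping implementation.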
If you split your domain yourself in multiple kernels, then you might also be able to split off the boundary regions yourself explicitly; if so, you can yourself overlap the communication with computation using @parallel_async to launch the kernels.
Got it, thanks 🙂