I think this was introduced in #8 and added for HOB in #14. My apologies for not explicitly stating the breaking changes. So let me first explain the choices. They only affect boundary condition objects + HOB by the way.
The idea for #8 was to make it easier (less verbose) to create boundary condition objects. Instead of supplying every data set as a separate function argument to `rmf_create_*`, you would only supply `rmf_list` or `rmf_2d_array` objects + `dis`, and let the function create everything else, like `np` and `itmp`, in the background. If you look at MODFLOW boundary condition file structures, they can be condensed into three parts (this is heavily opinionated): (1) dimensions, (2) data, either some sort of "table" (discrete boundary conditions & HOB) or 2d matrices (continuous bc), and (3) information on the stress period timing (not for HOB). So that's what these objects look like in RMODFLOW: lists with `dimensions`, `data` (or `recharge`/`evt`) and `kper` elements.
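As a sketch of that three-part layout (the field values and class vector here are illustrative only; the actual `rmf_create_*` output may differ):

```r
# Hypothetical condensed boundary condition object for the RIV package:
# a plain list with (1) dimensions, (2) data as a data.frame and
# (3) stress period timing. Values and class names are made up.
riv <- structure(
  list(
    dimensions = list(np = 0, itmp = 2),  # derived in the background, not user-supplied
    data = data.frame(
      k = c(1, 1), i = c(5, 6), j = c(3, 3),  # cell indices
      stage = c(10.2, 10.1),
      conductance = c(50, 50),
      rbot = c(9.0, 8.9)
    ),
    kper = data.frame(kper = 1:2, active = c(TRUE, TRUE))
  ),
  class = c("riv", "rmf_package")
)

str(riv, max.level = 1)  # three top-level elements: dimensions, data, kper
```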
For `dimensions`, I get that it's less user-friendly than e.g. `hob$nh`. For `data` however, I think it's more user-friendly to have a `data.frame` than to have individual vectors of e.g. `riv$stage`, `riv$conductance` etc. If you want to look at your river data, pulling the entire data frame at once makes more sense to me than pulling a bunch of individual vectors. Furthermore, the MODFLOW data structure for RIV, GHB, etc. is literally a data frame-like structure. Similarly for HOB, although there can be list-columns in there for multi-layer observations. For RCH and EVT, the data are lists with `rmf_2d_array`s. Regarding the stress period timing, `kper` gives you an overview of which tables/arrays are active during each stress period. I think this is more informative than `object$itmp`, although it does make a strong abstraction from the initial MODFLOW file structure. In general however, I don't think users are interested in the `itmp` values, but rather in "which arrays are active in stress period x".
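The difference between the two views can be sketched in base R (the `kper` column names are illustrative, not necessarily RMODFLOW's actual ones):

```r
# Hypothetical kper table: one row per stress period, flagging whether
# the boundary data are active. The raw MODFLOW itmp values stay hidden.
kper <- data.frame(kper = 1:4, active = c(TRUE, TRUE, FALSE, TRUE))

# "Which stress periods have active river cells?" - read off directly:
active_periods <- kper$kper[kper$active]  # 1 2 4

# The equivalent raw itmp convention is less direct: a positive value
# gives the number of entries, 0 means none, and a negative value means
# "reuse the previous stress period's data".
itmp <- c(2L, -1L, 0L, 2L)
```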
Removing the `dimensions` element makes sense to me and shouldn't be too difficult. Removing `data` and/or `kper`, on the other hand, will require a rewrite of the internals. I think for the latter two it's easier to make S3 `$.object` operators to obtain and compute certain object parameters on the fly, if we want to go down that road (see `$.crs` in `sf` for a good example).
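That accessor idea can be sketched with a plain S3 method for `$`; the class name `rmf_bc` and the stored fields here are hypothetical, not RMODFLOW's actual internals:

```r
# A minimal object that only stores the data; itmp is not stored at all.
riv <- structure(
  list(data = data.frame(stage = c(10.2, 10.1), conductance = c(50, 50))),
  class = "rmf_bc"
)

# S3 `$` method in the spirit of sf's `$.crs`: compute some parameters
# on the fly instead of keeping them as list elements.
`$.rmf_bc` <- function(x, name) {
  if (name == "itmp") {
    nrow(unclass(x)$data)  # derive itmp from the data on demand
  } else {
    unclass(x)[[name]]     # fall back to the stored elements
  }
}

riv$itmp  # computed on the fly: 2
riv$data  # stored data.frame, returned as-is
```

Note that the method unclasses `x` before subsetting, so the stored elements are reached without re-dispatching to `$.rmf_bc`.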
from rmodflow.
Ok, makes sense. Definitely the data frame part (just wondering now what the status of `data.frame` vs `tibble` use is currently, but I might look into that later; note I would prefer using the latter throughout the package, mainly because of the more user-friendly printing). I need to look deeper into the `kper` thing as well at some point, but that's ok for now. I will remove the `dimensions` layer in a feature branch then asap, and check whether `rmf_optimize()` and `rmf_analyze()` are fixed by that.
> (just wondering now what the status of `data.frame` vs `tibble` use is currently
Currently, everything except `rmf_as_tibble` uses `data.frame`. This should be straightforward to adjust.
> I will remove the `dimensions` layer in a feature branch then asap

If you want me to do that, just let me know, since I introduced it in the first place. More than happy to help.
If you have time for that, that would be great. Just put it in a data-frame-to-tibble feature branch or so. Should be straightforward to merge.
Oh, I see I misinterpreted your suggestion above. Well ... I had to fix things asap to continue working on one of the projects. The tibble thing can be done later, I suppose. It's more a UI thing than anything else, I think, so maybe I'll try to do that when introducing `rui` throughout the package (unless you feel like taking action already, of course). Will close this one now.