mobiodiv / mobr
Tools for analyzing changes in diversity across scales
License: Other
- quadprog
- `rarefaction`: document so it is clear what the different methods accomplish.
- `subset.mob_in`: a function that aids in pulling out a reduced number of rows of the `mob_in` object. This can be really useful when wanting to work only with certain treatment combinations.
- `plot_rarefy`: move that code into `plot.mob_out`, which currently calls that function.
- `log` argument in `plot.mob_out`, as in `mob_out`. `plot.mob_out` sometimes log transforms the x axis depending on `mob_out$log_scale` and sometimes does not. This needs to be made consistent across the figures. (high priority)
- `rarefaction`: update so it will accept a vector for sample-based rarefaction.
- `mobr` object output from `get_delta_stats`.
- `rarefaction`: note that sample-based rarefaction is not actually used anywhere in the mob methods.
- Tiffany's empirical data uncovered several shortcomings of our function `make_comm_obj`, specifically in how the plot attributes are handled. Specific problems are:
Hi,
It would be nice for the user to have a legend in the richness graphs produced by plot_9_panels(), indicating which color is assigned to trt_group (e.g., red) and which to ref_group (e.g., blue).
Best,
Valentin
Hi @FelixMay - here you go, our wishlist :) Thanks!
- `boxplot.mob_in` and a `vioplot.mob_in` function, but those can be down the road.
- `mob_stats`: we would like two different matrices of stats to be output.
- Use `anova(lm(S ~ trt))$F[1]` as your test statistic for each randomization. Compute the p-value as follows: p = (# of times F observed <= F permuted) / (# of permutations).
- `fields::rdist.earth`
Hey @rueuntal we discussed how dat_plot would be the input for a gradient method and would be identical to the pairwise case, except it will now have additional columns for the continuous variables. Along these continuous variables we have several options for aggregating the rarefaction information.
Use a group variable to identify unique "sites" along the gradient, compute rarefaction for each site, and then place it on the gradient according to that site's mean value along the gradient.
In future versions of the code let's work hard to minimize the number of class-change declarations that we make (e.g., calling as.character()); these seem to be generally unneeded and make the code very cumbersome to work through. A better approach is simply to convert all factors to strings at the beginning of the analysis, and then any downstream calls to group variables will be character strings, which generally avoids side-effects. This isn't so much a single change to the code as a change to our coding behavior. I'm done complaining now, thank you ;)
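As a concrete illustration of the convert-once approach (a sketch only; `factors_to_strings` is a hypothetical helper name, not part of mobr):

```r
# Hypothetical helper: convert every factor column of a plot-attribute
# data.frame to character once, up front, so downstream grouping code
# never needs scattered as.character() calls.
factors_to_strings = function(dat) {
    is_fac = vapply(dat, is.factor, logical(1))
    dat[is_fac] = lapply(dat[is_fac], as.character)
    dat
}

plot_attr = data.frame(group = factor(c("invaded", "uninvaded")),
                       x = 1:2)
plot_attr = factors_to_strings(plot_attr)
class(plot_attr$group)  # "character"
```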
Right now it seems that rarefaction() throws a warning when effort > total abundance N and gives back S_obs. We've probably talked about this before - is there a reason why it doesn't return NA instead?
For the new 3-curve (or 4-curve) approach, here is the R code I was able to get working:
comm_group_noagg = list()
for (i in seq_along(group_levels)) {
    # pull out the plots belonging to this group
    comm_group = comm$comm[env_data == group_levels[i], ]
    sp_sums = colSums(comm_group)
    # for each species, scatter its individuals at random across the
    # group's plots; appending 1:nrow() and subtracting 1 ensures every
    # plot appears in the table even when it receives zero individuals
    comm_group_noagg[[i]] = sapply(sp_sums, function(x)
        table(c(sample(1:nrow(comm_group), x, replace = TRUE),
                1:nrow(comm_group))) - 1)
}
The object `comm_group_noagg` is a list with as many elements as groups. Each list element is a site-by-species community matrix with species sums (i.e., colSums) identical to the original data but different rowSums. The abundances for each species are populated randomly across each plot within a treatment.
I think the easiest way to incorporate this into our existing code is to wrap all of get_delta_stats in a for loop that at the beginning either calls this function and redefines comm or does not, depending on whether you want to turn off within-plot aggregation signatures in the sample-based test. I don't think any internal get_delta_stats code would have to be changed; just define that only the sample test is wanted. Some holder variables will also need to be defined and averaged across. Sorry this solution is not more complete.
Allow an argument to get_delta_stats that lets the user supply their own grouping variable rather than simply using the default behavior of using each unique value of the env_var. For example, if the explanatory variable is latitude and some groups of sites have the same latitude, then it would be nice if the user could provide a group_id so that these same-latitude plots would not be mixed in the sample-based analyses.
Hi!
I worked today with a new dataset from Jon and I encountered some plotting errors with the plot.mobr() and plot_9_panels() functions. I used the scripts from branch 4cur.
I got errors because some objects in the plotting functions could not be created, since some object names changed in the structure of the mobr object, which most probably is created with the function get_delta_stats().
I went line by line with some test objects and it seems that in plot.mobr() one should make the following change at line 90:
tests = c('indiv', 'N', 'agg')
change to
tests = c('SAD', 'N', 'agg')
The error was triggered when trying to access the mobr object, for example, like mobr[[type]][[tests[i]]]. And actually the mobr object, created previously with the get_delta_stats() function, has 3 data frames in the list discrete: SAD, N and agg, but no indiv. Therefore, I presumed I could replace indiv with SAD.
Then a similar error happened in the plot_9_panels() function. Here at lines 1200 and 1202 I replaced $ind with $SAD in order to avoid the error:
mobr$discrete$ind[, -1] = lapply(mobr$discrete$ind[, -1], function(x)
as.numeric(as.character(x)))
delta_Sind = mobr$discrete$ind[which(as.character(mobr$discrete$SAD$group) == as.character(trt_group)), ]
I changed to
mobr$discrete$SAD[, -1] = lapply(mobr$discrete$SAD[, -1], function(x)
as.numeric(as.character(x)))
delta_Sind = mobr$discrete$SAD[which(as.character(mobr$discrete$SAD$group) == as.character(trt_group)), ]
Please let me know if my suggestions make sense.
I didn't have time to check the get_delta_stats() function, but I presume another option would be to make modifications there regarding the naming of the objects.
Best,
Valentin
This error was reported to me by @valentinitnelav; his original posting is here.
I simply copied and pasted the text below...
I tested the script with the new patch from yesterday and I get now a different error. The error happens in function SpecAbunAce {SpadeR}.
One can reproduce the error with the following datasets and code:
Datasets – download rda objects OK.rda and OK.env.rda
Code
load(file = "OK.rda")
load(file = "OK.env.rda")
source("C:/MoB/mobr-master/mobr-master/R/mobr.R")
source("C:/MoB/mobr-master/mobr-master/R/mobr_boxplots.R")
OK_comm <- make_comm_obj(OK, OK.env)
OK_stats <- mob_stats(OK_comm, "treatment")
Warning: In this case, it can't estimate the variance of Homogeneous estimation
Error in if ((sum(x == 1)/sum(x[x <= k])) == 1) { :
missing value where TRUE/FALSE needed
The error seems to happen first time at row 9 in the OK object. There is only one species with 21 individuals.
Inside the SpecAbunAce{SpadeR} function the error is given by a division by zero in if ((sum(x == 1)/sum(x[x <= k])) == 1)
Note that the error is triggered inside mob_stats() when calling ChaoSpecies{SpadeR}:
S = ChaoSpecies(comm$comm[i,], datatype = "abundance")
Best,
Valentin
The following code generates an error with a fresh install of our package
library(mobr)
data(inv_comm)
data(inv_plot_attr)
inv_mob_in = make_mob_in(inv_comm, inv_plot_attr)
stats = mob_stats(inv_mob_in)
plot_samples(stats)
Error in eval(expr, envir, enclos) : object 'S' not found
In addition: Warning message:
In (function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE, :
row names were found from a short variable and have been discarded
In some empirical datasets at larger sampling efforts the spatial accumulation curve is a bit unstable and appears jagged. In a simple simulated community of species neatly arranged on a gradient sampled by a transect this jaggedness is even more noticeable. The moving-window nested species-area relationship does not show this pattern of jaggedness. The question is why - is this a bug (I don't think so) or just poor performance of our algo (I think this could be the case). Here is an example of what I'm talking about:
Note that the left panel shows the species (each a different color) and their position along the gradient. I added a little noise into their presences and the jaggedness relaxed a bit, but it is still quite present relative to the SAR. I think this may be due to the fact that the SAR is an average over all possible accumulations of species, whereas our spatial rarefaction curve is an average over a much smaller set that is nested within the set considered by the SAR. Any other ideas @rueuntal and @ngotelli?
hey @rueuntal now that the code is more or less stabilized, let's spend a little bit of time thinking about what we have called things. The specific name I would like to discuss is comm. As I go through the code more I'll probably find other object names to add to this thread.
With respect to comm: we have a function make_comm_obj which takes as its input an argument comm and another argument plot_attr. This is understandably confusing - why would a function that is supposed to make a comm take it as input? In #84 I changed the argument comm to x, which is very generic but follows the convention of the package vegan, which I think is a good authority for how to work with site-by-species matrices. This brings up a question though. Throughout the rest of our package we refer to comm-like objects which are really just site-by-species matrices; should we change these names to x? Is there a better name than x that we can think of? Maybe the simplest solution is to change make_comm_obj to something else, like make_mob_data or something like that.
Let me know what you think.
If the output of get_delta_stats has an attribute called $type, which is either discrete or continuous, is there any value to nesting the data.frames for the SAD, N, and agg effects within the attribute $discrete? For example, here is the current structure of the output of get_delta_stats:
Only the first five rows of any matrices are printed
$type
[1] "discrete"
$log_scale
[1] TRUE
$indiv_rare
sample invaded uninvaded
1 1 1.000000 1.000000
2 2 1.881711 1.607740
3 3 2.670418 2.052937
4 4 3.385225 2.426138
5 5 4.040640 2.763523
6 6 4.647746 3.079981
$sample_rare
group sample_plot impl_S expl_S deltaS_agg
10 invaded 1 6.737519 9.42 2.68248095
19 invaded 2 10.378372 12.10 1.72162788
29 invaded 3 13.595478 14.44 0.84452217
39 invaded 4 16.333422 16.94 0.60657842
49 invaded 5 18.753198 19.14 0.38680190
58 invaded 6 20.729451 20.76 0.03054906
$discrete
$discrete$SAD
group effort_ind deltaS_emp deltaS_null_low deltaS_null_median deltaS_null_high
1 invaded 1 -4.363176e-14 -6.065703e-14 -5.245804e-14 -4.180267e-14
2 invaded 2 2.739708e-01 -6.257883e-02 -7.606259e-03 4.085399e-02
3 invaded 3 6.174812e-01 -1.298121e-01 -1.663329e-02 8.763914e-02
4 invaded 4 9.590875e-01 -1.891738e-01 -2.517006e-02 1.311027e-01
5 invaded 5 1.277117e+00 -2.404404e-01 -3.283497e-02 1.700484e-01
6 invaded 6 1.567765e+00 -2.857248e-01 -3.971128e-02 2.053122e-01
$discrete$N
group effort_sample ddeltaS_emp ddeltaS_null_low ddeltaS_null_median ddeltaS_null_high
1 invaded 1 4.363176e-14 -8.903989e-14 -8.715251e-15 6.558365e-14
2 invaded 2 -5.611687e-01 -2.432205e-01 -9.175297e-02 4.615233e-02
3 invaded 3 -1.190207e+00 -5.589478e-01 -2.540712e-01 2.024506e-02
4 invaded 4 -1.815606e+00 -8.762102e-01 -4.132492e-01 -7.071931e-03
5 invaded 5 -2.415627e+00 -1.172861e+00 -5.484117e-01 -1.436326e-02
6 invaded 6 -2.986400e+00 -1.444538e+00 -6.567581e-01 2.155285e-03
$discrete$agg
group effort_sample ddeltaS_emp ddeltaS_null_low ddeltaS_null_median ddeltaS_null_high
1 invaded 1 7.291876 -1.0816237 0.1018763 1.709876
2 invaded 2 9.290809 -0.4931914 0.5208086 1.650809
3 invaded 3 9.900593 -0.8054069 0.3905931 1.382593
4 invaded 4 9.859997 -0.9420026 0.4599974 1.324997
5 invaded 5 9.756374 -1.0381263 0.3063737 1.199374
6 invaded 6 9.038851 -1.2701486 0.2588514 1.458851
I propose that we get rid of the nesting of the effect-specific data.frames within $discrete, and that we condense this down to a single data.frame, because each effect data.frame has the same number of columns in the same order. Returning one data.frame is almost always preferable to a list of data.frames. We will just have to add a column for effect_type, which will have the values SAD, N, or agg.
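A sketch of the proposed condensing step, assuming the structure printed above (`flatten_discrete` is a hypothetical helper, and the unified column names are my own suggestion, not existing mobr names):

```r
# Hypothetical helper: stack the SAD, N, and agg data.frames from the
# $discrete list into one data.frame with an effect_type column.
flatten_discrete = function(out) {
    do.call(rbind, lapply(names(out$discrete), function(e) {
        df = out$discrete[[e]]
        # the effort and delta columns are named slightly differently
        # per effect (effort_ind vs effort_sample, deltaS_* vs
        # ddeltaS_*), so standardize the names before stacking
        names(df) = c("group", "effort", "emp", "null_low",
                      "null_median", "null_high")
        cbind(effect_type = e, df)
    }))
}
```

With this, the nesting under $discrete disappears and downstream code can simply filter on effect_type.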
Hey Xiao can you push the most recent of your 4curves branch to github thx
The rarefied richness analysis that @FelixMay implemented can fail when a treatment has a strong influence on abundance such that all the plots of a given treatment fall below the 10 individual cutoff (this occurred in the glade ants 2011 analysis). The analysis will return an error like this:
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
contrasts can be applied only to factors with 2 or more levels
which results because any sample with fewer than 10 individuals was dropped, effectively making it a single-treatment analysis. Although this is likely pretty rare, we should add some code to catch this error and then remove the rarefaction analysis from the suite of tests.
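One way to sketch the catch (hedged: `run_rarefied_test` is a stand-in wrapper, not actual mobr code) is to trap the error and return NULL so the suite can drop that one test instead of aborting:

```r
# Stand-in wrapper: run a test function, and on error emit a warning
# and return NULL so the caller can drop the test from the suite.
run_rarefied_test = function(test_fun) {
    tryCatch(test_fun(),
             error = function(e) {
                 warning("rarefied richness test skipped: ",
                         conditionMessage(e))
                 NULL  # NULL marks the test as dropped
             })
}

# example: a test that fails as in the glade ants 2011 case
res = run_rarefied_test(function()
    stop("contrasts can be applied only to factors with 2 or more levels"))
```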
Early in development we discussed the option of allowing presence-absence data, and we built that into the generation of the input object, but we didn't pursue the idea much beyond that. Assuming it is still a priority, I thought here we could brainstorm the changes we need to make for an analysis of binary data.
Brainstorm / To Do List
Here are lists of exceptions and warnings for the analysis. @dmcglinn feel free to expand.
Warnings:
It would be nice if the three boxplot functions can be combined into one, which allows the users to specify whether it's sample, group or betaPIE.
Hey @rueuntal slogging through the code I see the following mapping for the rarefaction calculations when working with species richness (i.e., Hill number where q = 0):
rarefy_individual -> rareNMtests::rarefaction.individual -> vegan::rarefy
rarefy_sample_implicit -> rareNMtests::rarefaction.sample -> vegan::specaccum
Do you think we want to maintain the option of using our framework on other Hill numbers besides richness? rareNMtests gives us the option of running our code on q = 1 (Shannon) and q = 2 (Simpson) indices as well. If you don't think we'll ever use these, then my vote is to directly code the analytical solutions rather than depend on rareNMtests and vegan for these calculations. Their functions lose some speed by carrying out redundant checks and in one case don't allow computing the rarefaction curve for a subsample of all quadrats (I could request this change though).
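For q = 0, the analytical expectation is Hurlbert's (1971) rarefaction formula, which is what vegan::rarefy computes; coding it directly would be only a few lines (a sketch; `rarefy_analytical` is a placeholder name):

```r
# Analytical individual-based expected richness (Hurlbert 1971):
# E[S_n] = sum_i [1 - C(N - N_i, n) / C(N, n)], computed on the log
# scale with lchoose() for numerical stability.
rarefy_analytical = function(abund, n) {
    abund = abund[abund > 0]
    N = sum(abund)
    # P(species i is absent from a random sample of n individuals)
    p_absent = exp(lchoose(N - abund, n) - lchoose(N, n))
    sum(1 - p_absent)
}

rarefy_analytical(c(101, 100, 100), 1)  # 1: one individual is one species
```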
Should we specify a minimal # of plots for each group to carry out the plot-based (spatially-implicit & explicit) rarefactions? It probably doesn't make much sense to rarefy when there are only two or three plots.
https://github.com/MoBiodiv/mobr/blob/master/R/mobr.R#L466 is causing an error
Error in colSums(x) : 'x' must be an array of at least two dimensions
not sure why yet
Hey @rueuntal take a look here:
https://github.com/MoBiodiv/mobr/blob/master/R/mobr.R#L256-L257
where you formulated PIE. I'm curious why the term N / (N - 1) is needed? You calculated the probability of a heterospecific encounter (under an assumption of replacement) and then you multiply it by this term.
Thanks!
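For reference, here is my reading of the two quantities involved (a hedged sketch, not a statement of the code's intent): without the term, the quantity is the probability of an interspecific encounter when drawing two individuals with replacement; multiplying by N / (N - 1) converts it to drawing without replacement, which is Hurlbert's (1971) PIE.

```r
# probability of an interspecific encounter, drawing with replacement
pie_with_repl = function(abund) {
    N = sum(abund)
    1 - sum((abund / N)^2)
}
# Hurlbert's PIE: the N / (N - 1) factor corrects for drawing the
# second individual without replacement
pie_without_repl = function(abund) {
    N = sum(abund)
    N / (N - 1) * pie_with_repl(abund)
}

pie_without_repl(c(1, 1))  # 1: two singletons are always heterospecific
```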
Hi @rueuntal !
I saw that some minor changes took place in the y-axis label names inside the plot.mobr() function.
Line 98 in mobr.R
is now
ylabs = rep('delta-S', 3)
And previously it was
ylabs = c('delta-S', rep('delta-delta-S', 2))
Shouldn't the old line of code be kept? Otherwise it is also not consistent with the output from the plot_9_panels() function. But I'm not familiar with the entire concept, so please correct me if I'm wrong.
Best,
Valentin
If N/aggregation differs - effect due to N/aggregation
If both differ - ???
Need to do sensitivity analysis on these scenarios.
Right now there are lots of
if (type == 'continuous'){}
else {}
in our code. I'm leaning towards breaking them into two separate functions, potentially suffering some repetition for better readability. @dmcglinn what do you think?
@dmcglinn @ngotelli This is kind of a nuance but I'd love to hear what folks think. If we think about the nonspatial curve as being created by randomly shuffling the individuals across plots, then the two curves are only exactly identical when all plots within the treatment have exactly the same abundance (density). The more "uneven" they are in abundance, the more divergence we'd see in the two types of calculations.
Consider the following extreme example - a treatment with three species and two plots, c(100, 100, 100) and c(1, 0, 0). No matter how we reshuffle the individuals, plot 1 would always have three species while plot 2 would always have 1. So the expected S at plot = 1 for the nonspatial curve is 2:
mean(c(rarefaction(c(101, 100, 100), 'indiv', c(1, 300))))
However, calculated from the ind-based curve, the expected abundance for plot = 1 is 150 or 151, and
rarefaction(c(101, 100, 100), 'indiv', 150)
is 3.
Need to get a better feeling for what is the best way (or if it depends) to scale plots to number of individuals in test 2.
Using the analytical rescaling of the rarefaction as introduced in #123 sometimes results in warnings about the input of lgamma being out of range.
I looked at a plot of the lgamma function and it shows a bizarre pattern: it is undefined for certain bands of negative numbers. Based on your derivation, would you ever expect lgamma to be computed on a negative quantity? If not, then I can set up an if statement so it is not evaluated for negative values at all.
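For what it's worth, R's lgamma() returns the log of the absolute value of the gamma function, so it is finite for negative non-integers but hits poles at zero and the negative integers, where it returns Inf with the "value out of range" warning. If negative inputs are never expected, a guard along these lines (a sketch; `safe_lgamma` is a hypothetical name) would avoid evaluating them at all:

```r
# Hypothetical guard: only evaluate lgamma() for strictly positive
# inputs, returning NA elsewhere so the warning is never triggered.
safe_lgamma = function(x) {
    out = rep(NA_real_, length(x))
    ok = x > 0
    out[ok] = lgamma(x[ok])
    out
}

safe_lgamma(c(-1, 0.5, 2))  # NA at the pole, finite values elsewhere
```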
The algorithm we devised to remove spatial within- and between-plot aggregation by shuffling individuals of each species randomly between plots of a given group type is identical to an appropriately rescaled individual-based rarefaction for that same group. This simplifies our code in the function effect_of_N. I will try to submit a modified version of this function soon that relies only on rescaled individual-based rarefaction curves rather than any unnecessary sample-based curves.
One of the arguments we allow users to specify when computing get_delta_stats is called log_scale, which spaces the sampling of individuals evenly along a log scale. I propose that if the user sets this argument to TRUE, the downstream calls to our plotting routines use a log-transformed x-axis when the number of individuals (not the number of samples) is the variable being displayed.
Alternatively and maybe more generally another solution could be for users to just specify in the plotting routine whether they want log transforms on numbers of individuals, samples, or both. We could also provide this option for the S axis but for our visual displays we may want to stick with arithmetic scaling of S because it adds emphasis to the fact that the patterns show strong scale dependence.
Hi @dmcglinn!
I tested the script with the new patch from yesterday and I get now a different error. The error happens in function SpecAbunAce {SpadeR}.
One can reproduce the error with the following datasets and code:
Datasets – download rda objects OK.rda and OK.env.rda
Code
# load objects
load(file = "OK.rda")
load(file = "OK.env.rda")
# source mobr.R and mobr_boxplots.R from local machine
source("C:/MoB/mobr-master/mobr-master/R/mobr.R")
source("C:/MoB/mobr-master/mobr-master/R/mobr_boxplots.R")
# Make community object with mobr
OK_comm <- make_comm_obj(OK, OK.env)
# Run function for summary statistics - getting error in SpecAbunAce {SpadeR}
OK_stats <- mob_stats(OK_comm, "treatment")
Warning: In this case, it can't estimate the variance of Homogeneous estimation
Error in if ((sum(x == 1)/sum(x[x <= k])) == 1) { :
missing value where TRUE/FALSE needed
The error seems to happen first time at row 9 in the OK object. There is only one species with 21 individuals.
Inside the SpecAbunAce{SpadeR} function the error is given by a division by zero in if ((sum(x == 1)/sum(x[x <= k])) == 1)
Note that the error is triggered inside mob_stats() when calling ChaoSpecies{SpadeR}:
S = ChaoSpecies(comm$comm[i,], datatype = "abundance")
Best,
Valentin
It appears that our function rarefaction is miscalculating richness when sample size is 1 (possibly at other sizes as well; I'm not sure yet). Was this function ever compared to hand-worked examples as a test? We should get some explicit tests in place for this function. Here is code to demonstrate the bug:
# create a 3 species community distributed across 4 sites
comm = rbind(c(0, 0, 1),
             c(0, 0, 0),
             c(0, 1, 0),
             c(1, 0, 0))
replicate(4, rarefaction(comm, 'spat', xy_coords = 1:4))
# note that the replicates differ from one another; the correct value
# when n = 1 is 0.75 (= mean(c(1, 0, 1, 1)), the mean of the site
# richness values), but sometimes the function returns 1.
I think this bug likely has something to do with the reordering of the data prior to computing the accumulation curves to try to randomize the pattern by which ties are encountered.
This discussion is related to @dmcglinn 's pull request #18 .
To summarize (for our future selves), our algorithm for the pair-wise comparisons consists of three steps:
1 and 3 are fairly self-explanatory, but there is a decision that we have to make in 2. To subtract the ind-based curve from the sample-based curve, we have to rescale the latter from number of samples to number of individuals. Curves from both groups have to follow the same rescaling factor, otherwise the sample-based curves would no longer match (i.e., they may have the same number of samples but end up with different numbers of individuals after rescaling). Our (mostly Brian's and my) previous decision was to first ask the user to specify whether there is a group with "baseline" density, e.g., the control (the "scaleby" parameter). If left unspecified, the minimum density is used, so that the sample-based curves end at the same point as the individual-based curves.
Another option would be the mean, which I think is what's being implemented in #18 . My guess is that this decision can change the results, but I'm not sure if the null model would naturally take care of that. And it's not clear to me if one decision is better than the others. @dmcglinn what do you think?
This is because of this line, which assumes that group is a factor. We need a more robust system because group could be a character or a numeric class of variable.
Many of R's default functions (e.g., plot and lm) have an argument called subset that allows the user to run the code over a subset of the rows. This would be a nice addition to our code because currently you have to define a new mob_in or go through a lot of hassle doing manual subsets.
We should break the three null model algos out as separate functions rather than leaving them nested within the get_delta_stats function. My reasons: 1) it will be easier to write a function for generating the null results for rarefied S only (not delta S), which will need the same shuffling algos; 2) it makes it easier for other scientists to jump right to the important part (the null algos) and see how they work in a simple, transparent way.
Hey @dmcglinn is that function you wrote to obtain the null expectation of delta S vs N based on randomizing samples still in here? Or did it get renamed to something else?
It would be trivial to add this to our function rarefaction:
pair_dist = fields::rdist.earth(xy_coords)
when the user specifies that the xy_coords are longitude and latitude records. This function computes the great circle distance. Does anyone think this is worthwhile enough to warrant the additional dependence on the fields package?
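If the extra dependency is not wanted, a base-R haversine is only a few lines (a sketch; fields::rdist.earth uses a comparable spherical approximation, and the 6371 km Earth radius here is my assumption):

```r
# Great-circle distance in km between (lon1, lat1) and (lon2, lat2),
# in decimal degrees, via the haversine formula on a 6371 km sphere.
gc_dist = function(lon1, lat1, lon2, lat2, R = 6371) {
    to_rad = pi / 180
    dlat = (lat2 - lat1) * to_rad
    dlon = (lon2 - lon1) * to_rad
    a = sin(dlat / 2)^2 +
        cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
    # pmin() guards against sqrt(a) nudging past 1 by floating error
    2 * R * asin(pmin(1, sqrt(a)))
}
```

Vectorized over its inputs, so a pairwise distance matrix can be built with outer() if needed.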
Currently in the delta analysis all of the plots in a given treatment are treated as part of the same "population", but many times in experimental designs there are blocks that the investigator would like to be able to control for when carrying out the analysis of the treatment effect. This same problem also occurs in analyses of time series, such as in the Portal dataset, in which there are many plots in the same treatment that have each been sampled through time. If one is only interested in temporal (rather than spatial) patterns of species accumulation, then a method is needed to average across the individual quadrat results for both the observed and null deviations.
My feeling is that the way we want to approach this is to estimate the treatment-level curve by averaging the separate block-specific curves, but in practice this may be difficult to implement because some blocks may have small sample sizes (individuals or samples) and because of how our null models are conducted. Let's use this thread to brainstorm how to best implement these analyses.
hey @rueuntal can you explain to me what these lines of code are accomplishing:
https://github.com/MoBiodiv/mobr/blob/master/R/mobr.R#L46-L49
#1 for absence, 0 for presence
sp_bool = as.data.frame(ifelse(sp_ordered[ , 1:ncol(sp_ordered)] == 0, 1, 0))
rich = cumprod(sp_bool)
explic_loop[ , i] = as.numeric(ncol(dat_sp) - 1 - rowSums(rich))
I don't understand what the variable rich represents. Thanks!
It would probably be good for us to keep a record of the different algorithms that we have tried, and whether/how/why they failed. Since the code has been substantially modified in structure (#75), I'll re-test some of our old ideas and summarize the results here. If folks are interested in testing out some new algorithms, the R code for sensitivity analysis is in scripts/mobr_sensitivity.R (requires @FelixMay 's MoBspatial package, and a change of directories to run).
Hi Dan and Xiao
I'm continuing to use the mobr package as it develops... thanks for the great resource.
A couple of (simple) things have popped up in the new version that I want to bring to your attention:
- typo in stack_effects(): the 'label_indv' argument should be 'label_indiv'
- something funky happens with the plot.mob_out function for my data that results in the individual-based rarefaction curves being mislabelled. Specifically, the column order of my groups in mob_out$indiv_rare is: 'sample', 'fished', 'protected'. But as I set trt_group='protected', the code as currently written mixes up the two curves. I suspect, but don't know, that this is going to be a problem for any data sets where the alphabetical order of the grouping variable is not the same as the order of the treatment/reference groups?
Finally, my results sometimes have NAs in the mob_out$N part of the object returned by the get_delta_stats() function. I’m still trying to figure out why…
Cheers,
Shane
Hi!
I have noticed that the column name of the environmental variable passed to the plot_attr argument of the make_comm_obj() function gets lost somehow. This happens when the environmental variable is a vector, a factor, or a data.frame with only one column.
The issue happens at this line:
out$env = data.frame(plot_attr[ , -spat_cols])
Adding this line after the one above fixed the issue for me
colnames(out$env) <- colnames(plot_attr)[-spat_cols]
Best,
Valentin
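Valentin's colnames() fix works; for what it's worth, a one-step alternative is to keep the data.frame class during subsetting with drop = FALSE (the column names and spat_cols values below are made up for illustration):

```r
plot_attr = data.frame(x = 1:3, y = 4:6, treatment = c("a", "b", "a"))
spat_cols = 1:2
# single-column indexing with drop = FALSE keeps the data.frame class
# and the column name, so no colnames() repair is needed afterwards
out_env = plot_attr[ , -spat_cols, drop = FALSE]
names(out_env)  # "treatment"
```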
John requested that practitioners would like to know an overall significance level for each stage of the analysis. I'm a bit resistant to this because it de-emphasizes that the answer depends on the scale of the analysis, but I can still see the importance. There is a precedent for an overall significance test from spatial correlogram analyses. Legendre and Fortin (1989) suggest that overall significance can be assessed if there is at least one point that is significant at the level alpha_prime = alpha / n, where alpha is typically 0.05 and n is the number of tests, which in our case will be the number of points along our rarefaction curves we examine. This is the Bonferroni method of correcting for multiple tests.
hey @ngotelli @rueuntal and @FelixMay do any of you know of a good reference to cite for our approach to spatially explicit rarefaction? The general concept is described in Chiarucci et al. (2009), but they use a different way of defining how to select the next sample (a k-nearest-neighbor approach). Our approach is a bit simpler than theirs, and I'm curious if there is a precedent for it in the literature.