
metafor's Introduction

Hi there!

Work Details

  • I am an associate professor of methodology and statistics in the Department of Psychiatry and Neuropsychology at Maastricht University in the Netherlands
  • I do research on statistical methods for meta-analysis and for the analysis of multilevel and longitudinal data, with particular emphasis on computational statistics and software development


metafor's People

Contributors

bwiernik, kylehamilton, mmaechler, mutlusun, wviechtb



metafor's Issues

rma() produces different CIs for correlations than cor.test() or corr.test()

Hello,

I'm having a bit of trouble with rma(), and I want to make sure I'm not crazy. I'm doing an internal meta-analysis of correlations, and the CIs that my core functions (cor.test() and corr.test()) give me don't match the ones from rma(). I'm running the correlations, extracting the r's, n's, and SEs, then feeding them into rma().

Here's the dataframe (note the CIs):
[screenshot of the dataframe]

Here's the code that I'm feeding in:
rma(ri=Correlation, ni=n, data=TarPData, slab=Target, measure="COR")

And, weirdly, here's the forest plot (note that the CIs cross 1):
[forest plot image]

Do you have any idea what's going on? Any idea how I can feed in the data in a way that matches the output from the correlation functions?

Thanks,

-Nick
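For reference (an editorial note, not from the original thread): with measure="COR", the CIs are based on a normal approximation for the raw correlations and are not bounded by ±1, which can produce bounds above 1 for large correlations with small n. A common remedy is to meta-analyze Fisher's z-transformed correlations and back-transform; a sketch with hypothetical data standing in for TarPData:

```r
library(metafor)

# Hypothetical data standing in for TarPData
dat <- data.frame(Target = c("A", "B", "C"),
                  Correlation = c(0.95, 0.90, 0.85),
                  n = c(20, 25, 30))

# Fisher's z transformation; back-transformed CIs stay within [-1, 1]
dat <- escalc(measure = "ZCOR", ri = Correlation, ni = n, data = dat)
res <- rma(yi, vi, data = dat, slab = Target)
predict(res, transf = transf.ztor)  # results in the correlation metric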

installing metafor

Hi, I am having trouble installing the metafor R package.
Here is the error I am encountering:

Installing package(s) 'metafor'
Warning: unable to access index for repository https://cran.rstudio.com/src/contrib:
unsupported URL scheme
Warning: unable to access index for repository https://cran.rstudio.com/src/contrib:
unsupported URL scheme
Warning message:
package ‘metafor’ is not available (for R version 3.2.3)

Any help on how to solve this issue would be highly appreciated.
Thanks in advance,
Elena
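For what it's worth, "unsupported URL scheme" usually means that the R build in use cannot access https repositories. A hedged workaround (assuming an http CRAN mirror is still reachable; upgrading R is the better long-term fix) is:

```r
# Switch to an http mirror for R builds without https support
options(repos = c(CRAN = "http://cran.r-project.org"))
install.packages("metafor")
```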

transf argument in confint.rma.uni

Classification:

Bug Report

Summary

In confint.rma.uni, the transf argument is only applied if fixed==TRUE. If fixed is FALSE, results are returned in the original, untransformed metric.

Reproducible Example (if applicable)

res <- rma(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
confint(res)
confint(res, transf = exp) # Same result
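Until this is fixed, a possible workaround (consistent with the report above that transf is honored when fixed==TRUE) is to request the fixed-effect interval explicitly:

```r
library(metafor)
res <- rma(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

# Workaround sketch: transf is applied when fixed=TRUE
confint(res, fixed=TRUE, transf=exp)
```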

sessionInfo()

R version 3.5.3 (2019-03-11)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] metafor_2.1-0 Matrix_1.2-17

loaded via a namespace (and not attached):
[1] compiler_3.5.3  tools_3.5.3     nlme_3.1-139    grid_3.5.3      lattice_0.20-38

measure = "SMDC"

Classification:

Enhancement Suggestion

Summary

Could measure = "SMDC" or similar be added for standardized mean differences using the control group SD as the reference (Glass's Delta), rather than the pooled (measure = "SMD") or averaged (measure = "SMDH") SD?
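Until such a measure exists, Glass's delta can be computed by hand and passed to rma() via yi/vi. The variance formula below is the commonly cited large-sample approximation and is an assumption on my part, not taken from metafor:

```r
# Sketch: Glass's delta, standardizing by the control-group SD (group 2)
glass_delta <- function(m1i, m2i, sd2i, n1i, n2i) {
  d  <- (m1i - m2i) / sd2i
  vi <- 1/n1i + 1/n2i + d^2 / (2 * (n2i - 1))  # large-sample variance (assumed)
  data.frame(yi = d, vi = vi)
}
glass_delta(m1i = 15.6, m2i = 12.2, sd2i = 3.22, n1i = 15, n2i = 20)
```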

`plot.infl.rma.uni()` changes the `mfrow` option of `par` but does not change it back if `plotinf=FALSE` and `plotdfbs=TRUE`

Classification:

Bug Report

Summary

plot.infl.rma.uni() changes the mfrow option of par but does not change it back if plotinf=FALSE and plotdfbs=TRUE.

Reproducible Example (if applicable)

Adapted from the example of plot.infl.rma.uni()

dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
res <- rma(yi, vi, mods = ~ ablat + year, data=dat)
inf <- influence(res)
plot(inf, plotinf=FALSE, plotdfbs=TRUE)
plot(rnorm(10), rnorm(10))

The last plot will be plotted on a 3-by-1 layout created by plot.infl.rma.uni().
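Until the function is fixed, a user-side workaround is to snapshot and restore the graphics state around the call (reusing the inf object fitted above):

```r
# Workaround sketch: save and restore par() settings manually
opar <- par(no.readonly = TRUE)
plot(inf, plotinf = FALSE, plotdfbs = TRUE)
par(opar)
plot(rnorm(10), rnorm(10))  # back on the default single-panel layout
```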

Notes

If plotinf = TRUE, then these lines will run, whether or not plotdfbs is TRUE:

par.mfrow <- par("mfrow")
on.exit(par(mfrow = par.mfrow), add=TRUE)

However, if plotinf = FALSE, the lines above are not run: mfrow is changed when the dfbs graphs are plotted, but the original value is never restored.

How about moving lines 93-94 before the if (plotinf) block, so that the mfrow value is stored and restored in all conditions?

sessionInfo()

R version 4.1.1 (2021-08-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19043)

Matrix products: default

locale:
[1] LC_COLLATE=Chinese (Traditional)_Hong Kong SAR.950  LC_CTYPE=Chinese (Traditional)_Hong Kong SAR.950   
[3] LC_MONETARY=Chinese (Traditional)_Hong Kong SAR.950 LC_NUMERIC=C                                       
[5] LC_TIME=Chinese (Traditional)_Hong Kong SAR.950    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] metafor_3.0-2 Matrix_1.3-4 

loaded via a namespace (and not attached):
[1] compiler_4.1.1  tools_4.1.1     mathjaxr_1.4-0  nlme_3.1-152    grid_4.1.1      lattice_0.20-44

Risk difference issue

I am having an issue with the risk difference measure. When the risk difference is something other than 1, the output is as I expected (the difference in proportions). But when the risk difference is 1, I get something different.

For example, Here is a portion of my data:

upperleft lowerleft upperright lowerright
1 1.000 8.00 19.000 20.00
2 0.000 2.00 2.000 2.00
3 0.000 2.00 2.000 2.00
4 4.000 8.00 7.000 11.00
5 1.500 8.00 9.500 11.00
6 1.000 5.00 39.000 40.00
7 1.000 5.00 46.000 47.00

For lines 2 and 3, the risk difference should be 1 ((2/2)-(0/2)).

This is the code that I am using:

meta <- escalc(measure="RD", n1i=quadrants$lowerright,
n2i=quadrants$lowerleft, ai=quadrants$upperright,
ci=quadrants$upperleft)

And here is the output:

   yi     vi

1 0.8250 0.0160
2 0.6667 0.0926
3 0.6667 0.0926
4 0.1364 0.0523
5 0.6761 0.0297
6 0.7750 0.0326
7 0.7787 0.0324

Any thoughts on why this may be happening?

Thank you!
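One likely explanation (my reading, not confirmed in the thread): escalc() applies a continuity correction by default (add = 1/2, to = "only0") to studies with zero cells, which would turn (2/2) - (0/2) into (2.5/3) - (0.5/3) = 0.6667, exactly the value in the output. If that is the cause, disabling the correction recovers the raw difference:

```r
library(metafor)
# Default behavior: continuity correction applied to the zero cell
escalc(measure = "RD", ai = 2, ci = 0, n1i = 2, n2i = 2)
# Disabling the correction should give yi = 1
escalc(measure = "RD", ai = 2, ci = 0, n1i = 2, n2i = 2, add = 0)
```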

rcalc function

Classification:

Bug Report

Summary

The rcalc() function works with the example dataset dat.craft2003 but does not work with other datasets.

Reproducible Example (if applicable)

When running the example provided with the package for dat.craft2003, it runs well, and the result can then be used by the rma.mv() function.
Example from the package:
tmp <- rcalc(ri ~ var1 + var2 | study, ni=ni, data=dat)
V <- tmp$V
dat <- tmp$dat

However, when using other datasets and reproducing the same process, the last step (dat <- tmp$dat) returns dat as empty. The previous two steps seem to work well.

I tried it with different datasets, and even with one loaded from a csv file that exactly resembled the dat.craft2003 one, the same problem occurred.

I understand that this might be a consequence of the function being under development and I apologise if that is the case.

Best regards,
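For anyone hitting the same problem, it may help to compare your data against the long format rcalc() expects (one row per correlation per study; the column contents below are placeholders of my own, mirroring the structure of dat.craft2003):

```r
library(metafor)
# Minimal sketch of the expected long format
dat <- data.frame(study = c(1, 1, 1),
                  var1  = c("x1", "x1", "x2"),
                  var2  = c("x2", "x3", "x3"),
                  ri    = c(0.30, 0.40, 0.50),
                  ni    = c(100, 100, 100))
tmp <- rcalc(ri ~ var1 + var2 | study, ni = ni, data = dat)
tmp$dat  # should contain one row per correlation, not be empty
```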

improve error on nrow(mods)!=k

It would be useful if the error below (I think it's flagging a problem with mods, but I'm really not sure; everything seems fine) included an extra message saying what values nrow(mods) and k took, e.g.:

"Number of rows of the model matrix (nrow(mods) = 3) does not match length of the outcome vector (k = 4)."

Also, the error message includes the stop() call itself (i.e., the error-reporting code itself generates an error).

betas = c(US=.001 , AU=.0757, IL=.0020)
SEs   = c(US=.0036, AU=.0276, IL=.0131)
mod  = c(US="SCAN", AU="UK", IL="SCAN")
ma = metafor::rma(yi=betas, sei = SEs, slab = slab, mods= factor(mod))

Error in if (!is.null(mods) && (nrow(mods) != k)) stop(mstyle$stop("Number of rows of the model matrix does not match length of the outcome vector.")) :
missing value where TRUE/FALSE needed

packageVersion("metafor") ‘2.1.0’

how are weights assigned with rma.mv?

Hi there,

I am new to meta-analysis and relatively new to R, and am hoping for some advice regarding a meta-analysis! I am using a rma.mv model to account for clustering by trials that use the same comparator group for multiple treatment groups. The data we are working with is pretty sparse... several studies have 0 events for both treatment and comparator groups. Is this an issue for "rma.mv"? I know that these studies normally get "thrown out" for meta-analysis using "rma", however they have been included with "rma.mv", with comparatively low weights.

My second question is related to weighting of studies and how this is determined using the default method with "rma.mv". I'm used to specifying a weighting system with "rma". I see there is an optional argument to specify weights with "rma.mv". Is this something I should consider doing, considering small samples sizes (2-20), and sparse outcomes?

Thanks in advance!

Nadine Vogt

Add a warning about z-tests with few degrees of freedom

Classification:

Enhancement Suggestion

Summary

Random-effects meta-analyses can be performed with rma(), which defaults to a z-test even when the number of degrees of freedom is very low, leading to highly biased p-values without warning.
Use of test="knha" should be strongly encouraged.

Reproducible Example

rma(c(1,3),c(0.5, 0.5))

Observed behavior : no warning
Expected behavior : warning
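The suggested alternative, applied to the same example, would be (sketch):

```r
library(metafor)
# Knapp-Hartung adjustment: uses a t-distribution instead of the z approximation
rma(c(1, 3), c(0.5, 0.5), test = "knha")
```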

Notes

I have a patch providing relevant warning messages such as:
z-test approximation is very poor with 1 degree of freedom.
At the 5% significance level, the type I error rate rises to 30%.
Please use test="knha" to fix this issue.

warning.patch.txt

Do you want a pull request?

sessionInfo()

Post output of sessionInfo() below:

R Under development (unstable) (2020-08-13 r79017)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=French_France.1252  LC_CTYPE=French_France.1252    LC_MONETARY=French_France.1252
[4] LC_NUMERIC=C                   LC_TIME=French_France.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] metafor_2.4-0 Matrix_1.2-18

loaded via a namespace (and not attached):
[1] compiler_4.1.0  tools_4.1.0     nlme_3.1-148    grid_4.1.0      lattice_0.20-41

Aggregate: Error in match.fun(FUN)

Classification:

Bug Report

Summary

I was excited to see the addition of the aggregate() function to metafor. However, I'm getting the error that is supposed to appear when you use an object that isn't of class escalc.

Reproducible Example (if applicable)


> dat
   shi_site_id      yi     vi 
1       CAAB01  0.7121 0.3445 
2       CAAB01  1.0661 0.2727 
3       CAMB01 -0.4415 0.0875 
4       CAON03  0.3718 0.0468 
5       CAON03  0.1676 0.0001 
6       CAON03  0.2924 0.0122 
7       CAON03  0.4170 0.0168 
8       MXPU01 -0.0829 0.2257 
9       USAL01  0.1295 0.0467 
10      USAL01  0.3567 0.0130 
11      USAL01  0.2496 0.0524 
12      USAL01  0.2262 0.0388 
13      USAL01  0.0045 0.0451 
14      USAL01  0.2317 0.0115 
15      USAL01  0.1106 0.0217 
16      USAL01  0.0872 0.0081 
17      USCO04  0.2072 0.0116 
18      USCO04 -0.2550 0.0781 
19      USDE01  0.1937 0.0051 
> class(dat)
[1] "escalc"     "data.frame"

agg<-aggregate(dat,cluster=shi_site_id)


Error in match.fun(FUN) : argument "FUN" is missing, with no default


Notes

Since aggregate() is relatively new, I updated my version of metafor and then downloaded the development version from GitHub. I also tried:
agg<-metafor::aggregate(dat,cluster="shi_site_id")
Error: 'aggregate' is not an exported object from 'namespace:metafor'
I used the find_funs function from https://sebastiansauer.github.io/finds_funs/ and got this output
> find_funs("aggregate")
# A tibble: 7 x 3
  package_name builtin_pckage loaded
  <chr>        <lgl>          <lgl> 
1 Biobase      FALSE          FALSE 
2 metafor      FALSE          TRUE  
3 raster       FALSE          FALSE 
4 S4Vectors    FALSE          FALSE 
5 sf           FALSE          FALSE 
6 sp           FALSE          FALSE 
7 stats        TRUE           TRUE  


sessionInfo()

Post output of sessionInfo() below:
R version 3.6.1 (2019-07-05)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS Sierra 10.12.6

Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] robumeta_2.0 ggpubr_0.2.5 magrittr_1.5 data.table_1.12.8 metafor_2.5-74
[6] Matrix_1.2-18 forcats_0.5.0 stringr_1.4.0 dplyr_1.0.2 purrr_0.3.3
[11] tidyr_1.0.2 tibble_3.0.0 ggplot2_3.3.0 tidyverse_1.3.0 readr_1.3.1
[16] readxl_1.3.1

loaded via a namespace (and not attached):
[1] Rcpp_1.0.4 lubridate_1.7.4 lattice_0.20-40 prettyunits_1.1.1 ps_1.3.2
[6] rprojroot_1.3-2 assertthat_0.2.1 digest_0.6.25 R6_2.4.1 cellranger_1.1.0
[11] backports_1.1.5 reprex_0.3.0 httr_1.4.1 pillar_1.4.3 rlang_0.4.8
[16] curl_4.3 rstudioapi_0.11 callr_3.4.3 labeling_0.3 munsell_0.5.0
[21] broom_0.5.5 compiler_3.6.1 modelr_0.1.6 pkgconfig_2.0.3 pkgbuild_1.0.6
[26] clipr_0.7.0 tidyselect_1.1.0 gridExtra_2.3 fansi_0.4.1 crayon_1.3.4
[31] dbplyr_1.4.2 withr_2.1.2 grid_3.6.1 nlme_3.1-145 jsonlite_1.6.1
[36] gtable_0.3.0 lifecycle_0.2.0 DBI_1.1.0 scales_1.1.0 cli_2.0.2
[41] stringi_1.4.6 farver_2.0.3 ggsignif_0.6.0 fs_1.4.0 remotes_2.2.0
[46] xml2_1.2.5 ellipsis_0.3.0 generics_0.0.2 vctrs_0.3.4 cowplot_1.0.0
[51] tools_3.6.1 glue_1.3.2 hms_0.5.3 processx_3.4.2 colorspace_1.4-1
[56] rvest_0.3.5 haven_2.2.0

Adding a constant to the means in the log transformed ratio of means

Dear @wviechtb,

I'm helping a colleague with implementing some R code for calculating effect sizes using the log-transformed ratio of means, and I have the following issue with the metafor::escalc() function.

Can one add a constant to the means to "deal with" zero cases using escalc()? There seem to be add and to arguments designed for this purpose, but I can't find a way to make them work. Here is what I mean:

library(metafor)
m1i  <- 15.6
m2i  <- 12.2
n1i  <- 15
n2i  <- 20
sd1i <- 3.82
sd2i <- 3.22

# log response ratio
log(m1i) - log(m2i)
# [1] 0.245835
log(m1i + .5) - log(m2i + .5) # add a constant
# [1] 0.2372173

# variance of log response ratio
sd1i^2/(n1i * m1i^2) + sd2i^2/(n2i * m2i^2)
# [1] 0.007480549
sd1i^2/(n1i * (m1i + .5)^2) + sd2i^2/(n2i * (m2i + .5)^2) # add a constant
# [1] 0.006967255


lnR <- escalc(measure="ROM", vtype="LS", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i, n1i=n1i, n2i=n2i)
lnRk <- escalc(measure="ROM", vtype="LS", add=0.5, to="all", m1i=m1i, m2i=m2i, sd1i=sd1i, sd2i=sd2i, n1i=n1i, n2i=n2i)
lnR; lnRk
#       yi     vi
# 1 0.2458 0.0075
#       yi     vi
# 1 0.2458 0.0075
all.equal(lnR, lnRk)
# [1] TRUE

I briefly checked the code behind escalc(), and it seems that the constant is not actually added when measure = "ROM" or for the other measures for quantitative variables; am I right? If so, should I conclude that this was built purposely, to discourage the practice of adding a constant for these measures?
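If the constant is indeed ignored for "ROM", one workaround (my suggestion, not from the package documentation) is to shift the means yourself before calling escalc(), matching the hand computation earlier in this issue:

```r
# Workaround sketch: add the constant to the means manually
lnRk2 <- escalc(measure = "ROM", m1i = m1i + 0.5, m2i = m2i + 0.5,
                sd1i = sd1i, sd2i = sd2i, n1i = n1i, n2i = n2i)
lnRk2  # yi should now equal log(m1i + .5) - log(m2i + .5)
```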

Thank you for your amazing work on metafor! It is an impressive amount of thoughtful work and a great open source contribution!

Best,
Valentin

PS: This question is related to my post here.

Inconsistency between AIC and summary table output

The attached code (and data file) illustrates a case in which there is an inconsistency between the importance of a variable added to the model as judged by AIC and by the summary output: AIC says the variable matters; the summary table says it does not.

I can't find a way to explain this, any suggestions?

Regards,
Kim

P.S. Thanks for developing and maintaining metafor.
Good package, and the web site examples are much better than those of most R packages at supporting users.
Final final data set_Hasintha.txt

metafor AIC inconsistent with summary table.R.txt

Output only the summary

I am just now learning to use the metafor package. Thanks for hard work; it is a very useful package.

On the page: http://www.metafor-project.org/doku.php/plots:forest_plot_with_subgroups
the example shows the output of the forest plot. I was trying to find a way to output ONLY the summary statistics for the "RE Model for All Studies" and the corresponding plot for that summary.
I would like to be able to turn off the output of the individual studies and just show the stats and plot for the summarized information.
Is there a way to accomplish that?

Thank you.
-Anand
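One hedged possibility (not an official feature of forest.rma, as far as I know): fit the model, then draw a one-row forest plot of just the pooled estimate via the default forest method:

```r
library(metafor)
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
res <- rma(yi, vi, data = dat)

# Draw only the pooled estimate as a single-row forest plot
pred <- predict(res)
forest(pred$pred, sei = pred$se, slab = "RE Model for All Studies", refline = 0)
```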

Conflicting results with rma.peto vs. escalc

Classification:

Bug Report
df_for_wolfgang.csv

Summary

You had suggested on StackExchange that I could use either rma.peto or escalc with measure = "PETO" to obtain equivalent results. That worked on the dat.egger2001 data. But for some reason it's not working with my data; I've attached the file here. Not sure if this is a bug or not!

Reproducible Example (if applicable)

res <- rma.peto(ai=a, bi = b, ci=c, di = d, data=df)
summary(res)

dat <- escalc(measure="PETO", ai=a, bi = b, ci=c, di = d, data=df)
escalc.out <- rma(yi, vi, data=dat, method="FE")
summary(escalc.out)
Output from rma.peto:
estimate      se    zval    pval    ci.lb   ci.ub 
  0.0968  0.0505  1.9159  0.0554  -0.0022  0.1959 

Output from escalc:
estimate      se    zval    pval    ci.lb   ci.ub 
  0.0952  0.0503  1.8912  0.0586  -0.0035  0.1938 

Notes

I tried rounding the values to whole numbers (some are fractions because dropout wasn't clearly reported).

sessionInfo()

Post output of sessionInfo() below:

sessionInfo()
R version 4.0.4 (2021-02-15)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Mojave 10.14.6

Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib

Random number generation:
RNG: Mersenne-Twister
Normal: Inversion
Sample: Rounding

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] Hmisc_4.5-0 Formula_1.2-4 survival_3.2-7 lattice_0.20-41
[5] readtext_0.80 MplusAutomation_0.8 ggpubr_0.4.0 ppcor_1.1
[9] mediation_4.5.0 sandwich_3.0-0 mvtnorm_1.1-1 lmerTest_3.1-3
[13] lme4_1.1-26 readxl_1.3.1 MASS_7.3-53 foreign_0.8-81
[17] MAd_0.8-2.1 g.data_2.4 metapower_0.2.2 car_3.0-10
[21] carData_3.0-4 metafor_3.0-2 Matrix_1.3-2 psych_2.0.12
[25] forcats_0.5.1 stringr_1.4.0 dplyr_1.0.7 purrr_0.3.4
[29] readr_2.1.1 tidyr_1.1.4 tibble_3.1.6 ggplot2_3.3.5
[33] tidyverse_1.3.0 meta_5.2-0

loaded via a namespace (and not attached):
[1] minqa_1.2.4 colorspace_2.0-0 ggsignif_0.6.1 ellipsis_0.3.2
[5] rio_0.5.26 htmlTable_2.1.0 base64enc_0.1-3 fs_1.5.0
[9] rstudioapi_0.13 farver_2.1.0 fansi_0.4.2 lubridate_1.7.10
[13] mathjaxr_1.4-0 xml2_1.3.2 splines_4.0.4 mnormt_2.0.2
[17] knitr_1.37 texreg_1.37.5 jsonlite_1.7.2 nloptr_1.2.2.2
[21] broom_0.7.9 cluster_2.1.0 dbplyr_2.1.0 png_0.1-7
[25] compiler_4.0.4 httr_1.4.2 backports_1.2.1 assertthat_0.2.1
[29] fastmap_1.1.0 cli_3.0.1 htmltools_0.5.2 tools_4.0.4
[33] coda_0.19-4 gtable_0.3.0 glue_1.4.2 reshape2_1.4.4
[37] Rcpp_1.0.8 cellranger_1.1.0 vctrs_0.3.8 gdata_2.18.0
[41] nlme_3.1-152 xfun_0.29 proto_1.0.0 openxlsx_4.2.3
[45] testthat_3.0.2 rvest_0.3.6 lpSolve_5.6.15 CompQuadForm_1.4.3
[49] lifecycle_1.0.0 gtools_3.8.2 rstatix_0.7.0 statmod_1.4.35
[53] zoo_1.8-8 scales_1.1.1 hms_1.0.0 parallel_4.0.4
[57] RColorBrewer_1.1-2 curl_4.3 gridExtra_2.3 pander_0.6.3
[61] rpart_4.1-15 latticeExtra_0.6-29 stringi_1.5.3 checkmate_2.0.0
[65] boot_1.3-26 zip_2.1.1 rlang_1.0.0 pkgconfig_2.0.3
[69] htmlwidgets_1.5.3 labeling_0.4.2 tidyselect_1.1.0 plyr_1.8.6
[73] magrittr_2.0.1 R6_2.5.0 generics_0.1.0 DBI_1.1.1
[77] gsubfn_0.7 pillar_1.6.5 haven_2.3.1 withr_2.4.1
[81] nnet_7.3-15 abind_1.4-5 modelr_0.1.8 crayon_1.4.1
[85] utf8_1.1.4 tmvnsim_1.0-2 tzdb_0.2.0 jpeg_0.1-8.1
[89] grid_4.0.4 data.table_1.14.0 reprex_1.0.0 digest_0.6.27
[93] xtable_1.8-4 GPArotation_2014.11-1 numDeriv_2016.8-1.1 munsell_0.5.0


rma.glmm produces absurd results for standard errors

Summary

For a particular set of data, rma.glmm with measure="OR" and model="CM.EL" yields absurd results for the standard error and p-value of the coefficient.

Reproducible Example

postreat = c(1,1,14,1,5,8,7,8,4,5)
ntreat = c(407,414,1197,989,139,136,304,214,111,231)
poscontrol = c(1,1,16,1,4,11,4,11,0,8)
ncontrol = c(422,407,1300,494,80,157,152,227,37,234)
rma.glmm(measure="OR",ai=postreat,bi=ntreat-postreat,ci=poscontrol,di=ncontrol-poscontrol,model="CM.EL")

The estimate for the OR is reasonable (around 0.85), but the standard error = 0.0001, zval = -2198.9358, and negligible p-value are completely unreasonable. Using other methods, the p-value is larger than 0.3 (which is what one would expect from data like this).
A reasonable p-value is obtained if the data are changed a little bit (for instance, by replacing poscontrol[9] with 1).
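A sanity check (sketch, using the data from the example above) is to compare against one of the other model types, e.g. the unconditional model with fixed study effects:

```r
library(metafor)
postreat   <- c(1, 1, 14, 1, 5, 8, 7, 8, 4, 5)
ntreat     <- c(407, 414, 1197, 989, 139, 136, 304, 214, 111, 231)
poscontrol <- c(1, 1, 16, 1, 4, 11, 4, 11, 0, 8)
ncontrol   <- c(422, 407, 1300, 494, 80, 157, 152, 227, 37, 234)

# Cross-check with a different model type for the same data
rma.glmm(measure = "OR", ai = postreat, bi = ntreat - postreat,
         ci = poscontrol, di = ncontrol - poscontrol, model = "UM.FS")
```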

Selection model error during optimization

Classification: Bug Report

Summary

I'm using the dat.normand1999 data set as a test for adding selection models to MAJOR. When I run selmodel() I get an error during optimization when I use either beta or power for the model type. I wasn't sure whether this was a bug or an error on my end, and since selmodel() is a new feature I figured I would file a bug report just in case. If this ends up being user error, my bad.

Reproducible Example (if applicable)

dat <- dat.normand1999
res <- rma(measure="SMD", m1i=m1i, sd1i=sd1i, n1i=n1i, m2i=m2i, sd2i=sd2i, n2i=n2i, data=dat, slab=source)
selmodel(res, type="beta")
selmodel(res, type="power")
selmodel(res, type="beta")
Error in selmodel.rma.uni(res, type = "beta") : 
  Error during optimization.
selmodel(res, type="power")
Error in selmodel.rma.uni(res, type = "power") : 
  Error during optimization.

Notes

I've been able to get this to work in R and in jamovi by setting the model type to any of the other options. Maybe changing the optimizer to something other than optim would fix this, but I'm not sure.
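If trying a different optimizer is worth pursuing, selmodel() accepts a control list; a sketch reusing the res object fitted above (nlminb chosen arbitrarily):

```r
# Sketch: switch the optimizer via the control argument
selmodel(res, type = "beta",  control = list(optimizer = "nlminb"))
selmodel(res, type = "power", control = list(optimizer = "nlminb"))
```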

sessionInfo()

R version 4.0.3 (2020-10-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17134)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252    LC_MONETARY=English_United States.1252 LC_NUMERIC=C                           LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] SCRT_1.3.1     SCMA_1.3.1     metafor_2.5-52 jmvtools_1.6.4 Matrix_1.2-18 

loaded via a namespace (and not attached):
 [1] pillar_1.4.6        compiler_4.0.3      mathjaxr_1.0-1      prettyunits_1.1.1   remotes_2.2.0       tools_4.0.3         pkgbuild_1.1.0      jsonlite_1.7.1      node_1.0           
[10] lifecycle_0.2.0     tibble_3.0.4        nlme_3.1-149        jmvcore_1.2.23      gtable_0.3.0        lattice_0.20-41     pkgconfig_2.0.3     rlang_0.4.8         cli_2.1.0          
[19] rstudioapi_0.11     withr_2.3.0         vctrs_0.3.4         rprojroot_1.3-2     grid_4.0.3          glue_1.4.2          R6_2.4.1            processx_3.4.4      fansi_0.4.1        
[28] magrittr_1.5        callr_3.5.1         TOSTER_0.3.4        ggplot2_3.3.2       ellipsis_0.3.1      backports_1.1.10    scales_1.1.1        ps_1.4.0            assertthat_0.2.1   
[37] colorspace_1.4-1    numDeriv_2016.8-1.1 munsell_0.5.0       crayon_1.3.4

Standard error of the mean or standard deviation: which needs to be input into rma.uni as sei=?


Why do Random Effect and Fixed Effect models give exactly the same results

Suppose I am working with the simple dataset attached (sutton_df.xlsx). I am interested in calculating the risk difference ("RD") for both a fixed-effects model ("FEM") and a random-effects model ("REM").

I create two models as follows:

model.sutton.RD.FEM <- rma(measure = "RD", ai = tpos, ci=cpos, n1i = nt, n2i = nc, data = sutton_df, slab = Study.ID, method = "FE")

model.sutton.RD.REM <- rma(measure = "RD", ai = tpos, ci=cpos, n1i = nt, n2i = nc, data = sutton_df, slab = Study.ID, method = "REML")

A summary() of both of these models gives EXACTLY the same result for the intercept ($b) and all other parameters. Further, the REM shows zero heterogeneity, and the forest() plots are EXACTLY the same.

It seems to me that the REM and FEM results should be different, and that heterogeneity should not be zero in the REM. Is there a bug here, or have I failed to understand the proper coding?

FWIW, calculating the odds ratio (via measure ="OR") also gives identical results for the FEM and REM, and once again zero heterogeneity in the REM. Something seems amiss, but it could easily be with me.

Many thanks.
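One non-bug explanation worth ruling out first: if REML estimates the between-study variance as exactly 0, the random-effects weights collapse to the fixed-effects weights and the two models coincide. That can be checked directly (sketch, using the models defined in the question above):

```r
# If tau^2 is estimated as 0, identical FE/RE results are expected
model.sutton.RD.REM$tau2      # point estimate of between-study variance
confint(model.sutton.RD.REM)  # confidence interval for tau^2
```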

masking `ranef` (from "nlme" or "lme4")

The metafor package provides an S3 method for ranef(). Because it "forgets" to import the S3 generic (from nlme or lme4), a new ranef() S3 generic is automatically created.

This leads to a warning in the following, but more importantly, ranef() will stop working for nlme or lme4 objects where it should work:

> require(nlme)
> require(metafor)
Loading required package: metafor
Loading 'metafor' package (version 1.9-9). For an overview 
and introduction to the package please type: help(metafor).

Attaching package: 'metafor'

The following object is masked from 'package:lme4':

    ranef

AFTER fixing the problem (see pull request), the "... masked ..." warning disappears and, more importantly, after loading the two packages, methods(ranef) shows both the nlme and metafor methods.
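For context, the fix amounts to NAMESPACE directives along these lines (a sketch only; the actual pull request may differ, e.g. in which package the generic is imported from or which S3 methods are registered):

```r
# NAMESPACE (sketch)
importFrom(nlme, ranef)
S3method(ranef, rma.uni)
```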

Problem with forest() in version 2.0.0

I've recently installed R 3.4.1 and updated metafor to 2.0.0, and I've found the following error when running forest(rma_object):

Error in object$X.f %*% object$beta : 
  requires numeric/complex matrix/vector arguments

I managed to get it to work when I reverted back to 1.9-9.

Enhancement request: provide additional reporter() functionality to support those writing reports in Rmarkdown

Classification:

Enhancement Suggestion

Summary

Hi @wviechtb, following your great talk on the metafor::reporter() function at #ESMARConf2021, I've been thinking more about how I would integrate this truly fantastic functionality into my workflow. As someone who regularly writes papers in Rmarkdown, I wanted to suggest three enhancements that I feel would make it easier for me to make use of reporter(), but would not change the central aim of producing a self-contained report:

  1. Having an output option that allows users to call reporter() within a larger Rmarkdown document (see this gist for an example of the current behaviour) would be very helpful, so that additional Rmarkdown text/elements could be added before/after the output from reporter() without having to use the workaround described in "Notes" below.
  2. Providing an argument to allow users to subset what sections reporter() reports would also be extremely useful. This would be particularly useful as the analysis could be run in the first chunk to create the res object, then reporter(res, section = "methods") could be called in the methods and reporter(res, section = "results") in the results section of a manuscript. I've had a look at the code powering reporter() and it seems that the function is already creating each section separately, so it would just be a case of allowing users to define which of these sections are written to the final file.
  3. In the current temporary markdown file, using in-line expressions rather than hard-coded numbers/values (e.g. "k=`r res$k`" rather than "k=13") would be helpful, particularly in the case where users copy the generated markdown text across to their manuscript and edit it, but then want to update the analysis. If the copied text used in-line expressions, the res object in the main manuscript could be updated without having to call reporter() and copy-paste the text again, which would remove user changes. As most people will not look at the Rmarkdown file anyway, preferring to view the rendered version, this change should not have any major user-facing effects, but it would make working with the output of reporter() much easier for those writing in Rmarkdown.

I would be very happy to try and implement some of this functionality and then open a PR, but wanted to open an issue first and see what your thinking on it was!

Just to note as a discussion piece, while the above enhancements would be hugely helpful in their own right, the reason I am particularly interested in them is that they would allow for a data-driven generation of the methods/results section of a systematic review manuscript, particularly if other analytical packages provided similar reporter functions (e.g. robvis, PRISMA2020 by @nealhaddaway)

As an example, you could then have a chunk in the methods:

metafor::reporter(res, section = "methods") # Methods element of current fn

And in the results:

PRISMA2020::reporter(prisma) # Summary of included studies
metafor::reporter(res, section = "results") # Results element of current fn
robvis::reporter(rob) # Summary of ROB in included studies

Reproducible Example (if applicable)

Because Rmarkdown doesn't copy well into GitHub issues, I have created a reprex of the current behaviour of metafor::reporter() in this gist.

Notes

As a work-around for the functionality detailed in Enhancement 1, users can silently save the temporary .Rmd to a user-defined directory and then include it as a child Rmd file in a subsequent chunk.

I've created a full reprex of this workaround as a gist.

sessionInfo()

R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19042)

Matrix products: default

locale:
[1] LC_COLLATE=English_United Kingdom.1252 
[2] LC_CTYPE=English_United Kingdom.1252   
[3] LC_MONETARY=English_United Kingdom.1252
[4] LC_NUMERIC=C                           
[5] LC_TIME=English_United Kingdom.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods  
[7] base     

other attached packages:
[1] metafor_2.4-0 Matrix_1.2-17

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.5        highr_0.8         compiler_3.6.1   
 [4] prettyunits_1.0.2 remotes_2.2.0     tools_3.6.1      
 [7] testthat_2.3.0    digest_0.6.27     pkgbuild_1.0.6   
[10] pkgload_1.0.2     nlme_3.1-140      memoise_1.1.0    
[13] evaluate_0.14     lifecycle_0.2.0   lattice_0.20-38  
[16] rlang_0.4.9       cli_2.0.2         rstudioapi_0.11  
[19] yaml_2.2.1        xfun_0.19         stringr_1.4.0    
[22] withr_2.3.0       knitr_1.30        desc_1.2.0       
[25] fs_1.3.1          devtools_2.3.1    rprojroot_2.0.2  
[28] grid_3.6.1        glue_1.4.2        R6_2.5.0         
[31] processx_3.4.1    fansi_0.4.0       rmarkdown_2.5    
[34] sessioninfo_1.1.1 callr_3.4.3       purrr_0.3.3      
[37] magrittr_2.0.1    ps_1.3.0          ellipsis_0.3.0   
[40] htmltools_0.5.0   usethis_2.0.0     rsconnect_0.8.16 
[43] assertthat_0.2.1  stringi_1.4.6     crayon_1.3.4

Continuity correction only affects MH test (rma.mh), not estimates/CI

The continuity correction option in rma.mh seems to affect only the MH test, not the estimates and confidence interval. Presumably it should affect all of them. This is for version 2.0-0.

library(metafor)

rma.mh(ai=c(0,0,2), bi=c(20,20,18), ci=c(0,2,4), di=c(20,18,16), measure="OR", correct=FALSE)

Fixed-Effects Model (k = 3)

Test for Heterogeneity:
Q(df = 1) = 0.3092, p-val = 0.5782

Model Results (log scale):

estimate se zval pval ci.lb ci.ub
-1.2528 0.8712 -1.4380 0.1504 -2.9602 0.4547

Model Results (OR scale):

estimate ci.lb ci.ub
0.2857 0.0518 1.5757

Cochran-Mantel-Haenszel Test: CMH = 2.2286, df = 1, p-val = 0.1355
Tarone's Test for Heterogeneity: X^2 = 0.8547, df = 1, p-val = 0.3552

rma.mh(ai=c(0,0,2), bi=c(20,20,18), ci=c(0,2,4), di=c(20,18,16), measure="OR", correct=TRUE, add=1/2, to="only0", drop00=FALSE)

Fixed-Effects Model (k = 3)

Test for Heterogeneity:
Q(df = 2) = 0.6922, p-val = 0.7074

Model Results (log scale):

estimate se zval pval ci.lb ci.ub
-1.2528 0.8712 -1.4380 0.1504 -2.9602 0.4547

Model Results (OR scale):

estimate ci.lb ci.ub
0.2857 0.0518 1.5757

Cochran-Mantel-Haenszel Test: CMH = 1.2536, df = 1, p-val = 0.2629
Tarone's Test for Heterogeneity: X^2 = 0.8547, df = 1, p-val = 0.3552

When I manually check what the estimate should be, I get:

mhor <- function(ai,bi,ci,di,correct=FALSE){
  if (correct==TRUE) {
    for (i in 1:length(ai)){
      if ((ai[i]==0) || (ci[i]==0)) {
        ai[i] = ai[i] + 0.5
        di[i] = di[i] + 0.5
        bi[i] = bi[i] + 0.5
        ci[i] = ci[i] + 0.5
      }
    }
  }
  return( sum( ai*di/(ai+bi+ci+di) ) / sum( bi*ci/(ai+bi+ci+di) ) )
}

mhor(ai=c(0,0,2), bi=c(20,20,18), ci=c(0,2,4), di=c(20,18,16), correct=FALSE)

0.2857143

mhor(ai=c(0,0,2), bi=c(20,20,18), ci=c(0,2,4), di=c(20,18,16), correct=TRUE)

0.3873085

Wrong weights in the forest plot of a three-level meta-analysis conducted with the rma.mv() function

Classification:

Bug Report

Summary

The weights depicted in the forest plot of a three-level meta-analysis conducted using the rma.mv() function are incorrect.
The weights returned by forest.rma() are the diagonal weights of the variance-covariance matrix, which correspond to a regular model without clustering. Calculating the weighted mean of the effects based on the diagonal weights does not reproduce the result reported by the three-level model. But when using the rowsum weights, we get an overall effect equal to the one reported by the three-level model and depicted in the forest plot.

Reproducible code

# here is a reproducible Code for conducting a three-level meta-analysis using rma.mv() function
# and drawing the forest plot.
# loading essential libraries
library(tidyverse)
library(metafor)

# building the dataframe
df <- data.frame(author = c("Geng 2019", "Kim 2018", "Kim 2018",  "Kim 2018", "Kim 2020",
                            "Kim 2020",  "Kim 2020",  "Lih 2016", "Lih 2016","Lih 2019",
                            "Lih 2019",  "Lih 2019",  "Lih 2019"),
                 yi = c(6.8881651, 0.8095802, 1.2276701, 0.6665888, 2.2901223,
                        3.1653921, 2.0759750, 0.9695670, 0.3003020, 0.8151049,
                        0.6450638, 0.6594766, 0.3751020), 
                 vi = c(1.98619588, 0.10819275, 0.23767934, 0.21110852, 0.33111651,
                        0.45049268, 0.30774181, 0.24833500, 0.22472726, 0.09025413,
                        0.08766778, 0.08786364, 0.08479897))

# to add the third level to the data as "study id"
df <- df %>% 
  mutate(es_id = row_number()) %>% 
  group_by(`author`) %>% 
  mutate(study_id = cur_group_id()) %>% 
  mutate(study_No = row_number())

# building the model
meta_model <- rma.mv(data = df, yi = yi, V = vi,
                     random = ~ 1 | study_id / es_id,
                     method = "REML")
# forest plot
forest.rma(meta_model, showweights = T, digits = 4, 
           slab = paste(df$author, ".",df$study_No))

# to check if the weighted mean with the depicted weights match the overall effect in the plot or not.
w_diag<- weights.rma.mv(meta_model, "diagonal")
weighted.mean(x = meta_model$yi, w = w_diag)
# the result shows these numbers are not equal (0.88 != 2.00)

w_rowsum <- weights.rma.mv(meta_model, "rowsum")
weighted.mean(x = meta_model$yi, w = w_rowsum)
# this weighted mean is equal to the overall effect demonstrating that the weights depicted in the 
# plot are not correct and should be row sum weights
# the diagonal weights are related to the regular random effects model, but not three-level model

Output

forest plot image


# diagonal weights
> w_diag<- weights.rma.mv(meta_model, "diagonal")
> w_diag
         1          2          3          4          5          6          7          8          9 
 0.2755947  8.2452046  5.8151941  6.3029766  3.5349209  2.9635868  3.6501036  3.8745954  3.8938949 
        10         11         12         13 
15.0572992 15.3559194 15.3329894 15.6977202 

# weighted mean based on diagonal weights
> weighted.mean(x = meta_model$yi, w = w_diag)
[1] 0.8807281
# this number is not equal to the overall effect!

# rowsum weights
> w_rowsum <- weights.rma.mv(meta_model, "rowsum")
> w_rowsum
        1         2         3         4         5         6         7         8         9        10 
15.010137 10.851992  4.939877  5.561627  7.493503  5.507798  8.062676 10.006018 11.057157  5.219367 
       11        12        13 
 5.373347  5.361369  5.555131 

# weighted mean based on rowsum weights
> weighted.mean(x = meta_model$yi, w = w_rowsum)
[1] 1.996449
# this number is equal to the overall effect reported in the forest plot.


Notes

When fitting the model via the metagen() function in the meta package, it produces the proper weights.

sessionInfo()

R version 4.1.1 (2021-08-10)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 19044)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    
system code page: 1256

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] metafor_3.0-2   Matrix_1.3-4    forcats_0.5.1   stringr_1.4.0   dplyr_1.0.5     purrr_0.3.4    
 [7] readr_1.4.0     tidyr_1.1.3     tibble_3.1.0    ggplot2_3.3.5   tidyverse_1.3.1

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.6         lubridate_1.7.10   lattice_0.20-44    assertthat_0.2.1   utf8_1.1.4        
 [6] R6_2.5.0           cellranger_1.1.0   backports_1.2.1    reprex_2.0.0       httr_1.4.2        
[11] pillar_1.6.0       rlang_0.4.10       readxl_1.3.1       performance_0.8.0  rstudioapi_0.13   
[16] minqa_1.2.4        nloptr_1.2.2.2     effectsize_0.5     mathjaxr_1.4-0     splines_4.1.1     
[21] lme4_1.1-26        statmod_1.4.35     munsell_0.5.0      broom_0.7.6        compiler_4.1.1    
[26] modelr_0.1.8       pkgconfig_2.0.3    parameters_0.15.0  insight_0.14.5     tidyselect_1.1.1  
[31] fansi_0.4.2        crayon_1.4.1       dbplyr_2.1.1       withr_2.4.2        MASS_7.3-54       
[36] grid_4.1.1         nlme_3.1-153       jsonlite_1.7.2     meta_5.0-1         gtable_0.3.0      
[41] lifecycle_1.0.0    DBI_1.1.1          magrittr_2.0.1     bayestestR_0.11.5  scales_1.1.1      
[46] datawizard_0.2.1   cli_2.5.0          stringi_1.5.3      fs_1.5.0           xml2_1.3.2        
[51] ellipsis_0.3.1     generics_0.1.0     vctrs_0.3.6        boot_1.3-28        tools_4.1.1       
[56] CompQuadForm_1.4.3 glue_1.4.2         hms_1.0.0          colorspace_2.0-0   rvest_1.0.0       
[61] haven_2.4.1     

R2 in a rma.mv model

Hello

I am fitting an rma.mv model, and when trying to calculate R2 I got a negative value. It is quite strange to find that the variance of a null model is smaller than the variance of a model with moderators. I have opened a GitHub repo where the data and code are located (https://github.com/nmolanog/issue_r2_metafor). I would appreciate it if you could take a look at the small dataset and code.
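For context, a common way to compute a pseudo-R2 for rma.mv models is to compare the summed variance components of the null model and the moderator model, which is presumably where the negative value arises. A minimal sketch (object and variable names are placeholders, not the names in the linked repo):

```r
library(metafor)

# null model and moderator model with the same random-effects structure
res0 <- rma.mv(yi, V = vi, random = ~ 1 | study/es_id, data = dat)
res1 <- rma.mv(yi, V = vi, mods = ~ mod, random = ~ 1 | study/es_id, data = dat)

# pseudo-R2: proportional reduction in the summed variance components
R2 <- (sum(res0$sigma2) - sum(res1$sigma2)) / sum(res0$sigma2)

# with (RE)ML estimation, sum(res1$sigma2) can exceed sum(res0$sigma2),
# yielding a negative R2; such values are often truncated to 0
max(0, R2)
```

This truncation is a convention, not a fix: a negative value simply means the estimated variance components did not shrink after adding the moderators.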

Thank you

Meta-analysis of single proportions - number of events / non-events missing in rma.uni object

Classification

Feature Request

Summary

Hi Wolfgang,

It looks like the number of events 'xi' and non-events 'mi' are not part of an rma.uni object (look for 666 in the output of the two examples).

I guess the same is true for 'ti' and 'ri' in the meta-analysis of single rates or single correlations.

BTW, is there any difference between list elements 'ni' and 'ni.f'?

Best wishes,
Guido

Reproducible Example

str(rma.uni(xi = 666 + 0:1, ni = 2 * 666 + c(0, 2), measure = "PR"))
str(rma.uni(xi = 666 + 0:1, mi = 666 + 0:1, measure = "PR"))

sessionInfo()

Post output of sessionInfo() below:

R version 4.0.3 (2020-10-10)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur 10.16

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRblas.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.0/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
[1] metafor_2.5-69 Matrix_1.2-18

loaded via a namespace (and not attached):
[1] compiler_4.0.3  mathjaxr_1.0-1  nlme_3.1-150    grid_4.0.3
[5] lattice_0.20-41

rma.glmm example fails

On RHEL 7.6 with version 2.0-0 the last example in metafor/pritz1997.Rd fails with

> ### mixed-effects logistic regression model
> res <- rma.glmm(measure="PLO", xi=xi, ni=ni, data=dat)
Error in t(b2.QE) %*% chol2inv(chol.h) : non-conformable arguments
Calls: rma.glmm
Execution halted
* checking for unstated dependencies in ‘tests’ ... OK
* checking tests ... ERROR
  Running ‘testthat.R’
Running the tests in ‘tests/testthat.R’ failed.
Last 13 lines of output:
  4: withCallingHandlers(code, warning = function(condition) {
         out$push(condition)
         invokeRestart("muffleWarning")
     })
  5: eval_bare(get_expr(quo), get_env(quo))
  6: rma.glmm(measure = "PLO", xi = xi, ni = ni, mods = ~mod1, data = dat)

whereas on Windows 10 the example runs producing a warning though:

4: In rma.glmm(measure = "PLO", xi = xi, ni = ni, data = dat) :
  Cannot invert Hessian for saturated model.

Is rma.glmm touching numerical limits here?

Kind regards
Reinhold

escalc change in behavior since 1.9-8

Hi - I have some code I have been running for a while on 1.9-8. I recently updated to 2.0-0 and am having a problem with escalc: it does not behave the way it used to. In short, yi/vi are not being calculated at all. Examples from 1.9-8 and 2.0-0 are shown below.

dataset:

              study n1i sd1i n2i sd2i   am  m1i m2i           vi      yi
1  Cahill KN (2017)  32   NA  30   NA   MD   NA  NA 2.603178e-05 0.04600
2 Bousquet J (2011) 272   NA 128   NA   MD   NA  NA 2.603178e-03 0.11000
3   Karpel J (2010) 542   NA 529   NA   MD   NA  NA 7.229779e-04 0.02876
4   Garcia G (2013)  20 0.38  21  0.2 <NA> 0.25   0           NA      NA

escalc call - same one used in 1.9-8 and 2.0-0

es <- escalc("MD", m1i=m1i, n1i=n1i, sd1i=sd1i,
                   m2i=m2i, n2i=n2i, sd2i=sd2i,
                   yi=yi,vi=vi,
                   slab=study,data=p2$data,replace=FALSE)

result in 1.9-8 (it generated the yi/vi for the last study as expected)


              study n1i sd1i n2i sd2i   am  m1i m2i     vi     yi
1  Cahill KN (2017)  32   NA  30   NA   MD   NA  NA 0.0000 0.0460
2 Bousquet J (2011) 272   NA 128   NA   MD   NA  NA 0.0026 0.1100
3   Karpel J (2010) 542   NA 529   NA   MD   NA  NA 0.0007 0.0288
4   Garcia G (2013)  20 0.38  21  0.2 <NA> 0.25   0 0.0091 0.2500

result in 2.0-0 (in the last study the yi/vi was not generated)

              study n1i sd1i n2i sd2i   am  m1i m2i           vi      yi
1  Cahill KN (2017)  32   NA  30   NA   MD   NA  NA 2.603178e-05 0.04600
2 Bousquet J (2011) 272   NA 128   NA   MD   NA  NA 2.603178e-03 0.11000
3   Karpel J (2010) 542   NA 529   NA   MD   NA  NA 7.229779e-04 0.02876
4   Garcia G (2013)  20 0.38  21  0.2 <NA> 0.25   0           NA      NA

maybe the change in behavior is documented somewhere but I could not find it.

thank you!

Add further information to "Cannot fit ML model"

Classification: Feature Request

Summary

Recently, I tried to fit an rma.glmm model, which resulted in the error "cannot fit ML model". At first, I thought there was a problem with my data or my specified model. Then, I realized that even the examples in metafor did not run. Investigating this further, I eventually found that there was a mismatch between the installed Rcpp version and the version lme4 was compiled with (due to the use of renv). After recompiling Rcpp and lme4, the problem was fixed.

It took me quite a while to fit a model with lme4 directly and check its output. I think I could have found the problem far more easily and quickly if metafor had returned the corresponding error from lme4 rather than only stating "cannot fit ML model".

Thus, this is a feature request to add the corresponding error from lme4 in addition to "cannot fit ML model".
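A sketch of how the underlying error could be surfaced (the internal call structure here is an assumption; the glmer formula is just an illustration of the kind of model rma.glmm fits internally):

```r
# hypothetical: wrap the lme4 fit and append the original condition message
fit <- tryCatch(
  lme4::glmer(cbind(xi, mi) ~ 1 + (1 | study), family = binomial, data = dat),
  error = function(e) e
)
if (inherits(fit, "error"))
  stop("Cannot fit ML model. Underlying error from lme4: ", conditionMessage(fit))
```

Passing conditionMessage() through in this way would have pointed directly at the Rcpp/lme4 binary mismatch.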

Notes

Thank you a lot for all your work on metafor!

Add a 'data' option to forest() function

Classification:

Feature request

Summary

The forest() function currently does not have a 'data =' option to specify the data frame that the yi and vi variables come from, as an alternative to specifying them as individual objects in the environment. It would be handy to have, as in escalc() etc.
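In the meantime, the usual workarounds are (a sketch; `dat` is assumed to hold escalc() output with an `author` column):

```r
library(metafor)

# either extract the columns explicitly ...
forest(dat$yi, dat$vi, slab = dat$author)

# ... or evaluate the call inside the data frame:
with(dat, forest(yi, vi, slab = author))
```

A 'data =' argument would make the second idiom unnecessary and match the interface of escalc() and rma().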

Support for penalized splines/GAMs

Classification:

Feature Request

Summary

I'm interested in being able to fit meta-analysis models with penalized splines, ala mgcv::gam(). For example, I have a meta-analytic dataset where we want to model change in effect sizes over time using thin plate splines, with Country random effects in both intercepts and the year splines.

Here is how I might fit such a model with brms by using mgcv's tensor product spline implementation.

library(brms)
m2 <- brm(
  bf(yi | se(sqrt(vi)) ~ t2(year2004, Country, bs = c("tp", "re")) + 
       (1  | sample_id)
  ),
  data = filter(dat_mnsd2, param == "mn"),
  backend = "cmdstanr",
  cores = parallel::detectCores(),
  threads = 2
)

Is there any possibility that these sorts of penalized splines might be supported in metafor?

meta-regression with beta likelihood

I was wondering whether there are plans to add meta-regression modelling capacity with a Beta likelihood for soft-bounded (0,1) response data, such as rates or survival probability estimates (with estimates of precision) derived from capture-mark-recapture studies. Unlike with the Gaussian likelihood, where it is possible and desirable to fix the residual variance to 1, it is not clear to me whether that concept still applies to Beta-likelihood regression (would this be the same as fixing the dispersion parameter?). Just some ponderings!
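For reference, a Beta-likelihood meta-regression can currently be approximated outside metafor, for example with brms. A rough sketch (variable names are placeholders, and whether fixing the precision parameter phi is the right analogue of fixing the residual variance at 1 is exactly the open question above):

```r
library(brms)

# prop: observed proportions strictly in (0, 1); study: study identifier
m <- brm(prop ~ 1 + (1 | study),
         family = Beta(link = "logit", link_phi = "log"),
         data = dat)
```

Note this sketch does not use the study-level precision estimates directly, which is the part a proper Beta meta-analytic likelihood would need to address.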

`refit` argument in anova()

Would it be possible to add a refit argument to anova() ala lme4/glmmTMB to automatically refit REML models with ML so that models with different fixed effects can be meaningfully compared?
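Until such an argument exists, the refit can be done manually with update(), sketched here with the BCG data:

```r
library(metafor)
dat  <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)

res0 <- rma(yi, vi, data=dat)                  # REML by default
res1 <- rma(yi, vi, mods = ~ ablat, data=dat)  # adds a moderator

# refit both with ML before comparing models with different fixed effects
anova(update(res0, method="ML"), update(res1, method="ML"))
```

A refit argument to anova() would simply automate this update() step, as lme4 and glmmTMB do.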

Allow transf and atransf to accept a function that aliases an expected function, so that xlab is set to the corresponding label

Feature Request

Be able to pass a custom function to the transf and atransf arguments and get the expected xlab.

Summary

I like to use a variable for transf and atransf when automating forest plots, but the way these are parsed in forest() and .setlab(), only explicit text arguments are recognized.

I did some digging and I think it is possible to resolve this with a few changes by taking advantage of this Stack Overflow post:
https://stackoverflow.com/questions/32266433/in-r-how-do-i-test-that-two-functions-have-the-same-definition

Reproducible Example (if applicable)

The xlab is different for these two forest plots because deparse(substitute()) captures only the literal argument text: for the first call this is the recognized name transf.ilogit, while for the second it is get_atransf('PLO'), which is not recognized even though it evaluates to the same function.

get_atransf <- function(measure="") {
    if (is.null(measure)) {
        measure <- ""
    }
    inv.function <- ""
    if (measure %in% c("MD", "SMD", "RD", "MN", "PR")) inv.function <- identity
    if (measure %in% c("OR", "RR", "PLN")) inv.function <- exp
    if (measure %in% c("PLO")) return(transf.ilogit)
    return(inv.function)
}

res <- rma(measure="PLO", xi=ci, ni=n2i, data=dat.nielweise2007)
forest(res, atransf = transf.ilogit)
forest(res, atransf =get_atransf('PLO'))

We can check that these two things should be the same, but the behavior differs because only the text argument of atransf is parsed and not what it evaluates.

identical(deparse(transf.ilogit), deparse(get_atransf('PLO')))

Suggested Edit 1

In the default plots we need to change how the argument is substituted and then deparsed.
Because of substitute() we just get the function name as typed, but we want what the function evaluates to, since test_atransf_as_a_function returns the 'default' back-transform for the measure.

   #transf.char  <- deparse(substitute(transf))
   #atransf.char <- deparse(substitute(atransf))
   transf.char  <- deparse(transf)
   atransf.char <- deparse(atransf)

Suggested Edit 2

A corresponding edit in .setlab() is needed.
Here we create a list of possible functions and deparse them.
So long as any one is identical, we can use the recommended label.
Below is a dummy .setlab() for a test case.

setlab_custom <- function(measure, transf.char, atransf.char, gentype, short=FALSE){
  
  if (measure == "PLO") {
    if (transf.char == "FALSE" && atransf.char == "FALSE") {
      lab <- ifelse(short, "Log[Odds]", "Log Odds")
    } else {
      lab <- ifelse(short, lab, "Transformed Log Odds")
      # Take all the possible expected matching functions and deparse them
      # This allows us to pass functions as variables in metafor.
      funct_list <- list(transf.ilogit.int, transf.ilogit, plogis)
      funct_list <- lapply(funct_list, deparse)
      if (any(sapply(funct_list, identical, atransf.char)))
        lab <- ifelse(short, "Proportion", "Proportion (logit scale)")
    }
  }
  return(lab)
  
}

# Currently the only way is using an explicit text argument
# It works since this is the current design. 
atransf.char <- deparse(substitute(transf.ilogit))
metafor:::.setlab("PLO", "FALSE", atransf.char,gentype=1)

# using a function that takes a variable fails 
# it just takes the string of the exact argument

atransf.char <- deparse(substitute(get_atransf('PLO')))
metafor:::.setlab("PLO", "FALSE", atransf.char,gentype=1)

# New method saves what the argument evaluates to
# New evaluation checks all possible good evaluations. 
atransf.char <- deparse(get_atransf('PLO'))
setlab_custom("PLO", "FALSE", atransf.char,gentype=1)


sessionInfo()

R version 4.1.0 (2021-05-18)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 20.04.2 LTS

Matrix products: default
BLAS:   /usr/lib/x86_64-linux-gnu/blas/libblas.so.3.9.0
LAPACK: /usr/lib/x86_64-linux-gnu/lapack/liblapack.so.3.9.0

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C               LC_TIME=en_US.UTF-8       
 [4] LC_COLLATE=en_US.UTF-8     LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                  LC_ADDRESS=C              
[10] LC_TELEPHONE=C             LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] DREmetafor_1.0 metafor_2.4-0  Matrix_1.3-3  

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.6         pillar_1.4.7       compiler_4.1.0     tools_4.1.0       
 [5] boot_1.3-28        digest_0.6.27      jsonlite_1.7.2     evaluate_0.14     
 [9] lifecycle_0.2.0    tibble_3.0.4       nlme_3.1-152       gtable_0.3.0      
[13] lattice_0.20-44    pkgconfig_2.0.3    rlang_0.4.10       personograph_0.1.3
[17] yaml_2.2.1         xfun_0.24          dplyr_1.0.2        stringr_1.4.0     
[21] knitr_1.33         generics_0.1.0     vctrs_0.3.8        grid_4.1.0        
[25] tidyselect_1.1.0   grImport_0.9-3     glue_1.4.2         R6_2.5.0          
[29] XML_3.99-0.5       rmarkdown_2.6      pander_0.6.4       ggplot2_3.3.3     
[33] purrr_0.3.4        magrittr_2.0.1     htmltools_0.5.0    DREutils_1.0      
[37] scales_1.1.1       ellipsis_0.3.2     colorspace_2.0-0   stringi_1.6.2     
[41] munsell_0.5.0      crayon_1.4.1       Cairo_1.5-12.2

rma.mh possible bug

Hello,

I'm not entirely sure whether this is a bug, but I've had some strange behaviour with the rma.mh function of metafor: with certain data, it fails with an unhelpful error.
It seems to be tied to double zeros in my data frame; I've partially reproduced the issue below. (The reason I say 'seems to be' and 'partially reproduced' is that excluding different random rows each time causes rma.mh to fail (???), but I can only reproduce the failure, not the randomness.)
I've pasted a simple example below.

Any help much appreciated!
Cheers,
Andrea

#R version 3.3.3, metafor v. 2.0-0
d = matrix(c(
0,1,1,0,
1,0,0,1,
1,1,0,0,
0,0,1,1,
0,1,0,1,
1,0,1,0,
1,1,1,1,
1,1,1,1),byrow=TRUE, nrow =8, ncol = 4)

userows = c(7,8)
rma.mh (d[userows,1],d[userows,2],d[userows,3],d[userows,4])


userows = c(1,7,8) #works
userows = c(2,7,8) #works
userows = c(3,7,8) #fails
userows = c(4,7,8) #works
userows = c(5,7,8) #works with warning
userows = c(6,7,8) #works with warning


userows = c(3,7,8) #fails

rma.mh (d[userows,1],d[userows,2],d[userows,3],d[userows,4])

#Fixed-Effects Model (k = 3)

#Test for Heterogeneity: 
#Q(df = 2) = 0.0000, p-val = 1.0000

#Model Results (log scale):

#estimate      se    zval    pval    ci.lb   ci.ub
#  0.0000  1.4142  0.0000  1.0000  -2.7718  2.7718

#Model Results (OR scale):

#estimate   ci.lb    ci.ub
#  1.0000  0.0625  15.9875

#Error in if (width < 0L) { : missing value where TRUE/FALSE needed

rma.mh (d[userows,1],d[userows,2],d[userows,3],d[userows,4], drop00=c(TRUE,TRUE))

#Fixed-Effects Model (k = 3)

#Test for Heterogeneity: 
#Q(df = 2) = 0.0000, p-val = 1.0000

#Model Results (log scale):

#estimate      se    zval    pval    ci.lb   ci.ub
#  0.0000  1.4142  0.0000  1.0000  -2.7718  2.7718

#Model Results (OR scale):

#estimate   ci.lb    ci.ub
#  1.0000  0.0625  15.9875

#Error in if (width < 0L) { : missing value where TRUE/FALSE needed 

RE: forest() atransf

I used forest() to draw a forest plot of several proportions. I used atransf = mytransf to show the annotations as percentages; below is mytransf:
mytransf <- function(x, digits = 0, format = "f", ...) {
paste0(formatC(100 * x, format = format, digits = digits, ...), "%")
}
This works fine, but it flips the order of the 95% CI, like below:
15% [26%, 5%], which should be 15% [5%, 26%].
I realized this is because the two numbers are treated as strings, but I don't know how to get them in the order I want.
I also tried the simple atransf = percent from the scales package, which also does not work.
Could you help with this? Many thanks.
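Assuming the flip happens because forest() sorts the back-transformed CI bounds, and that sort becomes alphabetical once atransf returns character strings, one workaround is to keep the transformation numeric and move the percent sign into the axis label instead (a sketch; adjust the transformation to your outcome measure):

```r
# numeric back-transform: forest() can still order the bounds correctly
mytransf <- function(x) 100 * x  # e.g. for measure = "PR"
forest(res, atransf = mytransf, xlab = "Proportion (%)")
```

The annotations then read "15.00 [5.00, 26.00]" rather than carrying a "%" suffix per number, but the ordering is preserved.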

Test assumes R version >= 4.0.0

sav <- structure(list(study = c("1", "1", "1", "1", "1", "1"), var1 = c("acog", "asom", "conf", "acog", "acog", "asom"), var2 = c("perf", "perf", "perf", "asom", "conf", "conf"), var1.var2 = c("acog.perf", "asom.perf", "conf.perf", "acog.asom", "acog.conf", "asom.conf"), yi = c(-0.55, -0.48, 0.66, 0.47, -0.38, -0.46), ni = c(142L, 142L, 142L, 142L, 142L, 142L)), row.names = c(NA, 6L), class = "data.frame")
expect_equivalent(dat[1:6,], sav, tolerance=.tol[["coef"]])

This test fails on R<4.0.0:

Component "study": 'current' is not a factor

Reporting here for you to decide the right fix as there are several options.

rma.mv & rma yield different results (rma works, rma.mv gives an error) for identical formulas *when loading data from a DTA file created by haven*

Update:

The issue was resolved by loading the identical dataset via a CSV file rather than a DTA file.
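As an alternative to re-exporting to CSV, stripping haven's labelled-column attributes before fitting may also work (a sketch, untested against this specific dataset):

```r
library(haven)

# hypothetical path; zap_*() remove Stata label/format attributes so the
# columns become plain numeric/character vectors
dat <- read_dta("data.dta")
dat <- as.data.frame(zap_labels(zap_formats(dat)))
```

If this fixes it, the error likely comes from rma.mv() checks tripping over haven's "labelled" vector classes.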

Classification:

Bug Report

Summary

  • I am working on a large-ish (1200 rows x 70 columns) meta-analysis. I am looking to meta-analyze the data with within-paper effects (we have a variable called unique_paper_id -- it's about 1200 from 303 separate papers).
  • When I run res <- metafor::rma.uni(d, var_d, data = dat); print(res, digits=3), I get
Random-Effects Model (k = 1231; tau^2 estimator: REML)
tau^2 (estimated amount of total heterogeneity): 0.136 (SE = 0.007)

and so on.

  • But when I run res2 <- metafor::rma.mv(d, var_d, data = dat), I get
Error in if (V0 || is.vector(V) || nrow(V) == 1L || ncol(V) == 1L) { : 
  missing value where TRUE/FALSE needed

I'm afraid I am not sure where to begin with this. The documentation (http://www.metafor-project.org/doku.php/tips:rma.uni_vs_rma.mv) says:

one can also use the rma.mv() function to fit the same models as the rma.uni() function, since those models are really just special cases of the more general models handled by the rma.mv() function

and

The same model can be fitted to these data using the rma.mv() function.

I am also not sure what will suffice for a reproducible minimal example, as I am not at liberty to share the data broadly -- though, possibly, I may be able to share a dropbox link with someone via private correspondence -- and I failed to reproduce this with any of the built-in datasets.

So my main question is: is this something you have seen before?

Thanks in advance for your help!

Session Info:

R version 3.6.0 (2019-04-26)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS  10.15.2

Matrix products: default
BLAS:   /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] dplyr_0.8.4   metafor_2.1-0 Matrix_1.2-18 haven_2.2.0  

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.3          compiler_3.6.0      pillar_1.4.3        nloptr_1.2.2       
 [5] forcats_0.4.0       tools_3.6.0         boot_1.3-24         lme4_1.1-21        
 [9] tibble_2.1.3        nlme_3.1-144        lattice_0.20-40     pkgconfig_2.0.3    
[13] rlang_0.4.4         rstudioapi_0.11     yaml_2.2.1          mvtnorm_1.0-12     
[17] vctrs_0.2.3         hms_0.5.3           stats4_3.6.0        grid_3.6.0         
[21] tidyselect_1.0.0    glue_1.3.1          R6_2.4.1            bdsmatrix_1.3-4    
[25] minqa_1.2.4         readr_1.3.1         purrr_0.3.3         magrittr_1.5       
[29] metaplus_0.7-11     MASS_7.3-51.5       splines_3.6.0       assertthat_0.2.1   
[33] fastGHQuad_1.0      bbmle_1.0.23.1      numDeriv_2016.8-1.1 crayon_1.3.4       

Brief update:

I tried this with the development version of metafor (remotes::install_github('wviechtb/metafor') and the error persists with metafor version 2.2-8

reporter scoping issue

Classification:

Bug Report

Summary

I believe there is a scoping issue with the forest and funnel formals of the metafor::reporter.rma.uni(..., forest = , funnel = ) function. Looking at the body of the function, args.forest is never assigned before the control flow unless the argument is missing.

Code (lines 85-89 & lines 93-97):

 if (missing(forest)) {
    args.forest <- ""
  }
  else {
    if (!is.character(args.forest)) ...

  if (missing(funnel)) {
    args.funnel <- ""
  }
  else {
    if (!is.character(args.funnel)) 

Reproducible Example (if applicable)

From the roxygen example

### copy BCG vaccine data into 'dat'
dat <- metafor::dat.bcg

### calculate log risk ratios and corresponding sampling variances
dat <- metafor::escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat,
              slab=paste(author, ", ", year, sep=""))

### random-effects model
res <- metafor::rma(yi, vi, data=dat)


metafor::reporter.rma.uni(res, forest = c(annotate = "FALSE"))

###maybe I'm specifying the argument incorrectly
metafor::reporter.rma.uni(res, forest = "annotate=FALSE")
Error in metafor::reporter.rma.uni(res, forest = "annotate=FALSE") : 
  object 'args.forest' not found

Notes

This is a great feature, Wolfgang! Thanks for all you do.

sessionInfo()

R version 3.6.1 (2019-07-05)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 17763)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252    LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C                           LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

loaded via a namespace (and not attached):
 [1] compiler_3.6.1  Matrix_1.2-17   htmltools_0.4.0 tools_3.6.1     metafor_2.4-0   Rcpp_1.0.4.6    rmarkdown_1.18 
 [8] nlme_3.1-140    grid_3.6.1      knitr_1.26      xfun_0.11       digest_0.6.25   rlang_0.4.5     evaluate_0.14  
[15] lattice_0.20-38

Factor being transformed into numeric - Enhancement Suggestion

I'm using the rma.mv function to fit a model. I have a binary factor variable called climate, with the levels Tropical and Temperate.
When I run the model (with interactions), the summary function applied to the model output object returns something like this (hypothetical output):

Multivariate Meta-Analysis Model (k = 229; method: ML)

Variance Components:

            estim    sqrt  nlvls  fixed    factor 
sigma^2    0.1311  0.3621     45     no  id_group 

Test for Residual Heterogeneity:
QE(df = 223) = 548.9318, p-val < .0001

Test of Moderators (coefficients 1:6):
QM(df = 6) = 64.0154, p-val < .0001

Model Results:

                      estimate      se     zval    pval    ci.lb    ci.ub  
crop1                      -0.0514  0.1173  -0.4378  0.6616  -0.2814   0.1786      
climate1                 -0.2151  0.0781  -2.7553  0.0059  -0.3681  -0.0621   ** 
crop1: climate1      -0.1491  0.0781  -1.9099  0.0561  -0.3021   0.0039    .  

---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

So, my question is: what does climate1 actually refer to? In my case it should be Tropical or Temperate. I've tried changing the variable climate to numeric to see which level is replaced with 1 and which with 2 (via an as.numeric.factor S3 function); however, if I change the order of the labels, it returns the same summary statistics.
I'd like to know how to identify which level climate1 refers to: Tropical or Temperate.

Thank you, best
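As a side note on how R labels factor coefficients: names like climate1 typically come from the contrast coding of the factor, which can be inspected directly. A minimal sketch (with a hypothetical two-level factor, independent of the model above):

```r
# Sketch: how R's contrast coding determines coefficient labels.
climate <- factor(c("Tropical", "Temperate", "Tropical"))
levels(climate)     # levels are sorted alphabetically: "Temperate", "Tropical"
contrasts(climate)  # default treatment coding: column named after the non-reference level

# Labels like 'climate1' usually arise under sum (deviation) coding,
# where column '1' corresponds to the FIRST level of the factor:
contrasts(climate) <- contr.sum(2)
contrasts(climate)  # "Temperate" row is +1, "Tropential" ... "Tropical" row is -1
```

So under sum coding, climate1 refers to the first element of levels(climate); reordering the levels with factor(climate, levels = c("Tropical", "Temperate")) changes which level that is.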

dropped support for `slab = NULL` argument

Classification:

Bug Report / Enhancement Suggestions / Question

Summary

Previous versions of the package allowed setting the slab argument in rma() to NULL.
This is very handy when calling the function from other packages/statistical programs, where the slab argument can be specified conditionally with an if() statement.

My question is: was this a deliberate decision (and why, if so) or is it a bug?

Reproducible Example (if applicable)

Works:

devtools::install_version("metafor", version = "3.0-2")
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
rma(yi, vi, data=dat, method="REML", slab = NULL)

Doesn't work:

devtools::install_version("metafor", version = "3.4-0")
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
rma(yi, vi, data=dat, method="REML", slab = NULL)
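As a workaround until slab = NULL is supported again (a sketch, not an official API), the argument list can be built conditionally so that slab is simply omitted when no labels are available:

```r
# Sketch: add 'slab' to the call only when labels exist, instead of passing NULL.
fit_model <- function(dat, labels = NULL) {
  args <- list(yi = dat$yi, vi = dat$vi, method = "REML")
  if (!is.null(labels)) args$slab <- labels  # omit the argument entirely otherwise
  do.call(metafor::rma, args)
}
```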

sessionInfo()

R version 4.1.2 (2021-11-01)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 22000)

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252  LC_CTYPE=English_United States.1252   
[3] LC_MONETARY=English_United States.1252 LC_NUMERIC=C                          
[5] LC_TIME=English_United States.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] metafor_3.4-0            metadat_1.2-0            Matrix_1.3-4             FrantisekTools_0.0.0.666
[5] vdiffr_1.0.4             devtools_2.4.3           usethis_2.1.5           

loaded via a namespace (and not attached):
 [1] compiler_4.1.2     mathjaxr_1.6-0     prettyunits_1.1.1  remotes_2.4.2      tools_4.1.2       
 [6] testthat_3.1.3     pkgbuild_1.3.1     pkgload_1.2.4      nlme_3.1-153       jsonlite_1.8.0    
[11] memoise_2.0.1      lifecycle_1.0.1    lattice_0.20-45    rlang_1.0.2        cli_3.2.0         
[16] curl_4.3.2         fastmap_1.1.0      withr_2.5.0        httr_1.4.3         desc_1.4.1        
[21] fs_1.5.2           rprojroot_2.0.3    grid_4.1.2         glue_1.6.2         R6_2.5.1          
[26] processx_3.5.3     sessioninfo_1.2.2  callr_3.7.0        purrr_0.3.4        magrittr_2.0.3    
[31] ps_1.6.0           ellipsis_0.3.2     telegram.bot_2.4.0 cachem_1.0.6       crayon_1.5.1      
[36] brio_1.1.3      

Failed to update package V.2.1.0

Hello,
I failed to update to version 2.1.0 via "install_github("wviechtb/metafor")".
It reported:

Installing package into ‘C:/Users/Qingy/Documents/R/win-library/3.5’
(as ‘lib’ is unspecified)

  • installing source package 'metafor' ...
    ** R
    Error : (由警告转换成)无法重新编码'misc.func.hidden.r'行2359
    ERROR: unable to collate and parse R files for package 'metafor'
  • removing 'C:/Users/Qingy/Documents/R/win-library/3.5/metafor'
  • restoring previous 'C:/Users/Qingy/Documents/R/win-library/3.5/metafor'
    In R CMD INSTALL
    Error in i.p(...) :
    (converted from warning) installation of package ‘C:/Users/Qingy/AppData/Local/Temp/RtmpMT0FZ7/file15b41944810/metafor_2.1-0.tar.gz’ had non-zero exit status

sessionInfo():
R version 3.5.2 (2018-12-20)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)

Matrix products: default

locale:
[1] LC_COLLATE=Chinese (Simplified)_China.936
[2] LC_CTYPE=Chinese (Simplified)_China.936
[3] LC_MONETARY=Chinese (Simplified)_China.936
[4] LC_NUMERIC=C
[5] LC_TIME=Chinese (Simplified)_China.936

attached base packages:
[1] grid stats graphics grDevices utils datasets methods base

other attached packages:
[1] bindrcpp_0.2.2 dplyr_0.7.6 forestmodel_0.5.0 ggplot2_3.1.1
[5] usethis_1.4.0 devtools_2.0.1 forestplot_1.7.2 checkmate_1.8.5
[9] magrittr_1.5

loaded via a namespace (and not attached):
[1] Rcpp_0.12.18 bindr_0.1.1 compiler_3.5.2 pillar_1.3.1
[5] plyr_1.8.4 prettyunits_1.0.2 remotes_2.0.2 tools_3.5.2
[9] digest_0.6.16 pkgbuild_1.0.2 pkgload_1.0.2 memoise_1.1.0
[13] tibble_1.4.2 gtable_0.2.0 pkgconfig_2.0.2 rlang_0.3.1
[17] cli_1.0.1 rstudioapi_0.7 curl_3.2 yaml_2.2.0
[21] withr_2.1.2 desc_1.2.0 fs_1.2.6 tidyselect_0.2.4
[25] rprojroot_1.3-2 glue_1.3.0 R6_2.2.2 processx_3.2.1
[29] sessioninfo_1.1.1 purrr_0.2.5 callr_3.1.1 backports_1.1.2
[33] scales_1.0.0 ps_1.3.0 assertthat_0.2.0 colorspace_1.3-2
[37] lazyeval_0.2.1 munsell_0.5.0 crayon_1.3.4

What can I do in this situation?
PS. The Chinese text "无法重新编码" means "unable to re-encode"; "行" means "line".

meta vs metafor. Random effects options reversed?

Hello.

I'm comparing the packages meta and metafor to perform a meta-analysis study.

With meta I write this and it provides both results, for fixed effects and random effects:
metagen(myX,mySD, sm="OR")

With metafor I must use two different calls:
rma(myX,sei=mySD, method="ML", measure="OR") # Random effects.
rma(myX,sei=mySD, method="FE", measure="OR") # Fixed effects.

But strangely, the random-effects option in metafor produces the same result as the fixed-effects option in meta, and vice versa; they appear to be reversed.

What am I doing wrong?
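When checking this, it helps to run both packages on the same toy inputs and compare the labelled estimates directly; note that rma()'s sei argument expects standard errors, not standard deviations. A sketch with hypothetical log odds ratios:

```r
library(metafor)
library(meta)

yi  <- c(0.20, -0.10, 0.35)   # hypothetical log odds ratios
sei <- c(0.10,  0.15, 0.12)   # their standard errors ('sei' expects SEs, not SDs)

res_fe <- rma(yi, sei = sei, method = "FE")    # fixed/common-effect model
res_re <- rma(yi, sei = sei, method = "REML")  # random-effects model
res_mg <- metagen(TE = yi, seTE = sei, sm = "OR")  # reports both models at once

coef(res_fe)  # should match the common-effect estimate in res_mg
coef(res_re)  # should match the random-effects estimate in res_mg
```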

Accept data.frame in predict()

Classification:

Enhancement Suggestion

Summary

I'm trying to fit an rma.mv() model with a fairly complex moderator structure (factor variable with several levels and splines for a continuous variable, interaction effects to model the different outcomes). Given this complexity, I'm having a really hard time trying to construct the model matrix for newmods by hand. It would be much easier and less error-prone if predict.rma() accepted a data frame for the newmods argument the same way most R modeling functions do.

Reproducible Example (if applicable)

library(metafor)
#> Loading required package: Matrix
#> 
#> Loading the 'metafor' package (version 3.1-3). For an
#> introduction to the package please type: help(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
mod <- rma(yi ~ ablat + year, vi, data=dat)
predict(mod, newmods = dat[1:4,])
#> Error in predict.rma(mod, newmods = dat[1:4, ]): Argument 'newmods' should be a vector or matrix, but is of class 'escalc'.Argument 'newmods' should be a vector or matrix, but is of class 'data.frame'.

cf. predict.lm():

predict(lm(mpg ~ disp, data = mtcars), newdata = mtcars[1:5,])
#>         Mazda RX4     Mazda RX4 Wag        Datsun 710    Hornet 4 Drive 
#>          23.00544          23.00544          25.14862          18.96635 
#> Hornet Sportabout 
#>          14.76241
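Until predict.rma() accepts data frames, one common workaround (a sketch based on the example above) is to build the newmods matrix with model.matrix() from the same right-hand-side formula and drop the intercept column:

```r
library(metafor)
dat <- escalc(measure="RR", ai=tpos, bi=tneg, ci=cpos, di=cneg, data=dat.bcg)
mod <- rma(yi ~ ablat + year, vi, data=dat)

# Recreate the design matrix for the new rows, then drop the intercept
# column, since 'newmods' expects only the moderator columns:
X <- model.matrix(~ ablat + year, data = dat[1:4, ])
predict(mod, newmods = X[, -1, drop = FALSE])
```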

Non-sensible error message

The following R code halts and outputs an error message on line 3:

library("metafor")
rma.mv(c(1,2,3), c(0.1,0.2,0.3)) # Works fine.
rma.mv(c(1,2,3), c(0.1,0.2,0.3), sparse=TRUE) # Breaks down without any sensible error message.

This is the full error message:

Error in `[<-`(`*tmp*`, upper.tri(Vlt), value = 0) : 
error in evaluating the argument 'i' in selecting a method for function '[<-': Error in as.vector(data) : 
no method for coercing this S4 class to a vector
Calls: upper.tri ... as.matrix -> as.matrix.default -> array -> as.vector
Calls: rma.mv -> [<-
Execution halted

I don't know if I'm doing something wrong here (I encountered this problem when trying to use the sparse option on a much more complicated and larger data set). However, I believe metafor should catch this error and output something more sensible for the user (for example, I never called as.vector(data) myself).
