guido-s / meta
Official Git repository of R package meta
Home Page: http://cran.r-project.org/web/packages/meta/index.html
License: GNU General Public License v2.0
I think I’m facing a bug when using forest(…, bysort=TRUE). The subgroup p-values don’t seem to match up anymore…
Version: 'meta' package (version 4.15-1).
bysort=FALSE
bysort=TRUE
With bysort=TRUE, the z = 14.05 statistic is now attached to a different subgroup.
options(repr.plot.width=20, repr.plot.height=60, repr.plot.res = 200)
m.bin <- metacont(InterventionN,
IntensityofPainInterventionMean,
IntensityofPainInterventionSD,
ControlN,
IntensityofPainControlMean,
IntensityofPainControlSD,
data = data,
studlab = paste(str_pad(Study, max(sapply(Study, str_length)), 'right'), # Pad it
str_pad(InterventionContinuousvsSingleshotvsLiposomalBupivacaine, max(sapply(InterventionContinuousvsSingleshotvsLiposomalBupivacaine, str_length)), 'right'),
ComparatorControl,
sep=" | "),
comb.fixed = TRUE,
comb.random = FALSE,
byvar=paste(TypeofSurgery, outcome, ComparatorControlClean, sep=" | "),
hakn = TRUE,
prediction = TRUE)
pdf(file = 'Pain Comparison sort.pdf', width=18, height=40)
forest(m.bin,
sortvar=-Year, #paste(ComparatorControl, InterventionContinuousvsSingleshotvsLiposomalBupivacaine, Study),
bysort=TRUE,
fontfamily="mono",
lab.e = "Intervention",
pooled.totals = TRUE,
pooled.events = TRUE,
overall = TRUE,
bylab = "", # Label before the subgroup
print.tau2 = FALSE,
col.diamond = "blue",
col.diamond.lines = "black",
col.predict = "black",
print.I2.ci = TRUE,
digits.sd = 2,
test.overall = TRUE,
test.subgroup = TRUE,
print.stat = TRUE,
test.effect.subgroup = TRUE
)
dev.off()
Dear Dr. Guido Schwarzer,
I am a researcher at Jinan University (Guangzhou, China).
Recently, I did a meta-analysis using the meta package. I found it very powerful and convenient, except when choosing which model (random, fixed, or both) to use.
Each time I choose the model, I have to set both "comb.fixed = TRUE, comb.random = FALSE".
If there were a single model parameter to set instead, it would make a big difference.
Thanks for your time. Have a nice day.
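For what it is worth, a small wrapper along these lines (purely hypothetical, not part of the package) would already reduce the typing; it assumes update() works on the fitted meta object, which it does for objects created by metacont() and friends:

```r
# Hypothetical helper (not part of 'meta'): pick the model with one
# argument instead of setting comb.fixed / comb.random separately.
set_model <- function(m, model = c("fixed", "random", "both")) {
  model <- match.arg(model)
  update(m,
         comb.fixed  = model %in% c("fixed", "both"),
         comb.random = model %in% c("random", "both"))
}
```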
I have noticed that metabind is checking whether the meta-analysis objects have the same arguments. Possibly here:
Lines 495 to 501 in 492c5ad
However, the problem is that arguments like warn, which do not influence the calculations, are still considered in this check.
warn A logical indicating whether warnings should be printed (e.g., if studies are excluded from meta-analysis due to zero standard deviations).
reprex:
library(meta)
#> Loading 'meta' package (version 4.11-1).
#> Type 'help(meta)' for a brief overview.
data(Fleiss93cont)
Fleiss93cont$age <- c(55, 65, 55, 65, 55)
Fleiss93cont$region <- c("Europe", "Europe", "Asia", "Asia", "Europe")
m1 <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c, data = Fleiss93cont, sm = "MD")
mu1 <- update(m1, byvar = age, bylab = "Age group", warn = TRUE)
mu2 <- update(m1, byvar = region, bylab = "Region", warn = FALSE)
metabind(mu1, mu2)
#> Error in metabind(mu1, mu2): All meta-analyses must use the same basic settings which differ for the following argument: 'warn'
Created on 2020-02-20 by the reprex package (v0.3.0)
Any thoughts on this?
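Until the check is relaxed, one workaround is to make the non-computational arguments identical before combining, e.g.:

```r
# Workaround sketch: align 'warn' across the objects before metabind()
mu2b <- update(mu2, warn = TRUE)
metabind(mu1, mu2b)
```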
Hi Guido, hi everyone,
thank you for your help and the solution regarding the last issue (#48), and I am really sorry to bother you once again.
Given the previously explored example
# libraries
library(meta)
library(data.table)
# data
dt <- data.table(
ID = rep(1:5, each=3),
AE = rep(c("minor","major","deadly"),5),
ni = rep(c(25,19,101,32,50), each=3),
xi = c(2,3,3,4,1,0,29,13,4,7,4,1,7,2,1) )
# meta-analysis & forest plot
res <- metaprop(xi, ni, cluster=ID, subgroup=AE, data=dt)
forest(res, subgroup=T)
metaprop() works fine, and the effect estimates are also reported.
However, forest() does not plot the summary effect per subgroup, regardless of how the arguments type.random, type.subgroup, or type.subgroup.random are defined.
Any advice? Is this a wrong configuration on my side, or a structural problem of forest()?
Sorry once again, and all the best,
Felix
Hi Guido,
I am new to meta-analysis and your package has been of great help!
Pardon me for creating this issue about the code as well as the understanding of the DerSimonian-Laird method for the random effects model in the analysis of binary data.
Consider the following illustrative code:
data(Olkin95)
meta1 <- metabin(event.e, n.e, event.c, n.c, data = Olkin95, method = "Inverse")
meta2 <- metabin(event.e, n.e, event.c, n.c, data = Olkin95, method = "MH")
meta1$Q; meta2$Q
meta1$tau; meta2$tau
The Q statistics of heterogeneity are different because the methods for the fixed effect model are different. The DerSimonian-Laird estimates of tau-squared are also different.
I imagine this is because the respective Q statistic from the fixed effect method is used directly in the tau-squared estimation. However, in the original paper by DerSimonian and Laird (1986), they suggested using Q from the inverse-variance method. In two other references (Normand 1999; Borenstein, Hedges, Higgins, and Rothstein 2009), the Q from the inverse-variance method was the only option.
Could you provide some clarification about your choice (possibly with some references)? My apologies for any misunderstanding. Thank you for your consideration!
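To see which Q feeds the estimate, here is a minimal sketch of the DerSimonian-Laird computation from inverse-variance weights (the $TE and $seTE components are standard in 'meta' objects); comparing tau2.dl against meta1$tau^2 and meta2$tau^2 makes the choice explicit:

```r
library(meta)
data(Olkin95)
m <- metabin(event.e, n.e, event.c, n.c, data = Olkin95, method = "Inverse")

# DerSimonian-Laird: tau^2 = max(0, (Q - (k - 1)) / (S1 - S2 / S1))
w  <- 1 / m$seTE^2                      # inverse-variance weights
ok <- !is.na(m$TE) & !is.na(w)          # keep studies that contribute
w  <- w[ok]; te <- m$TE[ok]
Q  <- sum(w * (te - sum(w * te) / sum(w))^2)   # inverse-variance Q
k  <- length(te)
tau2.dl <- max(0, (Q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))
```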
References:
DerSimonian R, Laird N (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7(3), 177-188.
Normand SL (1999). Meta-analysis: formulating, evaluating, combining, and reporting. Statistics in Medicine, 18(3), 321-359.
Borenstein M, Hedges LV, Higgins JPT, Rothstein HR (2009). Introduction to Meta-Analysis. Wiley.
Hi there,
Thank you for the meta
package, it is very useful and the customisation of plots is fantastic.
These two points are barely 'issues' and really just very small customisations to the forest.meta plot, if you could help.
I wanted to remove the dotted reference line for the effect estimate in the plot - is there any way to do this? I was digging through the source code but couldn't find an immediate easy option, unless I missed something? The ref parameter is only for the solid reference line, right?
Is there any way to add a solid line across the top of the plot (i.e. extending from the left column to the right column at the top of the y axis (ymax?) but under the column headings)? Again, I was playing around with modifying the source code but couldn't get a line to sit permanently in position without it moving based on plot size. Out of curiosity, how do you get the plots to sit in a "fixed" position regardless of the window/plot sizing?
As I mentioned these are really not 'problems', more aesthetic modifications, so if no easy solution of course that's fine :)
Hi there,
I wonder if you could help me understand some potential discrepancies in the output of the metacor function. I've attached a small dataset here in the hope you can replicate my results. Here is the basic code I'm running, using the tidyverse and meta packages on the attached .csv file: test.zip.
test_data <-
read.csv("test.csv")
test_metacor <-
test_data %>%
group_by(group) %>%
summarize(cor = cor(v1, v2, use = "pairwise.complete.obs"),
n = length(v1)) %>%
metacor(cor,
n,
studlab = group,
data=.)
test_metacor
test_metacor$TE.fixed
I'm attempting to understand why I'm seeing different coefficients when accessing the results by calling test_metacor as opposed to test_metacor$TE.fixed.
For example, when I call test_metacor I get:
COR 95%-CI z p-value
Fixed effect model 0.5691 [0.5108; 0.6221] 15.39 < 0.0001
Random effects model 0.5691 [0.3929; 0.7050] 5.48 < 0.0001
However, when I call test_metacor$TE.fixed I get:
0.6461636
I was under the impression, from using the other functions in the meta package (e.g., metamean), that $TE.fixed accessed the fixed effect estimate, but maybe I'm wrong? Based on the results I see from accessing test_metacor, I would have thought test_metacor$TE.fixed should give me 0.5691.
Any help would be appreciated.
Cheers,
Stefan
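If it helps while waiting for an answer: metacor() pools Fisher z transformed correlations by default (sm = "ZCOR"), so $TE.fixed is on the z scale, and the printed COR column is the back-transformed value. A quick check, assuming that is indeed the cause:

```r
# Fisher z -> correlation back-transformation
tanh(0.6461636)
#> [1] 0.5691 (approximately; matches the printed fixed effect COR)
```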
Hi:
I had a question about the appearance of the 95% CI, but cilayout("(", ", ") resolved it for me.
Thank you for the nice package.
Hi,
I am using your metacont function and would like to extract the individual ROM estimates from my metacont object. However, when I print the object (see P1 below), I get numbers around ~1. When I print the object's $TE component (see P2), I get something different from what is printed in P1. Maybe I'm misunderstanding where the individual ROM estimates are stored in my metacont object. Any help would be greatly appreciated. Thanks.
My metacont object is stored as mod_non_adj; the estimates below come from mod_non_adj$TE.
P1:
mod_non_adj
Review: Bacterial Shannon Diversity
ROM 95%-CI %W(random)
14 1.2667 [1.1107; 1.4446] 1.7
74 1.0079 [0.9372; 1.0839] 2.5
93 1.0183 [1.0005; 1.0365] 3.2
121 0.7596 [0.7194; 0.8020] 2.8
227 0.9849 [0.9789; 0.9910] 3.3
227 0.9868 [0.9823; 0.9913] 3.3
238 1.0038 [0.9854; 1.0225] 3.2
238 1.0010 [0.9860; 1.0162] 3.2
238 1.0038 [0.9861; 1.0218] 3.2
274 1.1053 [1.0010; 1.2204] 2.1
327 0.9044 [0.8362; 0.9782] 2.4
345 0.9545 [0.9111; 1.0001] 2.9
358 0.9695 [0.8697; 1.0806] 2.0
376 1.0413 [0.9805; 1.1059] 2.7
376 1.0525 [0.9724; 1.1391] 2.4
376 1.0442 [0.9639; 1.1313] 2.4
410 1.0055 [0.9640; 1.0488] 3.0
410 0.9889 [0.9154; 1.0683] 2.4
505 1.0310 [0.9249; 1.1492] 2.0
505 1.0321 [0.9259; 1.1505] 2.0
505 1.0159 [0.9114; 1.1324] 2.0
507 0.9844 [0.9394; 1.0315] 2.9
507 1.0459 [1.0233; 1.0690] 3.2
507 1.1383 [1.0624; 1.2195] 2.6
507 1.0373 [0.9853; 1.0920] 2.8
507 0.9949 [0.9595; 1.0316] 3.0
549 0.9751 [0.8515; 1.1166] 1.6
555 0.9819 [0.8874; 1.0864] 2.1
563 0.8941 [0.7006; 1.1411] 0.8
570 0.9654 [0.9350; 0.9967] 3.1
674 1.0667 [0.9449; 1.2042] 1.8
674 0.9706 [0.7867; 1.1975] 0.9
677 1.3432 [0.9810; 1.8391] 0.5
685 1.0293 [0.9782; 1.0831] 2.8
709 0.8459 [0.8083; 0.8852] 2.9
709 0.9325 [0.9129; 0.9525] 3.2
713 1.0250 [1.0008; 1.0498] 3.2
713 0.9836 [0.9631; 1.0046] 3.2
715 1.0586 [0.9201; 1.2178] 1.6
275 1.0417 [1.0112; 1.0732] 3.1
347 0.3772 [0.1877; 0.7582] 0.1
347 0.2721 [0.1206; 0.6139] 0.1
347 1.1150 [0.3370; 3.6886] 0.0
Number of studies combined: k = 43
ROM 95%-CI z p-value
Random effects model 0.9976 [0.9738; 1.0220] -0.20 0.8454
Prediction interval [0.8671; 1.1477]
Quantifying heterogeneity:
tau^2 = 0.0047; H = 2.74 [2.42; 3.10]; I^2 = 86.7% [82.9%; 89.6%]
Test of heterogeneity:
Q d.f. p-value
315.07 42 < 0.0001
P2:
mod_non_adj$TE
[1] 0.2363887781 0.0078554999 0.0181823191 -0.2749871530 -0.0151911392
[6] -0.0132809189 0.0038167985 0.0009555662 0.0038167985 0.1000834586
[11] -0.1004705304 -0.0465200156 -0.0310102367 0.0404679496 0.0511475112
[16] 0.0432968058 0.0055096558 -0.0111421766 0.0305155439 0.0315966926
[21] 0.0157629597 -0.0157483570 0.0448505662 0.1295153230 0.0366213547
[26] -0.0051590828 -0.0252379326 -0.0183026368 -0.1119179162 -0.0352261846
[31] 0.0645385211 -0.0298529631 0.2950548268 0.0288993496 -0.1673400032
[36] -0.0699119892 0.0246926126 -0.0165293020 0.0569217274 0.0408686316
[41] -0.9749742337 -1.3016061002 0.1088323868
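A likely explanation, assuming sm = "ROM" was used as the printout suggests: $TE stores the individual estimates on the log scale, and print() back-transforms them. Exponentiating should reconcile P1 and P2:

```r
# log ratio of means -> ratio of means
exp(mod_non_adj$TE[1:4])
#> approximately 1.2667 1.0079 1.0183 0.7596 (matches the ROM column in P1)
```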
Using the forest function to plot results from metacont where a study has only central tendency but not spread (SD) available causes the plot to display the study with missing spread, but excludes the last study from the plot.
This is apparently caused by metacont not accounting for studies with missing spread in k.
MD 95%-CI %W(fixed) %W(random)
haider 1984 NA 0.0 0.0
lolley 1985 NA 0.0 0.0
andel 1990 NA 0.0 0.0
wistbacka 1992 0.0000 [ -28.1692; 28.1692] 0.5 3.6
boldt 1993 NA 0.0 0.0
boldt 1993 NA 0.0 0.0
brodin 1993 NA 0.0 0.0
kjellman 2000 NA 0.0 0.0
lindholm 2001 NA 0.0 0.0
szabo 2001 NA 0.0 0.0
bruemmer 2002 NA 0.0 0.0
lell 2002 6.4000 [ -5.1944; 17.9944] 3.1 11.6
wallin 2003 NA 0.0 0.0
visser 2005 NA 0.0 0.0
quinn 2006 NA 0.0 0.0
shim 2006 1.6000 [ -2.0946; 5.2946] 31.0 19.4
barcellos 2007 NA 0.0 0.0
zuurbier 2008 1.6000 0.0 0.0
shim 2013 -4.4000 [ -7.4852; -1.3148] 44.4 19.8
laiq 2015 NA 0.0 0.0
ahmad 2017 -15.1000 [ -20.5008; -9.6992] 14.5 17.9
oldfield 1986 NA 0.0 0.0
girard 1992 1.0000 [-141.6784; 143.6784] 0.0 0.2
lazar 1997 NA 0.0 0.0
tunerir 1998 NA 0.0 0.0
besogul 1999 NA 0.0 0.0
lazar 2000 NA 0.0 0.0
lazar 2004 NA 0.0 0.0
celkan 2006 -23.3500 [ -45.4665; -1.2335] 0.9 5.3
koskenkari 2006 NA 0.0 0.0
ranasinghe 2006 NA 0.0 0.0
ranasinghe 2006 NA 0.0 0.0
smith 2008 NA 0.0 0.0
jovic 2009 NA 0.0 0.0
jovic 2009 NA 0.0 0.0
howell 2011 NA 0.0 0.0
sato 2011 NA 0.0 0.0
foroughi 2012 -55.0000 [-110.0985; 0.0985] 0.1 1.1
rujirojindakul 2014 NA 0.0 0.0
duncan 2015 NA 0.0 0.0
roh 2015 -11.6000 [ -27.0141; 3.8141] 1.8 8.7
duncan 2018 NA 0.0 0.0
ellenberger 2018 0.0000 [ -10.7662; 10.7662] 3.6 12.4
tsang 2007 NA 0.0 0.0
zhao 2020 NA 0.0 0.0
straus 2013 NA 0.0 0.0
seied 2010 NA 0.0 0.0
wistbacka 1994 NA 0.0 0.0
turkoz 2000 NA 0.0 0.0
Number of studies combined: k = 10
MD 95%-CI z p-value
Fixed effect model -3.9303 [ -5.9863; -1.8743] -3.75 0.0002
Random effects model -5.3660 [-11.2840; 0.5521] -1.78 0.0755
Quantifying heterogeneity:
tau^2 = 43.4641 [0.0000; >363.7533]; tau = 6.5927 [0.0000; >19.0723];
I^2 = 75.0% [53.4%; 86.6%]; H = 2.00 [1.46; 2.73]
Test of heterogeneity:
Q d.f. p-value
35.98 9 < 0.0001
Details on meta-analytical method:
- Inverse variance method
- DerSimonian-Laird estimator for tau^2
- Jackson method for confidence interval of tau^2 and tau
MD 95%-CI %W(fixed) %W(random)
haider 1984 NA 0.0 0.0
lolley 1985 NA 0.0 0.0
andel 1990 NA 0.0 0.0
wistbacka 1992 0.0000 [ -28.1692; 28.1692] 0.5 3.6
boldt 1993 NA 0.0 0.0
boldt 1993 NA 0.0 0.0
brodin 1993 NA 0.0 0.0
kjellman 2000 NA 0.0 0.0
lindholm 2001 NA 0.0 0.0
szabo 2001 NA 0.0 0.0
bruemmer 2002 NA 0.0 0.0
lell 2002 6.4000 [ -5.1944; 17.9944] 3.1 11.6
wallin 2003 NA 0.0 0.0
visser 2005 NA 0.0 0.0
quinn 2006 NA 0.0 0.0
shim 2006 1.6000 [ -2.0946; 5.2946] 31.0 19.4
barcellos 2007 NA 0.0 0.0
zuurbier 2008 NA 0.0 0.0
shim 2013 -4.4000 [ -7.4852; -1.3148] 44.4 19.8
laiq 2015 NA 0.0 0.0
ahmad 2017 -15.1000 [ -20.5008; -9.6992] 14.5 17.9
oldfield 1986 NA 0.0 0.0
girard 1992 1.0000 [-141.6784; 143.6784] 0.0 0.2
lazar 1997 NA 0.0 0.0
tunerir 1998 NA 0.0 0.0
besogul 1999 NA 0.0 0.0
lazar 2000 NA 0.0 0.0
lazar 2004 NA 0.0 0.0
celkan 2006 -23.3500 [ -45.4665; -1.2335] 0.9 5.3
koskenkari 2006 NA 0.0 0.0
ranasinghe 2006 NA 0.0 0.0
ranasinghe 2006 NA 0.0 0.0
smith 2008 NA 0.0 0.0
jovic 2009 NA 0.0 0.0
jovic 2009 NA 0.0 0.0
howell 2011 NA 0.0 0.0
sato 2011 NA 0.0 0.0
foroughi 2012 -55.0000 [-110.0985; 0.0985] 0.1 1.1
rujirojindakul 2014 NA 0.0 0.0
duncan 2015 NA 0.0 0.0
roh 2015 -11.6000 [ -27.0141; 3.8141] 1.8 8.7
duncan 2018 NA 0.0 0.0
ellenberger 2018 0.0000 [ -10.7662; 10.7662] 3.6 12.4
tsang 2007 NA 0.0 0.0
zhao 2020 NA 0.0 0.0
straus 2013 NA 0.0 0.0
seied 2010 NA 0.0 0.0
wistbacka 1994 NA 0.0 0.0
turkoz 2000 NA 0.0 0.0
Number of studies combined: k = 10
MD 95%-CI z p-value
Fixed effect model -3.9303 [ -5.9863; -1.8743] -3.75 0.0002
Random effects model -5.3660 [-11.2840; 0.5521] -1.78 0.0755
Quantifying heterogeneity:
tau^2 = 43.4641 [0.0000; >363.7533]; tau = 6.5927 [0.0000; >19.0723];
I^2 = 75.0% [53.4%; 86.6%]; H = 2.00 [1.46; 2.73]
Test of heterogeneity:
Q d.f. p-value
35.98 9 < 0.0001
Details on meta-analytical method:
- Inverse variance method
- DerSimonian-Laird estimator for tau^2
- Jackson method for confidence interval of tau^2 and tau
In both cases, metacont results report k = 10.
You'll notice that removing the central tendency data for zuurbier 2008 correctly removed zuurbier 2008 from the plot, but also allowed ellenberger 2018 to appear in the plot, while it was excluded before.
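Until the k bookkeeping is fixed, a possible workaround (a sketch, assuming the fitted object is called m and the SD columns are sd.e/sd.c) is to restrict the analysis to studies with complete spread information before plotting:

```r
# Re-run on the subset with usable SDs so forest() and k agree
m.complete <- update(m, subset = !is.na(sd.e) & !is.na(sd.c))
forest(m.complete)
```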
I just updated meta to v6.5; I am running RStudio 2023.06.0+421 (latest version).
I noticed something on one of my macOS systems (latest meta + R versions).
When I launched a metabin call:
res <- metabin(…, method = "MH", …)
all I got back was an analysis using the inverse-variance method. There was no way to fix it (disabling hakn and MH.exact changed nothing).
So I launched the same command on another macOS machine, and the MH method worked! That machine did not (yet) have version 6.5 of the meta package.
So obviously I updated it to meta 6.5, and that was it: no more MH method…
I guess the problem comes from meta 6.5-0.
Thank you for your help!
A function which can extract the table with effect-size estimates from fitted meta-analysis objects would be useful, e.g.:
m1 <- metaprop(4:1, 10 * 1:4)
summary(m1)
proportion 95%-CI
1 0.4000 [0.1216; 0.7376]
2 0.1500 [0.0321; 0.3789]
3 0.0667 [0.0082; 0.2207]
4 0.0250 [0.0006; 0.1316]
...
# output truncated
Or is there a convenient way to extract the table with proportions and CI from the object?
Regards, Sven
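In case it is useful: the pieces of the printed table are stored in the fitted object, so a data frame can be assembled by hand (a sketch; it assumes $lower/$upper hold the study-level limits shown in the printout):

```r
library(meta)
m1 <- metaprop(4:1, 10 * 1:4)
tab <- data.frame(proportion = m1$event / m1$n,  # raw proportions
                  lower = m1$lower,              # study-level CI limits
                  upper = m1$upper)
tab
```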
Hi,
I get the following error when calling metagen and passing in both an id (for three-level meta-analysis) and byvar (for subgroup analysis):
Error in add.w[j, ] <- c(meta1$sign.lower.tau, meta1$sign.upper.tau) :
number of items to replace is not a multiple of replacement length
Any idea what might cause this, and whether it's likely a bug or a problem with how we are trying to use the function?
Here's a minimal reproducible example (the error message is the same as we get with different data and some other differing arguments):
data(Pagliaro1992)
# Add fictitious grouping variables
Pagliaro1992$region <- rep(c("Europe", "Europe", "Asia", "Asia"), times = 7)
Pagliaro1992$id <- rep(c(1, 2, 3, 4), times = 7)
m <- metagen(logOR, selogOR,
byvar = region,
id = id,
data = Pagliaro1992,
sm = "OR")
Many thanks,
Erik
For each of my meta-analyses, I use something like the following code snippet to create a PDF file with a single forest plot.
pdf(file=output_file_path, width=8, height=3)
meta::forest(...)
dev.off()
Since I perform many different meta-analyses with a varying number of studies, the amount of (white) space above/below the plot is different for each analysis.
I would like to automatically adjust the value of height for each analysis. As far as I can see, it is difficult to determine the height of the meta::forest(...) plot, since the function does not return an object. Am I missing something here?
I found the following information in the documentation:
The forest function is based on the grid graphics system. In order to print the forest plot, (i) resize the graphics window, (ii) either use dev.copy2eps or dev.copy2pdf.
However, I'm afraid that this is a bit unclear to me.
Any help would be highly appreciated!
Bests,
Sebastian
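One pragmatic approach, since forest() does not report its own height, is to scale the device height with the number of studies; the per-row height and fixed overhead below are guesses to be tuned, not values from the package:

```r
# Sketch: grow the PDF with the number of rows in the plot
k <- length(m$TE)                     # number of studies in the meta object
pdf(file = output_file_path, width = 8, height = 2 + 0.25 * k)
meta::forest(m)
dev.off()
```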
I've (already) conducted a meta-analysis with the metafor package.
Now I've discovered the meta package and realized that the meta::forest() function creates way better forest plots than the corresponding function in the metafor package.
I've read that the meta package uses the metafor package internally. So I'm wondering if there is a convenient way to cast metafor::rma objects to objects of type meta::meta (which are required as input of meta::forest())?
Best regards,
Sebastian
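There is no direct cast as far as I know, but since both packages work from effect sizes and standard errors, the yi/vi columns from metafor::escalc() can be fed into meta::metagen(); a sketch with the BCG data bundled with metafor:

```r
library(metafor)
library(meta)

# Effect sizes via metafor, plotting via meta
dat <- escalc(measure = "RR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)
m <- metagen(TE = dat$yi, seTE = sqrt(dat$vi), sm = "RR",
             studlab = paste(dat$author, dat$year))
forest(m)
```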
Is it possible to make the text added using text.addline1 bold?
https://stackoverflow.com/questions/66273969/how-to-make-extra-line-of-text-using-text-addline1-bold-using-the-meta-package-i
If I have a grouping column that is a factor(), the print method prints the group levels instead of the group labels (see the group column at the very beginning of the output below). Is there an option to change it?
reprex:
library(meta)
#> Loading 'meta' package (version 4.9-7).
#> Type 'help(meta)' for a brief overview.
data(Fleiss93cont)
# Generate additional variable with grouping information
Fleiss93cont$group <- factor(c(1, 2, 1, 1, 2), labels = c("Treatment A", "Treatment B"))
m2 <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c, study,
data = Fleiss93cont, sm = "SMD", byvar = group)
print(m2)
#> SMD 95%-CI %W(fixed) %W(random) group
#> Davis -0.3399 [-1.1152; 0.4354] 11.5 11.5 1
#> Florell -0.5659 [-1.0274; -0.1044] 32.6 32.6 2
#> Gruen -0.2999 [-0.7712; 0.1714] 31.2 31.2 1
#> Hart 0.1250 [-0.4954; 0.7455] 18.0 18.0 1
#> Wilson -0.7346 [-1.7575; 0.2883] 6.6 6.6 2
#>
#> Number of studies combined: k = 5
#>
#> SMD 95%-CI z p-value
#> Fixed effect model -0.3434 [-0.6068; -0.0800] -2.56 0.0106
#> Random effects model -0.3434 [-0.6068; -0.0800] -2.56 0.0106
#>
#> Quantifying heterogeneity:
#> tau^2 = 0; H = 1.00 [1.00; 2.10]; I^2 = 0.0% [0.0%; 77.4%]
#>
#> Quantifying residual heterogeneity:
#> H = 1.00 [1.00; 1.76]; I^2 = 0.0% [0.0%; 67.8%]
#>
#> Test of heterogeneity:
#> Q d.f. p-value
#> 3.68 4 0.4515
#>
#> Results for subgroups (fixed effect model):
#> k SMD 95%-CI Q tau^2 I^2
#> group = Treatment A 3 -0.1815 [-0.5194; 0.1563] 1.34 0 0.0%
#> group = Treatment B 2 -0.5944 [-1.0151; -0.1738] 0.09 0 0.0%
#>
#> Test for subgroup differences (fixed effect model):
#> Q d.f. p-value
#> Between groups 2.25 1 0.1336
#> Within groups 1.43 3 0.6992
#>
#> Results for subgroups (random effects model):
#> k SMD 95%-CI Q tau^2 I^2
#> group = Treatment A 3 -0.1815 [-0.5194; 0.1563] 1.34 0 0.0%
#> group = Treatment B 2 -0.5944 [-1.0151; -0.1738] 0.09 0 0.0%
#>
#> Test for subgroup differences (random effects model):
#> Q d.f. p-value
#> Between groups 2.25 1 0.1336
#>
#> Details on meta-analytical method:
#> - Inverse variance method
#> - DerSimonian-Laird estimator for tau^2
#> - Hedges' g (bias corrected standardised mean difference)
Created on 2019-10-01 by the reprex package (v0.3.0)
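A possible workaround (a sketch, not a documented option) is to pass the labels rather than the factor itself:

```r
# Convert the factor to character so the labels, not the codes, are printed
m2b <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c, study,
                data = Fleiss93cont, sm = "SMD",
                byvar = as.character(group))
print(m2b)
```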
Dear Prof. Schwarzer,
I am currently having problems when trying to produce a bubble plot from a meta regression.
I get the following error:
Error in stripchart.default(x1, ...) : invalid plotting method
I have just invited you to join my private repo in case you would like to reproduce the error.
Thank you,
Felipe
sessionInfo()
R version 3.4.4 (2018-03-15)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows >= 8 x64 (build 9200)
Matrix products: default
locale:
[1] LC_COLLATE=English_United States.1252 LC_CTYPE=English_United States.1252 LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C LC_TIME=English_United States.1252
attached base packages:
[1] stats graphics grDevices utils datasets methods base
other attached packages:
[1] meta_4.9-3
loaded via a namespace (and not attached):
[1] compiler_3.4.4 tools_3.4.4 yaml_2.2.0 grid_3.4.4
Hi Guido,
I have started using your meta package and it is a very useful piece of software, so thanks a lot for publishing and maintaining it!
However, I have come across a strange bug which I think is due to some numeric issues.
I have the following data:
mean.e | sd.e | mean.c | sd.c | n.e | n.c |
---|---|---|---|---|---|
-2.08 | 0.53 | -2.15 | 0.32 | 696 | 558 |
-2.60 | 0.55 | -0.24 | 0.26 | 1028 | 808 |
This is part of a larger data frame, but that is irrelevant here. I noticed that these two studies get assigned a value for SMD if and only if I use Cohen's d, but even then they don't get any weight or CI. I played with the table to see what might be causing this problem, and it seems to be the large sample sizes of the studies. Below are three analyses: once with the full data, once with n = 200 for both studies and groups, and once with n = 100 for both studies and groups.
> results <- metacont(n.e=n.e,
+ mean.e=mean.e,
+ sd.e=sd.e,
+ n.c=n.c,
+ mean.c=mean.c,
+ sd.c=sd.c,
+ random = T, studlab = 1:2,
+ data = data3, sm ="SMD",
+ method.smd="Cohen")
> forest(results, leftcols = c('studlab'))
> results <- metacont(n.e=c(200,200),
+ mean.e=mean.e,
+ sd.e=sd.e,
+ n.c=c(200,200),
+ mean.c=mean.c,
+ sd.c=sd.c,
+ random = T, studlab = 1:2,
+ data = data3, sm ="SMD",
+ method.smd="Cohen")
> forest(results, leftcols = c('studlab'))
> results <- metacont(n.e=c(100,100),
+ mean.e=mean.e,
+ sd.e=sd.e,
+ n.c=c(100,100),
+ mean.c=mean.c,
+ sd.c=sd.c,
+ random = T, studlab = 1:2,
+ data = data3, sm ="SMD",
+ method.smd="Cohen")
So the problem occurs somewhere between an n of 100 and 200 per group. I assume this is a rounding error, i.e. that something in the calculation of the CI becomes so small that it is rounded to 0, and subsequently the study is assigned no weight.
Is there any way I can fix/prevent this?
Cheers,
Florin
Hi,
I have been looking through the API and internet for some hours now but I feel like I cannot find any satisfying answers. When we construct forest plots for a continuous measure (e.g. group mean) we use the metacont() -> forest() workflow. This will result (with correct alterations) in three columns showing the group size ('Total'), mean and SD for the control as well as experimental groups.
I was wondering: is it possible to put the mean and SD into one column representing it as; mean (± SD) such as 28.0 (± 1.2)?
Thank you for the help.
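One approach I have seen work is to add the combined string to the fitted object yourself and reference it in leftcols, since forest.meta() picks up extra components by name. A sketch (the component names on the right-hand side assume a metacont fit called m):

```r
# Build "mean (± SD)" strings and show them as custom left-hand columns
m$meansd.e <- sprintf("%.1f (± %.1f)", m$mean.e, m$sd.e)
m$meansd.c <- sprintf("%.1f (± %.1f)", m$mean.c, m$sd.c)
forest(m,
       leftcols = c("studlab", "n.e", "meansd.e", "n.c", "meansd.c"),
       leftlabs = c("Study", "Total", "Mean (± SD)",
                    "Total", "Mean (± SD)"))
```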
Dear developer,
Recently, I have been developing an online tool and read a paper by Bruno R. da Costa et al. It states that "Cox and Snell's method is computationally similar to Hasselblad and Hedges' method, but uses a different multiplication factor. We multiplied SMDs and their standard errors by 1.65 to calculate log odds ratios and the corresponding standard errors."
However, in the smd2or and or2smd functions:
## smd2or
if (method == "HH") {
lnOR <- smd * pi / sqrt(3)
selnOR <- sqrt(se.smd^2 * pi^2 / 3)
}
else if (method == "CS") {
lnOR <- smd * 1.65
selnOR <- sqrt(se.smd^2 * 1.65)
}
## or2smd
if (method == "HH") {
smd <- lnOR * sqrt(3) / pi
se.smd <- sqrt(selnOR^2 * 3 / pi^2)
}
else if (method == "CS") {
smd <- lnOR / 1.65
se.smd <- sqrt(selnOR^2 / 1.65)
}
For the CS method, the SE of lnOR is calculated by multiplying the standard error of the SMD by sqrt(1.65) instead of 1.65.
Is this an issue for the conversion? Thanks for your patience, and looking forward to your response.
Reference:
da Costa BR, et al. Methods to convert continuous outcomes into odds ratios of treatment response and numbers needed to treat: meta-epidemiological study. International Journal of Epidemiology, 41(5), October 2012, 1445-1459. https://doi.org/10.1093/ije/dys124
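If the paper's rule is taken literally (multiply both the SMD and its standard error by 1.65), the variance should scale by 1.65 squared, i.e. the corrected fragments would read:

```r
# Scaling a standard error by a constant scales the variance by its square
selnOR  <- sqrt(se.smd^2 * 1.65^2)   # equivalently: se.smd * 1.65
se.smd2 <- sqrt(selnOR^2 / 1.65^2)   # and the inverse for or2smd
```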
I am using the forest function from within the gemtc package. Further, I use the knitr package to render the forest plots into an HTML report.
The network is quite large (>40 treatments) and hence the forest plots are considerably large too. For reasons I cannot figure out, the HTML report cuts each forest plot after 19 comparisons and starts a new plot below. I am not even sure whether this issue is related to the meta package or to some knitr options I am not aware of.
Any advice is highly appreciated.
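One knob worth checking (an assumption on my side, since the clipping could also come from the device size) is the knitr chunk's figure height, which defaults to a fixed value and will clip tall grid plots:

```r
# In the chunk header of the R Markdown report, e.g.:
#   {r forest-plot, fig.width = 9, fig.height = 30}
forest(result)
```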
Hi, I am getting the following error in certain conditions when using the metagen command to run a three-level meta-analysis (id) with sub-groups (byvar):
Error: $ operator is invalid for atomic vectors
It appears to be distinct from issue #38 and seems to occur specifically when one sub-group includes multiple data points from one study using a single reference group. I have produced a minimum reproducible example below that behaves in the same manner as my own data. Plain (2019) appears to be the study that causes the error when both id and byvar are passed. Removing byvar allows the three-level meta-analysis to run successfully. Sub-setting the data to exclude Plain (2019) also allows a nested three-level meta-analysis to run.
I would be grateful for any thoughts on how to resolve this issue.
Regards
Andy
Minimum reproducible example:
library(meta)
df <- data.frame(study = c("Brown 2011, exposed, males",
"Brown 2011, exposed, females",
"Grant 2014, exposed, both sexes",
"Young 2012, medium exposed, both sexes",
"Young 2012, high exposed, both sexes",
"Plain 2019, medium exposed/medium dose, both sexes",
"Plain 2019, medium exposed/high dose, both sexes",
"Plain 2019, high exposed/medium dose, both sexes",
"Plain 2019, high exposed/high dose, both sexes"),
ref_group = c("Brown 2011, unexposed, males",
"Brown 2011, unexposed, males",
"Grant 2014, unexposed, both sexes",
"Young 2012, unexposed, both sexes",
"Young 2012, unexposed, both sexes",
"Plain 2019, unexposed, both sexes",
"Plain 2019, unexposed, both sexes",
"Plain 2019, unexposed, both sexes",
"Plain 2019, unexposed, both sexes"),
te = c(0.63, 0.77, 1.08, 1.23, 1.25,0.66, 0.78, 0.87, 0.93),
se = c(0.24, 0.16, 0.34, 0.28, 0.27,0.13, 0.18, 0.12, 0.17),
exposure_topic = c("A","A","B","B","B","C","C","C","C"))
ma <- metagen(TE = te, seTE = se, sm = paste("OR"),
studlab = paste(study),
data = df,
subgroup = exposure_topic,
subgroup.name = "Exposure topic",
id = ref_group,
fixed = FALSE, random = TRUE)
Link to repo (includes un-nested version and subset excluding Plain (2019))
Hi, I am trying to use the bubble plot to plot the results of my metaregression, and I get an error something like:
Error in get(covar.name) :
object 'Covariate name' not found
I get the same error when covariates are categorical or numeric. The output of the metaregression itself is fine and I am able to print it.
Any ideas why this might be? Many thanks.
Hi, I'm doing a forest plot of correlations, and since I only have two columns on the left, the heterogeneity stats overlap with the x axis of the graph. I couldn't find anything in the help file about how to move the hetstat line in the plot; the only solution I found was increasing colgap for the left columns, but that created a lot of white space. Is there any other way to deal with this issue?
Thanks
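One thing that may help, if your version of 'meta' is recent enough to have it, is the addrows.below.overall argument of forest.meta(), which inserts empty rows between the pooled results and whatever is printed below them:

```r
# Push the heterogeneity statistics down by two empty rows (sketch)
forest(m, addrows.below.overall = 2)
```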
Hi everyone.
I am trying to run a meta-analysis of proportions with clusters and subgroups. The meta-analysis works fine and as expected; however, forest() seems - at least for me - broken:
forest(res, subgroup=T)
Error in x$labels[[i]] : subscript out of bounds
A reproducible example:
# libraries
library(meta)
library(data.table)
# data
dt = data.table(
ID = rep(1:5, each=3),
AE = rep(c("minor","major","deadly"),5),
ni = rep(c(25,19,101,32,50), each=3),
xi = c(2,3,3,4,1,0,29,13,4,7,4,1,7,2,1) )
# meta-analysis & forest plot
res = metaprop(xi, ni, cluster=ID, subgroup=AE, data=dt, tau.common=T)
forest(res)
A similar issue was described earlier for metamean() ( #41 ), but was solved meanwhile.
Any help would be highly appreciated.
Best, Felix
Hi!
We are conducting a systematic review and we have used your package for doing our meta-analysis.
Now, we are struggling to sort the studies in the plots by weight. We have already tried the sortvar argument, but we could not find a way to order the studies in descending order of the calculated weights.
We will appreciate if you could clarify this.
Kind Regards,
Marcelo Reategui
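sortvar accepts an arbitrary numeric vector, so the stored weights can be used directly; negating them gives descending order. A sketch, assuming the fitted object is called m:

```r
# Sort studies by random-effects weight, largest first
forest(m, sortvar = -m$w.random)
# or by the fixed-effect weights:
forest(m, sortvar = -m$w.fixed)
```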
There seems to be a minor bug when leftcols are manually defined in forest plots. If the study column is not referred to as "studlab" but, for example, by the name of the variable that was passed as studlab in the model, the heterogeneity stats disappear and justifying the study variable does not work. This is not a problem if one just remembers to use studlab, but once you start adding other leftcols with variable names, it is perhaps easily forgotten.
Below is an example
library(meta)
#generate some data
d<-data.frame(Est=c(.04,.07,.01,.04),
SE=c(.01,.02,.01,.01),
n=c(292,141,356,315),
Study=paste0("Study",1:4))
#run generic meta-analysis
m<-metagen(TE = Est,
seTE = SE,
data = d,
studlab = Study)
#default forest plot
forest(m)
#Add sample size and define leftcols with variable names (heterogeneity disappears and Study is not left-justified)
forest(m,leftcols=c("Study","n"),
just.studlab = "left")
#Add sample size and define leftcols with studlab for Study (all fine)
forest(m,leftcols=c("studlab","n"),
just.studlab = "left")
Hi, sorry to ask this question here, but I have searched Stack Exchange, the documentation, and also the code to find an answer. When I calculate the odds ratio myself (e.g. using fisher.test()), I get the following estimate:
> OR <- (270/288)/(281/301)
> OR
[1] 1.004226
However when I run this line of code I get the following:
> metabin(270,288,281,301,sm="OR")
OR 95%-CI z p-value
1.0676 [0.5527; 2.0621] 0.19 0.8456
Details:
- Inverse variance method
Can you explain the slight discrepancy in the point estimate of the odds ratio between my calculations and what the metabin function is providing?
Thanks, Eric
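The difference is that metabin() takes event counts and sample sizes, so the odds put the non-events in the denominator; the hand calculation above is a ratio of risks, not of odds:

```r
# Odds ratio from events and non-events
(270 / (288 - 270)) / (281 / (301 - 281))
#> [1] 1.067616
```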
Hi,
this is Jiaxing Wang from Emory university, US.
I am using R package "meta" for a meta-analysis.
However, when applying the "metareg" function for meta-regression, it does not report R2 (the amount of heterogeneity explained by each variable). It only reports Tau^2, Tau, I^2 and H^2 .
My question is: how to get the R^2 value?
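Since metareg() is built on metafor's rma() under the hood, the fitted object should also carry metafor's R2 element; this is my assumption based on metafor's documented rma.uni output, so it is worth checking on your version. A minimal sketch with hypothetical data:

```r
library(meta)

# Hypothetical data: effect estimates, standard errors and a moderator
dat <- data.frame(TE = c(0.20, 0.50, 0.30, 0.70),
                  seTE = c(0.10, 0.20, 0.15, 0.20),
                  dose = c(1, 1, 2, 2))
m <- metagen(TE, seTE, data = dat, studlab = paste0("Study", 1:4))

mr <- metareg(m, ~ dose)
mr$R2  # metafor's "amount of heterogeneity accounted for" (in percent)
```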
My session info:
Jiaxing
Hi,
is it possible to include more than 2 lines in a forest plot? So far I have only found out how to add 2, using the limits-of-equivalence arguments (lower.equi/upper.equi). However, I would like to add 3 lines on both sides, indicating thresholds for small, moderate and large effects, as in the image below (from the fully contextualised GRADE approach).
Best regards,
Florian
Hello,
I need to do some meta-analyses in R (moving away from Stata, which was used by previous collaborators).
My problem is that I cannot reproduce the results…
For example
The data was:
Study case pop
Study1 5 119
Study2 116 170
And then in stata:
metaprop case pop, random
I get this: 0.18 (95%CI 0.14-0.21)
But in R I tried many combinations…
events=c(5,116)
n=c(119,170)
meta_data <- metaprop(events, n, method = "GLMM", comb.fixed = FALSE)
forest(meta_data)
meta_data
proportion 95%-CI
1 0.0420 [0.0138; 0.0953]
2 0.6824 [0.6067; 0.7515]
Number of studies combined: k = 2
Random effects model 0.2355 [0.0191; 0.8297]
Details on meta-analytical method:
Random intercept logistic regression model
Maximum-likelihood estimator for tau^2
Logit transformation
Clopper-Pearson confidence interval for individual studies
<image001.png>
I also tried various other approaches. The closest results I obtained were with the following. It gives me good point estimates but not confidence intervals… I also tried the double arcsine transformation (sm = "PFT") but no luck…
meta_data3 <- metaprop(events, n, method = "inverse", sm = "PLN",
                       comb.fixed = FALSE, overall = TRUE,
                       method.tau = "REML", method.tau.ci = "BJ")
proportion 95%-CI %W(random)
1 0.0420 [0.0138; 0.0953] 48.8
2 0.6824 [0.6067; 0.7515] 51.2
Number of studies combined: k = 2
Random effects model 0.1752 [0.0114; 1.0000]
Details on meta-analytical method:
Do you know how to reproduce Stata's metaprop random-effects analysis (default parameters) in R?
#2 Second (hopefully easier) question:
how can I extract the 95% CI of a random-effects model from the meta results?
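On the second question, the pooled random-effects estimate and its confidence limits are stored as list elements of the meta object (these element names are from the package documentation); note that for metaprop they are on the transformation scale, so e.g. a logit result needs plogis() to get back to a proportion. A minimal sketch:

```r
library(meta)

# Toy generic inverse-variance meta-analysis (hypothetical numbers)
m <- metagen(TE = c(0.2, 0.5, 0.3), seTE = c(0.10, 0.20, 0.15))

m$TE.random     # pooled random-effects estimate
m$lower.random  # lower 95% confidence limit
m$upper.random  # upper 95% confidence limit
```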
thank you!
Hi @guido-s ,
I'm trying to perform a subgroup analysis for a meta-analysis of proportions. I've found that when specifying certain parameters for metaprop(), I'm not able to extract and report the pval.random.w element as referenced in your book. In particular, I'm trying to run the HKSJ method with an arcsine transformation of proportions. However, I've seen other examples of metaprop() subgroup analysis where pval.random.w (and other associated elements) is not NA.
library(meta)
#> Loading 'meta' package (version 6.2-1).
#> Type 'help(meta)' for a brief overview.
#> Readers of 'Meta-Analysis with R (Use R!)' should install
#> older version of 'meta' package: https://tinyurl.com/dt4y5drs
dat <- structure(
list(xi = c(182, 5, 209, 7, 19, 22, 17, 2, 4, 50, 9, 10, 3, 14, 30, 3, 32, 2, 25),
ni = c(2905, 30, 1633, 30, 49, 157, 106, 15, 70, 150, 28, 38, 100, 100, 35, 48, 46, 18, 82),
subgroup = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L),
levels = c("A", "B"), class = "factor")),
row.names = c(NA, -19L),
class = c("tbl_df", "tbl", "data.frame")
)
mod <- metaprop(
event = xi,
n = ni,
data = dat,
sm = "PAS",
fixed = FALSE,
method.random.ci = "HK",
method.tau = "SJ",
subgroup = subgroup
)
mod$pval.random.w
#> A B
#> NA NA
Created on 2023-05-08 with reprex v2.0.2
sessionInfo()
#> R version 4.2.2 (2022-10-31)
#> Platform: aarch64-apple-darwin20 (64-bit)
#> Running under: macOS Ventura 13.3.1
#>
#> Matrix products: default
#> BLAS: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRblas.0.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.2-arm64/Resources/lib/libRlapack.dylib
#>
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] meta_6.2-1
#>
#> loaded via a namespace (and not attached):
#> [1] Rcpp_1.0.10 rstudioapi_0.14 xml2_1.3.3
#> [4] mathjaxr_1.6-0 knitr_1.42 splines_4.2.2
#> [7] MASS_7.3-59 lattice_0.21-8 rlang_1.1.1
#> [10] fastmap_1.1.1 minqa_1.2.5 tools_4.2.2
#> [13] grid_4.2.2 nlme_3.1-162 xfun_0.39
#> [16] cli_3.6.1 metafor_4.2-0 withr_2.5.0
#> [19] htmltools_0.5.5 yaml_2.3.7 lme4_1.1-32
#> [22] digest_0.6.31 lifecycle_1.0.3 numDeriv_2016.8-1.1
#> [25] Matrix_1.5-4 nloptr_2.0.3 fs_1.6.2
#> [28] glue_1.6.2 evaluate_0.21 rmarkdown_2.21
#> [31] reprex_2.0.2 compiler_4.2.2 metadat_1.2-0
#> [34] boot_1.3-28.1 CompQuadForm_1.4.3
Some colleagues and I are using meta for a three-level meta-analysis and have encountered some behaviour that we don't quite understand. I'm hoping this might be an appropriate forum to ask, but apologies if not!
After creating a three-level model using metagen (i.e. passing in an id vector that has the same value for some studies), weights(model) returns NA for every study, and the model$w.fixed and model$w.random attributes are both NA. As a consequence, it's not possible to include weights e.g. in a forest plot.
Based on this line in the metagen code I'm guessing this is intentional behaviour, but it doesn't seem to be documented anywhere. Is it just not possible to calculate individual study weights for a three-level model, or is there some way around this? I noticed that rma.mv is used under the hood, and there seems to be a way to retrieve weights from an rma.mv object, but I don't really know enough about the details to tell whether this could be used here.
Happy to provide a reprex or a clearer description if it's helpful, but I figured there might be a simple reason this decision was made, so I thought I would just ask first.
Thanks for a very usable and powerful package! ⭐
Hi,
I have a weird issue with a data analysis using metacont from the meta package.
Some of my studies have a high number of participants, and for these studies metacont doesn't calculate the SMD.
I tried to find out why that is the case, but I cannot find an explanation or a solution.
When I figured out that n.e and n.c were the problem, I changed them to see what would happen, and the SMD was then calculated.
Does anyone know how to solve this issue?
library(dmetar)
library(esc)
library(tidyverse)
library(meta)
library(readxl)  # read_excel() is not attached by library(tidyverse)
meta_function <- read_excel("meta_function.xlsx")
str(meta_function)
tibble [25 x 7] (S3: tbl_df/tbl/data.frame)
$ author : chr [1:25] "Bennell 2018" "Peters 2017" "Irvine 2015" "Calner 2017" ...
$ n.e : num [1:25] 73 226 199 55 101 6 16 17 44 54 ...
$ n.c : num [1:25] 71 50 398 44 61 8 16 15 49 54 ...
$ smd_int: num [1:25] 0.337 0.23 0.286 0.305 1.23 ...
$ sd_int : num [1:25] 129.46 8.47 1.05 17.73 21.57 ...
$ smd_con: num [1:25] 0.366 0.103 0.149 0.128 0.207 ...
$ sd_con : num [1:25] 109.93 7.93 1.08 18.78 21.37 ...
m.cont <- metacont(n.e = n.e,
mean.e = smd_int,
sd.e = sd_int,
n.c = n.c,
mean.c = smd_con,
sd.c = sd_con,
studlab = author,
data = meta_function,
sm = "SMD",
method.smd = "Hedges",
fixed = FALSE,
random = TRUE,
method.tau = "REML",
hakn = TRUE,
title = "meta analysis")
summary(m.cont)
Review: meta analysis
SMD 95%-CI %W(random)
Bennell 2018 -0.0002 [-0.3269; 0.3265] 6.2
Peters 2017 0.0151 [-0.2912; 0.3214] 7.0
Irvine 2015 NA 0.0
Calner 2017 0.0096 [-0.3868; 0.4061] 4.2
Mecklenburg 2018 0.0474 [-0.2705; 0.3652] 6.5
Johnston 2010 0.0484 [-1.0103; 1.1071] 0.6
Blixen 2004 -0.0200 [-0.7130; 0.6729] 1.4
Ang 2010 0.0161 [-0.6782; 0.7104] 1.4
Shigaki 2013 0.0219 [-0.3852; 0.4290] 4.0
Petrozzi 2019 0.0602 [-0.3171; 0.4375] 4.6
Shebib 2019 0.0267 [-0.2799; 0.3334] 7.0
Toelle 2019 0.0067 [-0.4161; 0.4295] 3.7
Chiauzzi 2010 -0.1712 [-0.4499; 0.1075] 8.5
Williams 2010 0.0036 [-0.3572; 0.3645] 5.1
Carpenter 2012 0.1137 [-0.2167; 0.4441] 6.0
Dear 2013 NA 0.0
Pozo-Cruz 2012a 0.0676 [-0.3458; 0.4810] 3.9
Kristjansdottir 2013 0.0097 [-0.3278; 0.3471] 5.8
Amorim 2019 0.0128 [-0.4626; 0.4881] 2.9
Piqueras 2013 -0.0004 [-0.3294; 0.3286] 6.1
Bini 2017 -0.0472 [-0.7900; 0.6957] 1.2
Russell 2011 0.0102 [-0.4765; 0.4970] 2.8
Li 2014 -0.0444 [-0.3022; 0.2134] 9.9
Iles 2011 0.0373 [-0.6784; 0.7531] 1.3
Lorig 2002 NA 0.0
Number of studies combined: k = 22
Number of observations: o = 3965
SMD 95%-CI t p-value
Random effects model 0.0024 [-0.0268; 0.0315] 0.17 0.8667
Quantifying heterogeneity:
tau^2 = 0; tau = 0; I^2 = 0.0% [0.0%; 46.2%]; H = 1.00 [1.00; 1.36]
Test of heterogeneity:
Q d.f. p-value
2.40 21 1.0000
Details on meta-analytical method:
Hello. I have a question about the metacont function.
When I use metacont with data in which sd.e or sd.c is 0, it gives the following warning:
Studies with non-positive values for sd.e or sd.c get no weight in meta-analysis.
I referred to some books on meta-analysis (written in Japanese), and I thought that sd.e or sd.c being 0 was not a problem.
Is this a wrong understanding?
Unfortunately, I'm not very familiar with meta-analysis...
example_a.csv (The first line is a header.)
1,2,3
0,1,0
1,1,1
2,1,2
example_b.csv
1,2,3
1,1,2
2,2,2
3,3,2
example.R
require(meta)
summarize <- function(d) {
  result <- data.frame(
    apply(d, 2, function(x) {
      x <- x[is.finite(x)]
      l <- length(x)
      return(c(
        n = l,
        min = ifelse(l, min(x), NA),
        max = ifelse(l, max(x), NA),
        median = ifelse(l, median(x), NA),
        mean = ifelse(l, mean(x), NA),
        sd = ifelse(l, sd(x), NA)
      ))
    }),
    check.names = FALSE
  )
  return(data.frame(t(result), check.names = FALSE))
}
data_a <- read.csv("example_a.csv", check.names = FALSE)
data_b <- read.csv("example_b.csv", check.names = FALSE)
summary_a <- summarize(data_a)
summary_b <- summarize(data_b)
meta <- metacont(
summary_a[1:3, "n"],
summary_a[1:3, "mean"],
summary_a[1:3, "sd"],
summary_b[1:3, "n"],
summary_b[1:3, "mean"],
summary_b[1:3, "sd"],
rownames(summary_a)[1:3],
sm = "SMD"
)
pdf(file = "example_plot.pdf", width = 12, height = 3)
forest(meta)
dev.off()
Hi,
I am trying to generate subgroup forest plots for proportion and mean meta-analyses using the 'subgroup' argument of the metaprop and metamean functions. The forest plot for metaprop works very well, but I keep getting the error below when I call the 'forest' function on the metamean result:
Error in x$labels[[i]] : subscript out of bounds
The object passed to 'forest' is summarized below:
Number of studies combined: k = 10
Number of observations: o = 4910
mean 95%-CI
Random effects model 2.0754 [1.3918; 2.7590]
Quantifying heterogeneity:
tau^2 = 1.2092 [0.5682; 4.0494]; tau = 1.0996 [0.7538; 2.0123]
I^2 = 99.7% [99.7%; 99.8%]; H = 18.92 [17.54; 20.41]
Test of heterogeneity:
Q d.f. p-value
3222.66 9 0
Results for subgroups (random effects model):
k mean 95%-CI
Population = European 2 1.0373 [ 0.7738; 1.3008]
Population = Chinese 3 2.3380 [ 1.1179; 3.5581]
Population = Middle East 1 2.7700 [ 2.6262; 2.9138]
Population = Indian 2 2.3655 [-0.1727; 4.9036]
Population = South and North American 2 2.0990 [ 0.0606; 4.1374]
tau^2 tau Q I^2
Population = European 0.0324 0.1800 9.00 88.9%
Population = Chinese 1.1524 1.0735 245.20 99.2%
Population = Middle East -- -- 0.00 --
Population = Indian 3.3426 1.8283 293.42 99.7%
Population = South and North American 2.1598 1.4696 641.59 99.8%
Test for subgroup differences (random effects model):
Q d.f. p-value
Between groups 128.06 4 < 0.0001
Details on meta-analytical method:
#subgroup_population_ALL
k <- metamean(subgroup = Population, n = N, mean = mean, sd = sd,
              data = met_all, studlab = SN, fixed = FALSE)
forest(k)
Error in x$labels[[i]] : subscript out of bounds
Any help will be appreciated.
John
I have just noticed that when using metabind to combine two meta objects (not subgroups), the tau-squared value appears as NA in the forest plot.
This happens both from the latest CRAN version as well as from dev version.
reprex:
library(meta)
#> Loading 'meta' package (version 4.11-0).
#> Type 'help(meta)' for a brief overview.
data(Fleiss93cont)
m1 <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c,
data = Fleiss93cont, sm = "MD")
m2 <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c,
data = Fleiss93cont, sm = "MD")
mb1 <- metabind(m1, m2)
#> Warning in metabind(m1, m2): Note, results from random effects model extracted.
#> Use argument pooled = "fixed" for results of fixed effect model.
mb1
#> MD 95%-CI meta-analysis
#> overall -0.7373 [-1.4577; -0.0170] meta1
#> overall -0.7373 [-1.4577; -0.0170] meta2
#>
#> Number of studies combined: k = 5
#>
#> MD 95%-CI z p-value
#> Random effects model -0.7373 [-1.4577; -0.0170] -2.01 0.0448
#>
#> Quantifying heterogeneity:
#> tau^2 = 0.1894; tau = 0.4352; I^2 = 29.3% [0.0%; 72.6%]; H = 1.19 [1.00; 1.91]
#>
#> Test of heterogeneity:
#> Q d.f. p-value
#> 5.66 4 0.2260
#>
#> Results for meta-analyses (random effects model):
#> k MD 95%-CI tau^2 tau Q I^2
#> meta1 5 -0.7373 [-1.4577; -0.0170] 0.1894 0.4352 5.66 29.3%
#> meta2 5 -0.7373 [-1.4577; -0.0170] 0.1894 0.4352 5.66 29.3%
#>
#> Details on meta-analytical method:
#> - Inverse variance method
#> - DerSimonian-Laird estimator for tau^2
forest(mb1)
Created on 2020-03-20 by the reprex package (v0.3.0)
I'm working on a forest plot with subgroups, but in some cases my subgroups consist of a single study. In this case I don't want to display the FE/RE models for this subgroup, as it doesn't add anything of interest to the plot.
Would it be possible to add functionality to print FE/RE models for only specified subgroups? I'm imagining passing a vector of TRUE/FALSE values to the subgroup parameter rather than a single logical value.
Thanks for maintaining the package!
I love the meta package!
Quick Q - can I use the byvar argument to do interaction terms, or do I have to do one factor at a time?
I checked your book, the online documentation and the vignette.
example
m <- metagen(percent, var, studlab = ID, byvar = lever, data = mdata)
but I have different athlete types, just two types - so I did this:
#elite only
m <- metagen(percent, var, studlab = ID, byvar = lever, subset = athletes == "elite", data = mdata)
summary(m)
#active only
m <- metagen(percent, var, studlab = ID, byvar = lever, subset = athletes == "active", data = mdata)
summary(m)
So, I am using byvar for the main factor and then subset for the second factor or level.
It would be great to be able just to do:
byvar = lever*athletes or byvar = c(lever, athletes)
but that gives: Error: Arguments 'TE' and 'byvar' must have the same length.
Is this possible?
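A possible workaround (my suggestion, using plain base R rather than a documented meta feature) is to build the combined grouping variable yourself, e.g. with interaction() or paste(), and pass that single vector to byvar:

```r
library(meta)

# Hypothetical data mirroring the example above
mdata <- data.frame(percent = c(1.2, 0.8, 1.5, 0.9),
                    var = c(0.10, 0.20, 0.10, 0.30),
                    ID = paste0("S", 1:4),
                    lever = c("high", "high", "low", "low"),
                    athletes = c("elite", "active", "elite", "active"))

# interaction() creates one factor level per lever x athletes combination,
# so byvar receives a single vector of the required length
m <- metagen(percent, var, studlab = ID,
             byvar = interaction(lever, athletes, sep = " | "),
             data = mdata)
summary(m)
```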
In 4.7 the following minimal example worked, but in 4.8 it reports an error:
data(Fleiss93cont)
meta1 <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c, data=Fleiss93cont, sm="SMD")
meta1
forest(meta1)
metacum(meta1)
Thanks for creating the very useful meta package!
Hello!
Not sure if this is the right place for this, since it seems to be an interaction between metagen and newer versions of some tidyverse packages, but I've got a code snippet that runs metagen on groups of a data frame.
enrichment_data <- sum_stats %>%
  filter(cond == 'all100' &
           grepl('oe_lof_upper_quantile_', name) &
           (is.na(cor_rm0.2) | cor_rm0.2 == 0)  # Select uncorrelated variables
  ) %>%
  group_by(name) %>%
  summarize(meta_enrichment = metagen(enrichment, enrichment_SE)$TE.random,
            meta_sd = metagen(enrichment, enrichment_SE)$seTE.random)
This worked previously with dplyr 0.7.8 and tidyr 0.8.2, but some recent updates of those packages have broken this workflow (see error below). No worries from my end, as I can just pin an older version since the project is mostly done. But I thought you might want to know!
Error in eval(mf[[match("TE", names(mf))]], data, enclos = sys.frame(sys.parent())) :
object 'enrichment' not found
16.
eval(mf[[match("TE", names(mf))]], data, enclos = sys.frame(sys.parent()))
15.
eval(mf[[match("TE", names(mf))]], data, enclos = sys.frame(sys.parent()))
14.
metagen(enrichment, enrichment_SE)
13.
summarise_impl(.data, dots, environment(), caller_env())
12.
summarise.tbl_df(.data, ...)
11.
fun(.data, ...)
10.
log_summarize(.data, dplyr::summarize, "summarize", ...)
9.
summarize(., meta_enrichment = metagen(enrichment, enrichment_SE)$TE.random,
meta_sd = metagen(enrichment, enrichment_SE)$seTE.random)
8.
function_list[[i]](value)
7.
freduce(value, `_function_list`)
6.
`_fseq`(`_lhs`)
5.
eval(quote(`_fseq`(`_lhs`)), env, env)
4.
eval(quote(`_fseq`(`_lhs`)), env, env)
3.
withVisible(eval(quote(`_fseq`(`_lhs`)), env, env))
2.
sum_stats %>% filter(cond == "all100" & grepl("oe_lof_upper_quantile_",
name) & (is.na(cor_rm0.2) | cor_rm0.2 == 0)) %>% group_by(name) %>%
summarize(meta_enrichment = metagen(enrichment, enrichment_SE)$TE.random,
meta_sd = metagen(enrichment, enrichment_SE)$seTE.random) %>% ... at fig5_disease.R#252
1.
partitioning_heritability_enrichment(T)
Environment:
> sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-apple-darwin18.2.0 (64-bit)
Running under: macOS 10.14.4
Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /opt/local/Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib
locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
attached base packages:
[1] grid stats graphics grDevices utils datasets methods base
other attached packages:
[1] crayon_1.3.4 tidylog_0.1.0 cowplot_0.9.4 RMySQL_0.10.17
[5] DBI_1.0.0 ggrepel_0.8.0 pbapply_1.4-0 rlang_0.3.4
[9] tidygraph_1.1.2 STRINGdb_1.22.0 meta_4.9-5 ggrastr_0.1.7
[13] ggpubr_0.2 ggridges_0.5.1 readxl_1.3.1 corrr_0.3.2
[17] corrplot_0.84 patchwork_0.0.1 naniar_0.4.2 plotROC_2.2.1
[21] gghighlight_0.1.0 skimr_1.0.5 gapminder_0.3.0 trelliscopejs_0.1.18
[25] scales_1.0.0 magrittr_1.5 slackr_1.4.2 plotly_4.9.0
[29] broom_0.5.2 forcats_0.4.0 stringr_1.4.0 dplyr_0.8.0.1
[33] purrr_0.3.2 readr_1.3.1 tidyr_0.8.3 tibble_2.1.1
[37] tidyverse_1.2.1 Hmisc_4.2-0 ggplot2_3.1.1 Formula_1.2-3
[41] survival_2.44-1.1 lattice_0.20-38
loaded via a namespace (and not attached):
[1] colorspace_1.4-1 visdat_0.5.3 mclust_5.4.3 htmlTable_1.13.1
[5] base64enc_0.1-3 rstudioapi_0.10 hash_2.2.6.1 bit64_0.9-7
[9] lubridate_1.7.4 sqldf_0.4-11 xml2_1.2.0 splines_3.5.1
[13] knitr_1.22 jsonlite_1.6 cluster_2.0.7-1 png_0.1-7
[17] compiler_3.5.1 httr_1.4.0 backports_1.1.4 assertthat_0.2.1
[21] Matrix_1.2-17 lazyeval_0.2.2 cli_1.1.0 acepack_1.4.1
[25] htmltools_0.3.6 prettyunits_1.0.2 tools_3.5.1 igraph_1.2.4.1
[29] gtable_0.3.0 glue_1.3.1 Rcpp_1.0.1 cellranger_1.1.0
[33] gdata_2.18.0 nlme_3.1-137 autocogs_0.1.2 xfun_0.6
[37] proto_1.0.0 rvest_0.3.3 gtools_3.8.1 DistributionUtils_0.6-0
[41] hms_0.4.2 parallel_3.5.1 metafor_2.0-0 RColorBrewer_1.1-2
[45] yaml_2.2.0 memoise_1.1.0 gridExtra_2.3 rpart_4.1-13
[49] latticeExtra_0.6-28 stringi_1.4.3 RSQLite_2.1.1 plotrix_3.7-5
[53] checkmate_1.9.1 caTools_1.17.1.2 chron_2.3-53 pkgconfig_2.0.2
[57] bitops_1.0-6 htmlwidgets_1.3 bit_1.1-14 tidyselect_0.2.5
[61] plyr_1.8.4 R6_2.4.0 gplots_3.0.1.1 generics_0.0.2
[65] gsubfn_0.7 pillar_1.3.1 haven_2.1.0 foreign_0.8-71
[69] withr_2.1.2 RCurl_1.95-4.12 nnet_7.3-12 modelr_0.1.4
[73] KernSmooth_2.23-15 progress_1.2.0 data.table_1.12.2 blob_1.1.1
[77] digest_0.6.18 webshot_0.5.1 munsell_0.5.0 viridisLite_0.3.0
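For what it's worth, a workaround sketch (my assumption, not a confirmed fix; requires dplyr >= 1.1 for pick()) is to hand each group's rows to metagen via its data argument, so the non-standard evaluation no longer depends on the calling environment:

```r
library(dplyr)
library(meta)

# Hypothetical data standing in for sum_stats
set.seed(1)
sum_stats <- tibble(name = rep(c("gene_set_1", "gene_set_2"), each = 3),
                    enrichment = rnorm(6, mean = 1),
                    enrichment_SE = runif(6, 0.1, 0.3))

# pick(everything()) passes the current group's data frame explicitly,
# so metagen resolves 'enrichment' inside that data, not in the caller
enrichment_data <- sum_stats %>%
  group_by(name) %>%
  summarize(meta_enrichment = metagen(enrichment, enrichment_SE,
                                      data = pick(everything()))$TE.random)
```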
I have noticed that when using metabind() it always gives me a double() instead of an integer() in the column showing the number of studies. I have tried to transform it in the metabind object, but it still doesn't change anything. I wonder if this is an issue in forest()?
reprex:
library(meta)
#> Loading 'meta' package (version 4.9-7).
#> Type 'help(meta)' for a brief overview.
data(Fleiss93cont)
# Add some (fictitious) grouping variables:
#
Fleiss93cont$age <- c(55, 65, 55, 65, 55)
Fleiss93cont$region <- c("Europe", "Europe", "Asia", "Asia", "Europe")
m1 <- metacont(n.e, mean.e, sd.e, n.c, mean.c, sd.c,
data = Fleiss93cont, sm = "MD")
# Conduct two subgroup analyses
#
mu1 <- update(m1, byvar = age, bylab = "Age group")
mu2 <- update(m1, byvar = region, bylab = "Region")
# Combine subgroup meta-analyses and show forest plot with subgroup
# results
#
mb1 <- metabind(mu1, mu2)
#> Warning in metabind(mu1, mu2): Note, results from random effects model
#> extracted. Use argument pooled = "fixed" for results of fixed effect model.
mb1
#> MD 95%-CI meta-analysis
#> 55 -1.0519 [-2.0636; -0.0403] Age group
#> 65 -0.5152 [-1.8868; 0.8565] Age group
#> Europe -1.0938 [-1.7704; -0.4173] Region
#> Asia -0.4591 [-2.6758; 1.7577] Region
#>
#> Number of studies combined: k = 5
#>
#> MD 95%-CI z p-value
#> Random effects model -0.7373 [-1.4577; -0.0170] -2.01 0.0448
#>
#> Quantifying heterogeneity:
#> tau^2 = 0.1894; H = 1.19 [1.00; 1.91]; I^2 = 29.3% [0.0%; 72.6%]
#>
#> Test of heterogeneity:
#> Q d.f. p-value
#> 5.66 4 0.2260
#>
#> Results for meta-analyses (random effects model):
#> k MD 95%-CI Q tau^2 I^2
#> Age group 5 -0.7373 [-1.4577; -0.0170] 5.66 0.1894 29.3%
#> Region 5 -0.7373 [-1.4577; -0.0170] 5.66 0.1894 29.3%
#>
#> Details on meta-analytical method:
#> - Inverse variance method
#> - DerSimonian-Laird estimator for tau^2
forest(mb1)
Created on 2019-10-01 by the reprex package (v0.3.0)
Hi,
I am getting a zero p-value for the test for effect in subgroups; actually it should print p < 0.01. I am not sure if there is a problem with the data, but it prints p < 0.01 for other outcomes. Is it that for very high values of z it gives 0? I am using a metabin object. Here is my code:
md <- metabin(Events1, Total1, Events2, Total2, data = df1, studlab = Study, sm = "RD", byvar = factor(Subgroup), print.byvar = FALSE, comb.fixed = FALSE, keepdata = TRUE)
forest(md, test.effect.subgroup = TRUE, layout = "Revman5")
Kindly share your suggestions.
Dear Prof. Schwarzer,
Thank you very much for this great package. I would like to know how I could print the p-values in a forest plot from metainf.
If I print the metainf results I get MD, 95%-CI, p-value, tau^2, and I^2. However, if I use a forest plot to visualize the results I only get MD and 95%-CI.
Thank you.
Hello there,
I have been reading the meta manual for the past few days but could not get a clear answer. Is it possible to run meta with z-scores and p-values (sample sizes are available if needed)?
Thanks
Hi,
I have a set of binary/dichotomous outcomes. I wanted to calculate hazard ratios for a meta-analysis and a network meta-analysis. However, when I look at the "sm" argument of the metabin function in the "meta" package, only the risk ratio (RR) and odds ratio (OR) are available.
When left columns are specified with leftcols, three problems occur:
data(Olkin95)
Olkin95$modern <- ifelse(Olkin95$year>1980 , 'Modern', 'Old')
meta1 <- metabin(event.e, n.e, event.c, n.c,subset=20:27,
data=Olkin95, sm="RR",
studlab=paste(author, year), byvar = modern)
# Default, without leftcols, looks ok
meta::forest(meta1, comb.fixed=TRUE, comb.random=FALSE)
# Specifying leftcols creates problems
meta::forest(meta1, comb.fixed=TRUE, comb.random=FALSE,
leftcols=c('author','year'))
Thanks very much for your time and the meta package.
In the following code,
library(meta)
studies <- c("Cong 2015","Parsch 2017","Gangathimmaiah 2017","Isbister 2016","Kowalski 2015","Riddell 2017","Scaggs 2016","Schepke 2015","Olives 2016","Cole 2016","Burnett 2015","Keseg 2014","Hollis 2017","Burnett 2012")
obs <- c(0,0,3,0,0,2,0,2,85,25,14,8,10,2)
denom <- c(18,22,21,49,5,23,7,52,135,64,49,35,38,12)
grouping <- c("AMT","AMT","AMT","ED","ED","ED","EMS","EMS","EMS","EMS","EMS","EMS","EMS","EMS")
m1 <- metaprop(obs, denom, studies, comb.random=FALSE,
complab="N", outclab="intubated", title="Intubation rates",
byvar=grouping, bylab="Setting", byseparator=":")
forest(m1)
The complab, outclab, byseparator and title do not appear.
Respected sir
The metabind function I am using for plotting a forest plot comparing the subgroups is showing an error.
#code below
db1 <- db0[which(db0$Pollutant == 'PM10' & db0$Measure == 'AVERAGE' & db0$Outcome == 'Total Mortality' & db0$Age == 'All' & db0$Sex == 'All' & db0$Lag == '0-1'), ]
db1$LnEE.spm.unadjusted <- as.numeric(db1$LnEE.spm.unadjusted)
db1$SELnEE.spm.unadjusted <- as.numeric(db1$SELnEE.spm.unadjusted)
model <- metagen(TE = db1$LnEE.spm.unadjusted, seTE = db1$SELnEE.spm.unadjusted,
                 studlab = db1$Article, sm = "RR", comb.random = TRUE,
                 method.tau = "DL", hakn = FALSE, backtransf = TRUE,
                 prediction = TRUE, level.predict = 0.80)
summary(model)
mu1 <- update(model, byvar = db1$Lag)
mb1 <- metabind(mu1)
mb1
forest(mb1)
On running this, the error shows up as:
"Error in metabind(mu1) : object 'args2' not found"
Previously the results were coming through, but now this error pops up.
Kindly help.