
velocyto.R

RNA velocity estimation in R

System requirements

velocyto.R can be installed on unix-flavored systems, and requires the following key elements:

  • C++11
  • OpenMP support
  • Boost libraries
  • igraph library
  • hdf5 C++ library (required by the h5 R package to support loom files)

Installation

The easiest way to install velocyto.R is using devtools::install_github() from R:

library(devtools)
install_github("velocyto-team/velocyto.R")

You need to have the Boost (e.g. sudo apt-get install libboost-dev) and OpenMP libraries installed. Detailed installation commands can be found in the dockers/debian9/Dockerfile.
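Several of the R dependencies (e.g. pcaMethods, GenomicAlignments, Rsamtools, which come up in the issues below) are Bioconductor packages and are not pulled in automatically by install_github(); a minimal sketch of installing them first, assuming the BiocManager route:

```r
# Bioconductor dependencies are not installed automatically by install_github()
if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
BiocManager::install(c("pcaMethods", "GenomicAlignments", "Rsamtools"))

# then install velocyto.R itself
library(devtools)
install_github("velocyto-team/velocyto.R")
```

The h5 package (needed for loom support) may additionally have to be installed separately if it is not available for your R version.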

Dockers

If you are having trouble installing the package on your system, you can build a Docker image that can be used on a wide range of systems and cloud environments. To install the Docker framework on your system, see its installation instructions. After installing Docker, use the following commands to build and run a velocyto.R image:

cd velocyto.R/dockers/debian9
docker build -t velocyto .
docker run --name velocyto -it velocyto

Tutorials

This example shows how to annotate SMART-seq2 reads from a bam file and estimate RNA velocity.

This example shows how to load spliced/unspliced matrices from loom files prepared by the velocyto.py CLI, use pagoda2 to cluster/embed cells, and then visualize RNA velocity on that embedding.

This example shows how to start the analysis from dropEst count matrices, which can be calculated from inDrop or 10x bam files using the dropEst pipeline. It then uses pagoda2 to cluster/embed cells and visualizes RNA velocity on that embedding.
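The loom-based workflow can be sketched roughly as follows (the function names and parameter values appear in the issue reports below; the file name and the embedding `emb` are placeholders):

```r
library(velocyto.R)

# load spliced (emat) and unspliced (nmat) expression matrices from a loom file
ldat <- read.loom.matrices("sample.loom")   # placeholder file name
emat <- ldat$spliced
nmat <- ldat$unspliced

# estimate relative RNA velocity (kCells = neighborhood size for cell pooling)
fit.quantile <- 0.02
rvel.cd <- gene.relative.velocity.estimates(emat, nmat, deltaT = 1,
                                            kCells = 25,
                                            fit.quantile = fit.quantile)

# visualize on a pre-computed 2D embedding `emb` (e.g. from pagoda2)
show.velocity.on.embedding.cor(emb, rvel.cd, n = 200, scale = "sqrt",
                               arrow.scale = 3, show.grid.flow = TRUE,
                               min.grid.cell.mass = 0.5, grid.n = 40)
```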

velocyto.r's People

Contributors

dcroote, gioelelm, lldelisle, pkharchenko, puriney, t-carroll


velocyto.r's Issues

Merging Loom Files

Hi,

I recently got the CLI to work quite well with my 10x samples. However, I was unable to get the run10x command to run on all 10x samples at once. I received this error:

IndexError: list index out of range

Therefore, I went ahead and generated separate loom files for each of my samples. Is there any way, in R (or Python), that you would recommend to merge the loom files from each sample so that I can work with all samples at once?
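One possible approach, sketched here purely as an illustration (assuming velocyto.R's read.loom.matrices and that all samples share the same gene annotation): read each sample's matrices in R, make the cell barcodes unique per sample, and combine them by column:

```r
library(velocyto.R)

files <- c("sample1.loom", "sample2.loom")  # placeholder file names

mats <- lapply(seq_along(files), function(i) {
  ldat <- read.loom.matrices(files[i])
  # prefix cell barcodes with a sample index so columns stay unique after merging
  lapply(ldat, function(m) {
    colnames(m) <- paste0("s", i, "_", colnames(m))
    m
  })
})

# column-bind spliced and unspliced matrices across samples
# (assumes identical gene rows in every loom file)
emat <- do.call(cbind, lapply(mats, `[[`, "spliced"))
nmat <- do.call(cbind, lapply(mats, `[[`, "unspliced"))
```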

Dylan

No reads returned by read.smartseq2.bams

Hi VeloCytoers -

I've been trying to use this tool by following the Chromaffin tutorial with my own data. Unfortunately, I'm stumbling on the very first QC measure. The histogram of reads per gene in the tutorial consists of a single bar around zero reads for me. Examining the list returned by read.smartseq2.bams, it appears that nearly all of the data matrices are empty (emat, iomat, smat, base.df$el are all empty, but base.df$il, exons, and genes are not).

After debugging for a while, I think I've found the root problem. The cdl variable constructed on line 1424 of momentum_routines.R is always a list of NULLs, which I think is because t.annotate.bam.reads has no return result. Changing line 1572 in that function from

x <- do.call(rbind,lapply(chrl,function(chr) {

to

do.call(rbind,lapply(chrl,function(chr) {

seems to solve it for me, at least as far as I can determine by semi-manually stepping through the code in read.smartseq2.bams.

Happy to submit a PR to this effect (or not), but I wonder about this. From what I can tell, this function has never returned a value - which makes my explanation seem a little suspect. Any thoughts/comments?

Error installing velocyto.R

Can anyone help me with these errors?

ERROR: dependencies ‘pcaMethods’, ‘h5’ are not available for package ‘velocyto.R’

Thanks!

Reversing velocities?

Hi,

Thanks for the awesome package!!
I have been playing around with my dataset with velocyto for some time now and am wondering about a few things:

  1. The velocities are reversed relative to what is expected, and for some populations the expected direction is already established. I have mixed cell lineages that lie close to each other in the embedding. I suspect I have a problem with the neighborhood-based projections, probably with different attractors too. Is there any way to solve this?
  2. What does the "scale" parameter in the show.velocity.on.embedding.cor() function do? As far as I understand, it controls how arrows/velocities are scaled relative to single cells. Which setting is recommended and why? In the tutorials you used sqrt, but I can't find an explanation of why.
  3. Is it possible to calculate the velocities (and show them on the embedding) for a specific list of genes, e.g. for a specific lineage of cells? If this is possible, how many genes would I have to include? And can they be differentially expressed genes for one lineage?

Hope you can help me out so I can get even more use of your package.

dropseq optimal code

Are there optimized settings/code for Drop-seq samples? The tutorials seem to cover other scRNA-seq methods, and I was wondering whether Drop-seq would require different settings due to its read depth.

Velocyto installation instructions

I had to do
sudo apt-get install libboost-all-dev
It is probably worth adding this to your installation instructions, which currently read:
"You need to have boost (e.g. sudo apt-get install libboost-dev) and openmp libraries installed."

Velocyto in transition of aged single cell

Can velocyto be used to determine the transition of cells over a longer period of time, such as aging? I have generated several aged mouse tissue samples from 8 to 64 weeks old. I ran velocyto on these and see very nice results, yet I am not sure whether this package is appropriate for that kind of analysis, since the paper describes it for short-timescale data:
"We show that RNA velocity is a vector that predicts the future state of individual cells on a timescale of hours"

Of course, I am also using pseudotime analyses such as monocle2 and other packages, but the combination of SWNE and velocyto gives me very interesting results. I just wonder if it is appropriate to use in this situation.

Many thanks,
Tommy

How to Install boost and openmp libraries in Rstudio?

Hello, I tried to install velocyto using
library(devtools)
install_github("velocyto-team/velocyto.R")

But I get an error message: clang: error: unsupported option '-fopenmp'....

How do I get this fixed?

Thank you so much

show velocity Error....'from' must be a finite number

Hello,

I have got the following error while running show.velocity.on.embedding.cor function.

fit.quantile <- 0.02
rvel.cd <-gene.relative.velocity.estimates(emat,nmat,deltaT=1,kCells=25,cell.dist=cell.dist,fit.quantile=fit.quantile)

calculating cell knn ... done 
calculating convolved matrices ... done
fitting gamma coefficients ... done. succesfful fit for 483 genes
filtered out 139 out of 483 genes due to low nmat-emat correlation
filtered out 60 out of 344 genes due to low nmat-emat slope
calculating RNA velocity shift ... done
calculating extrapolated cell state ... done

show.velocity.on.embedding.cor(emb,rvel.cd,n=200,scale='sqrt',cell.colors=ac(cell.colors,alpha=0.5),cex=0.8,arrow.scale=3,show.grid.flow=TRUE,min.grid.cell.mass=0.5,grid.n=40,arrow.lwd=1,do.par=F,cell.border.alpha = 0.1)

delta projections ... sqrt knn ... transition probs ... done
calculating arrows ... grid estimates ... Error in seq.default(rx[1], rx[2], length.out = grid.n) : 
  'from' must be a finite number

Any idea what's wrong here and how to correct it?

Many thanks

velocyto.R installation issue: RcppArmadillo.h: No such file or directory

The following error arises when I attempt to install velocyto.R on R 3.5.1:

*** arch - i386
C:/Rtools/mingw_32/bin/g++ -std=gnu++11 -I"C:/PROGRA1/R/R-351.1/include" -DNDEBUG -I"\icnas1.cc.ic.ac.uk/cd2514/R/win-library/3.5/Rcpp/include" -I"\icnas1.cc.ic.ac.uk/cd2514/R/win-library/3.5/RcppArmadillo/include" -fopenmp -O2 -Wall -mtune=generic -c RcppExports.cpp -o RcppExports.o
RcppExports.cpp:4:27: fatal error: RcppArmadillo.h: No such file or directory
#include <RcppArmadillo.h>
compilation terminated.
make: *** [C:/PROGRA1/R/R-351.1/etc/i386/Makeconf:215: RcppExports.o] Error 1

The code I used to install this package is:
library(devtools)
install_github("velocyto-team/velocyto.R")

I have RcppArmadillo successfully installed.
Any help would be appreciated. Thanks in advance.

edit: devtools::install_github(repo = "RcppCore/Rcpp", lib="C:/Program\ Files/R/R-3.5.1/library")
helped solve the problem

Running velocyto on multiple samples with 10X

Dear Velocyto team,

I have a question about running velocyto on eight 10x samples. Since the 10X pipeline gives a bam file for each sample, I can run velocyto on each bam file one by one and generate eight loom files. I am wondering whether there is a way to visualize the vector fields on a single t-SNE plot of the aggregated cells of all eight samples. I would appreciate some advice on how to run velocyto efficiently to achieve that.

Yours,
Ying Ding

how long will the "fitting gamma coefficients" step take?

Hi Team velocyto,

I know the computational time depends on a lot of factors: CPU, memory, and the size of the data set. Our HPC has 48 processors and 256 GB of memory. I ran the example listed in the README: http://pklab.med.harvard.edu/velocyto/notebooks/R/SCG71.nb.html. Its emat is 736 common genes x 2600 cells, which takes only a few seconds to finish the gene.relative.velocity.estimates step. Here I have a data set comprising 944 common genes x 5000 cells. It has already been running for about 2 hours but is still at the "fitting gamma coefficients ..." step. I am wondering whether something is wrong with my installation or it just needs longer to finish. Any suggestion would be helpful.

Thanks,
Peng

missing dependencies

These are not pulled in automatically, or at least the installation doesn't complain about them until it's too late:

GenomicAlignments
Rsamtools

Missing dependencies?

Got confused with the documentation of the preprocessing

Hi,

This is not an issue per se, but I just wanted to point out that, as an R user, when I visited the website velocyto.org to figure out how to use your tool, I immediately followed the link leading to the GitHub repository of the R package.

As a consequence, I totally skipped the Python documentation, assuming it would be roughly equivalent to the documentation I would find on the R repository.

That was my mistake...

Indeed, my scRNA-seq dataset comes from 10X Genomics, and all the documentation available on the R package repository concerning preprocessing is about the read.smartseq2.bams function. This function takes as input an individual bam file for each cell in the dataset. So I told myself: "the output of cellRanger is a single global bam, but since I have the barcode of each of my cells, I can write an awk script piped from samtools to split the global bam into individual per-cell bams."
That is what I did. Eventually, when I submitted these individual bams to read.smartseq2.bams, it obviously returned an error, since the function was not designed to handle 10x Chromium bams...

It was only at this point that I ended up visiting the Python documentation and discovered the excellent command-line counting pipeline that handles the cellRanger output bam perfectly...

All this to let you know that you should probably indicate, both on velocyto.org and on the GitHub repository of the R package, that preprocessing of data sequenced with 10X Genomics should be done using the Python package.

I would have appreciated it ;)

All the best,

Charles

Installing error in ubuntu

Hi, I am trying to install velocyto, but when I run install_github("velocyto-team/velocyto.R"), my PC shows the error below.

/usr/bin/ld: cannot find -lboost_filesystem
/usr/bin/ld: cannot find -lboost_system
collect2: error: ld returned 1 exit status
/usr/local/lib/R/share/make/shlib.mk:6: recipe for target 'velocyto.R.so' failed
make: *** [velocyto.R.so] Error 1
ERROR: compilation failed for package ‘velocyto.R’

  • removing ‘/usr/local/lib/R/site-library/velocyto.R’
    run(bin, args = real_cmdargs, stdout_line_callback = real_callback(stdout), error:
    System command error

My PC environment is Ubuntu 16.04 (x86_64-pc-linux-gnu), and my R version is 3.4.1 (2017-06-30).

If you have any ideas, please let me know.
Thank you.

Is there a way to show legend on tsne embedding plot?

Hi there,

Thanks for your excellent work! I have three questions and hopefully you could help me out.

  1. I was wondering whether there is an input parameter of the show.velocity.on.embedding.cor() function in R that could add a legend/title to the output figures, rather than building it manually through plot() components?

  2. Regarding differential expression and bio-marker/cell-marker analysis, do you have any suggestions? I am having a hard time biologically interpreting those clusters. The pagoda2 differential expression function has no output for some of the clusters, and their heatmaps look terrible...

  3. Is there a way to plot only one targeted cluster's velocity on the figure?
     I tried changing the untargeted clusters' current and projected columns in the expression matrix (in rvel.cd) to the same value, but it did not succeed.

Thanks again.

Best.

*** caught bus error *** address 0x7f0c17a7db00, cause 'non-existent physical address'

Hi,

I got a loom matrix, and I was trying to follow the same steps as here to begin my analysis: http://pklab.med.harvard.edu/velocyto/notebooks/R/chromaffin2.nb.html

But I got an error there:
rvel.qf <- gene.relative.velocity.estimates(emat,nmat,deltaT=1,kCells = 5,fit.quantile = fit.quantile)

calculating cell knn ... done
calculating convolved matrices ... done
fitting gamma coefficients ...
*** caught bus error ***
address 0x7f0c17a7db00, cause 'non-existent physical address'

Traceback:
1: lm.wfit(x, y, w, offset = offset, singular.ok = singular.ok, ...)
2: lm(n ~ e, data = df, weights = pw)
3: FUN(X[[i]], ...)
4: lapply(X = S, FUN = FUN, ...)
5: doTryCatch(return(expr), name, parentenv, handler)
6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
7: tryCatchList(expr, classes, parentenv, handlers)
8: tryCatch(expr, error = function(e) { call <- conditionCall(e) if (!is.null(call)) { if (identical(call[[1L]], quote(doTryCatch))) call <- sys.call(-4L) dcall <- deparse(call)[1L] prefix <- paste("Error in", dcall, ": ") LONG <- 75L msg <- conditionMessage(e) sm <- strsplit(msg, "\n")[[1L]] w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w") if (is.na(w)) w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], type = "b") if (w > LONG) prefix <- paste0(prefix, "\n ") } else prefix <- "Error : " msg <- paste0(prefix, conditionMessage(e), "\n") .Internal(seterrmessage(msg[1L])) if (!silent && identical(getOption("show.error.messages"), TRUE)) { cat(msg, file = outFile) .Internal(printDeferredWarnings()) } invisible(structure(msg, class = "try-error", condition = e))})
9: try(lapply(X = S, FUN = FUN, ...), silent = TRUE)
10: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
11: FUN(X[[i]], ...)
12: lapply(seq_len(cores), inner.do)
13: parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } }}, mc.cores = n.cores, mc.preschedule = T)
14: do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T))
15: data.frame(do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T)))
16: gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 5, fit.quantile = fit.quantile)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection:
2: lm(n ~ e, data = df, weights = pw)
3: FUN(X[[i]], ...)
4: lapply(X = S, FUN = FUN, ...)
5: doTryCatch(return(expr), name, parentenv, handler)
6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
7: tryCatchList(expr, classes, parentenv, handlers)
8: tryCatch(expr, error = function(e) { call <- conditionCall(e) if (!is.null(call)) { if (identical(call[[1L]], quote(doTryCatch))) call <- sys.call(-4L) dcall <- deparse(call)[1L] prefix <- paste("Error in", dcall, ": ") LONG <- 75L msg <- conditionMessage(e) sm <- strsplit(msg, "\n")[[1L]] w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w") if (is.na(w)) w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], type = "b") if (w > LONG) prefix <- paste0(prefix, "\n ") } else prefix <- "Error : " msg <- paste0(prefix, conditionMessage(e), "\n") .Internal(seterrmessage(msg[1L])) if (!silent && identical(getOption("show.error.messages"), TRUE)) { cat(msg, file = outFile) .Internal(printDeferredWarnings()) } invisible(structure(msg, class = "try-error", condition = e))})
9: try(lapply(X = S, FUN = FUN, ...), silent = TRUE)
10: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
11: FUN(X[[i]], ...)
12: lapply(seq_len(cores), inner.do)
13: parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } }}, mc.cores = n.cores, mc.preschedule = T)
14: do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T))
15: data.frame(do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T)))
16: gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 5, fit.quantile = fit.quantile)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection:
*** caught bus error ***
address 0x7f0c17a7db00, cause 'non-existent physical address'

Traceback:
1: lm.wfit(x, y, w, offset = offset, singular.ok = singular.ok, ...)
2: lm(n ~ e, data = df, weights = pw)
3: FUN(X[[i]], ...)
4: lapply(X = S, FUN = FUN, ...)
5: doTryCatch(return(expr), name, parentenv, handler)
6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
7: tryCatchList(expr, classes, parentenv, handlers)
8: tryCatch(expr, error = function(e) { call <- conditionCall(e) if (!is.null(call)) { if (identical(call[[1L]], quote(doTryCatch))) call <- sys.call(-4L) dcall <- deparse(call)[1L] prefix <- paste("Error in", dcall, ": ") LONG <- 75L msg <- conditionMessage(e) sm <- strsplit(msg, "\n")[[1L]] w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w") if (is.na(w)) w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], type = "b") if (w > LONG) prefix <- paste0(prefix, "\n ") } else prefix <- "Error : " msg <- paste0(prefix, conditionMessage(e), "\n") .Internal(seterrmessage(msg[1L])) if (!silent && identical(getOption("show.error.messages"), TRUE)) { cat(msg, file = outFile) .Internal(printDeferredWarnings()) } invisible(structure(msg, class = "try-error", condition = e))})
9: try(lapply(X = S, FUN = FUN, ...), silent = TRUE)
10: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
11: FUN(X[[i]], ...)
12: lapply(seq_len(cores), inner.do)
13: parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } }}, mc.cores = n.cores, mc.preschedule = T)
14: do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T))
15: data.frame(do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T)))
16: gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 5, fit.quantile = fit.quantile)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection:
*** caught bus error ***
address 0x7f0c17a7db00, cause 'non-existent physical address'

Traceback:
1: lm.wfit(x, y, w, offset = offset, singular.ok = singular.ok, ...)
2: lm(n ~ e, data = df, weights = pw)
3: FUN(X[[i]], ...)
4: lapply(X = S, FUN = FUN, ...)
5: doTryCatch(return(expr), name, parentenv, handler)
6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
7: tryCatchList(expr, classes, parentenv, handlers)
8: tryCatch(expr, error = function(e) { call <- conditionCall(e) if (!is.null(call)) { if (identical(call[[1L]], quote(doTryCatch))) call <- sys.call(-4L) dcall <- deparse(call)[1L] prefix <- paste("Error in", dcall, ": ") LONG <- 75L msg <- conditionMessage(e) sm <- strsplit(msg, "\n")[[1L]] w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w") if (is.na(w)) w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], type = "b") if (w > LONG) prefix <- paste0(prefix, "\n ") } else prefix <- "Error : " msg <- paste0(prefix, conditionMessage(e), "\n") .Internal(seterrmessage(msg[1L])) if (!silent && identical(getOption("show.error.messages"), TRUE)) { cat(msg, file = outFile) .Internal(printDeferredWarnings()) } invisible(structure(msg, class = "try-error", condition = e))})
9: try(lapply(X = S, FUN = FUN, ...), silent = TRUE)
10: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
11: FUN(X[[i]], ...)
12: lapply(seq_len(cores), inner.do)
13: parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } }}, mc.cores = n.cores, mc.preschedule = T)
14: do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T))
15: data.frame(do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T)))
16: gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 5, fit.quantile = fit.quantile)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection:
*** caught bus error ***
address 0x7f0c17a7db00, cause 'non-existent physical address'

Traceback:
1: lm.wfit(x, y, w, offset = offset, singular.ok = singular.ok, ...)
2: lm(n ~ e, data = df, weights = pw)
3: FUN(X[[i]], ...)
4: lapply(X = S, FUN = FUN, ...)
5: doTryCatch(return(expr), name, parentenv, handler)
6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
7: tryCatchList(expr, classes, parentenv, handlers)
8: tryCatch(expr, error = function(e) { call <- conditionCall(e) if (!is.null(call)) { if (identical(call[[1L]], quote(doTryCatch))) call <- sys.call(-4L) dcall <- deparse(call)[1L] prefix <- paste("Error in", dcall, ": ") LONG <- 75L msg <- conditionMessage(e) sm <- strsplit(msg, "\n")[[1L]] w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w") if (is.na(w)) w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], type = "b") if (w > LONG) prefix <- paste0(prefix, "\n ") } else prefix <- "Error : " msg <- paste0(prefix, conditionMessage(e), "\n") .Internal(seterrmessage(msg[1L])) if (!silent && identical(getOption("show.error.messages"), TRUE)) { cat(msg, file = outFile) .Internal(printDeferredWarnings()) } invisible(structure(msg, class = "try-error", condition = e))})
9: try(lapply(X = S, FUN = FUN, ...), silent = TRUE)
10: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
11: FUN(X[[i]], ...)
12: lapply(seq_len(cores), inner.do)
13: parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } }}, mc.cores = n.cores, mc.preschedule = T)
14: do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T))
15: data.frame(do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T)))
16: gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 5, fit.quantile = fit.quantile)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection:
*** caught bus error ***
address 0x7f0c17a7db00, cause 'non-existent physical address'

Traceback:
1: lm.wfit(x, y, w, offset = offset, singular.ok = singular.ok, ...)
2: lm(n ~ e, data = df, weights = pw)
3: FUN(X[[i]], ...)
4: lapply(X = S, FUN = FUN, ...)
5: doTryCatch(return(expr), name, parentenv, handler)
6: tryCatchOne(expr, names, parentenv, handlers[[1L]])
7: tryCatchList(expr, classes, parentenv, handlers)
8: tryCatch(expr, error = function(e) { call <- conditionCall(e) if (!is.null(call)) { if (identical(call[[1L]], quote(doTryCatch))) call <- sys.call(-4L) dcall <- deparse(call)[1L] prefix <- paste("Error in", dcall, ": ") LONG <- 75L msg <- conditionMessage(e) sm <- strsplit(msg, "\n")[[1L]] w <- 14L + nchar(dcall, type = "w") + nchar(sm[1L], type = "w") if (is.na(w)) w <- 14L + nchar(dcall, type = "b") + nchar(sm[1L], type = "b") if (w > LONG) prefix <- paste0(prefix, "\n ") } else prefix <- "Error : " msg <- paste0(prefix, conditionMessage(e), "\n") .Internal(seterrmessage(msg[1L])) if (!silent && identical(getOption("show.error.messages"), TRUE)) { cat(msg, file = outFile) .Internal(printDeferredWarnings()) } invisible(structure(msg, class = "try-error", condition = e))})
9: try(lapply(X = S, FUN = FUN, ...), silent = TRUE)
10: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
11: FUN(X[[i]], ...)
12: lapply(seq_len(cores), inner.do)
13: parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } }}, mc.cores = n.cores, mc.preschedule = T)
14: do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T))
15: data.frame(do.call(rbind, parallel::mclapply(sn(rownames(conv.emat.norm)), function(gn) { if (!is.null(smat)) { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells]), o = sfit[gn, "o"]) if (zero.offset) df$o <- 0 } else { df <- data.frame(n = (conv.nmat.norm[gn, steady.state.cells]), e = (conv.emat.norm[gn, steady.state.cells])) o <- 0 if (!zero.offset) { zi <- df$e < 1/conv.emat.cs[steady.state.cells] if (any(zi)) { o <- sum(df$n[zi])/(sum(zi) + 1) } } df$o <- o } if (is.null(fit.quantile)) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = df$e^4 + df$n^4) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { if (diagonal.quantiles) { emax <- quantile(df$e, p = 0.99) nmax <- quantile(df$n, p = 0.99) if (emax == 0) emax <- max(max(df$e), 0.001) if (nmax == 0) nmax <- max(max(df$n), 0.001) x <- df$e/emax + df$n/nmax eq <- quantile(x, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat)) { pw <- as.numeric(x >= eq[2]) } else { pw <- as.numeric(x >= eq[2] | x <= eq[1]) } } else { eq <- quantile(df$e, p = c(fit.quantile, 1 - fit.quantile)) if (!is.null(smat) || zero.offset) { pw <- as.numeric(df$e >= eq[2]) } else { pw <- as.numeric(df$e >= eq[2] | df$e <= eq[1]) } } if (!is.null(smat) || zero.offset) { d <- lm(n ~ e + offset(o) + 0, data = df, weights = pw) return(c(o = df$o[1], g = as.numeric(coef(d)[1]), r = cor(df$e, df$n, method = "spearman"))) } else { d <- lm(n ~ e, data = df, weights = pw) return(c(o = as.numeric(coef(d)[1]), g = as.numeric(coef(d)[2]), r = cor(df$e, df$n, method = "spearman"))) } } }, mc.cores = n.cores, mc.preschedule = T)))
16: gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 5, fit.quantile = fit.quantile)

Possible actions:
1: abort (with core dump, if enabled)
2: normal R exit
3: exit R without saving workspace
4: exit R saving workspace
Selection:

My sessionInfo()

R version 3.4.3 (2017-11-30)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Scientific Linux release 6.7 (Carbon)

Matrix products: default
BLAS: /ifs/illumina/vagnec/Analyses/software/miniconda/envs/velocytoR/lib/R/lib/libRblas.so
LAPACK: /ifs/illumina/vagnec/Analyses/software/miniconda/envs/velocytoR/lib/R/lib/libRlapack.so

locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] velocyto.R_0.6 Matrix_1.2-14 RevoUtils_10.0.8
[4] RevoUtilsMath_10.0.1

loaded via a namespace (and not attached):
[1] Rcpp_0.12.14 lattice_0.20-35 MASS_7.3-50
[4] grid_3.4.3 nlme_3.1-137 h5_0.9.9
[7] tools_3.4.3 Biobase_2.38.0 parallel_3.4.3
[10] compiler_3.4.3 BiocGenerics_0.24.0 cluster_2.0.7-1
[13] mgcv_1.8-24 pcaMethods_1.70.0

Thank you in advance!

Installation Error: clang: error: unsupported option '-fopenmp'

Installation Error:

> options(tz="America/Los_Angeles")
> library(devtools)
> install_github("velocyto-team/velocyto.R")
Downloading GitHub repo velocyto-team/velocyto.R@master
from URL https://api.github.com/repos/velocyto-team/velocyto.R/zipball/master
Installing velocyto.R
Running command /Library/Frameworks/R.framework/Resources/bin/R 
Arguments:
CMD
INSTALL
/private/var/folders/5m/jlrrrsfd719d0ks0qg48_pg80000gn/T/Rtmpv4UBqe/devtools1d0851a3bc6b/velocyto-team-velocyto.R-9741595
--library=/Library/Frameworks/R.framework/Versions/3.4/Resources/library
--install-tests

* installing *source* package ‘velocyto.R’ ...
Warning in as.POSIXlt.POSIXct(x, tz) :
  unknown timezone 'default/America/Los_Angeles'
** libs
clang++ -I/Library/Frameworks/R.framework/Resources/include -DNDEBUG  -I"/Library/Frameworks/R.framework/Versions/3.4/Resources/library/Rcpp/include" -I"/Library/Frameworks/R.framework/Versions/3.4/Resources/library/RcppArmadillo/include" -I/usr/local/include  -std=c++11 -fopenmp -fPIC  -Wall -g -O2  -c RcppExports.cpp -o RcppExports.o
clang: error: unsupported option '-fopenmp'
make: *** [RcppExports.o] Error 1
ERROR: compilation failed for package ‘velocyto.R’
* removing ‘/Library/Frameworks/R.framework/Versions/3.4/Resources/library/velocyto.R’
 
Installation failed: run(bin, args = real_cmdargs, stdout_line_callback = real_callback(stdout),      stderr_line_callback = real_callback(stderr), stdout_callback = real_block_callback,      stderr_callback = real_block_callback, echo_cmd = echo, echo = show,      spinner = spinner, error_on_status = fail_on_status, timeout = timeout) : System command error
> 

Session info:

─ Session info ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 setting  value                       
 version  R version 3.4.1 (2017-06-30)
 os       macOS High Sierra 10.13     
 system   x86_64, darwin15.6.0        
 ui       RStudio                     
 language (EN)                        
 collate  en_US.UTF-8                 
 tz       <NA>                        
 date     2017-10-24                  

─ Packages ───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
 package     * version     date       source                            
 assertthat    0.2.0       2017-04-11 CRAN (R 3.4.0)                    
 backports     1.1.1       2017-09-25 CRAN (R 3.4.2)                    
 callr         1.0.0.9000  2017-08-23 Github (mangothecat/callr@2dffbbe)
 clisymbols    1.2.0       2017-05-21 CRAN (R 3.4.0)                    
 crayon        1.3.4       2017-09-16 CRAN (R 3.4.1)                    
 curl          3.0         2017-10-06 CRAN (R 3.4.2)                    
 debugme       1.1.0       2017-10-22 CRAN (R 3.4.2)                    
 devtools    * 1.13.3.9000 2017-08-23 Github (hadley/devtools@5517fdb)  
 digest        0.6.12      2017-01-27 CRAN (R 3.4.0)                    
 git2r         0.19.0      2017-07-19 CRAN (R 3.4.1)                    
 httr          1.3.1       2017-08-20 CRAN (R 3.4.1)                    
 memoise       1.1.0       2017-04-21 CRAN (R 3.4.0)                    
 pkgbuild      0.0.0.9000  2017-08-23 Github (r-lib/pkgbuild@6574561)   
 pkgload       0.0.0.9000  2017-08-23 Github (r-pkgs/pkgload@558aaac)   
 processx      2.0.0.1     2017-07-30 CRAN (R 3.4.1)                    
 R6            2.2.2       2017-06-17 CRAN (R 3.4.0)                    
 rlang         0.1.2.9000  2017-09-06 Github (tidyverse/rlang@6a589ec)  
 rprojroot     1.2         2017-01-16 CRAN (R 3.4.0)                    
 rstudioapi    0.7         2017-09-07 CRAN (R 3.4.1)                    
 sessioninfo   1.0.1       2017-08-23 Github (r-lib/sessioninfo@e813de4)
 usethis     * 1.0.0       2017-10-22 CRAN (R 3.4.2)                    
 withr         2.0.0       2017-07-28 CRAN (R 3.4.1)      

Any advice would be appreciated.

Error with read.smartseq2.bams

When I run read.smartseq2.bams as noted in the documentation, I get the following error:

dat <- read.smartseq2.bams(files,"refFlat.txt",n.cores=40)
reading gene annotation ... done ( 28051 genes)
parsing exon information ... done
reading in 2688 bam files ... done
estimating gene counts ... done
adjusting gene annotation based on expressed regions ... Error in base::colSums(x, na.rm = na.rm, dims = dims, ...) :
'x' must be numeric

I am using GRCh38 and downloaded refFlat.txt from the UCSC browser; it has 11 columns. Any insight?

RNA velocity plot has no color for the cells

I used show.velocity.on.embedding.cor() to produce an RNA velocity plot with the tSNE embedding, as below.
show.velocity.on.embedding.cor(emb,rvel.cd,n=200,scale='sqrt',cell.colors=ac(cell.colors,alpha=0.5),cex=0.8,arrow.scale=3,show.grid.flow=TRUE,min.grid.cell.mass=0.5,grid.n=40,arrow.lwd=1,do.par=F,cell.border.alpha = 0.1)

My plot shows circles for the cells, but without any fill color.
The velocyto version is 0.6, and RStudio is 1.1.453.
My R version is as below.
platform x86_64-pc-linux-gnu
arch x86_64
os linux-gnu
system x86_64, linux-gnu
status
major 3
minor 5.0
year 2018
month 04
day 23
svn rev 74626
language R
version.string R version 3.5.0 (2018-04-23)
nickname Joy in Playing

Any help would be largely appreciated!

Hong
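A common cause of uncolored cells is a cell.colors vector that is unnamed, or whose names do not match rownames(emb). A minimal sketch (not a confirmed fix), assuming a hypothetical factor `clusters` of cluster assignments named by cell barcode:

```r
# Hypothetical: `clusters` is a factor named by the same cell barcodes
# used as rownames(emb); build a named color vector from it.
cell.colors <- setNames(rainbow(nlevels(clusters))[clusters], names(clusters))

# Sanity check: every embedded cell should have a color entry.
stopifnot(all(rownames(emb) %in% names(cell.colors)))

show.velocity.on.embedding.cor(emb, rvel.cd, n = 200, scale = 'sqrt',
                               cell.colors = ac(cell.colors, alpha = 0.5),
                               cex = 0.8, arrow.scale = 3)
```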

Use hdf5r rather than h5

Problem

The h5 package dependency is no longer available for installation, as noted in mannau/h5@09382bb. The author has developed a replacement, hdf5r.

I was wondering if your team could update the package to use it.

Possible solution suggested by author

See discussion here: #30 (comment)

Env

  • R version 3.5.1
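For reference, loom files written by velocyto.py can be read with hdf5r directly; a minimal sketch (the file name is hypothetical, and depending on HDF5 vs. R index order a transpose of the matrices may be needed):

```r
# Sketch: reading velocyto.py loom layers with hdf5r instead of the
# retired h5 package. Loom files store genes along one axis and cells
# along the other.
library(hdf5r)

f <- H5File$new("mouse.loom", mode = "r")
spliced   <- f[["layers/spliced"]][, ]    # spliced counts matrix
unspliced <- f[["layers/unspliced"]][, ]  # unspliced counts matrix
genes <- f[["row_attrs/Gene"]][]          # gene names
cells <- f[["col_attrs/CellID"]][]        # cell barcodes
f$close_all()
```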

How to feed dimensionality-reduced data to velocyto?

Hi,
I have reduced a single-cell gene expression matrix using canonical correlation analysis (CCA), which I needed for batch correction. I used, for example, 5 dimensions (as shown below) to generate a tSNE plot with the Rtsne() function. I would like to use this reduced matrix to estimate velocity, since I cannot use the original gene matrix because of the batch effect. Is that possible?

Following matrix shows 5 dimensions (columns) of CCA and cell-barcodes (rows).

                     ACC1       ACC2        ACC3       ACC4       ACC5
AAACCTGAGAGCCTAG  0.2912968  1.9218620 -0.07594450 0.07920177  0.0214002
AAACCTGAGTGGCACA  1.3125555 -1.0346728  0.56284450 0.01348247 -0.1927503
AAACCTGCAATAACGA -0.3517998 -0.7054918  0.21015822 0.75798072  0.2751296
AAACCTGGTAGCTAAA  0.9580736 -0.5992308  0.01396823 0.57237873 -0.1930334

Thanks a lot in advance.
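The velocity fit itself needs spliced/unspliced counts rather than a reduced matrix, but the batch-corrected space can still drive the cell pooling through the cell.dist argument used elsewhere on this page. A sketch under that assumption, with `cca`, `emat`, and `nmat` assumed to exist:

```r
# Sketch: keep raw spliced (emat) and unspliced (nmat) counts for the
# gamma fits, but compute the cell-cell distance in the batch-corrected
# CCA space (`cca`: cells x components matrix).
cell.dist <- as.dist(1 - cor(t(cca)))  # correlation distance between cells

rvel <- gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 25,
                                         cell.dist = cell.dist,
                                         fit.quantile = 0.02)
```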

Error when projecting onto existing embedding

I'm getting an error message when I try to project the velocity field onto an existing embedding. As far as I can tell, the format of my "emb" matrix is the same as the example data file. The number of cells matches the number of cells in the "gvel" object. Any help would be much appreciated.
Thanks,
Josh
dim(emb)
[1] 244 2
dim(gvel$projected)
[1] 5061 244
show.velocity.on.embedding.cor(emb,gvel,n=100,scale='sqrt',cell.colors=cell_clusters,arrow.lwd=1)
delta projections ... sqrt knn ... transition probs ... done
Error in Matrix::colSums(di * tp[, i]) :
dims [product 2] do not match the length of object [244]
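One thing worth ruling out is a silent name mismatch: even when the dimensions agree, the embedding rows and velocity columns must refer to the same cells. A diagnostic sketch (not a confirmed fix):

```r
# Sketch: align embedding and velocity object by shared cell names
# before plotting; a name mismatch can surface as dimension errors
# deep inside the plotting code.
common <- intersect(rownames(emb), colnames(gvel$projected))
stopifnot(length(common) > 0)

emb2 <- emb[common, , drop = FALSE]
show.velocity.on.embedding.cor(emb2, gvel, n = 100, scale = 'sqrt',
                               arrow.lwd = 1)
```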

installation error

Hi ,
I tried to install velocyto.R through GitHub, but got the following error.
I have installed a number of packages from github, but this is the first time I had this error.

I really appreciate any help.


Downloading GitHub repo velocyto-team/velocyto.R@master
from URL https://api.github.com/repos/velocyto-team/velocyto.R/zipball/master
Installing velocyto.R
'/home/garret/anaconda3/lib/R/bin/R' --no-site-file --no-environ --no-save
--no-restore --quiet CMD INSTALL
'/tmp/RtmpEPZ1wQ/devtoolsfc0249cc7bb/velocyto-team-velocyto.R-666e1db'
--library='/home/garret/anaconda3/lib/R/library' --install-tests
Installation failed:Command failed (1)
Installation failed: Command failed (1)


sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Ubuntu 18.04.1 LTS

Matrix products: default
BLAS: /home/garret/anaconda3/lib/R/lib/libRblas.so
LAPACK: /home/garret/anaconda3/lib/R/lib/libRlapack.so

locale:
[1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
[3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
[5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
[7] LC_PAPER=en_US.UTF-8 LC_NAME=C
[9] LC_ADDRESS=C LC_TELEPHONE=C
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] RevoUtils_11.0.1 RevoUtilsMath_11.0.0

loaded via a namespace (and not attached):
[1] Rcpp_0.12.18 digest_0.6.15 crayon_1.3.4 IRdisplay_0.5.0
[5] repr_0.15.0 jsonlite_1.5 magrittr_1.5 evaluate_0.11
[9] stringi_1.2.4 uuid_0.1-2 IRkernel_0.8.11 tools_3.5.1
[13] stringr_1.3.1 compiler_3.5.1 base64enc_0.1-3 pbdZMQ_0.3-3
[17] htmltools_0.3.6

Mount directory to docker container

Hi, velocyto team,

I successfully installed velocyto.R with Docker. But I ran into a problem when mounting loom data into the container. Here are my commands:

docker run --name velocyto -it velocyto --mount type=bind,source=/Users/fengqidi/Chromaffin/,target=/

The error is :
/bin/bash: --mount: invalid option

If I change "--mount" to "-v" and modify the command to:
docker run --name velocyto -it velocyto -v /Users/fengqidi/Chromaffin/:/

The error:
/bin/bash: /Users/fengqidi/Chromaffin/:/: No such file or directory

Any suggestions? Thank you very much!
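Note that `docker run` options must come before the image name; anything after the image is passed as arguments to the container's entrypoint (`/bin/bash` here), which explains both errors. Mounting onto `/` is also not allowed; a subdirectory target such as `/data` (hypothetical) works:

```shell
# Options precede the image name; mount into a subdirectory, not /.
docker run --name velocyto -it \
  --mount type=bind,source=/Users/fengqidi/Chromaffin/,target=/data \
  velocyto
```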

small and sparse velocity arrows on PCA

I have been applying this velocity package to my own data and found that displaying velocity on PCA seems to generate smaller and sparser arrows than on t-SNE. I used the same velocity calculation and only changed the dimension-reduction embedding, as shown in the code below. Could you explain why this happens?

get the dimension reduction data

emb <- r$embeddings$PCA$tSNE
emb_pca <- r$reductions$PCA

calculate velocity

rvel.cd <- gene.relative.velocity.estimates(emat,nmat,deltaT=1,kCells=25,cell.dist=cell.dist,fit.quantile=fit.quantile)

show rna velocity on pagoda t-sne

show.velocity.on.embedding.cor(emb,rvel.cd,n=200,scale='sqrt',cell.colors=ac(cell.colors_fullseq,alpha=0.5),cex=0.8,arrow.scale=3,show.grid.flow=TRUE,min.grid.cell.mass=0.5,grid.n=40,arrow.lwd=1,do.par=T,cell.border.alpha = 0.1)

show rna velocity on the pagoda pca

show.velocity.on.embedding.cor(emb_pca,rvel.cd,n=200,scale='sqrt',cell.colors=ac(cell.colors_fullseq,alpha=0.5),cex=0.8,arrow.scale=3,show.grid.flow=TRUE,min.grid.cell.mass=0.5,grid.n=40,arrow.lwd=1,do.par=T,cell.border.alpha = 0.1)

bug in read.smartseq2.bams function

I have mapped reads from a SMART-seq2 experiment to a bam file for each cell (using mm10). When I try to estimate the necessary matrices using the read.smartseq2.bams function and a genes.refFlat file, I get the following error:

Error in dimnames(x) <- dn :
length of 'dimnames' [1] not equal to array extent

I believe the error is from the lengthstats2 function. After estimating lf, I found that the dimensions of the lf variable differ from those of the genes variable:

Browse[4]> dim(genes)
[1] 24414 8
Browse[4]> dim(lf)
[1] 15258 3

This error may also relate to the fact that the dimension of exons is different from that of genes.

Browse[4]> dim(exons)
[1] 40418 4

Any thoughts on this?

Samples without UMI

Will it also be possible to use C1 data from the SMARTer protocol, which has no UMIs incorporated?
I have run the source code from the bam files line by line, setting min.umi.reads=0, but at the moment my final matrices (emat, iomat and smat) have only 1 row each, which I think is not correct:

str(emat)
Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
..@ i : int [1:924764] 14 114 116 118 119 195 242 266 269 285 ...
..@ p : int [1:97] 0 13459 28033 35010 41997 57568 72667 87057 92810 107753 ...
..@ Dim : int [1:2] 196520 96
..@ Dimnames:List of 2
.. ..$ : chr [1:196520] "ENST00000361727.3" "ENST00000539563.1" "ENST00000376104.2" "ENST00000543673.1" ...
.. ..$ : chr [1:96] "1_c11_1-639_cntr" "1_c14_2-0015_cntr" "1_c59_0-7925_HS" "1_small_c62_0-788_HS" ...
..@ x : num [1:924764] 1 7 7 7 4 15 40 25 13 183 ...
..@ factors : list()

I kept other parameters as default. Is there anything special to do for this data?

Thank you in advance.
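Since SMART-seq2 data also lacks UMIs, the read.smartseq2.bams() entry point shown in the tutorials may be the more appropriate path for UMI-free protocols than the UMI-based readers; a sketch (file and refFlat paths are hypothetical):

```r
# Sketch: read.smartseq2.bams() counts reads over exons/introns without
# requiring UMIs; paths below are placeholders.
files <- c("cell1.bam", "cell2.bam")
dat <- read.smartseq2.bams(files, "mm10.refFlat", n.cores = 4)
```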

Display velocity on existing PCA

I wonder whether there is a way to display velocity on an existing PCA (rows are cells; columns are PCs). It seems show.velocity.on.embedding.cor needs an embedding matrix (after tSNE or largeVis), while pca.velocity.plot cannot take a PCA as input and generates its own.
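For the 2D case at least, a workaround sketch: show.velocity.on.embedding.cor only needs a cells x 2 matrix with cell names as rownames, so the first two components of an existing PCA can serve as the embedding (`pca` below is a hypothetical cells x PCs matrix):

```r
# Sketch: use an existing PCA's first two components as the embedding.
emb.pca <- pca[, 1:2]  # rownames must be cell names matching rvel

show.velocity.on.embedding.cor(emb.pca, rvel, n = 100, scale = 'sqrt',
                               arrow.scale = 3, arrow.lwd = 1)
```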

Non-deterministic results with velocyto R

I have found that running gene.relative.velocity.estimates() and then show.velocity.on.embedding.cor() on the same data set and with the same tSNE embedding produces a slightly different velocity field every time (this is using the grid arrow option). This is not true of the same analysis in the python implementation.

I can't figure out why the results would be non-deterministic, could you provide any insight into what's going on?

Thank you,
Sasha
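If the variation stems from R's random number generator (for example in sampling steps), fixing the seed before each call is worth trying; whether this fully removes the run-to-run differences here is an open question, since parallel workers may draw from independent RNG streams:

```r
# Sketch: pin the RNG state before each stochastic step.
set.seed(1)
rvel <- gene.relative.velocity.estimates(emat, nmat, deltaT = 1, kCells = 25,
                                         fit.quantile = 0.02)

set.seed(1)
show.velocity.on.embedding.cor(emb, rvel, n = 200, scale = 'sqrt',
                               show.grid.flow = TRUE, grid.n = 40)
```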

gene.relative.velocity.estimates - error: sort_index(): detected NaN

Dear Velocyto.R Developer:

I tried the example at http://pklab.med.harvard.edu/velocyto/notebooks/R/chromaffin.nb.html. The demo data works fine, but when I try it with my own data of about 600 bam files, it gives the error below. Could you suggest how to resolve this?

rvel.qf <- gene.relative.velocity.estimates(emat,nmat,deltaT=1,kCells = 5,fit.quantile = 0.02)
calculating cell knn ...
error:
error:
error: sort_index(): detected NaN
error:
error: sort_index(): detected NaN

error: sort_index(): detected NaN
terminate called after throwing an instance of 'std::logic_error'
terminate called recursively
terminate called recursively

FYI, I tried splitting the 600 bams into two groups of about 300 each, and velocyto.R worked fine for both. The error only comes up when I run them all together.

thanks,
Li

Generating a refFlat from gtf (human annotation)

Dear Velocyto team,

I was wondering if you have a tool or a command line to generate a refFlat from a GTF. I used the gtfToGenePred tool, but I was not able to create the right file for the read.smartseq2.bams() function.
Here is my very simple command line:

gtfToGenePred gencode.v27.chr_patch_hapl_scaff.annotation.gtf h38.genes.refFlat

Can you please let me know the right way to create refFlat files? velocyto is a great tool, by the way.

Thank you!
Ivan
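One commonly used recipe (not velocyto-specific; the flags belong to the UCSC gtfToGenePred tool) converts the GTF to extended genePred and then prepends the gene symbol, producing the 11-column refFlat layout:

```shell
# genePredExt column 12 holds the gene symbol (name2); refFlat is that
# symbol followed by the 10 standard genePred columns.
gtfToGenePred -genePredExt -geneNameAsName2 \
  gencode.v27.chr_patch_hapl_scaff.annotation.gtf tmp.genePred

awk 'BEGIN{OFS="\t"} {print $12,$1,$2,$3,$4,$5,$6,$7,$8,$9,$10}' \
  tmp.genePred > h38.genes.refFlat
```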

Installation Error: recipe for target 'velocyto.R.so' failed

Installation Error:

Downloading GitHub repo velocyto-team/velocyto.R@master
from URL https://api.github.com/repos/velocyto-team/velocyto.R/zipball/master
Installing velocyto.R
'/usr/lib64/R/bin/R' --no-site-file --no-environ --no-save --no-restore --quiet CMD INSTALL  \
  '/tmp/RtmpHr7vfs/devtoolsfbe1d1f924c/velocyto-team-velocyto.R-3ad8f63'  \
  --library='/home/kravak/R/x86_64-redhat-linux-gnu-library/3.4' --install-tests 

* installing *source* package ‘velocyto.R’ ...
** libs
g++ -m64  -I/usr/include/R -DNDEBUG  -I"/home/kravak/R/x86_64-redhat-linux-gnu-library/3.4/Rcpp/include" -I"/home/kravak/R/x86_64-redhat-linux-gnu-library/3.4/RcppArmadillo/include" -I/usr/local/include  -std=c++11 -fopenmp -fpic  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic  -c RcppExports.cpp -o RcppExports.o
g++ -m64  -I/usr/include/R -DNDEBUG  -I"/home/kravak/R/x86_64-redhat-linux-gnu-library/3.4/Rcpp/include" -I"/home/kravak/R/x86_64-redhat-linux-gnu-library/3.4/RcppArmadillo/include" -I/usr/local/include  -std=c++11 -fopenmp -fpic  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic  -c points_within.cpp -o points_within.o
points_within.cpp: In function ‘SEXPREC* points_within2(SEXP, SEXP, SEXP, SEXP, SEXP, SEXP)’:
points_within.cpp:77:14: warning: ‘i_nv’ may be used uninitialized in this function [-Wmaybe-uninitialized]
    i_nv[*ki-1]++;
              ^
g++ -m64  -I/usr/include/R -DNDEBUG  -I"/home/kravak/R/x86_64-redhat-linux-gnu-library/3.4/Rcpp/include" -I"/home/kravak/R/x86_64-redhat-linux-gnu-library/3.4/RcppArmadillo/include" -I/usr/local/include  -std=c++11 -fopenmp -fpic  -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic  -c routines.cpp -o routines.o
routines.cpp: In function ‘arma::sp_mat balanced_knn(const mat&, int, int, bool, int)’:
routines.cpp:15:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<d.n_cols;i++) {
               ~^~~~~~~~~
routines.cpp:29:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<d.n_cols;i++) {
               ~^~~~~~~~~
routines.cpp:34:14: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     for(j=0;j<dsi.n_rows && p<k;j++) {
             ~^~~~~~~~~~~
routines.cpp:38:14: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
       if(l[m]>= maxl) { continue; } // element has already maxed out its neighbors
               
routines.cpp:42:9: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
     if(j==dsi.n_rows && p<k) {
        ~^~~~~~~~~~~~
routines.cpp:58:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<colptr.n_elem;i++) { colptr[i]=cc; cc+=k; }
               ~^~~~~~~~~~~~~~
routines.cpp: In function ‘arma::mat colDeltaCor(const mat&, const mat&, int)’:
routines.cpp:76:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<e.n_cols;i++) {
               ~^~~~~~~~~
routines.cpp: In function ‘arma::mat colDeltaCorSqrt(const mat&, const mat&, int)’:
routines.cpp:88:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<e.n_cols;i++) {
               ~^~~~~~~~~
routines.cpp: In function ‘arma::mat colDeltaCorLog10(const mat&, const mat&, double, int)’:
routines.cpp:101:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<e.n_cols;i++) {
               ~^~~~~~~~~
routines.cpp: In function ‘arma::mat colEuclid(const mat&, const mat&, int)’:
routines.cpp:117:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<e.n_cols;i++) {
               ~^~~~~~~~~
routines.cpp: In function ‘arma::mat embArrows(const mat&, const mat&, double, int)’:
routines.cpp:138:16: warning: comparison between signed and unsigned integer expressions [-Wsign-compare]
   for(int i=0;i<emb.n_rows;i++) {
               ~^~~~~~~~~~~
g++ -m64 -shared -L/usr/lib64/R/lib -o velocyto.R.so RcppExports.o points_within.o routines.o -lboost_filesystem -lboost_system -lstdc++ -L/usr/lib64/R/lib -lRlapack -L/usr/lib64/R/lib -lRblas -fopenmp -L/usr/lib64/R/lib -lR
/app/rh/devtoolset-6/root/usr/bin/../libexec/gcc/x86_64-redhat-linux/6.3.1/ld: cannot find -lboost_filesystem
/app/rh/devtoolset-6/root/usr/bin/../libexec/gcc/x86_64-redhat-linux/6.3.1/ld: cannot find -lboost_system
collect2: error: ld returned 1 exit status
/usr/share/R/make/shlib.mk:6: recipe for target 'velocyto.R.so' failed
make: *** [velocyto.R.so] Error 1
ERROR: compilation failed for package ‘velocyto.R’

Session Info:

setting  value                       
version  R version 3.4.1 (2017-06-30)
system   x86_64, linux-gnu           
ui       RStudio (1.0.136)           
language (EN)                        
collate  en_US.UTF-8                 
tz       <NA>                        
date     2017-10-25                  

Packages -------------------------------------------------------------------------------------
package       * version date       source        
base          * 3.4.1   2017-06-30 local         
BiocInstaller * 1.26.1  2017-10-25 Bioconductor  
compiler        3.4.1   2017-06-30 local         
curl            3.0     2017-10-06 CRAN (R 3.4.1)
datasets      * 3.4.1   2017-06-30 local         
devtools      * 1.13.3  2017-08-02 CRAN (R 3.4.1)
digest          0.6.12  2017-01-27 CRAN (R 3.4.1)
git2r           0.19.0  2017-07-19 CRAN (R 3.4.1)
graphics      * 3.4.1   2017-06-30 local         
grDevices     * 3.4.1   2017-06-30 local         
httr            1.3.1   2017-08-20 CRAN (R 3.4.1)
knitr           1.17    2017-08-10 CRAN (R 3.4.1)
memoise         1.1.0   2017-04-21 CRAN (R 3.4.1)
methods       * 3.4.1   2017-06-30 local         
R6              2.2.2   2017-06-17 CRAN (R 3.4.1)
stats         * 3.4.1   2017-06-30 local         
tools           3.4.1   2017-06-30 local         
utils         * 3.4.1   2017-06-30 local         
withr           2.0.0   2017-07-28 CRAN (R 3.4.1)

Can you advise a solution?

cell trajectory modelling - running out of memory

I've been trying to use the show.velocity.on.embedding.eu method on my scRNA-seq data, but it won't finish running. It throws an out-of-memory error on the "constructing path graph ..." step, even with 125G of memory available.

Do you have any ideas for how I can get it to work? Are there any parameters I can change that might reduce the memory requirements significantly?

`show.velocity.on.embedding.cor` calculations expensive - separate function?

When testing different visual parameters, it is pretty expensive to continually recalculate the delta projections, sqrt knn, transition probs, and so on. Is there a way to save the results of these calculations, or a separate function for them, so they can be reused when iterating on a plot?

Also, it might be nice to have the function return the underlying ggplot/base plotting data, so the results can be ported to the user's preferred graphics interface (for example, if I want to transpose the arrows onto a different ggplot of my own making).
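Partial support for the first request appears to exist already: the function reportedly returns its cell-cell correlation structure, which later calls accept through a `cc` argument so that only the plotting is redone. A sketch, assuming that interface:

```r
# First call computes and returns the correlation structure.
x <- show.velocity.on.embedding.cor(emb, rvel.cd, n = 200, scale = 'sqrt')

# Later calls reuse it while varying only the visual parameters.
show.velocity.on.embedding.cor(emb, rvel.cd, n = 200, scale = 'sqrt',
                               cc = x$cc, arrow.scale = 5, arrow.lwd = 2)
```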

Bulk RNA-seq data

Hi,

How about using velocyto with bulk data: how many replicates are necessary? I see in the circadian paper (the SRA025656 dataset you were using) that there is only 1 rep per condition. How should I set the parameters for such data? Should I run gene.relative.velocity.estimates with kCells=1? Any other parameter differences?

Also - any easy hacks to add labels (cell annotations) to the pca.velocity.plot?

Thanks,
-M

Missing license

Hi,

I'd like to package this for GNU Guix, but I noticed that none of the source files declare a license. Is this package Free Software and if so: what license applies to this package?

Thanks!

Seurat2 with Velocyto.R

Hello,

I have tested the velocyto.R package with Pagoda2 as described in the tutorials, but I would really like to use it with Seurat2.
How do I create the cell embedding, cell clustering, and an accurate cell-cell distance matrix using Seurat2?
Many thanks.
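A sketch of extracting the needed pieces from a Seurat v2 object (slot names follow that version's API; `seu` is a hypothetical object with tSNE and PCA already computed):

```r
# Embedding: cells x 2 matrix with cell names as rownames.
emb <- seu@dr$tsne@cell.embeddings

# Cluster labels per cell, as a named vector for coloring.
cell.colors <- as.character(seu@ident)
names(cell.colors) <- names(seu@ident)

# Cell-cell distance in the PCA space, analogous to the pagoda2 tutorial.
cell.dist <- as.dist(1 - cor(t(seu@dr$pca@cell.embeddings)))
```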

nonobvious error message

hi! I am running this code:

library(velocyto.R)
path <- "./fastq/STAR/"
files <- system(paste('find',path,'-name "sorted.bam" -print'),intern=T)
print(files)
names(files) <- gsub(".*\\/(.*).sorted.bam","\\1",files)
dat <- read.smartseq2.bams(files,"mm10.refFlat",n.cores=20)
saveRDS(dat,"velo.RData")

===================

and I get:

reading gene annotation ... done ( 40147 genes)
parsing exon information ... done
reading in 32 bam files ... done
estimating gene counts ... done
adjusting gene annotation based on expressed regions ... Error in base::colSums(x, na.rm = na.rm, dims = dims, ...) :
'x' must be numeric
Calls: read.smartseq2.bams -> -> ->
In addition: Warning messages:
1: In parallel::mclapply(bam.files, t.annotate.bam.reads, genes = genes, :
scheduled cores 13, 8, 12, 17, 7, 19 encountered errors in user code, all values of the jobs will be affected
2: In parallel::mclapply(cdl, t.get.estimates2, genes = genes, mc.cores = n.cores) :
scheduled cores 17, 13, 12, 8, 7, 19 encountered errors in user code, all values of the jobs will be affected
3: In parallel::mclapply(cdl, function(x) { :
scheduled cores 13, 12, 8, 7, 19, 17 encountered errors in user code, all values of the jobs will be affected
Execution halted

what might this mean?

Problem with precalculated tsne coordinates

When I run the program with tSNE.velocity.plot, it works fine with my rvel, but when I run show.velocity.on.embedding.cor I get the dots colored by cluster, and in the tSNE space I am aiming for, yet there are no velocity arrows. The rvel is clearly fine, though, because tSNE.velocity.plot on the exact same data shows them.

Any help would be appreciated!

Problem when installing h5 package, help !!!

Error: package or namespace load failed for ‘h5’ in dyn.load(file, DLLpath = DLLpath, ...):
unable to load shared object '/home/u2510/R/x86_64-pc-linux-gnu-library/3.5/h5/libs/h5.so':
/home/u2510/R/x86_64-pc-linux-gnu-library/3.5/h5/libs/h5.so: undefined symbol: _ZN2H56H5FileC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjRKNS_17FileCreatPropListERKNS_15FileAccPropListE
Error: loading failed
Execution halted
ERROR: loading failed

  • removing ‘/home/u2510/R/x86_64-pc-linux-gnu-library/3.5/h5’

Error installing velocyto.R

Following the instructions, I tried to install velocyto on my Mac with R version 3.5, but I got the following error.

In file included from RcppExports.cpp:4:
In file included from /Library/Frameworks/R.framework/Versions/3.5/Resources/library/RcppArmadillo/include/RcppArmadillo.h:31:
In file included from /Library/Frameworks/R.framework/Versions/3.5/Resources/library/RcppArmadillo/include/RcppArmadilloForward.h:26:
In file included from /Library/Frameworks/R.framework/Versions/3.5/Resources/library/Rcpp/include/RcppCommon.h:29:
In file included from /Library/Frameworks/R.framework/Versions/3.5/Resources/library/Rcpp/include/Rcpp/r/headers.h:59:
In file included from /Library/Frameworks/R.framework/Versions/3.5/Resources/library/Rcpp/include/Rcpp/platform/compiler.h:100:
In file included from /usr/local/Cellar/llvm/7.0.0/include/c++/v1/cmath:305:
/usr/local/Cellar/llvm/7.0.0/include/c++/v1/math.h:301:15: fatal error: 'math.h' file not found
#include_next <math.h>
^~~~~~~~
1 error generated.
make: *** [RcppExports.o] Error 1
ERROR: compilation failed for package 'velocyto.R'

  • removing '/Library/Frameworks/R.framework/Versions/3.5/Resources/library/velocyto.R'
    Installation failed: Command failed (1)

It says math.h was not found; I don't know why, since math.h actually exists in that directory.

Thank you in advance.

Error in show.velocity.on.embedding.eu

I'm trying to plot trajectories like those at the end of the chromaffin differentiation analysis tutorial. Here's my code:

show.velocity.on.embedding.eu(emb = clytia@[email protected], vel = clytia_vel, arrow.scale = 3, show.cell.trajectories = TRUE, nPcs = 50)

I did get it to work for show.velocity.on.embedding.cor, but for show.velocity.on.embedding.eu, I got this error:

Error in ncol(em) : object 'em' not found

I looked at the source code:

if(is.null(cell.colors)) { cell.colors <- ac(rep(1,ncol(em)),alpha=0.3); names(cell.colors) <- colnames(em) }
It refers to em in the first line of the function, but there is no argument called em. Do you actually mean emb? That's what I suspect, but please clarify.
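A workaround sketch while the apparent typo stands: supplying cell.colors explicitly skips the branch that references the undefined `em` (ac() is the package's alpha-color helper used elsewhere on this page):

```r
# Passing cell.colors avoids the is.null(cell.colors) branch that
# touches the undefined `em` variable.
show.velocity.on.embedding.eu(emb = emb, vel = vel, nPcs = 50,
                              arrow.scale = 3,
                              show.cell.trajectories = TRUE,
                              cell.colors = ac(rep(1, nrow(emb)), alpha = 0.3))
```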

Error parsing GTF

I get the following error message when I try to read in smart-seq2 bam files. It seems that velocyto doesn't like the GTF, but I can't figure out why. I obtained the GTF by downloading the hg19 "ucsc genes" track directly from the UCSC table browser. I also tried with the GTF that comes with the 10X Genomics annotation set and got a similar error message. The output is below. Any suggestions would be much appreciated. Thanks!
library(velocyto.R)
dat <- read.smartseq2.bams(c("scRNA-seq/D0HCF_A01/alignments.sort.bam","scRNA-seq/D0HCF_A02/alignments.sort.bam"),"velocyto_annotations/hg19_genes.gtf",n.cores=1)
reading gene annotation ... done ( 1461416 genes)
parsing exon information ... Error in [.data.frame(x, i, c(10, 11)) : undefined columns selected
