timbitz / isosceles
The Isoforms from Single-Cell; Long-read Expression Suite
License: GNU General Public License v3.0
Hello,
thank you for the Isosceles package. I was wondering whether it is possible to use results from the Nanopore wf-epi2me single-cell pipeline with Isosceles?
In particular, regarding the "find_isoswitch" function: it would be great if I didn't need "the Isosceles transcript" in order to make it work.
Best regards
Hello, I tried to run the scRNA analysis with your tool using the multiprocessing functionality, i.e. adjusting the ncpu parameter. However, when I monitor the run with the top command, CPU usage is capped at 100% (a single core). What could be the potential problem?
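Not an Isosceles-specific answer, but one thing worth ruling out first is whether multicore dispatch works at all on the machine. Assuming the ncpu parameter is handled through BiocParallel (common for Bioconductor tools, but an assumption here), a minimal, Isosceles-independent sanity check:

```r
library(BiocParallel)

# If multiprocessing works, each worker runs in its own process, so we
# should see more than one distinct PID across tasks.
# Note: MulticoreParam relies on forking and is not supported on Windows,
# where everything runs in one process - which would explain CPU usage
# capped at 100%.
param <- MulticoreParam(workers = 4)
pids <- unlist(bplapply(1:8, function(i) Sys.getpid(), BPPARAM = param))
n_workers_seen <- length(unique(pids))
n_workers_seen  # > 1 means forking works; 1 means everything is serial
```

If this prints 1, the bottleneck is the environment (e.g. Windows, or a container without fork support) rather than the tool.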
Hi, thanks for the tool and for the great documentation.
In prepare_bam_transcripts.R, line 118, the code verifies that the generated hash IDs are unique. However, the hash IDs generated from my dataset are not unique. After looking into the code, I realised this is because some tx_intron_positions values are identical despite having different tx_position values (different chromosomes). I believe the reason is that the strands are undetermined, causing overlaps, and hence the same hash_id is generated from the same tx_intron_positions.
I replaced make_hash(tx_intron_positions)
with make_hash(paste(tx_position, tx_intron_positions))
on line 112 to give a unique hash_id. Do you think this makes sense?
Remark: I also replaced make_hash(.data$intron_positions)
with make_hash(paste(.data$position, .data$intron_positions))
for consistency.
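To illustrate the collision, here is a small self-contained sketch. The make_hash below is only a stand-in for Isosceles' internal helper: any deterministic hash maps equal inputs to equal outputs, which is all the argument needs.

```r
# Stand-in for Isosceles' internal make_hash(); the real helper uses a
# proper digest, but any deterministic hash collides on equal inputs.
make_hash <- function(x) {
  vapply(x, function(s) {
    codes <- utf8ToInt(s)
    sprintf("%08x", sum(codes * seq_along(codes)) %% .Machine$integer.max)
  }, character(1))
}

# Two transcripts on different chromosomes that share the same intron
# coordinates (strand undetermined) collide under the old key:
tx_position <- c("chr1:100-500:*", "chr2:100-500:*")
tx_intron_positions <- c("200-300", "200-300")

old_ids <- make_hash(tx_intron_positions)                      # collides
new_ids <- make_hash(paste(tx_position, tx_intron_positions))  # unique
anyDuplicated(old_ids) > 0   # TRUE
anyDuplicated(new_ids) == 0  # TRUE
```

Prepending tx_position (which carries the chromosome) to the hash input is what breaks the tie between otherwise identical intron strings.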
Hi,
First, thanks for the careful development of this package.
I have been trying several aspects of Isosceles; however, I usually get stuck when I need to run DEXSeq. My dataset is very large, and runs take days, sometimes not even finishing the estimateDispersions step after a week. I have tried parallelising it with more CPUs (12) and filtering my dataset in many ways to reduce the number of events to test, but DEXSeq still runs for days. For instance, I ran the window approach with 1500 events and 480 window columns, and it took 2 days to finish.
I read that DEXSeq is not recommended for large datasets, and that approaches like satuRn or limma are suggested instead. I understand that satuRn only performs the DTU test, which is not exactly what is intended with DEXSeq (with testForDEU). Do you have a suggestion for dealing with bigger datasets (more than 1000 cells and more than 1000 events)? Could edgeR's diffSpliceDGE perhaps be adapted for this step?
Thanks,
Mariela
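For what it's worth, a rough sketch of what adapting edgeR's diffSpliceDGE might look like, run here on simulated transcript-level counts. All object and column names are illustrative, not Isosceles API, and whether this is statistically appropriate for Isosceles events is exactly the open question:

```r
library(edgeR)

# Simulated transcript-level counts: 200 transcripts from 50 genes
# (4 transcripts each), 6 pseudobulk samples in two groups.
set.seed(1)
counts <- matrix(rpois(200 * 6, lambda = 50), nrow = 200)
gene_ids <- rep(sprintf("gene%02d", 1:50), each = 4)
tx_ids <- sprintf("tx%03d", 1:200)
group <- factor(rep(c("A", "B"), each = 3))

dge <- DGEList(counts = counts,
               genes = data.frame(gene_id = gene_ids, tx_id = tx_ids))
dge <- calcNormFactors(dge)
design <- model.matrix(~ group)
dge <- estimateDisp(dge, design)
fit <- glmQLFit(dge, design)

# diffSpliceDGE tests for differential usage of transcripts within
# genes, conceptually similar to DEXSeq's testForDEU but typically
# much faster on large datasets.
sp <- diffSpliceDGE(fit, coef = 2, geneid = "gene_id", exonid = "tx_id")
top <- topSpliceDGE(sp, test = "gene", number = 5)
```

The gene-level Simes/F-test output from topSpliceDGE plays a role analogous to DEXSeq's per-gene DEU results, which is why it may be worth benchmarking on a filtered subset before committing to it.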