louiejtaylor / grabseqs

A utility for easy downloading of reads from next-gen sequencing repositories like NCBI SRA

License: MIT License
This is extremely unlikely to be an issue in practice, but if for some reason an individual were to download two accession numbers such that one was a substring of the other, pigz might clobber the shorter accession's files because of the way files are globbed for compression.
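A minimal sketch of the hazard (the accession names here are hypothetical, and `fnmatch` just stands in for whatever shell glob gets passed to pigz): if one queued accession is a prefix of another, the shorter accession's glob also matches the longer accession's files.

```python
import fnmatch

# Hypothetical accessions where one is a substring of the other
files = ["SRR100_1.fastq", "SRR100_2.fastq", "SRR1001_1.fastq"]

# A glob shaped like "<acc>*fastq" for the shorter accession...
matches = fnmatch.filter(files, "SRR100*fastq")

# ...also catches the longer accession's file
print(matches)
```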
Hey, I just came across grabseqs and at first glance it looks really similar to a tool I've been developing called geofetch -- just wondered if you had any interest in exploring the possibility of working together on this. Or perhaps you'd be interested in the idea of a PEP, which geofetch produces: a standardized way to represent the sample metadata downloaded from GEO. I haven't delved too deep into grabseqs yet, as I just found it, but I thought I'd reach out to see if we could make a connection and alert you to some related projects.
Multiple errors, including:
module 'numpy' has no attribute '__version__'
and
ImportError: Something is wrong with the numpy installation. While importing we detected an older version of numpy in ['/.../miniconda2/envs/sunbeam/lib/python3.6/site-packages/numpy']. One method of fixing this is to repeatedly uninstall numpy until none is found, then reinstall this version.
I suspect this is due to duplicated requirements/dependencies between setup.py (pip) and other packages grabbing numpy in their environment.yml (conda), although I'm not sure what the correct workaround is for this...
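One hedged workaround (a sketch, not tested against the actual sunbeam/grabseqs packaging): let a single package manager own numpy by pinning it in the conda environment.yml and dropping it from pip's install_requires, so pip never installs a second, conflicting copy.

```yaml
# Hypothetical environment.yml: numpy is installed by conda only
name: sunbeam
channels:
  - conda-forge
dependencies:
  - python=3.6
  - numpy>=1.16
  - pip
  - pip:
      - grabseqs   # grabseqs' setup.py would then omit numpy from install_requires
```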
(i.e. work on more than just CircleCI)
Hi,
I ran into this problem when using grabseqs.
Traceback (most recent call last):
  File "/media/home/user05/anaconda3/envs/python36/bin/grabseqs", line 11, in <module>
    sys.exit(main())
  File "/media/home/user05/anaconda3/envs/python36/lib/python3.6/site-packages/grabseqslib/__init__.py", line 58, in main
    acclist, metadata_agg = get_sra_acc_metadata(sra_identifier, args.outdir, args.list, not args.SRR_parsing, metadata_agg)
  File "/media/home/user05/anaconda3/envs/python36/lib/python3.6/site-packages/grabseqslib/sra.py", line 52, in get_sra_acc_metadata
    run_col = lines[0].index("Run")
ValueError: 'Run' is not in list
Could you please tell me how to solve this problem? Thanks.
Thanks for making this tool, it's a real time saver!
I'm attempting to download a list of SRS accessions, which was working fine at the start, but after a few hours it has been consistently erroring:
downloading SRR2192724 using fasterq-dump
2019-04-09T00:56:16 fasterq-dump.2.9.1 err: storage exhausted while creating directory within file system module - error with https open 'https://sra-download.ncbi.nlm.nih.gov/traces/sra33/SRR/002141/SRR2192724'
2019-04-09T00:56:16 fasterq-dump.2.9.1 err: **invalid accession 'SRR2192724'**
pigz: skipping: SRS475922/SRR2192724*fastq does not exist
SRA download for acc SRR2192724 failed, retrying 0 more times.
Traceback (most recent call last):
  File "/localscratch/EisenRa/miniconda2/bin/grabseqs", line 11, in <module>
    sys.exit(main())
  File "/localscratch/EisenRa/miniconda2/lib/python3.6/site-packages/grabseqslib/__init__.py", line 59, in main
    run_fasterq_dump(acc, args.retries, args.threads, args.outdir, args.force, args.fastqdump)
  File "/localscratch/EisenRa/miniconda2/lib/python3.6/site-packages/grabseqslib/sra.py", line 114, in run_fasterq_dump
    raise Exception("download for "+acc+" failed. fast(er)q-dump returned "+str(retcode)+", pigz returned "+str(rgzip)+".")
Exception: download for SRR2192724 failed. fast(er)q-dump returned 0, pigz returned 0.
It claims invalid accession, but the SRA file link is downloadable with wget. Is this some kind of cache error? I've got enough space on the disk.
Commands run:
while read SRS; do grabseqs sra -t 50 -m -o $SRS -r 3 $SRS; done < SRS.txt
where SRS.txt is a list of SRS accessions, one per line.
Best wishes,
Raphael
I thought I was missing some metadata in one of our own previously-submitted SRA datasets because it wasn't showing up in the CSV file, but then the SRA admins pointed out that it does show up on the web interface and the TSV file generated there, just not the version downloaded by grabseqs via the SRA CGI URL.
This is for BioProject PRJNA506241, where you can see the full metadata (columns like dsODN) when viewing it here:
https://trace.ncbi.nlm.nih.gov/Traces/study/?acc=PRJNA506241
But downloading via this URL gives only the core SRA columns and not the BioSample ones:
http://trace.ncbi.nlm.nih.gov/Traces/sra/sra.cgi?save=efetch&db=sra&rettype=runinfo&term=PRJNA506241
Looking closer, they're almost two completely different sets of columns, except for BioSample (and Consent, for some reason). Did something change server-side with this behavior, maybe? I also asked the SRA admins, so I'll post an update if I learn anything.
Saw this when a test was running:
downloading SRR1913936 using fasterq-dump
spots read : 11
reads read : 22
reads written : 22
pigz: skipping: /home/circleci/grabseqs_unittest/test_tiny_sra_paired/SRR1913936.fastq does not exist
✓ SRA paired sample download test passed
pigz should not attempt to zip a paired sample, should it? Or do I explicitly zip all possible files that come down, to hedge against unpaired sequences? Either way, I should fix this and #8 at the same time.
For the test scripts. As in sunbeam-labs/sunbeam#198
Since sometimes the SRA metadata says the samples are paired-end (but only one file comes down!)
Makes more sense this way--problems have come from people needing to use it
(Keep the option around until v1.0 at least)
It would be nice if grabseqs supported downloading data from EBI.
It would be useful to have a list of "things I have seen go wrong and how I diagnosed/fixed them" for each of the repos, I think
Re-downloading an SRA sample prints thousands of lines telling you that it found the sample already and won't re-download it. Nice to have, but one line would do.
It is the -m and --no_parsing flags.

Don't re-download already complete files (add --force flag to re-download)
fasterq-dump does not have a gzip option -- do it manually using the SRR#
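Since fasterq-dump (unlike fastq-dump) has no gzip flag, compression has to happen as a separate step after the download. A sketch of that post-download step, with the fasterq-dump output simulated by writing dummy .fastq files (pigz would be a parallel drop-in for the gzip part):

```python
import glob
import gzip
import os
import shutil
import tempfile

outdir = tempfile.mkdtemp()

# Simulate the files fasterq-dump would leave behind for a paired accession
for name in ("SRR1913936_1.fastq", "SRR1913936_2.fastq"):
    with open(os.path.join(outdir, name), "w") as f:
        f.write("@read1\nACGT\n+\nIIII\n")

# Compress everything matching the SRR#, then remove the originals
for path in glob.glob(os.path.join(outdir, "SRR1913936*.fastq")):
    with open(path, "rb") as src, gzip.open(path + ".gz", "wb") as dst:
        shutil.copyfileobj(src, dst)
    os.remove(path)

print(sorted(os.listdir(outdir)))
```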
Can only install it in Python 3.6, but standard conda is already on Python 3.7. Is there a reason for this restriction?
Hi -
When I run grabseqs with a project identifier that has no links to any runs (example: grabseqs sra -l PRJNAXXXXX), grabseqs dies with
ValueError: 'Run' is not in list
Rightly so, because list.index() raises a ValueError when there is no matching item (see e.g. https://docs.python.org/3/tutorial/datastructures.html)
Solution: in line 98 of sra.py, the error should be caught with

    except ValueError:
        raise ValueError("Could not find samples for accession: "+pacc+". If this accession number is valid, try re-running.")
Best wishes -
Anna
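The fix suggested above can be sketched as a self-contained function (the function name here is hypothetical; in grabseqs the lookup lives inside get_sra_acc_metadata):

```python
def find_run_column(header, pacc):
    """Locate the 'Run' column in the metadata header row.

    list.index raises ValueError when 'Run' is absent, e.g. for a
    project accession with no linked runs, so we re-raise with a
    message that actually tells the user what went wrong.
    """
    try:
        return header.index("Run")
    except ValueError:
        raise ValueError("Could not find samples for accession: " + pacc +
                         ". If this accession number is valid, try re-running.")
```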
In the same vein as issue #53, it would be great if this tool could be used to pull data from the National Genomics Data Center also.
Related to this Sunbeam issue. It seems as though the grabseqs retry functionality isn't working as intended--make sure that all errors for SRA downloading are caught properly and make the error messages a little clearer.
Using shutil.which
Brought up originally in this context: #35
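A minimal sketch of the idea, assuming the fallback behavior described in #35 (use pigz when present, plain gzip otherwise):

```python
import shutil

# shutil.which returns the executable's path, or None if it isn't on PATH.
# Prefer pigz (parallel); fall back to single-threaded gzip.
zip_exec = "pigz" if shutil.which("pigz") else "gzip"
print("compressing with", zip_exec)
```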
When I use the command "grabseqs sra -l PRJDB5400", I get some errors.
pigz not found, using gzip
Traceback (most recent call last):
  File "/home/tools/anaconda3/bin/grabseqs", line 8, in <module>
    sys.exit(main())
  File "/home/tools/anaconda3/lib/python3.8/site-packages/grabseqslib/__init__.py", line 58, in main
    metadata_agg = process_sra(args, zip_func)
  File "/home/tools/anaconda3/lib/python3.8/site-packages/grabseqslib/sra.py", line 27, in process_sra
    acclist, metadata_agg = get_sra_acc_metadata(sra_identifier,
  File "/home/tools/anaconda3/lib/python3.8/site-packages/grabseqslib/sra.py", line 97, in get_sra_acc_metadata
    run_col = lines[0].index("Run")
ValueError: 'Run' is not in list
Looks like some changes on the NCBI side lead to failures in SRA downloads:
grabseqs sra SRR11733975
Traceback (most recent call last):
  File "/users/cdiener/miniconda3/envs/sra/bin/grabseqs", line 11, in <module>
    sys.exit(main())
  File "/users/cdiener/miniconda3/envs/sra/lib/python3.7/site-packages/grabseqslib/__init__.py", line 58, in main
    metadata_agg = process_sra(args, zip_func)
  File "/users/cdiener/miniconda3/envs/sra/lib/python3.7/site-packages/grabseqslib/sra.py", line 31, in process_sra
    metadata_agg)
  File "/users/cdiener/miniconda3/envs/sra/lib/python3.7/site-packages/grabseqslib/sra.py", line 97, in get_sra_acc_metadata
    run_col = lines[0].index("Run")
ValueError: 'Run' is not in list
This seems to be caused by a hardcoded address to download the SRA manifest that is not reachable anymore.
Because sometimes, fasterq-dump just doesn't work (and fastq-dump does).
If users have older versions of sra-tools installed, grabseqs sra will fail -- we can avert this!
Sometimes people may not need the whole dataset from an SRA project. A flag to download a custom SRR list would be great!
Great reviewer suggestion to allow users to pass custom flags along to fasterq-dump -- this should be very doable (and would make grabseqs more user-friendly)!
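One hedged way the pass-through could work (the flag name --custom-fqdump-args is hypothetical, not an existing grabseqs option): collect the extra flags as a single string, then splice them into the fasterq-dump command line.

```python
import argparse

parser = argparse.ArgumentParser(prog="grabseqs sra")
parser.add_argument("--custom-fqdump-args", default="",
                    help="extra flags passed verbatim to fasterq-dump (hypothetical flag)")
parser.add_argument("acc")

# Example invocation: user forwards two fasterq-dump options
args = parser.parse_args(["--custom-fqdump-args", "--min-read-len 50 -e 8",
                          "SRR1913936"])

# Splice the user's extra flags into the fasterq-dump command line
cmd = ["fasterq-dump"] + args.custom_fqdump_args.split() + [args.acc]
print(cmd)
```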