
Divide and Conquer BLAST: using grid engines to accelerate NCBI-BLAST+ and other sequence analysis tools

License: BSD 2-Clause "Simplified" License


dcblast's Introduction

DCBLAST (Divide and Conquer BLAST for HPC)

The Basic Local Alignment Search Tool (BLAST) is by far the most widely used tool in sequence analysis for rapid similarity searching among nucleic acid or amino acid sequences. Recently, cluster, HPC, grid, and cloud environments have become more widely used and more accessible as high-performance computing systems. Divide and Conquer BLAST (DCBLAST) is designed to run National Center for Biotechnology Information (NCBI) BLAST searches within HPC, grid, and cloud computing environments by splitting the query into chunks and distributing them across the grid. This simple, accessible, robust, and practical approach dramatically accelerates the execution of BLAST searches.

  • DCBLAST can run BLAST jobs across HPC schedulers (SGE & SLURM).
  • DCBLAST supports the entire NCBI-BLAST+ suite.
  • DCBLAST generates exactly the same results as NCBI-BLAST+.
  • DCBLAST can use all options of the NCBI-BLAST+ suite.

Requirement

The following basic software is needed to run DCBLAST:

  • Perl (Any version 5+)
$ which perl
$ perl --version
  • NCBI-BLAST+ (any version; tested with 2.4.0+ through 2.7.1+). For an easy start, you can download the binary release of BLAST+ from ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/LATEST (see the example after this list).

If you install a newer version, please update the BLAST path in config.ini.

$ which blastn
  • Sun Grid Engine (Any version)
  • SLURM (tested with 17.02.2)
- SGE
$ which qsub
- SLURM
$ which sbatch
  • A grid, cloud, or distributed computing system.
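For example, here is a minimal sketch of installing a prebuilt BLAST+ and pointing config.ini at it (the 2.7.1 version and the ~/bin/blast install directory are assumptions; substitute the current release and your preferred location):

$ wget ftp://ftp.ncbi.nlm.nih.gov/blast/executables/blast+/2.7.1/ncbi-blast-2.7.1+-x64-linux.tar.gz
$ mkdir -p ~/bin/blast && tar -xzf ncbi-blast-2.7.1+-x64-linux.tar.gz -C ~/bin/blast
$ ls ~/bin/blast/ncbi-blast-2.7.1+/bin/blastn   ##then set path=~/bin/blast/ncbi-blast-2.7.1+/bin/ in config.ini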

Prerequisites

The following Perl modules are required:

  • Path::Tiny
  • Data::Dumper
  • Config::Tiny

Install the prerequisites with one of the following commands:

$ cpan `cat requirement`

or

$ cpanm `cat requirement`

or

$ cpanm Path::Tiny Data::Dumper Config::Tiny

We strongly recommend using Perlbrew (http://perlbrew.pl/) to avoid having to use sudo.

We also recommend using 'cpanm' (https://github.com/miyagawa/cpanminus).
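If you do not yet have Perlbrew or cpanm, here is a minimal sketch of one way to set them up (these are the projects' documented one-line installers; check their pages for the current instructions):

$ curl -L https://install.perlbrew.pl | bash
$ curl -L https://cpanmin.us | perl - App::cpanminus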

Installation

The program is a single-file Perl script. Copy it into a directory where you can execute it.

We recommend copying it to a scratch disk.

cd ~/scratch/

git clone git://github.com/ascendo/DCBLAST.git

cd ~/scratch/DCBLAST_{your_scheduler_SGE_or_SLURM}

perl dcblast.pl

Usage : dcblast.pl --ini config.ini --input input-fasta --size size-of-group --output output-filename-prefix  --blast blast-program-name

  --ini <ini filename> ##config file ex)config.ini

  --input <input filename> ##query fasta file

  --size <output size> ## number of chunks; usually total cores x 2 (if you have 160 cores across all nodes, you can use 320). Please check with your admin.

  --output <output filename> ##output folder name

  --blast <blast name> ##blastp, blastx, blastn, etc.

  --dryrun ##only split the FASTA file into chunks (no jobs submitted)
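To choose --size, you need a rough count of the cores available to you. How to check depends on your site; for example (assuming the standard scheduler tools are on your path):

$ qhost            ##SGE: lists execution hosts and their core counts
$ sinfo -o "%C"    ##SLURM: CPU counts as allocated/idle/other/total

When in doubt, ask your cluster admin.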

Configuration

Please edit config.ini before you run!!

[dcblast]
##Name of job (used as the SGE job submission name)
job_name_prefix=dcblast

[blast]
##BLAST options

##BLAST path (your BLAST+ bin directory); run $ which blastn, then remove the trailing "blastn"
path=~/bin/blast/ncbi-blast-2.2.30+/bin/

##DB path (build your own BLAST DB)
##example
##makeblastdb -in example/test_db.fas -dbtype nucl (for nucleotide sequence)
##makeblastdb -in example/your-protein-db.fas -dbtype prot (for protein sequence)
db=example/test_db.fas

##Evalue cut-off (See BLAST manual)
evalue=1e-05

##Number of threads for each job. If your CPU is AMD, set this to 1.
num_threads=2

##Max target sequence output (See BLAST manual)
max_target_seqs=1

##Output format (See BLAST manual)
outfmt=6

##Any other BLAST options can be added in this area
#matrix=BLOSUM62
#gapopen=11
#gapextend=1


[sge]
##Grid job submission commands
##please check your job submission scripts
##The queue name (q) and threads (pe) options in particular will differ depending on your system
pe=SharedMem 1
M=your@email
o=log
q=common.q
j=yes
cwd=
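Each key=value pair in the [sge] section is passed to qsub as a flag (an empty value, such as cwd=, becomes a bare flag), so the settings above produce a submission command like the one shown in the dry-run example below:

qsub -M your@email -cwd -j yes -o log -pe SharedMem 1 -q common.q ...

Most [blast] options map directly to BLAST+ flags of the same name (for example, evalue=1e-05 becomes -evalue 1e-05).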

If you need any other options for your environment, please contact us or your admin.

PBS & LSF require a simple code hack. If you need it, please request it through an issue.

Example sequence

These sequences were randomly selected from plant species. The sizes and gene counts are below.

test_db.fas

Number of genes	35,386
Total size of genes (bp)	43,546,761
Longest gene	16,182
Shortest gene	22

test_query.fas

Number of genes	6,282
Total size of genes (bp)	7,247,997
Longest gene	11,577
Shortest gene	22

It usually finishes within ~20 min, depending on HPC status and CPU speed.

Usage

perl dcblast.pl

Usage : dcblast.pl --ini config.ini --input input-fasta --size size-of-group --output output-filename-prefix  --blast blast-program-name

  --ini <ini filename> ##config file ex)config.ini

  --input <input filename> ##query fasta file

  --size <output size> ## number of chunks; usually total cores x 2 (if you have 160 cores in your nodes, you can use 320). Please check with your admin.

  --output <output filename> ##output folder name

  --blast <blast name> ##blastp, blastx, blastn, etc.

  --dryrun ##only split the FASTA file into chunks (no jobs submitted)

Examples SGE

Dry run (the --dryrun option only splits the FASTA file into chunks)

perl dcblast.pl --ini config.ini --input example/test_query.fas --output test --size 20 --blast blastn --dryrun
DRYRUN COMMAND : [qsub -M your@email -cwd -j yes -o log -pe SharedMem 1 -q common.q -N dcblast_split -t 1-20 dcblast_blastcmd.sh]
DRYRUN COMMAND : [qsub -M your@email -cwd -j yes -o log -pe SharedMem 1 -q common.q -hold_jid dcblast_split -N dcblast_merge dcblast_merge.sh test/results 20]
DRYRUN COMMAND : [qstat]
DONE

Check the folder "test/chunks/" for the sequence split results.
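To confirm the split, you can count the chunk files (assuming DCBLAST writes one FASTA file per chunk):

$ ls test/chunks | wc -l    ##should report 20 for --size 20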

Run with example

You don't need to run a dry run every time.

perl dcblast.pl --ini config.ini --input example/test_query.fas --output test --size 20 --blast blastn 

This run splits the file into 20 chunks, runs them on 20 cores, and generates the merged BLAST output file "test/results/merged" along with the chunked input files in "test/chunks/".

The search usually finishes within ~20 min, depending on HPC status and CPU speed.
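Because config.ini sets outfmt=6, the merged file is plain tab-delimited text with the standard twelve BLAST columns (qseqid, sseqid, pident, length, mismatch, gapopen, qstart, qend, sstart, send, evalue, bitscore), so you can inspect it directly:

$ head -3 test/results/merged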

For your own research, please format your database according to the NCBI-BLAST+ instructions. Here are brief examples:

makeblastdb -in your-nucleotide-db.fa -dbtype nucl ###for nucleotide sequence
makeblastdb -in your-protein-db.fas -dbtype prot ###for protein sequence

Acknowledgement

The authors would like to thank DOE for providing support for this study and acknowledge the support of Research & Innovation and the Office of Information Technology at the University of Nevada, Reno for computing time on the Pronghorn High-Performance Computing Cluster.

Citation

Won Cheol Yim and John C. Cushman (2017) Divide and Conquer BLAST: using grid engines to accelerate BLAST and other sequence analysis tools. PeerJ 10.7717/peerj.3486 https://peerj.com/articles/3486/


dcblast's Issues

Support for Torque/PBS

Hi,
I would like to use dcblast to submit jobs to PBSpro. It could use a template file and replace the variables. For my purposes the following template would be enough.

#PBS -S $my_shell
#PBS -q $queue_name
#PBS -N ${job_prefix}_${chunk_id}
#PBS -l select=1:ncpus=24,walltime=144:00:00
#PBS -A $project_name
#PBS -M $email
#PBS -m ea

cd $PBS_O_WORKDIR || exit 255

$blastcmd" "${args}" "${other_user_opts}"

The command to submit a job is:

qsub "${pbs_script_filename}"

PBS code hack?

PBS & LSF need simple code hack. If you need it please request through issue.

I am running dcblast.pl through PBS and only the first result file gets written; no other result files are created. Please help.

See my config below:
[dcblast]
##Name of job (will use for SGE job submission name)
job_name_prefix=dcblast_benchmark

[blast]
##BLAST options

##BLAST path (your blast+ path)
path=/data-fastpool/ncbi-blast-2.10.0+/bin

##DB path (build your own BLAST DB)
##example
##makeblastdb -in example/test_db.fas -dbtype nucl (for nucleotide sequence)
##makeblastdb -in example/your-protein-db.fas -dbtype prot (for protein sequence)
db=/data-fastpool/FELIX/blastdb/AllProteins/upkb_reviewed/upkb_bact_swissprot.fasta

##Evalue cut-off (See BLAST manual)
evalue=1e-05

##number of threads in each job. If your CPU is AMD it needs to be set 1.
num_threads=12

##Max target sequence output (See BLAST manual)
max_target_seqs=1

##Output format (See BLAST manual)
outfmt=6

##any other option can be add it this area
#num_alignments=1
#seg=yes
#soft_masking=true
#lcase_masking=
#max_hsps=1
#matrix=BLOSUM62
#gapopen=11
#gapextend=1

[sge]
##Grid job submission commands
##please check your job submission scripts
##Especially Queue name (q) and Threads option (pe) will be different depends on your system
M=[email protected]
d=/data-fastpool/FELIX/blastdb/AllProteins/upkb_reviewed
o=/data-fastpool/FELIX/working/swist/dcblast_test
e=/data-fastpool/FELIX/working/swist/dcblast_test
l=walltime=48:00:00,nodes=1:ppn=12,pmem=6gb
m=abe

Reg:No output from the run

Hi,
After I execute the command, the job goes to a failed state without any error or log.

Please help.

Jobs stuck in Eqw or hqw states

Hi Dear Won Cheol,

Thank you for developing DCBLAST.

Currently, we are trying to use DCBLAST to perform blastp on our Sun Grid Engine System.

The command we were using is:
perl dcblast.pl --ini configL.ini --input L6_Fv.faa --size 200 --output L6_Fv_vs_nr2 --blast blastp

We noticed that the .faa file was split successfully. However, the jobs were in Eqw or hqw states. Upon checking the error underlying Eqw state, we noticed that it is related to password entry error for the user ('can't get password entry for user "xxx". Either the user does not exist or NIS error!'); the user account is available on head node but not on the compute node.

Hope you may be able to advise on how we can resolve the problem.

Thanks.

slurm script error: dcblast_merge.sh: command not found

I tried it and got an error
dcblast_split.sh: command not found
dcblast_merge.sh: command not found

I solved this issue by editing lines 56 and 57 from:
my $dcblast_blastcmd = 'dcblast_blastcmd.sh';
my $dcblast_mergecmd = 'dcblast_merge.sh';

To:
my $dcblast_blastcmd = './dcblast_blastcmd.sh';
my $dcblast_mergecmd = './dcblast_merge.sh';

The system now can find the two scripts.

Implementation to other HPC systems

Dear @Ascendo,
I read your paper and find your tool fascinating. Any plans of implementing a version of this for LSF? If not, do you have any suggestion on how I may go about speeding up blastn on an LSF cluster?
