nf-core / epitopeprediction

A bioinformatics best-practice analysis pipeline for epitope prediction and annotation

Home Page: https://nf-co.re/epitopeprediction

License: MIT License

Languages: HTML 1.33%, Python 36.70%, Nextflow 61.84%, Shell 0.12%
Topics: nf-core, nextflow, workflow, pipeline, epitope, epitope-prediction, mhc-binding-prediction

epitopeprediction's Introduction

nf-core/epitopeprediction


Introduction

nf-core/epitopeprediction is a bioinformatics best-practice analysis pipeline for epitope prediction and annotation. The pipeline performs epitope predictions for a given set of variants or peptides directly, using state-of-the-art prediction tools. The resulting predictions can additionally be annotated with metadata.

Supported prediction tools:

  • syfpeithi
  • mhcflurry
  • mhcnuggets-class-1
  • mhcnuggets-class-2
  • netmhcpan-4.0
  • netmhcpan-4.1
  • netmhc-4.0
  • netmhciipan-4.1

The pipeline is built using Nextflow, a workflow tool to run tasks across multiple compute infrastructures in a very portable manner. It uses Docker/Singularity containers, making installation trivial and results highly reproducible. The Nextflow DSL2 implementation of this pipeline uses one container per process, which makes it easier to maintain and update software dependencies. Where possible, these processes have been submitted to and installed from nf-core/modules in order to make them available to all nf-core pipelines, and to everyone within the Nextflow community!

On release, automated continuous integration tests run the pipeline on a full-sized dataset on the AWS cloud infrastructure. This ensures that the pipeline runs on AWS, has sensible resource allocation defaults set to run on real-world datasets, and permits the persistent storage of results to benchmark between pipeline releases and other analysis sources. The results obtained from the full-sized test can be viewed on the nf-core website.

Pipeline summary

  1. Read variants, proteins, or peptides and HLA alleles
  2. Generate peptides from variants or proteins, or use peptides directly (see the sketch after this list)
  3. Predict HLA-binding peptides for the given set of HLA alleles
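
For illustration, a minimal sketch of step 2 for protein input, in plain Python (not pipeline code); the peptide lengths 8-11 are an assumption matching the typical MHC class I range:

def generate_peptides(protein, min_length=8, max_length=11):
    # Yield every unique substring (peptide) of the requested lengths.
    seen = set()
    for k in range(min_length, max_length + 1):
        for i in range(len(protein) - k + 1):
            peptide = protein[i : i + k]
            if peptide not in seen:
                seen.add(peptide)
                yield peptide

peptides = list(generate_peptides("MTEYKLVVVGAGGVGKSALTIQLI"))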

Note

If you are new to Nextflow and nf-core, please refer to this page on how to set up Nextflow. Make sure to test your setup with -profile test before running the workflow on actual data.

First, prepare a samplesheet with your input data that looks as follows:

samplesheet.csv:

sample,alleles,mhc_class,filename
GBM_1,A*01:01;A*02:01;B*07:02;B*24:02;C*03:01;C*04:01,I,gbm_1_variants.vcf
GBM_1,A*02:01;A*24:01;B*07:02;B*08:01;C*04:01;C*07:01,I,gbm_1_peptides.tsv

Each row represents a sample with associated HLA alleles and input data (variants/peptides/proteins).
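
As an orientation sketch (not part of the pipeline), this is how such a samplesheet could be read in Python, splitting the semicolon-separated allele list:

import csv

with open("samplesheet.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        alleles = row["alleles"].split(";")  # e.g. ['A*01:01', 'A*02:01', ...]
        print(row["sample"], row["mhc_class"], alleles, row["filename"])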

Now, you can run the pipeline using:

nextflow run nf-core/epitopeprediction \
   -profile <docker/singularity/.../institute> \
   --input samplesheet.csv \
   --outdir <OUTDIR>

Warning

Please provide pipeline parameters via the CLI or Nextflow -params-file option. Custom config files including those provided by the -c Nextflow option can be used to provide any configuration except for parameters; see docs.

For more details and further functionality, please refer to the usage documentation and the parameter documentation.

Pipeline output

To see the results of an example test run with a full-size dataset, refer to the results tab on the nf-core website pipeline page. For more details about the output files and reports, please refer to the output documentation.

Credits

nf-core/epitopeprediction was originally written by Christopher Mohr from Boehringer Ingelheim and Alexander Peltzer from Boehringer Ingelheim. Further contributions were made by Sabrina Krakau from Quantitative Biology Center and Leon Kuchenbecker from the Kohlbacher Lab.

The pipeline was converted to Nextflow DSL2 by Christopher Mohr, Marissa Dubbelaar from Clinical Collaboration Unit Translational Immunology and Quantitative Biology Center, Gisela Gabernet from Quantitative Biology Center, and Jonas Scheid from Quantitative Biology Center.

Contributions and Support

If you would like to contribute to this pipeline, please see the contributing guidelines.

For further information or help, don't hesitate to get in touch on the Slack #epitopeprediction channel (you can join with this invite).

Citations

If you use nf-core/epitopeprediction for your analysis, please cite it using the following DOI: 10.5281/zenodo.3564666

An extensive list of references for the tools used by the pipeline can be found in the CITATIONS.md file.

You can cite the nf-core publication as follows:

The nf-core framework for community-curated bioinformatics pipelines.

Philip Ewels, Alexander Peltzer, Sven Fillinger, Harshil Patel, Johannes Alneberg, Andreas Wilm, Maxime Ulysse Garcia, Paolo Di Tommaso & Sven Nahnsen.

Nat Biotechnol. 2020 Feb 13. doi: 10.1038/s41587-020-0439-x.

epitopeprediction's People

Contributors

alina-bauer, apeltzer, christopher-mohr, ggabernet, jonasscheid, kevinmenden, lkuchenb, marissadubbelaar, maxulysse, nf-core-bot, skrakau


epitopeprediction's Issues

No attribute 'MIN_DP'

Description of the bug

The run breaks in the NFCORE_EPITOPEPREDICTION:EPITOPEPREDICTION:PEPTIDE_PREDICTION_VAR step, giving the following error:

Using TensorFlow backend.
  Traceback (most recent call last):
    File "/home-link/kmhmd01/.nextflow/assets/nf-core/epitopeprediction/bin/epaa.py", line 1165, in <module>
      __main__()
    File "/home-link/kmhmd01/.nextflow/assets/nf-core/epitopeprediction/bin/epaa.py", line 994, in __main__
      vl, transcripts, metadata = read_vcf(args.somatic_mutations)
    File "/home-link/kmhmd01/.nextflow/assets/nf-core/epitopeprediction/bin/epaa.py", line 327, in read_vcf
      if isinstance(sample[format_key], list):
    File "/usr/local/lib/python2.7/site-packages/vcf/model.py", line 104, in __getitem__
      return getattr(self.data, key)
  AttributeError: 'CallData' object has no attribute 'MIN_DP'

So I checked what this MIN_DP is in the .vcf file and the header notes the following:

##FORMAT=<ID=MIN_DP,Number=1,Type=Integer,Description="Minimum filtered basecall depth used for site genotyping within a non-variant multi-site block">
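
A hedged fix sketch, following the getattr access visible in the traceback: FORMAT keys such as MIN_DP are only present on some calls, so guarding the lookup with a default instead of indexing directly would avoid the crash (epaa.py would need an equivalent change):

def safe_sample_field(sample, format_key):
    # PyVCF's sample[key] resolves to getattr(sample.data, key) and raises
    # AttributeError when the FORMAT key is absent for this call;
    # return None in that case instead of crashing.
    return getattr(sample.data, format_key, None)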

Command used and terminal output

NXF_VERSION=21.10.4 nextflow run nf-core/epitopeprediction \
--input "" \
--outdir '' \
--genome_version "GRCh38" \
-profile cfc \
-resume

Relevant files

nextflow.log

System information

  • Nextflow version: 21.10.4
  • Hardware: HPC
  • Executor: slurm
  • Version of nf-core/epitopeprediction: 2.0.0

Neoepitope prediction crashes when a variant has annotated metadata that is not available for others

Description of the bug

The pipeline crashes during the prediction of neoepitopes when one (or more) variants have annotated metadata that is not available for other variants.

Command used and terminal output

Command:
nextflow run main.nf --max_memory '4.GB' --max_cpus 4 -profile docker --input input_epp.csv

Terminal output:
Traceback (most recent call last):
    File "/Users/jonas/Documents/Uni/Master/Thesis/Ligandomics/neoepitopes/epitopeprediction/bin/epaa.py", line 1194, in <module>
      __main__()
    File "/Users/jonas/Documents/Uni/Master/Thesis/Ligandomics/neoepitopes/epitopeprediction/bin/epaa.py", line 1074, in __main__
      pred_dataframes, statistics, all_peptides_filtered, proteins = make_predictions_from_variants(vl, methods, thresholds, alleles, int(args.min_length), int(args.max_length) + 1, ma, up_db, args.identifier, metadata, transcriptProteinMap)
    File "/Users/jonas/Documents/Uni/Master/Thesis/Ligandomics/neoepitopes/epitopeprediction/bin/epaa.py", line 883, in make_predictions_from_variants
      df[c] = df.apply(lambda row: create_metadata_column_value(row, c, metadata), axis=1)
    File "/usr/local/lib/python2.7/site-packages/pandas/core/frame.py", line 6487, in apply
      return op.get_result()
    File "/usr/local/lib/python2.7/site-packages/pandas/core/apply.py", line 151, in get_result
      return self.apply_standard()
    File "/usr/local/lib/python2.7/site-packages/pandas/core/apply.py", line 257, in apply_standard
      self.apply_series_generator()
    File "/usr/local/lib/python2.7/site-packages/pandas/core/apply.py", line 286, in apply_series_generator
      results[i] = self.f(v)
    File "/Users/jonas/Documents/Uni/Master/Thesis/Ligandomics/neoepitopes/epitopeprediction/bin/epaa.py", line 883, in <lambda>
      df[c] = df.apply(lambda row: create_metadata_column_value(row, c, metadata), axis=1)
    File "/Users/jonas/Documents/Uni/Master/Thesis/Ligandomics/neoepitopes/epitopeprediction/bin/epaa.py", line 554, in create_metadata_column_value
      meta = set([str(y.get_metadata(c)[0]) for y in set(variants) if c in metadata])
  IndexError: ('list index out of range', u'occurred at index 0')
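
A hedged guard, derived only from the failing line in the traceback: get_metadata(c) evidently returns an empty list for variants lacking that annotation, so the [0] access must be conditional. A sketch of the comprehension from line 554 of epaa.py, adjusted:

# Only index get_metadata(c) when it actually returned a value.
meta = set(str(y.get_metadata(c)[0]) for y in set(variants) if y.get_metadata(c))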

Relevant files

No response

System information

No response

Tidy up parameters for min./max. peptide length

  • only display the parameters when not using --peptides input

  • only allow the user to set them when not using --peptides input

  • if not using --peptides input, display both (currently only max_peptide_length is shown)

Reorganize result folder structure

Description of feature

Currently we have the folders predictions and merged_predictions, which might be a bit confusing, although it is described in the docs. I would suggest using predictions for the final results and split_predictions for the intermediate results. In addition, we should introduce sample-specific result folders.

.
+-- predictions
|   +-- sample1
|         +-- predictions.tsv
|   +-- sample2
...
+-- split_predictions
|   +-- sample1
...

Option for FASTA input

We should offer the option to provide protein sequences in FASTA format, which would then be used directly for peptide prediction. Peptides are generated from the given protein sequences according to the provided length parameters.

Enable selection of Ensembl (BioMart) version

Currently, the Ensembl (BioMart) version used to retrieve information such as transcripts is predefined for each reference genome version. I guess it would be better to at least allow selecting the version from a predefined list.

However, database field names, for example, have changed across versions and might have to be adapted accordingly in the interfaces.
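
One possible shape for such a predefined list; the mapping below is a sketch with assumed endpoints, which would need checking against the actual Ensembl archive hosts:

# Sketch: map reference genome version to a BioMart endpoint.
# Entries are assumptions; verify against the Ensembl archive list.
BIOMART_HOSTS = {
    "GRCh37": "http://grch37.ensembl.org/biomart",
    "GRCh38": "http://www.ensembl.org/biomart",
}

def biomart_host(genome_version):
    try:
        return BIOMART_HOSTS[genome_version]
    except KeyError:
        raise ValueError("No BioMart endpoint configured for %s" % genome_version)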

local module: genPeptides

This module uses the gen_peptides.py script, which depends on Bio.SeqIO.FastaIO (SimpleFastaParser) and Fred2.
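
For reference, SimpleFastaParser yields (header, sequence) tuples from an open handle; a minimal usage sketch with an assumed input file name:

from Bio.SeqIO.FastaIO import SimpleFastaParser

with open("proteins.fasta") as handle:  # file name is an assumption
    for title, sequence in SimpleFastaParser(handle):
        print(title.split()[0], len(sequence))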

Add provided metadata to prediction output

If additional information such as allele frequency is provided in the VCF or TSV input (already available for peptide input), this information should be attached to the generated peptides and become part of the results. This is already done for some fields, but not in a generic fashion.

var.log_metadata("vardbid", variation_dbid)

Use case: one might want to filter the results based on allele frequency, dbSNP, ...
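
A sketch of that downstream use, assuming pandas and a hypothetical "allele frequency" column name (vardbid does appear in the current output):

import pandas as pd

df = pd.read_csv("predictions.tsv", sep="\t")  # path is an assumption
# Keep rare variants that carry a dbSNP identifier; the
# "allele frequency" column name is hypothetical.
rare = df[(df["allele frequency"] < 0.01) & df["vardbid"].notna()]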

Implementation of the sample sheet in the pipeline

We could create the following annotation for the input file:

  • sample ID,
  • vcf file
  • alleles (A*01:01),
  • peptide sequences file*,
  • protein file
  • The question here is what the id and sequence stand for (UniProt ID and peptide sequence). Would it not be interesting to include the associated protein here as well?

Missing output information in neoepitopeprediction snvs and indels

Description of the bug

Including two VCF files, one containing SNVs and one containing small indels, reduces the output information for the SNVs. For example:
Run with SNV only (header):
sequence length chr pos gene transcripts proteins variant type method DP HLA-A*01:01 affinity HLA-A*01:01 binder HLA-A*01:01 score HLA-A*02:01 affinity HLA-A*02:01 binder HLA-A*02:01 score HLA-B*18:01 affinity HLA-B*18:01 binder HLA-B*18:01 score HLA-B*27:05 affinity HLA-B*27:05 binder HLA-B*27:05 score MQ MQ0 NORMAL.AU NORMAL.CU NORMAL.DP NORMAL.FDP NORMAL.GU NORMAL.SDP NORMAL.SUBDP NORMAL.TU NT QSS QSS_NT ReadPosRankSum SGT SNVSB SOMATIC SomaticEVS TQSS TQSS_NT TUMOR.AU TUMOR.CU TUMOR.DP TUMOR.FDP TUMOR.GU TUMOR.SDP TUMOR.SUBDP TUMOR.TU homozygous synonymous vardbid variant details (genomic) variant details (protein)

Run with indel only (header):
sequence length chr pos gene transcripts proteins variant type method

Run with SNV and indel file (header):
sequence length chr pos gene transcripts proteins variant type method

Command used and terminal output

No response

Relevant files

No response

System information

No response

Add percentile rank scores

Add percentile rank scores computed from random natural peptides, either as provided by the tools themselves (e.g. by mhcflurry) or computed consistently for all tools.
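
A minimal sketch of the consistent variant, assuming the common convention that a lower percentile rank means a stronger predicted binder:

import numpy as np

def percentile_rank(score, background_scores):
    # Percentage of random natural peptides scoring at least as high as
    # the query peptide; background_scores must come from the same tool.
    background = np.asarray(background_scores, dtype=float)
    return 100.0 * np.mean(background >= score)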

Provide option to select prediction methods

We should provide an option to select the tools (one or multiple) that are used for the peptide prediction, and check whether an interface for the selected version of each tool is available in FRED2.

The alternative would be to restrict the options to tools and versions that are available.

Missing `fromPath` parameter

I am getting "Missing fromPath parameter" when running "nextflow run nf-core/epitopeprediction --reads '*_R{1,2}.fastq.gz' -profile standard,docker". I tried "nextflow run nf-core/epitopeprediction --reads '*_R{1,2}.fastq.gz' -profile standard,docker --fromPath ." too. Same error.

Check for available prediction methods

We should check if an interface to the selected prediction method is available in FRED2. FRED2 offers functionality to check for available interfaces:

>>> EpitopePredictorFactory.available_methods()
{'netmhcstabpan': ['1.0'], 'smmpmbec': ['1.0'], 'syfpeithi': ['1.0'], 'netmhc': ['3.0a', '3.4', '4.0'], 'netctlpan': ['1.1'], 'smm': ['1.0'], 'tepitopepan': ['1.0'], 'netmhcii': ['2.2'], 'arb': ['1.0'], 'pickpocket': ['1.1'], 'epidemix': ['1.0'], 'unitope': ['1.0'], 'netmhciipan': ['3.0', '3.1'], 'comblibsidney': ['1.0'], 'netmhcpan': ['2.4', '2.8', '3.0'], 'calisimm': ['1.0'], 'hammer': ['1.0'], 'svmhc': ['1.0'], 'bimas': ['1.0']}
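
A sketch of such a check, assuming the usual FRED2 import path:

from Fred2.EpitopePrediction import EpitopePredictorFactory

def check_method(method, version):
    # Fail early if FRED2 has no interface for the requested tool/version.
    available = EpitopePredictorFactory.available_methods()
    if method not in available:
        raise ValueError("No FRED2 interface for method '%s'" % method)
    if version not in available[method]:
        raise ValueError("FRED2 supports '%s' only in versions %s"
                         % (method, available[method]))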

Provide information about supported models

It would be helpful to provide information about the models supported by the different tools and to throw an exception or warning if an unsupported peptide length or allele is specified.
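
A sketch of such a guard; the support table below is entirely hypothetical and would have to be filled from each tool's actual model list:

# Hypothetical support table: tool -> peptide lengths with trained models.
SUPPORTED_LENGTHS = {
    "syfpeithi": range(8, 12),
    "mhcflurry": range(8, 16),
}

def check_supported(method, peptide_length):
    lengths = SUPPORTED_LENGTHS.get(method)
    if lengths is not None and peptide_length not in lengths:
        raise ValueError("%s has no model for peptide length %d"
                         % (method, peptide_length))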

Provide option to customize binder threshold

Currently, the affinity threshold for a peptide prediction is set to >= 50% to decide whether a peptide is considered a binder. It would be convenient to be able to customize the threshold for each prediction tool.
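
A sketch of per-tool thresholds, assuming the pipeline's scaled affinity (0-100, higher = stronger binding); the values shown are placeholders, with the current fixed default of 50 as fallback:

# Placeholder per-tool binder thresholds on the scaled affinity.
BINDER_THRESHOLDS = {
    "syfpeithi": 50.0,
    "mhcflurry": 50.0,
}

def is_binder(method, scaled_affinity):
    return scaled_affinity >= BINDER_THRESHOLDS.get(method, 50.0)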

Option for FASTA file output

Generate mutated (and unmutated) protein sequences and provide an option to output them as a FASTA file.
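
A minimal writer sketch for such an output, wrapping sequences at 60 characters per line:

def write_fasta(records, path, width=60):
    # records: iterable of (identifier, sequence) pairs, e.g. the mutated
    # and unmutated protein sequences.
    with open(path, "w") as out:
        for identifier, sequence in records:
            out.write(">%s\n" % identifier)
            for i in range(0, len(sequence), width):
                out.write(sequence[i : i + width] + "\n")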

Neoepitope prediction is too memory-demanding

Description of feature

The splitting of VCF files is currently done chromosome-wise. However, VCF files can contain a skewed distribution of variants over the genome: one chromosome may hold a remarkable number of variants while another has almost none. Processing large numbers of variants chromosome-wise therefore leads to inefficient memory use and can crash the pipeline.

I would propose altering the snpsift_split module to use SnpSift split -l {number of variants}, splitting the VCF file into chunks with a fixed number of variants (e.g. 50 or 100).

Specify predictor versions

The declared tool version in the results file is wrong, at least for MHCnuggets. At the same time, the MHCNuggetsPredictor class that is used does not correspond to the version used by this epitopeprediction workflow.

If I see it right, the reason is that no tool version is selected within the epaa.py script (v2.3.2 is installed); instead, the version is retrieved from FRED2, which currently only provides a MHCNuggetsPredictor class for v2.0.

This should be changed with EpitopePredictorFactory(m, version="xxx") within epaa.py.
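
A sketch of the proposed pinning, assuming a FRED2 interface for the installed version exists (which, per the report above, it currently does not for MHCnuggets 2.3.2):

from Fred2.EpitopePrediction import EpitopePredictorFactory

# Pin the predictor to the installed tool version instead of letting
# FRED2 fall back to its default interface version.
predictor = EpitopePredictorFactory("mhcnuggets-class-1", version="2.3.2")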

MHCnuggets results - all peptides binder

When running the workflow on the test_peptides test data while specifying --tools mhcnuggets-class-1, all peptides are classified as "binder" although the binding affinity is 0.0. I guess the create_affinity_values() and create_binder_values() functions need to be adjusted?
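
A hypothetical shape for that adjustment; this is a guess at the intended semantics, not the pipeline's actual functions:

def create_binder_value(affinity, threshold):
    # Never classify a peptide as a binder when the tool returned no real
    # affinity (0.0 here); only then compare against the threshold.
    return affinity > 0.0 and affinity >= threshold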

Provide Feedback on pipeline adding

Hi!

Can you go through this and let us know if you find any problems:

https://nf-co.re/adding_pipelines

Some notes copied from our meeting:

Porting pipeline to nf-core

missing pieces in documentation

The "Push to GitHub" section is incomplete: git push --set-upstream origin master is missing

Similarly, how to set up a dev branch on the first commit is missing entirely...

==> Information on how to set up branches is completely missing

Set up Travis and Docker Hub

==> Docker Hub needs to be done by Alex Peltzer

===> Travis doesn't see the projects

travis-ci.org is outdated; please use travis-ci.com
