Create panel of normals for different tools
This snakemake workflow is designed to generate panel of normals (PoN) files for a plethora of tools, which can in turn be used for variant calling analyses.
The PoN is generated according to the best practices from GATK as described in this tutorial.
For each `.bam` file of a normal sample, variants are called using Mutect2 and combined into a `.vcf` file representing the PoN.
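For orientation, the three GATK steps that such a workflow automates look roughly like this (following the GATK tutorial; file names and paths are placeholders):

```bash
# 1. call variants on each normal sample in tumor-only mode
gatk Mutect2 -R reference.fasta -I normal1.bam --max-mnp-distance 0 -O normal1.vcf.gz

# 2. import the per-sample calls into a GenomicsDB workspace
gatk GenomicsDBImport -R reference.fasta -L intervals.interval_list \
    --genomicsdb-workspace-path pon_db -V normal1.vcf.gz -V normal2.vcf.gz

# 3. combine everything into a single PoN .vcf
gatk CreateSomaticPanelOfNormals -R reference.fasta -V gendb://pon_db -O pon.vcf.gz
```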
For CNVkit, the docs describe a reference of pooled normals, which is generated in two steps from `.bam` files of normal samples.
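Roughly, those two steps correspond to the following sketch (bin file and sample names are placeholders; see the CNVkit docs for the exact invocation used here):

```bash
# step 1: compute bin-level coverage for each normal sample
cnvkit.py coverage normal1.bam bins.bed -o normal1.targetcoverage.cnn

# step 2: pool all normal coverages into a single copy-number reference
cnvkit.py reference *.targetcoverage.cnn -f reference.fasta -o pooled_reference.cnn
```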
To run this workflow, the following preparations are required:
- Add all sample ids to `samples.tsv` in the column `sample`.
- Add sample type information, normal or tumor, to `units.tsv`.
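  As an illustration, the two files might look like this (sample ids are invented, and the `units.tsv` column names beyond `sample` are assumptions; check the example files shipped with the workflow for the exact layout):

  ```
  # samples.tsv
  sample
  normal_A
  normal_B

  # units.tsv -- column names besides "sample" are assumptions
  sample	type
  normal_A	normal
  normal_B	normal
  ```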
- Use the `analysis_output` folder from wgs_std_viper as input.
- You need a reference `.fasta` file representing the genome used for mapping. For the different tools to work, you also need to prepare index files and a `.dict` file.
- The required files for the human reference genome GRCh38 can be downloaded from google cloud. The download can be done manually using the browser or with `gsutil` via the command line:

  ```bash
  gsutil cp gs://genomics-public-data/resources/broad/hg38/v0/Homo_sapiens_assembly38.fasta /path/to/download/dir/
  ```
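  The matching index and dictionary files live alongside the genome in the same bucket and can be fetched the same way (verify the exact paths in the bucket listing):

  ```bash
  gsutil cp gs://genomics-public-data/resources/broad/hg38/v0/Homo_sapiens_assembly38.fasta.fai /path/to/download/dir/
  gsutil cp gs://genomics-public-data/resources/broad/hg38/v0/Homo_sapiens_assembly38.dict /path/to/download/dir/
  ```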
- If those resources are not available for your reference, you may generate them yourself:

  ```bash
  samtools faidx /path/to/reference.fasta
  gatk CreateSequenceDictionary -R /path/to/reference.fasta -O /path/to/reference.dict
  ```
- To split the variant calling process by chromosome, a locus file containing a list of all available chromosomes is used in the analysis.
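  Assuming the locus file is a plain-text list with one chromosome per line (check the workflow's test data for the exact format expected), it can be derived from the `.fai` index:

  ```bash
  # hypothetical one-liner: contig names are the first column of the fasta index
  cut -f1 /path/to/reference.fasta.fai > /path/to/loci.txt
  ```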
- Mutect2 requires an `.interval_list` file which needs to be supplied. For GRCh38, the file is also available in the google bucket.
- Mutect2 also requires a modified gnomAD database as a `.vcf.gz`. For GRCh38, the file can be retrieved from google cloud as described above.
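  For example, at the time of writing both files can be fetched with `gsutil` (double-check the bucket paths before relying on them):

  ```bash
  gsutil cp gs://genomics-public-data/resources/broad/hg38/v0/wgs_calling_regions.hg38.interval_list /path/to/download/dir/
  gsutil cp gs://gatk-best-practices/somatic-hg38/af-only-gnomad.hg38.vcf.gz /path/to/download/dir/
  ```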
- Add the paths of the different files to the `config.yaml` (a hypothetical sketch follows after this list). The index files should be in the same directory as the reference `.fasta`.
- Make sure that the docker container versions are correct.
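To illustrate, the relevant part of `config.yaml` could look something like the sketch below. The key names here are invented for illustration; the actual keys are defined by the workflow itself, so use the bundled `config.yaml` as the template:

```yaml
# hypothetical key names -- consult the config.yaml shipped with the workflow
reference:
  fasta: /path/to/reference.fasta   # .fai and .dict expected in the same directory
loci: /path/to/loci.txt             # chromosomes to split variant calling by
intervals: /path/to/wgs_calling_regions.hg38.interval_list
gnomad: /path/to/af-only-gnomad.hg38.vcf.gz
```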
The workflow repository contains a small test dataset under `.tests/integration`, which can be run like so:

```bash
cd .tests/integration
snakemake -s ../../workflow/Snakefile -j1 --use-singularity
```
The workflow is designed for WGS data, i.e. large datasets that require substantial compute power. On HPC clusters, it is recommended to use a cluster profile and run something like:

```bash
snakemake -s /path/to/Snakefile --profile my-awesome-profile
```
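As a rough sketch, such a profile is a directory (e.g. `~/.config/snakemake/my-awesome-profile/`) containing a `config.yaml` with your scheduler settings; the values below assume a SLURM cluster and are purely illustrative:

```yaml
# hypothetical profile -- keys mirror snakemake command-line options
jobs: 100
use-singularity: true
cluster: "sbatch --cpus-per-task={threads} --mem={resources.mem_mb}"
```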