fgvieira / ngsF
Estimation of per-individual inbreeding coefficients under a probabilistic framework
License: Other
Dear @fgvieira,
I have run ngsF and got as output a text file with per-individual inbreeding values and a .pars binary file. I would like to know how you processed those files to obtain a figure similar to supplementary figures 7, 8 and 9 of the paper (https://academic.oup.com/bioinformatics/article/32/14/2096/1743296). I have read that you only used ggplot2, but I am not sure how to process the binary file to get a table with the values.
Thanks a lot,
Sincerely,
Marc
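For context, the text output is simply one inbreeding coefficient per individual, in the same order as the input BAM list, so it can be paired with sample IDs and read straight into ggplot2 or any plotting tool. A minimal sketch; the filenames (bams.list, inbr) are hypothetical:

```shell
# Hypothetical inputs: bams.list (one BAM path per line, same order as the
# ngsF input) and inbr (the ngsF text output, one F value per line).
# Toy versions are created here so the pipeline below is self-contained:
printf 'samples/ind1.bam\nsamples/ind2.bam\n' > bams.list
printf '0.034296\n0.475336\n' > inbr

# Derive a sample ID from each BAM path and pair it with its F value;
# the resulting TSV can be read directly into R / ggplot2:
paste <(sed 's#.*/##; s#\.bam$##' bams.list) inbr > inbr.tsv
cat inbr.tsv
```

The .pars file is not needed for this: the per-individual F values in the text file are enough for a per-sample plot.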
Currently ngsF only works when all sites are variable (call SNPs first). One way would be to use a weighted average, but it only seems to work with extreme weights (pvar^200).
Arbitrary ploidy to account for X/Y and polyploids.
See: http://www.genetics.org/content/186/4/1367.full
Hi @fgvieira
I have run -doGlf 3 for each chromosome using ANGSD. In order to estimate inbreeding coefficients, I think I should merge the binary glf.gz files from the chromosomes together, but I do not know which software can do this.
Best
Zhuqing
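If the per-chromosome files are plain gzip (as -doGlf 3 output typically is), no special software is needed: gzip members can be concatenated directly, and --n_sites becomes the sum of the per-chromosome site counts. A minimal sketch with toy stand-ins for the real files:

```shell
# Gzip members can be concatenated directly: zcat of the result yields the
# concatenation of the inputs. Toy "chromosome" files demonstrate this:
printf 'AAA' | gzip > chr1.glf.gz
printf 'BBB' | gzip > chr2.glf.gz

# Merge per-chromosome GLF files (keep the same order as your site counts):
cat chr1.glf.gz chr2.glf.gz > all.glf.gz
zcat all.glf.gz   # AAABBB
```

The merged file can then be fed to ngsF as usual, with --n_sites set to the total number of sites across all chromosomes.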
Hi Felipe - I am sure it is something silly, but I am out of ideas... For some reason I cannot get this to work:
NIND=$(cat bams.nr | wc -l)
NS=$(zcat g3.mafs.gz | wc -l)
zcat g3.glf.gz | ngsF --glf - --n_ind $NIND --n_sites $NS --out inbr
I'm getting "ERROR: cannot read GLF file!"
I made sure to make g3.glf.gz with -doGlf 3. What am I doing wrong?
I have the most recent angsd, version 0.933-25-g5955d69 (htslib: 1.9-44-g80f3557)
cheers
Misha
Hello Dr. Vieira,
I have read the tutorial, and it says that ngsF should be run several times to avoid convergence to local maxima. My question is how many runs should be performed to avoid this. Should the seed argument be different for each run? (My guess is that it should not.)
Additionally, after a certain number of runs is done, should we obtain the mean inbreeding value for each sample?
For example, if I run ngsF (with the same parameters and seed) 10 times, I will have 10 output files with different values for the inbreeding coefficients. Then I would have to put together the inbreeding values for each sample and take the mean value per sample, right?
Thank you in advance!
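For what it is worth, a common pattern with EM-based tools (a sketch, not necessarily the author's recommendation) is to use a different seed per run and keep the single run with the best final log-likelihood, rather than averaging across runs. The ngsF calls below are commented out and the per-run log-likelihoods are faked so the selection step can be demonstrated:

```shell
# Run ngsF several times with different seeds (illustrative; not executed here):
for seed in 1 2 3; do
  # zcat g3.glf.gz | ngsF --glf - --n_ind $NIND --n_sites $NS \
  #   --init_values r --seed $seed --out run${seed}.indF 2> run${seed}.log
  :
done

# Hypothetical per-run final log-likelihoods collected into a summary table:
printf 'run1\t-1523.8\nrun2\t-1498.2\nrun3\t-1510.5\n' > lkl_summary.tsv

# Keep the run with the highest (least negative) log-likelihood:
BEST=$(sort -k2,2g lkl_summary.tsv | tail -n1 | cut -f1)
echo "$BEST"   # run2
```

The intuition is that each run is a candidate local optimum; the best-likelihood run is the best candidate, and averaging runs that converged to different optima would blur them together.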
Allow for negative F (-1 < F < 1) to accommodate other deviations from HWE
Hi, Filipe,
I am using ngsF to estimate the inbreeding coefficient for a population of giraffes. I am not sure about the parameter "--freq_fixed". The README states "assume initial MAF as fixed parameters (only estimates F)". However, the examples in the ngsTools tutorials don't mention this parameter, either for the initial estimate (with --approx_EM) or for the final run.
I tried running ngsF with and without setting "--freq_fixed=TRUE". The results are quite different. I would like to learn more about this parameter.
Dear Dr. Vieira,
I'm a new ngsF user, and I just ran the program on my samples. I found a very stark pattern in the resulting inbreeding coefficients:
0.000000
0.000000
0.034296
0.000000
0.000000
0.000000
0.475336
0.913737
0.620628
0.623424
0.397937
0.649332
0.731328
0.487009
0.480450
0.689157
0.706471
0.610795
0.577949
0.664756
0.868305
0.516195
0.503299
These data are ddRAD-seq from a plant species subject to considerable amounts of selfing, so I expect fairly high inbreeding coefficients. Notably, the first six samples all have very low F, and these six samples were all sequenced in a separate run from the remaining samples and also have lower coverage (~ 5-10x) than the rest of the samples (~15-20x).
I am guessing that this batch effect is somehow responsible for the lower F scores in the first six samples. Does this seem plausible? If so, I figure that I should separate these first six samples and analyze them separately. Does that seem like a reasonable approach?
Thanks!
Dave
I generated my genotype likelihood file using the following flags in ANGSD,
# generating the genotype likelihood files
angsd -b "$BAMS" \
    -ref "$REFERENCE_FASTA" \
    -doMajorMinor 4 -doMaf 1 -doCounts 1 \
    -r "$chr" -P 10 \
    -skipTriallelic 1 \
    -minMapQ 25 -minQ 25 -remove_bads 1 \
    -GL 1 -doGlf 3 \
    -setMinDepth 1 -SNP_pval 1e-6 \
    -out "$out"
but I get the following error when attempting to run my input through ngsF. Reading the front page of the github repository indicates that uncompressed is the default but compressed files are still accepted. Is there a flag I am missing from my ngsF script? The manual seems to indicate it can take both compressed and uncompressed GLF files.
# error
'[main] ERROR: Standard library only supports UNCOMPRESSED GLF files!
# ngsF
~/bigdata/ngsF/ngsF --glf out.glf.gz --n_ind "$number_indvs" --n_sites "$sites" \
    --init_values r --min_epsilon 1e-9 --n_threads 15 --out "$out"
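Since the error says the standard library build only supports uncompressed GLF files, one workaround consistent with the front-page examples is to decompress on the fly and pass `-` as the input. A sketch with a toy stand-in for the real out.glf.gz (the ngsF line itself is commented out):

```shell
# Toy stand-in for the real compressed GLF file:
printf 'toy-glf-bytes' | gzip > out.glf.gz

# Stream the decompressed data to stdout; ngsF then reads it from stdin via '-':
zcat out.glf.gz > /dev/null
# zcat out.glf.gz | ~/bigdata/ngsF/ngsF --glf - --n_ind "$number_indvs" \
#   --n_sites "$sites" --init_values r --min_epsilon 1e-9 --n_threads 15 --out "$out"
```

This sidesteps the compressed-input code path entirely, so it works regardless of which I/O library ngsF was built against.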
Implement accelerated EM method
see: http://www.jstor.org/stable/2290716
Dear Felipe - does ngsF take the same input GLF file as ngsLD?
For some reason I cannot seem to find the right combination of ANGSD's -doGeno and/or -doGlf to produce a file that ngsF would accept...
many thanks in advance
Misha
Currently, the per-individual log-likelihood is disabled since, for the sake of speed, only the fast_lkl is calculated.
The correct per-individual log-likelihood should be implemented.
Hi, is the number of sites in the .mafs.gz file supposed to differ from the one in the angsd log file? For one of my populations I get a different value when counting with wc -l on the .mafs.gz. For all other populations the numbers of sites in the angsd log file and in the .mafs.gz are the same.
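One detail worth checking (an assumption about the cause, not a confirmed diagnosis): .mafs.gz files from ANGSD start with a header line, so a plain `wc -l` reports one more line than there are sites. A toy mafs file illustrates the off-by-one:

```shell
# Toy mafs file: one header line plus two data rows (two sites):
printf 'chromo\tposition\tmajor\tminor\n' > toy.mafs
printf 'chr1\t100\tA\tC\nchr1\t200\tG\tT\n' >> toy.mafs
gzip -f toy.mafs

# Subtract the header when computing the number of sites:
N_SITES=$(( $(zcat toy.mafs.gz | wc -l) - 1 ))
echo $N_SITES   # 2
```

If the discrepancy is exactly one, the header is the likely explanation; a larger difference would point elsewhere (e.g. filtering applied at different stages).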
Hi Filipe
When extracting GL tags from a VCF (called with freebayes) and converting to GLF, I am encountering some problems: ngsF does not want to process the file, and I am wondering why. I know this is not described in the README and is sort of file hacking, but I don't see why it would be wrong. Maybe you can help.
Here my workflow
bcftools query -f '[%GL,]\n' my.vcf > my.gl
sed -i 's/\.,/0,0,0,/g' my.gl # replace missing GL with 0,0,0,
sed -i 's/,/\t/g' my.gl # make tab sep
cat my.gl | perl -an -e 'for($i=1; $i<=$#F; $i++){print(pack("d",$F[$i]))}' > my.glf
N_IND=$(awk '{print NF/3; exit}' my.gl)
N_SITES=$(cat my.gl | wc -l)
cat my.glf | $NGSF --n_ind $N_IND --n_sites $N_SITES --glf - --min_epsilon 0.001 \
--out my.approx_indF --n_threads 1 --verbose 2 --approx_EM --seed 12345 --init_values r
output
==> Input Arguments:
glf file: -
init_values: r
calc_LRT: false
freq_fixed: false
out file: fb_run5_all_nn_bi_mj_sn_mm.gl.0s.t.glf.approx_indF
n_ind: 339
n_sites: 489999
chunk_size: 100000
approx_EM: true
call_geno: false
max_iters: 1500
min_iters: 10
min_epsilon: 0.0010000000
n_threads: 1
seed: 12345
quick: false
verbose: 2
version: 1.2.0-STD (Feb 23 2021 @ 12:18:32)
==> Analysis will be run in 5 chunk(s)
==> Using native I/O library
[main] ERROR: cannot read GLF file!
[main] ERROR: cannot read GLF file!
: No such file or directory
Hi!
I tried to estimate the inbreeding coefficient for different populations using both the ANGSD -doHWE option and ngsF. I use the exact same command to generate the hwe.gz and the glf.gz files (including -SNP_pval to keep only sites that are variable in my population). I average over sites for the ANGSD results, and over individuals for the ngsF results.
And I get different results... Here is an example for a few populations:
| ANGSD doHWE | ngsF  |
|-------------|-------|
| 0.020       | 0.047 |
| 0.029       | 0.050 |
| 0.028       | 0.053 |
| 0.018       | 0.054 |
| 0.026       | 0.079 |
| 0.028       | 0.067 |
| 0.002       | 0.044 |
The difference is maybe not so large (the ngsF values are about twice as large, but still very close to 0), but how can it be explained? Is one method more reliable than the other in this case?
Thanks for your comments!
Is there a way to calculate the number of sites within ngsF? I am currently getting "[main] ERROR: wrong number of sites or invalid/corrupt file!" and I have no idea why. I'm trying to find a way around it
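If the GLF file is the uncompressed binary from -doGlf 3, one sanity check (assuming 3 genotype likelihoods per individual per site, each stored as an 8-byte double) is to back-compute the number of sites from the file size; if the division is not exact, the file or --n_ind is wrong:

```shell
# Toy GLF file: 2 individuals, 5 sites -> 2 * 5 * 3 * 8 = 240 bytes of data:
N_IND=2
head -c 240 /dev/zero > toy.glf

# Back-compute n_sites from the byte count:
FILESIZE=$(wc -c < toy.glf)
N_SITES=$(( FILESIZE / (N_IND * 3 * 8) ))
echo $N_SITES   # 5

# If FILESIZE % (N_IND * 3 * 8) != 0, the file is truncated/corrupt
# or --n_ind does not match the data.
```

A mismatch between this value and the --n_sites you pass (e.g. from counting a .mafs.gz with its header line included) is a frequent cause of the "wrong number of sites" error.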
Thank you very much for providing this package! I just applied it to an inbred species using the parameters recommended for low-coverage data ("--init_values r --min_epsilon 1e-9"), since my individuals show quite a spread of coverage (between ~3x and 20x). For a few individuals, I receive an estimated inbreeding coefficient of zero, which is not expected given what we know about the species. May I ask if you have an idea why this might be the case? Thank you very much for your help!
Hello, I have a problem with gsl as a dependency of ngsF.
I work with conda 4.12.0 in a dedicated environment.
The installation of ngsF warned that the gsl.pc and gsl/gsl_rng.h files are not found:
In the ngsF directory:
make
make -C htslib bgzip
make[1]: Entering directory '/mnt/shared/home/usr/ngsF/htslib'
cc -c -Wall -O3 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -D_USE_KNETFILE -D_CURSES_LIB=1 -I. bgzip.c -o bgzip.o
cc -c -Wall -O3 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -D_USE_KNETFILE -D_CURSES_LIB=1 -I. bgzf.c -o bgzf.o
bgzf.c: In function ‘bgzf_close’:
bgzf.c:630:8: warning: variable ‘count’ set but not used [-Wunused-but-set-variable]
int count, block_length = deflate_block(fp, 0);
^~~~~
cc -c -Wall -O3 -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -D_USE_KNETFILE -D_CURSES_LIB=1 -I. knetfile.c -o knetfile.o
cc -Wall -O3 -o bgzip bgzf.o bgzip.o knetfile.o -lz
make[1]: Leaving directory '/mnt/shared/home/usr/ngsF/htslib'
Package gsl was not found in the pkg-config search path.
Perhaps you should add the directory containing `gsl.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gsl' found
g++ -O3 -Wall -I -I/mnt/shared/home/usr/ngsF/htslib -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -D_USE_KNETFILE -c parse_args.cpp
Package gsl was not found in the pkg-config search path.
Perhaps you should add the directory containing `gsl.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gsl' found
g++ -O3 -Wall -I -I/mnt/shared/home/usr/ngsF/htslib -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE -D_USE_KNETFILE -c read_data.cpp
read_data.cpp:1:10: fatal error: gsl/gsl_rng.h: No such file or directory
#include <gsl/gsl_rng.h>
^~~~~~~~~~~~~~~
Therefore I have re-installed gsl with conda, and the installed version is 2.7.
conda install -c conda-forge gsl
I have checked as well the presence of gsl/gsl_rng.h:
/mnt/shared/scratch/usr/apps/conda/pkgs/gsl-2.7-he838d99_0/include/gsl/gsl_rng.h
But the same issue happens again.
Have you seen this issue before?
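One thing worth trying (a sketch assuming the conda environment is active; the fallback env path below is hypothetical): the make output shows that pkg-config and the compiler are not searching inside the conda prefix, so exporting the relevant search paths before building may resolve it:

```shell
# Hypothetical fallback prefix for illustration when no env is active:
CONDA_PREFIX="${CONDA_PREFIX:-$HOME/miniconda3/envs/ngsf}"

# Let pkg-config find gsl.pc, and the compiler/linker find headers and libs:
export PKG_CONFIG_PATH="$CONDA_PREFIX/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
export CPATH="$CONDA_PREFIX/include${CPATH:+:$CPATH}"
export LIBRARY_PATH="$CONDA_PREFIX/lib${LIBRARY_PATH:+:$LIBRARY_PATH}"

# Then rebuild from a clean state:
# make clean && make
```

This addresses both symptoms in the log: "Package gsl was not found in the pkg-config search path" (PKG_CONFIG_PATH) and "fatal error: gsl/gsl_rng.h: No such file or directory" (CPATH).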
Dear ANGSD group,
I tried to run ANGSD to estimate a bunch of things, however, it stopped at some point with the error message as follows:
ReferenceID:(r:0) for read:CL100033396L1C017R046_454501 is not: 254 but is:253
and my command for running is:
angsd -P 1 -b list.bam -trim 3 -out angsd -gl 1 -minQ 20 -minMapQ 30 -uniqueOnly 1 -only_proper_pairs 1 -setMaxDepthInd 100 -doMajorMinor 1 -snp_pval 1e-6 -domaf 1 -minmaf 0.01 -doIBS 1 -doCounts 1 -makeMatrix 1 -doCov 1 -doGlf 2 -doDepth 1 -doGeno 3 -dopost 1
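This "ReferenceID ... is not: X but is: Y" error usually indicates that the BAMs in the list were mapped against references with differing sequence names or order. A quick consistency check is to compare the @SQ header lines across BAMs; toy header files below stand in for real `samtools view -H` output:

```shell
# In real use, dump each BAM's @SQ lines first, e.g.:
#   for b in $(cat list.bam); do samtools view -H "$b" | grep '^@SQ' > "$b.hdr"; done
# Toy headers with the same sequences in a different order:
printf '@SQ\tSN:chr1\tLN:1000\n@SQ\tSN:chr2\tLN:2000\n' > bam1.hdr
printf '@SQ\tSN:chr2\tLN:2000\n@SQ\tSN:chr1\tLN:1000\n' > bam2.hdr

# Identical files mean consistent headers; a diff pinpoints the mismatch:
diff bam1.hdr bam2.hdr > /dev/null && echo consistent || echo MISMATCH
```

If headers differ, the usual fix is to re-map all samples against the same reference (same FASTA, same sequence order) rather than editing the BAMs.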
LKL currently not threaded.
Hi Felipe,
I was wondering if you could tell me how to construct the input file for --init_values?
I would like to use --freq_fixed together with a list of MAF values that I previously estimated from a subset of my data. The total dataset contains many close relatives, which I think would skew the MAF estimates.
Thanks a lot,
-Lauren