looselab / readfish

CLI tool for flexible and fast adaptive sampling on ONT sequencers

Home Page: https://looselab.github.io/readfish/

License: GNU General Public License v3.0

Topics: adaptive-sampling, bioinformatics, genomics, ont, oxford-nanopore, sequencing

readfish's People

Contributors: adoni5, alexomics, mattloose, svennd, thomassclarke

readfish's Issues

Specifying coordinates in TOML

Readuntil target enrichment is working great if I specify individual chromosomes (either in the ru_generator TOML or in a separate txt file). However, if I try to specify a specific region of a chromosome (as detailed in the README and in issue #22), I get no enrichment. At the moment, I'm working with the fast5 files suggested in the readuntil README. Apologies if I'm doing something silly here, but any suggestions?
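For comparison, the README's region syntax embeds the coordinates in the target string itself; a TOML fragment like this (the contig name and coordinates are made up for illustration) is the form that should enrich a sub-chromosome region:

```toml
[conditions.0]
name = "region_example"
control = false
# A bare name targets the whole contig:
# targets = ["chr22"]
# A sub-region is written "contig,start,end,strand":
targets = ["chr22,15000000,17000000,+"]
```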

Question regarding PC usage

Hi Loose Lab, thanks for the great software!
We are currently running our MinIONs using a PC desktop (32 Gb RAM, 12 CPU cores), so pretty reasonable machine. We also have direct access from the machine to high-performance compute servers with lots of GPUs, etc. I guess my quick question is whether you think it would be worth-while me trying to get 'read until' running on the PC, maybe using 'windows subsystem for linux', or should I just bite the bullet and get a new linux box? Alternatively, is there any way I can get our HPC servers to use 'read until' to control the MinION?
Thanks again,
Matt.

ru_summarise_fq error

I was trying to reproduce the tests described in the README when I received this error:

Using reference: /data/minimap2.mmi
Traceback (most recent call last):
  File "/home/minion/read_until/bin/ru_summarise_fq", line 11, in <module>
    load_entry_point('ru==2.0.0', 'console_scripts', 'ru_summarise_fq')()
  File "/home/minion/read_until/lib/python3.6/site-packages/ru/summarise_fq.py", line 125, in main
    "{:.0f}".format(stdev(data)),
  File "/usr/lib/python3.6/statistics.py", line 650, in stdev
    var = variance(data, xbar)
  File "/usr/lib/python3.6/statistics.py", line 588, in variance
    raise StatisticsError('variance requires at least two data points')

I used the demo file and changed chr22 to chr15.

The experiment ran on a MinION connected to a computer with a GPU (2080 Ti), so MinKNOW did not basecall directly (as that would run on the CPU).

I had the feeling reads were being pushed out correctly and that the GPU basecall_server connected to RU was working. However, I never installed minimap2, so I'm unsure how ru knows whether a read maps in the correct region?

After the test (~15 min) I basecalled using the same GPU basecall_server (v3.4.4) and ran the above command. Did I forget a step? The fastqs are 7.3 MB, 13 MB, 30 MB and 32 MB, so I guess they contain something?

Thanks for this nice pioneering work :)
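For context, statistics.stdev raises this StatisticsError whenever it is handed fewer than two values, so the summary most likely hit a condition whose FASTQ contained fewer than two reads. A guard such as the following (a sketch, not the actual summarise_fq.py code) would avoid the crash:

```python
from statistics import stdev


def safe_stdev(data):
    """Standard deviation that returns 0.0 for fewer than two data
    points instead of raising StatisticsError."""
    return stdev(data) if len(data) > 1 else 0.0
```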

Counter-intuitive result

We have simulated a run that we would like to do, based on your bulk file, but we get really counter-intuitive results.
The TOML we used is built like this:

[caller_settings]
config_name = "dna_r9.4.1_450bps_fast"
host = "localhost"
port = 5550

[conditions]
reference = "some.mmi"
axis = 1

[conditions.0]
name = "control"
control = true
min_chunks = 0
max_chunks = 4
targets = ["p"]
single_on = "stop_receiving"
multi_on = "stop_receiving"
single_off = "unblock"
multi_off = "unblock"
no_seq = "proceed"
no_map = "proceed"

[conditions.1]
name = "enrich_t1"
control = false
min_chunks = 0
max_chunks = 4
targets = ["t1"]
single_on = "stop_receiving"
multi_on = "stop_receiving"
single_off = "unblock"
multi_off = "unblock"
no_seq = "proceed"
no_map = "proceed"

[conditions.2]
name = "enrich_ts"
control = false
min_chunks = 0
max_chunks = 4
targets = ["t1","t2","t3","t4","t5","t6"]
single_on = "stop_receiving"
multi_on = "stop_receiving"
single_off = "unblock"
multi_off = "unblock"
no_seq = "proceed"
no_map = "proceed"

[conditions.3]
name = "deplete_ts"
control = false
min_chunks = 0
max_chunks = 4
targets = ["t1","t2","t3","t4","t5","t6"]
single_on = "unblock"
multi_on = "unblock"
single_off = "stop_receiving"
multi_off = "stop_receiving"
no_seq = "proceed"
no_map = "proceed"

Afterwards I split the fastq according to the channel in each read's header with something like the following:

# get_run_info returns the channel -> condition mapping parsed from the TOML
run_info, conditions, reference, caller_settings = get_run_info(toml_file)
# channel number comes from the "ch=<N>" token of the MinKNOW fastq header
channel = int(fastq_header.split()[4].split("=")[1])
field = run_info[channel]

Afterwards I used summarise_fq.py on each condition's fastq and looked at the sum column as a proxy for whether the enrichment worked.
And I see:

Condition:           0        1        2        3
Description:         control  t1_enr   ts_enr   ts_depl
Total bases in ts:   334294   30561    7069     1210683
Normalized:          1        0.091    0.021    3.622

So it's the complete opposite of what I expected. Do you have an idea of what I am doing wrong ?

CPU basecalling - increased number of reads processed together?

First of all, installing the new Read Until code works like a charm! Thanks a lot!

We're trying to get Read Until going with CPU basecalling. We're using a machine with 40 physical Xeon Silver cores, and live basecalling in MinKNOW (fast mode) seems to easily keep up with the incoming data from the bulk FAST5 test file, so I hope that this should, at least in principle, not be a hopeless undertaking.

ru_generators starts up fine, but there may be some kind of communications issue between the Read Until code and the Guppy basecalling server. Specifically, this is the output I get from ru_generators:

2020-02-29 18:24:00,807 ru.ru_gen 1R/0.33953s
2020-02-29 18:24:47,350 ru.ru_gen 148R/46.54283s
2020-02-29 18:28:25,598 ru.ru_gen 205R/218.24645s

(and then nothing more, so maybe things are slowing down over time)

I assume that 'R' specifies the number of reads, and the number after that specifies the average amount of time spent on the initial basecalling per read? ... so it seems that the number of reads lumped together is much higher than in the example provided?

Guppy server command:

./guppy_basecall_server --config dna_r9.4.1_450bps_hac.cfg --port 5550 --log_path guppy_log.txt --ipc_threads 3 --max_queued_reads 2000 --data_path ../data --num_callers 1 --cpu_threads_per_caller 18

Read Until command:

./ru_generators --device MN25472 --experiment-name "test2" --toml example.toml --log-file read_until_log.txt

Apart from that, everything is exactly identical to the default / example files.

Toml internal validation

When a TOML file is loaded, it should be validated to ensure there are no errors in reference names etc.
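A minimal validation sketch (`validate_targets` is a hypothetical helper; in practice the reference names would come from the loaded mappy index):

```python
def validate_targets(reference_names, targets):
    """Return target strings whose contig is absent from the reference.

    Accepts bare contig names ("chr21") or region strings
    ("chr21,1000000,2000000,+") as used in the conditions TOML.
    """
    missing = []
    for target in targets:
        contig = target.split(",")[0]
        if contig not in reference_names:
            missing.append(target)
    return missing
```

Running this at load time would let the tool warn (or refuse to start) before a typo silently turns the run into an unblock-nothing or unblock-all experiment.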

ReadUntil for filling in gaps?

Not really an issue, rather a question. Curious what others think about the idea of using ReadUntil with one MinION flowcell and some hopefully long Nanopore reads (N50 > 50 Kbp) to try and fill in gaps in a several Mbp region that was scaffolded with Hi-C reads but still has some gaps of unknown sizes (arbitrary 1000-bp gaps between contigs joined by Hi-C contact maps). It seems like one could use the desired region as the reference for ReadUntil and then use Racon to fill in gaps with the obtained enriched reads. Any thoughts?

Playback own fast5

Dear RU authors,

Thanks to the well-written documents, we ran the simulation using the provided fast5 quite well.
However, we couldn't apply the simulation protocol on our own sequence fast5 generated by MinKnow (19.12.5, GridIon).
We found the internal format of your fast5 quite different from those generated by MinKNOW (we have multiple fast5 files per run and no channel groups, while yours is a single fast5 with channel groups).
Is there any way we can run the simulation using our previous runs?

Thanks,
Yao-Ting

Question about unblocking Odd or Even pores

Hi,
Sorry if I have missed this. I am trying to run a test by only unblocking odd or even pores. How can I achieve this? I see the axis options in the toml file; should I use these?
Thanks,
Nick
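For what it's worth, an odd/even split can be expressed as two explicit channel lists (a sketch only; whether the axis option is the supported route for this is exactly the question above):

```python
def split_channels_by_parity(n_channels=512):
    """Partition MinION channel numbers 1..n_channels into odd and even lists."""
    odd = [c for c in range(1, n_channels + 1) if c % 2 == 1]
    even = [c for c in range(1, n_channels + 1) if c % 2 == 0]
    return odd, even
```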

Guppy_CPU support?

Hi,

I was wondering about using ru without a GPU. Our server has 40 cores (80 threads), so I think we might be able to use ont-guppy-cpu_3.4.1_linux64.tar.gz, as it comes with ont-guppy-cpu/bin/guppy_basecall_server.

Any idea if guppy_basecall_server could keep up with a MinION using CPUs?

Optimising read until on miniT

Hi Alex, Matt & team,
We have a miniT that runs our MinION, and inspired by #28 I had a go at installing and running ru on it. It works right up to the basecalling & mapping, and then I run into "performance issues". See my mapping times below. I'm not sure if it is worth continuing or if this is never going to work. Happy to provide additional logs or info.
Cheers,
Mark

2020-04-30 18:34:19,771 ru.ru_gen 74R/1.25966s
2020-04-30 18:34:23,165 ru.ru_gen 148R/3.39248s
2020-04-30 18:34:28,704 ru.ru_gen 174R/5.53804s
2020-04-30 18:34:39,260 ru.ru_gen 247R/10.55593s
2020-04-30 18:34:45,618 ru.ru_gen 241R/6.34544s
2020-04-30 18:34:55,137 ru.ru_gen 242R/9.51814s
2020-04-30 18:35:10,468 ru.ru_gen 248R/15.33125s
2020-04-30 18:35:27,624 ru.ru_gen 257R/17.14404s
2020-04-30 18:35:46,549 ru.ru_gen 350R/18.92046s
2020-04-30 18:36:10,766 ru.ru_gen 346R/24.21621s
2020-04-30 18:36:39,124 ru.ru_gen 341R/28.34917s
2020-04-30 18:37:15,932 ru.ru_gen 355R/36.79682s
2020-04-30 18:38:03,242 ru.ru_gen 393R/47.28662s
2020-04-30 18:39:07,453 ru.ru_gen 409R/64.18478s
2020-04-30 18:40:24,991 ru.ru_gen 410R/77.52404s
2020-04-30 18:41:54,450 ru.ru_gen 421R/89.44561s
2020-04-30 18:43:21,877 ru.ru_gen 426R/87.40666s

For interest and info, I have solved a few installation traps on the miniT. No guarantees are given or implied.

sudo apt update
sudo apt upgrade # If required
sudo apt install python3-venv python3-dev libzmq3-dev libhdf5-dev screen
# Fetch aarch64 binary version of guppy basecaller >3.4 from Oxford Nanopore. eg.
wget https://mirror.oxfordnanoportal.com/software/analysis/ont-guppy_3.5.2_linuxaarch64.tar.gz
tar -xvf ont-guppy_3.5.2_linuxaarch64.tar.gz
python3 -m venv read_until
. ./read_until/bin/activate
pip install --upgrade pip
pip install git+https://github.com/LooseLab/read_until_api_v2@master
# Install Cython and h5py separately, limiting the version of h5py, or you will get: AttributeError: module 'h5py.h5pl' has no attribute 'prepend'
# You can compile h5py 2.10.0 against HDF5 1.8.4 just fine, but it won't include the h5pl plugin attribute unless your HDF5 version is 1.10+ (see http://api.h5py.org/h5pl.html; this one got me good!)
pip install Cython
pip install h5py==2.9.0
pip install git+https://github.com/LooseLab/ru@master
# Installation passed the first test at this stage
ru_generators

To get read until working I started a new guppy server, as suggested for the GridION. I took the settings from the existing guppy 3.2 logs on the miniT. I can't work out how to set "num socket threads", which is 1 in my logs but defaults to 2. The default number of runners per device is 8.

screen sudo /opt/ont-guppy/bin/guppy_basecall_server \
--config /opt/ont-guppy/data/dna_r9.4.1_450bps_fast.cfg \
--log_path /var/log/ont/guppy --port 5556 --device cuda:all \
--chunk_size 1000 --chunks_per_runner 48 \
--num_callers 1

Applying to direct RNA sequencing

Hi,

I was wondering what I need to change to allow this program to work on direct RNA sequencing?

Fortunately, I have kept a bulk file from one of my direct RNA sequencing runs and I will be able to test on that.

More explicit handling of reads that exceed the maximum sampling

Currently, in the event of a given read exceeding the maximum threshold, we unblock unless the last decision was "stop_receiving" (see here). However, this is not suitable if the reference only contains targets that need to be removed, as anything that doesn't classify will be unblocked.

The action to take in the event of exceeding max chunks needs to be either user settable (adds more complexity), or we could provide pre-defined scenarios, e.g.:

ru deplete ...
ru enrich ...

Where deplete would be the use case of unblocking anything that classifies against the reference, whereas enrich would do the opposite and stop receiving anything that classifies.

These options might not encompass mixed references.
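The two pre-defined scenarios could be sketched as a small policy function (hypothetical names; this is not the current implementation):

```python
def on_exceed_max_chunks(mode, last_decision):
    """Hypothetical policy for reads that exceed max_chunks.

    deplete: only reads classifying against the reference are ejected,
             so an unclassified read is left to sequence.
    enrich:  unclassified reads are off-target, so they are ejected.
    """
    if last_decision == "stop_receiving":
        # matches current behaviour: never unblock a read we chose to keep
        return "stop_receiving"
    return "stop_receiving" if mode == "deplete" else "unblock"
```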

Manager Sending reset : human error ?

In our latest run, we noticed that the read until script stopped with the following message:

2020-04-13 11:18:33,073 read_until_api_v2.main Reset request received, shutting down...
2020-04-13 11:18:33,086 read_until_api_v2.main Reset signal received by action handler.
2020-04-13 11:18:33,151 read_until_api_v2.main Stopping processing of reads due to reset.
2020-04-13 11:18:33,338 read_until_api_v2.main Stream handler exited successfully.
2020-04-13 11:18:33,395 ru.ru_gen Finished analysis of reads as client stopped.
2020-04-13 11:18:33,543 Manager Worker exited successfully.

The sequencing continued, but read until for some reason stopped "randomly". This is the second time. The first time we thought it was human error, but this time there was nobody around the machine ...

What can cause the script to exit? We run ru_generators in a Linux screen (if that's relevant).

Thanks.

Specifying targets in a bed file

I could not find this in the documentation, apologies if I missed it, but I think a common use-case would be to have the targets of interest specified as intervals in a BED file. I noticed the [conditions.0] 'targets' field takes either a string or an array of targets; could it also take a (BED) file?

Thanks,
Wouter
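Pending native support, a BED file could be flattened into the existing targets array with a small script (`bed_to_targets` is a hypothetical helper; the "contig,start,end,strand" string form is taken from the region syntax in the README):

```python
import csv


def bed_to_targets(bed_path):
    """Convert a BED file into "contig,start,end,strand" target strings.

    If a row has no sixth (strand) column, emit the region on both strands.
    Sketch only, not a supported readfish feature.
    """
    targets = []
    with open(bed_path) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            if not row or row[0].startswith(("#", "track", "browser")):
                continue  # skip headers and blank lines
            contig, start, end = row[0], row[1], row[2]
            strands = [row[5]] if len(row) > 5 else ["+", "-"]
            for strand in strands:
                targets.append(f"{contig},{start},{end},{strand}")
    return targets
```

The resulting list can be pasted (or templated) into the TOML's targets array.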

Alert when no targets are provided

When running a flowcell with read until, if no targets are provided (or the path to a file of targets is wrong), the experiment essentially becomes an 'unblock all' configuration. This is NOT good!

ru_generators sometimes fails when started before run starts, runs fine on restart of ru_generator

If I start ru_generator before the run has started, it goes into a wait mode, starts when the run starts, and often immediately fails. Restarting ru_generator fixes the issue.

2020-06-19 10:06:43,253 Manager ru_generators --experiment-name test --device MN26516 --toml example.toml --log-file RU_log.log
2020-06-19 10:06:43,254 Manager batch_size=512
2020-06-19 10:06:43,254 Manager cache_size=512
2020-06-19 10:06:43,254 Manager channels=[1, 512]
2020-06-19 10:06:43,254 Manager chunk_log=chunk_log.log
2020-06-19 10:06:43,254 Manager device=MN26516
2020-06-19 10:06:43,254 Manager dry_run=False
2020-06-19 10:06:43,254 Manager experiment_name=test
2020-06-19 10:06:43,254 Manager host=127.0.0.1
2020-06-19 10:06:43,254 Manager log_file=RU_log.log
2020-06-19 10:06:43,254 Manager log_format=%(asctime)s %(name)s %(message)s
2020-06-19 10:06:43,254 Manager log_level=info
2020-06-19 10:06:43,254 Manager paf_log=paflog.log
2020-06-19 10:06:43,254 Manager port=9501
2020-06-19 10:06:43,254 Manager read_cache=AccumulatingCache
2020-06-19 10:06:43,255 Manager run_time=172800
2020-06-19 10:06:43,255 Manager throttle=0.1
2020-06-19 10:06:43,255 Manager toml=example.toml
2020-06-19 10:06:43,255 Manager unblock_duration=0.1
2020-06-19 10:06:43,255 Manager workers=1
2020-06-19 10:06:43,308 Manager Initialising minimap2 mapper
2020-06-19 10:06:43,334 Manager Mapper initialised
2020-06-19 10:06:43,334 read_until_api_v2.main Client type: many chunk
2020-06-19 10:06:43,334 read_until_api_v2.main Cache type: AccumulatingCache
2020-06-19 10:06:43,335 read_until_api_v2.main Filter for classes: adapter and strand
2020-06-19 10:06:43,335 read_until_api_v2.main Creating rpc connection for device MN26516.
2020-06-19 10:06:43,759 read_until_api_v2.main Loaded RPC
2020-06-19 10:06:43,759 read_until_api_v2.main Waiting for device to start processing

Once the flow cell starts running it often crashes as follows:

...
2020-06-19 10:07:47,668 Manager Creating 1 workers
2020-06-19 10:07:47,669 read_until_api_v2.main Processing started
2020-06-19 10:07:47,669 read_until_api_v2.main Sending init command, channels:1-512, min_chunk:0
2020-06-19 10:07:47,673 read_until_api_v2.main <_MultiThreadedRendezvous of RPC that terminated with:
        status = StatusCode.FAILED_PRECONDITION
        details = "Data acquisition not running, or analysis not enabled"
        debug_error_string = "{"created":"@1592554067.669713569","description":"Error received from peer ipv4:127.0.0.1:8002","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Data acquisition not running, or analysis not enabled","grpc_status":9}"

Then, starting ru_generators again with the run already on its way works just fine. Any idea what the problem is here? Could this be a timeout issue?
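Until the root cause is known, one workaround sketch is a retry loop around whatever call raises on FAILED_PRECONDITION, polling until data acquisition is actually running (`retry` is a hypothetical helper, shown here with generic exceptions rather than grpc's):

```python
import time


def retry(fn, attempts=5, delay=2.0, retriable=(RuntimeError,)):
    """Call fn(), retrying on the given exception types with a fixed delay.

    Re-raises the last exception if all attempts fail.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```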

Can't simulate

Hi, I am trying to get this up and running in a similar deployment to the one suggested in the GridION issue, but I am immediately hitting my head against being able to simulate. MinKNOW seems to accept the modified script, but I am not seeing any reads in the run (FYI, I am on Windows).

Additionally, is there any way to install the read until api v2 on a machine where MinKNOW is either not installed (I can copy over the files manually) or not installed in the default location?

Cheers!

Compatibility with MinKNOW-Core 4.04

Hi there, it looks like the GridION software and MinKNOW have received major updates on July 7 (notably, python2 -> python3). The nanoporetech read_until_api has also been updated.

Are there any plans to update ru and read_until_api_v2 to be compatible with the new MinKNOW? Thanks!

setup help on minion

Hi, I'm trying to get this set up on a MinION. I have got as far as getting ru_unblock_all to work on the playback, but I am not able to get the example with selective unblock going. It looks like reads are never getting sent to minimap2. The stderr looks like:

2020-03-09 16:50:08,856 Manager /home/quinlan/.local/bin/ru_generators --device MN18894 --experiment-name RU Test basecall and map --toml /data/human_chr_selection.toml --log-file human_chr_selection.log
2020-03-09 16:50:08,856 Manager batch_size=512
2020-03-09 16:50:08,856 Manager cache_size=512
2020-03-09 16:50:08,856 Manager channels=[1, 512]
2020-03-09 16:50:08,856 Manager chunk_log=chunk_log.log
2020-03-09 16:50:08,856 Manager device=MN18894
2020-03-09 16:50:08,856 Manager dry_run=False
2020-03-09 16:50:08,856 Manager experiment_name=RU Test basecall and map
2020-03-09 16:50:08,856 Manager host=127.0.0.1
2020-03-09 16:50:08,856 Manager log_file=human_chr_selection.log
2020-03-09 16:50:08,856 Manager log_format=%(asctime)s %(name)s %(message)s
2020-03-09 16:50:08,856 Manager log_level=info
2020-03-09 16:50:08,856 Manager paf_log=paflog.log
2020-03-09 16:50:08,856 Manager port=9501
2020-03-09 16:50:08,856 Manager read_cache=AccumulatingCache
2020-03-09 16:50:08,856 Manager run_time=172800
2020-03-09 16:50:08,856 Manager throttle=0.1
2020-03-09 16:50:08,856 Manager toml=/data/human_chr_selection.toml
2020-03-09 16:50:08,857 Manager unblock_duration=0.1
2020-03-09 16:50:08,857 Manager workers=1
2020-03-09 16:50:08,860 Manager Initialising minimap2 mapper
2020-03-09 16:50:16,204 Manager Mapper initialised
2020-03-09 16:50:16,204 read_until_api_v2.main Client type: many chunk
2020-03-09 16:50:16,204 read_until_api_v2.main Cache type: AccumulatingCache
2020-03-09 16:50:16,204 read_until_api_v2.main Filter for classes: adapter and strand
2020-03-09 16:50:16,204 read_until_api_v2.main Creating rpc connection for device MN18894.
2020-03-09 16:50:16,480 read_until_api_v2.main Loaded RPC
2020-03-09 16:50:16,481 read_until_api_v2.main Signal data-type: int16
2020-03-09 16:50:16,482 Manager This experiment has 1 region on the flowcell
2020-03-09 16:50:16,482 Manager Using reference: /data/human_g1k_v38_decoy_phix.fasta.mmi
2020-03-09 16:50:16,483 Manager Region 'select_chr_21_22' (control=False) has 2 targets of which 2 are in the reference. Reads will be unblocked when classed as single_off or multi_off; sequenced when classed as single_on or multi_on; and polled for more data when classed as no_map or no_seq.
2020-03-09 16:50:16,484 Manager Creating 1 workers
2020-03-09 16:50:16,484 read_until_api_v2.main Processing started
2020-03-09 16:50:16,485 read_until_api_v2.main Sending init command, channels:1-512, min_chunk:0

and then it just stalls there.

Other potentially useful information:

  • I have started the run in minKNOW (both with and without minKNOW base-calling -- and trying an external guppy base-calling server)
  • if I enter a different PORT, then it fails as it can't connect.
  • here is the command I am using:
ru_generators --device $DEVICE \
              --port $PORT \
              --experiment-name "RU Test basecall and map" \
              --toml $TOML \
              --log-file $(basename $TOML .toml).log
  • In minKNOW I can see the run proceeding uninterrupted (whereas with ru_unblock_all I can see the size distribution drop).

Anything else I can report or try to help diagnose?

small target regions

Hi,

I am planning to run a ReadUntil experiment using a small set of target regions (~3 Mb, so ~0.1% of the human genome). Do you think it will have a hugely detrimental effect on the pores, and eventually on sequencing yield, as a lot of unblocking will occur? I am wondering whether it is better to include more regions (even though not in our regions of interest), just to keep the pores occupied with sequencing.
I would appreciate hearing what you think.

Thanks,
Cen Liau

Best action on first connect to a run.

If we have a run that has been in progress and then start read until, we see a large number of reads unblocked. In many cases these reads may be longer than the max chunks parameter allows for, so we are unblocking very long molecules which we might prefer not to. A better course of action might be to estimate the length of the read on that first grab of data and have some rules for rejection or sequencing. This is especially true if we have very long reads in our libraries.
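The estimate suggested above could be as simple as converting the samples already seen into bases (a sketch; the 4 kHz sample rate and ~450 bases/s translocation speed are assumptions for R9.4.1 chemistry, not values read from MinKNOW):

```python
def estimate_bases(samples_seen, sample_rate=4000, bases_per_second=450):
    """Rough sequenced-length estimate for the first grab of raw signal.

    samples_seen / sample_rate gives seconds in the pore; multiplying by
    the translocation speed converts that to an approximate base count.
    """
    return samples_seen / sample_rate * bases_per_second
```

A read whose first grab already implies tens of kilobases could then be stop_received rather than unblocked.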

ru_generators

ru_generators is the only script for interfacing with MinKNOW. I suggest we use the ru_ prefix for things which are actually doing read until, and some other prefix for non read until scripts.

So ru_generators should become readuntil.

ru_unblock_all is a test for making sure that MinKNOW is working and responding as expected. I suggest we rename to:

ru_test_unblock_all

ru_raw_signal_log is something which might be interesting and we should investigate.

ru_validate is a convenience wrapper for toml validation for read until - I suggest we call this:

ru_toml_validate

ru_iteralign is iteralign

and ru_iteralign_centrifuge becomes:

itercent

Set messages level to 2

Currently messages are sent at the normal level, but they should be pushed as warnings, as they will change flowcell behaviour.

Choice of parameters

Hi,

Some weeks ago we did a sequencing run to test ReadUntil without reference mapping and basecalling. Everything went fine, and I could recognise a peak at 500 bp as expected. But after 3 hours I observed that a lot of action messages failed and the read lengths were increasing on unblocked channels. Now I wonder if this could be caused by the parameters I selected for running ReadUntil (e.g. action batch size 100, unblock_duration 1.0). I'm not sure I completely understand how the parameters influence the behaviour of the gRPC stream. Do you have any suggestions for parameter choice?

Thanks
Jens

Read-until for short amplicons

Hi Matt,

We're using the amazing ARTIC protocol for Covid-19 sequencing but observe that there is sometimes significant individual-amplicon dropout. The amplicons are relatively short (~400 bp). Would it be possible to modify your code so that reject decisions are made after the first, say, 20 or 40 bases, with the idea of enriching under-represented amplicons?

Cheers,

Alex

Read Until on a Promethion

Hi Matt,
thank you for this nice piece of software. I have used your code and extended it to enable targeted enrichment from a BED file. Not perfect yet, but it works! So I didn't use the generator script.

Up to now we have managed to run Read Until on a MinION with great results. Now we have tried it on a PromethION. I have set the first channel to 1 and the last channel to 3000. I'm using the fast basecalling model. Running with a single worker it seems not to keep up with the guppy basecaller, so we increased the workers (to 6), which seems able to handle the number of reads:

2020-05-12 15:42:38,611 [read_until_lafuga] Thread:6 21 reads/0.28550s Unblocked: 1565 Target: 398 Total reads: 4682 Guppy queue: 11
2020-05-12 15:42:38,612 [read_until_lafuga] Thread:3 1 reads/0.09670s Unblocked: 1667 Target: 460 Total reads: 4889 Guppy queue: 11
Throttle: 0.0032995089422911406
2020-05-12 15:42:38,647 [read_until_lafuga] Thread:2 5 reads/0.13400s Unblocked: 1591 Target: 426 Total reads: 4845 Guppy queue: 0
Throttle: 0.09994410490617156
2020-05-12 15:42:38,669 [read_until_lafuga] Thread:1 2 reads/0.13900s Unblocked: 1529 Target: 431 Total reads: 4708 Guppy queue: 4
2020-05-12 15:42:38,722 [read_until_lafuga] Thread:4 1 reads/0.10409s Unblocked: 1688 Target: 469 Total reads: 5077 Guppy queue: 6
Throttle: 0.09996974118985236
2020-05-12 15:42:38,866 [read_until_lafuga] Thread:3 5 reads/0.14211s Unblocked: 1668 Target: 460 Total reads: 4894 Guppy queue: 1

But MinKNOW only partially shows some unblocking, or some kind of unblocking waves over the flowcell.
My assumption is that the basecalling and unblocking decisions are fast enough, but that the MinKNOW RPC port can't handle the unblocking actions in time!? Do you have any experience/advice?

Have you ever tried to run RU on a PromethION, or do you have any advice on what we could do? Or is it currently not possible to run it on a PromethION?

Thank you for your help
Best Alex

Installing on GridION

Hey,

Keen to try RU with our GridION, but your README states we need guppy 3.4, and the current software release for the GridION is only at 3.2. I don't really want to install anything on the GridION as I have been burned by this in the past! I did think I could install ru on another machine and use the --host and --port flags to connect to the GridION, but that's when I hit the guppy version snag (well, at least I guess this is what is causing the error below).

Traceback (most recent call last):
  File "/home/md1mpar/wc/miniconda2/envs/ru/bin/ru_generators", line 11, in <module>
    load_entry_point('ru', 'console_scripts', 'ru_generators')()
  File "/home/md1mpar/wc/ru/ru/ru_gen.py", line 463, in main
    read_until_client = read_until.ReadUntilClient(
  File "/home/md1mpar/wc/read_until_api_v2/read_until_api_v2/main.py", line 239, in __init__
    self.connection, self.message_port = get_rpc_connection(
  File "/home/md1mpar/wc/read_until_api_v2/read_until_api_v2/load_minknow_rpc.py", line 174, in get_rpc_connection
    response = stub.list_devices(list_request)
  File "/home/md1mpar/wc/miniconda2/envs/ru/lib/python3.8/site-packages/grpc/_channel.py", line 826, in __call__
    return _end_unary_response_blocking(state, call, False, None)
  File "/home/md1mpar/wc/miniconda2/envs/ru/lib/python3.8/site-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
    raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
	status = StatusCode.UNIMPLEMENTED
	details = ""
	debug_error_string = "{"created":"@1583246614.752489054","description":"Error received from peer ipv4:143.167.151.27:8000","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"","grpc_status":12}"

What are my options here? You state in the README that you've tested on a GridION. How did you do that? Install guppy 3.4, ru, and the readuntil api?

Thanks!

Setting up another guppy server

Alex and Matt,
If one has a Linux box running the new Minion-nc, AND new guppy (version 3.5.x) in a different directory (say home/local/bin), one could run basecalling "remotely" using the local/bin copy in server mode, as long as the toml file is properly configured (a port other than 5555). Is that correct? I think Alex had a diagram for something close to this in an earlier reply to someone setting up ru on a GridION. I want to set this up for a MinION, and was wondering whether running the basecalling server on the same machine (but from a different installation directory) would work. Any advice on setting up the basecalling server or the toml file for this case?

Originally posted by @tchrisboles in #30 (comment)
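Under that setup, the TOML's caller_settings block would simply point at the second server's port (a sketch mirroring the TOML fragments elsewhere in this thread; the port number is an assumption and must match the --port passed to guppy_basecall_server):

```toml
[caller_settings]
config_name = "dna_r9.4.1_450bps_fast"
host = "127.0.0.1"
port = 5556
```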

Selection of mapping conditions for low abundance enrichment and targeted sequencing

The mapping conditions listed in your paper for these two goals are similar

multi-on: stop receiving
multi-off: proceed
single-on: stop receiving
single-off: unblock
no_map: proceed
no_seq: proceed

Two questions:

  1. If you unblock for single-off, why not unblock for multi-off? There seem to be more grounds for unblocking in the case of a multi-off signal...?

  2. Why proceed (in all schemes) for no_map and no_seq? What's the rationale?

Native barcoding of samples

Hi, me again! I figured this question might be of interest for multiple users so I ask it here...
Is there something we have to take into account when we use (native) barcoding on samples prior to read until?

Is this something you tried?
Naively I would expect the barcodes don't align to any target, but we should quickly enough have enough unique sequence to enable a decision...

Eventually, we might want to balance out the coverage of the barcodes with read until, but right now we'll assume that our lab-fu is good enough to get some equimolar pooling right.

Cheers,
Wouter

Chunk log information

Hey,
Could we have clear information on the meaning of the chunk log file?
Thanks

Off-target region with high mapping reads

Hi,

We ran a cosmic panel in NB4 cells and found an off-target region (MIR12136) with high numbers of mapping reads.
We wondered whether this off-target result was observed in your experiments?

[Image 1]

Amber

Read Until on a flongle

Hello,

I have managed to set up read until on our laptop (12 threads), with the aim of running read until on Flongles. As I believe less data is generated at once, the hope is that our laptop will be able to keep up. In order to simulate a Flongle, would it just be a case of adjusting --channels in the ru_generator command? I've set this to 128 below, but I think it should be 126 to represent a Flongle.

If you can suggest any optimisations then I'd be very grateful.

Guppy server command:
Downloads/ont-guppy-cpu/bin/guppy_basecall_server --config /Downloads/ont-guppy-cpu/data/dna_r9.4.1_450bps_fast.cfg --port 5556 --log_path /ReadUntil/read_until/Guppy_server/log.txt --ipc_threads 1 --max_queued_reads 1000 --num_callers 2 --cpu_threads_per_caller 3

ru_generator command:
ru_generators --device MN16259 --experiment-name "rut2" --toml human_chr_selection.toml --log-file rut2.log --log-level info --workers 4 --channels 1 128

I have modified break_reads_after_seconds to be 0.4 and max_chunks to 4.

The current output I'm getting from ru_generators looks okay so far:
2020-04-21 09:56:12,943 ru.ru_gen 9R/0.14979s
2020-04-21 09:56:12,993 ru.ru_gen 4R/0.08269s
2020-04-21 09:56:13,146 ru.ru_gen 3R/0.10329s
2020-04-21 09:56:13,265 ru.ru_gen 5R/0.11896s

How it looks after ~30 minutes.
(attached screenshot: ru)

no_map reads seem mappable

Dear Authors,
When playing with your provided fast5, we occasionally observed several reads are continuously reported as "no_map" in the RU log even with many iterations and long lengths (>30, >7kb). Yet they are mappable by minimap after MinKnow basecalling (see attached). These reads should be unblocked as they were not in the TOML targets. But because they were consistently reported as "no_map" (still within max_chunks), they were entirely sequenced instead. We are curious why minimap and mappy in RU produce different mapping results (using the same map-ont setting). Thanks, Yao-Ting.

(attached screenshots: messageImage_1587361351018, messageImage_1587361322265)
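
One possible (purely speculative, unconfirmed) contributor: readfish can only map the basecalled prefix accumulated so far, whereas minimap2 on the final FASTQ sees the whole read, so a read whose only mappable stretch lies beyond the accumulated chunks could stay "no_map" live yet map fine afterwards. A toy illustration with a fake aligner:

```python
def toy_map(query: str, target: str, min_anchor: int = 15) -> bool:
    """Fake aligner: 'maps' iff query and target share an exact substring
    of >= min_anchor bases (a crude stand-in for a minimizer hit)."""
    return any(
        query[i:i + min_anchor] in target
        for i in range(len(query) - min_anchor + 1)
    )

target = "A" * 50 + "ACGT" * 5 + "C" * 50   # unique 20-mer island
read = "T" * 200 + "ACGT" * 5 + "G" * 30    # informative stretch sits late

prefix = read[:180]              # roughly what early chunks would expose
print(toy_map(prefix, target))   # False: the prefix misses the 20-mer
print(toy_map(read, target))     # True: the full read contains it
```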

RU crashes when pyguppy returns NoneType

Hello,

We had a read-until run crash recently after running successfully for ~9 hours. Sequencing proceeded as normal, but the adaptive selection ceased to operate after this point. The goal of the run was to deplete reads that align to the target reference (unblock) and allow all non-mapping reads to proceed.

Here's the contents of the toml file:

[caller_settings]
config_name = "dna_r9.4.1_450bps_hac"
host = "127.0.0.1"
port = 5555

[conditions]
reference = "/home/grid/git/pyguppyclient/read_until/assembly.no_eupl.combo.ont.mmi"

[conditions.0]
name = "bac_background_depletion"
control = false
min_chunks = 0
max_chunks = inf
targets = ["combined_bac_contigs"]
single_on = "unblock"
multi_on = "unblock"
single_off = "unblock"
multi_off = "unblock"
no_seq = "proceed"
no_map = "proceed"

And here are the last few lines of the read-until log:

2020-03-09 21:55:17,987 DEC 9655 167 3055fd3e-a776-47be-8dd2-41cb6364ee17 298 20068 8 1 no_map proceed
2020-03-09 21:55:17,988 DEC 9655 168 746b3d88-9c1b-4768-8495-b62daebfff5c 299 17338 4816 4 no_map proceed
2020-03-09 21:55:17,988 DEC 9655 169 9edd601d-bbf4-4cb5-b282-c1df702abf35 5 17692 887 2 no_map proceed
2020-03-09 21:55:17,990 ru.ru_gen 169R/5.10888s
2020-03-11 13:08:09,670 Manager Sending reset
2020-03-11 13:08:09,800 Manager EXCEPT
Traceback (most recent call last):
File "/home/grid/git/pyguppyclient/read_until/lib/python3.7/site-packages/ru/ru_gen.py", line 397, in run_workflow
res = result.get(3)
File "/home/grid/miniconda3/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
File "/home/grid/miniconda3/lib/python3.7/multiprocessing/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/home/grid/git/pyguppyclient/read_until/lib/python3.7/site-packages/ru/ru_gen.py", line 207, in simple_analysis
decided_reads=decided_reads,
File "/home/grid/git/pyguppyclient/read_until/lib/python3.7/site-packages/ru/basecall.py", line 146, in map_reads_2
for read_info, read_id, seq, seq_len, quality in calls:
File "/home/grid/git/pyguppyclient/read_until/lib/python3.7/site-packages/ru/basecall.py", line 101, in basecall_minknow
self.pass_read(read)
File "/home/grid/git/pyguppyclient/read_until/lib/python3.7/site-packages/pyguppyclient/client.py", line 123, in pass_read
), simple=False)
File "/home/grid/git/pyguppyclient/read_until/lib/python3.7/site-packages/pyguppyclient/client.py", line 55, in send
return simple_response(self.recv())
File "/home/grid/git/pyguppyclient/read_until/lib/python3.7/site-packages/pyguppyclient/ipc.py", line 101, in simple_response
raise Exception(cls.Text().decode())
AttributeError: 'NoneType' object has no attribute 'decode'

Any guidance on what happened here? It could be that this is a pyguppy bug, but I thought I'd first run it by you read-until folks to see if you have encountered this before. I'm hesitant to simply relaunch the run without debugging as this is a precious sample.
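
Not a root-cause fix, but one defensive pattern worth considering while debugging (entirely hypothetical code, not readfish or pyguppyclient API) is to wrap the flaky client call so a single bad IPC response is retried, then skipped, rather than killing the whole worker pool:

```python
import logging
import time

log = logging.getLogger("ru-guard")

def call_with_retry(func, *args, retries=3, delay=0.1, **kwargs):
    """Call func(*args, **kwargs); on AttributeError (the failure mode in
    the traceback above) retry a few times, then give up and return None
    so the caller can skip this read instead of crashing the worker.
    Hypothetical wrapper, not part of readfish or pyguppyclient."""
    for attempt in range(1, retries + 1):
        try:
            return func(*args, **kwargs)
        except AttributeError as exc:
            log.warning("attempt %d/%d failed: %s", attempt, retries, exc)
            time.sleep(delay)
    return None

# Toy stand-in for a client method that fails twice, then succeeds:
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise AttributeError("'NoneType' object has no attribute 'decode'")
    return "ok"

print(call_with_retry(flaky))  # ok (third attempt succeeds)
```

Skipping a read for one iteration costs a little selection efficiency but keeps the run alive, which matters for a precious sample like this.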

Thanks,
John
