quantumjot / btrack

Bayesian multi-object tracking
Home Page: https://btrack.readthedocs.io
License: MIT License
Hello Author,
Could you please suggest whether this Bayesian tracker can be applied to another domain, together with any deep-learning-based object detector?
If so, could you let me know what input format the object detections must follow to run this tracking algorithm?
Thank you!
Hello:-)
Thank you for the wonderful software!
Do you have a suggestion for a cell-segmentation library that would produce CSV data compatible with btrack?
Thank you!
Maria
Currently, any dummy objects inserted by the tracker do not have properties. This means that tracks containing dummies will have NaN for certain properties, which causes them to render incorrectly in napari.
We should provide a flag to replace these NaN values to aid in visualization. We could perhaps interpolate the values for the missing ones.
The function signature would then become:
def tracks_to_napari(tracks: list, ndim: int = 3, replace_nan: bool = True):
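The proposed `replace_nan` flag could fill in the missing property values by linear interpolation from the neighbouring real measurements. A minimal numpy sketch of that interpolation step (the helper name is hypothetical, not part of btrack's API):

```python
import numpy as np

def interpolate_nan(values: np.ndarray) -> np.ndarray:
    """Fill NaN entries by linear interpolation over the array index.

    Hypothetical helper: dummy objects inserted by the tracker have no
    measured properties, so their entries are NaN; here we fill them in
    from the neighbouring real measurements.
    """
    values = np.asarray(values, dtype=float).copy()
    nans = np.isnan(values)
    if nans.all():
        return values  # nothing to interpolate from
    idx = np.arange(values.size)
    # np.interp linearly interpolates the NaN positions from the valid ones
    values[nans] = np.interp(idx[nans], idx[~nans], values[~nans])
    return values
```

For example, `interpolate_nan(np.array([1.0, np.nan, 3.0]))` fills the middle entry with `2.0`.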
Allow the Tracklet to store properties (e.g. cell area) over time, and export these in the .h5 file. Update the to_napari() convenience function to format these for the napari Tracks layer, allowing property visualization.

Should also update the README with the new ways to import data and configure the tracker.
Hello!
I am attempting to run btrack on a medium-sized timelapse (~20-30 cells per image across 60 or so 512x512 images), and the run time is longer than I would have expected. Approximately how long should each image take to track, and how long does the optimizer take on average? I have it set to EXACT mode at the moment, but I am at the 9-hour mark of the optimizer running with no real end in sight.
I'm curious if the CUDA version of the model is currently functional, and if so, do you have any recommendations on using it?
Thanks!
Peter
Currently, btrack files can be loaded by an additional plugin (https://github.com/lowe-lab-ucl/napari-btrack-reader). However, we could add an import/export hook to this plugin and deprecate the other plugin, making things easier to maintain.
Related issue: lowe-lab-ucl/napari-btrack-reader#3
It looks like there's a typo in dataio.py
with open(filename, 'r') as csv_file:
csvreader = csv.DictReader(csvfile, delimiter=' ', quotechar='|')
Should "csv_file" be "csvfile"?
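For reference, a version with the variable names made consistent might look like this (wrapped in a hypothetical helper for illustration; the delimiter and quotechar are taken from the snippet above):

```python
import csv

def read_objects_csv(filename):
    """Read whitespace-delimited object rows from a CSV file.

    The file handle name now matches the variable passed to DictReader,
    fixing the NameError-style typo reported above.
    """
    with open(filename, 'r') as csv_file:
        reader = csv.DictReader(csv_file, delimiter=' ', quotechar='|')
        return list(reader)
```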
It would be good to add a utility function that takes the output of a segmentation algorithm and creates the input for btrack.
Update the README to add a reference to the btrack publication.
Need to fix the error TypeError: Segmentation of type <class 'dask.array.core.Array'> not accepted
by expanding localisation to accept the dask arrays used in the segment-classify-track pipeline. I can give this a go if you want, @quantumjot?
Hi, the way the motion model (motion.cc) is implemented, there is a lot of dynamic memory allocation in the update function, which causes memory issues on highly resource-constrained devices. Would it make sense for you to incorporate these best practices into your library?
I found information about the Eigen memory allocation mechanism here:
https://github.com/stulp/tutorials/blob/master/test.md
Use a heuristic to determine the track dimensionality when loading from an h5 file, so that multi-class segmentation masks are accepted.
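One possible heuristic, sketched here as an assumption rather than the library's actual logic: treat the data as 2D if the z column of the stored coordinates carries no information.

```python
import numpy as np

def guess_track_ndim(coords: np.ndarray) -> int:
    """Guess track dimensionality from an (N, 3) array of x, y, z
    coordinates loaded from an h5 file.

    Heuristic sketch only: if every z value is (near) zero, the data
    were presumably 2D with a padded z column; otherwise treat as 3D.
    """
    z = np.asarray(coords)[:, 2]
    return 2 if np.allclose(z, 0.0) else 3
```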
Hi,
I just found out about btrack a few days ago and I would really like to use it in my project. So far it works pretty well and it is a lot of fun to work with. Thank you for that!
However, I noticed an odd behavior for which I have not yet found a workaround. Every time I track cells, the last frame is somehow missing from all tracks. Here is an example:
Let there be one object in 7 frames (t0 - 6):
x = np.array([200, 201, 202, 203, 204, 207, 208])
y = np.array([503, 507, 499, 500, 510, 515, 518])
t = np.array([0, 1, 2, 3, 4, 5, 6])
z = np.zeros(x.shape)
objects = {'x': x,
           'y': y,
           'z': z,
           't': t}
objects = btrack.dataio.objects_from_dict(objects)
I track this object:
with btrack.BayesianTracker() as tracker:
    tracker.configure_from_file('tracker_config.json')
    tracker.max_search_radius = 100
    tracker.append(objects)
    tracker.frame_range = (0, 48)

    # set the volume (Z axis volume is set very large for 2D data)
    tracker.volume = ((0, 2048), (0, 2048), (-1e5, 1e5))

    # track them (in interactive mode)
    tracker.track()  # (step_size=100)

    # generate hypotheses and run the global optimizer
    tracker.optimize()

    tracks = tracker.tracks
The output is:
tracks[0]
ID | t | x | y | z | parent | root | state | generation | dummy |
---|---|---|---|---|---|---|---|---|---|
1 | 0 | 200.0 | 503.0 | 0.0 | 1 | 1 | 5 | 0 | False |
1 | 1 | 201.0 | 507.0 | 0.0 | 1 | 1 | 5 | 0 | False |
1 | 2 | 202.0 | 499.0 | 0.0 | 1 | 1 | 5 | 0 | False |
1 | 3 | 203.0 | 500.0 | 0.0 | 1 | 1 | 5 | 0 | False |
1 | 4 | 204.0 | 510.0 | 0.0 | 1 | 1 | 5 | 0 | False |
1 | 5 | 207.0 | 515.0 | 0.0 | 1 | 1 | 5 | 0 | False |
My expectation was that the last frame (t=6) would also be in the track. This behavior occurs every time I use btrack.
Is this behavior intentional, and if so, what is the reason?
Is there any way to also include the last frame in the tracks?
Thank you!
PS: I use the example configuration file.
Hello! Thank you for this great tracker.
I have a question about the segmentation this tracker accepts. According to the example you gave and what I have tried, it seems I have to put all the instance masks in one layer and pass it to the tracker. However, if there is overlap between two cell instances, the tracker will consider them the same cell. Does this mean I have to remove all overlaps before passing the masks to the tracker? Separation of cell instances is really important to me.
Thank you in advance!
Implement a GUI function to convert a napari Labels layer into a set of btrack objects, using this utility function:
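The core of such a conversion can be sketched with numpy alone, leaving aside the napari GUI wiring. This is an illustrative sketch, assuming a (T, Y, X) integer label stack and objects represented as plain dicts (the real utility in btrack may differ):

```python
import numpy as np

def labels_to_objects(labels: np.ndarray) -> list:
    """Convert a (T, Y, X) integer label stack into a list of dicts
    with t, x, y centroids, one per labelled region per frame.

    Sketch only: computes centroids as the mean pixel position of
    each nonzero label in each frame.
    """
    objects = []
    for t, frame in enumerate(labels):
        for lbl in np.unique(frame):
            if lbl == 0:
                continue  # 0 is background
            ys, xs = np.nonzero(frame == lbl)
            objects.append({'t': t, 'x': xs.mean(), 'y': ys.mean(), 'z': 0.0})
    return objects
```

The resulting dicts could then be handed to a function like btrack.dataio.objects_from_dict (used elsewhere on this page) to build the tracker's objects.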
More information about setting up this tracking model would be really useful. Specifically, I'd like to know how the system deals with data that doesn't come with equally spaced time steps (is dt=1 always?), or how one defines a variable transition matrix to model this. Is that possible with this code?
Regards,
BillyPeanut
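In general Kalman-filter terms (independent of btrack's config format, which takes a fixed matrix), a constant-velocity transition matrix can be made a function of the time step. A generic sketch for a 2D state [x, y, vx, vy]:

```python
import numpy as np

def constant_velocity_F(dt: float) -> np.ndarray:
    """Transition matrix for state [x, y, vx, vy] under a
    constant-velocity motion model with time step dt.

    Position advances by velocity * dt; velocity is unchanged.
    """
    return np.array([
        [1.0, 0.0,  dt, 0.0],
        [0.0, 1.0, 0.0,  dt],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ])
```

State propagation is then `x_next = constant_velocity_F(dt) @ x`, so unequally spaced frames would each use their own dt; whether btrack's C++ core supports this is exactly the question above.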
The recently implemented segmentation_to_objects function (found here) doesn't store the mask labels, e.g. individual cell states, which means these can't be used as additional input for the tracking step.
setup.py does not appear in the BayesianTracker folder after git clone.
$ git clone https://github.com/quantumjot/BayesianTracker.git
$ cd BayesianTracker
$ python setup.py install
(base) C:\Users\test\BayesianTracker>python setup.py install
python: can't open file 'setup.py': [Errno 2] No such file or directory
I'm trying to track about 5000 peaks (2D) per frame for 10 frames. Using trackpy it takes about 10-15 seconds to run, but btrack is still running after 20 minutes. I've tried reducing theta_time and theta_dist in the model ("cellmodel.json") and that doesn't seem to have an effect. What optimization parameters can I change to speed this up? Additionally, I'm running on a 20-core server. Can I take advantage of multiprocessing without writing my own implementation?
Hi. First of all, thanks for writing and maintaining this tool. Disclaimer - I'm completely new to programming so please excuse any blatant errors, inaccuracies and misconceptions I might make writing this. I'll try to describe my issue as straightforwardly as possible:
I'm trying to use btrack for particle tracking and assessment of colocalization. For that, I have written a script that simulates random walks of an arbitrary number of particles of 2 types. When particles of different types are within a certain radius of each other, they may associate and travel along a common trajectory until they dissociate stochastically. I need to run btrack on the simulated trajectories and have it recognize the stretches where particles are colocalized (assign the points belonging to those stretches a different state). From what I read, to do that I need to configure an object model.
I tried doing so simply by filling in the OrderedDict as shown in btypes.py:
"ObjectModel": {
    "name": "coloc_model",
    "states": 3,
    "emission": {
        "matrix": [1, 0, 0,
                   0, 1, 0,
                   0, 0, 1]
    },
    "transition": {
        "matrix": [1, 0, 0,
                   0, 1, 0,
                   0, 0, 1]
    },
    "start": {
        "matrix": [1, 0, 0,
                   0, 1, 0,
                   0, 0, 1]
    }
},
The size of the matrices seems to be correct (no idea about the values), but then I got an error in core.py:
AttributeError: function 'model' not found
I found that this function was to be called from libtracker.dll (please excuse me if I'm using the wrong term or misunderstanding the situation) with the OrderedDict elements passed on to it. I opened the DLL with Dependency Walker to see what's inside and found that the name "model" was not there, although there were some references to an "ObjectModel". By analogy, the function "motion", which seems to be the one handling the motion model, was there, and therefore the arguments from the MotionModel OrderedDict were successfully passed to it. Am I correct to assume that the function needs to be included in the DLL for the ObjectModel parameters to be used?
Is there a way to enable the configuring of the object model and how would that work with particles? I'd be grateful for any answers.
Hello, I am building a tool to import cell tracking files, process them in Python and visualize them in Blender. I am having some issues decoding the h5 files exported by btrack.
Following the file structure from dataio.py, I understand that the tracks 'map' array is a (K x 2) array of indexes to the 'tracks' array, and that using these indices in the 'tracks' array allows us to extract the relevant indices for the 'coords' array.
Where I am a bit lost is why there are negative integers in the 'tracks' array, and how they relate to the size of the 'dummies' dataset.
For every dataset queried:
abs(np.min(tracks)) == dummies.shape[0]
For example, a sample slice of 'tracks' using 'map' indices
looks like this:
[ 1545 2335 3635 4695 5727 6774 7787 8851 9906 10983 -1252 12986 14045 15066 16124 17145 18194 19229 20264 21266 22015 -1253 24251 25261 26018 27052 28443 29478 30546 31581 32653 33688 -1254 35627 36369 37463 38801 39846 40875 41946 42986 44003]
What is strange is how the pattern of negative values is such that they are decreasing by 1 each time, until they reach a minimum value equal to the number of rows in the 'dummies' array.
Using these indices to slice the 'coords' array leads to discontinuities of the otherwise temporally continuous slices of the dataset.
For example, here is the corresponding slice from 'coords' from the tracks indices above:
[[ 1. 691.25354 305.169 56.560562 0. ]
[ 2. 690.2381 306.49524 50.862858 0. ]
[ 3. 679.69147 315.4468 53.95213 0. ]
[ 4. 671.06665 317.68332 55.6025 0. ]
[ 5. 658.7258 314.58566 60.326637 0. ]
[ 6. 654.0179 307.42856 60.498215 0. ]
[ 7. 657.6908 306.5987 59.42171 0. ]
[ 8. 656.00934 306.84113 60.632942 0. ]
[ 9. 656.31055 311.1738 60.31111 0. ]
[ 10. 656.6334 308.63986 60.369453 0. ]
[237. 613.4513 564.4992 57.711514 0. ]
[ 12. 658.06006 307.46738 58.93825 0. ]
[ 13. 663.02563 307.453 59.71154 0. ]
[ 14. 662.3555 308.1111 59.263332 0. ]
[ 15. 661.60986 308.29596 59.485428 0. ]
[ 16. 661.6381 307.41904 59.175713 0. ]
[ 17. 663.1128 309.82706 59.609776 0. ]
[ 18. 658.50543 306.27716 60.4125 0. ]
[ 19. 655.3111 304.77777 58.956665 0. ]
[ 20. 655.0373 306.6087 59.807144 0. ]
[ 21. 654.80743 306.40994 60.79286 0. ]
[237. 402.58563 561.1381 57.830387 0. ]
[ 23. 655.58093 307.2614 59.981327 0. ]
[ 24. 662.2381 307.25714 60.95 0. ]
[ 25. 654.6695 309.15967 56.97815 0. ]
[ 26. 655.3333 310.1839 59.918964 0. ]
[ 27. 661.25793 312.62442 59.680317 0. ]
[ 28. 659.13513 313.27927 60.483784 0. ]
[ 29. 657.9659 310.66476 60.061363 0. ]
[ 30. 657.5599 307.24295 59.415318 0. ]
[ 31. 658.72034 307.2523 59.174316 0. ]
[ 32. 658.81067 305.4852 60.181065 0. ]
[237. 107.055176 557.10345 60.258415 0. ]
[ 34. 657.86707 306.13925 60.942722 0. ]
[ 35. 657.526 305.88962 60.46461 0. ]
[ 36. 657.0847 307.0904 60.579662 0. ]
[ 37. 658.86957 305.21738 60.6 0. ]
[ 38. 658.6667 307.30362 58.786633 0. ]
[ 39. 658.77563 307.19232 59.037018 0. ]
[ 40. 659.3985 304.21054 59.661655 0. ]
[ 41. 658.49854 306.18552 58.82 0. ]
[ 42. 659.33496 306.68216 57.882397 0. ]]
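One plausible decoding, based purely on the observation above that abs(np.min(tracks)) == dummies.shape[0] (an assumption about the file format, not confirmed behaviour): a negative index -n refers to row n - 1 of the 'dummies' array, while non-negative indices index 'coords' as usual.

```python
import numpy as np

def resolve_track_rows(track_indices, coords, dummies):
    """Resolve a track's index list against the 'coords' and 'dummies'
    arrays from a btrack h5 file.

    Assumption (not confirmed): a negative index -n refers to row
    n - 1 of dummies; non-negative indices index coords directly.
    """
    rows = []
    for idx in track_indices:
        if idx < 0:
            rows.append(dummies[-idx - 1])  # dummy object inserted by tracker
        else:
            rows.append(coords[idx])        # real detected object
    return np.array(rows)
```

If this guess is right, the discontinuous rows in the coords slice above are simply the wrong rows being read for the negative indices, and should instead come from the dummies dataset.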
Thanks in advance for any advice you can provide,
Hello,
First of all, thank you for your work on this very useful library.
I would like to have more details about the hyperparameters of the algorithm (especially of the Hypothesis model) and how I need to set them in the config file for my specific data. I work on cell tracking, and typically have several hundred images and around a hundred cells within each image. For now, I've only used the algorithm with the default json config file given in the tutorial, and this works fine as long as the number of images remains fairly low (~100). When I go a bit beyond that (~300 images), the "Optimizing" step never ends. This must be due to a poor optimizer configuration, as underlined in Issue #13.
I understand that I should change the Hypothesis Model hyperparameters in the config file, but the problem is that I couldn't find any detailed information about what each hyperparameter is for and how it should be set for given data (not enough detail in the wiki or the referenced papers). Did I miss something, or would it be possible to have more information about this?
Thank you!
It would be great to add a convenience function that converts a numpy array (or Pandas dataframe) of localizations to a list of btrack objects in the utils library:
def convert_localizations_to_objects(data: np.ndarray) -> List[PyTrackObject]:
# code to build objects using ObjectFactory
return objects
It could also be used by the CSV importer:
https://github.com/quantumjot/BayesianTracker/blob/dee211aaa7bbb283046a6539bd2b033adf6c31ee/btrack/dataio.py#L94
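The conversion step itself can be sketched with numpy alone. This is an illustrative sketch, assuming a column order of [t, x, y, z] (an assumption, not a documented layout); the real version would hand the resulting dicts to the ObjectFactory or to a function like btrack.dataio.objects_from_dict:

```python
import numpy as np

def convert_localizations(data: np.ndarray) -> list:
    """Convert an (N, 4) array of localizations, assumed column order
    [t, x, y, z], into per-object dicts ready for an object factory.

    Sketch only; the actual btrack utility may use a different layout.
    """
    keys = ('t', 'x', 'y', 'z')
    return [dict(zip(keys, row)) for row in np.asarray(data, dtype=float)]
```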
Are all of the entries for a JSON object required? If I'm only interested in spatial tracking (not lineages or other states), do I need to include "label", "states", "probability", or "time"?
{
    "Object_203622": {
        "x": 554.29737483861709,
        "y": 1199.362071438818,
        "z": 0.0,
        "t": 862,
        "label": "interphase",
        "states": 5,
        "probability": [
            0.996992826461792,
            0.0021888131741434336,
            0.0006106126820668578,
            0.000165432647918351,
            4.232166247675195e-05
        ],
        "dummy": false
    }
}
First of all, thank you for this great resource! It has been a great help in single-cell tracking of my cells.
As the cells in my videos behave quite differently from those this software was initially designed for (as shown in the videos), I have had to tune values in the cell_config file. To do this, I have inferred the roles of parameters (such as 'lambda_dist') by tweaking them and observing changes in the tracks in napari. However, as I would like to use this analysis in a near-future publication, it would be great if I could be sure of what I have actually done (perhaps I have tweaked values that I should not have?).
It would be great to have a brief rundown of what each parameter does as well as what would be expected if each parameter was increased/decreased.
Again, thanks for the great resource!
Would it be useful for general users to also be able to save the raw data, not only the segmentation masks?
e.g. This would allow the user to crop the raw intensity patches belonging to individual tracks and visualize them as a montage.
It would be nearly identical to the segmentation property and the write_segmentation method in the HDF5FileHandler, so it should be easy to implement.
This may be due to the missing setup.py, however.
>>> import btrack
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\g\test\lib\btrack\__init__.py", line 3, in <module>
from core import *
File "C:\g\test\lib\btrack\core.py", line 602
self.__motion_model = None
^
TabError: inconsistent use of tabs and spaces in indentation
Hi, I'm trying to use btrack with spyder 5.0.0. When I call
with btrack as tracker:
<...>
tracker.cleanup()
<...>
I get
AttributeError: 'BayesianTracker' object has no attribute 'cleanup'
even though Kite displays it in the documentation right above. Could anyone give me an idea of what I should check? Thanks.