
Comments (14)

william-silversmith avatar william-silversmith commented on September 17, 2024

Hi Albert,

There are two possible reasons for this. Either there is a hole in the segmentation, or you tripped on a bug (?) I've encountered where, if the scale factor is too high, it sometimes seems to cause problems (other times not?). Try reducing the scale factor to 4 and let me know if that helps. Usually when I see this issue, it is not easily reproducible and disappears, e.g. when I do a fresh install or something.
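
For concreteness, here is a minimal sketch of where that parameter lives (assuming a small labeled numpy volume; the file name and other values are illustrative, not your actual setup):

import numpy as np
import kimimaro

labels = np.load('labels.npy')  # hypothetical small segmentation volume

skels = kimimaro.skeletonize(
    labels,
    teasar_params={
        'scale': 4,   # the invalidation ball radius scales with this factor
        'const': 50,  # physical units added to the invalidation radius
    },
    anisotropy=(32, 32, 40),  # nm per voxel; adjust to your layer
)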

Will


albertplus007 avatar albertplus007 commented on September 17, 2024

Actually, I have tried scale=2, scale=3, and scale=4, but the result is still broken.
I am just using the Google segmentation to extract skeletons; the label I use here is 1099405435.
So you mean I should reinstall igneous?


william-silversmith avatar william-silversmith commented on September 17, 2024

What resolution level are you running the skeletons against? Sometimes a sufficiently high downsample can introduce breaks in the segmentation. How are you running Igneous? Is it against a local copy?


albertplus007 avatar albertplus007 commented on September 17, 2024

I use the 256x256x320nm resolution; does the low resolution influence the result?
I downloaded the segmentation to a local file and use that local copy to extract the skeleton.
Here is my code:

import igneous.task_creation as tc
from cloudvolume.lib import Vec
from taskqueue import MockTaskQueue

cloudpath1 = 'file:///mnt/d/braindata/google_segmentation/google_256.0x256.0x320.0/'
mip = 0

# First Pass: Generate Skeletons
tasks1 = tc.create_skeletonizing_tasks(
    cloudpath1,
    mip, # which resolution to skeletonize at (near isotropic is often good)
    shape=Vec(512, 512, 512), # size of individual skeletonizing tasks (need not be chunk aligned)
    sharded=False, # (True) concatenated file (False) single skeleton fragment files
    spatial_index=False, # generate a spatial index so skeletons can be queried by bounding box
    info=None, # provide a CloudVolume info file if necessary (usually not)
    fill_missing=True, # use zeros if part of the image is missing instead of raising an error
    # see Kimimaro's documentation for the below parameters
    teasar_params={
        'scale': 4,
        'const': 20, # physical units
        'pdrf_exponent': 4,
        'pdrf_scale': 100000,
        'soma_detection_threshold': 1100, # physical units
        'soma_acceptance_threshold': 3500, # physical units
        'soma_invalidation_scale': 1.0,
        'soma_invalidation_const': 300, # physical units
        'max_paths': None, # default None
    },
    object_ids=[1099405435], # only skeletonize these ids
    mask_ids=None, # mask out these ids
    fix_branching=True, # (True) higher quality branches at speed cost
    fix_borders=True, # (True) enable easy stitching of 1 voxel overlapping tasks
    dust_threshold=0, # don't skeletonize objects below this physical size
    progress=False, # show a progress bar
    parallel=1, # number of parallel processes to use (more useful locally)
)
tq = MockTaskQueue()
tq.insert_all(tasks1)


william-silversmith avatar william-silversmith commented on September 17, 2024

256nm resolution is far lower than my typical usage and is highly likely to be fragmented, producing the holes you are witnessing. Ordinarily, I run skeletonization at 32x32x40nm resolution. It can be run at 64x64x40nm, but that causes the skeletons to start snaking along the sides of labels as they become very thin.

That's kind of a big computational leap for you guys, so maybe at least try 128x128x160nm first?
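
If it helps, a quick sketch for checking which mip levels and resolutions your local layer actually provides (reusing the cloudpath1 from your script):

from cloudvolume import CloudVolume

cloudpath1 = 'file:///mnt/d/braindata/google_segmentation/google_256.0x256.0x320.0/'
cv = CloudVolume(cloudpath1)
for mip, scale in enumerate(cv.scales):
    print(mip, scale['resolution'])  # e.g. 0 -> [256, 256, 320] nm per voxel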


william-silversmith avatar william-silversmith commented on September 17, 2024

Also, you can try using the LocalTaskQueue to execute jobs in parallel, or the new FileQueue protocol to have multiple worker processes attack a large job.

from taskqueue import TaskQueue

# Task creation process
tq = TaskQueue('fq:///mnt/d/braindata/queue') # for example
tq.insert(tasks)

# worker processes
tq = TaskQueue('fq:///mnt/d/braindata/queue') # for example
tq.poll(verbose=True, tally=True)
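
And a minimal sketch of the LocalTaskQueue route (assuming the tasks1 variable from your script; parallel=8 is just an example):

from taskqueue import LocalTaskQueue

tq = LocalTaskQueue(parallel=8) # number of worker processes
tq.insert(tasks1)
tq.execute()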


albertplus007 avatar albertplus007 commented on September 17, 2024

Now I am running the skeletonization task at 64x64x80nm resolution, with the same code and the same label as above, but the result seems broken again; see the figure below:
[figure: rendered skeleton with gaps]
It seems better than at the lower resolution, but it is still broken in some regions. I am also trying the 32x32x40nm resolution, but it has not finished yet.

Did something go wrong during the fusion? I set dust_threshold=2 and tick_threshold=0, both very small, to get a complete result.


william-silversmith avatar william-silversmith commented on September 17, 2024

I think the gaps are real. neuroglancer link

[image: neuroglancer view of the segmentation gaps]


william-silversmith avatar william-silversmith commented on September 17, 2024

You can try fixing them with kimimaro.join_close_components, but that technique is just a heuristic and you need to make sure it is doing something sensible.

https://github.com/seung-lab/kimimaro/
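
A minimal sketch of that call, following the kimimaro README (the radius value is illustrative and in physical units):

import kimimaro

# skels: a list of Skeleton objects belonging to the same label
joined = kimimaro.join_close_components(skels, radius=1500) # join components within 1500 units
joined = kimimaro.join_close_components(skels, radius=None)  # no distance threshold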


albertplus007 avatar albertplus007 commented on September 17, 2024

Thanks for the tip, I forgot to check the label in neuroglancer.
If the segmentation itself is broken, the resulting skeleton is indeed broken too.
Did you use kimimaro.join_close_components to put them together? It seems that the radius needs to be tuned very precisely.


william-silversmith avatar william-silversmith commented on September 17, 2024

I didn't do the skeletonization for the Google segmentation. They used a derivative of another library. It looks like they joined the skeletons in post processing though, as the skeletons terminate at the furthest point of the broken pieces, which is characteristic of TEASAR. I don't know what method they used, though they may have put it in the methods section of a paper somewhere.


albertplus007 avatar albertplus007 commented on September 17, 2024

Thanks a lot.
Another question: I extracted a skeleton from the 64x64x80nm layer with mip = 0, and then from the 32x32x40nm layer with mip = 1 (I just want a 2x downsampling, so that the working resolution of that dataset matches 64x64x80nm).
What is the difference between the two methods? Will the result for the same label be the same, or is there something special about it?


william-silversmith avatar william-silversmith commented on September 17, 2024

Assuming everything is configured correctly, there should be no or minor differences between the two skeletons. The only reason for a difference would be if different downsampling methods were used to achieve the lower resolution layer (e.g. 2x2x2 striding vs 2x2x2 mode). The mip level determines how CloudVolume downloads an image, but once the image is in memory Kimimaro takes over and is agnostic to where the image came from. If there are minor differences in the bounding box of the layers, that could cause some differences too.
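
To illustrate the point about downsampling methods, a hedged sketch of the two 2x2x2 schemes (assuming the tinybrain library, which the Seung Lab stack commonly uses for this; the toy volume is illustrative):

import numpy as np
import tinybrain

labels = np.random.randint(0, 5, size=(64, 64, 64), dtype=np.uint32) # toy segmentation
strided = labels[::2, ::2, ::2] # 2x2x2 striding: keeps one corner voxel per block
moded = tinybrain.downsample_segmentation(labels, factor=(2, 2, 2))[0] # 2x2x2 mode (COUNTLESS)
# The two results can disagree on thin processes, which is one way skeletons
# computed at mip 1 vs. a natively coarser layer could end up differing.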


william-silversmith avatar william-silversmith commented on September 17, 2024

Closing this question due to inactivity. Please reopen or open a new issue if you need help! ^_^

