
pmeal / porespy

A set of tools for characterizing and analyzing 3D images of porous materials.

Home Page: https://porespy.org

License: MIT License

Language: Python 100.00%
Topics: python scientific-visualization image-analysis voxel-generator tomography porous-media porous-materials porespy 3d-images

porespy's Introduction

PMEAL Website

Porous Materials Engineering and Analysis Lab Website

This repository hosts the homepage of the PMEAL group. We used to run WordPress on an Amazon AWS instance, but migrated the site to GitHub to relieve ourselves of the annoyance of maintaining a web server.

This site is created using Quarto. It's quite a powerful and handy tool, and probably worth writing a blog post about someday.

porespy's People

Contributors

actions-user, anthero1, arfon, aryamoghaddam, bryanwweber, chahataggarwal, ecederstrand, github-actions[bot], hfathian, jamesobutler, jgostick, jihyeoh5, ma-sadeghi, madeline-am, magnaou, mattrlam, mdrkok, mkaguer, ni2m, nik23130, pagsalab, pascalruzzante, rickyfann, rodericday, sreeyuth, tomtranter, vasudevanv, xu-kai-xu, zohaib-atiq


porespy's Issues

Making snow and snow_dual consistent

The 'snow' algorithm as originally published was basically a set of filtering steps that led to a proper watershed segmentation of the void space. Extracting a network from the partitioned image was a separate step.

The new snow_dual (and eventually snow_nphase) is designed such that you pass in a binary image and get a fully extracted network back, with no intermediate steps required of the user. So at the moment the two are inconsistent. It is necessary for snow_dual to return a network, since it must do several post-processing steps to label the different phases correctly. In other words, that function can't change, so the regular snow must change.

I propose the following changes to clean things up (a rough sketch of the resulting call structure follows this list):

  • Move the filters (trim_saddle_points, trim_nearby_peaks, etc.) to the filters module.
  • Change the current snow function to accept a boolean image and also return a network.
  • Add a new filter to the filters module called snow_partitioning that applies the original steps of snow to a binary image and returns a partitioned image. Both snow and snow_dual would call this. The naming implies that network_extraction.snow and network_extraction.snow_dual both use the snow_partitioning algorithm to do an extraction, so it keeps the original naming of the snow algorithm in spirit. Also, this new function should return only a partitioned image, not a tuple of lots of things like it does now.
  • Add a new function to network_extraction called add_boundary_cells that pads a partitioned image with an extra layer of nodes for the subsequent extraction step. Both snow and snow_dual will call this.
  • Eventually I'd like to take the code from the extract_pore_network function and move it into our custom region_props_3D, then remove extract_pore_network. For now I think we should hide this function, because its name suggests it does more than it actually does. It only accepts partitioned images, so it is basically just part of the snow and snow_dual workflows.
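To make the proposed wiring concrete, here is a rough sketch of the resulting call structure. All names follow the list above and are hypothetical at this point, not the current API:

import porespy as ps

def snow(im, voxel_size=1):
    # Proposed flow: partition, pad with boundary nodes, then extract
    regions = ps.filters.snow_partitioning(im)              # partitioned image only
    regions = ps.network_extraction.add_boundary_cells(regions)
    return ps.network_extraction.extract_pore_network(regions, voxel_size)

snow_dual would follow the same shape, swapping in its own post-processing of the phases.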

Add Tests...lots of tests!

I added Codecov last night and less than 50% of the code is tested. We can surely get this to 100% given the narrow focus of the code base.

Here is how to add a test:

  • There are files in the test/unit folder for each module. To add a test you just add a new function to the file that follows the same naming convention as the existing tests.
    • All function names start with test, followed by the functionality being tested (like apply_chords), and end with some indication of the test scope (like spacing), giving test_apply_chords_with_spacing.
  • It is good practice to write small tests that only test one thing, so when a test fails you know exactly why.
  • Tests should be fast, so use small images, and 2D ones where possible.
  • A typical test will run some function or filter on an image, then check that the output has the expected properties. This means the image should be the same each time the test is run, which can be accomplished by setting Scipy's random number generator state to a known value before generating an image (like blobs). See the sketch after this list.
  • If you're going to use an image in multiple tests, you can create it in the setup_class function, which gets called before all test functions, and attach the image to the object (e.g. self.im = test_image). This saves regenerating it each time.
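A minimal sketch of such a test, assuming the current generators.blobs and filters.apply_chords signatures:

import numpy as np
import porespy as ps

class TestFilters:
    def setup_class(self):
        np.random.seed(0)                 # fix the RNG state so blobs is repeatable
        self.im = ps.generators.blobs(shape=[50, 50])

    def test_apply_chords_with_spacing(self):
        chords = ps.filters.apply_chords(self.im, spacing=3)
        assert np.all(self.im[chords])    # every chord voxel lies in the void space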

Add type annotation to the code

Tom and I are working on making a PoreSpy plugin for Dragonfly, and data types are a bit difficult to deal with in Qt. It would help a lot if methods were type-annotated. Native type annotations are supported in Python 3.5+, so that should not be a problem. Here's how they work:

from typing import get_type_hints


def f(x: int, y: float):
    return x + y


>>> get_type_hints(f)
{'x': int, 'y': float}

The good thing is that type annotation doesn't break any code as it's not enforced and is only used as a hint.

Rounding issue in local_thickness

Hi,
I recently bumped into porespy which has exactly the bunch of features I was looking for.

Q0 (added at the end): why use scipy for array manipulations where numpy would be the logical package to use? Sure, this adds another import, requirements, etc., but it would definitely be cleaner.

This includes the local_thickness function (I don't understand why on earth you would want to remove it from the package???).

One thing I find odd is the following line:
sizes = sp.unique(sp.around(dt, decimals=0))


Q1: This basically restricts pore radii to whole pixels, and hence diameters to even pixel counts. What is the point of doing this?
I.e. the output gives radii of 0, 1, 2, 3... => diameters of 0, 2, 4, 6..., but shouldn't diameters of 3, 5, 7... pixels also be allowed? What am I missing here?

If the idea is to have integer numbers of pixels, then I think the line should rather be:
sizes = sp.unique(np.round(dt*2.)/2.)
which gives half-integers as radii and thus whole pixel numbers as diameters (note: np is import numpy as np).


Q2: But then, this somehow results in a strange value distribution, as e.g. 2*2**.5, 3 and 10**.5 (three consecutive Euclidean distances, i.e. from offsets (2,2), (3,0) and (3,1)) would all collapse to a value of 3.

So why not simply round to 2 digits to have more realistic distances, and let the user decide what to do with these values? I.e.:
sizes = sp.unique(np.round(dt, decimals=2))
BTW: this gives a result which is closer to the local_thickness calculation of ImageJ (FIJI) and BoneJ (albeit smaller by a factor of 2, since they report the diameter as the 'thickness' value).
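A quick numerical check of the three schemes on exactly those distances:

import numpy as np

d = np.array([2 * 2**.5, 3., 10**.5])   # [2.828..., 3.0, 3.162...]
print(np.around(d, decimals=0))          # [3. 3. 3.]     -> all collapse to 3
print(np.round(d * 2.) / 2.)             # [3. 3. 3.]     -> still collapse to 3
print(np.round(d, decimals=2))           # [2.83 3. 3.16] -> stay distinct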


SUGGESTION: A simple, effortless workaround would be to add a keyword exact_pixels=False to the function and then replace the above line with:
if exact_pixels:
    sizes = sp.unique(np.round(dt * 2.) / 2.)
else:
    sizes = sp.unique(np.round(dt, decimals=2))

At least, this is exactly what I will apply as a patch to this module if it doesn't change...

Cheers,
Aurélien

delete boundary_pores branch?

@Zohaib-Atiq has implemented a way to add boundary regions to the image so that the extraction algorithms create boundary pores. @TomTranter has a branch open that seems to do this also. I think you two worked together on this, so I just want to confirm that the code on @TomTranter's branch is not needed and that it can be deleted.

Add unit tests

Our test coverage is too sparse, since we've added quite a few functions lately without tests. This should be addressed before submitting to JOSS.

trim border on distance transform

On the chord length functions we offer the option to remove chords that touch the border, since they are artifacts that are too short. The same problem occurs in the distance transform, but there the values are too large. For instance, there may be a piece of solid lurking just outside the image, but the dt values wouldn't reflect this and would be artificially large. When passing the dt to the radial density function, these edge values should be removed so as not to skew the results.
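A possible sketch of such a trim, assuming plain numpy/scipy (none of this is porespy's API): a dt value can only be trusted where it is smaller than the voxel's distance to the image border, since any larger value could be undercut by solid hiding just outside.

import numpy as np
from scipy.ndimage import distance_transform_edt

np.random.seed(0)
im = np.random.rand(200, 200) > 0.4       # stand-in binary image (True = void)
dt = distance_transform_edt(im)

# Distance of every voxel to the nearest image edge: pad a ones-image
# with a zero border, take the edt, then crop the padding off again
border_dist = distance_transform_edt(np.pad(np.ones_like(im), 1))[1:-1, 1:-1]

reliable = dt <= border_dist              # nearest in-image solid is closer
values = dt[im & reliable]                # than anything outside could be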

Add handling of anisotropic voxel_size

Tomography data often has lower resolution in the stack direction than in-plane. This is not handled in network extraction, so volumes and distances are going to be underestimated. I'm not sure it would be a straightforward fix. Is there also an anisotropic distance transform out there?
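For the distance transform at least, scipy already handles anisotropic spacing via the sampling argument, so that piece may already exist:

import numpy as np
from scipy.ndimage import distance_transform_edt

im = np.ones((20, 100, 100), dtype=bool)   # stand-in stack: 20 coarse slices
im[10, 50, 50] = False                     # one solid voxel
# sampling gives the voxel size per axis, e.g. 2 um slices, 1 um in-plane
dt = distance_transform_edt(im, sampling=[2.0, 1.0, 1.0])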

add a helpers module?

This would take a binary image and output some 'results', like a chord length distribution, a pore size histogram, etc. Totally just for convenience.
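A hypothetical sketch of such a helper; the metric names here are assumptions about the current API, not a spec:

import porespy as ps

def results(im):
    # Run the common analyses on a binary image and collect them in one dict
    chords = ps.filters.apply_chords(im)
    return {
        'porosity': ps.metrics.porosity(im),
        'chord_length_distribution': ps.metrics.chord_length_distribution(chords),
    }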

Messed Up PR

Hey @jgostick,

I forgot to make my changes on a new branch, but created a new one when pushing from Sourcetree, and it seemed to update master too. I reversed the commit to master, but now the PR from the new branch doesn't seem to work - it wants to reverse the commit for some reason.
Hopefully you can sort out my mess :-)

Guess we could just reverse the reversal and delete the branch

fix readthedocs build

Readthedocs is not working. It needs to be set up so that certain modules can be installed during the build, specifically numpydoc. This requires enabling virtualenv in the admin section, but that means all the requirements of PoreSpy need to be installed into the virtualenv as well. That in turn requires specifying a conda requirements file, which is where things go off the rails.

The solution requires adding a 'readthedocs.yml' file to the project and specifying the required items inside it. Anyone with knowledge of this system, or the willingness to follow the instructions here, would be appreciated.

rip out evtk

The evtk code is nearly 25% of our code base! It has to be ripped out as much as possible. It is not tested at all and we don't really know how to maintain it. Surely there is a better option by now for exporting to VTK? This is high priority leading up to JOSS.

Boundary pores on extracted network

This issue has come up for both @Zohaib-Atiq and myself. I was talking with @jgostick yesterday about it, and we figured the network extraction could apply boundary pores rather than doing it in OpenPNM, since find_surface_pores doesn't seem to give satisfactory results when pores are not planar. Attached is a script that does it. I also had to change a line in the porespy code to ignore pores that are not in the image after setting solid regions to zero. We could officially support this in porespy; however, the other objects returned by snow will no longer be of use, since the image is now a different size (padded by 1 voxel on each face). This could be an option for snow, an option for extract_pore_network, or a separate function.
apply_boundary_pores_to_regions.txt

SNOW produces an error after boundary_faces specification.

The new implementation of SNOW produces an error after boundary_faces is specified. It's because regions and image have different shapes, so we cannot multiply regions*im. The scheme works without boundary cells because the regions are not padded, so their shape remains the same as the image's.
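A minimal reproduction of the mismatch, with one possible fix (shapes illustrative; padding the image to match is just one option):

import numpy as np

im = np.ones((100, 100, 100), dtype=bool)
regions = np.zeros((102, 102, 102), dtype=int)   # padded by one voxel per face

# regions * im  ->  ValueError: operands could not be broadcast together

im_padded = np.pad(im, 1, mode='edge')           # pad the image the same way
masked = regions * im_padded                     # now the shapes agree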

Calculate fractal properties

It's relatively easy to calculate the fractal dimension of an image: just count the number of 1's and 0's in boxes of different sizes. This could be done with a convolution-style filter in about 3 lines!
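A minimal sketch of the box-counting idea, shown in 2D and using block pooling rather than an explicit convolution:

import numpy as np

def fractal_dimension(im, sizes=(1, 2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then ask which s-by-s boxes
        # contain at least one foreground pixel
        t = im[:im.shape[0] // s * s, :im.shape[1] // s * s]
        boxes = t.reshape(t.shape[0] // s, s, t.shape[1] // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # The fractal dimension is the negative slope of log(N) vs log(s)
    return -np.polyfit(np.log(sizes), np.log(counts), 1)[0]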

There are also lacunarity and succolarity. These latter two are similar to the fractal dimension, but computed on different versions of the base image.

it would be handy to have a slices function

This would return three images taken from the middle of each axis, so for a 100-cubed image, slices[0] would be im[50, :, :], slices[1] would be im[:, 50, :] and slices[2] would be im[:, :, 50]. This is very trivial but very handy, since you wouldn't have to hard-code it into scripts any more, and it would thus be flexible for image size and shape.
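A minimal sketch:

import numpy as np

def slices(im):
    # Return the three mid-planes of a 3D image
    x, y, z = np.array(im.shape) // 2
    return [im[x, :, :], im[:, y, :], im[:, :, z]]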

Add chord_length_density

This would be a probability analysis based on the frequency of voxels in the image that have a certain chord value.
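A hypothetical sketch, assuming the input is an image of non-touching chords (e.g. the output of apply_chords):

import numpy as np
from scipy.ndimage import label

def chord_length_density(chords):
    # Each connected component is one chord; its voxel count is its length
    labeled, n = label(chords)
    lengths = np.bincount(labeled.ravel())[1:]     # drop the background bin
    # Weight each chord by its voxel count, so bin L holds the fraction of
    # chord voxels that belong to chords of length L
    hist = np.bincount(lengths, weights=lengths.astype(float))
    return hist / hist.sum()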

Add conduit_lengths to SNOW output

This is needed now to be able to give a network to OpenPNM and have it work 'out of the box', without having to add any geometry models.

Make a 3D version of regionprops

Scikit-image has regionprops, and a small selection of the props are applicable or adaptable to 3D, but most are not. This might actually be the most useful thing we could add to PoreSpy! Stuff like orientation, largest inscribed ellipsoid, etc.
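For reference, a quick check of what scikit-image currently does on 3D labels; only the simple props come through:

import numpy as np
from skimage.measure import label, regionprops

np.random.seed(0)
regions = regionprops(label(np.random.rand(30, 30, 30) > 0.8))
print(regions[0].area, regions[0].centroid)   # voxel count and centroid work in 3D
# ...but 2D-only props such as orientation raise an error on 3D images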

get_border should return indices or mask

At present get_border only returns a mask of 1's along the border. This is fine for numpy operations, but it is a potential performance liability since you have to use the entire mask as an index. It would be handy if it optionally returned the indices of the border elements as well. These could be calculated algorithmically too, instead of making a large image and filling in 1's; there must be some formulae for finding the face, edge and corner indices.
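One hypothetical way to get the face indices directly, without allocating an image-sized mask:

import numpy as np

def face_indices(shape):
    # Enumerate the index tuples lying on each face of the image
    out = []
    for ax, n in enumerate(shape):
        for face in (0, n - 1):
            ranges = [np.arange(m) for m in shape]
            ranges[ax] = np.array([face])
            grid = np.meshgrid(*ranges, indexing='ij')
            out.append(np.stack([g.ravel() for g in grid], axis=1))
    # Edges and corners land on several faces; drop the duplicates
    return np.unique(np.vstack(out), axis=0)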

make_contiguous should handle negative numbers better

If the array has -1's in it, they become the new maximum value by getting wrapped around to the top of the value range. The function should do im = im - im.min() at the very start, to set the smallest value to zero, and/or possibly have an offset argument to indicate what the lowest number should be.
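A sketch of that suggestion (hypothetical, not the current implementation):

import numpy as np

def make_contiguous(im, offset=0):
    # Shift so the smallest value (e.g. -1) maps to zero instead of
    # wrapping around to the top of the range, then renumber contiguously
    im = im - im.min()
    vals = np.unique(im)
    remap = np.zeros(int(vals.max()) + 1, dtype=int)
    remap[vals] = np.arange(len(vals)) + offset
    return remap[im]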

remove binarization

This is so image-dependent, and each case so different, that there is no way to really generalize it.
