
ph5toms memory usage (ph5 issue, 12 comments, closed)

pic-iris commented on August 12, 2024
ph5toms memory usage

Comments (12)

derick-hess commented on August 12, 2024

I am going to test this by modifying the script on my local machine to immediately write out the trace object instead of yielding it, to see how that changes things and to get a better idea of what is going on.

derick-hess commented on August 12, 2024

Okay, got ph5toms fixed. Going to add some more memory cleanup to see if I can get it even lower and make sure it's cleaning up everywhere possible. I'll create a PR in a bit.

The issue was the nested yields. Getting rid of the yield in the cut-station code allowed garbage collection to run after every stream is yielded.
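
A minimal, self-contained sketch of that change (all names here are hypothetical stand-ins, not the actual ph5toms functions):

class Stream:  # stand-in for an ObsPy Stream
    def __init__(self, station):
        self.data = bytearray(10 * 1024 * 1024)  # pretend waveform data

# Before: nested generators. The outer loop holds the inner generator's
# frame open, so everything it references survives the whole iteration.
def cut_stations_nested(stations):
    for station in stations:
        yield station

# After: return a plain list, so only one generator (the one yielding
# streams) is alive at a time and each Stream can be collected as soon
# as the consumer drops it.
def cut_stations(stations):
    return list(stations)

def process(stations):
    for station in cut_stations(stations):
        stream = Stream(station)
        yield stream
        del stream  # drop our reference once the consumer resumes us

for st in process(["1001", "1002", "1003"]):
    pass  # write st out, then let it go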

derick-hess commented on August 12, 2024

Still running some tests. When I run a really big request, say for 10 GB of data, it is clearer that memory usage goes up by the amount of data yielded each time an ObsPy stream is yielded.

I have already added some code to clear out ph5 tables after they are done being used. That helped a little, but they really aren't that big. I'm investigating why the yielded object, or data related to it, isn't being garbage collected automatically after it has been yielded.

At least that is my theory so far. Using mprof and guppy right now.
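
For what it's worth, memory_profiler (the package behind mprof) can also report line-by-line usage; decorating a suspect function is a quick way to localize the growth (the function here is just a placeholder):

from memory_profiler import profile

@profile  # prints a line-by-line memory table when the function runs
def process_request(request):
    data = bytearray(50 * 1024 * 1024)  # placeholder allocation
    return len(data)

process_request("1001")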

nick-falco commented on August 12, 2024

I think you may be on to something. The memory might not be freed until StopIteration is reached. See the following Stack Overflow post:
https://stackoverflow.com/questions/15490127/will-a-python-generator-be-garbage-collected-if-it-will-not-be-used-any-more-but
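
A minimal, self-contained demonstration of that behavior (sizes are arbitrary):

import gc

def blocks():
    for _ in range(3):
        data = bytearray(100 * 1024 * 1024)  # ~100 MB per block
        yield data
        # 'data' is still referenced by this paused frame, so the block
        # survives until the next loop pass rebinds it

gen = blocks()
first = next(gen)
del first     # the caller's reference is gone...
gc.collect()  # ...but the paused frame still holds 'data', so the
              # 100 MB block is NOT freed here
gen.close()   # finalizing the generator releases the frame and the data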

derick-hess commented on August 12, 2024

Yeah, so it looks like while the iteration is in progress, no garbage collection can happen anywhere in the script. I tried clearing out and deleting all the tables as soon as they are unneeded, but mprof shows that the memory actually isn't freed until the end of the script, even if I explicitly ask for it with gc.collect().

derick-hess commented on August 12, 2024

One solution I will look into is multiprocessing, to see if that allows proper garbage collection.
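
A rough sketch of that idea, with hypothetical names standing in for the real ph5toms cut: run each cut in a short-lived worker process so its memory is returned to the OS when the process exits.

from multiprocessing import Process, Queue

def cut_worker(request, out_q):
    data = bytes(64 * 1024 * 1024)  # placeholder for the memory-heavy cut
    out_q.put(len(data))            # send back a small, picklable result

def run_cut(request):
    q = Queue()
    p = Process(target=cut_worker, args=(request, q))
    p.start()
    result = q.get()  # read before join() to avoid a full-pipe deadlock
    p.join()          # worker exit frees everything it allocated
    return result

if __name__ == "__main__":
    print(run_cut("1001"))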

nick-falco commented on August 12, 2024

I also read that you can potentially force garbage collection by running memory-intensive work in a separate thread. Once the thread is finished running, the memory is freed.

Do we know what is using so much memory? It seems like the first step is to figure out exactly what is causing the spike in memory usage before figuring out a fix.

derick-hess commented on August 12, 2024

I believe it is the yielded ObsPy stream not being collected. When I run it on large data sets, it looks like memory jumps by the size of the ObsPy stream every time. A little bit of it also looks like the das_t. I added a line in create_trace to free that memory every time it is done with a das_t, but it currently doesn't do anything since garbage collection is halted.

I also added code to clear all other tables when they are done, but those only amount to < 10 MB of memory in total.
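
A sketch of the kind of explicit cleanup described above (the trace-building step is a placeholder), and of why it is ineffective here:

import gc

def create_trace(das_t, data):
    trace = (das_t[0], bytes(data))  # placeholder for the real trace build
    del das_t     # drop this frame's reference to the table rows
    gc.collect()  # as observed above, this frees nothing while a paused
                  # generator frame elsewhere still references the rows
    return trace

print(create_trace(("das-1", 500), b"\x00" * 16))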

nick-falco commented on August 12, 2024

I wonder if we have too many open HDF5 dataspaces that are causing a large memory leak:

The HDF5 docs state the following:

Excessive Memory Usage

Open Objects

Open objects use up memory. The amount of memory used may be substantial when many objects are left open. You should:

  • Delay opening of files and datasets as close to their actual use as is feasible.
  • Close files and datasets as soon as their use is completed.
  • If writing to a portion of a dataset in a loop, be sure to close the dataspace with each iteration, as this can cause a large temporary "memory leak".

There are APIs to determine if datasets and groups are left open. H5Fget_obj_count will get the number of open objects in the file, and H5Fget_obj_ids will return a list of the open object identifiers.
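
Those counts can be checked from Python through h5py's low-level bindings for the same C calls (the file name is a placeholder; ph5 itself uses PyTables, but both sit on the same HDF5 library):

import h5py
from h5py import h5f

f = h5py.File("master.ph5", "r")
print("open objects:", h5f.get_obj_count(f.id, h5f.OBJ_ALL))
print("open datasets:", h5f.get_obj_count(f.id, h5f.OBJ_DATASET))
for oid in h5f.get_obj_ids(f.id, h5f.OBJ_ALL):
    print(oid)
f.close()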

@rsdeazevedo Do you think this could be the case?

derick-hess commented on August 12, 2024

I don't think that's the case. I now explicitly close and delete each table after it is read. Well, I try to at least, but it won't garbage collect and free the memory until the counter on the iterator reaches 0. Removing yielding completely fixes the problem. I'm trying to get it to work while still yielding the final stream object.

I think that will work once I rewrite the code that yields the stations to cut so it no longer yields. I think the issue is happening because we have two iterators going at the same time.

I think we changed it to yield those to speed up the time to the first stream object. This change will make it take a little longer to yield the first stream object, but I'm going to try to minimize that.

That does remind me: while fixing this, I will also update ph5tostationxml to free the table memory as soon as it can.

nick-falco commented on August 12, 2024

Thanks Derick, the nested yielding could very well be the cause of the issue.

Another improvement I want to make is to have the ph5tostationxml.run_ph5_to_stationxml(sta_xml_obj) method accept a list of request objects for a given network. This will vastly speed up the many repetitive POST requests that currently time out. If you are refactoring the ph5tostationxml.py module, please keep this in mind.

For example, the ObsPy Fed Catalog client currently makes POST requests, formatted like the example below, that can be hundreds of lines long.

level=station
YW 1001 * * <start-time> <end-time>
YW 1002 * * <start-time> <end-time>
YW 1003 * * <start-time> <end-time>
YW 1004 * * <start-time> <end-time>
... etc.

Long requests currently time out largely because we process each request independently (performing the same work of extracting all stations/channels more than once). Adding support for lists of requests to the ph5tostationxml.py API will fix this, since large amounts of data will only have to be read from each requested network once.
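
A hypothetical sketch of the shape of that API change, using plain ObsPy calls in place of the ph5 internals (read_inventory and Inventory.select are real ObsPy; the request fields are made up):

from obspy import read_inventory

def run_batch(inventory_path, requests):
    inventory = read_inventory(inventory_path)  # one expensive read
    results = []
    for req in requests:  # cheap per-request filtering afterwards
        results.append(inventory.select(station=req["sta"],
                                        starttime=req["start"],
                                        endtime=req["end"]))
    return results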

derick-hess commented on August 12, 2024

Addressed in PR #108.
