Comments (12)
I am going to test this by modifying the script on my local machine to immediately write out the trace object instead of yielding it, to see how that changes things and to get a better idea of what is going on.
from ph5.
Okay, got ph5toms fixed. Going to add some more memory cleanup to see if I can get it even lower and make sure it's cleaning up everywhere possible. I'll create a PR in a bit.
The issue was the nested yields. Getting rid of the yield in the station cut allowed garbage collection to run after every stream is yielded.
Still running some tests. When I run a really big request, say for 10 GB of data, it is more clear that memory usage goes up by the amount of data yielded each time an ObsPy stream is yielded.
I have already added code to clear out the ph5 tables after they are done being used. That helped a little, but they really aren't that big. I'm investigating why the yielded object, or data related to it, isn't being garbage collected automatically after it is yielded.
At least that is my theory so far. Using mprof and guppy right now.
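Alongside mprof and guppy, the standard-library tracemalloc module can pinpoint which source line is holding the memory. A minimal sketch, where the bytearrays stand in for accumulated cut traces:

```python
import tracemalloc

tracemalloc.start()

# Stand-in for the cut waveform data that keeps accumulating.
traces = [bytearray(100_000) for _ in range(20)]

snapshot = tracemalloc.take_snapshot()
top = snapshot.statistics("lineno")[0]
# 'top' points at the source line that allocated the ~2 MB above;
# top.size is the number of bytes still held from that line.
```

Taking a snapshot before and after each yield and diffing them with `snapshot.compare_to(old, "lineno")` shows exactly which allocations survive each iteration.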
I think you may be on to something. The memory might not be freed until StopIteration is reached. See the following Stack Overflow post:
https://stackoverflow.com/questions/15490127/will-a-python-generator-be-garbage-collected-if-it-will-not-be-used-any-more-but
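The behavior described in that Stack Overflow post can be demonstrated directly: a suspended generator's frame keeps its locals alive, so a large yielded object is only freed once the generator is exhausted or closed. A minimal sketch for CPython, where refcounting frees objects immediately (the bytearray subclass stands in for a large ObsPy Stream; the subclass exists only so a weakref can observe collection):

```python
import weakref

class Big(bytearray):
    """Stand-in for a large ObsPy Stream; subclassed so it is weak-referenceable."""

def make_streams():
    big = Big(10_000_000)
    yield big
    # While suspended at the yield, the frame still references 'big'.

gen = make_streams()
chunk = next(gen)
ref = weakref.ref(chunk)
del chunk
# The suspended frame still holds the object, so it is not collected.
assert ref() is not None
gen.close()  # discarding the frame finally releases it
assert ref() is None
```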
Yeah, so it looks like while the iteration is in progress, the memory can't be reclaimed anywhere in the script. I tried clearing out and deleting all the tables as soon as they are unneeded, but mprof shows that the memory actually isn't released until the end of the script, even if I explicitly call gc.collect().
One solution I will look into is multiprocessing, to see if that allows proper garbage collection.
I also read about how you can potentially force garbage collection by running memory intensive processes in a separate thread. Once the thread is finished running, the memory is freed.
Do we know what is using so much memory? It seems like the first step is to figure out exactly what is causing the spike in memory usage, before figuring out a fix.
I believe it is the yielded ObsPy stream not being collected. When I run it on large data sets, memory looks like it is jumping by the size of the ObsPy stream every time. A little bit of it also looks like the das_t. I added a line in create_trace to free that memory every time it is done with a das_t, but it currently doesn't do anything, since garbage collection is halted.
I also added code to clear all other tables when they are done but those only amount to < 10MB total memory.
I wonder if we have too many open HDF5 dataspaces causing a large memory leak. The HDF5 docs state the following:
Excessive Memory Usage
Open Objects
Open objects use up memory. The amount of memory used may be substantial when many objects are left open. You should:
- Delay opening of files and datasets as close to their actual use as is feasible.
- Close files and datasets as soon as their use is completed.
- If writing to a portion of a dataset in a loop, be sure to close the dataspace with each iteration, as this can cause a large temporary "memory leak".
There are APIs to determine if datasets and groups are left open. H5Fget_obj_count will get the number of open objects in the file, and H5Fget_obj_ids will return a list of the open object identifiers.
@rsdeazevedo Do you think this could be the case?
I don't think that's the case. I now explicitly close and delete each table after it is read. Well, I try to at least, but it won't garbage collect and free the memory until the reference count on the iterator drops to zero. Removing yielding completely fixes the problem. I'm trying to get it to work while still yielding the final stream object.
I think that will work once I rewrite the code that yields the stations to cut so it no longer yields. I think the issue is happening because we have two iterators going at the same time.
I think we changed it to yield those to speed up the time to yield the first stream object. This code change will make it take a little longer until it yields the first stream object but I'm going to try to minimize that.
That does remind me: while working on this fix, I will also update ph5tostationxml to free the table memory as soon as it can.
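A minimal sketch of that refactor, with hypothetical names: the inner station iterator builds its full list up front instead of yielding, so the only suspended frame left is the outer generator's, and each yielded stream can be freed as soon as the caller drops it:

```python
def stations_to_cut(request):
    # Was a generator; returning a plain list means no second suspended
    # frame keeps references alive between streams.
    return ["sta%03d" % n for n in range(3)]

def cut_streams(request):
    for station in stations_to_cut(request):
        stream = "stream-for-" + station  # stand-in for an ObsPy Stream
        yield stream
        # On resume, 'stream' is rebound on the next iteration, so the
        # caller holds the only remaining reference to the old one.

streams = list(cut_streams("req-1"))
```

The trade-off mentioned above is real: the first stream takes slightly longer to appear, because the station list is materialized before any cutting starts.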
Thanks Derick, the nested yielding could very well be the cause of the issue.
Another improvement I want to make is to have the ph5tostationxml.run_ph5_to_stationxml(sta_xml_obj)
method accept a list of request objects for a given network. This will vastly speed up the many repetitive POST requests that currently time out. If you are refactoring the ph5tostationxml.py module, please keep this in mind.
For example, the ObsPy Fed Catalog client currently makes POST requests, formatted like the example below, that can be hundreds of lines long.
```
level = 'station'
YW 1001 * * <start-time> <end-time>
YW 1002 * * <start-time> <end-time>
YW 1003 * * <start-time> <end-time>
YW 1004 * * <start-time> <end-time>
... etc.
```
Long requests currently time out largely because we process each request independently, repeating the work of extracting all stations/channels more than once. Adding support for lists of requests to the ph5tostationxml.py API will fix this problem, since large amounts of data will only have to be read from each requested network one time.
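A sketch of what accepting a list of requests could look like. `Request` and `group_by_network` are illustrations, not the real ph5tostationxml API: grouping the POST-body lines by network up front means each network's station/channel tables are extracted once and reused for every matching line:

```python
from collections import defaultdict

class Request:
    """Hypothetical request object: one line of the POST body."""
    def __init__(self, network, station, start=None, end=None):
        self.network = network
        self.station = station
        self.start, self.end = start, end

def group_by_network(requests):
    # One read of a network's tables can now answer every request for it.
    grouped = defaultdict(list)
    for req in requests:
        grouped[req.network].append(req)
    return dict(grouped)

reqs = [Request("YW", "1001"), Request("YW", "1002"), Request("XX", "2001")]
by_network = group_by_network(reqs)
```

With a grouping like this, run_ph5_to_stationxml could iterate over networks rather than over individual request lines, doing the expensive extraction once per network.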
Addressed in PR #108.