Comments (11)
There is no such problem in memleax.
The new memory block's allocation time is set at memblock.c:memblock_new():

`mb->create = time(NULL);`

and it is used for expiration checking at memblock.c:memblock_expire():

`if (now - mb->create < expire) {`
When a memory block is freed, the record (`struct memblock_s`) is deleted.
If the memory block with the same address is allocated later, a new record is created with the current timestamp, and is not related to the previous record at all.
from memleax.
Thanks for the reply!
Something else must be going on then, because when I run memleax on my application it terminates after reaching the maximum number of expired blocks (1000 blocks older than 60 seconds).
However, without memleax the same application is running for several days with perfectly flat memory usage (VIRT & RES) according to top.
I was hoping that the discrepancy was caused by malloc returning new allocations from the free list, but it seems that is not the case.
Please let me know if you have any thoughts on how to resolve this. Thanks!
So I've tried a few things to resolve this, so far with no luck.
I noticed that memleax was using a lot of CPU when run, so I reduced the load on the target application, to the point where it uses < 2% CPU -- at that load, memleax is running at approx. 2% CPU. Memory utilization of both memleax and the target application is steady at approx. 100-200MB (on a 16GB machine).
I also reduced the size of the call stack captured to 10 frames.
Even with these settings I am seeing what I am convinced are spurious reports from memleax:
[/home/btorpey] /build/share/memleax/1.1.1/bin/memleax -e 60 -l 10 7276
== Begin monitoring process 7276...
CallStack[1]: memory expires with 17 bytes, backtrace:
0x00000033bfe7aa50 libc-2.12.so __malloc()+0 /usr/src/debug/glibc-2.12-2-gc4ccff1/libio/genops.c:598
0x00000033bfe80ed2 libc-2.12.so __strdup()+34 ../sysdeps/x86_64/strcpy.S:155
0x00007f29b7e2512d libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchSubMsg()+771 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:729
0x00007f29b7e24ce3 libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchNormalMsg()+196 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:643
0x00007f29b7e243e3 libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchThread()+873 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:475
0x00000033c0607aa1 libpthread-2.12.so start_thread()+209 /usr/src/debug/glibc-2.12-2-gc4ccff1/nptl/nptl-init.c:280
0x00000033bfee8bcd libc-2.12.so __clone()+109 ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
CallStack[1]: memory expires with 17 bytes, 2 times again
CallStack[1]: memory expires with 17 bytes, 3 times again
CallStack[1]: memory expires with 17 bytes, 4 times again
CallStack[1]: memory expires with 17 bytes, 5 times again
CallStack[1]: memory expires with 17 bytes, 6 times again
CallStack[1]: memory expires with 17 bytes, 7 times again
...
I'm having a hard time understanding how any allocations can live for > 60 seconds if neither memleax nor the target application are using significant CPU or memory.
FWIW, the target application simply subscribes to messages sent by a publishing application (using ZeroMQ) and discards each message after processing. As mentioned previously, the same target application has been running for several days at a much higher message rate (100,000 m/s) with flat memory utilization.
Any suggestions would be much appreciated - thanks!
P.S. I'm running memleax 1.1.1 with the patch from #36. The test machine is running CentOS 6.9 w/4x i5-2400 CPUs and 16GB RAM.
P.S. I terminated memleax and here's the report it gives:
== Terminate monitoring.
== Callstack statistics: (in ascending order)
CallStack[5]: may-leak=2 (48 bytes)
expired=2 (48 bytes), free_expired=0 (0 bytes)
alloc=17391 (417384 bytes), free=17388 (417312 bytes)
freed memory live time: min=0 max=1 average=0
un-freed memory live time: max=1508
0x00000033bfe7aa50 libc-2.12.so __malloc()+0 /usr/src/debug/glibc-2.12-2-gc4ccff1/libio/genops.c:598
0x00007f29b7bce161 libzmq.so _ZN3zmq15socket_poller_t7rebuildEv()+467
0x00007f29b7bce4b9 libzmq.so _ZN3zmq15socket_poller_t4waitEPNS0_7event_tEil()+69
0x00007f29b7bcc0b9 libzmq.so zmq_poller_wait_all()+229 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:1371
0x00007f29b7bcc8ff libzmq.so _Z15zmq_poller_pollP14zmq_pollitem_til()+893 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:813
0x00007f29b7bcba4c libzmq.so zmq_poll()+40 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:861
0x00007f29b7e2421c libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchThread()+418 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:450
0x00000033c0607aa1 libpthread-2.12.so start_thread()+209 /usr/src/debug/glibc-2.12-2-gc4ccff1/nptl/nptl-init.c:280
0x00000033bfee8bcd libc-2.12.so __clone()+109 ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
CallStack[4]: may-leak=2 (24 bytes)
expired=2 (24 bytes), free_expired=0 (0 bytes)
alloc=17391 (208692 bytes), free=17388 (208656 bytes)
freed memory live time: min=0 max=1 average=0
un-freed memory live time: max=1508
0x00000033bfe7aa50 libc-2.12.so __malloc()+0 /usr/src/debug/glibc-2.12-2-gc4ccff1/libio/genops.c:598
0x00007f29b89cf30d libstdc++.so _Znwm()+29 /shared/buildtest/gcc/4.8.2/gcc-4.8.2/build/x86_64-unknown-linux-gnu/libstdc++-v3/include/ext/concurrence.h:106
0x00007f29b7bcd92e libzmq.so _ZN3zmq15socket_poller_t3addEPNS_13socket_base_tEPvs()+236
0x00007f29b7bcbc4d libzmq.so zmq_poller_add()+178 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:1259
0x00007f29b7bcc789 libzmq.so _Z15zmq_poller_pollP14zmq_pollitem_til()+519 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:784
0x00007f29b7bcba4c libzmq.so zmq_poll()+40 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:861
0x00007f29b7e2421c libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchThread()+418 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:450
0x00000033c0607aa1 libpthread-2.12.so start_thread()+209 /usr/src/debug/glibc-2.12-2-gc4ccff1/nptl/nptl-init.c:280
0x00000033bfee8bcd libc-2.12.so __clone()+109 ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
CallStack[3]: may-leak=2 (192 bytes)
expired=2 (192 bytes), free_expired=0 (0 bytes)
alloc=17391 (1669536 bytes), free=17388 (1669248 bytes)
freed memory live time: min=0 max=1 average=0
un-freed memory live time: max=1508
0x00000033bfe7aa50 libc-2.12.so __malloc()+0 /usr/src/debug/glibc-2.12-2-gc4ccff1/libio/genops.c:598
0x00007f29b89cf39d libstdc++.so _ZnwmRKSt9nothrow_t()+29 /shared/buildtest/gcc/4.8.2/gcc-4.8.2/build/x86_64-unknown-linux-gnu/libstdc++-v3/include/ext/concurrence.h:106
0x00007f29b7bcc5e4 libzmq.so _Z15zmq_poller_pollP14zmq_pollitem_til()+98 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:761
0x00007f29b7bcba4c libzmq.so zmq_poll()+40 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:861
0x00007f29b7e2421c libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchThread()+418 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:450
0x00000033c0607aa1 libpthread-2.12.so start_thread()+209 /usr/src/debug/glibc-2.12-2-gc4ccff1/nptl/nptl-init.c:280
0x00000033bfee8bcd libc-2.12.so __clone()+109 ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
CallStack[2]: may-leak=15 (800 bytes)
expired=15 (800 bytes), free_expired=0 (0 bytes)
alloc=52173 (3895584 bytes), free=52157 (3894656 bytes)
freed memory live time: min=0 max=1 average=0
un-freed memory live time: max=1511
0x00000033bfe7aa50 libc-2.12.so __malloc()+0 /usr/src/debug/glibc-2.12-2-gc4ccff1/libio/genops.c:598
0x00007f29b89cf30d libstdc++.so _Znwm()+29 /shared/buildtest/gcc/4.8.2/gcc-4.8.2/build/x86_64-unknown-linux-gnu/libstdc++-v3/include/ext/concurrence.h:106
0x00007f29b7bcf7a0 libzmq.so _ZN9__gnu_cxx13new_allocatorIN3zmq15socket_poller_t6item_tEE8allocateEmPKv()+64
0x00007f29b7bcf5a1 libzmq.so _ZNSt12_Vector_baseIN3zmq15socket_poller_t6item_tESaIS2_EE11_M_allocateEm()+47
0x00007f29b7bcf073 libzmq.so _ZNSt6vectorIN3zmq15socket_poller_t6item_tESaIS2_EE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPS2_S4_EERKS2_()+325
0x00007f29b7bcecca libzmq.so _ZNSt6vectorIN3zmq15socket_poller_t6item_tESaIS2_EE9push_backERKS2_()+120
0x00007f29b7bcd9c2 libzmq.so _ZN3zmq15socket_poller_t3addEPNS_13socket_base_tEPvs()+384
0x00007f29b7bcbc4d libzmq.so zmq_poller_add()+178 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:1259
0x00007f29b7bcc789 libzmq.so _Z15zmq_poller_pollP14zmq_pollitem_til()+519 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:784
0x00007f29b7bcba4c libzmq.so zmq_poll()+40 /home/btorpey/work/libzmq/4.2.3/src/zmq.cpp:861
CallStack[1]: may-leak=285 (4845 bytes)
expired=285 (4845 bytes), free_expired=0 (0 bytes)
alloc=17391 (295647 bytes), free=17095 (290615 bytes)
freed memory live time: min=0 max=1 average=0
un-freed memory live time: max=1737
0x00000033bfe7aa50 libc-2.12.so __malloc()+0 /usr/src/debug/glibc-2.12-2-gc4ccff1/libio/genops.c:598
0x00000033bfe80ed2 libc-2.12.so __strdup()+34 ../sysdeps/x86_64/strcpy.S:155
0x00007f29b7e2512d libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchSubMsg()+771 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:729
0x00007f29b7e24ce3 libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchNormalMsg()+196 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:643
0x00007f29b7e243e3 libmamazmqimpl.so zmqBridgeMamaTransportImpl_dispatchThread()+873 /home/btorpey/work/OpenMAMA-zmq/2.0/src/transport.c:475
0x00000033c0607aa1 libpthread-2.12.so start_thread()+209 /usr/src/debug/glibc-2.12-2-gc4ccff1/nptl/nptl-init.c:280
0x00000033bfee8bcd libc-2.12.so __clone()+109 ../sysdeps/unix/sysv/linux/x86_64/clone.S:115
[/home/btorpey]
This may be a bug!
When memleax works with a multi-threaded process:

- memleax sets breakpoints at the entry of `malloc` and `free`;
- one thread runs into `free`;
- memleax catches it, sets a breakpoint at the exit of `free`, and temporarily deletes the breakpoint at the entry of `free` to let the thread continue;
- if another thread runs into `free` at that moment, memleax will miss it because the breakpoint is deleted!!! Then this memory block record will live forever in memleax.
In your output, most memory blocks are freed, and within 1 second (`max=1`), so it is very likely this bug:
CallStack[1]: may-leak=285 (4845 bytes)
expired=285 (4845 bytes), free_expired=0 (0 bytes)
alloc=17391 (295647 bytes), free=17095 (290615 bytes)
freed memory live time: min=0 max=1 average=0
un-freed memory live time: max=1737
I do not know how to fix it yet. Maybe I could study how GDB does it.
I am busy these days and will try to fix it later.
Thanks for this report.
memleax may use a lot of CPU if your program calls `malloc` frequently, because it traps on each `malloc` and reads your program's memory several times to get the call stack. It also makes your program slower.
Besides, why do you use memleax if your program seems to have no memory leak?
Besides, why do you use memleax if your program seems to have no memory leak?
I know the target application "leaks" (grows memory) in other scenarios, so I was trying to establish a baseline with memleax in a scenario where it does not leak before attempting to diagnose the other problem.
I do not know how to fix it by now. Maybe I could study how GDB does.
I'm sure you know this, but other memory debuggers work by intercepting malloc, free (etc.) to insert their own code. It sounds like memleax is using self-modifying code, which is definitely going to be a problem in multi-threaded applications (like mine).
I'm not sure why memleax needs to delete the breakpoint at entry to free -- would it make sense to have both the entry and exit breakpoint active at all times? Of course, that would mean that memleax would need to use thread-safe data structures, but in order to be usable with multi-threaded apps it would need to do that in any case.
You may want to consider adding a note to the README that memleax doesn't necessarily work correctly with multi-threaded apps.
If I can come up with anything that would help, I'll be sure to post back here.
Thanks again!
I'm sure you know this, but other memory debuggers work...
I did not know this, in fact :( I just know basic debugging from school, and do not know the details in practice.
I'm not sure why memleax needs to delete the breakpoint at entry to free...
memleax just places an INT3 instruction at the entry of free(). When it traps, memleax replaces the INT3 with the original instruction to let the target program continue, and re-places the INT3 later.
This is the basic idea of debugging, while "inserting their own code" may be a more advanced approach that I did not know about before.
Besides, I do not know how to get the exit address of free(), so I actually set the breakpoint at the return address when trapping into free().
You may want to consider adding a note to the README
Yes, I have added a note and posted issues.
Thanks again!
I'm sure you know this, but other memory debuggers work...
I did not know this, in fact :( I just know basic debugging from school, and do not know the details in practice.
FWIW, here's a good writeup on the LD_PRELOAD technique: http://stackoverflow.com/questions/426230/what-is-the-ld-preload-trick
This was referenced from another useful post: http://milianw.de/blog/heaptrack-a-heap-memory-profiler-for-linux
This could be a more robust technique for hooking malloc etc., especially for multi-threaded code.
`heaptrack` seems wonderful. Maybe I would not have written memleax if I had found it earlier.
I will try it later.
Thanks.