
100% cpu in event loop · logcabin (closed, 11 comments)

logcabin commented on May 18, 2024
100% cpu in event loop


Comments (11)

ongardie commented on May 18, 2024

Hmm, do you have commit 31cb6f from a couple of days ago? That fixed a similar symptom.

If you do have that commit and still see this problem, please attach to the server with gdb and grab the output of "thread apply all backtrace".


yfinkelstein commented on May 18, 2024

I pulled the latest commit today, before running the test. Here is the backtrace:

(gdb) thread apply all backtrace

Thread 5 (Thread 0x7fad0fdd0700 (LWP 25898)):
#0  std::unique_lock<LogCabin::Core::Mutex>::~unique_lock (this=0x7fad0fdcfd00, __in_chrg=<optimized out>) at /usr/include/c++/4.8/mutex:477
#1  0x0000000000422acd in std::unique_lock<LogCabin::Core::Mutex>::operator=(std::unique_lock<LogCabin::Core::Mutex>&&) (this=0x7fad0fdcfdc0, 
    __u=<unknown type in /home/yfinkelstein/work/logcabin/build/LogCabin, CU 0x929a, DIE 0x50529>) at /usr/include/c++/4.8/mutex:494
#2  0x0000000000424e4e in LogCabin::Core::ConditionVariable::wait_until<std::chrono::_V2::steady_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > > (
    this=0x16b2380, lockGuard=..., abs_time=...) at ./Core/ConditionVariable.h:127
#3  0x000000000041c306 in LogCabin::Server::RaftConsensusInternal::RaftConsensus::timerThreadMain (this=0x16b2310) at build/Server/RaftConsensus.cc:1392
#4  0x0000000000430a5d in std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()>::operator()<, void>(LogCabin::Server::RaftConsensusInternal::RaftConsensus*) const (this=0x16b29e8, __object=0x16b2310) at /usr/include/c++/4.8/functional:601
#5  0x0000000000430865 in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()> (LogCabin::Server::RaftConsensusInternal::RaftConsensus*)>::_M_invoke<0ul>(std::_Index_tuple<0ul>) (this=0x16b29e0) at /usr/include/c++/4.8/functional:1732
#6  0x00000000004306f1 in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()> (LogCabin::Server::RaftConsensusInternal::RaftConsensus*)>::operator()() (this=0x16b29e0) at /usr/include/c++/4.8/functional:1720
#7  0x0000000000430622 in std::thread::_Impl<std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()> (LogCabin::Server::RaftConsensusInternal::RaftConsensus*)> >::_M_run() (this=0x16b29c8) at /usr/include/c++/4.8/thread:115
#8  0x00007fad1097d770 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#9  0x00007fad11bf8f6e in start_thread (arg=0x7fad0fdd0700) at pthread_create.c:311
#10 0x00007fad100e49cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 4 (Thread 0x7fad0f5cf700 (LWP 25899)):
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007fad11c021a8 in _L_cond_lock_987 () from /lib/x86_64-linux-gnu/libpthread.so.0
#2  0x00007fad11c01f6b in __pthread_mutex_cond_lock (mutex=0x16b2338) at ../nptl/pthread_mutex_lock.c:64
#3  0x00007fad11bfcd14 in pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:259
#4  0x00007fad1097a1ec in std::condition_variable::wait(std::unique_lock<std::mutex>&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x00000000004213a9 in LogCabin::Core::ConditionVariable::wait (this=0x16b2380, lockGuard=...) at ./Core/ConditionVariable.h:72
#6  0x0000000000421452 in LogCabin::Core::ConditionVariable::wait (this=0x16b2380, lockGuard=...) at ./Core/ConditionVariable.h:85
#7  0x000000000041c7d8 in LogCabin::Server::RaftConsensusInternal::RaftConsensus::stepDownThreadMain (this=0x16b2310) at build/Server/RaftConsensus.cc:1469
#8  0x0000000000430a5d in std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()>::operator()<, void>(LogCabin::Server::RaftConsensusInternal::RaftConsensus*) const (this=0x16b2a38, __object=0x16b2310) at /usr/include/c++/4.8/functional:601
#9  0x0000000000430865 in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()> (LogCabin::Server::RaftConsensusInternal::RaftConsensus*)>::_M_invoke<0ul>(std::_Index_tuple<0ul>) (this=0x16b2a30) at /usr/include/c++/4.8/functional:1732
#10 0x00000000004306f1 in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()> (LogCabin::Server::RaftConsensusInternal::RaftConsensus*)>::operator()() (this=0x16b2a30) at /usr/include/c++/4.8/functional:1720
#11 0x0000000000430622 in std::thread::_Impl<std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::RaftConsensusInternal::RaftConsensus::*)()> (LogCabin::Server::RaftConsensusInternal::RaftConsensus*)> >::_M_run() (this=0x16b2a18) at /usr/include/c++/4.8/thread:115
#12 0x00007fad1097d770 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#13 0x00007fad11bf8f6e in start_thread (arg=0x7fad0f5cf700) at pthread_create.c:311
#14 0x00007fad100e49cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 3 (Thread 0x7fad0edce700 (LWP 25900)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007fad1097a1ec in std::condition_variable::wait(std::unique_lock<std::mutex>&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2  0x00000000004213a9 in LogCabin::Core::ConditionVariable::wait (this=0x16b2380, lockGuard=...) at ./Core/ConditionVariable.h:72
#3  0x0000000000421452 in LogCabin::Core::ConditionVariable::wait (this=0x16b2380, lockGuard=...) at ./Core/ConditionVariable.h:85
#4  0x0000000000419eab in LogCabin::Server::RaftConsensusInternal::RaftConsensus::getNextEntry (this=0x16b2310, lastEntryId=0) at build/Server/RaftConsensus.cc:896
#5  0x0000000000440610 in LogCabin::Server::StateMachine::applyThreadMain (this=0x16b8160) at build/Server/StateMachine.cc:159
#6  0x0000000000447a9d in std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()>::operator()<, void>(LogCabin::Server::StateMachine*) const (this=0x16b2838, __object=0x16b8160)
    at /usr/include/c++/4.8/functional:601
#7  0x00000000004479ed in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()> (LogCabin::Server::StateMachine*)>::_M_invoke<0ul>(std::_Index_tuple<0ul>) (
    this=0x16b2830) at /usr/include/c++/4.8/functional:1732
#8  0x00000000004478eb in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()> (LogCabin::Server::StateMachine*)>::operator()() (this=0x16b2830)
    at /usr/include/c++/4.8/functional:1720
#9  0x0000000000447884 in std::thread::_Impl<std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()> (LogCabin::Server::StateMachine*)> >::_M_run() (
    this=0x16b2818) at /usr/include/c++/4.8/thread:115
#10 0x00007fad1097d770 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#11 0x00007fad11bf8f6e in start_thread (arg=0x7fad0edce700) at pthread_create.c:311
#12 0x00007fad100e49cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 2 (Thread 0x7fad0e5cd700 (LWP 25901)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007fad1097a1ec in std::condition_variable::wait(std::unique_lock<std::mutex>&) () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#2  0x0000000000441350 in LogCabin::Server::StateMachine::snapshotThreadMain (this=0x16b8160) at build/Server/StateMachine.cc:305
#3  0x0000000000447a9d in std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()>::operator()<, void>(LogCabin::Server::StateMachine*) const (this=0x16b27a8, __object=0x16b8160)
    at /usr/include/c++/4.8/functional:601
#4  0x00000000004479ed in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()> (LogCabin::Server::StateMachine*)>::_M_invoke<0ul>(std::_Index_tuple<0ul>) (
    this=0x16b27a0) at /usr/include/c++/4.8/functional:1732
#5  0x00000000004478eb in std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()> (LogCabin::Server::StateMachine*)>::operator()() (this=0x16b27a0)
    at /usr/include/c++/4.8/functional:1720
#6  0x0000000000447884 in std::thread::_Impl<std::_Bind_simple<std::_Mem_fn<void (LogCabin::Server::StateMachine::*)()> (LogCabin::Server::StateMachine*)> >::_M_run() (
    this=0x16b2788) at /usr/include/c++/4.8/thread:115
#7  0x00007fad1097d770 in ?? () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#8  0x00007fad11bf8f6e in start_thread (arg=0x7fad0e5cd700) at pthread_create.c:311
#9  0x00007fad100e49cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 1 (Thread 0x7fad12012740 (LWP 25897)):
#0  0x00007fad100e5083 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81
#1  0x00007fad10df0f83 in ?? () from /usr/lib/x86_64-linux-gnu/libevent_core-2.0.so.5
#2  0x00007fad10ddd2d7 in event_base_loop () from /usr/lib/x86_64-linux-gnu/libevent_core-2.0.so.5
#3  0x00000000004a57cf in LogCabin::Event::Loop::runForever (this=0x7fff07e1d730) at build/Event/Loop.cc:169
#4  0x000000000043a126 in LogCabin::Server::Globals::run (this=0x7fff07e1d6f0) at build/Server/Globals.cc:135
#5  0x0000000000415dce in main (argc=3, argv=0x7fff07e1d978) at build/Server/Main.cc:113
(gdb) 


yfinkelstein commented on May 18, 2024

I think here's what's going on. In RaftLog.cc:

uint64_t
Log::getLogStartIndex() const
{
    return startId;
}


uint64_t
Log::getLastLogIndex() const
{
    return startId + entries.size() - 1;
}

If entries.size() is 0, which is the case when the log is empty, the subtraction underflows and the last log index wraps around to a huge unsigned value.

Then, the beautiful loops like

    for (uint64_t entryId = consensus.log->getLogStartIndex();
         entryId <= consensus.log->getLastLogIndex();
         ++entryId) {

that run during the log-validation step inside Invariants::checkBasic() in RaftConsensusInvariants.cc turn into infinite loops.
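
For illustration, here is a minimal standalone sketch (hypothetical values, not LogCabin code) of how an unsigned upper bound that wraps around makes such a loop effectively infinite:

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main()
    {
        std::vector<int> entries;   // an empty log
        uint64_t startId = 0;       // underflow would require startId == 0
        // 0 + 0 - 1 wraps around to 2^64 - 1 in unsigned arithmetic:
        uint64_t lastLogIndex = startId + entries.size() - 1;
        printf("lastLogIndex = %llu\n", (unsigned long long) lastLogIndex);
        // Every uint64_t satisfies entryId <= 2^64 - 1, so this never ends:
        for (uint64_t entryId = startId; entryId <= lastLogIndex; ++entryId) {
            break;                  // break only so the sketch terminates
        }
        return 0;
    }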

Has this ever worked differently?


yfinkelstein commented on May 18, 2024

In particular, the loop at line 134:

    for (uint64_t entryId = consensus.log->getLogStartIndex();
         entryId <= consensus.log->getLastLogIndex();
         ++entryId) {
        const Log::Entry& entry = consensus.log->getEntry(entryId);
        if (entry.type() == Protocol::Raft::EntryType::CONFIGURATION) {
            auto it = consensus.configurationManager->
                                        descriptions.find(entryId);
            expect(it != consensus.configurationManager->descriptions.end());
            if (it != consensus.configurationManager->descriptions.end())
                expect(it->second == entry.configuration());
        }
    }

runs forever, which is what my stack trace shows. It's easy to see why.


ongardie commented on May 18, 2024

startId should always be >= 1, so startId + entries.size() - 1 should never underflow. And the unit tests would have caught that right away. So unless startId for you is somehow getting set back to 0, I don't think this is the problem. Have you printed the startId out to check?


yfinkelstein commented on May 18, 2024

Yes, you're right. entries.size() starts at 1 because you initialize the log with one entry. I would argue that depending on initialization to avoid an infinite loop is not good coding practice, but that is not the point.
Anyway, I have narrowed the 100% CPU problem down to timerThreadMain in RaftConsensus.cc. I added some tracing as follows:

void
RaftConsensus::timerThreadMain()
{
    std::unique_lock<Mutex> lockGuard(mutex);
    Core::ThreadId::setName("startNewElection");
    while (!exiting) {
        printf("in timerThreadMain: %ld\n",
               startElectionAt.time_since_epoch().count());
        if (Clock::now() >= startElectionAt) {
            printf("starting election ...\n");
            startNewElection();
        }
        stateChanged.wait_until(lockGuard, startElectionAt);
    }
}

When I run the program with these changes, I see this:

in timerThreadMain: 9223372036854775807
in timerThreadMain: 9223372036854775807
in timerThreadMain: 9223372036854775807
in timerThreadMain: 9223372036854775807
... (the same line repeats without pause)

It is logging at the speed of light.
So, for whatever reason, this thread runs non-stop and eats CPU. As you can see, an election is not even triggered. This means that somehow

stateChanged.wait_until(lockGuard, startElectionAt);

does not block the thread.
Any suggestions?


yfinkelstein commented on May 18, 2024

I have further narrowed it down to line 111 in ConditionVariable.h:

            printf ("before  cv.wait_until(%ld)", abs_time);
            cv.wait_until(lockGuard, abs_time);

This line is printed non-stop, which means wait_until does not block.
I see that you tried to do something about it by having the callback invoked instead of calling wait_until. But I don't understand why this is an alternative, since the callback checks invariants and never even tries to wait. Can you explain how the code that's checked in can possibly block the timer thread?


yfinkelstein commented on May 18, 2024

Finally, I found the root cause. I replaced cv.wait_until(...) with cv.wait(...) when the wait time is time_point::max(), which is the case when the process is a leader:

            if (abs_time == Clock::time_point::max())
                cv.wait(lockGuard);
            else
                cv.wait_until(lockGuard, abs_time);

The problem with the old code is that the duration overflows and turns negative deep inside cv.wait_until(). Since your true intent is to have no timeout when the process is the leader, the above logic should be correct.
Once I did this, the 100% CPU problem went away, and Control-C now stops the process (naturally, the lock was never released by the timer thread, which was always running, and the exit path was blocked by the same lock).
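
For context, here is a minimal standalone repro of the underlying libstdc++ behavior (a sketch assuming GCC 4.8's std::condition_variable; on a fixed implementation this program simply blocks forever):

    #include <chrono>
    #include <condition_variable>
    #include <cstdio>
    #include <mutex>

    int main()
    {
        std::mutex m;
        std::condition_variable cv;
        std::unique_lock<std::mutex> lock(m);
        using Clock = std::chrono::steady_clock;
        // wait_until() converts the deadline to the condition variable's own
        // clock; with time_point::max() that arithmetic overflows on affected
        // libstdc++ versions, so the call returns immediately with
        // cv_status::timeout instead of blocking.
        std::cv_status status = cv.wait_until(lock, Clock::time_point::max());
        printf("wait_until returned %s\n",
               status == std::cv_status::timeout ? "timeout" : "no_timeout");
        return 0;
    }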

Well, now I'm continuing to try to start the cluster, but I face another issue. I can't start the processes with ids 2 and 3. I get this:

~/work/logcabin$ build/LogCabin --id 2
1391761874.850846 Server/Main.cc:107 in main() NOTICE[12408:evloop]: Using config file logcabin.conf
1391761874.851430 Server/Globals.cc:110 in init() NOTICE[12408:evloop]: Serving on 192.168.2.2:61023 (resolved to 192.168.2.2:61023)
1391761874.851554 Server/RaftConsensus.cc:755 in init() NOTICE[2:evloop]: My server ID is 2
1391761874.852079 Server/SimpleFileLog.cc:288 in read() ERROR[2:evloop]: Could not parse file: Could not open smoketeststorage/server2/log/0000000000000001: No such file or directory Exiting...
Aborted (core dumped)

Indeed, the file 0000000000000001 is not there in server2/log:

ls smoketeststorage/server2/log/
metadata1  metadata2  unknown

unknown is actually a directory.

I keep wondering: has anybody else attempted to run this cluster setup recently?


yfinkelstein commented on May 18, 2024

I ended up copying the file 0000000000000001 from server1 to the other two locations, and after that the other servers run normally and the hello world test on the cluster passed.
So, all of these issues need to be fixed in git:

  1. lock callback must not be set since it does not block the timer thread and causes continuous spin
  2. cv.wait_until(lockGuard, abs_time) => cv.wait(lockGuard) if the node is a leader
  3. initial creation of file 0000000000000001 on nodes other than server1


ongardie commented on May 18, 2024

> Yes, you're right. entries.size() starts at 1 because you initialize the log with one entry. I would argue that depending on initialization to avoid an infinite loop is not good coding practice, but that is not the point.

It doesn't rely on entries.size() starting with 1 (this isn't always the case; servers that haven't been added to the cluster have empty logs). It relies on the invariant that startId >= 1, which is true since log indexes are counted from 1.
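
To spell that out with a small sketch (hypothetical values): with an empty log and startId == 1, getLastLogIndex() returns startId - 1 == 0, so the loop condition is false on the first test and the body never runs:

    #include <cassert>
    #include <cstdint>

    int main()
    {
        uint64_t startId = 1;     // invariant: log indexes are counted from 1
        uint64_t numEntries = 0;  // entries.size() == 0 for an empty log
        uint64_t lastLogIndex = startId + numEntries - 1;  // == 0, no underflow
        for (uint64_t entryId = startId; entryId <= lastLogIndex; ++entryId)
            assert(false && "unreachable: the range 1..0 is empty");
        return 0;
    }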

> The problem with the old code is that the duration overflows and turns negative deep inside cv.wait_until().

Thanks for finding the root cause. The GCC bug for this is http://gcc.gnu.org/bugzilla/show_bug.cgi?id=58931 (which I didn't know about before). This was also the reason for #50.

> Well, now I'm continuing to try to start the cluster, but I face another issue. I can't start the processes with ids 2 and 3. I get this:
> 1391761874.852079 Server/SimpleFileLog.cc:288 in read() ERROR[2:evloop]: Could not parse file: Could not open smoketeststorage/server2/log/0000000000000001: No such file or directory Exiting...
> Indeed, the file 0000000000000001 is not there in server2/log:

I don't know how you ended up in this state. Somehow the metadata files for your servers 2 and 3 claimed entry 1 existed, but it didn't. I can't repro this, but if you can, please open a separate issue.

> I ended up copying the file 0000000000000001 from server1 to the other two locations, and after that the other servers run normally and the hello world test on the cluster passed.

That's good that you found a way to make it run, but FYI that's not really the way it's supposed to work: the leader was supposed to replicate those entries to the other servers through the Raft protocol.

> I keep wondering: has anybody else attempted to run this cluster setup recently?

AFAIK, you're the first to have tried gcc 4.8. That condition variable bug was a nasty one. Thanks for the reports and working through them.

> So, all of these issues need to be fixed in git:
>
> 1. lock callback must not be set since it does not block the timer thread and causes continuous spin

I think you're confusing Core::Mutex::callback (which is called when the mutex is acquired/released) with Core::ConditionVariable::callback (which mocks out waiting for unit tests, allowing the tests to force the condition to occur). The mutex callback is used to check invariants during all runs in debug builds, and the condition variable callback is never set during any run.
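
For readers unfamiliar with that pattern, here is a minimal sketch of a mockable condition variable in the style described (hypothetical names, not LogCabin's actual code): in production the callback is unset and wait() really blocks, while a unit test installs a callback that forces the awaited condition instead of sleeping:

    #include <condition_variable>
    #include <functional>
    #include <mutex>

    class MockableConditionVariable {
      public:
        // Set only by unit tests; never set during normal runs.
        std::function<void()> callback;

        void wait(std::unique_lock<std::mutex>& lockGuard)
        {
            if (callback)
                callback();           // test hook: force the condition to occur
            else
                cv.wait(lockGuard);   // production path: actually block
        }

        void notify_all() { cv.notify_all(); }

      private:
        std::condition_variable cv;
    };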

> 2. cv.wait_until(lockGuard, abs_time) => cv.wait(lockGuard) if the node is a leader

Commit fc4d35e takes care of this.

> 3. initial creation of file 0000000000000001 on nodes other than server1

I don't know how your servers ended up in that broken state, but those files aren't meant to exist until the leader replicates those entries to those servers.


yfinkelstein commented on May 18, 2024

After the latest checkout, I no longer have the missing-files problem on new servers.
With the gcc bug workaround above, everything works smoothly for me.
Thanks.


