
Comments (52)

korchak-aleksandr avatar korchak-aleksandr commented on June 20, 2024 20

Hello @mgravell

We have the same issue on .NET Core 3.1 using the latest version of StackExchange.Redis nuget.
JetBrains dotTrace profiling shows a very high percentage/number of threads in a waiting state in Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop.

You said earlier that you would investigate this problem.
Could you tell us the current status of the investigation?

We are looking forward to your reply and possible fix.
Thanks.

NickCraver avatar NickCraver commented on June 20, 2024 8

For anyone seeing the above: this isn't a problem, it's a matter of understanding what the profiler is telling you. There are dedicated threads for the scheduler that aren't part of the thread pool and are not blocking other work. If they're in RunWorkLoop they also aren't using CPU - they are awaiting a monitor event for work being available.

These profilers are telling you "we took a lot of samples, here's what we saw in those samples and how many times". Seeing these threads idle and awaiting work is good, though it can look misleading in a profiler like this that is based on samples and/or wall time. They aren't using active CPU and can be ignored; it's just an artifact of how threads, waiting, and profilers work.

ikalafat avatar ikalafat commented on June 20, 2024 5

Hi,

I'm experiencing (probably) the same issue with Pipelines.Sockets.Unofficial 2.1.16 and SE.Redis 2.1.55, running on .NET Framework 4.7.1.

Here is a screenshot from the profiler, if it is of any help.

[screenshot: PipelineSockets-PerformanceProfiler]

Is there anything that I can do to check/verify/help/fix this?

Thanks!

mscrivo avatar mscrivo commented on June 20, 2024 1

Just wanted to report back that after we eliminated all of our sync-over-async paths, our deadlocks disappeared with SE.Redis 2.
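
For anyone in a similar position, a minimal sketch of the kind of change involved; the IDatabase/key names here are hypothetical, not taken from this thread:

using StackExchange.Redis;
using System.Threading.Tasks;

static class SyncOverAsyncExample
{
    // Sync-over-async: blocks a thread-pool thread while the async call completes;
    // under load this contributes to thread-pool starvation and deadlocks.
    public static string GetBlocking(IDatabase db, string key)
        => db.StringGetAsync(key).GetAwaiter().GetResult();

    // Preferred: stay async end-to-end (or call the genuinely synchronous
    // db.StringGet(key) rather than blocking on the async version).
    public static async Task<string> GetAsync(IDatabase db, string key)
        => await db.StringGetAsync(key);
}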

svenclaesson avatar svenclaesson commented on June 20, 2024 1

"Hi, same question, same issue. @svenclaesson We already have the same issue after upgrading to .NET Core 3.1. Did you solve this issue? If yes, how did you solve it?"

We never fully figured out why, but we got rid of all sync-over-async code and the CPU usage dropped significantly. Still not at the level we saw on .NET Core 2.2, but we left it at that for now.

ikalafat avatar ikalafat commented on June 20, 2024 1

@NickCraver thank you for the additional explanation. The "problem" is that people (including myself) see these kinds of results in the profiler, where a large amount of time is "taken" by RunWorkLoop samples, and are unknowingly misled by them.

I still haven't found a way to exclude specific namespaces from profiler results (for example in dotTrace) so that the actual timings and percentages are recalculated with those exclusions in mind; that would give more precise results and allow better profiling of the app itself.

mgravell avatar mgravell commented on June 20, 2024

This does not necessarily indicate a problem. Basically, to avoid thread-pool starvation issues impacting the IO cycle, we use a dedicated thread pool whose threads are ready and waiting to process IO via the pipelines API. If there's nothing for those threads to do at the time, they'll sit in a Wait, waiting for a pulse indicating that there's something for them to do.

The default number of threads we spin up here is: 10

So: are you actually seeing a problem here? or are you just seeing these threads sat quietly minding their own business?

Note: this isn't 10 per multiplexer/connection/etc; they are shared.

mgravell avatar mgravell commented on June 20, 2024

BTW: an option here is to pass in a specific SocketManager; by default it uses a shared SocketManager, which uses 10 threads; but an individual SocketManager only holds 5 (the shared instance assumes it may see more activity). You should not, however, create too many SocketManager instances.
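
For anyone who wants to try that, a minimal sketch (the connection string and manager name are placeholders; this just passes a dedicated SocketManager instead of the shared one):

using StackExchange.Redis;

// A named, non-shared SocketManager gets its own, smaller dedicated thread pool
// rather than the shared 10-thread instance.
var options = ConfigurationOptions.Parse("localhost:6379");
options.SocketManager = new SocketManager("my-app-redis");
var muxer = ConnectionMultiplexer.Connect(options);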

robertmujica avatar robertmujica commented on June 20, 2024

Yes, that is what seems to be happening: these threads are waiting in Monitor.Wait to re-acquire a lock which they released. It seems a Monitor.Pulse or Monitor.PulseAll is never called, and when I looked at the call stacks they all point to this call:

Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()

So it means at some point the server runs out of threads and starts getting request errors, slow response times, etc.

Regarding your suggestion above about using SocketManager, could you point me to some sample code that I can look at, to see if this is an option for us?

mgravell avatar mgravell commented on June 20, 2024

The pulse should be invoked whenever at least one worker is waiting, which is the case here; I do not believe that the server is running out of threads, but I'm more than happy to investigate - can you please share what symptoms you are seeing that make you think this is related? There being worker threads waiting for work does not mean anything - that is expected. It looks like I don't currently expose the queue length, but I expect it is zero; I've added a tweak (see linked commit above) to help expose this, but it'll need a new client build to show it.

robertmujica avatar robertmujica commented on June 20, 2024

Thanks for the quick response on this issue. I will have new dump files tomorrow to do further analysis, so I will share more details. But so far I share your view that this should not have a big impact on overall server performance. Yes, I can see your commit.

bbijesh avatar bbijesh commented on June 20, 2024

I also observed similar behavior. In one of the posts above you mentioned that the default thread count is 10, but I see around 23 threads blocked here in one process dump, and 14 threads in another. Is it expected that the thread count increases like this?

bbijesh avatar bbijesh commented on June 20, 2024

Without this NuGet package I wouldn't expect that kind of issue, since the dedicated thread pool itself wouldn't exist. Anyway, please keep this thread updated. Thanks!

mscrivo avatar mscrivo commented on June 20, 2024

Just to add another data point to this. We just upgraded from SE.Redis 1.2.6 to 2.0.571. All was fine in our tests and in our QA environment, but when it hit production, our worker threads, which use Redis a lot and thus place it under fairly high load, randomly stop processing anything until the process is restarted; i.e. the CPU on the boxes drops to virtually 0 and they just sit there not doing anything.

There were no other code changes besides the Redis client upgrade. Taking a memory dump when it happens, I see the same 10 threads just waiting in RunWorkLoop. I understand that is normal when there is nothing to be done, but the question is why does work suddenly cease? Is there a deadlock happening somewhere?

The dump shows no other abnormalities, so I'm really scratching my head here. We are seeing a handful of RedisTimeoutExceptions since the upgrade, but I recall @mgravell saying that's likely because the 2.x branch now knows how to report timeouts in async scenarios, whereas 1.x didn't, so those may just be a red herring.

robertmujica avatar robertmujica commented on June 20, 2024

One more detail regarding our implementation that we believe may be causing this issue of leaving many threads in a waiting state: we are subscribing to 8 channels (one for each app).

We also have two tasks: one running every 15 seconds on each app to save hash values to the Redis cache, and a second one running every 30 seconds to sync the hash across apps, again using the Redis cache.

Do you think this sort of logic may be the cause of this issue? We are also thinking about simplifying this logic to use one channel and including the application name as part of the message, so it makes better use of thread resources.

Any thoughts?
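
Purely to illustrate the consolidation being considered (the channel name, delimiter, and helper names below are made up, not from this thread):

using System;
using StackExchange.Redis;

static class SingleChannelSketch
{
    // Publish on one shared channel, prefixing each message with the app name.
    public static void PublishUpdate(IConnectionMultiplexer muxer, string appName, string payload)
        => muxer.GetSubscriber().Publish("app-sync", appName + "|" + payload);

    // Subscribe once and route by the app-name prefix.
    public static void Listen(IConnectionMultiplexer muxer, Action<string, string> onMessage)
        => muxer.GetSubscriber().Subscribe("app-sync", (channel, message) =>
        {
            var parts = ((string)message).Split(new[] { '|' }, 2);
            onMessage(parts[0], parts[1]); // (fromApp, payload)
        });
}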

mgravell avatar mgravell commented on June 20, 2024

The number of channels etc. shouldn't be a factor here. If folks are saying there is something odd happening, then I will of course investigate, but I would really like to focus on the symptoms rather than the investigations. So: I am really, really interested in this, but rather than talking about people seeing waiting threads (which does not mean much), I'd really love to hear more about the performance problems and scenarios that you're seeing. Does that make sense?

robertmujica avatar robertmujica commented on June 20, 2024

Basically:

  • Over time, response time increases. E.g. our website should return within 2 seconds, but it starts taking 5, 9, or more seconds to respond.

  • Eventually it stops processing requests altogether and the web process even crashes. At that point we took a manual dump snapshot of the server and found that high number of threads in a waiting state.

The following is an extract of what we see in those dump files when the issue happens:

[image: dump analysis summary]

Notice that 10.64% of threads are blocked; when clicking on any of them, the .NET call stack looks like this:

[image: .NET call stack]

Note this was taken when we had version 1.0.9 in PROD.

mscrivo avatar mscrivo commented on June 20, 2024

Essentially the same symptoms here: things just stop processing after a while, but memory dumps don't indicate why, other than the waiting threads that @robertmujica points out. We have some sync-over-async code in the Redis path, so we're working on resolving that. Will report back tomorrow if it made any difference. Open to suggestions on how to dig deeper here.

For reference, our code is running on .NET Core 2.2 on Windows 2019 VMs against a 3-node Redis cluster on version 5.0.3.

bbijesh avatar bbijesh commented on June 20, 2024

[image: blocked threads summary]
31% of threads blocked is quite high. We are running on an Azure Web App, .NET Framework 4.7.2.

bbijesh avatar bbijesh commented on June 20, 2024

We took more dumps and see that 20 threads are waiting at
Pipelines_Sockets_Unofficial!Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()+84
It looks like 2 thread pools are getting created, then.

robertmujica avatar robertmujica commented on June 20, 2024

Analyzing some of the recent logs (from the 22nd of March), I can see the following entries:

Type: System.InvalidOperationException
Message: Reading is not allowed after reader was completed.
Stack:
[HelperMethodFrame]
System.IO.Pipelines.ThrowHelper.ThrowInvalidOperationException_NoReadingAllowed()
System.IO.Pipelines.Pipe.AdvanceReader(System.SequencePosition ByRef, System.SequencePosition ByRef)
System.IO.Pipelines.Pipe+DefaultPipeReader.AdvanceTo(System.SequencePosition, System.SequencePosition)
StackExchange.Redis.PhysicalConnection+d__108.MoveNext()
System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Runtime.CompilerServices.AsyncMethodBuilderCore+MoveNextRunner.Run()
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.Execute(System.Action`1<System.Object>, System.Object)
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop()
Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler+<>c.<.cctor>b__41_0(System.Object)
System.Threading.ExecutionContext.RunInternal(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object, Boolean)
System.Threading.ExecutionContext.Run(System.Threading.ExecutionContext, System.Threading.ContextCallback, System.Object)
System.Threading.ThreadHelper.ThreadStart(System.Object)
[GCFrame]
[DebuggerU2MCatchHandlerFrame]
[ContextTransitionFrame]
[DebuggerU2MCatchHandlerFrame]

Now I am wondering if this is a side effect of having network connectivity issues?

mscrivo avatar mscrivo commented on June 20, 2024

Reporting back as I said I would. Removing the remaining mixture of sync/async code in this path didn't seem to fix the issue we're having. I'm considering moving back to 1.2.6 just to verify that the symptoms we're seeing are in fact caused by the new version.

Looking closer at our logs to try to establish any patterns, what I'm seeing is that right before a server goes dead (i.e. stops doing any work), there are a handful of RedisTimeoutExceptions like these:

Exception="StackExchange.Redis.RedisTimeoutException" ExceptionDetail="StackExchange.Redis.RedisTimeoutException: Timeout awaiting response (outbound=0KiB, inbound=0KiB, 5235ms elapsed, timeout is 5000ms), inst: 0, qu: 0, qs: 1, in: 0, serverEndpoint: , mgr: 7 of 10 available, clientName: ASWorker9, IOCP: (Busy=5,Free=995,Min=1,Max=1000), WORKER: (Busy=10,Free=32757,Min=1,Max=32767), v: 2.0.571.20511 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)

bbijesh avatar bbijesh commented on June 20, 2024

We have switched back to 1.2.6 and the behavior is much better.

mscrivo avatar mscrivo commented on June 20, 2024

@robertmujica curious if you tried upgrading to 2.0.588? We went back to 1.2.6 for a while after trying 571 (and that totally made the problem go away), then tried 588, and immediately upon deployment last night the problem of dead worker processes came back. Dumps are showing that 43% of threads are blocked, with one in particular that uses SE.Redis sitting in a SpinWait.SpinOnce, which I don't recall seeing before.

mgravell avatar mgravell commented on June 20, 2024

@mscrivo if you're seeing a significant amount of SpinOnce - presumably because you're using a lot of "sync" paths under very high contention - that changes in 593, which I've pushed to NuGet (note to self: go do a release note).

The presence of a small number of threads waiting by themselves does not indicate a problem; those aren't pool threads - they are reserve threads intended to isolate the IO code from the peaks and troughs of the regular thread pool.

mgravell avatar mgravell commented on June 20, 2024

(you can also get the SpinOnce fix simply by taking pipelines.sockets.unofficial 2.0.20; all SE.Redis does here is take the later pipelines build)

mscrivo avatar mscrivo commented on June 20, 2024

The SpinOnce is coming from .NET's BlockingCollection, which uses a SemaphoreSlim... we use BlockingCollections quite a bit in our workflow. Could that be a problem?

Happy to share a memory dump or the DebugDiag report with you if you'd like to take a look. I'm admittedly not great at understanding .NET's locking mechanisms.

mscrivo avatar mscrivo commented on June 20, 2024

ok will try that new build, thanks!

mgravell avatar mgravell commented on June 20, 2024

Neither SE.Redis nor Pipelines.Sockets.Unofficial uses BlockingCollection - however, Pipelines.Sockets.Unofficial does use SpinOnce() directly in the MutexSlim code; so if the stalls you're seeing definitely have BlockingCollection in them, then I think that's coming from your code.

As for "Could that be a problem?" - all I can say is "hell yeah" - SemaphoreSlim has a landmine embedded in it: https://blog.marcgravell.com/2019/02/fun-with-spiral-of-death.html
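
For anyone who hasn't read the post, the shape of the hazard is roughly the following (illustrative only, not code from SE.Redis or from this thread): synchronous Wait() calls block thread-pool threads, while the Release() they're waiting on sits behind an async continuation that itself needs a free thread-pool thread.

using System.Threading;
using System.Threading.Tasks;

static class SemaphoreSpiralSketch
{
    static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    // Sync path: blocks a thread-pool thread until the semaphore is free.
    public static void DoWorkBlocking()
    {
        Gate.Wait();
        try { /* work */ } finally { Gate.Release(); }
    }

    // Async path: the continuation that eventually calls Release() runs on the
    // thread pool; if the pool is saturated by blocked Wait() callers above,
    // that Release() can't run, and everything stalls.
    public static async Task DoWorkAsync()
    {
        await Gate.WaitAsync();
        try { await Task.Delay(10); } finally { Gate.Release(); }
    }
}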

robertmujica avatar robertmujica commented on June 20, 2024

@mscrivo no, we kept 2.0.571. After further analysis I found StackOverflowException and OutOfMemoryException entries in the dump file, which we believe are the actual root cause of the issue we have. After I fixed the offending lines and performance testing was done, everything seemed to be working OK, so next week we are going live with this change (that is the real test). Will update this thread once I have some news from PROD.

robertmujica avatar robertmujica commented on June 20, 2024

One more thing we tried was to minimize the number of channels we use for pub/sub, but after some analysis we found no difference between listening on 8 channels or two; the number of threads was pretty much the same, so we discarded that spike.

mscrivo avatar mscrivo commented on June 20, 2024

Thanks @mgravell. I did read your blog post when you posted it - it was very informative. I guess we have to look for an alternative to BlockingCollection, given we can't control what locking mechanism it uses.

mgravell avatar mgravell commented on June 20, 2024

Ah, right - yeah, there's an important update here that I had shared on the SE.Redis repo, but hadn't mentioned here:

There was a critical race condition in MutexSlim - occasionally it screwed up and failed to notify the next competitor, which could result in perceived locks and timeouts. This is now addressed, and was the big change in SE.Redis 588; there is also a more recent 593 which makes the spin-lock a little less aggressive when contested by more than one competitor.

mgravell avatar mgravell commented on June 20, 2024

@mscrivo well, I wouldn't get too excited there without good reason, but... if you're sure you're seeing blocks from BlockingCollection, then yeah. I don't know your use case, but in most cases I've found that less is more - i.e. simple primitives like lock can be very effective in the majority of cases (where you aren't doing exotic things).

robertmujica avatar robertmujica commented on June 20, 2024

@mgravell so would you recommend updating to the 593 or the 588 version?

mgravell avatar mgravell commented on June 20, 2024

593, or I wouldn't have pushed it to nuget :)

gokhanabatay avatar gokhanabatay commented on June 20, 2024

Hi mscrivo,
I think I have the same issue; I also checked with the pre-release Pipelines.Sockets.Unofficial 2.0.31.51448.

[image]

[image]

Stack trace:
[image]

klmallory avatar klmallory commented on June 20, 2024

I'm also still having this issue (601) with batched or pipelined requests in burst-traffic situations.
We use a 9-instance, 3-VM cross-replicated cluster (no proxy). The payloads are on average 10-20 KB. The client has low CPU/memory usage and shows only moderate bandwidth usage during "sync" operations. I know the payload size is more than recommended and some latency is expected; it's just that sync and async behave so differently, and I would like to start batching requests but can't because of this issue.

This one line of code is continuously in a wait state.

[image: workloop]

Intrinsic Latency during test:

Max latency so far: 6 microseconds.
Max latency so far: 26 microseconds.
Max latency so far: 31 microseconds.
Max latency so far: 36 microseconds.
Max latency so far: 101 microseconds.
Max latency so far: 137 microseconds.
Max latency so far: 168 microseconds.
Max latency so far: 230 microseconds.
Max latency so far: 772 microseconds.

It seems like the multiplexer is waiting for other unrelated batches unnecessarily.

I have a simple console test to reproduce. The Parallel.For loops represent a burst-traffic situation on the website where we are seeing this, and they replicate the situation fairly well.
https://gist.github.com/klmallory/da06bd68b5ebef92de72cd4597880c1a

Sync results are consistent:

Sync in 164ms on Thread 16
Sync in 173ms on Thread 20
Sync in 169ms on Thread 25
Sync in 175ms on Thread 22
Sync in 172ms on Thread 17
Sync in 176ms on Thread 1
Sync in 178ms on Thread 24

Batching / Async are all over the place:

Async Pipeline in 753ms on Thread 25
Async Pipeline in 754ms on Thread 46
Async Pipeline in 753ms on Thread 22
Async Pipeline in 754ms on Thread 20
Async Pipeline in 16ms on Thread 37
Async Pipeline in 16ms on Thread 23
Async Pipeline in 15ms on Thread 24
Async Pipeline in 20985ms on Thread 22
Async Pipeline in 20001ms on Thread 46
Async Pipeline in 20983ms on Thread 29
Async Pipeline in 8ms on Thread 22
Async Pipeline in 9ms on Thread 28
Async Pipeline in 10ms on Thread 35
Async Pipeline in 12ms on Thread 41
Async Pipeline in 12ms on Thread 33
Async Pipeline in 15ms on Thread 29
Async Pipeline in 14ms on Thread 25
Async Pipeline in 15ms on Thread 34
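
For reference, not the linked gist, just the general shape of the batch path being measured (assumes an existing IDatabase):

using System.Linq;
using System.Threading.Tasks;
using StackExchange.Redis;

static class BatchSketch
{
    // Queue several gets on one batch, flush them to the connection in one go,
    // then await the individual results.
    public static async Task<string[]> GetManyAsync(IDatabase db, string[] keys)
    {
        IBatch batch = db.CreateBatch();
        Task<RedisValue>[] pending = keys.Select(k => batch.StringGetAsync(k)).ToArray();
        batch.Execute(); // nothing is sent to the server until Execute() is called
        RedisValue[] results = await Task.WhenAll(pending);
        return results.Select(v => (string)v).ToArray();
    }
}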

tiltsoftware avatar tiltsoftware commented on June 20, 2024

Hi! thanks for all the work on this and SE.Redis.

Question related to this: I am seeing the same issue, with RunWorkLoop showing up as a very high percentage of threads blocked.

My symptoms are similar: CPU spikes and very slow-loading ASP.NET MVC pages. Sometimes a page takes more than a minute doing nothing and then suddenly loads. The site does not get much traffic, and traffic does not spike when this occurs.

But I am still on SE.Redis 2.0.519, which references version 1.0.9 of this library.

Could the issue mentioned in the blog https://blog.marcgravell.com/2019/02/fun-with-spiral-of-death.html be causing my issues, considering I have the version that included those bugs?

I am currently testing SE.Redis 2.0.601, which is the current latest version on NuGet. I'm hoping that will resolve the issue I am having.

But possibly I'm doing something wrong in my implementation that's causing it, and the blocks are just showing up in my profiler coincidentally or as a related symptom rather than the root problem.

I'm also using Microsoft.SignalR.StackExchange.Redis and Microsoft.Web.RedisSessionStateProvider, which also use SE.Redis.

Thanks

tiltsoftware avatar tiltsoftware commented on June 20, 2024

"Could the issue that was mentioned in the blog https://blog.marcgravell.com/2019/02/fun-with-spiral-of-death.html be causing my issues? considering I have the version that included those bugs?" quite possibly, but impossible for us to give a hard "yes"/"no" from here

-- Regards, Marc

Thanks for the reply Marc. I went ahead and updated a bunch of dependencies so I could bring us up to the latest version of this, which will establish whether or not that was our issue. I suspect it's somewhere in our implementation, though, but it was worth doing to get everything up to date anyway.

svenclaesson avatar svenclaesson commented on June 20, 2024

Hi,

This might be of interest to someone.
We are currently experiencing these issues after upgrading to .NET Core 3.1. We had no issues when running on .NET Core 2.2.

https://i.imgur.com/cYQ92Wp.png

ahmedtolba1984 avatar ahmedtolba1984 commented on June 20, 2024

@svenclaesson
We already have the same issue after upgrading to .NET Core 3.1.
Did you solve this issue? If yes, how did you solve it?

Imjamper avatar Imjamper commented on June 20, 2024

Hi, same question, same issue.

@svenclaesson
We already have the same issue after upgrading to .NET Core 3.1.
Did you solve this issue? If yes, how did you solve it?

ikalafat avatar ikalafat commented on June 20, 2024

Hi @mgravell

Is there anything we can do to circumvent this issue other than removing all sync-over-async code? I've had a fun evening with the performance profiler (due to an unrelated issue) and noticed that 25% of CPU time from the profiling session is attributed to Pipelines.Sockets.Unofficial.DedicatedThreadPoolPipeScheduler.RunWorkLoop.

Not all Redis-related code is under my control, so I'm not sure whether I'll be able to remove all sync-over-async code.

Any other suggestions? Pipelines.Sockets.Unofficial 2.1.16 and SE.Redis 2.1.55; I'm not sure whether there have been any relevant changes in the SE.Redis lib in the meantime.

Thanks.

ThiagoT1 avatar ThiagoT1 commented on June 20, 2024

We also have traces pointing to high CPU usage in this flow. We are running MS Orleans on K8s with the RedisGrainDirectory, and immediately after the deploy, even with no load, the issue arises.

We are referencing version 2.2.88

[image]

Edit: we removed SE.Redis from the equation and the CPU stayed high. Then we found a race in our code; we're fixing it.

It's very weird that it didn't show up in the profilings, though.

rkenshin avatar rkenshin commented on June 20, 2024

Hey ThiagoT1, how did you find the race in your code? I am also using Orleans and running into similar issues; so far I can't find anything that suggests our code is at fault. Thank you
[image]

roklenardic avatar roklenardic commented on June 20, 2024

Experiencing a similar situation: the app takes 6 minutes to start, and when profiling with dotTrace this shows up:
[image]

Using NuGet StackExchange.Redis 2.6.66, and it was the same with 2.2.88. Using .NET Framework 4.8.

alex-jitbit avatar alex-jitbit commented on June 20, 2024

Seeing the same issue on .NET 6 with the latest SE.Redis version (2.6.70).

Has anyone come up with a solution?

[image]

pengweiqhca avatar pengweiqhca commented on June 20, 2024

Use SocketManager.ThreadPool rather than SocketManager.Shared:

var options = ConfigurationOptions.Parse(configuration);
options.SocketManager = SocketManager.ThreadPool; // schedule socket reads on the regular .NET thread pool instead of the dedicated worker threads
