
mg_wakeup() · mongoose · 54 comments · CLOSED

scaprile commented on May 29, 2024
mg_wakeup()

from mongoose.

Comments (54)

jvo203 commented on May 29, 2024

There is more to it than meets the eye. "/", "test.css" and "test.js" were fast local disk accesses from SSD but the other stuff was from external resources (WAN).

root "/" test.html:

<head>
    <title>FITSWEBQLSE</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">

    <!-- version 3.3.7 -->
    <!-- <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css"> -->
    <!-- <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> -->
    <!-- <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script> -->

    <!-- version 3.4.1 -->
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css"
        integrity="sha384-HSMxcRTRxnN+Bdg0JdbxYKrThecOKuH5zCYotlSAcp1+c8xmyTe9GYg1l9a69psu" crossorigin="anonymous">
    <script src="https://code.jquery.com/jquery-1.12.4.min.js"
        integrity="sha384-nvAa0+6Qg9clwYCGGPpDQLVpLNn0fRaROjHqs13t4Ggj3Ez50XnGQqc/r8MhnRDZ"
        crossorigin="anonymous"></script>
    <script src="https://stackpath.bootstrapcdn.com/bootstrap/3.4.1/js/bootstrap.min.js"
        integrity="sha384-aJ21OjlMXNL5UyIl/XNwTMqvzeRMZH2w8c5cRVpzpU8Y5bApTppSuUkhZXN0VxHd"
        crossorigin="anonymous"></script>

    <script src="//cdnjs.cloudflare.com/ajax/libs/numeral.js/2.0.6/numeral.min.js"></script>
    <link rel="stylesheet" href="test.css" />
    <script src="test.js"></script>
</head>

That's probably why libmicrohttpd seemingly has such a low transaction rate: the siege benchmark appears to count the external WAN accesses in the transaction rate as well.

cpq commented on May 29, 2024

Looking closer at the API, there is a way to do what you want with a low cost:

bool mg_wakeup_init(struct mg_mgr *);

change to

bool mg_wakeup_init(struct mg_mgr *, mg_event_handler_t fn, void *fn_data);

mg_wakeup_init(mgr, NULL, NULL) would assign the built-in wufn and behave as it does now.

jvo203 commented on May 29, 2024

The problem is more subtle. It also happens on Linux: openSUSE Tumbleweed. The other day I tested it on Fedora Linux 39 running inside Windows 11 (Fedora Remix WSL2); there it worked fine and the mg_pipe was released promptly. There is definitely something dodgy going on with event polling.

jvo203 commented on May 29, 2024

There must be a way to force the mongoose connection manager to eject pipes created / inserted into it with int mg_mkpipe(struct mg_mgr *, mg_event_handler_t, void *, bool udp);. At present it does not quite work.

Calling close(int pipe_fd) releases the file descriptor back to the OS; I've checked the numbers, and the released pipe_fd values are then allocated again in subsequent WebSocket user sessions. But the mg_mgr does not deregister the pipes upon close(int pipe_fd).

jvo203 commented on May 29, 2024

Here is a debug log. Even after calling close(session->channel) the pipe keeps getting polled by the mg_mgr every 1s (the poll interval), with the event == 2.

[C] spectrum compressed size: 32 bytes
[C] float array size: 2032, compressed: 32 bytes
[C] PIPE_END_OF_STREAM
[C] spectrum length: 508, elapsed: 2.538000 [ms], compressed_size: 32, msg_len: 52 bytes.
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 7
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[WS] {"type":"kalman_reset","seq_id":15}
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] closing a websocket connection for ALMB00130937/000711eb-5796-476a-9ef5-6309984183a0
[C] removed 000711eb-5796-476a-9ef5-6309984183a0 from the hash table
[C] deleting a session for ALMB00130937/000711eb-5796-476a-9ef5-6309984183a0
[C] ws_event_loop terminated.
[C] video_event_loop terminated.
[C] pv_event_loop terminated.
[C] closing a socket pipe channel 35 with stat = 0
WEBSOCKET CONNECTION CLOSED.
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
^C[C] Interrupt signal [2] received.
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 2
[C] mg_pipe_callback: event 9
[C] mg_pipe_callback: MG_EV_CLOSE (000711eb-5796-476a-9ef5-6309984183a0)
[C] shutting down the µHTTP daemon... done
garbage collection thread terminated.
 deleting ALMB00130937; cache dir: /Volumes/OWC/CACHE/ALMB00130937, status           0 , bSuccess T

jvo203 commented on May 29, 2024

The mystery has been solved. mg pipes are released promptly on both macOS and Linux, but only for TCP socket pipes:

session->channel = mg_mkpipe(c->mgr, mg_pipe_callback, (void *)(sessionId != NULL ? strdup(sessionId) : NULL), false);

The UDP pipes, on the other hand, created with

session->channel = mg_mkpipe(c->mgr, mg_pipe_callback, (void *)(sessionId != NULL ? strdup(sessionId) : NULL), true);

linger inside the connection list until program termination.

So with TCP pipes things work as expected but not with UDP.

cpq commented on May 29, 2024

Not sure this is relevant yet.
We're not creating pipes on demand, there is only one "global" pipe.
Closing, please reopen if still relevant.

jvo203 commented on May 29, 2024

OK, thank you. Actually, this might be relevant: a single global pipe might create a bottleneck under heavy load. I will rework my software and stress-test it for the usual 12 hours; that should tell whether one global pipe becomes a bottleneck.

All the multiple concurrent WebSocket "POSIX thread" events loops from multiple end-user connections will be writing responses nearly simultaneously to one global mongoose pipe.

With Xmas and the New Year coming up, please bear with me; I will officially be back at work on the 4th of January.

cpq commented on May 29, 2024

@jvo203 thank you!

jvo203 commented on May 29, 2024

First of all, thank you for taking the time to add the mg_wakeup() functionality.

This functionality does not easily handle what the prior mg_mkpipe() used to do. It works, but it feels a bit "hacky".

The new mg_wakeup() event handler explicitly assumes a struct mg_str being passed through the pipe:

  } else if (ev == MG_EV_WAKEUP) {
    struct mg_str *data = (struct mg_str *) ev_data;
    mg_http_reply(c, 200, "", "Result: %.*s\n", data->len, data->ptr);
  }

whereas my code has always passed a custom data structure, struct websocket_message, containing a binary payload. Hence the following MG_EV_WAKEUP handling code does not work:

case MG_EV_WAKEUP:
    {
        printf("[C] MG_EV_WAKEUP\n");
        struct websocket_message *msg = (struct websocket_message *)ev_data;

        if (msg != NULL)
        {
            printf("[C] found a WebSocket connection, sending %zu bytes.\n", msg->len);

            if (c->is_websocket && msg->len > 0 && msg->buf != NULL)
                mg_ws_send(c, msg->buf, msg->len, WEBSOCKET_OP_BINARY);

            // release memory
            if (msg->buf != NULL)
            {
                free(msg->buf);
                msg->buf = NULL;
                msg->len = 0;
            }
        }

        break;
    }

The workaround seems to be to extract the websocket_message structures from within mg_str->ptr, like this:

case MG_EV_WAKEUP:
    {
        printf("[C] MG_EV_WAKEUP\n");
        struct mg_str *data = (struct mg_str *)ev_data;

        if (data != NULL)
        {
            printf("[C] MG_EV_WAKEUP received %zu bytes.\n", data->len);

            int i, n;
            size_t offset;

            n = data->len / sizeof(struct websocket_message);

            // #ifdef DEBUG
            printf("[C] MG_EV_WAKEUP: received %d binary message(s).\n", n);
            // #endif

            for (offset = 0, i = 0; i < n; i++)
            {
                struct websocket_message *msg = (struct websocket_message *)(data->ptr + offset);
                offset += sizeof(struct websocket_message);

                // #ifdef DEBUG
                printf("[C] found a WebSocket connection, sending %zu bytes.\n", msg->len);
                // #endif
                if (msg->len > 0 && msg->buf != NULL)
                    mg_ws_send(c, msg->buf, msg->len, WEBSOCKET_OP_BINARY);

                // release memory
                if (msg->buf != NULL)
                {
                    free(msg->buf);
                    msg->buf = NULL;
                    msg->len = 0;
                }
            }
        }

        break;
    }

It is a bit "clunky" but it seems to work. By the way, since several mongoose messages might get strung together within a single MG_EV_WAKEUP event, I explicitly count the number of received payloads:

n = data->len / sizeof(struct websocket_message);

Well, at the end of the day it works; I will now stress-test it over a 12-hour period.

jvo203 commented on May 29, 2024

With an enduring 12-hour stress test still under way, having one global pipe does not cause any performance bottlenecks. So far so good; thumbs up for the new mg_wakeup() function.

jvo203 commented on May 29, 2024

Unfortunately it seems the new global pipe using mg_wakeup() and struct mg_str leaves a serious memory-leak loophole.

mg_wakeup() may return true, signalling to a separate POSIX thread that the message for a given session->conn_id has been handed over to the mongoose event loop successfully. In the meantime, however, a user might have closed the matching WebSocket connection (the one with c->conn_id).

This would mean that the mg_str, which actually points to another C structure (struct websocket_message) with its malloc'ed payload, would never be delivered to the MG_EV_WAKEUP event handler, where the payload memory is expected to be released.

The mongoose code is not aware of the fact that mg_str->ptr now points to another structure, with its associated memory buffer.

To make matters worse, inside the mg_wakeup() function the send command send(mgr->pipe, extended_buf, len + sizeof(conn_id), MSG_NONBLOCKING); loses the direct link to conn_id; it cannot tell whether or not the

const void *buf,
size_t len

buffer has actually been delivered all the way to the conn_id MG_EV_WAKEUP handler.

This scenario is not hypothetical; it is very likely to happen when separate non-mongoose threads (as in my code) call mg_wakeup() with non-string payloads (complex binary structures).

The previous point-to-point approaches (mg_mkpipe() or mg_queue) did not suffer from this problem. There were fewer intermediate steps: either my payload was inserted successfully into the mg_queue or it was not, two very clear outcomes, and either way my binary payload would be released without memory leaks. The new global mg_wakeup() breaks this clear chain of command.

I think it might be worth re-thinking the new mg_wakeup() functionality: re-design it so that it gives clear feedback on whether the payloads have been delivered all the way to the final MG_EV_WAKEUP destination corresponding to c->conn_id. Failures should be communicated back to the original caller of mg_wakeup(). Right now that is not happening.

I sincerely hope you can understand what I have been trying to say!

jvo203 commented on May 29, 2024

At the very least passing a custom pipe event handler to mg_wakeup_init() would help.

The mongoose-provided

// mg_wakeup() event handler
static void wufn(struct mg_connection *c, int ev, void *evd, void *fnd)

is not aware of non-standard end-user binary data structures being passed around. In a custom pipe event handler I could handle all the cases where t->id is not found and release the underlying custom memory.

send(mgr->pipe, extended_buf, len + sizeof(conn_id), MSG_NONBLOCKING);

would also need to be able to report on any failures.

This would be the bare minimum that can be done.

scaprile commented on May 29, 2024

Well, it is a bit over my head, but I get what you say. If a connection is closed before we check the pipe, then the data will never be freed. Perhaps we can just check the pipe before handling closures (rambling). Passed data does not necessarily come from a malloc()'ed source, so I wouldn't actively free data for non-matching connections. What do you think, @cpq ?

I can comment that the original idea is to pass data; we are using a Mongoose connection, so if you want to pass other stuff you need to push it into data and take care of popping it later. The pipe is UDP, so things will come out as they were put in.
https://mongoose.ws/documentation/#mg_wakeup

jvo203 commented on May 29, 2024

"I can comment that the original idea is to pass data"

Because the binary data can be quite large, I send pointers to the data as the "data", instead of the actual data, to avoid any extra memory allocations when messages go through the mongoose pipe and event handlers.

Otherwise, under heavy stress, pushing multiple 200 KB ~ 500 KB binary messages from various threads through a single UDP or TCP pipe would quickly become a bottleneck. Multiple memory allocations and extra memcpy calls are also not welcome under heavy load.

That's why special care needs to be taken to handle errors and closed connections all the way through, and to release the memory inside the data structure whose pointer is passed through mongoose.

jvo203 commented on May 29, 2024

Here is what I have in mind. The solution required patching the mongoose code. It still needs to be tested extensively to make sure there are no memory leaks in the cases where t->id is not found.

My application calls mg_wakeup():

        // create a queue message
        struct mg_str msg = {pv_payload, msg_len};

        // pass the message over to mongoose via a communications channel
        bool sent = mg_wakeup(session->mgr, session->conn_id, &msg, sizeof(struct mg_str)); // Wakeup event manager

        if (!sent)
        {
            printf("[C] mg_wakeup() failed.\n");

            // free memory upon a send failure, otherwise memory will be freed in the mongoose pipe event loop
            free(pv_payload);
        };

A modified mg_wakeup() mongoose function (defensive-style code):

bool mg_wakeup(struct mg_mgr *mgr, unsigned long conn_id, const void *buf, size_t len) {                
  // safety first, no need to send anything if there is no valid data
  if(len == 0 || buf == NULL)
    return false;

  if (mgr->pipe != MG_INVALID_SOCKET && conn_id > 0) {
    char *extended_buf = (char *) alloca(len + sizeof(conn_id));

    if(extended_buf == NULL)
      return false;        

    memcpy(extended_buf, &conn_id, sizeof(conn_id));  
    memcpy(extended_buf + sizeof(conn_id), buf, len);

    if(send(mgr->pipe, extended_buf, len + sizeof(conn_id), MSG_NONBLOCKING) == (ssize_t)(len + sizeof(conn_id)))
      return true;    
  }
  return false;
}

Next comes a modified mongoose mg_wakeup() event handler. It frees the underlying payload (pointed to by data.ptr) in case the destination connection has not been found. The search is also cut short with break;:

// mg_wakeup() event handler
static void wufn(struct mg_connection *c, int ev, void *evd, void *fnd) {
  if (ev == MG_EV_READ) {
    unsigned long *id = (unsigned long *) c->recv.buf;
    // MG_INFO(("Got data"));
    // mg_hexdump(c->recv.buf, c->recv.len);
    if (c->recv.len >= sizeof(*id)) {
      struct mg_str data = mg_str_n((char *) c->recv.buf + sizeof(*id), c->recv.len - sizeof(*id));
      struct mg_connection *t;
      for (t = c->mgr->conns; t != NULL; t = t->next) {
        if (t->id == *id) {          
          mg_call(t, MG_EV_WAKEUP, &data);
          data.ptr = NULL; // prevent freeing the original data 
          break; // stop searching for the connection
        }
      }
      // free the original data in case a connection was not found
      free((char*)data.ptr); // it's OK to free NULL
    }    
    c->recv.len = 0;  // Consume received data
  } else if (ev == MG_EV_CLOSE) {
    closesocket(c->mgr->pipe);         // When we're closing, close the other
    c->mgr->pipe = MG_INVALID_SOCKET;  // side of the socketpair, too
  }
  (void) evd, (void) fnd;
}

mg_call() does not require any changes; it simply invokes the event handler function with MG_EV_WAKEUP straight away.
Finally, the MG_EV_WAKEUP handling code back inside my application (note it now assumes n == 1, i.e. only one incoming message):

case MG_EV_WAKEUP:
    {
#ifdef DEBUG
        printf("[C] MG_EV_WAKEUP\n");
#endif
        struct mg_str *data = (struct mg_str *)ev_data;

        if (data != NULL)
        {
#ifdef DEBUG
            printf("[C] MG_EV_WAKEUP received %zu bytes.\n", data->len);
#endif

            if (data->ptr != NULL && data->len == sizeof(struct mg_str))
            {
                struct mg_str *msg = (struct mg_str *)data->ptr;

                if (c->is_websocket && msg->len > 0 && msg->ptr != NULL)
                {
#ifdef DEBUG
                    printf("[C] found a WebSocket connection, sending %zu bytes.\n", msg->len);
#endif
                    mg_ws_send(c, msg->ptr, msg->len, WEBSOCKET_OP_BINARY);
                }

                // release memory
                if (msg->ptr != NULL)
                {
                    free((char *)msg->ptr);
                    msg->ptr = NULL;
                    msg->len = 0;
                }
            }
        }

        break;
    }

Note the nested mg_str structures: my binary payload (the pointer and the length) is embedded within a struct mg_str, which itself gets embedded inside an outer mg_str by mongoose. This avoids large alloca() stack allocations as well as large memory copies.

jvo203 commented on May 29, 2024

Heavy stress-test results:

Sorry for the barrage of messages. After repeatedly stress-testing the new mg_wakeup() functionality, the global mg_wakeup() appears less robust than the prior point-to-point mg_mkpipe().

Under really heavy "bombardment" via WebSockets, the global mg_wakeup() solution freezes (my application freezes) without explanation. Perhaps there is a deadlock in the mongoose event loop; I don't know. Neither gdb nor valgrind gives any hint as to where the problem might lie.

The prior point-to-point mg_mkpipe() experiences no such freezes even under heavy load (there is an unresolved problem with UDP connections not being closed promptly; the TCP ones work fine). So I am inclined to abandon the global mg_wakeup() in favour of the prior point-to-point TCP mg_mkpipe() solution.

jvo203 commented on May 29, 2024

Using valgrind I've found some hints about the freeze reported an hour ago when using the global mg_wakeup() version:

[C] deleting a session for ALMA01018032/c4efcc63-198c-425e-88ed-69308f620390
==110097== Thread 1:
==110097== Invalid free() / delete / delete[] / realloc()
==110097==    at 0x4845B2C: free (vg_replace_malloc.c:985)
==110097==    by 0x41E5D0: wufn.lto_priv.0 (mongoose.c:7315)
==110097==    by 0x433256: UnknownInlinedFun (mongoose.c:1009)
==110097==    by 0x433256: iolog.lto_priv.0 (mongoose.c:6787)
==110097==    by 0x433E17: UnknownInlinedFun (mongoose.c:6977)
==110097==    by 0x433E17: mg_mgr_poll (mongoose.c:7397)
==110097==    by 0x480599: start_ws (ws.c:2453)
==110097==    by 0x407FA0: main (main.c:379)
==110097==  Address 0x18d95588 is 8 bytes inside a block of size 2,048 alloc'd
==110097==    at 0x4849E60: calloc (vg_replace_malloc.c:1595)
==110097==    by 0x4306BC: mg_iobuf_resize (mongoose.c:2961)
==110097==    by 0x433F23: UnknownInlinedFun (mongoose.c:6941)
==110097==    by 0x433F23: UnknownInlinedFun (mongoose.c:6952)
==110097==    by 0x433F23: mg_mgr_poll (mongoose.c:7397)
==110097==    by 0x480599: start_ws (ws.c:2453)
==110097==    by 0x407FA0: main (main.c:379)
==110097==

The mongoose.c line numbers do not match your GitHub version; they only match my locally patched version. As stated earlier, the prior point-to-point mg_mkpipe() code was more robust: it was easier to write error-free code against it.

jvo203 commented on May 29, 2024

Getting rid of the nested struct mg_str solves the above "freeze" problem.

// mg_wakeup() event handler
static void wufn(struct mg_connection *c, int ev, void *evd, void *fnd) {
  if (ev == MG_EV_READ) {
    unsigned long *id = (unsigned long *) c->recv.buf;
    // MG_INFO(("Got data"));
    // mg_hexdump(c->recv.buf, c->recv.len);
    if (c->recv.len >= sizeof(*id)) {
      char* ptr = (char *) c->recv.buf + sizeof(*id);      
      struct mg_str* data = (struct mg_str*)ptr;
      for (struct mg_connection * t = c->mgr->conns; t != NULL; t = t->next) {
        if (t->id == *id) {          
          mg_call(t, MG_EV_WAKEUP, data);
          data->ptr = NULL; // prevent freeing the original data 
          data->len = 0;
          break; // stop searching for the connection
        }
      }
      // free the original data in case a connection was not found
      if(data->ptr != NULL)
      {      
        free((char*)data->ptr);
        data->ptr = NULL;
        data->len = 0;
      }
    }    
    c->recv.len = 0;  // Consume received data
  } else if (ev == MG_EV_CLOSE) {
    closesocket(c->mgr->pipe);         // When we're closing, close the other
    c->mgr->pipe = MG_INVALID_SOCKET;  // side of the socketpair, too
  }
  (void) evd, (void) fnd;
}

and

case MG_EV_WAKEUP:
    {
#ifdef DEBUG
        printf("[C] MG_EV_WAKEUP\n");
#endif
        struct mg_str *msg = (struct mg_str *)ev_data;

        if (msg != NULL && msg->ptr != NULL && msg->len > 0)
        {
#ifdef DEBUG
            printf("[C] MG_EV_WAKEUP received %zu bytes.\n", msg->len);
#endif

            if (c->is_websocket)
            {
#ifdef DEBUG
                printf("[C] found a WebSocket connection, sending %zu bytes.\n", msg->len);
#endif
                mg_ws_send(c, msg->ptr, msg->len, WEBSOCKET_OP_BINARY);
            }

            // release memory
            if (msg->ptr != NULL)
            {
                free((char *)msg->ptr);
                msg->ptr = NULL;
                msg->len = 0;
            }
        }

        break;
    }

scaprile commented on May 29, 2024

Otherwise, under heavy stress, pushing multiple 200 KB ~ 500 KB binary messages from various threads through a single UDP or TCP pipe would quickly become a bottleneck. Multiple memory allocations and extra memcpy calls are also not welcome under heavy load.

Single or multiple sockets, I don't really see a difference without knowing the internals of the OS... it is a single loopback interface anyway, and it all comes down to which resources can actually be parallelized and the number of truly simultaneous threads.
But I see your point: passing pointers is faster than passing data, though then your data needs to be shareable to avoid the MMU (MPU) firing illegal-access exceptions, and data integrity kicks in, and...
And there is also Mongoose's linear search to find the connection id; it becomes more relevant with frequent small amounts of data...

cpq commented on May 29, 2024

@jvo203

  1. MG_EV_WAKEUP sends an mg_str, and it is up to you what is in it - a structure of yours or whatever. mg_str is just a piece of memory, that's it.
  2. Claiming that something in our code is more or less robust needs a backup argument. So far, all the arguments I've seen are issues in your own code.

jvo203 commented on May 29, 2024
  • Basically, the server code has to run for a long time with absolutely zero issues: for up to 12 months (in between annual electricity shutdowns due to regulatory electrical-wiring checks in Japan).
  • Therefore I am taking a "safety first" approach, better safe than sorry. There is an absolute requirement for zero bugs, zero memory leaks, zero memory problems and zero segmentation faults. Like writing Rust-style C code.
  • The claim / issue about robustness was resolved in an earlier comment: it was an issue on my side (nested mg_str etc.). Other than that, the mongoose.c code had to be modified too, as the new mg_wakeup_init() does not support passing a user-provided event handler; the prior mg_mkpipe() did.

I will try to measure the average client-side WebSocket response times under heavy load using both approaches: the global mg_wakeup() and the prior point-to-point mg_mkpipe(). This should give an objective measure to help judge which solution is better performance-wise.

cpq commented on May 29, 2024

@jvo203 thank you!

I've modified the multi-threaded example into a benchmark (attached):
main.c.txt

Build it with an increased backlog size:

make clean all CFLAGS_EXTRA="-DMG_SOCK_LISTEN_BACKLOG_SIZE=50"

Then run a benchmark. Here is a quick siege session:

siege -t3s -c10 http://localhost:8000/

{
        "transactions":                          290,
        "availability":                       100.00,
        "elapsed_time":                         3.15,
        "data_transferred":                     0.00,
        "response_time":                        0.11,
        "transaction_rate":                    92.06,
        "throughput":                           0.00,
        "concurrency":                          9.75,
        "successful_transactions":               290,
        "failed_transactions":                     0,
        "longest_transaction":                  0.12,
        "shortest_transaction":                 0.10
}

App output looks something like this:

8708a49a 2 main.c:80:timer_fn           threads: active 9, total 320; conns: active 11, total 321
8708a873 2 main.c:80:timer_fn           threads: active 7, total 410; conns: active 11, total 411
8708ac27 2 main.c:80:timer_fn           threads: active 9, total 500; conns: active 11, total 501

You can run it for a longer time and stress it more. Note: client connections may quickly exhaust ephemeral ports, so I suggest running the benchmarking tool on a different machine.

jvo203 commented on May 29, 2024

Before running your benchmark I ran my own comparison. Basically, under heavy load there is no noticeable difference between mg_wakeup and mg_mkpipe. You can find the two histograms in the attached ZIP archive mongoose.zip. The Julia stress_test.jl (+.txt) is also attached. The Julia program simulates a subset of web browser user requests.

The 1-hour stress tests were also repeated five times each but there were no differences between test runs. The tests were comparing the total response times via WebSockets. This includes both the parallel computation of the result as well as pushing the results via mongoose. The test was conducted over LAN so there was no WAN overhead.

The response-time range in the histograms is shown between 0 and 2000 ms. Some responses go over 2 s. The CPU cores were all overwhelmed by a brutal stress test, hence such long response times. In normal operation the response times are within a few milliseconds plus the WAN latency between the user's web browser and our server.

My fear was that there would be some penalty, a bottleneck, when using a single global mg_wakeup UDP pipe. But as you can see there is no negative (or positive) impact at all; essentially the same.
mongoose.zip
stress_test.jl.txt

P.S. I am aware the statistical mean is not the most suitable measure for such distributions. The median response values were also essentially the same.

scaprile commented on May 29, 2024

#2532 (comment)

Looks like sometimes my 40-year expertise serves some purpose.

Single or multiple sockets, I don't really see a difference without knowing the internals of the OS... it is a single loopback interface anyway, and it all comes down to which resources can actually be parallelized and the number of truly simultaneous threads.
But I see your point: passing pointers is faster than passing data, though then your data needs to be shareable to avoid the MMU (MPU) firing illegal-access exceptions, and data integrity kicks in, and...
And there is also Mongoose's linear search to find the connection id; it becomes more relevant with frequent small amounts of data...

cpq commented on May 29, 2024

@jvo203
Thanks for that, appreciated
Good to know that performance-wise there is little to no difference.
Resource-usage-wise, mg_wakeup() uses a single socketpair per manager, whereas mg_mkpipe() creates a socketpair per thread. That is, the number of socket descriptors grows 3x with the number of connections: 1 socket per connected client + 2 sockets for the pair. For a large number of connections, I suspect this should show up.

jvo203 commented on May 29, 2024

The siege benchmark when there is no load on the server:

Lifting the server siege...
Transactions:		         340 hits
Availability:		      100.00 %
Elapsed time:		        3.52 secs
Data transferred:	        0.00 MB
Response time:		        0.10 secs
Transaction rate:	       96.59 trans/sec
Throughput:		        0.00 MB/sec
Concurrency:		        9.70
Successful transactions:         340
Failed transactions:	           0
Longest transaction:	        0.11
Shortest transaction:	        0.10

Now together with my application stress-test running in the background:

Lifting the server siege...
Transactions:		         328 hits
Availability:		      100.00 %
Elapsed time:		        3.42 secs
Data transferred:	        0.00 MB
Response time:		        0.10 secs
Transaction rate:	       95.91 trans/sec
Throughput:		        0.00 MB/sec
Concurrency:		        9.87
Successful transactions:         328
Failed transactions:	           0
Longest transaction:	        0.11
Shortest transaction:	        0.10

cpq commented on May 29, 2024

@jvo203 Thank you!

95 requests per second... Does that mean the wakeup socketpair is not used frequently enough to exhaust its socket buffer? Can it be stressed even more? Can we get to a thousand requests per second or more?

jvo203 commented on May 29, 2024

One thing should be made clear: I did not integrate any of your modifications to the multi-threaded example into my main C / FORTRAN application (the one undergoing the heavy stress test with the Julia client). The two applications were running completely separately (your modified multi-threaded example on port 8000 and my stress test on 8080).

jvo203 commented on May 29, 2024

Here is a more interesting test. As you may recall, my application uses both libmicrohttpd to handle clients' HTTP requests on port 8080 and mongoose for WebSockets (plus some inter-node HTTP) on port 8081.

So let's see the siege results for my main C / FORTRAN application. First, with no extra server load:

libmicrohttpd on 8080:

chris@capricorn:~> siege -t3s -c10 http://0.0.0.0:8080/
Lifting the server siege...
Transactions:		        1244 hits
Availability:		      100.00 %
Elapsed time:		        3.27 secs
Data transferred:	       16.73 MB
Response time:		        0.03 secs
Transaction rate:	      380.43 trans/sec
Throughput:		        5.12 MB/sec
Concurrency:		        9.84
Successful transactions:        1244
Failed transactions:	           0
Longest transaction:	        0.11
Shortest transaction:	        0.00

and the mongoose on port 8081:

chris@capricorn:~> siege -t3s -c10 http://0.0.0.0:8081/
** SIEGE 4.1.6
** Preparing 10 concurrent users for battle.
The server is now under siege...
siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc

Transactions:		           0 hits
Availability:		        0.00 %
Elapsed time:		        1.05 secs
Data transferred:	        0.00 MB
Response time:		        0.00 secs
Transaction rate:	        0.00 trans/sec
Throughput:		        0.00 MB/sec
Concurrency:		        0.00
Successful transactions:           0
Failed transactions:	        1033
Longest transaction:	        0.00
Shortest transaction:	        0.00

This fails because I explicitly reject plain (non-WebSocket) connections on 8081: my code prints "rejecting the connection." and does mg_http_reply(c, 404, "", "Rejected");. After replacing the 404 with 200 (mg_http_reply(c, 200, "", "Testing");) we get

siege -t3s -c10 http://0.0.0.0:8081/
Lifting the server siege...
Transactions:		       51866 hits
Availability:		      100.00 %
Elapsed time:		        3.20 secs
Data transferred:	        0.35 MB
Response time:		        0.00 secs
Transaction rate:	    16208.12 trans/sec
Throughput:		        0.11 MB/sec
Concurrency:		        7.40
Successful transactions:       51866
Failed transactions:	           0
Longest transaction:	        1.02
Shortest transaction:	        0.00

Seemingly far more than libmicrohttpd, but this is not a fair comparison: Mongoose only sends "200 Testing" without any processing, whilst libmicrohttpd actually handles the HTTP request and sends out a real response with valid data from my application.

So let's see what happens during a heavy stress-test. First the client-facing libmicrohttpd part:

chris@capricorn:~> siege -t3s -c10 http://0.0.0.0:8080
Lifting the server siege...
Transactions:		        1289 hits
Availability:		      100.00 %
Elapsed time:		        3.26 secs
Data transferred:	       17.31 MB
Response time:		        0.02 secs
Transaction rate:	      395.40 trans/sec
Throughput:		        5.31 MB/sec
Concurrency:		        9.86
Successful transactions:        1289
Failed transactions:	           0
Longest transaction:	        0.32
Shortest transaction:	        0.00

and mongoose on 8081 (still replying with "200 Testing" but now also handling all the stress-test WebSocket requests fired by Julia):

chris@capricorn:~> siege -t3s -c10 http://0.0.0.0:8081/
Lifting the server siege...
Transactions:		       33247 hits
Availability:		      100.00 %
Elapsed time:		        3.92 secs
Data transferred:	        0.22 MB
Response time:		        0.00 secs
Transaction rate:	     8481.38 trans/sec
Throughput:		        0.06 MB/sec
Concurrency:		        8.26
Successful transactions:       33247
Failed transactions:	           0
Longest transaction:	        2.02
Shortest transaction:	        0.00

Not bad at all. The mongoose transaction rate went down by about 50% but is still more than sufficient. There was hardly any difference on the libmicrohttpd 8080 side, which reflects the fact that the stress test only fires client WebSocket requests (hitting mongoose), leaving libmicrohttpd untouched.

(the libmicrohttpd transaction rate actually went up under stress, most likely because the CPU frequency scales up under heavy load)

cpq avatar cpq commented on May 29, 2024

@jvo203 thank you.

Yeah, it is not exactly apples to apples, but 16 MB of total transferred data for libmicrohttpd is too low to make the test IO-bound. So the difference, I guess, is in the processing logic.

And that difference is an order of magnitude.

jvo203 avatar jvo203 commented on May 29, 2024

A sample output from siege attacking libmicrohttpd, under no CPU load. As you can see, apart from getting the root "/", there were also multiple files served from disk as well as external resources loaded from the WAN.

Contrast this against a single mg_http_reply(c, 200, "", "Testing");.

HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.06 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.08 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.08 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.08 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.01 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.06 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.06 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.01 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.06 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.06 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.01 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.06 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.06 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.06 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.06 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.03 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.06 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.06 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.06 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.03 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.06 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.06 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.06 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.06 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.08 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12745 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.06 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.02 secs:   33738 bytes ==> GET  /jquery-1.12.4.min.js
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.08 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.08 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.07 secs:   23973 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.01 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.01 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.02 secs:    4054 bytes ==> GET  /ajax/libs/numeral.js/2.0.6/numeral.min.js
HTTP/1.1 200     0.07 secs:   23949 bytes ==> GET  /bootstrap/3.4.1/css/bootstrap.min.css
HTTP/1.1 200     0.00 secs:   17577 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:    7414 bytes ==> GET  /test.js
HTTP/1.1 200     0.00 secs:     225 bytes ==> GET  /test.css
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js
HTTP/1.1 200     0.07 secs:   12719 bytes ==> GET  /bootstrap/3.4.1/js/bootstrap.min.js

Lifting the server siege...
Transactions:		        1224 hits
Availability:		      100.00 %
Elapsed time:		        3.11 secs
Data transferred:	       16.37 MB
Response time:		        0.02 secs
Transaction rate:	      393.57 trans/sec
Throughput:		        5.26 MB/sec
Concurrency:		        9.81
Successful transactions:        1224
Failed transactions:	           0
Longest transaction:	        0.25
Shortest transaction:	        0.00

jvo203 avatar jvo203 commented on May 29, 2024

It's been "bugging" me, so for completeness here is an "apples to apples" comparison with libmicrohttpd.

the libmicrohttpd part:

static enum MHD_Result http_testing(struct MHD_Connection *connection)
{
    struct MHD_Response *response;
    int ret;
    const char *okstr = "200 Testing";

    response =
        MHD_create_response_from_buffer(strlen(okstr),
                                        (void *)okstr,
                                        MHD_RESPMEM_PERSISTENT);
    if (NULL != response)
    {
        ret =
            MHD_queue_response(connection, MHD_HTTP_OK,
                               response);
        MHD_destroy_response(response);

        return ret;
    }
    else
        return MHD_NO;
};

and

     // ...

    // static resources
    if (url[strlen(url) - 1] != '/')
        return serve_file(connection, url);
    else
    {
        // root document: benchmark short-circuit, reply "200 Testing"
        // immediately; the serve_file() calls below are intentionally
        // unreachable for this test
        return http_testing(connection);
        if (options.local)
            return serve_file(connection, "/local.html");
        else
            return serve_file(connection, "/test.html");
    }

    return http_not_found(connection);

under no stress yields

HTTP/1.1 200     0.01 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Transactions:                  94401 hits
Availability:                 100.00 %
Elapsed time:                   3.37 secs
Data transferred:               0.99 MB
Response time:                  0.00 secs
Transaction rate:           28012.17 trans/sec
Throughput:                     0.29 MB/sec
Concurrency:                    6.20
Successful transactions:       94402
Failed transactions:               0
Longest transaction:            0.02
Shortest transaction:           0.00

cpq avatar cpq commented on May 29, 2024

@jvo203 Thanks!

Are you saying that side-by-side, libmicrohttpd beats Mongoose almost 2x, serving a simple response?
Mongoose is not built for performance, but I am very sceptical about that result.
I expect benchmark results about the same.

jvo203 avatar jvo203 commented on May 29, 2024

I am just the messenger reporting the numbers, but yes, that is what they are saying. Nearly 2x. This time it was a completely fair comparison, serving the same simple response and nothing else: no WAN resources, no mg_wakeup() to worry about, no WebSocket accesses going on in the background, etc.

libmicrohttpd was listening on port 8080 and mongoose on 8081, separate from each other but running inside a single application.

libmicrohttpd was started with the following options:

void start_http()
{
    signal(SIGPIPE, SIG_IGN); // ignore SIGPIPE

    http_server = MHD_start_daemon(MHD_USE_AUTO | MHD_USE_INTERNAL_POLLING_THREAD | MHD_USE_ERROR_LOG | MHD_USE_ITC | MHD_USE_TURBO,
                                   options.http_port,
                                   &on_client_connect,
                                   NULL,
                                   &on_http_connection,
                                   PAGE,
                                   MHD_OPTION_END);

    if (http_server == NULL)
    {
        printf("[C] Could not start a libmicrohttpd web server.\n");
        return;
    }
    else
    {
#ifdef SHARE
        int rc = sqlite3_open_v2(SHARE "/splatalogue_v3.db", &splat_db, SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX, NULL);
#else
        int rc = sqlite3_open_v2("splatalogue_v3.db", &splat_db, SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX, NULL);
#endif

        if (rc)
        {
            fprintf(stderr, "[C] Can't open local splatalogue database: %s\n", sqlite3_errmsg(splat_db));
            sqlite3_close(splat_db);
            splat_db = NULL;
        }

#ifdef DEBUG
        printf("[C] µHTTP daemon listening on port %" PRIu16 "... Press CTRL-C to stop it.\n", options.http_port);
#endif
    }
};

and mongoose:

void start_ws()
{
    char url[256] = "";
    sprintf(url, "ws://0.0.0.0:%d", options.ws_port);

    struct mg_mgr mgr; // Event manager

    mg_mgr_init(&mgr); // Initialise event manager

#ifdef DEBUG
    mg_log_set(MG_LL_DEBUG);
    printf("Starting WS listener on %s\n", url);
#endif

    mg_http_listen(&mgr, url, mg_http_ws_callback, NULL); // Create HTTP listener
    mg_wakeup_init(&mgr);                                 // Initialise wakeup socket pair

    while (s_received_signal == 0)
        mg_mgr_poll(&mgr, 1000); // Event loop. Use 1000ms poll interval

    mg_mgr_free(&mgr);
}

They are started one after another: first the HTTP server in non-blocking mode in the background, then the blocking mongoose loop.
main.c:

// ...
    printf("Browser URL: http://localhost:%" PRIu16 "\n", options.http_port);
    printf("*** To quit FITSWEBQLSE press Ctrl-C from the command-line terminal or send SIGINT. ***\n");

    // Ctrl-C signal handler
    signal(SIGTERM, signal_handler);
    signal(SIGINT, signal_handler);

    start_http();

    // a blocking mongoose websocket server
    start_ws();

    stop_http();
// ...

Hard to believe or not, I'm just reporting the numbers, no more, no less!

cpq avatar cpq commented on May 29, 2024

@jvo203 thanks.

I did not believe that.
I've just downloaded libmicrohttpd, version 0.9.77. I am using Mac 14.1
Built a minimal example, used https://github.com/rboulton/libmicrohttpd/blob/master/src/examples/minimal_example.c
Then built Mongoose examples/http-restful-server with a small patch:
x.diff.txt

Used this command to build and run:

make CFLAGS_EXTRA=-DMG_SOCK_LISTEN_BACKLOG_SIZE=128 clean all

So now this is apples to apples.
Ran siege -t3s -c10 http://localhost:8000 against both.
Result:

Mongoose

{
	"transactions":			       31441,
	"availability":			      100.00,
	"elapsed_time":			        3.17,
	"data_transferred":		        0.12,
	"response_time":		        0.00,
	"transaction_rate":		     9918.30,
	"throughput":			        0.04,
	"concurrency":			        9.83,
	"successful_transactions":	       31441,
	"failed_transactions":		           0,
	"longest_transaction":		        0.49,
	"shortest_transaction":		        0.00
}

libmicrohttpd

{
	"transactions":			       32748,
	"availability":			      100.00,
	"elapsed_time":			        3.65,
	"data_transferred":		        2.81,
	"response_time":		        0.00,
	"transaction_rate":		     8972.05,
	"throughput":			        0.77,
	"concurrency":			        8.13,
	"successful_transactions":	       32748,
	"failed_transactions":		           0,
	"longest_transaction":		        0.39,
	"shortest_transaction":		        0.00
}

Summary: 31441 vs 32748 transactions; Mongoose is slightly slower, nowhere near 2x.

Then I modified the Mongoose code even more, removing the formatted print and sending a predefined string, just like the libmicrohttpd example does:

    const char response[] = "HTTP/1.0 200 OK\r\nContent-Length: 4\r\n\r\nok\r\n";
    mg_send(c, response, sizeof(response) - 1);

And with that, the benchmark shows

{
	"transactions":			       32253,
	"availability":			      100.00,
	"elapsed_time":			        3.40,
	"data_transferred":		        0.12,
	"response_time":		        0.00,
	"transaction_rate":		     9486.18,
	"throughput":			        0.04,
	"concurrency":			        9.81,
	"successful_transactions":	       32253,
	"failed_transactions":		           0,
	"longest_transaction":		        0.54,
	"shortest_transaction":		        0.00
}

So, 32253 vs 32748.

So yeah, it was hard to believe your numbers. I believe mine.

jvo203 avatar jvo203 commented on May 29, 2024

One thing to note: libmicrohttpd uses different polling mechanisms on Linux and macOS. The "MHD_USE_AUTO" flag automatically selects an appropriate polling backend, and I believe the choice differs between the two systems. I was testing it on Linux.

Another thing I notice is that the stock minimal_example.c from libmicrohttpd probably does not use the fastest possible daemon flags; among other things it enables a slow debug option:

d = MHD_start_daemon (// MHD_USE_SELECT_INTERNALLY | MHD_USE_DEBUG | MHD_USE_POLL,
			MHD_USE_SELECT_INTERNALLY | MHD_USE_DEBUG,
			// MHD_USE_THREAD_PER_CONNECTION | MHD_USE_DEBUG | MHD_USE_POLL,
			// MHD_USE_THREAD_PER_CONNECTION | MHD_USE_DEBUG,
                        atoi (argv[1]),
                        NULL, NULL, &ahc_echo, PAGE,
			MHD_OPTION_CONNECTION_TIMEOUT, (unsigned int) 120,
			MHD_OPTION_END);

whereas my code likely used faster (or so I hoped!!!) daemon start-up flags. What happens if you replace the stock options with the flags I used (MHD_USE_AUTO | MHD_USE_INTERNAL_POLLING_THREAD | MHD_USE_ERROR_LOG | MHD_USE_ITC | MHD_USE_TURBO)? And certainly do not use MHD_USE_DEBUG, which might slow down libmicrohttpd.

By the way, is there any reason why you are comparing "successful_transactions" instead of "transaction_rate"? The elapsed times are different; they are not exactly 3 s.

OK, I might try to compile and run libmicrohttpd and mongoose as separate programs, not in a single application. It may well be that replacing mg_printf() with mg_send() (as you did) will speed up mongoose.

jvo203 commented on May 29, 2024

OK, I've kept the same slow but safe debug flags that you used with the libmicrohttpd minimal example.

static enum MHD_Result http_testing(struct MHD_Connection *connection)
{
    struct MHD_Response *response;
    enum MHD_Result ret;
    const char *okstr = "200 Testing";

    response =
        MHD_create_response_from_buffer(strlen(okstr),
                                        (void *)okstr,
                                        MHD_RESPMEM_PERSISTENT);
    if (NULL != response)
    {
        ret =
            MHD_queue_response(connection, MHD_HTTP_OK,
                               response);
        MHD_destroy_response(response);

        return ret;
    }
    else
        return MHD_NO;
}

static int
http_callback(void *cls,
              struct MHD_Connection *connection,
              const char *url,
              const char *method,
              const char *version,
              const char *upload_data, size_t *upload_data_size, void **ptr)
{
    (void)cls;         // silence gcc warnings
    (void)upload_data; // silence gcc warnings
    (void)version;     // silence gcc warnings

    static int aptr;

    if (0 != strcmp(method, "GET"))
        return MHD_NO; /* unexpected method */
    if (&aptr != *ptr)
    {
        /* do never respond on first call */
        *ptr = &aptr;
        return MHD_YES;
    }
    *ptr = NULL; /* reset when done */

    return http_testing(connection);
}

d = MHD_start_daemon( // MHD_USE_SELECT_INTERNALLY | MHD_USE_DEBUG | MHD_USE_POLL,
        MHD_USE_SELECT_INTERNALLY | MHD_USE_DEBUG,
        // MHD_USE_THREAD_PER_CONNECTION | MHD_USE_DEBUG | MHD_USE_POLL,
        // MHD_USE_THREAD_PER_CONNECTION | MHD_USE_DEBUG,
        port1,
        NULL, NULL, &http_callback, PAGE,
        MHD_OPTION_CONNECTION_TIMEOUT, (unsigned int)120,
        MHD_OPTION_END);

Apparently there are significant differences between macOS and Linux.

in Intel macOS 14.2.1:

siege -t3s -c10 http://localhost:8000

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       16336 hits
Availability:		      100.00 %
Elapsed time:		        3.95 secs
Data transferred:	        0.17 MB
Response time:		        0.00 secs
Transaction rate:	     4135.70 trans/sec
Throughput:		        0.04 MB/sec
Concurrency:		        4.66
Successful transactions:       16336
Failed transactions:	           0
Longest transaction:	        0.30
Shortest transaction:	        0.00

in openSUSE Tumbleweed Linux (a completely different Intel x86_64 server machine):

siege -t3s -c10 http://0.0.0.0:8000

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       94626 hits
Availability:		      100.00 %
Elapsed time:		        3.30 secs
Data transferred:	        0.99 MB
Response time:		        0.00 secs
Transaction rate:	    28674.54 trans/sec
Throughput:		        0.30 MB/sec
Concurrency:		        6.83
Successful transactions:       94626
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

Switching to supposedly faster non-debug flags does not seem to speed up libmicrohttpd on macOS:

d = MHD_start_daemon(MHD_USE_AUTO | MHD_USE_INTERNAL_POLLING_THREAD | MHD_USE_ERROR_LOG | MHD_USE_ITC | MHD_USE_TURBO,
                         port1,
                         NULL, NULL, &http_callback, PAGE,
                         MHD_OPTION_CONNECTION_TIMEOUT, (unsigned int)120,
                         MHD_OPTION_END);

Intel macOS (two trials):

siege -t3s -c10 http://localhost:8000

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       16354 hits
Availability:		      100.00 %
Elapsed time:		        3.99 secs
Data transferred:	        0.17 MB
Response time:		        0.00 secs
Transaction rate:	     4098.75 trans/sec
Throughput:		        0.04 MB/sec
Concurrency:		        4.48
Successful transactions:       16354
Failed transactions:	           0
Longest transaction:	        0.29
Shortest transaction:	        0.00

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       16350 hits
Availability:		      100.00 %
Elapsed time:		        3.57 secs
Data transferred:	        0.17 MB
Response time:		        0.00 secs
Transaction rate:	     4579.83 trans/sec
Throughput:		        0.05 MB/sec
Concurrency:		        5.05
Successful transactions:       16350
Failed transactions:	           0
Longest transaction:	        0.31
Shortest transaction:	        0.00

openSUSE Tumbleweed Linux (two runs): if anything, the performance decreased slightly, but it is still significantly higher than on macOS:

siege -t3s -c10 http://0.0.0.0:8000

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       91883 hits
Availability:		      100.00 %
Elapsed time:		        3.32 secs
Data transferred:	        0.96 MB
Response time:		        0.00 secs
Transaction rate:	    27675.60 trans/sec
Throughput:		        0.29 MB/sec
Concurrency:		        6.04
Successful transactions:       91883
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       89949 hits
Availability:		      100.00 %
Elapsed time:		        3.27 secs
Data transferred:	        0.94 MB
Response time:		        0.00 secs
Transaction rate:	    27507.34 trans/sec
Throughput:		        0.29 MB/sec
Concurrency:		        6.28
Successful transactions:       89949
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

As I said, libmicrohttpd behaves differently on macOS and Linux; it likely uses different event polling mechanisms. Either that, or something else is going on that I cannot explain. Neither libmicrohttpd nor mongoose was created by me; I'm just an end-user, faithfully reporting the numbers.

jvo203 commented on May 29, 2024

P.S. A different Linux machine running CentOS Stream 9 yields (two trials):

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       72550 hits
Availability:		      100.00 %
Elapsed time:		        3.88 secs
Data transferred:	        0.76 MB
Response time:		        0.00 secs
Transaction rate:	    18698.45 trans/sec
Throughput:		        0.20 MB/sec
Concurrency:		        9.26
Successful transactions:       72551
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       70417 hits
Availability:		      100.00 %
Elapsed time:		        3.62 secs
Data transferred:	        0.74 MB
Response time:		        0.00 secs
Transaction rate:	    19452.21 trans/sec
Throughput:		        0.20 MB/sec
Concurrency:		        9.28
Successful transactions:       70417
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

As you can see, there is a rather wide variation in results when running libmicrohttpd; it all depends on which OS is used, and there is a large variation even across Linux distributions.

cpq commented on May 29, 2024

@jvo203 thank you, appreciate the effort!

Would it be possible for you to build and share a static Linux x86-64 libmicrohttpd binary, with the fastest flags possible?
We'd like to take it and run stress tests on our Linux machines, comparing against mongoose.

jvo203 commented on May 29, 2024

I've tried compiling with the -static flag (which requires installing glibc-devel-static), but unfortunately openSUSE does not provide a static version of libmicrohttpd. It only provides an optimized shared version:

sudo zypper search microhttpd
Loading repository data...
Reading installed packages...

S | Name                | Summary                               | Type
--+---------------------+---------------------------------------+---------
i | libmicrohttpd-devel | Small, embeddable HTTP server library | package
i | libmicrohttpd12     | Small, embeddable http server library | package

 ls -l /usr/lib64/libmicrohttpd*
lrwxrwxrwx 1 root root     24  8月 13 02:18 /usr/lib64/libmicrohttpd.so -> libmicrohttpd.so.12.61.0
lrwxrwxrwx 1 root root     24  8月 13 02:18 /usr/lib64/libmicrohttpd.so.12 -> libmicrohttpd.so.12.61.0
-rwxr-xr-x 1 root root 170040  8月 13 02:18 /usr/lib64/libmicrohttpd.so.12.61.0

OS: openSUSE Tumbleweed

CC:

cc --version
cc (SUSE Linux) 13.2.1 20231130 [revision 741743c028dc00f27b9c8b1d5211c1f602f2fddd]

The source code:
main.c.txt

Compilation flags:

cc -Wall -Ofast -march=native -mtune=native -static `pkg-config --cflags libmicrohttpd` `pkg-config --libs libmicrohttpd` -o test main.o

So I cannot compile a static version of the test program. By the way, the fastest flag combination on openSUSE Tumbleweed is MHD_USE_SELECT_INTERNALLY | MHD_USE_TCP_FASTOPEN | MHD_USE_TURBO:

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       97987 hits
Availability:		      100.00 %
Elapsed time:		        3.45 secs
Data transferred:	        1.03 MB
Response time:		        0.00 secs
Transaction rate:	    28402.03 trans/sec
Throughput:		        0.30 MB/sec
Concurrency:		        6.57
Successful transactions:       97987
Failed transactions:	           0
Longest transaction:	        0.02
Shortest transaction:	        0.00

Replacing MHD_USE_SELECT_INTERNALLY with MHD_USE_THREAD_PER_CONNECTION badly affects performance (as it should: there is the extra overhead of launching threads just to serve a simple string):

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       80282 hits
Availability:		      100.00 %
Elapsed time:		        3.97 secs
Data transferred:	        0.84 MB
Response time:		        0.00 secs
Transaction rate:	    20222.17 trans/sec
Throughput:		        0.21 MB/sec
Concurrency:		        8.09
Successful transactions:       80283
Failed transactions:	           0
Longest transaction:	        0.01
Shortest transaction:	        0.00

So it looks like the performance of libmicrohttpd is rather sensitive to two factors: i) the OS (macOS or Linux) and ii) the event polling and/or threading mode, as explained at https://www.gnu.org/software/libmicrohttpd/ .

Even my super-fast M1 Ultra Mac Studio at home gets a measly 4259 trans/sec with MHD_USE_SELECT_INTERNALLY | MHD_USE_TCP_FASTOPEN | MHD_USE_TURBO. A shame, isn't it?

M1 Ultra Mac Studio:

HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /
HTTP/1.1 200     0.00 secs:      11 bytes ==> GET  /

Lifting the server siege...
Transactions:		       16354 hits
Availability:		      100.00 %
Elapsed time:		        3.84 secs
Data transferred:	        0.17 MB
Response time:		        0.00 secs
Transaction rate:	     4258.85 trans/sec
Throughput:		        0.04 MB/sec
Concurrency:		        4.17
Successful transactions:       16354
Failed transactions:	           0
Longest transaction:	        0.49
Shortest transaction:	        0.00

from mongoose.

cpq commented on May 29, 2024

@jvo203 thanks for your time, appreciated.

Anyway, what do we do with this issue? Close it?

jvo203 commented on May 29, 2024

Before closing it, there is one minor change on the mongoose side that would be rather helpful. Right now there is no way to pass a custom wakeup handler to mg_wakeup_init(). This means that anyone with non-standard content inside mg_str (e.g. data structures that require manual memory freeing in the MG_EV_WAKEUP event handler) has only one option: to modify mongoose.c directly.

Edit: a clarification: I am calling for a custom user-defined handler to replace

// mg_wakeup() event handler
static void wufn(struct mg_connection *c, int ev, void *evd, void *fnd)

The other part - MG_EV_WAKEUP - is already being handled on the user side.

cpq commented on May 29, 2024

@jvo203 I do not agree that modification is required:

if (ev == MG_EV_WAKEUP) {
  struct mg_str *data = ev_data;
  struct foo *foo = (struct foo *) data->ptr;  // data contains struct foo.
  if (data->len == sizeof(*foo)) {
    // Success, use foo->whatever
  } else {
    // Error
  }
}

jvo203 commented on May 29, 2024

I cannot speak for other mongoose users but at least in my case the default

// mg_wakeup() event handler
static void wufn(struct mg_connection *c, int ev, void *evd, void *fnd)

cannot be used "as is", since, to reduce internal memory copying, I push pointers to payloads through mg_wakeup() instead of the payloads themselves.

Here is what the default wufn() handler does inside:

if (c->recv.len >= sizeof(*id)) {
      struct mg_connection *t;
      for (t = c->mgr->conns; t != NULL; t = t->next) {
        if (t->id == *id) {
          struct mg_str data = mg_str_n((char *) c->recv.buf + sizeof(*id),
                                        c->recv.len - sizeof(*id));
          mg_call(t, MG_EV_WAKEUP, &data);
        }
      }
    }

And here is my version of wufn():

if (c->recv.len >= sizeof(*id)) {
      char* ptr = (char *) c->recv.buf + sizeof(*id);      
      struct mg_str* data = (struct mg_str*)ptr;
      for (struct mg_connection * t = c->mgr->conns; t != NULL; t = t->next) {
        if (t->id == *id) {          
          mg_call(t, MG_EV_WAKEUP, data);
          data->ptr = NULL; // prevent freeing the original data 
          data->len = 0;
          break; // stop searching for the connection
        }
      }
      // free the original data in case a connection was not found
      if(data->ptr != NULL)
      {      
        free((char*)data->ptr);
        data->ptr = NULL;
        data->len = 0;
      }
    }    

Your code uses struct mg_str data, mine uses struct mg_str *data: a subtle difference. In addition, my code
i) releases the underlying memory if a destination connection cannot be found and
ii) breaks out of a search loop if a destination connection has been found.

Without i) there would be occasional memory leaks (and I have encountered them in the real life when testing with valgrind).

The last point ii) does not matter for a small number of connections, but it helps if there are a thousand or so connections to loop through. Of course, a binary search tree or a hash table indexing mongoose connections via t->id would be much better than the current linear search.

As to the handler for "if (ev == MG_EV_WAKEUP) {", this is already being handled inside a user code (on my side) so there are no problems here.

So to re-cap, a custom handler for

// mg_wakeup() event handler
static void wufn(struct mg_connection *c, int ev, void *evd, void *fnd)

is needed in this particular case. Other mongoose users might or might not have similar special circumstances.

cpq commented on May 29, 2024

@jvo203 I am sorry, but what you wrote just does not make any sense to me.
I provided an example where you can send a structure as a payload.
Likewise, you can send a pointer to structure. So the payload will be sizeof(void *).
For the pointer, I've provided an example earlier. Did you actually look at it?

jvo203 commented on May 29, 2024
  • With all due respect, I think we just need to agree to disagree.

  • "Likewise, you can send a pointer to structure. So the payload will be sizeof(void *).
    For the pointer, I've provided an example earlier. Did you actually look at it?"

I am not quite sure which pointer example you are referring to. If you mean this

if (ev == MG_EV_WAKEUP) {
  struct mg_str *data = ev_data;
  struct foo *foo = (struct foo *) data->ptr;  // data contains struct foo.
  if (data->len == sizeof(*foo)) {
    // Success, use foo->whatever
  } else {
    // Error
  }
}

then this part refers to the client-side MG_EV_WAKEUP handler: I have no issue with that part. My issue is to do with the wakeup pipe event handler, not the final MG_EV_WAKEUP event handler.

The misunderstanding probably stems from my non-standard use of mongoose (not the way it was originally envisaged). So no, I do not follow your multi-threaded example at https://github.com/cesanta/mongoose/blob/master/examples/multi-threaded/main.c . This is what I do instead:

  • Start with a binary WebSocket message. Not a structure, just some binary bits and pieces held together in a single character array:
size_t msg_len = sizeof(float) + sizeof(uint32_t) + sizeof(uint32_t) + read_offset + padding;
char *image_payload = malloc(msg_len);
  • Fill in the binary data:
if (image_payload != NULL)
    {
        float ts = resp->timestamp;
        uint32_t id = resp->seq_id;
        uint32_t msg_type = 2;
        // 0 - spectrum, 1 - viewport,
        // 2 - image, 3 - full spectrum refresh,
        // 4 - histogram

        size_t ws_offset = 0;

        memcpy((char *)image_payload + ws_offset, &ts, sizeof(float));
        ws_offset += sizeof(float);

        memcpy((char *)image_payload + ws_offset, &id, sizeof(uint32_t));
        ws_offset += sizeof(uint32_t);

        memcpy((char *)image_payload + ws_offset, &msg_type, sizeof(uint32_t));
        ws_offset += sizeof(uint32_t);

        // fill-in the content up to pixels/mask
        write_offset = sizeof(uint32_t) + flux_len + 7 * sizeof(float) + 2 * sizeof(uint32_t) + sizeof(uint32_t) + pixels_len + sizeof(uint32_t) + mask_len;
        memcpy((char *)image_payload + ws_offset, buf, write_offset);
        ws_offset += write_offset;

        // add an optional padding
        // ...

        // and the histogram
        // ...

        // et cetera
        // ...
  • Because the image_payload pointer does not carry information about its length, it is wrapped within a mongoose struct mg_str:
// create a queue message
struct mg_str msg = {image_payload, msg_len};
  • Push struct mg_str through mongoose:
// pass the message over to mongoose via a communications channel
bool sent = mg_wakeup(session->mgr, session->conn_id, &msg, sizeof(struct mg_str)); // Wakeup event manager
  • Finally free the memory in case of a send error:
if (!sent)
        {
            printf("[C] mg_wakeup() failed.\n");

            // free memory upon a send failure, otherwise memory will be freed in the mongoose pipe event loop
            free(image_payload);
        };
  • Upon the send success the memory will be released either in the MG_EV_WAKEUP event handler or, if a destination WebSocket connection has not been found, inside the wakeup pipe event handler (that's why I keep overriding the default mongoose pipe event handler inside mongoose.c).

It's not an over-engineered piece of code, it's just a complex real-life application engineered with memory safety in mind (no leaks allowed). If you can propose an easier / better / simpler way of doing things then I am certainly "all ears".

EDIT: P.S. Before you ask why I don't simply do as originally intended by mongoose

bool sent = mg_wakeup(session->mgr, session->conn_id, image_payload, msg_len);

the answer is "to avoid having mg_wakeup() mem-copy large payloads". So now, instead of memcpy-ing, say, 1MB of data, memcpy only has to copy sizeof(struct mg_str), which is the 64-bit pointer (8 bytes) plus another 8 bytes for the size_t len. Plus the extra 8 bytes for the unsigned long conn_id:

bool mg_wakeup(struct mg_mgr *mgr, unsigned long conn_id, const void *buf, size_t len) { 
// ...
    memcpy(extended_buf, &conn_id, sizeof(conn_id));  
    memcpy(extended_buf + sizeof(conn_id), buf, len);
// ...
} 

cpq commented on May 29, 2024

@jvo203 thank you!

Well the idea is simple.

  1. Another thread can send data to Mongoose using mg_wakeup(..., buf, len)
  2. That data could be anything. It could be a structure. Or it could be a pointer. Whatever!
  3. The receiving side, MG_EV_WAKEUP, receives that data as a chunk of memory: struct mg_str *data = ev_data. Here we use struct mg_str because it allows us to express buf, len as a single variable. What that struct mg_str holds, only the sender and receiver know.
  4. So if sender sends a structure struct foo foo = {0}; mg_wakeup(..., &foo, sizeof(foo)), then a receiver can cast to structure: struct foo *foo = (struct foo *) data->ptr
  5. If sender sends a pointer to structure struct foo *foo = malloc(123); mg_wakeup(..., &foo, sizeof(foo)), then a receiver can cast too: struct foo *foo = *(struct foo **) data->ptr

Does that make sense?

jvo203 commented on May 29, 2024

Yes what you say makes sense and I don't have an issue with that. This is what I do in my code.

The point I have been trying to make in the above messages is that, in my particular case, there is user-allocated memory that needs to be deallocated inside the mongoose.c wufn() when a destination connection t->id is not found: if t->id is not found, the MG_EV_WAKEUP handler (mg_call(t, MG_EV_WAKEUP, data);) is never called, the user-allocated memory is never released, and there is a memory leak.

As per your own words

What that struct mg_str holds, only the sender and receiver know.

therefore the default wufn() provided by mongoose is not aware of the need to deallocate the contents of whatever struct mg_str might be holding.

Consequently it makes sense to let mongoose users provide their own custom version of wufn() without the need to modify mongoose.c. That's all I've been trying to convey.

cpq commented on May 29, 2024

@jvo203

The sender and receiver know what data they exchange. If sender sends an allocated pointer, a receiver can free it.

I really don't understand why you keep mentioning wufn(). Please forget about it. Sender sends some data, receiver receives it, full stop. It's their business what the data IS, and whether it should be deallocated or not. Mongoose has nothing to do with it.

jvo203 commented on May 29, 2024

Sender sends some data, receiver receives it, full stop.

No, in my humble opinion the above statement is not 100% factually correct: the receiver does not ALWAYS receive the data. Inside wufn() there is a loop searching for the receiver by t->id:

for (t = c->mgr->conns; t != NULL; t = t->next) {
        if (t->id == *id)

What happens if the receiver t->id is not found in the connection list? The receiver will not receive the data, and therefore the data originally allocated by the sender WILL NOT be deallocated by the receiver. There will be a memory leak.

This is the sole reason why I keep obsessing about wufn(). wufn() sits right in the middle between the sender and the receiver. It's a kingmaker!

cpq commented on May 29, 2024

@jvo203 that's a valid point

One thing: we won't make wufn visible or accessible or customizable by the user. It'll just kill the simplicity.

Want to receive messages 100%? Make a dedicated listener and send stuff to it.

jvo203 commented on May 29, 2024

We are on the same wavelength :-) !

we won't make wufn visible or accessible or customizable by the user.

I see. Then in the short term I will stick to over-riding the default wufn() inside mongoose.c.

Want to receive messages 100% ? Make a dedicated listener and send stuff to it.

Thank you, longer-term I will think about using a dedicated listener, as per your suggestion.

jvo203 commented on May 29, 2024

Yep, this is the best way.

P.S. Either way, a modification to mongoose.c is required on my side: either directly overwriting wufn() or modifying mg_wakeup_init() as per your suggestion. That's OK, though it would have been even better if the new bool mg_wakeup_init(struct mg_mgr *, mg_event_handler_t fn, void *fn_data); could make it into the official mongoose.c file at some point in the future.
