jbaldwin / liblifthttp
Safe and easy to use C++17 HTTP client library.
License: Other
Also change the benchmark process to use <getopt.h>
so it's easier to pass command line arguments properly.
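A minimal sketch of what the <getopt.h>-based parsing could look like; the flag names and the benchmark_options struct are assumptions for illustration, not the benchmark's actual interface:

```cpp
#include <getopt.h> // POSIX getopt()
#include <cstddef>
#include <cstdlib>
#include <string>

// Hypothetical options for the benchmark; defaults are placeholders.
struct benchmark_options
{
    std::size_t threads{1};
    std::size_t connections{100};
    std::string url{"http://localhost:80/"};
};

inline benchmark_options parse_args(int argc, char** argv)
{
    benchmark_options opts{};
    int opt{0};
    // "t:" / "c:" / "u:" each take a required argument delivered via optarg.
    while ((opt = getopt(argc, argv, "t:c:u:")) != -1)
    {
        switch (opt)
        {
            case 't': opts.threads = std::strtoul(optarg, nullptr, 10); break;
            case 'c': opts.connections = std::strtoul(optarg, nullptr, 10); break;
            case 'u': opts.url = optarg; break;
            default: break; // unknown flag: getopt() already printed a warning
        }
    }
    return opts;
}
```

Invocation would then look like `./lift_benchmark -t 4 -c 100 -u http://localhost:80/`.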
Performance benchmark/test: this should theoretically provide some major speed benefits due to the contiguous memory of the underlying std::vector data structure backing the std::priority_queue.
Release the RequestHandle for the event loop to do its work.
Re-capture the RequestHandle when calling the IRequestCb().
How to deal with the Request proxy object and its pool? Should the event loop own the pool? The problem being the Request object holds the reference to the pool that owns it.
The client is not required to make synchronous calls, but is required to make async calls.
The original reason for this is to set the thread's priority level to a negative niceness, gotta run fast!
The curl_slist_append() function does a double malloc + copy. A vector of curl_slist objects that point into the header buffer would be far superior and should still work with the CURL* object.
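A sketch of that optimization, assuming nothing about lift's internals: own each header string once, then link curl_slist nodes stored in a vector directly at those strings, so no per-header malloc + copy happens. The curl_slist below is a stand-in with libcurl's public layout so the sketch is self-contained; real code would include <curl/curl.h> and hand the head to CURLOPT_HTTPHEADER as usual.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Stand-in matching libcurl's public layout; real code includes <curl/curl.h>.
struct curl_slist
{
    char* data;
    curl_slist* next;
};

class header_list
{
public:
    void add(std::string header) { m_strings.push_back(std::move(header)); }

    // Build (or rebuild) the node links. Call only after all add()s, since
    // growing either vector would invalidate the node/data pointers.
    curl_slist* head()
    {
        m_nodes.clear();
        m_nodes.reserve(m_strings.size());
        for (auto& s : m_strings)
        {
            m_nodes.push_back(curl_slist{const_cast<char*>(s.c_str()), nullptr});
        }
        for (std::size_t i = 1; i < m_nodes.size(); ++i)
        {
            m_nodes[i - 1].next = &m_nodes[i];
        }
        return m_nodes.empty() ? nullptr : m_nodes.data();
    }

private:
    std::vector<std::string> m_strings{};
    std::vector<curl_slist> m_nodes{};
};
```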
gcc:
clang:
Optional settings perhaps can be turned on with a cmake flag?
It appears different compilers will have incomplete types because the timeout functions require a duration<> parameter. Switching them to use std::chrono::milliseconds resolves this issue since the functions are no longer templates.
The same problem is occurring with the templated Get/Set UserData. Switching them to void* and forcing the user to cast is an easy solution -- but not type safe at all.
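A minimal sketch of the non-template timeout fix (class and member names here are hypothetical): taking a concrete std::chrono::milliseconds parameter keeps the setter out of template territory, and callers can still pass coarser durations because the conversion to milliseconds is implicit and lossless.

```cpp
#include <chrono>

// Hypothetical slice of the request API showing the non-template setter.
class request
{
public:
    // Concrete parameter type: no template, so no incomplete-type issues.
    // e.g. r.set_timeout(std::chrono::seconds{5}) converts implicitly.
    void set_timeout(std::chrono::milliseconds timeout) { m_timeout = timeout; }

    std::chrono::milliseconds timeout() const { return m_timeout; }

private:
    std::chrono::milliseconds m_timeout{0};
};
```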
Currently each request submitted into an eventloop will malloc an Executor. The requirement to malloc should be removed and a pool of Executors that can be re-used (alongside the m_curl_handles) should be leveraged.
Integrate with a coroutine library or implement a small enough subset of C++20 coroutines to allow for co_await'ing async requests.
It would be smart to also allow for the event loop to be driven by a user land thread to integrate with existing frameworks like seastar, etc..
It might be possible to implement this in a way that allows for conditional compilation to enable the feature; it requires C++20 and the external library (say seastar) to be available to enable coroutines.
Right now it always executes a lambda on an async request for extreme flexibility in what the user can do when the async request completes. To make the API a bit friendlier, add another async function which returns a std::future over the lift::request_ptr and lift::response (probably wrapped in a std::pair?) so the user can simply block on the future and have the execution context resumed on the original thread.
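A sketch of how the future-returning overload could bridge the existing lambda API; all types below are simplified stand-ins for lift's real ones, and a std::promise carries the result from the completion lambda to the caller's future:

```cpp
#include <future>
#include <memory>
#include <utility>

// Simplified stand-ins for lift::response / lift::request / lift::request_ptr.
struct response
{
    long status_code{0};
};

struct request
{
    // Stand-in for the existing lambda-based API; a real client would invoke
    // the callback from the event loop thread when the transfer completes.
    template<typename on_complete_type>
    void async_perform(on_complete_type on_complete)
    {
        on_complete(response{200});
    }
};

using request_ptr = std::unique_ptr<request>;

// The proposed overload: returns a future the caller can block on.
std::future<std::pair<request_ptr, response>> async_perform_future(request_ptr req)
{
    using result_type = std::pair<request_ptr, response>;
    auto promise = std::make_shared<std::promise<result_type>>();
    auto future = promise->get_future();

    auto* raw = req.get(); // keep a raw handle; ownership moves into the lambda
    raw->async_perform(
        [promise, req = std::move(req)](response r) mutable
        { promise->set_value(result_type{std::move(req), std::move(r)}); });
    return future;
}
```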
Currently the end user must call RequestPool::Return(request) to re-use the handle. Introduce a proxy type "Request" and move the current Request to RequestHandle. The new request proxy will only allow for move semantics and on destruction it will return the RequestHandle to the creating pool.
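A sketch of that proxy design under the names described above (internals are assumptions): the proxy is move-only and its destructor hands the RequestHandle back to the creating pool, so RequestPool::Return() no longer needs to be called by hand.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// Stand-in for the heavyweight object holding the curl state.
struct request_handle
{
};

class request_pool;

// Move-only proxy: returns its handle to the pool on destruction.
class request
{
public:
    request(request_pool& pool, std::unique_ptr<request_handle> handle)
        : m_pool{&pool}, m_handle{std::move(handle)}
    {
    }

    request(const request&)            = delete;
    request& operator=(const request&) = delete;
    request(request&&)                 = default;
    request& operator=(request&&)      = default;

    ~request(); // defined below, after request_pool is complete

private:
    request_pool* m_pool;
    std::unique_ptr<request_handle> m_handle;
};

class request_pool
{
public:
    // Re-use a pooled handle if one exists, otherwise create a new one.
    request acquire()
    {
        if (m_handles.empty())
        {
            return request{*this, std::make_unique<request_handle>()};
        }
        auto handle = std::move(m_handles.back());
        m_handles.pop_back();
        return request{*this, std::move(handle)};
    }

    void put_back(std::unique_ptr<request_handle> handle)
    {
        m_handles.push_back(std::move(handle));
    }

    std::size_t size() const { return m_handles.size(); }

private:
    std::vector<std::unique_ptr<request_handle>> m_handles{};
};

request::~request()
{
    if (m_handle != nullptr) // moved-from proxies own nothing to return
    {
        m_pool->put_back(std::move(m_handle));
    }
}
```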
Hi,
I was wondering if a new release was going to be pushed out soon which includes api changes since v2.0. I was comparing it to another library using v2.0 and noticed that the current examples don't work unless I'm on the latest git or unless I use v2.0's examples (which have a slightly different api).
Thanks!
Right now there are still a few system libs that cannot be changed or overridden by the user:
z uv pthread dl stdc++fs
The only one that can is 'curl'. Make it so any of these libraries can be pointed at a custom version.
Currently the user of the library must spin off a separate thread to run the event loop. When creating an event loop object, have it automatically spawn a background thread to run the event loop on.
This probably removes the "RunOnce()" function since the user would no longer drive the loop.
ShutDown() will have to clean up the background thread and join it appropriately.
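A sketch of that ownership model (names hypothetical, loop body stubbed): the constructor spawns the background thread and shutdown()/the destructor joins it, so the user never drives the loop.

```cpp
#include <atomic>
#include <cstddef>
#include <thread>

class event_loop
{
public:
    // Spawn the background thread immediately; the user never calls RunOnce().
    event_loop() : m_thread{[this]() { run(); }} {}

    ~event_loop() { shutdown(); }

    // Idempotent: stops the loop and joins the background thread.
    void shutdown()
    {
        m_running.store(false);
        if (m_thread.joinable())
        {
            m_thread.join();
        }
    }

    std::size_t iterations() const { return m_iterations.load(); }

private:
    void run()
    {
        while (m_running.load())
        {
            // The real loop would pump libuv here, e.g. uv_run(..., UV_RUN_NOWAIT).
            m_iterations.fetch_add(1);
            std::this_thread::yield();
        }
    }

    std::atomic<bool> m_running{true};
    std::atomic<std::size_t> m_iterations{0};
    std::thread m_thread; // declared last so the flags exist before run() starts
};
```

Declaring m_thread last matters: members initialize in declaration order, so the atomics are ready before the background thread can touch them.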
Remove all the custom LIFT_* cmake variables that a user can set to things like the curl include, curl library, ssl libs, etc. Instead allow the user to pass in targets via a new LIFT_USER_LINK_LIBRARIES variable to override the default curl link library. If the user doesn't provide it then default to 'curl'.
This is the same optimization as for the curl_slist request headers.
Removing the RequestPool construct has introduced a regression where creating a curl handle within the new Executor class accesses a std::deque<CURL*> across thread boundaries. This can cause crashes since two threads are accessing an unprotected data structure.
Request::Perform() can check to see if it needs to be done and use a global static lock to enforce safety.
EventLoop constructor can also do the same.
Remove the functions to get the thread ids and do this by calling a lambda functor on the newly created thread. This will remove the linux platform dependent values.
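A minimal sketch of the global static lock mentioned above (the function name is hypothetical and the real curl call is commented out so the sketch stands alone): Request::Perform() and the EventLoop constructor would both funnel through this before touching the shared one-time initialization.

```cpp
#include <mutex>

// Returns true only on the call that actually performed the initialization,
// so the behavior is observable; subsequent calls are cheap no-ops.
inline bool global_init_once()
{
    static std::mutex init_lock{};
    static bool initialized{false};

    std::lock_guard<std::mutex> guard{init_lock};
    if (!initialized)
    {
        // curl_global_init(CURL_GLOBAL_ALL); // the real guarded work in lift
        initialized = true;
        return true;
    }
    return false;
}
```

std::call_once with a std::once_flag would be an equally valid shape for the same guarantee.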
I'm almost 100% sure that the exact same version of lift (it hasn't been modified in a while, after all) did work earlier this year, so it might be something that has changed with libcurl. Its changelog during that time is quite extensive, so it will take a while to sift through...
Running the lift_async_simple example now segfaults.
(gdb) bt -entry-values both
#0 0x00007ffff7f3cccf in ?? () from /lib/x86_64-linux-gnu/libcurl-nss.so.4
#1 0x00007ffff7f3d222 in ?? () from /lib/x86_64-linux-gnu/libcurl-nss.so.4
#2 0x00007ffff7f3d3bf in curl_multi_socket_action () from /lib/x86_64-linux-gnu/libcurl-nss.so.4
#3 0x00005555555c7999 in lift::client::check_actions (this=0x7fffffffd390,
this@entry=<optimized out>, socket=-1, socket@entry=<optimized out>, event_bitmask=0,
event_bitmask@entry=<optimized out>) at /home/andjonss/devel/liblifthttp/src/client.cpp:235
#4 0x00005555555c7944 in lift::client::check_actions (this=0x7fffffffd390,
this@entry=<optimized out>) at /home/andjonss/devel/liblifthttp/src/client.cpp:226
#5 0x00005555555c87ee in lift::curl_start_timeout (timeout_ms=0, timeout_ms@entry=<optimized out>,
user_data=0x7fffffffd390, user_data@entry=<optimized out>)
at /home/andjonss/devel/liblifthttp/src/client.cpp:506
The top of the stack trace: curl_start_timeout (set in the lift::client ctor) gets called with a timeout_ms of zero, which I understand is normal. It calls client::check_actions(), which just unconditionally calls check_actions(CURL_SOCKET_TIMEOUT, 0), where CURL_SOCKET_TIMEOUT is -1. That ends up in curl_code = curl_multi_socket_action(m_cmh, socket, event_bitmask, &running_handles), which segfaults.
I did notice, however, that the docs for CURLMOPT_TIMERFUNCTION do state:
WARNING: do not call libcurl directly from within the callback itself when the timeout_ms value is zero,
which lift seems to be doing by calling client::check_actions(), which eventually calls curl_multi_socket_action(...).
And... is this still being maintained in some capacity?
Will require an external httpd server running that returns an expected result.
The example should be tune-able for worker threads and the number of concurrent requests.
Hi,
Just wondering, is it possible to also use this library to receive data from a streaming socket?
As an example, I have a stream of data ( {JSON DATA}\r\n{JSON DATA}\r\n{JSON DATA}\r\n ... ).
How I currently receive it using libcurl is as follows:
curl_easy_setopt(handles_, CURLOPT_URL, "https://test.com");
curl_easy_setopt(handles_, CURLOPT_CONNECT_ONLY, 1L);
(then to receive, I call the below, which gets called from my event loop when epoll notices there is data available)
curl_easy_recv(handles_, read_buf_, sizeof(read_buf_), &nread_);
With liblifthttp, would I be able to set up my socket as normal and get callbacks when there is data ready?
Thanks!
Hi Josh,
Thanks for open sourcing such a cool C++17 lib!
I'd like to ask if it supports setting a proxy?
If yes, how could I achieve it?
I did not see any examples about this.
Thank you for answering!
Best,
Hang
Only a setter exists; for a library function to wrap the user's lambda in another lambda, it would be useful to be able to access the currently set on-complete handler.
A few thoughts:
AddParameter(key, value) that would automatically escape the value. This has a problem in that the URL passed to curl would need to be set in preparePerform(). Currently disliking this idea.
Add a URL builder type class that would take the protocol/hostname/port/path/query parameters and then output a well-formed and escaped URL. This would then be passed in via the SetUrl() function that already exists on the Request object.
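A sketch of what such a builder could look like (all names are assumptions, and escaping is stubbed out; real code would run each query value through curl_easy_escape() before appending):

```cpp
#include <cstdint>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical fluent builder whose output feeds Request::SetUrl().
class url_builder
{
public:
    url_builder& scheme(std::string s)   { m_scheme = std::move(s); return *this; }
    url_builder& hostname(std::string h) { m_host = std::move(h); return *this; }
    url_builder& port(uint16_t p)        { m_port = p; return *this; }
    url_builder& path(std::string p)     { m_path = std::move(p); return *this; }

    // Real code would escape `value` here (e.g. via curl_easy_escape()).
    url_builder& query(std::string key, std::string value)
    {
        m_query.push_back(std::move(key) + '=' + std::move(value));
        return *this;
    }

    std::string build() const
    {
        std::ostringstream url{};
        url << m_scheme << "://" << m_host;
        if (m_port != 0)
        {
            url << ':' << m_port;
        }
        url << m_path;
        for (std::size_t i = 0; i < m_query.size(); ++i)
        {
            url << (i == 0 ? '?' : '&') << m_query[i];
        }
        return url.str();
    }

private:
    std::string m_scheme{"http"};
    std::string m_host{};
    uint16_t m_port{0}; // 0 means "omit the port"
    std::string m_path{"/"};
    std::vector<std::string> m_query{};
};
```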
https://curl.haxx.se/libcurl/c/CURLOPT_DEBUGFUNCTION.html to allow for the user to get callbacks about what is being sent on the wire.
Currently share is only available between event loops, make this available to standalone requests as well.
When the Expect: 100-continue header is set for POST requests sending large bodies, requests take 1 second longer than expected.
Running the test suite through valgrind shows a memory leak on uv_loop_new(), which means it isn't getting cleaned up properly.
https://stackoverflow.com/questions/25615340/closing-libuv-handles-correctly
Perhaps it isn't waiting long enough to close all the handles so uv_loop_close() is returning busy? Need to investigate.
This will allow for multiple event loops to share connection state information for DNS, SSL and data pipelining at the cost of locks.
In usage of the library it is clear that individual callbacks are probably superior to a single interface callback. This allows for the user of the library to 'route' specific request types to different on complete handler functions instead of having to manually do the routing in the single on complete callback that the IRequestCallback enforces.
I currently do not like the WebKit style of { on the same line and a few other settings. Spend some time to come up with a good .clang-format file and add a root makefile to issue make format to format the code consistently.
At the moment the entire request is returned in the ICompleteCb interface -- but it doesn't allow for the user to extend or add any specific user data.