Comments (14)
Normally websocat reuses the connection, unless you restart it.
For example:
$ websocat ws://echo.websocket.org
AAA
AAA
BBB
BBB
AAA and BBB should be sent through the same WebSocket connection.
If you want to run separate console commands to send multiple messages through one WebSocket connection, use websocat's proxy mode.
$ websocat -lt tcp-l:127.0.0.1:1234 reuse-broadcast:ws://echo.websocket.org&
$ nc 127.0.0.1 1234
AAA
AAA
^C
$ nc 127.0.0.1 1234
BBB
BBB
^C
$ kill %1
Terminated
$
AAA and BBB are still sent through one connection.
from websocat.
it works 👍
it is incredibly hard to do without websocat. I spent two days without success; the only solutions I found were scapy, hping, or kermit, where I have to craft the TCP packet from zero... a long and hard way... (and even with those solutions, I don't know what I would do with the already-opened port that I want to control. I think it is impossible; the kernel controls the SYN and ACK and prohibits sending an ACK without an established connection first... this would have been a big nightmare headache).
now websocat has a solution in one line? how can I describe this? thanks to God. thank you.
I don't have the words really to describe this gem, I'm falling in love with websocat 😍
thank you very very very much sir, and thank you for your valuable knowledge and time.
from websocat.
It can probably be done with other WebSocket clients as well, especially if you mix them with some additional program like socat or some bash or other scripting. Maybe also my dive.
craft the tcp packet from zero
It's a low-level approach, not typically advised for simple scenarios.
You should have just implemented a WebSocket client/proxy in Python/Golang/whatever for this; I expect it would be easier. (I mean before websocat existed.)
Using a user-space networking stack is OK for educational programs or really high-performance scenarios (imagine more than 65536 simultaneous connections on one host).
Also, keeping a manual TCP connection (in particular, its local port) hidden from the kernel won't automatically prevent RST replies by the kernel, or the same local port being assigned to something else.
from websocat.
Depending on the exact usage scenario, such a command line should be composed carefully. The abovementioned commands are for demo usage, not for production.
If it is unidirectional, it is simpler; but if it is request-reply, this tcp-l:127.0.0.1:1234 is unreliable and may require some additional work (e.g. an additional overlay to support a request-reply operation mode).
from websocat.
if it is request-reply this tcp-l:127.0.0.1:1234 is unreliable and may require some additional work
Can you shed some light on this please? I'm sending and receiving messages; I may send 1 message and receive 20 messages depending on the conditions. Why is it unreliable to use tcp-l:127.0.0.1:1234? thank you.
from websocat.
Describe your use case in more detail. Are there requests-replies, notifications (requests without a reply, in one direction or in both directions), or broadcast messages for all clients? Give an example (simplified, without boilerplate, with shortened messages) of some session.
why is it unreliable to use tcp-l:127.0.0.1:1234
As long as only one nc client is connected and not terminated, it should be more or less OK. But when there are multiple or disconnecting clients, you may get half of a message or miss some messages. This needs testing.
Also --linemode + reuse-broadcast: currently means msg2line:reuse-broadcast:, but it may be better to have reuse-broadcast:msg2line: (i.e. to have the linemode processor closer to the websocket than to the TCP side).
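One client-side mitigation for missed or interleaved replies is to correlate each reply with its request by the "id" field, as DevTools-style JSON protocols allow. A minimal stdlib-only Python sketch (match_reply is a hypothetical helper, not part of websocat):

```python
import json

def match_reply(lines, request_id):
    """Return the reply whose "id" matches the request, skipping
    event notifications (messages without an "id")."""
    for line in lines:
        msg = json.loads(line)
        if msg.get("id") == request_id:
            return msg
    return None  # reply lost, e.g. dropped by the broadcast overlay

# Simulated proxy output: an event notification, then the reply to request id 3.
stream = [
    '{"method":"Page.frameNavigated","params":{}}',
    '{"id":3,"result":{"frameId":"F1"}}',
]
reply = match_reply(stream, 3)
```

With this kind of filtering, a client that reconnects mid-stream can at least detect that a reply never arrived instead of silently pairing the wrong messages.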
from websocat.
when I attempted to make a full example I found some sharks; I'm still fighting with them and I will publish the results when I have this working as it should.
from websocat.
As long as only one nc client is connected and not terminated, it should be more or less OK.
for now this is what I use: one client that communicates with one server. I already saw that using multiple clients with the same server has the problem of replying to a random client.
msg2line:reuse-broadcast:, but it may be better to have reuse-broadcast:msg2line:
they seem equal! what does the order imply, please?
Describe your use case in more detailed way.
I'm using websocat as a WebSocket client to control chrome/chromium using the Chrome DevTools protocol.
In this case I will instrument chrome to save a trace/profile of the JavaScript running inside a chrome tab of some page that I want to inspect,
following the same steps cited in #4:
#this will open chrome with the debugging port needed to control the websocket server of chrome:
./chrome --remote-debugging-port=9222 --user-data-dir=tempProfile
#this will open a new tab and save the ws link of this page so we can control it using the devtool debugging protocol:
WSurl=$(curl -sg http://127.0.0.1:9222/json/new | grep webSocketDebuggerUrl | cut -d'"' -f4 | head -1)
#and now we can use the Websocat to communicate with chrome using this url $WSurl and control mostly everything in chrome.
websocat -lt tcp-l:127.0.0.1:1234 reuse-broadcast:$WSurl&
#now we can begin to send the commands needed for chrome to begin the tracing/profiling:
# a necessary step
printf '%s\n' '{"id":1,"method":"Page.enable","params":{}}' | nc -w 1 127.0.0.1 1234
# a necessary step; here we tell chrome to initiate the tracing/profiling of these categories only, or we will end up with a lot of unnecessary info
printf '%s\n' '{ "id":2, "method":"Tracing.start", "params":{"categories": "-*, devtools.timeline, disabled-by-default-devtools.timeline, disabled-by-default-devtools.timeline.frame, toplevel, blink.console, disabled-by-default-devtools.timeline.stack, disabled-by-default-devtools.screenshot, disabled-by-default-v8.cpu_profile, disabled-by-default-v8.cpu_profiler, disabled-by-default-v8.cpu_profiler.hires", "options": "sampling-frequency=10000"} }' | nc -w 1 127.0.0.1 1234
sleep 1
# open the page we want to trace
printf '%s\n' '{ "id":3, "method":"Page.navigate", "params":{"url": "http://www.example.com"} }' | nc -w 1 127.0.0.1 1234
# sleep the time needed for the loading to finish and the JavaScript/AJAX to run...
# normally I would grep the response to the above command (id:3): chrome sends a message containing loadEventFired when the page finishes loading, but it is not certain that all JavaScript has finished running, so a sleep is safer (the best is to use both, so the next command (id:4) will not receive messages belonging to id:3).
sleep 7
# end the tracing
# here chrome will send a lot of messages, at least 20MB, which is the tracing we want (this tracing can be opened with chrome devtools in the Performance tab). how can we know that it has finished? chrome will send a tracingComplete message once it has finished sending the trace:
printf '%s\n' '{ "id":4, "method":"Tracing.end","params":{}}' | nc 127.0.0.1 1234 > trace.json
the result is :
......
{ "method": "Tracing.dataCollected", ".........}] } }
{ "method": "Tracing.dataCollected", ".........}] } }
{ "method": "Tracing.dataCollected", ".........}] } }
.....
ERROR 2018-06-30T03:24:24Z: websocat::broadcast_reuse_peer: Too big message dropped
ERROR 2018-06-30T03:24:24Z: websocat::broadcast_reuse_peer: Too big message dropped
websocat: An established connection was aborted by the software in your host machine. (os error 10053)
and this is the fruit we need (the trace): { "method": "Tracing.dataCollected", ".........}] } }
but there are strange errors happening: I don't know what these errors are, and they happen randomly in the middle or at the end...
using the same commands interactively with nc 127.0.0.1 1234 doesn't give these errors! I will attempt to understand why these errors happen, but apart from that everything works fine. thank you
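As an aside, the fixed sleep between Page.navigate and Tracing.end can be replaced by scanning the proxy's output for the Page.loadEventFired notification. A small stdlib-only Python sketch (wait_for_event is a hypothetical helper; the message shapes follow the DevTools protocol):

```python
import json

def wait_for_event(lines, method):
    """Return the first message whose "method" matches, plus every
    message seen before it (e.g. replies to earlier requests)."""
    earlier = []
    for line in lines:
        msg = json.loads(line)
        if msg.get("method") == method:
            return msg, earlier
        earlier.append(msg)
    return None, earlier  # event never arrived

# Simulated output: the reply to Page.navigate (id 3), then the event.
stream = [
    '{"id":3,"result":{}}',
    '{"method":"Page.loadEventFired","params":{"timestamp":1.5}}',
]
event, earlier = wait_for_event(stream, "Page.loadEventFired")
```

As the comment above notes, this only signals that loading finished, not that all JavaScript has run, so combining it with a short sleep is still the safer option.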
from websocat.
N.B.: the final trace should be stringified with something like JSON.stringify() in node; I don't know how to do it yet in bash (maybe jq has a stringify function). the final trace should look like this if you want to open it in chrome devtools:
[
  {
    "pid": 7956,
    "tid": 8852,
    "ts": 366485538941,
    "ph": "X",
    "cat": "toplevel",
    "name": "MessageLoop::RunTask",
    "args": {
      "src_file": "../../content/browser/frame_host/render_frame_message_filter.cc",
      "src_func": "RenderFrameMessageFilter"
    },
    "dur": 25
  },
  {
    "pid": 7956,
    "tid": 8852,
    "ts": 366485538978,
    "ph": "X",
    "cat": "toplevel",
    "name": "MessageLoop::RunTask",
    "args": {
      "src_file": "../../mojo/public/cpp/system/simple_watcher.cc",
      "src_func": "Notify"
    },
    "dur": 21
  },
  ............
]
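To turn the raw proxy output into a file the devtools Performance tab accepts, the event arrays inside each Tracing.dataCollected message can be concatenated into one JSON array. A stdlib-only Python sketch (merge_trace is a hypothetical helper; it assumes each dataCollected message carries its events in params.value, as the protocol documents):

```python
import json

def merge_trace(lines):
    """Concatenate the event arrays from all Tracing.dataCollected
    messages into one array that trace viewers can load."""
    events = []
    for line in lines:
        msg = json.loads(line)
        if msg.get("method") == "Tracing.dataCollected":
            events.extend(msg["params"]["value"])
    return events

# Simulated proxy output: one data chunk, then the completion notice.
raw = [
    '{"method":"Tracing.dataCollected","params":{"value":[{"pid":1,"ph":"X"}]}}',
    '{"method":"Tracing.tracingComplete","params":{}}',
]
trace = json.dumps(merge_trace(raw))  # a JSON array, ready for devtools
```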
from websocat.
they seem equal! what does the order imply, please?
It changes how things are processed inside websocat. I'm not sure yet what changes from the user's perspective. Needs thinking and modeling. The difference may kick in in corner cases:
- Connection and disconnection;
- Non-newline-terminated reads and writes to socket;
- Client too slow to read from its socket (keeping data buffered): for reuse-broadcast: it means dropped messages (or dropped chunks from inside messages; here the order may matter the most). Use reuse: (to be renamed) instead of reuse-broadcast: to slow down the websocket instead (but it has its own problems).
If you are doing a prototype/demo which is not expected to be stable, it's OK. But if it is expected to be a reliable solution, we need to think about the details (and whether it is in scope for websocat; if yes, how to do it really properly).
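The drop-vs-slow-down trade-off can be modeled with a bounded queue: when a slow client's buffer is full, reuse-broadcast: discards the message while reuse: stalls the producer. A toy Python sketch of that behaviour (an illustration only, not websocat's actual implementation):

```python
from collections import deque

def feed(messages, capacity, drop_when_full):
    """Toy model: a slow client is a bounded queue. Dropping models
    reuse-broadcast:; stopping the producer models reuse:."""
    queue, dropped = deque(), []
    for m in messages:
        if len(queue) < capacity:
            queue.append(m)
        elif drop_when_full:
            dropped.append(m)  # broadcast mode: the message is lost
        else:
            break              # backpressure mode: producer stalls here
    return list(queue), dropped

kept, lost = feed(["m1", "m2", "m3"], capacity=2, drop_when_full=True)
```

In the drop model the websocket side keeps running at full speed but messages vanish; in the backpressure model nothing is lost, but the whole websocket is slowed to the pace of the slowest client.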
from websocat.
websocat::broadcast_reuse_peer: Too big message dropped
Try increasing the 65536 in src/my_copy.rs to the size of the maximum expected message.
I suppose you are not trying to work with WebSocket messages that are comparable to RAM size?
from websocat.
Try increasing the 65536 in src/my_copy.rs
I was hoping to ask you how to adjust the max size, but you told me you may add an option to define the size, so I thought it was hard to do by myself. thank you :)
I suppose you are not trying to work with WebSocket messages that are comparable to RAM size?
I don't think chrome will consume all the RAM; the max may be about 1GB, I think, if the JavaScript trace is that big. please, what is the max limit? is it based on the max frame size, which can be exabytes, or is it based on my RAM size?
from websocat.
with the new updates and the -l option removed, I changed the command in the above script to this:
websocat -t -B 999999999 tcp-l:127.0.0.1:1234 reuse-broadcast:$WSurl&
and I got these errors:
WARN 2018-07-03T20:33:20Z: websocat::line_peer: Throwing away 0 bytes of incomplete line
WARN 2018-07-03T20:33:29Z: websocat::broadcast_reuse_peer: A client's sink is NotReady for start_send
WARN 2018-07-03T20:33:29Z: websocat::broadcast_reuse_peer: A client's sink is NotReady for start_send
......
......
from websocat.
with what you told me in the gitter chat it works better than before; all those errors are gone now :)
this is what needs to be changed, for newcomers:
websocat -t --no-line -B 999999999 tcp-l:127.0.0.1:1234 reuse-raw:$WSurl&
from websocat.