Comments (16)
The browser limit for parallel XHR requests is 6 (per host).
These are latency measurements during the download test and the upload test, right?
Because of the limited number of parallel connections, we cannot make new requests, but we can measure the load events of each request. I am not sure whether that will be enough, because on mobile devices or slower connections the browser will keep the additional requests pending.
I need to test this; only then can I say anything about it.
from speed-test.
So you are asking: "Can we implement 8 parallel requests to the server" and also measure latency at the same time?
Yes. That is what dsl_reports8dn.conf does, for example.
https://github.com/tohojo/flent/blob/master/flent/tests/dslreports_8dn.conf
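As a sketch of what dslreports_8dn.conf does (8 parallel download streams), a browser could launch N concurrent fetches against cache-busted URLs and compute aggregate throughput. This is a minimal sketch, not OpenSpeedTest's actual implementation; the test endpoint URL is a placeholder, and whether all N requests actually run in parallel depends on the per-host connection limit (about 6 on HTTP/1.1) and on HTTP/2/3 multiplexing.

```javascript
// Sketch: launch N parallel download streams and report aggregate Mbps.
// The endpoint is hypothetical; real per-host connection caps apply.
function cacheBustedUrl(base, i) {
  // Unique query string so each request bypasses HTTP caches.
  return `${base}?stream=${i}&cb=${Date.now()}-${Math.random().toString(36).slice(2)}`;
}

async function runParallelDownloads(base, n = 8) {
  const t0 = performance.now();
  const streams = Array.from({ length: n }, (_, i) =>
    fetch(cacheBustedUrl(base, i)).then((r) => r.arrayBuffer())
  );
  const bodies = await Promise.all(streams);
  const seconds = (performance.now() - t0) / 1000;
  const bytes = bodies.reduce((sum, b) => sum + b.byteLength, 0);
  return { mbps: (bytes * 8) / seconds / 1e6 };
}
```

Under HTTP/1.1 only ~6 of the 8 requests would be in flight at once per host; under HTTP/2 or HTTP/3 all 8 multiplex over one connection, which is exactly the distinction discussed below.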
Parallel connection limits apply only to a single host, if I remember correctly. But I have never tested multiple connections to more than one host.
What could affect how many of the HTTP resources referenced by HTML/JS/WASM are downloaded in parallel, and at what cost in time?
- System load noise
- Same-origin policy and/or cookie origin policy
- HTTP "pipelining" / request multiplexing
  - HTTP/1.1
  - HTTP/2
  - HTTP/3 (UDP)
- Max active connections
  - TCP per-IP host connection limits
  - UDP per-IP host connection limits
- How fetch/XHR JS/WASM tasks map to actual tasks in separate processes, threads, or coroutines (with static messages over channels) that share the same networking stack
  - [links above re: browser threads, processes, IPC, actors beneath the W3C DOM APIs]
Re: domain sharding and CDNs:
https://www.google.com/search?q=domain+sharding+CDN
Browsers limit the number of active connections for each domain. To enable concurrent downloads of assets exceeding that limit, domain sharding splits content across multiple subdomains. When multiple domains are used to serve multiple assets, browsers are able to download more resources simultaneously, resulting in a faster page load time and improved user experience.
The problem with domain sharding, in terms of performance, is the cost of extra DNS lookups for each domain and the overhead of establishing each TCP connection.
The initial response from an HTTP request is generally an HTML file listing other resources such as JavaScript, CSS, images and other media files that need to be downloaded. As browsers limit the number of active connections per domain, serving all the required resources from a single domain could be slow as assets need to be downloaded sequentially. With domain sharding, the required downloads are served from more than one domain, enabling the browser to simultaneously download needed resources. Multiple domains, however, is an anti-pattern, as DNS lookups slow initial load times.
HTTP/2 supports many concurrent requests (streams) over a single connection, making domain sharding obsolete when HTTP/2 is enabled.
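For HTTP/1.1 clients, the sharding described above is usually done deterministically so the same asset always maps to the same shard (keeping browser caches effective). A minimal sketch, assuming hypothetical shard hostnames `assets0..assets3.example.test` that all serve identical content:

```javascript
// Sketch: deterministic domain sharding for HTTP/1.1 clients.
// Shard hostnames are hypothetical; all shards must serve the same content.
function shardHost(assetPath, shards = 4) {
  // Simple string hash so a given asset path always maps to the same shard.
  let h = 0;
  for (const ch of assetPath) h = (h * 31 + ch.codePointAt(0)) >>> 0;
  return `https://assets${h % shards}.example.test${assetPath}`;
}
```

With 4 shards, an HTTP/1.1 browser can open up to ~6 connections per shard (~24 total), at the cost of the extra DNS lookups and TCP handshakes noted above.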
Which servers have HTTP/2 enabled by default?
Which servers have HTTP/3 enabled by default, with less TCP handshake and TCP load-balancing latency?
If you really need to do this, set up two servers and try to send 6 parallel XHR requests to each server at the same time. If 12 connections work simultaneously, then you can use 8 for download and the others for latency.
Instead of (or along with) providing documentation, please write what you are trying to say in a few sentences.
Reference material discovered otw to answering my own async question:
https://en.wikipedia.org/wiki/Web_performance#Optimization_techniques
- Domain sharding: if the static-file web server or the client doesn't support HTTP/2, domain sharding is advisable.
- How are the same-origin, CSP, and CORS policies relevant to remote JS, CSS, and WASM resources from other domains?
from speed-test.
FWIU the max_connections_per_domain setting varies amongst browsers:
- "Max parallel HTTP connections in a browser?"
- "Match Firefox's [HTTP/1.1] per-host connection limit of 15" https://bugs.chromium.org/p/chromium/issues/detail?id=12066
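Since the per-host limit varies amongst browsers, one way to check the effective limit empirically is the Resource Timing API: record each test request's start/end times and compute the peak number of overlapping transfers. The sweep-line helper below is pure; in a browser the intervals would come from `performance.getEntriesByType('resource')` (`startTime` / `responseEnd`), which is a real API, while the sample data here is synthetic.

```javascript
// Peak concurrency from [start, end] intervals, e.g. Resource Timing
// startTime/responseEnd pairs for the test downloads.
function peakConcurrency(intervals) {
  // Sweep line: +1 at each start, -1 at each end; track the running maximum.
  const events = [];
  for (const [start, end] of intervals) {
    events.push([start, +1], [end, -1]);
  }
  events.sort((a, b) => a[0] - b[0] || a[1] - b[1]); // ends before starts at ties
  let cur = 0;
  let peak = 0;
  for (const [, delta] of events) {
    cur += delta;
    if (cur > peak) peak = cur;
  }
  return peak;
}
```

If you fire 12 requests at one HTTP/1.1 host and `peakConcurrency` reports 6, you have measured the browser's per-host cap directly.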
No hurry here. Cool tool and thanks as well
from speed-test.
+someday +maybenp
Thought the RRUL thing might be worth sharing
- https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/API/devtools/network
- https://developer.mozilla.org/en-US/docs/Web/Performance/Navigation_and_resource_timings#performance_timings
- https://blog.bitsrc.io/using-the-performance-web-api-with-chrome-devtools-f4c59564b3d4
- reactwg/react-18#76 (comment) ; React Profiler API & how that works
What else might cause a render to get delayed?
React profiling tools have previously focused on only reporting what React (or React components) are doing, but any JavaScript the browser runs affects performance. The new profiler shows non-React JavaScript as well, making it easy to see when it delays React from rendering.
from speed-test.
see #17
You can run a stress test in one browser window:
http://xxx.xxx.xxx.xxx:3000?s=y
and run a ping test in another window:
http://xxx.xxx.xxx.xxx:3000?t=p&p=300
and you can see the added latency. Or you can use two browsers for this.
Testing latency while a stress test is running will show the added latency.
You can run the speed test and the latency test indefinitely:
t=y runs a speed test for a full year;
t=p&p=100000 sends 100,000 (1 lakh) pings.
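Composing those test URLs can be scripted; a small helper, using only the query parameters named in the comment above (s=y for stress, t=p for ping, p=<count> for ping count) with a placeholder host and the port 3000 from the examples:

```javascript
// Build an OpenSpeedTest-style test URL from the query parameters
// described above. Host and port are placeholders from the comment.
function testUrl(host, params) {
  const url = new URL(`http://${host}:3000/`);
  for (const [key, value] of Object.entries(params)) {
    url.searchParams.set(key, String(value));
  }
  return url.toString();
}
```

For example, `testUrl('192.0.2.1', { t: 'p', p: 300 })` yields the ping-test URL for 300 pings.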
from speed-test.
jupyterlab/jupyterlab#1639 (comment):
- "4 Ways to Communicate Across Browser Tabs in Realtime"
- JupyterLite implements the pyolite kernel in a Web Worker that doesn't block the main thread. AFAIU, web workers are possible without the overhead of multiple browser tabs.
from speed-test.
Web workers also have the same limitations.
XHR is asynchronous and does not block the code, so moving it to a web worker makes no sense.
Web workers are for CPU-intensive tasks.
I am no expert. When I tested web workers in my app, I saw no improvement.
from speed-test.
Service Workers map onto all the processes in the browser task manager:
- Chrome:
  - Task Manager: Shift+Esc
  - Devtools: F12, or Control+Shift+I, or Command+Option+I
    - https://developer.chrome.com/docs/devtools/shortcuts/
    - https://chromium.googlesource.com/chromium/src/+/HEAD/docs/threading_and_tasks.md
- Firefox:
  - Task Manager: Ctrl+L, then about:performance
  - Devtools: F12, or Control+Shift+I, or Command+Option+I
  - https://firefox-source-docs.mozilla.org/dom/ipc/process_model.html
From https://web.dev/workers-overview/ :
- A page can spawn multiple web workers, but a single service worker controls all the active tabs under the scope it was registered with.
- The lifespan of the web worker is tightly coupled to the tab it belongs to, while the service worker's lifecycle is independent of it. For that reason, closing the tab where a web worker is running will terminate it, while a service worker can continue running in the background, even when the site doesn't have any active tabs open.
HTTP same-origin policy; HTTP pipelining in HTTP/1.1, HTTP/2, and HTTP/3; and parallel download tests:
- If you fetch the same test file from different domains on the same or different IPv4/IPv6 addresses (~vhosts), which browser settings determine how many requests are made in parallel to each HTTP origin?
- https://developer.mozilla.org/en-US/docs/Web/Security/Same-origin_policy#cross-origin_data_storage_access
- https://IPv6{0-7}:443/test.data?cachebreaker={datetime}-{randomuuid}
- https://ip:20{0-7}/
- https://*.domain.tld.local/
  - ACME wildcard certs from Let's Encrypt have a 90-day expiration, with renewal recommended 30 days before expiry.
- https://en.wikipedia.org/wiki/HTTP_pipelining
  - "Non-idempotent requests such as POST should not be pipelined.[6] Read requests like GET and HEAD can always be pipelined. A sequence of other idempotent requests like PUT and DELETE can be pipelined or not depending on whether requests in the sequence depend on the effect of others.[1]"
From https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers :
Service workers are restricted to running across HTTPS for security reasons.
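The pipelining rules quoted from Wikipedia above can be encoded as a small classifier (the three category names are my own labels, not from any spec):

```javascript
// Classify HTTP methods per the pipelining rules quoted above:
// 'always'  - safe reads (GET, HEAD) can always be pipelined;
// 'depends' - idempotent writes (PUT, DELETE) depend on request ordering;
// 'never'   - non-idempotent requests (POST, etc.) should not be pipelined.
function pipelineClass(method) {
  const m = method.toUpperCase();
  if (m === 'GET' || m === 'HEAD') return 'always';
  if (m === 'PUT' || m === 'DELETE') return 'depends';
  return 'never';
}
```

For a download speed test this is reassuring: the test traffic is all GETs, which every category of pipelining/multiplexing permits.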
from speed-test.
Looks like a boat... You failed the turing test.
from speed-test.
What looks like a boat?
To download multiple files over HTTP in parallel (in order to implement RRUL / dslreports_8dn.conf), HTTP pipelining and the same-origin policy will be relevant: does it download the 8 files serially or in parallel?
from speed-test.
Can we (also) download in ~actual parallel with the Service Workers API? (Can we see actual parallel fetch subprocesses, which can run on separate cores, in the browser Task Manager?)
Is WASM (as in JupyterLite, which could interactively display speed-test reports) any advantage for parallel download and/or upload speed tests that report "bufferbloat", which indicates that low-latency applications will lag?
from speed-test.
Yes, we need to test this on an HTTP/2 server. Right now I am busy with some other work; I will come back to this later.
If you know JavaScript, deploy the OpenSpeedTest Docker image, enable HTTP/2, and log the download event data to the console.
After that, run the test by adding xhr=12 or 20.
from speed-test.
I have not tested this; I was thinking like this:
When we start testing and the first DATA request receives data, we can send a PING request, then the next DATA request; before each DATA request, we can send a PING request. That will measure latency under load. For HTTP/1.1 this will work under six requests, and for HTTP/2 and above it will go parallel. I am currently working on some other stuff and will come back to this soon. Thanks for the valuable insights.
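The scheme described above (a PING before each DATA request) can be sketched with fetch. This is a sketch of the idea only, not OpenSpeedTest's code; the /ping and /data endpoints are hypothetical placeholders, and medianLatency is a pure helper for summarizing the loaded-latency samples.

```javascript
// Median of collected ping round-trip times (ms).
function medianLatency(samples) {
  const s = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Sketch: time a small PING request before firing each DATA stream,
// so the pings measure latency while the link is under load.
// Endpoints are hypothetical placeholders.
async function latencyUnderLoad(base, requests = 8) {
  const pings = [];
  for (let i = 0; i < requests; i++) {
    const t0 = performance.now();
    await fetch(`${base}/ping`); // small response -> RTT estimate
    pings.push(performance.now() - t0);
    fetch(`${base}/data?i=${i}`); // fire the next DATA stream (not awaited)
  }
  return medianLatency(pings);
}
```

On HTTP/1.1 the pings after the sixth request would queue behind the DATA streams (inflating the measurement); on HTTP/2/3 they multiplex alongside them, which is the behavior the comment predicts.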
from speed-test.
I will test this soon.
ChatGPT
Question: HTTP/2's single TCP connection can be a bottleneck
Answer:
It is possible that a single HTTP/2 connection using a single TCP connection could become a bottleneck in certain situations. This is because HTTP/2 multiplexing allows multiple requests to be sent and received over a single TCP connection at the same time, but the total bandwidth available to all of these requests is still limited by the capacity of the underlying TCP connection.
If the total combined bandwidth required by all of the multiplexed requests exceeds the capacity of the TCP connection, the connection could become a bottleneck and limit the overall performance. This could occur, for example, if the client is trying to download a large number of very large files concurrently over a single HTTP/2 connection.
To avoid this bottleneck, it may be necessary to use multiple HTTP/2 connections or to switch to a different protocol that is better suited for transferring large amounts of data, such as BitTorrent or FTP.
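One common workaround for the single-connection bottleneck (used by download managers generally, not something OpenSpeedTest is confirmed to do) is to split a large file into byte ranges and fetch each range over its own connection or origin. A helper that computes the `Range` headers:

```javascript
// Split a file of `size` bytes into `parts` contiguous Range header
// values, so each part can be fetched over a separate connection.
function rangeHeaders(size, parts) {
  const chunk = Math.ceil(size / parts);
  const ranges = [];
  for (let start = 0; start < size; start += chunk) {
    const end = Math.min(start + chunk, size) - 1;
    ranges.push(`bytes=${start}-${end}`);
  }
  return ranges;
}
```

Each range would then be sent as a `Range: bytes=...` request header; the server must support range requests (HTTP 206 Partial Content) for this to work.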
from speed-test.
@westurner
HTTP/1.1 vs HTTP/2 & 3:
We can send more than 6 XMLHttpRequests to the server using HTTP/2 and 3.
HTTP/1.1 will use 6 parallel connections to the server.
HTTP/2 & 3 will use a single connection to the server.
from speed-test.