Comments (40)
Thanks for the documentation. I did as said above and it all works fine except the Executors tab in Spark UI. It seems that the proxy replaces the [app-id] with the port instead of the actual app-id.
From: https://spark.apache.org/docs/latest/monitoring.html
/applications/[app-id]/allexecutors | A list of all (active and dead) executors for the given application.
from jupyter-server-proxy.
This is partially addressed by 50e0358. Visiting /hub/user/proxy/4040/ still takes you to /jobs/ but I think that is the webui. However visiting /hub/user/proxy/4040/{jobs,environment,...}/ does the right thing without requiring the proxyBase setting.
from jupyter-server-proxy.
@ransoor2 Were you able to find a workaround for the blank spark UI 'Executor' tab? I have the same issue.
also looking for an update.
from jupyter-server-proxy.
The problem that the allexecutors endpoint returns 404 can be fixed by modifying core/src/main/resources/org/apache/spark/ui/static/utils.js. For example, our hub URL includes jupyter in the path.
- (Spark 2) https://github.com/apache/spark/blob/v2.4.8/core/src/main/resources/org/apache/spark/ui/static/executorspage.js
- (Spark 3) https://github.com/apache/spark/blob/v3.1.2/core/src/main/resources/org/apache/spark/ui/static/utils.js
function getStandAloneAppId(cb) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (document.baseURI.indexOf("jupyter") > 0) { ind = 0 } // newly added line

function createRESTEndPointForExecutorsPage(appId) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (document.baseURI.indexOf("jupyter") > 0) { ind = 0 } // newly added line

function createTemplateURI(appId, templateName) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (document.baseURI.indexOf("jupyter") > 0) { ind = 0 } // newly added line
But basically this problem could be fixed simply if the jupyter-server-proxy extension supported modifying the proxy/ URL infix, since the Spark UI JavaScript functions try to special-case the proxy string in the URL, as you can see in the code above.
Is it possible to modify the proxy infix in the URL for the jupyter-server-proxy extension (e.g. by setting some option)?
I searched the code of this repository, but could not find any hardcoded proxy string. The proxy string might come from jupyter-server extensions or somewhere outside of this repository :(
from jupyter-server-proxy.
Looks like the URL can be changed with SPARK_PUBLIC_DNS. I tried it and changed it to <JUPYTERHUB_URL>/hub/user/<username>/proxy/4040/jobs/. This changes sc.uiWebUrl to <JUPYTERHUB_URL>/hub/user/<username>/proxy/4040/jobs/:4040, resulting in a link that is actually redirecting to the web app, but the app is still broken and links point to <JUPYTERHUB_URL>/<XYZ>.
from jupyter-server-proxy.
@yuvipanda Thanks for the help! Still doesn't work.
- I think you misspelled a path in setup.py. It should be jupyter_sparkui_proxy/etc/jupyter-sparkui-proxy-serverextension.json instead of jupyter_server_proxy/etc/jupyter-server-proxy-serverextension.json.
- I'm running in my Dockerfile:

ADD common/jupyter-sparkui-proxy /jupyter-sparkui-proxy
RUN cd /jupyter-sparkui-proxy && \
    python setup.py install

The installation looks correct, but I'm still getting the same error as above.
from jupyter-server-proxy.
So, the issue is in Spark Core.
See the utility: https://github.com/apache/spark/blob/c2d0d700b551e864bb7b2ae2a175ec8ade704488/core/src/main/resources/org/apache/spark/ui/static/utils.js#L88 .
function getStandAloneAppId(cb) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (ind > 0) {
    var appId = words[ind + 1];
    cb(appId);
    return;
  }
  ...
getStandAloneAppId will always return the value after "proxy", which in our case is the port, 4040
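The effect of that lookup can be sketched in Python (the URLs below are illustrative, not taken from any specific deployment):

```python
# Python sketch of the lookup in getStandAloneAppId: take the path
# segment that immediately follows "proxy" in the page's base URI.
def token_after_proxy(base_uri):
    words = base_uri.split('/')
    if "proxy" not in words:
        return None
    return words[words.index("proxy") + 1]

# Behind a Spark standalone master's own proxy the token is a real app id:
print(token_after_proxy("http://master:8080/proxy/app-20181203-0001/executors/"))
# -> app-20181203-0001

# Behind jupyter-server-proxy the token is the port, which then gets
# misused as the application id in the REST calls:
print(token_after_proxy("http://hub/user/alice/proxy/4040/executors/"))
# -> 4040
```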
from jupyter-server-proxy.
JupyterHub allows us to create named servers; I was able to access the executors tab with the following traitlets config:
c.ServerProxy.servers = {
    "spark_ui": {
        "port": 4040,
        "absolute_url": False
    }
}
Then, you will be able to access spark UI under $HUB_URL/spark_ui/jobs/ without the problematic proxy keyword.
from jupyter-server-proxy.
I tried reproducing your example in a container with the following:
import os
from pyspark.conf import SparkConf
from pyspark.context import SparkContext
conf = SparkConf()
conf.setMaster('local')
conf.set('spark.kubernetes.container.image', 'idalab/spark-py:spark')
conf.set('spark.submit.deployMode', 'client')
conf.set('spark.executor.instances', '2')
conf.setAppName('pyspark-shell')
conf.set('spark.driver.host', '127.0.0.1')
os.environ['PYSPARK_PYTHON'] = 'python3' # Needs to be explicitly provided as env. Otherwise workers run Python 2.7
os.environ['PYSPARK_DRIVER_PYTHON'] = 'python3' # Same
# Create context
sc = SparkContext(conf=conf, master=None)
I confirmed that there is a service on :4040. Visiting localhost:8888/proxy/4040/ redirected me to localhost:8888/jobs/ which returned a 404. I then manually visited http://localhost:8888/proxy/4040/jobs/ and it displayed a small web app. All of the links within this app do not include the /proxy/4040/ path, e.g. http://localhost:8888/stages/, http://localhost:8888/storage/. This suggests to me that either this app as I've configured it does not inspect the URL it is being visited on or generates links that sit on top of the URL path (/stages/ rather than stages/).
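The absolute-vs-relative distinction described above can be checked directly with the standard library (the paths are illustrative):

```python
from urllib.parse import urljoin

# How a browser resolves links against the proxied page's URL: an absolute
# path (/stages/) discards the /proxy/4040/ prefix, a relative one keeps it.
base = "http://localhost:8888/proxy/4040/jobs/"
print(urljoin(base, "/stages/"))  # -> http://localhost:8888/stages/
print(urljoin(base, "stages/"))   # -> http://localhost:8888/proxy/4040/jobs/stages/
```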
Apparently sc.uiWebUrl is read-only.
from jupyter-server-proxy.
Interesting. I actually never tried to use the localhost:8888/proxy/4040/ variant in the container, assuming it just makes sense on the hub. I just used port forwarding to check whether the UI is generally accessible. I can confirm the behaviour you're describing with /proxy/4040/jobs/, adding that all styles and graphics of the web app are broken.
I just checked it on my JupyterHub deployment with jobs/ added, so <JUPYTERHUB_URL>/hub/user/<username>/proxy/4040/jobs/. This replicates the behaviour from the container, leading to the web app with broken styles. It only works with a trailing / after jobs.
So what to do? Is this a Spark/PySpark issue, regarding the sc.uiWebUrl problem?
from jupyter-server-proxy.
So what to do? Is this a Spark/PySpark issue, regarding the sc.uiWebUrl problem?
I think so, without having dug into the source. I found some documentation on spark's webui properties which doesn't describe a way to alter the URL. I'll look into the source a bit more.
I suppose one could subclass nbserverproxy to alter the proxied content, but that could get messy.
from jupyter-server-proxy.
Created https://issues.apache.org/jira/browse/SPARK-26242.
from jupyter-server-proxy.
@mgaido91 mentioned in the Spark JIRA that setting spark.ui.proxyBase can address this. I've confirmed that if you add conf.set('spark.ui.proxyBase', '/proxy/4040') and then visit {your_server}/.../proxy/4040/jobs/, the webui renders correctly. Visiting /proxy/4040 still doesn't, however.
I'll try his other suggestion of setting X-Forwarded-Context in the proxy.
from jupyter-server-proxy.
On my JupyterHub deployment I have to configure it as conf.set('spark.ui.proxyBase', '/user/<username>/proxy/4040'), but then it works as well!
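One way to avoid hardcoding the username: a JupyterHub single-user server sets the JUPYTERHUB_SERVICE_PREFIX environment variable (e.g. /user/<username>/), so the proxyBase value can be derived from it. A minimal sketch, where spark_proxy_base is a hypothetical helper, not part of any library:

```python
import os

def spark_proxy_base(port=4040):
    # JUPYTERHUB_SERVICE_PREFIX looks like "/user/<username>/" inside a
    # single-user server; fall back to "/" when running outside JupyterHub.
    prefix = os.environ.get("JUPYTERHUB_SERVICE_PREFIX", "/")
    return prefix.rstrip("/") + "/proxy/{}".format(port)

os.environ["JUPYTERHUB_SERVICE_PREFIX"] = "/user/alice/"  # for illustration
print(spark_proxy_base())  # -> /user/alice/proxy/4040
# then: conf.set('spark.ui.proxyBase', spark_proxy_base())
```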
from jupyter-server-proxy.
Based mostly on work here by @h4gen and @ryanlovett, I've now built https://github.com/yuvipanda/jupyter-sparkui-proxy which does the right thing wrt redirects. Thank you! <3
needs docs and stuff.
from jupyter-server-proxy.
I've solved it by setting conf.set("spark.ui.proxyBase", ""). Then point to http://localhost:4040 to see the UI.
from jupyter-server-proxy.
@ransoor2 Were you able to find a workaround for the blank spark UI 'Executor' tab? I have the same issue.
from jupyter-server-proxy.
@dbricare No..
from jupyter-server-proxy.
I'm having an issue I don't quite understand. I've followed the steps listed here by @ransoor2 to set up a Spark on Kubernetes deployment. Things work (though there is actually a bug in all versions of Spark, recently introduced, that stops workers from being started), but the Spark UI only kind of works. Specifically, if I go to <JUPYTERHUB-IP>/user/<username>/proxy/4040/jobs/, things work as I expect them to. However, if I go to <JUPYTERHUB-IP>/user/<username>/proxy/4040/, I expect based on @ryanlovett's comment #57 (comment) to be redirected to <JUPYTERHUB-IP>/user/<username>/proxy/4040/jobs/. Instead I am redirected to <JUPYTERHUB-IP>/hub/jobs/, which is a broken link.
Is this expected behavior? My Spark config (the relevant bits) is basically identical to that of @ransoor2's writeup.
EDIT: I just looked at the jira issue that @ryanlovett opened, and he describes the same behavior. So, maybe this is expected?
from jupyter-server-proxy.
I am trying to set up the dashboard for the user jobs with jupyter-server-proxy. I can get the dashboard with kubectl port-forward but that's not an option for all the users who won't have access to kubectl.
I tried setting the following:
conf = SparkConf().setAppName('Cluster1').setMaster(spark_master)
conf.set('spark.ui.proxyBase', f'/user/{user}/proxy/4040')
sc = SparkContext(conf=conf)
sc
but it still produces a url with http://jupyter-{user}:4040
When I do
kubectl port-forward -n jupyterhub jupyter-{user} 4040:4040
it breaks the dashboard at localhost:4040, with links pointing to localhost:4040/user/{user}/proxy/4040/jobs etc.
So it seems some config has been propagated, but sc.uiWebUrl is still not the right one. Does anyone have an idea what's wrong?
from jupyter-server-proxy.
Any update on this issue? I'm running Spark locally (master=local[*]) using a zero-to-jupyterhub installation, but I'm getting a 404 error every time I try to open the Spark UI.
I tried https://github.com/yuvipanda/jupyter-sparkui-proxy proposed by @yuvipanda, but I'm still getting a page-not-found error.
from jupyter-server-proxy.
I've been playing with this today and it seems to work as long as you start off by going to one of the Spark UI pages. This way you avoid that initial absolute path redirect that breaks it. So going to /proxy/4040/jobs/ seems to work with the latest versions of everything.
I wound up monkeypatching the function that renders the Spark cluster info where the "Spark UI" link shows up. This way it'll point it to the jupyter-server-proxy link instead. The goal is to make things more seamless feeling for users who won't be expected to know this stuff.
from pyspark.context import SparkContext

def uiWebUrl(self):
    from urllib.parse import urlparse
    web_url = self._jsc.sc().uiWebUrl().get()
    port = urlparse(web_url).port
    return "/proxy/{}/jobs/".format(port)

SparkContext.uiWebUrl = property(uiWebUrl)
Edit: The executors tab of the Spark UI doesn't seem to be working.
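The URL rewrite in that patch can be exercised without a running SparkContext by extracting it into a helper (the uiWebUrl values below are illustrative):

```python
from urllib.parse import urlparse

# Same transformation as the monkeypatched property: keep only the port of
# Spark's reported UI URL and build a jupyter-server-proxy path from it.
def proxy_url_for(web_url):
    port = urlparse(web_url).port
    return "/proxy/{}/jobs/".format(port)

print(proxy_url_for("http://jupyter-alice:4040"))  # -> /proxy/4040/jobs/
print(proxy_url_for("http://10.0.0.5:4041"))       # -> /proxy/4041/jobs/
```

Because only the port is kept, this works for whatever port Spark ends up binding (4040, 4041, ...).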
from jupyter-server-proxy.
Thank you for your comment. I tried again: I installed the proxy and set the configuration property, but the result is always the same (404 on the UI page).
I used the latest stable release of the Helm chart to install JupyterHub on my k8s cluster (0.8.2, with JupyterHub 0.9.6).
Could the problem be related to the JupyterHub version?
from jupyter-server-proxy.
I've been playing with this today and it seems to work as long as you start off by going to one of the Spark UI pages. This way you avoid that initial absolute path redirect that breaks it. So going to /proxy/4040/jobs/ seems to work with the latest versions of everything. I wound up monkeypatching the function that renders the Spark cluster info where the "Spark UI" link shows up. This way it'll point it to the jupyter-server-proxy link instead. The goal is to make things more seamless feeling for users who won't be expected to know this stuff.

from pyspark.context import SparkContext

def uiWebUrl(self):
    from urllib.parse import urlparse
    web_url = self._jsc.sc().uiWebUrl().get()
    port = urlparse(web_url).port
    return "/proxy/{}/jobs/".format(port)

SparkContext.uiWebUrl = property(uiWebUrl)

Edit: The executors tab of the Spark UI doesn't seem to be working.
I am interested in how you updated the Spark UI link to point to proxy/4040/. Does it work for any port number?
Also, the issue with the executors tab is a bug in Spark, I think. Look at the URL: it drops one of the paths, if I remember correctly. It seems fixed in Spark 2.4.4. I actually forgot about it till you said it, so I will check.
from jupyter-server-proxy.
Thank you for your comment. I tried again, I installed the proxy, set the configuration property, but the result is always the same (404 on the UI page).
I used the latest stable release of the helm chart to install jupyterhub on my k8s cluster (the 0.8.2 with the 0.9.6 version of jupyterhub).
May the problem can related to the jupyterhub version?
Correct me if I'm wrong, but does the widget only work with JupyterLab? It's working fine for me on 1.2.6.
from jupyter-server-proxy.
On my JupyterHub deployment I have to configure it as conf.set('spark.ui.proxyBase', '/user/<username>/proxy/4040'), but then it works as well!
Hi, I'm facing the same issue with the Spark history server. Will this apply to the history server too?
from jupyter-server-proxy.
I am using Anaconda and have a Jupyter notebook running on port 8890:
http://127.0.0.1:8890/notebooks/Spark_DataFrame_Basics.ipynb
Once I invoke a Spark session, I can see port 4040 LISTENING on the server.
Code:
from pyspark.sql import SparkSession
from pyspark.sql.types import StructField, StringType, IntegerType, StructType
spark = SparkSession.builder.appName('MYfirstAPP').getOrCreate()
netstat -anp | grep 4040
tcp6 0 0 :::4040 :::* LISTEN 11553/java
I can access the Spark jobs using:
My environment variables:
Name | Value
---|---
Java Home | /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.252.b09-2.el7_8.x86_64/jre
Java Version | 1.8.0_252 (Oracle Corporation)
Scala Version | version 2.12.10
spark.app.id | local-1595389946259
spark.app.name | MYfirstAPP
spark.driver.host | localhost.local
spark.driver.port | 42234
spark.executor.id | driver
spark.master | local[*]
spark.rdd.compress | True
spark.scheduler.mode | FIFO
spark.serializer.objectStreamReset | 100
spark.submit.deployMode | client
spark.submit.pyFiles |
spark.ui.showConsoleProgress | true
from jupyter-server-proxy.
I am working with Jupyterhub on K8s which I deployed using a helm chart.
So I have been unable to figure out a solution to properly display the details on the executors or even details of stages inside a job. I always get a blank page.
For example when I visit the executors page, I can see in the browser console the following message:
Failed to load resource: the server responded with a status of 404 ()
`https://{JHUB_URL}/jupyterhub/user/jupyterhub-admin/proxy/4040/api/v1/applications/4040/stages/0/0`
As noted by a user above
So, the issue is in Spark Core.
See the utility: https://github.com/apache/spark/blob/c2d0d700b551e864bb7b2ae2a175ec8ade704488/core/src/main/resources/org/apache/spark/ui/static/utils.js#L88 .
function getStandAloneAppId(cb) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (ind > 0) {
    var appId = words[ind + 1];
    cb(appId);
    return;
  }
  ...
getStandAloneAppId will always return the value after "proxy", which in our case is the port, 4040
It seems that because our URL has proxy in it, the function getStandAloneAppId(cb) uses the port as the application ID, which fails, and we get a blank page.
Was anybody able to get around this issue? I have tried using jupyter-sparkui-proxy but have the same issue. I would appreciate any help. Thank you.
from jupyter-server-proxy.
I was able to resolve the issue with the blank executors tab by custom-modifying the Spark JavaScript and the function mentioned by @hbuttguavus, getStandAloneAppId. I made similar updates to the functions createTemplateURI and createRESTEndPoint.
I didn't do anything fancy, just hardcoded the port to search for in the URI (e.g., "4040"), and if it's found, use the REST API to retrieve the app ID.
The changes are needed in the spark-core JAR, which can be unzipped, modified, and re-zipped.
In Spark 3.0 the function can be found in spark-core_2.12-3.0.1/org/apache/spark/ui/static/utils.js
In Spark 2.4 it's spark-core_2.11-2.4.4/org/apache/spark/ui/static/executorspage.js
Here's an example for Spark 2.4 (it may need to be modified based on the format of the jupyter URL):
function getStandAloneAppId(cb) {
  var words = document.baseURI.split('/');
  // Custom jupyterhub workaround that parses port number in URI
  var ind = words.indexOf("4040");
  if (ind > 0) {
    $.getJSON(location.origin + "/" + words[3] + "/user-redirect/proxy/4040/api/v1/applications", function(response, status, jqXHR) {
      if (response && response.length > 0) {
        var appId = response[0].id
        cb(appId);
        return;
      }
    });
    return; // don't fall through to the standalone REST lookup below
  }
  var ind = words.indexOf("proxy");
  var indp = words.indexOf("4040");
  if ((ind > 0) && (indp < 1)) {
    var appId = words[ind + 1];
    cb(appId);
    return;
  }
  var ind = words.indexOf("history");
  if (ind > 0) {
    var appId = words[ind + 1];
    cb(appId);
    return;
  }
  // Looks like Web UI is running in standalone mode
  // Let's get application-id using REST End Point
  $.getJSON(location.origin + "/api/v1/applications", function(response, status, jqXHR) {
    if (response && response.length > 0) {
      var appId = response[0].id
      cb(appId);
      return;
    }
  });
}
from jupyter-server-proxy.
@dbricare Do you want to submit the fix to Apache Spark? I think many of us will benefit from it. It might need some refinement though.
from jupyter-server-proxy.
@h4gen: How did you use SPARK_PUBLIC_DNS to change sc.uiWebUrl? I am attempting to do this in PySpark, but setting it as an environment variable via os.environ does not seem to work.
Looks like the URL can be changed with SPARK_PUBLIC_DNS. I tried it and changed it to <JUPYTERHUB_URL>/hub/user/<username>/proxy/4040/jobs/. This changes sc.uiWebUrl to <JUPYTERHUB_URL>/hub/user/<username>/proxy/4040/jobs/:4040, resulting in a link that is actually redirecting to the web app, but the app is still broken and links point to <JUPYTERHUB_URL>/<XYZ>.
from jupyter-server-proxy.
I fixed executors & stages pages for Pyspark 3.0.x notebooks inside the Kubeflow environment.
Put this inside your notebook Dockerfile:
# Fixing SparkUI + proxy
RUN cd /tmp && mkdir -p org/apache/spark/ui/static/ && \
curl -s https://gist.githubusercontent.com/slenky/f89ee5de18a2f075a481e3d4452a427c/raw/470c7526cdfd1022c14a0857156d26a606508c30/stagepage.js > org/apache/spark/ui/static/stagepage.js && \
curl -s https://gist.githubusercontent.com/slenky/f89ee5de18a2f075a481e3d4452a427c/raw/470c7526cdfd1022c14a0857156d26a606508c30/utils.js > org/apache/spark/ui/static/utils.js && \
zip -u $SPARK_HOME/jars/spark-core_2.12-3.0.1.jar org/apache/spark/ui/static/* && \
rm -rf /tmp/org
https://gist.github.com/slenky/f89ee5de18a2f075a481e3d4452a427c
However, now I am getting an issue with loading DataTables on both the stages & executors pages:
DataTables warning: table id=accumulator-table - Cannot reinitialise DataTable. For more information about this error, please see http://datatables.net/tn/3
Any help is appreciated
from jupyter-server-proxy.
In case of Spark 2, you need to apply this change too.
from jupyter-server-proxy.
The problem that the allexecutors endpoint returns 404 can be fixed by modifying core/src/main/resources/org/apache/spark/ui/static/utils.js. For example, our hub URL includes jupyter in the path.
- (Spark 2) https://github.com/apache/spark/blob/v2.4.8/core/src/main/resources/org/apache/spark/ui/static/executorspage.js
- (Spark 3) https://github.com/apache/spark/blob/v3.1.2/core/src/main/resources/org/apache/spark/ui/static/utils.js

function getStandAloneAppId(cb) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (document.baseURI.indexOf("jupyter") > 0) { ind = 0 } // newly added line

function createRESTEndPointForExecutorsPage(appId) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (document.baseURI.indexOf("jupyter") > 0) { ind = 0 } // newly added line

function createTemplateURI(appId, templateName) {
  var words = document.baseURI.split('/');
  var ind = words.indexOf("proxy");
  if (document.baseURI.indexOf("jupyter") > 0) { ind = 0 } // newly added line

But basically this problem could be fixed simply if the jupyter-server-proxy extension supported modifying the proxy/ URL infix, since the Spark UI JavaScript functions try to special-case the proxy string in the URL, as you can see in the code above. Is it possible to modify the proxy infix in the URL for the jupyter-server-proxy extension (e.g. by setting some option)? I searched the code of this repository, but could not find any hardcoded proxy string. The proxy string might come from jupyter-server extensions or somewhere outside of this repository :(

Is there a way to patch an existing PySpark installation? Or where do you apply the patch?
from jupyter-server-proxy.
Hi @belfhi, I built Spark 2 and 3 myself to customize the Hadoop environment and the UI-related parts.
from jupyter-server-proxy.
@1ambda
Hello, right now is there any simple way to work around the 404 issue for allexecutors?
from jupyter-server-proxy.
Is there any improvement on this issue?
Using JupyterHub on k8s, but with a local Spark 3.1.2 (on the pod, for demonstration purposes), I couldn't get access to the Spark UI even after trying some of the proposed solutions above.
from jupyter-server-proxy.
I am also curious whether there's a resolution for the allExecutors tab info. From what I can tell, there's a conflict between jupyter-server-proxy adding /proxy to the path prefix and Spark: when Spark sees "proxy" in the URL, it assumes it's the Spark-internal proxy and does something else with it.
If that's the case, I guess there's two solutions:
- Patch Spark
- Patch jupyter-server-proxy
from jupyter-server-proxy.
I tried @jgprogramming's suggestion, but it kept redirecting to $HUB_URL/jobs and gave me a 404 page.
But after I set the Spark config spark.ui.proxyRedirectUri=/, it works as expected.
from jupyter-server-proxy.
We're dealing with this issue by changing the jupyter-server-proxy handlers to a different URL than proxy (using spark-ui):
sed -e 's/r"\/proxy/r\"\/spark-ui/g' -e 's/"proxy"/"spark-ui"/g' -i /opt/conda/lib/python${PYTHON_VERSION}/site-packages/jupyter_server_proxy/handlers.py
We're doing this in a pyspark-notebook image with the jupyter-server-proxy extension baked into the Docker image. With this change, all the tabs work, including the executors.
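The effect of those two sed expressions can be illustrated in Python on handler-registration-style lines (the sample line is made up for illustration, not the actual jupyter-server-proxy source):

```python
# Mirrors: sed -e 's/r"\/proxy/r"\/spark-ui/g' -e 's/"proxy"/"spark-ui"/g'
# Renames the /proxy route prefix to /spark-ui so Spark's UI JavaScript
# no longer sees the "proxy" token in the URL.
def rename_proxy_routes(text):
    text = text.replace('r"/proxy', 'r"/spark-ui')
    return text.replace('"proxy"', '"spark-ui"')

sample = '(r"/proxy/(.*)", SomeHandler), ("proxy", port)'
print(rename_proxy_routes(sample))
# -> (r"/spark-ui/(.*)", SomeHandler), ("spark-ui", port)
```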
from jupyter-server-proxy.