jupyterlab / benchmarks

Benchmarking tools for JupyterLab

Home Page: https://jupyterlab-benchmarks.readthedocs.io

License: BSD 3-Clause "New" or "Revised" License

Languages: Jupyter Notebook 99.87%, TypeScript 0.07%, Python 0.01%, HTML 0.01%, CSS 0.01%, JavaScript 0.03%, Makefile 0.01%, Shell 0.01%
Topics: benchmarks, jupyterlab

benchmarks's Introduction

Installation | Documentation | Contributing | License | Team | Getting help


An extensible environment for interactive and reproducible computing, based on the Jupyter Notebook and Architecture.

JupyterLab is the next-generation user interface for Project Jupyter offering all the familiar building blocks of the classic Jupyter Notebook (notebook, terminal, text editor, file browser, rich outputs, etc.) in a flexible and powerful user interface.

JupyterLab can be extended using npm packages that use our public APIs. Prebuilt extensions can be distributed via PyPI, conda, and other package managers. Source extensions can be installed directly from npm (search for jupyterlab-extension) but require an additional build step. You can also find JupyterLab extensions by exploring the GitHub topic jupyterlab-extension. To learn more about extensions, see the user documentation.

Read the current JupyterLab documentation on ReadTheDocs.

Important

JupyterLab 3 reached its end of maintenance date on May 15, 2024. Fixes for critical issues will still be backported until December 31, 2024. If you are still running JupyterLab 3, we strongly encourage you to upgrade to JupyterLab 4 as soon as possible. For more information, see JupyterLab 3 end of maintenance on the Jupyter Blog.


Getting started

Installation

If you use conda, mamba, or pip, you can install JupyterLab with one of the following commands.

  • If you use conda:
    conda install -c conda-forge jupyterlab
  • If you use mamba:
    mamba install -c conda-forge jupyterlab
  • If you use pip:
    pip install jupyterlab
    If installing using pip install --user, you must add the user-level bin directory to your PATH environment variable in order to launch jupyter lab. If you are using a Unix derivative (e.g., FreeBSD, GNU/Linux, macOS), you can do this by running export PATH="$HOME/.local/bin:$PATH". If you are using a macOS version that comes with Python 2, run pip3 instead of pip.

For more detailed instructions, consult the installation guide. Project installation instructions from the git sources are available in the contributor documentation.

Installing with Previous Versions of Jupyter Notebook

When using a version of Jupyter Notebook earlier than 5.3, the following command must be run after installing JupyterLab to enable the JupyterLab server extension:

jupyter serverextension enable --py jupyterlab --sys-prefix

Running

Start up JupyterLab using:

jupyter lab

JupyterLab will open automatically in the browser. See the documentation for additional details.

If you encounter an error like "Command 'jupyter' not found", please make sure the PATH environment variable is set correctly. Alternatively, you can start JupyterLab with ~/.local/bin/jupyter lab without changing the PATH environment variable.

Prerequisites and Supported Browsers

The latest versions of the following browsers are currently known to work:

  • Firefox
  • Chrome
  • Safari

See our documentation for additional details.


Getting help

We encourage you to ask questions on the Discourse forum. A question answered there can become a useful resource for others.

Bug report

To report a bug please read the guidelines and then open a GitHub issue. To keep resolved issues self-contained, the lock bot will lock closed issues as resolved after a period of inactivity. If a related discussion is still needed after an issue is locked, please open a new issue and reference the old issue.

Feature request

We also welcome suggestions for new features as they help make the project more useful for everyone. To request a feature please use the feature request template.


Development

Extending JupyterLab

To start developing an extension for JupyterLab, see the developer documentation and the API docs.

Contributing

To contribute code or documentation to JupyterLab itself, please read the contributor documentation.

JupyterLab follows the Jupyter Community Guides.

License

JupyterLab uses a shared copyright model that enables all contributors to maintain the copyright on their contributions. All code is licensed under the terms of the revised BSD license.

Team

JupyterLab is part of Project Jupyter and is developed by an open community. The maintenance team is assisted by a much larger group of contributors to JupyterLab and Project Jupyter as a whole.

JupyterLab's current maintainers are listed in alphabetical order, with affiliation, and main areas of contribution:

  • Mehmet Bektas, Netflix (general development, extensions).
  • Alex Bozarth, IBM (general development, extensions).
  • Eric Charles, Datalayer (general development, extensions).
  • Frédéric Collonval, WebScIT (general development, extensions).
  • Martha Cryan, Mito (general development, extensions).
  • Afshin Darian, QuantStack (co-creator, application/high-level architecture, prolific contributions throughout the code base).
  • Vidar T. Fauske, JPMorgan Chase (general development, extensions).
  • Brian Granger, AWS (co-creator, strategy, vision, management, UI/UX design, architecture).
  • Jason Grout, Databricks (co-creator, vision, general development).
  • Michał Krassowski, Quansight (general development, extensions).
  • Max Klein, JPMorgan Chase (UI Package, build system, general development, extensions).
  • Gonzalo Peña-Castellanos, QuanSight (general development, i18n, extensions).
  • Fernando Perez, UC Berkeley (co-creator, vision).
  • Isabela Presedo-Floyd, QuanSight Labs (design/UX).
  • Steven Silvester, MongoDB (co-creator, release management, packaging, prolific contributions throughout the code base).
  • Jeremy Tuloup, QuantStack (general development, extensions).

Maintainer emeritus:

  • Chris Colbert, Project Jupyter (co-creator, application/low-level architecture, technical leadership, vision, PhosphorJS)
  • Jessica Forde, Project Jupyter (demo, documentation)
  • Tim George, Cal Poly (UI/UX design, strategy, management, user needs analysis).
  • Cameron Oelsen, Cal Poly (UI/UX design).
  • Ian Rose, Quansight/City of LA (general core development, extensions).
  • Andrew Schlaepfer, Bloomberg (general development, extensions).
  • Saul Shanabrook, Quansight (general development, extensions)

This list is provided to give the reader context on who we are and how our team functions. To be listed, please submit a pull request with your information.


Weekly Dev Meeting

We have videoconference meetings every week where we discuss what we have been working on and get feedback from one another.

Anyone is welcome to attend, if they would like to discuss a topic or just listen in.

Notes are archived on GitHub Jupyter Frontends team compass.


benchmarks's Issues

Running benchmark across versions with UI changes

Description

TLDR - Snapshots change across 4.0.0a26 and 4.0.0a27. This makes it tedious to generate a benchmark report to compare 3.x and 4.x.

I'm running benchmark tests as documented here with a slight change: instead of building a local checkout, I use 2 different versions of JupyterLab installed in the venv, upgrading to go from the reference state (3.4.5) to the challenger state (4.0.0a29, the latest pre-release version available at this time).

My aim is to generate benchmark reports, especially the graph depicting actual vs expected time.

However, since the snapshots change between 4.0.0a26 and 4.0.0a27, (the cell toolbar appears in 4.x's snapshots), the tests fail and the report is not generated. Updating the snapshots while running 4.0.0a27 does not fix the issue since this marks the current version (4.0.0a27 - the challenger) as the reference.

I understand that UI changes could be part of major version changes and that I can comment out the pieces of the tests that compare snapshots to only generate the benchmark report.

Is there a cleaner way to compare benchmark results across two versions with UI changes?

Thanks in advance!

Reproduce

  1. Install jupyterlab 4.0.0a26
  2. Start local jupyter lab instance
  3. Run benchmark tests
  4. Upgrade jupyterlab to 4.0.0a27
  5. Start local jupyter lab instance
  6. Run benchmark tests using 4.0.0a27 as challenger state

Tests fail since snapshots do not match.

Expected behavior

A way to compare benchmark tests from 3.x and 4.x.

Context

  • JupyterLab version: 4.0.0a26, 4.0.0a27

I've provided all information relevant to the question; please let me know if anything else is required.

Proposal: Jupyter should be able to handle large notebooks

Problem

We should add a benchmark test and make changes so that 2k cell notebooks feel good to work with. In practice, I have seen some users make notebooks in the 1k-ish range, so 2k is an arbitrary number that is bigger than that (maybe it should be 10k?).

We'd first need to define "feels good to work with" a bit more, which I'll state as something like:

  1. Allows a user to interact with it in no more than 10 seconds
  2. Clicking on cells within the notebook becomes interactive as fast as in a 10 cell notebook
  3. Characters typed in code cells are rendered as fast as in a 10 cell notebook
  4. Switching tabs should be no more than 20% slower than with a 10 cell notebook
  5. Scrolling/jumping to a cell (e.g. via ToC) should be interactive in less than 500ms
  6. It does not significantly interfere with the rest of the page (e.g. button clicks take no more than 20% longer than if the notebook was not on the page)

All the numbers and metrics above are just a starting point. Happy to put in other metrics and/or change any of the numbers, as I chose them somewhat arbitrarily as well. That being said, today none of these metrics pass.


What is it like today?

Given this generated notebook (note, there is no output for any cell which makes this simpler than in the real world):

import json
import nbformat

NUM_CELLS = 2000

# Build an empty v4 notebook with a Python 3 kernelspec.
nb = nbformat.v4.new_notebook()
nb.metadata.kernelspec = {
    "display_name": "Python 3",
    "language": "python",
    "name": "python3",
}
# Append NUM_CELLS trivial code cells, none of which have outputs.
for n in range(NUM_CELLS):
    nb.cells.append(nbformat.v4.new_code_cell("# cell {}".format(n + 1)))

with open(
    "generated-{}cells.ipynb".format(NUM_CELLS),
    "w",
) as f:
    f.write(json.dumps(nb, indent=4))

In lab 3.1 I am finding the following performance when I open the above notebook and try to use it:
[screenshot: browser performance profile of opening the notebook]

Zooming in, all the work seems to be this CodeMirror pattern over and over again:
[screenshot: the repeated CodeMirror call pattern in the profile]

While we are working on things like jupyterlab/jupyterlab#10370 and jupyterlab/lumino#231, I thought it would be good to both set a bit more defined goal and give everyone the same example to test against.

@blois, I'm curious how this notebook performs with the new colab virtualization (#68 (comment)).

What do others think?

CC those who have come to performance meetings as this size notebook was a topic of our first meeting. @fcollonval @sagemaster @echarles @Zsailer @jasongrout @afshin @ellisonbg @3coins @goanpeca

Playwright does not support chromium on mac11.4

Description

Error when trying to run jlpm

benchmarks/packages/run/node_modules/playwright/lib/helper.js:325
        throw new Error(message);
              ^

Error: ERROR: Playwright does not support chromium on mac11.4

Reproduce

  1. Quick start benchmarks as instructed
  2. run jlpm

Expected behavior

Benchmarks should start without the Playwright issue.

Context

  • Operating System and version: macOS Big Sur Version 11.4
  • Browser and version: Google Chrome Version 92.0.4515.131 (Official Build) (arm64)
  • JupyterLab version: 4.0.0a1

Proposal: create a user-facing benchmarking extension

Problem

Profiling of the UI performance issues is difficult due to differences in:

  • browsers and browser versions
  • sets of extensions users have installed, some of which are not even public
  • notebooks, cell outputs and other main area widgets used
  • complexity of workspaces

Some browser-specific performance regressions are caused by CSS rules which affect the number of elements marked for style recalculation; the existence of a single such rule in an extension will cause JupyterLab to slow down substantially unless the browser implements clever optimisation strategies. Such issues by definition cannot be detected on CI.

Proposed Solution

I would like to propose the creation of a user-facing extension enabling users and managers of JupyterLab deployments to benchmark the UI of JupyterLab locally, providing information on the performance impact of installed extensions in a specific browser of choice for a set of pre-specified actions. The extension would ideally be forward compatible with the upcoming Jupyter Notebook v7.

I would like to enable users/workspace administrators to:

  • run benchmarks for pre-specified UI actions such as (see #84):
    • opening context menu
    • switching menus in the menu bar
    • switching tabs
    • opening notebooks
    • opening completer
  • parameterise the benchmarks (e.g. using my_notebook.ipynb for tab switching)
  • generate results to share with developers (of extensions or of JupyterLab)
    • automatic or semi-automatic uploading of reports is a non-goal, as that would require some kind of privacy compliance checks; we don't really need that
    • the result should optionally include (allowing each of the options to be deselected separately):
      • count of HTML nodes on the page, broken down by type
      • basic, anonymised workspace information (how many notebooks, how many cells, how many files were opened etc)
      • basic information about the browser/OS versions
  • run the benchmarks in any supported browser to be able to exactly capture the performance issue the users are facing
  • the benchmarks would cover:
    • timing with specific stylesheets disabled to narrow down the plausible source of the problem
    • timing specific style rules disabled
    • JavaScript self-profiling for browsers which support it (currently only Chromium-based browsers) to understand if the problem comes from JS code, or any specific browser operation like layout, style or painting;

I don't know how to name it, but for now we could name the npm package @jupyterlab-benchmarks/ui-profiler and the PyPI package jupyterlab-ui-profiler.

Some UI ideas for the UI profiler:

  • it would be accessible from the Help menu, with an option to disable/re-enable the menu entries from the settings/command palette
  • the results would be displayed in a main area widget; for simplicity that could be an HTML render of static output or a notebook

I already have a PoC for the CSS benchmarking part, and if we are happy to proceed with the idea of making it a user-facing extension, I would like to wrap it in an extension, integrate with CI (for the part which can be integrated) and open a PR.
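
As a rough illustration of the kind of data such a report could include, the "count of HTML nodes broken down by type" mentioned above can be gathered with a few lines of browser-side TypeScript (a minimal sketch; collectNodeCounts is a made-up name, not an existing API):

// Count HTML nodes on the page, broken down by tag name.
// Sketch only: intended to run in the browser console or via page.evaluate().
function collectNodeCounts(): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const element of Array.from(document.querySelectorAll('*'))) {
    const tag = element.tagName.toLowerCase();
    counts[tag] = (counts[tag] ?? 0) + 1;
  }
  return counts;
}

// Example: log the ten most common tags.
const topTags = Object.entries(collectNodeCounts())
  .sort((a, b) => b[1] - a[1])
  .slice(0, 10);
console.table(topTags);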

Additional context

  • Another project, VS Code, according to its wiki on GitHub:
    • includes a self-profiler to record performance of extensions,
    • allows users to easily report workspace metadata etc,
    • for the renderer process it relies on the Chromium JS profiler.

Add @jupyterlab-benchmarks/table-render in all Dockerfiles and have large output tables

The current Dockerfiles install @jupyterlab-benchmarks/table-render on the released version of JupyterLab:

RUN jupyter labextension install @jupyterlab-benchmarks/table-render

We also need to have those in the Dockerfiles running in dev mode from the branches.

https://github.com/jupyterlab/benchmarks/blob/6f6878193c34be3d04385b49db484189e00cf79e/docker/Dockerfile.delayout

We also need notebook examples with large output tables to push the limits further:

https://github.com/jupyterlab/benchmarks/blob/6f6878193c34be3d04385b49db484189e00cf79e/docker/notebooks/fixedDataTable%20-%20Many%20fixed%20datatable%20outputs%20each%20with%2050%20columns%20and%205000%20rows.ipynb

A notebook with many output tables, each of 1000 rows, could be added.
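
For illustration, a generator along these lines could produce such a notebook (a sketch only, written as a Node/TypeScript script; the cell count, table size, and file name are arbitrary choices, not something the repo prescribes):

// Generate a notebook whose code cells each carry a large HTML table output.
// Run with ts-node; writes generated-tables.ipynb to the current directory.
import { writeFileSync } from 'fs';

const NUM_CELLS = 20;
const NUM_ROWS = 1000;

// One HTML table with NUM_ROWS rows and five columns.
const rows = Array.from({ length: NUM_ROWS }, (_, r) =>
  `<tr>${Array.from({ length: 5 }, (_, c) => `<td>${r}-${c}</td>`).join('')}</tr>`
).join('');
const table = `<table>${rows}</table>`;

const cells = Array.from({ length: NUM_CELLS }, (_, n) => ({
  cell_type: 'code',
  execution_count: n + 1,
  metadata: {},
  source: [`# cell ${n + 1}: pretend this displayed a large table`],
  outputs: [
    {
      output_type: 'execute_result',
      execution_count: n + 1,
      metadata: {},
      data: { 'text/html': [table] }
    }
  ]
}));

const notebook = {
  nbformat: 4,
  nbformat_minor: 5,
  metadata: {
    kernelspec: { display_name: 'Python 3', language: 'python', name: 'python3' }
  },
  cells
};

writeFileSync('generated-tables.ipynb', JSON.stringify(notebook, null, 2));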

benchmark vl2png not working over npm proxy

In the benchmark package, vl2png is acting weird... I think because it tries to pull in some npm packages; not sure why it does this.

From @mlucool

I first had to use the whole path for some reason (not sure). It still fails, and I think the ECONNREFUSED is likely because you are not picking up our cafile correctly

$ ./node_modules/.bin/vl2png analysis.vl.json > analysis.png && open analysis.png
npm ERR! code ECONNREFUSED
npm ERR! errno ECONNREFUSED
npm ERR! FetchError: request to registry.mycompany.com/vega failed, reason: connect ECONNREFUSED 10.0.16.87:443

Distribute benchmarks more easily

We should distribute the benchmark package in some way to make it easier to run against a development JupyterLab or a published JupyterLab.

We might also consider using rollup/webpack to pre-bundle it, so we can avoid some weirdness that stdlib does on postinstall, which doesn't always work with yarn.

Benchmark tests broken

Description

Since vega-lite 5.6.0, the benchmark tests are not working because Playwright cannot interpret the nullish assignment operator:

$ playwright test -c tests-out --project jupyterlab -u
/home/runner/work/benchmarks/benchmarks/benchmarks/node_modules/vega-lite/build/vega-lite.js:3708
          bandPosition ??= getBandPosition({
                       ^^^

SyntaxError: Unexpected token '??='
    at compileFunction (<anonymous>)
    at wrapSafe (internal/modules/cjs/loader.js:1001:16)
    at Module._compile (internal/modules/cjs/loader.js:1049:27)
    at Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Object.t.<computed>.tu._extensions.<computed> [as .js] (/home/runner/work/benchmarks/benchmarks/benchmarks/node_modules/@playwright/test/lib/utilsBundleImpl.js:16:1010)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)
    at Module.require (internal/modules/cjs/loader.js:974:19)
    at require (internal/modules/cjs/helpers.js:101:18)
    at Object.<anonymous> (/home/runner/work/benchmarks/benchmarks/benchmarks/node_modules/@jupyterlab/galata/src/benchmarkReporter.ts:16:1)
    at Module._compile (internal/modules/cjs/loader.js:1085:14)
    at Module._extensions..js (internal/modules/cjs/loader.js:1114:10)
    at Object.t.<computed>.tu._extensions.<computed> [as .js] (/home/runner/work/benchmarks/benchmarks/benchmarks/node_modules/@playwright/test/lib/utilsBundleImpl.js:16:1010)
    at Module.load (internal/modules/cjs/loader.js:950:32)
    at Function.Module._load (internal/modules/cjs/loader.js:790:12)

https://github.com/vega/vega-lite/blob/4f7f83996812840bed4089ea9fedc7b435904f0b/src/compile/mark/encode/valueref.ts#L245
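
For reference, the nullish assignment operator introduced in ES2021 only assigns when the left-hand side is null or undefined, and older Node versions cannot parse it:

// `a ??= b` assigns b to a only when a is currently null or undefined.
let bandPosition: number | undefined;
bandPosition ??= 0.5; // bandPosition is now 0.5
bandPosition ??= 1.0; // left unchanged, still 0.5
// Node versions without ES2021 syntax support fail to parse `??=`,
// which is why loading the bundled vega-lite build above raises a SyntaxError.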

Reproduce

https://github.com/jupyterlab/benchmarks/actions/runs/3257875091

Make it more robust to find browsers

Sometimes, I would have to run node ./node_modules/playwright/install.js to have it manually find the browsers to run on.

We should document this and also add an env variable to specify a custom list of browsers and their paths, so we can avoid having to make this kind of change to the code:

@@ -11,7 +11,7 @@
 const DATA_PATH = process.env['BENCHMARK_OUTPUT'] || 'out.csv';
-const BROWSERS: Array<'firefox' | 'chromium'> = ['firefox', 'chromium'];
+const BROWSERS: Array<'firefox' | 'chromium'> = ['chromium'];
 // The maximium N
 const MAX_N = 100;
 // The number of different n's to try out
@@ -67,7 +67,7 @@
      * List of types that are now too big for this browser.
      */
     const tooLong = new Set<string>();
-    const browser = await playwright[browserName].launch({ headless: false });
+    const browser = await playwright[browserName].launch({ headless: false, executablePath: '/path/to/node_modules/puppeteer/.local-chromium/linux-706915/chrome-linux/chrome' });
     const context = await browser.newContext();
     const page = await context.newPage();
     await page.setViewportSize({
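
A sketch of what the env-variable approach could look like (BENCHMARK_BROWSER and BROWSER_EXECUTABLE_PATH are made-up variable names, not something the repo defines today):

// Sketch: let environment variables choose the browser and its executable,
// instead of hard-coding them in the test source.
import * as playwright from 'playwright';

async function launchBrowser() {
  const browserName = (process.env['BENCHMARK_BROWSER'] ?? 'chromium') as
    | 'chromium'
    | 'firefox';
  const executablePath = process.env['BROWSER_EXECUTABLE_PATH'];
  return playwright[browserName].launch({
    headless: false,
    // Only pass executablePath when the user supplied one; otherwise
    // Playwright falls back to the browsers it downloaded itself.
    ...(executablePath ? { executablePath } : {})
  });
}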

Weekly benchmark report - 2022

This issue gathers the results of the weekly scheduled benchmark tests.

Those tests are run against JupyterLab master, comparing HEAD to the commit one week older than HEAD; so every benchmark test shows the evolution compared to the previous test.

For more information please refer to the documentation.

Older reports are in: #80


Report of January 2nd 2022

Benchmark report

The execution times (in milliseconds) are grouped by test file, test type and browser.
For each case, the following values are computed: min <- [1st quartile - median - 3rd quartile] -> max.

The mean relative comparison is computed with 95% confidence.
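
For readers wondering how such numbers can be obtained, the sketch below shows one way to compute a mean relative change with a 95% interval; it is an illustration of the statistics, not the reporter's actual code:

// Sketch: mean relative change of actual samples against the expected mean,
// with a 95% confidence half-width from the normal approximation.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function meanRelativeChange(actual: number[], expected: number[]) {
  const reference = mean(expected);
  const changes = actual.map(x => (x - reference) / reference);
  const m = mean(changes);
  const variance =
    changes.reduce((acc, c) => acc + (c - m) ** 2, 0) / (changes.length - 1);
  // 1.96 standard errors ~ 95% confidence for normally distributed samples.
  const halfWidth = 1.96 * Math.sqrt(variance / changes.length);
  return { mean: m, confidence95: halfWidth };
}

// A result of about -0.001 ± 0.012 would be reported as "-0.1% ± 1.2%",
// i.e. no significant change between expected and actual.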

Results table
Test file            | large_code_100_notebook | large_md_100_notebook | longOutput - A single output with 100x100 divs | errorOutputs - 10x100 errors output collected

open (chromium)
actual               | 1489 <- [1536 - 1558 - 1576] -> 2354 | 1021 <- [1110 - 1170 - 1205] -> 1415 | 1382 <- [1436 - 1452 - 1469] -> 1619 | 1654 <- [1708 - 1727 - 1753] -> 2094
expected             | 1511 <- [1547 - 1567 - 1586] -> 1751 | 1072 <- [1129 - 1185 - 1218] -> 1297 | 1389 <- [1440 - 1454 - 1469] -> 1516 | 1625 <- [1694 - 1719 - 1743] -> 2134
Mean relative change | -0.1% ± 1.2% | -1.1% ± 1.5% | 0.0% ± 0.5% | 1.0% ± 1.2%

switch-from-copy (chromium)
actual               | 112 <- [145 - 181 - 226] -> 1678 | 53 <- [68 - 89 - 335] -> 503 | 54 <- [70 - 84 - 299] -> 565 | 34 <- [55 - 65 - 119] -> 250
expected             | 114 <- [149 - 185 - 222] -> 385 | 51 <- [66 - 81 - 292] -> 464 | 56 <- [72 - 83 - 415] -> 590 | 34 <- [54 - 62 - 119] -> 260
Mean relative change | 11.1% ± 10.5% | 5.7% ± 13.6% | -0.1% ± 14.8% | 3.5% ± 9.7%

switch-to-copy (chromium)
actual               | 112 <- [146 - 171 - 189] -> 262 | 48 <- [67 - 76 - 105] -> 262 | 51 <- [68 - 79 - 88] -> 224 | 29 <- [50 - 59 - 76] -> 190
expected             | 118 <- [144 - 175 - 189] -> 216 | 47 <- [63 - 70 - 82] -> 225 | 54 <- [67 - 75 - 83] -> 224 | 33 <- [52 - 59 - 70] -> 160
Mean relative change | 0.9% ± 2.6% | 18.1% ± 8.8% | 5.5% ± 3.8% | 0.7% ± 6.1%

switch-from-txt (chromium)
actual               | 53 <- [116 - 147 - 162] -> 236 | 52 <- [63 - 69 - 83] -> 181 | 54 <- [67 - 75 - 84] -> 199 | 30 <- [55 - 63 - 75] -> 176
expected             | 65 <- [114 - 142 - 158] -> 201 | 53 <- [62 - 67 - 75] -> 144 | 54 <- [66 - 75 - 83] -> 186 | 30 <- [55 - 62 - 75] -> 163
Mean relative change | 3.3% ± 3.3% | 7.4% ± 4.0% | 2.1% ± 3.4% | 1.3% ± 5.5%

switch-to-txt (chromium)
actual               | 40 <- [53 - 62 - 70] -> 233 | 36 <- [52 - 65 - 72] -> 142 | 49 <- [72 - 84 - 103] -> 220 | 23 <- [38 - 46 - 53] -> 93
expected             | 41 <- [51 - 62 - 71] -> 197 | 39 <- [53 - 66 - 73] -> 115 | 52 <- [73 - 85 - 102] -> 184 | 23 <- [40 - 49 - 56] -> 106
Mean relative change | 3.2% ± 7.0% | -0.8% ± 3.7% | 1.7% ± 5.3% | -6.6% ± 3.7%

close (chromium)
actual               | 463 <- [558 - 577 - 599] -> 677 | 374 <- [395 - 410 - 421] -> 570 | 670 <- [687 - 695 - 710] -> 772 | 187 <- [197 - 204 - 214] -> 285
expected             | 469 <- [572 - 587 - 601] -> 658 | 379 <- [401 - 410 - 419] -> 442 | 670 <- [680 - 691 - 698] -> 746 | 184 <- [193 - 197 - 208] -> 281
Mean relative change | -1.2% ± 1.9% | 0.8% ± 1.4% | 1.3% ± 0.7% | 2.4% ± 3.1%

Changes are computed with expected as reference.


Expected = 26335b4d936925a7cbbdf31a86fa8bd4495efa83 | Actual = 602b05399b0ca762613c8f560a49b15abdefee39
Go to action log
Changelog covered

Originally posted by @github-actions[bot] in #80 (comment)

[Chrome] Actions outside of a large notebook are sometimes slow

Description

After a large notebook fully renders, clicking on menus, tabs, etc. is much, much slower at times (i.e. this isn't always slow on Chrome). I have noticed this in Chrome more so than in Firefox.

The way it seems to be reacting is as if the whole main thread is blocked. Some profiling showed large "Recalculating styles" but not much more to go on.

Reproduce

Use generated-1000cells-0.9ratio-2loc.ipynb and let it fully load. Click on a menu (e.g. File, Kernel) or switch between the file browser and ToC. The video below took a couple of tries and shows that it took a few seconds at times to respond to a mouse click. I have seen it much, much worse than this, but have not yet been able to create a reproducer.

Screen.Recording.2022-02-25.at.2.48.42.PM.mov

Expected behavior

It's fast!!

Context

Lab: 3.3.0rc0
Chrome: 98.0.4758.109
Firefox: 97.0

Lernafy (and remove duplicate compare.ts)

#24 has introduced a package to compare with a duplicate of compare.ts

We should bring all code under packages and have lerna do the module resolution to avoid that duplicate.

Having other large notebooks open while tab switching

Problem

There were some recent issues where a change that gave a performance improvement when just 2 notebooks were open and switched between gave a performance penalty when multiple large files were open.

Proposed Solution

Keep multiple copies of large files open during benchmarking (maybe as a new class, maybe as a change to an existing one). As long as they have been displayed once before switching between tabs, all their elements (Lumino and/or DOM) will still affect the performance. This should be closer to many symptomatic cases that users will see during their normal work.
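
A rough Playwright sketch of the scenario (the URL, file names, selectors and readiness checks here are placeholders and approximations, not the repo's actual fixtures or helpers):

import { chromium } from 'playwright';

// Sketch: open several large notebooks once, then time switching back to the
// first tab while the others are still open.
async function switchWithManyNotebooksOpen() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto('http://localhost:9999/lab'); // placeholder benchmark server

  const notebooks = ['large1.ipynb', 'large2.ipynb', 'large3.ipynb'];
  for (const name of notebooks) {
    // Open each notebook once so its widgets/DOM exist before measuring.
    await page.dblclick(`.jp-DirListing-item :text("${name}")`);
    await page.waitForSelector('.jp-Notebook', { state: 'visible' });
  }

  const start = Date.now();
  await page.click(`.lm-TabBar-tab :text("${notebooks[0]}")`);
  await page.waitForSelector('.jp-Notebook', { state: 'visible' });
  console.log(`tab switch took ${Date.now() - start} ms`);

  await browser.close();
}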

Jupyterlab Tab Close Performance

Description

Similar to switching tabs (jupyterlab/jupyterlab#4292) and opening notebooks, closing notebooks is slow.

Reproduce

Open a notebook with 1k cells.

The performance graph looks like this when closing it (all the purple at the bottom is another forced reflow):
[screenshot: performance profile while closing the notebook]

Expected behavior

It closes even faster than it opens

Context

  • Operating System and version: Linux
  • Browser and version: Chrome
  • JupyterLab version: 3.1.x

How to benchmark lumino branches/forks?

Problem

Running benchmarks against custom branches of JupyterLab is a pleasant experience, but benchmarking against custom lumino forks seems tricky. Can we benchmark JupyterLab against a lumino fork/branch?

Proposed Solution

We could add another parameter to the GitHub Action, which would result in using lumino from given fork/branch.

I initially thought about a local NPM registry overwriting the published lumino versions, but probably we should just make use of: https://jupyterlab.readthedocs.io/en/stable/developer/contributing.html#linking-unlinking-packages-to-jupyterlab

Additional Context

@fcollonval I wonder what your workflow was for jupyterlab/lumino#231: was your benchmarking automated or fully manual?

Build Benchmarks RTD on-merge in main branch

We need to automate the docs building on each merge of the main/master branch, especially because we iterate in this inception phase.

I have tried to configure this based on previous experience on the RTC project as documented by @saulshanabrook on https://github.com/jupyterlab/rtc/issues/62#issuecomment-661136761

Saul says

Delete the Read the Docs integration from my personal github account.
Remove the github integration from my Read the Docs account
Re-add the github integration, making sure to grant access to the jupyterlab org
Re-add the webhook for the repo from the Read the Docs admin page

How can I make sure to grant access to the jupyterlab org?

Should we add more test cases: menu responsiveness, latex and moving tabs around

Problem

jupyterlab/jupyterlab#9757 highlights some scenarios where the entire UI becomes slow to respond due to a high number of DOM nodes.

Proposed Solution

Should we add a test case for this scenario, or is it already somewhat covered by any of the existing benchmarks? Should we add a case for each non-covered manifestation of the problem (menu responsiveness, LaTeX rendering, moving tabs around)?
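
If we do add such a case, a menu-responsiveness measurement could look roughly like this (a sketch; the selectors are approximate and a real test would reuse the existing Galata/Playwright helpers):

import type { Page } from 'playwright';

// Sketch: time how long the File menu takes to appear after it is clicked.
async function timeMenuOpen(page: Page): Promise<number> {
  const start = Date.now();
  await page.click('text=File');
  await page.waitForSelector('.lm-Menu-content', { state: 'visible' });
  return Date.now() - start;
}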

Additional context

Tab switching is already covered (but it seems it has a different cause).

Add some time wait between samples of a test

I think I have seen mostly increasing numbers when running the tests in a row.

So every time a file opens and closes there is some extra work the browser may be doing (cleanup, ...). Adding a wait time (a second or so) may make the results less variable (reduce variance) and more accurately measure the specific actions, instead of accumulating browser work from the previous sample.
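
In the Playwright-based tests this could be as simple as pausing between iterations (a sketch; runSingleSample is a hypothetical helper standing in for one open/measure/close cycle):

import type { Page } from 'playwright';

// Sketch: leave the browser idle between samples so cleanup work from the
// previous open/close cycle does not leak into the next measurement.
async function runSamplesWithPause(
  page: Page,
  numberOfSamples: number,
  runSingleSample: (page: Page) => Promise<number>
): Promise<number[]> {
  const timings: number[] = [];
  for (let sample = 0; sample < numberOfSamples; sample++) {
    timings.push(await runSingleSample(page));
    await page.waitForTimeout(1000); // settle for roughly a second between samples
  }
  return timings;
}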

[CI] Merge runs and see the difference

For now, CI launches 3 activities based on the matrix, e.g.:
[Screenshot 2020-09-10 at 08 52 53]

This allows runs to be faster and brings more tuning capability, but it also introduces potential issues, as the benchmark runs we compare run in different environments / virtual machines (at least this is my understanding, as they run in parallel).

We could explore a configuration for the virtual-dom case that would run on a single machine and compare the result with the existing parallel ones.

CI is failing on `master` branch

  • The Execute benchmark tests for validation step has been failing since the end of February: https://github.com/jupyterlab/benchmarks/actions/runs/4280818726/jobs/7453054661

    Run set -ex
      set -ex
      # Update the snapshots
      BENCHMARK_NUMBER_SAMPLES=1 PW_VIDEO=1 jlpm run test --project testing -u
      # Check that all tests pass with updated snapshots
      BENCHMARK_NUMBER_SAMPLES=1 PW_VIDEO=1 jlpm run test --project testing
      shell: /usr/bin/bash -e {0}
      env:
        BENCHMARK_SWITCHES: 3
        pythonLocation: /opt/hostedtoolcache/Python/3.10.10/x64
        PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.10.10/x64/lib/pkgconfig
        Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.10/x64
        Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.10/x64
        Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.10/x64
        LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.10/x64/lib
    + BENCHMARK_NUMBER_SAMPLES=1
    + PW_VIDEO=1
    + jlpm run test --project testing -u
    yarn run v1.22.19
    $ tsc --incremental -p tsconfig.json
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(612,34): error TS1005: '?' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(613,17): error TS1005: ':' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(614,17): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(617,34): error TS1005: '?' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(618,17): error TS1005: ':' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(619,17): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(619,26): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(620,5): error TS1109: Expression expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(624,24): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(628,35): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(634,39): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(638,21): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(643,19): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(643,27): error TS1005: ':' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(643,36): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(643,55): error TS1005: '{' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(643,64): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(643,77): error TS1005: ',' expected.
    Error: ../node_modules/@types/node/ts4.8/test.d.ts(647,22): error TS1005: ',' expected.
    error Command failed with exit code 2.
    info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
    Error: Process completed with exit code 2.
    
  • An earlier step, Launch JupyterLab, started failing, as seen on master of my fork (fully synced with upstream): https://github.com/krassowski/benchmarks/actions/runs/4589258332/jobs/8104021108

     Run jlpm install
       jlpm install
       # Mount a volume to overwrite the server configuration
       jlpm start-jlab 2>&1 > /tmp/jupyterlab_server.log &
       shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
       env:
         BENCHMARK_SWITCHES: 3
         pythonLocation: /opt/hostedtoolcache/Python/3.10.10/x64
         PKG_CONFIG_PATH: /opt/hostedtoolcache/Python/3.10.10/x64/lib/pkgconfig
         Python_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.10/x64
         Python2_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.10/x64
         Python3_ROOT_DIR: /opt/hostedtoolcache/Python/3.10.10/x64
         LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.10.10/x64/lib
       
     ➤ YN0070: Migrating from Yarn 1; automatically enabling the compatibility node-modules linker 👍
     
     ➤ YN0000: ┌ Resolution step
     Resolution step
       ➤ YN0061: │ @evocateur/pacote@npm:9.6.5 is deprecated: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
       ➤ YN0061: │ @evocateur/libnpmaccess@npm:3.1.2 is deprecated: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
       ➤ YN0061: │ @evocateur/npm-registry-fetch@npm:4.0.0 is deprecated: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
       ➤ YN0061: │ read-package-tree@npm:5.3.1 is deprecated: The functionality that this package provided is now in @npmcli/arborist
       ➤ YN0061: │ readdir-scoped-modules@npm:1.1.0 is deprecated: This functionality has been moved to @npmcli/fs
       ➤ YN0061: │ uuid@npm:3.4.0 is deprecated: Please upgrade to version 7 or higher. Older versions may use Math.random() in certain circumstances, which is known to be problematic. See https://v8.dev/blog/math-random for details.
       ➤ YN0061: │ querystring@npm:0.2.0 is deprecated: The querystring API is considered Legacy. new code should use the URLSearchParams API instead.
       ➤ YN0061: │ @evocateur/libnpmpublish@npm:1.2.2 is deprecated: Package no longer supported. Contact Support at https://www.npmjs.com/support for more info.
       ➤ YN0061: │ @npmcli/move-file@npm:1.1.2 is deprecated: This functionality has been moved to @npmcli/fs
       ➤ YN0061: │ node-fetch-npm@npm:2.0.4 is deprecated: This module is not used anymore, npm uses minipass-fetch for its fetch implementation now
       ➤ YN0061: │ source-map-resolve@npm:0.5.3 is deprecated: See https://github.com/lydell/source-map-resolve#deprecated
       ➤ YN0032: │ fsevents@npm:2.3.2: Implicit dependencies on node-gyp are discouraged
       ➤ YN0061: │ @npmcli/move-file@npm:2.0.1 is deprecated: This functionality has been moved to @npmcli/fs
       ➤ YN0061: │ request@npm:2.88.2 is deprecated: request has been deprecated, see https://github.com/request/request/issues/3142
       ➤ YN0061: │ resolve-url@npm:0.2.1 is deprecated: https://github.com/lydell/resolve-url#deprecated
       ➤ YN0061: │ urix@npm:0.1.0 is deprecated: Please see https://github.com/lydell/urix#deprecated
       ➤ YN0061: │ source-map-url@npm:0.4.1 is deprecated: See https://github.com/lydell/source-map-url#deprecated
       ➤ YN0061: │ har-validator@npm:5.1.5 is deprecated: this library is no longer supported
       ➤ YN0061: │ mkdirp-promise@npm:5.0.1 is deprecated: This package is broken and no longer maintained. 'mkdirp' itself supports promises now, please switch to that.
       ➤ YN0061: │ popper.js@npm:1.16.1 is deprecated: You can find the new Popper v2 at @popperjs/core, this package is dedicated to the legacy v1
       ➤ YN0032: │ leveldown@npm:5.6.0: Implicit dependencies on node-gyp are discouraged
       ➤ YN0002: │ @jupyterlab-benchmarks/table-render@workspace:extensions/fixed-data-table doesn't provide react-dom (pa8553), requested by fixed-data-table-2
       ➤ YN0002: │ @jupyterlab/application@npm:3.6.3 doesn't provide crypto (pfde1e), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/apputils@npm:3.6.3 doesn't provide crypto (pf6235), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/builder@npm:3.6.3 doesn't provide crypto (p0a209), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/cells@npm:3.6.3 doesn't provide crypto (pf05ee), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/codeeditor@npm:3.6.3 doesn't provide crypto (pc9273), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/codemirror@npm:3.6.3 doesn't provide crypto (p7ee66), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/codemirror@npm:3.6.3 doesn't provide yjs (pac1cf), requested by y-codemirror
       ➤ YN0002: │ @jupyterlab/coreutils@npm:5.6.3 doesn't provide crypto (p9bc55), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/docmanager@npm:3.6.3 doesn't provide crypto (p89acc), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/docprovider@npm:3.6.3 doesn't provide crypto (p3c205), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/docprovider@npm:3.6.3 doesn't provide yjs (pc7994), requested by y-websocket
       ➤ YN0002: │ @jupyterlab/docregistry@npm:3.6.3 doesn't provide crypto (p6566d), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/filebrowser@npm:3.6.3 doesn't provide crypto (pe680c), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/galata@npm:4.5.3 doesn't provide crypto (pa9d20), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/nbformat@npm:3.6.3 doesn't provide crypto (pcf328), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/notebook@npm:3.6.3 doesn't provide crypto (pd0582), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/observables@npm:4.6.3 doesn't provide crypto (p6661c), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/outputarea@npm:3.6.3 doesn't provide crypto (p28759), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/rendermime-interfaces@npm:3.6.3 doesn't provide crypto (pb230c), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/rendermime@npm:3.6.3 doesn't provide crypto (p4c525), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/services@npm:6.6.3 doesn't provide crypto (pf1e13), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/settingregistry@npm:3.6.3 doesn't provide crypto (p03049), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/statedb@npm:3.6.3 doesn't provide crypto (p03c0d), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/statusbar@npm:3.6.3 doesn't provide crypto (p7bf82), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/translation@npm:3.6.3 doesn't provide crypto (p42910), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/ui-components@npm:3.6.3 [66f46] doesn't provide crypto (p6976f), requested by @lumino/coreutils
       ➤ YN0002: │ @jupyterlab/ui-components@npm:3.6.3 [7a438] doesn't provide crypto (p20226), requested by @lumino/coreutils
       ➤ YN0002: │ @lumino/application@npm:1.31.4 doesn't provide crypto (p43fa4), requested by @lumino/coreutils
       ➤ YN0002: │ @lumino/commands@npm:1.21.1 doesn't provide crypto (pf607b), requested by @lumino/coreutils
       ➤ YN0002: │ @lumino/dragdrop@npm:1.14.5 doesn't provide crypto (pc4019), requested by @lumino/coreutils
       ➤ YN0002: │ @lumino/polling@npm:1.11.4 doesn't provide crypto (p53824), requested by @lumino/coreutils
       ➤ YN0002: │ @lumino/widgets@npm:1.37.2 doesn't provide crypto (p763f2), requested by @lumino/coreutils
       ➤ YN0002: │ @octokit/rest@npm:16.43.2 doesn't provide @octokit/core (p84523), requested by @octokit/plugin-request-log
       ➤ YN0000: │ Some peer dependencies are incorrectly met; run yarn explain peer-requirements <hash> for details, where <hash> is the six-letter p-prefixed code
       
     ➤ YN0000: └ Completed in 14s 891ms
     ➤ YN0000: ┌ Post-resolution validation
     Post-resolution validation
       ➤ YN0028: │ The lockfile would have been modified by this install, which is explicitly forbidden.
     ➤ YN0000: └ Completed
     ➤ YN0000: Failed with errors in 15s 47ms
     Error: Process completed with exit code 1.
    

Test JupyterLab with many extensions

Problem

It would be interesting to set up some basic tests to see how the number of installed extensions could impact the loading time of the JupyterLab application.

For example does the loading time grow linearly with the number of installed extensions?

Proposed Solution

Set up a (recurring?) job that would pull the latest JupyterLab stable (or dev) version, install a given number of extensions (for example 10 or 100), and measure the time it takes for the splash animation to disappear.
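
A minimal sketch of the measurement half of such a job (the URL and the splash selector are assumptions and would need to match the deployment; installing the extensions beforehand is not shown):

import { chromium } from 'playwright';

// Sketch: time how long JupyterLab takes to become ready after navigation.
async function measureStartup(url = 'http://localhost:8888/lab'): Promise<number> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const start = Date.now();
  await page.goto(url);
  // '#jupyterlab-splash' is assumed here; use whatever element reliably
  // signals that the splash animation has gone away in the tested version.
  await page.waitForSelector('#jupyterlab-splash', { state: 'detached' });
  const elapsed = Date.now() - start;
  await browser.close();
  return elapsed;
}

// Repeating this with 10, 50, 100 extensions installed would show whether
// loading time grows roughly linearly with the number of extensions.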

Additional context

This can be relevant for large organizations and JupyterLab deployments that include many third-party extensions.
