
The platform code behind MDN Web Docs

License: Mozilla Public License 2.0


Yari


Quickstart

Development on yari involves updating the machinery that renders MDN content or improving the structure and styling of the MDN UI (e.g. the styling of the header). If you are more interested in contributing to the MDN content, you should check out the content repo README instead.

Before you can start working with Yari, you need to:

  1. Install git, Node.js, and Yarn 1.

  2. Fork the MDN content and yari repositories using the Fork button on GitHub.

  3. Clone the forked repositories to your computer using the following commands (replace [your_account] with the account you forked the repositories to):

    git clone https://github.com/[your_account]/content.git
    git clone https://github.com/[your_account]/yari.git
    

To run Yari locally, you'll first need to install its dependencies and build the app locally. Do this like so:

cd yari
yarn install

See the troubleshooting section below if you run into problems.

Now copy the .env-dist file to .env:

cp .env-dist .env

If you followed the instructions above and cloned the content repo as a sibling of your yari repo, the CONTENT_ROOT environment variable is already set and Yari will be able to find the content it needs to render.

At this point, you can get started. Run the following lines to compile required files, start the Yari web server running, and open it in your browser:

yarn dev
open http://localhost:3000

If you prefer, you can use yarn start, which reuses any previously compiled files; this is "riskier" but faster. yarn dev always ensures that everything is up to date.

The yarn start command also starts a server with slightly different behavior: it doesn't automatically reload when its source code files change, so use it with caution.

See also our reviewing guide for information on how to review Yari changes.

Pull request requirements

Firstly, thank you for your interest in contributing to Yari! We do have a few requirements when it comes to pull requests:

  1. Please make use of a feature branch workflow.
  2. We prefer if you use the conventional commits format when making pull requests.
  3. Lastly, we require that all commits are signed. Please see the documentation about signed commits and how to sign yours on GitHub.

Thank you for your understanding! We look forward to your contributions.

How to stay up-to-date

Periodically, the code and the content changes. Make sure you stay up-to-date with something along the following lines (replace yari-origin with whatever you called the remote location of the original yari repo):

git pull yari-origin main
yarn
yarn dev

When you embark on making a change, do it on a new branch, for example git checkout -b my-new-branch.

License

All source code is MPL-2.0.

For content, see its license in the mdn/content repository.

Supported Platforms

yari runs on Linux in CI and when building for production.

We also support Windows and macOS; however, we don't aim to proactively catch issues with CI on those platforms. If bugs arise, we welcome issues being filed or PRs being opened to fix them.

How it works

Yari does a number of things, the most important of which is to render and serve the MDN content found in the content repo. Each document is stored as an index.md (recommended) or index.html file that contains metadata presented as YAML front-matter followed by the document source.

The builder converts these "source files" into "build files" using a CLI tool that iterates over the files, builds the HTML, and lastly packages it up with the front-end code, ready to be served as static files.
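As a rough illustration of the "source file" format described above (this is not Yari's actual code; the function and field names are made up for the sketch):

```javascript
// Hypothetical sketch: split a source file into its YAML front-matter
// (delimited by "---" lines) and the document source that follows.
function splitSource(raw) {
  const match = raw.match(/^---\n([\s\S]*?)\n---\n([\s\S]*)$/);
  if (!match) {
    throw new Error("missing YAML front-matter");
  }
  return { frontMatter: match[1], body: match[2] };
}

const { frontMatter, body } = splitSource(
  "---\ntitle: <video>\nslug: Web/HTML/Element/video\n---\n# The video element\n"
);
```

The builder would then render the body to HTML and carry the front-matter along as document metadata.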

Development

The yarn start command encapsulates the front-end dev server (on http://localhost:3000) and the server (on http://localhost:5042).

All the sub-commands of yarn start can be broken down and run individually if you want to work more rapidly.

Setting up $EDITOR

If you configure an environment variable called EDITOR, either on your system as a whole or in the root .env file, it can be used in the development server to link to sources which, when clicked, open in your preferred editor/IDE. For example, in the root of the repo you could run:

echo 'EDITOR=code' >> .env

Now clicking certain links will open files directly in the currently open VS Code IDE (replace code in the above command with a different text editor name if needed, e.g. atom or whatever). To test it, view any document on http://localhost:3000 and click the "Open in your editor" button.

How the server works

The server has two main jobs:

  1. Simulate serving the site (e.g. from a server, S3 or a CDN).
  2. Trigger builds of documents that haven't been built, by URL.

Linting

All JavaScript and TypeScript code needs to be formatted with prettier and it's easy to test this with:

yarn prettier-check

And conveniently, if you aren't interested in reviewing the flaws and just want them fixed, run:

yarn prettier-format

When you run yarn for the first time (yarn is an alias for yarn install), it automatically sets up a git pre-commit hook that uses lint-staged — a wrapper for prettier that checks only the files in the git commit.

If you have doubts about formatting, submit your pull request anyway. If it has formatting flaws, the pull request checks should catch them.

Upgrading Packages

We maintain the dependencies using Dependabot in GitHub but if you want to manually upgrade them you can use:

yarn upgrade-interactive --latest

Sharing your dev environment with ngrok

ngrok allows you to start an HTTP proxy server from the web into your Yari server. This can be useful for testing your current build using external tools like BrowserStack, WebPageTest, or Google Translate, or to simply show a friend what you're up to. Obviously it'll never be faster than your uplink Internet connection but it should be fairly feature-complete.

  1. Create an account on Ngrok.com
  2. Download the executable
  3. Start your Yari server with yarn start in one terminal
  4. Start the ngrok executable with: /path/to/your/ngrok http 5042

This will display something like this:

Session Status                online
Account                       (Plan: Free)
Version                       2.3.35
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://920ba2108da8.ngrok.io -> http://localhost:5042
Forwarding                    https://920ba2108da8.ngrok.io -> http://localhost:5042

Connections                   ttl     opn     rt1     rt5     p50     p90
                              0       0       0.00    0.00    0.00    0.00

Now, take that "Forwarding" URL (https://920ba2108da8.ngrok.io in this example) and share it.

Building

The server builds content automatically (on-the-fly) when you're viewing pages, but you can pre-emptively build all the content in advance if desired. One potential advantage is that you can get a more complete list of all possible "flaws" across all documents before you even visit them.

The most fundamental CLI command is:

yarn build

What gets built

Every index.html becomes two files:

  • index.html — a fully formed and complete HTML file
  • index.json — the state information React needs to build the page in the client

Flaw checks

When building you can enable specific "flaw checks" and their level of handling. Some flaws are "cosmetic" and some are more severe but they should never block a full build.

More information about how to set flaws can be found in docs/envvars.md.

Essentially, the default is to warn about any flaw, and you can see those flaws when using http://localhost:3000. For production builds, all flaws are ignored. This makes the build faster, and there's also no good place to display the flaws in a production-grade build.

In the future, we might make the default flaw level error instead. That means that any new edits to (or creation of) any document will break in continuous integration if there's a single flaw and the onus will be on you to fix it.

Icons and logos

The various formats and sizes of the favicon are generated from the file mdn-web-docs.svg in the repository root. This file is then converted to favicons using realfavicongenerator.net. To generate new favicons, edit or replace the mdn-web-docs.svg file and then re-upload that to realfavicongenerator.net.

Contact

If you want to talk to us, ask questions, and find out more, join the discussion on the MDN Web Docs chat room on Matrix.

Troubleshooting

Some common issues and how to resolve them.

Error: ENOSPC: System limit for number of file watchers reached

There are two options to resolve this.

  1. Disable the watcher via REACT_APP_NO_WATCHER

    echo REACT_APP_NO_WATCHER=true >> .env

  2. Increase max_user_watches:
    See https://github.com/guard/listen#increasing-the-amount-of-inotify-watchers

Error: Cannot find module 'levenary'

We can't know for sure what's causing this error, but we suspect a bug in how yarn resolves whether certain @babel helper libraries should install their own sub-dependencies. A sure way to solve it is to run:

rm -fr node_modules
yarn install

Error: listen EADDRINUSE: address already in use :::5042

The default server port :5042 might be in use by another process. To resolve this, you can pick any unused port (e.g., 6000) and run the following:

echo SERVER_PORT=6000 >> .env

Yarn install errors

If you get errors while installing dependencies via yarn on a Mac, you may need to install some additional packages. Check the error message for the package name causing the problem.

  1. First, install brew if you haven’t already

  2. To fix problems with gifsicle:

    brew install automake autoconf libtool

  3. To fix problems with pngquant-bin:

    brew install pkg-config

  4. To fix problems with mozjpeg:

    brew install libpng
    sudo ln -s /opt/homebrew/Cellar/libpng/1.6.40/lib/libpng16.a /usr/local/lib/libpng16.a

You may need to adjust the path to libpng16.a depending on the version of libpng you have installed.


yari's Issues

Little things to make this look great

Important

  • Build the Browser Compat Table component
  • Build the Interactive Examples component
  • Build the Attributes component
  • Come up with a better name for this project (we decided to rename it to stumptown-renderer)
  • Move this repo to the mdn GitHub org
  • Make a full deployment build and upload somewhere (e.g. GitHub pages)
  • Some basic unit tests
  • ESLint
  • Prettier
  • Refactor code so that the core React components don't exclusively sit inside the client app
  • CI for testing and linting
  • A nice footer
  • Sidebar (imperfect)

Nice to haves

  • docker-compose to start all the things
  • Building a sitemap.xml (or sitemaps.xml etc.)
  • Build a JSON file for all titles for possible search
  • Some MDN nice looking stylesheets
  • Figure out how to display document metadata (contributors, tags, last modified)
  • Tooling to automatically update stumptown-experiment git submodule
  • HTML validation tooling
  • Service Worker (should be disabled in PR staging builds!)
  • Lazy loading non-critical features
  • Site search (titles)
  • Netlify deployment should be easy
  • Have control over our server-side rendered 404 pages.
  • CSP headers and Mozilla Observatory A+
  • Google Analytics
  • humans.txt file

Improve a11y of SVG elements using ARIA

After reading the tip Chris Mills recently learned with regard to setting role=img on an SVG, I did some more reading and found some additional things we can do to improve the a11y of our SVG images.

This means updating the current SVG elements of the icon system to make proper use of:

  • role="img"
  • aria-labelledby="title desc"
  • <title> with an id that matches the value of aria-labelledby
  • <desc> (as appropriate) with an id that matches the value of aria-labelledby
  • role="presentation" on the SVG child element, unless the child is of type text

More can be read here:
https://developer.paciellogroup.com/blog/2013/12/using-aria-enhance-svg-accessibility/

Bugzilla

https://bugzilla.mozilla.org/show_bug.cgi?id=1491779

Two React bundles?

Would it be worth it to break up the React work?
In the CLI, we can use React to stitch together the large HTML blobs, producing, for example, video/index.html and video/index.json.
No need for hydration or any inlined React state to support hydration.

Then, on the client, we have a different React bundle that does different things. (And if done correctly, it could be swapped out for Preact or Svelte or whatever.) It handles routing, XHR loading for routing, header stuff (e.g. autocomplete), etc. If routing detects that the page has changed, it fetches the JSON for the new page via XHR and simply injects the whole thing as one large blob with dangerouslySetInnerHTML. We'd need to make that video.json dumber. It could basically look like this:

{
    "title": "<video> - The HTML video tag",
    "body": "<h2>Overview</h2>\n<p>Bla bla bla...",
    "sidebar": "<h3>References</h3>\n<ul>...",
    "bcd": "html-video-bcd",
    "some": "metadata"
}

That body in the JSON would contain everything except the <div> that displays the BCD table. Instead it could be a placeholder like <div id="bcd" data-id="html-video-bcd">. Then, the client-side React bundle could do a second XHR request just to get the BCD data and hydrate that into this div.
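A sketch of that idea (all names here are illustrative, nothing in this snippet exists in Yari): the server emits the body plus a placeholder div, and the client later fetches the BCD payload named by data-id in a second request.

```javascript
// Hypothetical server-side helper: render the "dumb" body JSON with a
// BCD placeholder div instead of the full compat table.
function renderBodyWithPlaceholder(page) {
  return [page.body, `<div id="bcd" data-id="${page.bcd}"></div>`].join("\n");
}

// The client-side bundle would find the placeholder and fetch the BCD
// JSON (URL scheme assumed) to hydrate just that div.
const html = renderBodyWithPlaceholder({
  body: "<h2>Overview</h2>\n<p>Bla bla bla...</p>",
  bcd: "html-video-bcd",
});
```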

Possible outcome:

  1. Smaller .js bundle for the client-side
  2. Crisp separation between React components that stitch together HTML page and React components that deal with user interaction.
  3. Ability to abandon React and swap it for something different without breaking "the other side"
  4. Smaller HTML since we wouldn't need to ship the hydration data inline.
  5. Still get fancy things like lazy-loading BCD tables.

Possible risks:

  1. If we want to add more and more advanced functionality within the large HTML blob, it'll be hard.

submodules suck - but is it in the right place

In #85 (comment), Daniel and I got into a tangential discussion about git submodules and how the stumptown-renderer works. It's not urgent to solve now, but it should definitely be on our active radar.

The goal should be, I think:

  • You merge a change to a .md file into master
  • That change goes to production immediately (minus latencies)

Also, a change to mdn-browser-compat-data is a merge to master too, and it should immediately update every page that has a BCD table in production.

stumptown-renderer already has the ability to override which version of stumptown-content it should build with. So, perhaps instead of stumptown-renderer --(specific sha of stumptown-content)--> stumptown-content, we make it the other way around. I.e. stumptown-content has a git submodule pointing to stumptown-renderer.

Then, in .travis.yml, if all the tests pass, it could as a final step do something like this:

export STUMPTOWN_CONTENT_ROOT=`pwd`
cd stumptown-renderer
make deployment-build
if [ "$branch" = "master" ]; then
  push-to-prod-s3-and-cloudfront client/build
fi 

Note! I'm not convinced we update production just because there's a push to master. It might be better to do something like (pseudo code):

if is_tag():
    deploy_to_prod()
elif branch == "master":
    deploy_to_stage()
elif trusted_user($PR_OWNER):
    deploy_to_pr_subdomain()
else:
    echo "Not deploying anything"

related_content is heavy in JSON

In the renderer, we include all the "state" necessary to render a page. It's encoded as JSON into the HTML file. That's so React can hydrate.
Also, if you click on an internal link, in the sidebar for example, it will only need to download the index.json file which is a lot smaller.

That JSON is getting pretty big and I think there's room for improvement/optimizations.

I analyzed the sizes of sub-keys in the client/build/docs/Web/HTML/Element/**/*.json files and concluded this:

related_content                  5358    47%
prose                            2405    21%
contributors                     1627    14%
browser_compatibility            1307    11%
examples                          412     3%
interactive_example_url            70     0%
mdn_url                            58     0%
title                              34     0%
attributes                          2     0%

E.g. a json like this

This all gets gzipped and stuff but let's get performance as a primary thought now rather than an afterthought. But, let's also not prematurely optimize.

Hydrate with less content

At the moment, when we server-side render a page (e.g. a document), we inject the JSON source (the JSON built from the Stumptown JSON) into the DOM as <script id="document" type="application/json">. That way, when you do the first client-side render, it leverages this to avoid an XHR lookup. But 99% of that content is not needed for hydration.

The content of that script tag is about 30KB (7.4KB gzipped). Since we mostly don't need it, we could possibly strip out almost all of it. That would boost the initial load by making the whole .html file smaller.
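One way to picture the stripping (the exact field list the client actually needs is an assumption here, and the function is hypothetical):

```javascript
// Hypothetical: keep only the fields needed for the first client-side
// render, dropping the bulky rendered prose from the inlined JSON.
function minimalHydrationData(doc) {
  const { title, mdn_url } = doc;
  return { title, mdn_url };
}

const slim = minimalHydrationData({
  title: "<video> - The HTML video tag",
  mdn_url: "/en-US/docs/Web/HTML/Element/video",
  prose: "…thousands of bytes of rendered HTML…",
});
```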

Localize chrome

(This is not about the content)

We need to figure out the right/best way to use gettext in the client code. Thankfully the client has very little English, but it needs to be localizable.

Setting up the tooling to something like mdn-l10n might be best to do as a separate issue to reduce the size of the scope.

Use jsesc for inlining the document data in the HTML file

Currently we're using this technique in the SSR rendering:

const escapeDocumentJson = JSON.stringify(document).replace("</", "<\\/");

Another option is: https://github.com/mathiasbynens/jsesc
And yet another option is: https://www.npmjs.com/package/serialize-javascript
but our approach doesn't "code generate" JavaScript code in the HTML document. Either way, it might be worth investigating whether there are ways to break out of the </script> tag and cause an XSS.
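For reference, here is the current technique as a runnable sketch. Note that the one-liner shown above passes a string pattern to replace(), which only replaces the first occurrence; a global regex is needed to escape every "</":

```javascript
// Escape the inlined JSON so it can't terminate the surrounding
// <script> tag: every "</" becomes "<\/", which JSON.parse still
// reads back as "</" because \/ is a valid JSON escape for "/".
function escapeDocumentJson(doc) {
  return JSON.stringify(doc).replace(/<\//g, "<\\/");
}

const payload = escapeDocumentJson({
  body: "</script><script>alert(1)</script>",
});
```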

Locale redirects are busted

The initial prototype, of this project, attempted to do a similar locale redirect solution as is done on MDN. E.g. / --> 302 --> /en-US
The problem is that we can't redirect to any URL that doesn't match a real file. I.e. <build>/en-US/index.html

Improve how we load ZillaSlab

Today, the ZillaSlab font is loaded by this: https://github.com/mdn/stumptown-renderer/blob/75e2e48fa929e8c015a5eb9a7a8fb845b2710b59/client/src/index.js#L4

What happens is that it loads:

  • /static/media/zilla-slab-latin-700.aefbe7f1.woff2
  • /static/media/zilla-slab-latin-600.861f6710.woff2
  • /static/media/zilla-slab-latin-400.a4701e4e.woff2

Why do we need all three? Can it be reduced to just 1 .woff2 URL?

Also, is latin really right for us? Stumptown-renderer doesn't have any Chinese content yet, but it will.

That third-party package uses font-display: swap; which is good. But if we could somehow figure out which URL(s) are going to be needed, I think it would be nice to add a preload tag so that downloading of the font happens sooner.

woff2 is great but browsers that don't support WOFF2 could probably make do without web fonts.

If we need to we can simply extract what typeface-zilla-slab package does and do it ourselves.

Netlify 404 pages

Generally in single page apps, where you use something like react-router you just have 1 /index.html (and some static assets). There you want URLs like /foo/bar/page to actually serve up /index.html and 200 OK.
However, we have a file for every single possible valid URL. I.e. we have a file called /foo/bar/page/index.html. It's a copy of /index.html but with the content already server-side rendered into the HTML.

However, we need a solution so we can cope with typos. We don't want the custom stock 404 template you get with Netlify. We want our React code to kick in and run its own "Page Not Found" message.

Outside the browser, with curl for example, it's important that invalid URLs yield a 404 Page Not Found HTTP error code. E.g. curl -v https://stumptown/foo/bar/pajge should be 404 but look great.

Netlify cache-control

Generally, we want something like 1h Cache-Control (public) for all HTML pages and for things like /static/css/2.02b14290.chunk.css we want the max cache-control possible.

The /service-worker.js URL needs to have max-age=0 for best practice.
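That policy could look something like this in a Netlify _headers file (a hedged sketch; the paths and exact values are assumptions, not a tested config):

```text
/*
  Cache-Control: public, max-age=3600
/static/*
  Cache-Control: public, max-age=31536000, immutable
/service-worker.js
  Cache-Control: max-age=0
```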

T - Does the JavaScript section restrict google translation?

Request type

  • [ ] Please close this issue, I accidentally submitted it without adding any details
  • [ ] New documentation
  • Correction or update

Details

[Translated from Chinese]
Sorry, I don't know whether this is caused by the Chrome browser, but in the HTML section I can use Chrome's Google Translate to translate the English into Chinese. In the JavaScript section, however, translation never works, so I suspect the JavaScript section imposes some restriction. Is that the reason?

PS: Because the Chinese version of MDN is largely out of date, I have to use Google Translate to read the English version of MDN.

Render BCD tables

The renderer doesn't yet know how to render BCD tables from the data. This is one of the relatively complicated bits of rendering.

The result should look pretty much like the existing BCD tables on MDN (https://developer.mozilla.org/en-US/docs/Web/CSS/margin#Browser_compatibility) which is built using the Compat.ejs KumaScript macro. So in a sense this is a port of Compat as a React component.

There are a few bits which rely on JS in the page (opening the little disclosures for compat details).

Dockerize?

There are many advantages with using Docker.

Pros

  • In CI - Get out of however Travis does Node. This way the only determining input for which versions of Node and yarn we get is the version string of Node we specify in the Dockerfile. For example, as of today Travis preinstalls an ancient version of yarn and there's no roadmap for updating it.

  • If we build a docker image, that contains ALL the rendered out .html files ready to be served, we can simply ship that docker container from Travis to Dockerhub and use something like Netlify to run that docker image.

  • For local development, every human developer would get the exact same version of Node and Yarn.

  • For local development, with a good docker-compose you would no longer need to "Open two terminals and in one run this and in another run that". Just docker-compose up and you get a complete dev environment.

  • In CI, we would potentially have better control of caching. Instead of the cryptic Travis way of caching files.

Cons

  • Docker is slow. It sometimes makes it hard to do hacky, but necessary, debugging within installed node_modules.

  • Not everyone is as comfortable with Docker as they are with pure Node and stuff. This possibly applies more to contributors, but on the flip side we'd be able to just say: "To get started, run docker-compose up and you're off to the races."

  • Too soon? We're not entirely sure where we're going with the outcome of this project so perhaps it's fine to leave it a bit "pure" and flat at the moment.

ESLint non-client code

Since we rely on create-react-app for the client package, it comes with a robust and large-community maintained eslint configuration.
For the record, it's tested automatically when you run cd client && yarn run build but it's also in effect when you run cd client && yarn run start (aka. make run-dev).

But we have lots of other .js files outside of client that could benefit from eslint validation.

html-react-parser and dangerouslySetInnerHTML are both terrible

Unfortunately, there is no good solution for rendering HTML strings without a wrapper element.

See facebook/react#12014

If you do:

let stuff = "<p>paragraph</p>";
<p dangerouslySetInnerHTML={{__html: stuff}}/>

you get...

<p><p>paragraph</p></p>

html-react-parser is OK-ish, but it adds 21KB to the bundle (8KB gzipped), which is just too much at this early stage. I'd rather learn to live with a bunch of <div> tags everywhere.

Basically, this issue is about keeping an eye on the above mentioned issue.

"Browser compatibility"-colors unclear for colorblind people (bug 1548316)

Request type

  • Correction or update

Details

Hi, I'm colorblind and I have some difficulties reading the browser compatibility table at the end of each docs page. The issue I have is with distinguishing between "Full Support" and "Compatibility Unknown". To me, the two colors look the same. Also when I look at the legend at the bottom, I see no difference between the two colors picked for this. I see some difference between "No Support" and the others, but there the cross helps me best.

I would suggest picking colors that are more widely apart from each other in the color spectrum and perhaps also introducing an icon of some sort (like the cross for No Support) for all statuses. This would help me out a lot.

Thanks and regards,
Martijn

Drop Babel from cli

If I remember correctly, the only reason we're using Babel (and Webpack) to transform cli/index.js into cli/dist/cli.js is for the sake of module importing.
If we just switch from import minimist from "minimist"; to const minimist = require("minimist"); we might be able to run everything without transpilation in Node 10.

There might be other features that Babel allows us, but stuff like fat arrow functions etc are perfectly fine in Node >=10 for example.

Netlify deployment

Every pull request on Stumptown should create a full deployment of the whole site and this deployment should tie into the pull request.

Avoid inserting section permalinks after DOM render, in DOM

On https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video for example. The HTML sent from the server to the browser looks like this:

<h2 id="Attributes">Attributes</h2>

Then, a piece of (jQuery??) code kicks in after the .js has been downloaded, parsed and executed (after document.onload) that turns that into this:

<h2 id="Attributes">Attributes
   <a href="/en-US/docs/Web/HTML/Element/video$edit#Attributes" class="button section-edit only-icon" rel="nofollow, noindex">
    <svg class="icon icon-pencil" version="1.1" width="24" height="28" viewBox="0 0 24 28" aria-hidden="true" focusable="false"><path d="..."></path></svg>
    <span>Edit</span>
  </a>
  <a class="local-anchor" href="https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video#Attributes" data-heading-id="Attributes">
    <svg class="icon icon-link" version="1.1" width="24" height="28" viewBox="0 0 512 512" aria-hidden="true" focusable="false"><path d="..."></path></svg>
    <span>Section</span></a>
</h2>

Let's not do that in stumptown.

Not only does it cause the page to "flicker" as the little section chain-icons suddenly appear, which makes the site feel "wobbly"; it can also cause a re-paint (aka recalculate style).

Note: Using Chrome Performance dev tools, with 6x CPU slowdown, I found 4!! blocks of "Recalculate style" in the first 3.5 seconds! It's very hard to exactly see how large the repaint is due to the added extra <a> tags.

Either way, it should be avoidable. I.e. let's not modify the DOM shipped in the HTML from the server. At least not just to add section links.

I think (and hope) the permalink section can be done entirely with CSS.

preconnect to iframes if applicable

Since stumptown-renderer is stateless and relatively dumb, our best bet is to use meta tags for sending pre-fetching types of hints.

When rendering a page that will have <iframe src="https://someotherdomain/... in it, we should inject something like <link rel="preconnect" href="https://someotherdomain/... in the HTML.
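A sketch of what that injection could look like (the function name and the regex-based scan are illustrative only; this is not how Yari does it):

```javascript
// Collect unique iframe origins from rendered HTML and emit one
// <link rel="preconnect"> tag per origin.
function preconnectLinks(html) {
  const origins = new Set();
  for (const m of html.matchAll(/<iframe[^>]*\ssrc="(https?:\/\/[^/"]+)/g)) {
    origins.add(m[1]);
  }
  return [...origins].map((o) => `<link rel="preconnect" href="${o}">`);
}

const links = preconnectLinks(
  '<iframe src="https://interactive-examples.mdn.mozilla.net/pages/js/array.html"></iframe>'
);
```

The builder would append these tags to the page's <head> while stitching the HTML together.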

"Update compatibility data on GitHub" link

The "Update compatibility data on GitHub" link (shown in a screenshot from 2019-08-28, omitted here) plainly goes to https://github.com/mdn/browser-compat-data.
It's the same on prod as of today: https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video#Browser_compatibility

Why can't it go to https://github.com/mdn/browser-compat-data/blob/master/html/elements/video.json instead?

The data needed to build the table looks like this:


{
  "html": {
    "elements": {
      "video": {
        "__compat": { ...

So, to make the right URL, you just need to take each key of the JSON until you get down to "__compat", join them with a /, add .json to the end, and prefix the whole thing with https://github.com/mdn/browser-compat-data/blob/master/.
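That rule is simple enough to sketch directly (assuming, as in the example above, a single key per level on the path down to "__compat"; the function name is made up):

```javascript
// Walk the BCD JSON keys down to "__compat" and build the GitHub
// source URL for that feature's data file.
function bcdSourceUrl(data) {
  const parts = [];
  let node = data;
  while (node && !("__compat" in node)) {
    const key = Object.keys(node)[0]; // assumes one key per level
    parts.push(key);
    node = node[key];
  }
  return (
    "https://github.com/mdn/browser-compat-data/blob/master/" +
    parts.join("/") +
    ".json"
  );
}

const url = bcdSourceUrl({ html: { elements: { video: { __compat: {} } } } });
```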

A decent CSS grid

We currently have a super basic CSS skeleton based on water.css, plus our own dumping ground mdn.scss where we've ported some necessary CSS stuff to display the pages correctly.

What would be nice is a ground-up solid CSS framework with a decent grid and a decent header and footer.
It should be responsive at least.

Error throwing

One thing I've noticed when running the cli is that the errors thrown are hard to map back to which document (aka which stumptown .json) was the cause.
Perhaps I don't know how to use sourcemaps in Node (remember, the cli is babel'ified into dist/cli.js), or perhaps it's just the nature of Node tracebacks not being great.

A quick fix would be to change, for example:

throw new Error(
   `prose section '${name}' is not optional and not present in document.prose`
);

to something like:

throw new Error(
   `prose section '${name}' is not optional and not present in document.prose (on ${document.title})`
);

...or something similar. That would make sense when using the cli, and it might come in handy when we connect the cli with Travis. But it doesn't make sense to repeat the obvious when you're doing dev in the browser, because which document it is is clear from the URL bar.

Another thing to ponder: going into production, it might be nice to NOT throw errors and just use console.error() instead. So whatever we do, we could leverage process.env.NODE_ENV.
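That could be as simple as a tiny helper like this (a sketch of the idea, not code that exists in the repo; the function name and message are made up):

```javascript
// Throw during development so flaws are loud; only log in production.
function reportFlaw(message) {
  if (process.env.NODE_ENV === "production") {
    console.error(message);
  } else {
    throw new Error(message);
  }
}

process.env.NODE_ENV = "production";
reportFlaw("prose section 'short_description' missing"); // logs, doesn't throw
```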

Our Makefile assumes to use the stumptown-content submodule

stumptown-renderer ships with a specific version of stumptown-content as a git submodule, and this "relationship" with the git submodule is hardcoded in the Makefile.

You're free to not use the Makefile and instead open it and copy-n-paste the various things it does. However, that can become tedious and complicated. It would be much better to be able to do something like:

STUMPTOWN_CONTENT_ROOT=/my/other/place/of/stumptown-content make build-content

Service workers with "toast"

When we enable a service worker we should add some registration callback so that we can display a "toast" encouraging the user to refresh here and now instead of waiting for the next time they restart the browser tab.
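Such a registration callback could be sketched like this. It assumes a hypothetical showToast() UI helper and a worker at /sw.js; the decision logic is factored out so it can be tested:

```javascript
// Hypothetical sketch (showToast() is an assumed UI helper, not an existing
// yari function). "installed" while another worker still controls the page
// means an updated version is waiting.
function shouldShowUpdateToast(workerState, hasController) {
  return workerState === "installed" && hasController;
}

function registerWithToast() {
  navigator.serviceWorker.register("/sw.js").then((registration) => {
    registration.addEventListener("updatefound", () => {
      const newWorker = registration.installing;
      newWorker.addEventListener("statechange", () => {
        if (
          shouldShowUpdateToast(
            newWorker.state,
            Boolean(navigator.serviceWorker.controller)
          )
        ) {
          showToast("A new version is available. Refresh to see it now.");
        }
      });
    });
  });
}
```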

Alternative approaches to SSR preparing the documentData

We're currently doing this:

<script id="documentdata" type="application/json">
{"title": "Video ...", ...
</script>

and

let documentData = null;
const documentDataElement = document.getElementById("documentdata");
if (documentDataElement) {
  documentData = JSON.parse(documentDataElement.text);
}

First of all, should we use domelement.textContent instead of domelement.text?

Another approach, which might be faster, is to use:

<script id="documentdata">
window.DOCUMENT_DATA = JSON.parse('{"title": "Video ...", ...   ');
</script>

In this benchmark, Henrik Joreteg argues that it's faster to use a script tag and call JSON.parse() in it. The blog post also mentions interesting topics around escaping the JSON string correctly.

This approach obviously is going to need to be reflected in the CSP headers. If we set our CSP headers in a Netlify config file, so it becomes real HTTP headers, then can we update the nonce in runtime with a <meta> tag?

That last point about the CSP complexity might escalate the complexity to the point where this (possible) small performance boost isn't worth it.
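The escaping concern mentioned above can be sketched like this. safeJsonForScript is a hypothetical helper, not part of the codebase; the idea is that JSON dropped straight into an inline script can break out of the tag (e.g. a title containing "</script>"), so "<" and the JS-hostile line separators get escaped:

```javascript
// Hypothetical helper: make a JSON payload safe to inline in a <script> tag
// by escaping "<" (prevents "</script>" breakout) and U+2028/U+2029
// (historically invalid inside JS string literals).
function safeJsonForScript(data) {
  return JSON.stringify(data)
    .replace(/</g, "\\u003c")
    .replace(/\u2028/g, "\\u2028")
    .replace(/\u2029/g, "\\u2029");
}

// The output is still valid JSON: JSON.parse() restores the original data.
```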

Copy-to-clipboard for examples

I think, whenever you focus or hover over a code example <pre> tag, a little copy-to-clipboard icon should appear. Clicking it should copy the raw source code into the clipboard.
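A minimal sketch of the idea, using the async Clipboard API (which needs a secure context); the class name is an assumption:

```javascript
// Hypothetical sketch: attach a copy-to-clipboard button to every <pre>.
function addCopyButtons() {
  for (const pre of document.querySelectorAll("pre")) {
    const button = document.createElement("button");
    button.className = "copy-to-clipboard"; // assumed class name
    button.textContent = "Copy";
    button.addEventListener("click", async () => {
      // Copies the raw source code of the example.
      await navigator.clipboard.writeText(pre.textContent);
      button.textContent = "Copied!";
    });
    pre.appendChild(button);
  }
}
```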

Context and post-build scripts

In the cli, at least in make deployment-build, you iterate over every single document. That gives you opportunities. For example, you can:

  • build a sitemap.xml, since you will have encountered every possible URL.
  • build up and populate some persistence for site-search functionality.
  • build related links and sidebars.
  • make nicely formatted inter-document links.
  • build landing pages (e.g. /docs/Web/HTML).

The buildHtmlAndJson function takes a (stumptown json) file path as input and then has its own side effects. But it doesn't really collect what it's doing, so after all file paths have been consumed, you can't see what it built.

This issue is about an API refactor so that, upon building an HTML page, you can access what other documents are available, and, at the end of the whole loop, get insight into all documents and which ones were built. That would be enough to build something like a sitemap.xml.
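The shape of that refactor could be sketched like this. buildHtmlAndJson is passed in as a stand-in for the real function, and the return shape ({ url, title }) is an assumption:

```javascript
// Hypothetical sketch: have the build loop return what it built, then
// derive a sitemap.xml from the collected URLs afterwards.
function buildAll(filePaths, buildHtmlAndJson) {
  const built = [];
  for (const filePath of filePaths) {
    const { url, title } = buildHtmlAndJson(filePath); // assumed return shape
    built.push({ url, title });
  }
  return built;
}

function sitemapXml(built, origin) {
  const urls = built
    .map(({ url }) => `  <url><loc>${origin}${url}</loc></url>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n<urlset>\n${urls}\n</urlset>`;
}
```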

Lazy load BCD table

The BCD table is always far down the page, so it shouldn't block the rendering path.
Also, we should totally consider using an intersection observer so that we only bother to load it if the user scrolls it into view.

Goal: Smaller HTML, smaller index.json, less CSS, less JS CPU work; still perfectly fine looking BCD table for those who scroll down.

  • On SSR render, figure out a way to put a placeholder only.
  • Lazy load its .css bundle
  • Lazy load the component
  • Trigger loading with an intersection observer
  • All the __compat stuff in the index.json files should be omitted.
  • We might want to preload things in case the user eventually scrolls the BCD table into view.
  • If the user is clearly on a fast connection (using the few Web APIs that exist for that), could we just go ahead and load the BCD table anyway? If it's done in a non-main-thread-blocking kind of way, what's the harm if the user never scrolls down later?
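The intersection-observer trigger could be sketched like this. The placeholder selector and loadBcdTable() (e.g. a dynamic import() of the component and its CSS) are assumptions:

```javascript
// Hypothetical sketch: only load the BCD table bundle once its placeholder
// approaches the viewport. "#bcd-placeholder" and loadBcdTable() are assumed.
function lazyLoadBcd() {
  const placeholder = document.querySelector("#bcd-placeholder");
  const observer = new IntersectionObserver(
    (entries) => {
      if (entries.some((entry) => entry.isIntersecting)) {
        observer.disconnect(); // load only once
        loadBcdTable(placeholder);
      }
    },
    { rootMargin: "400px" } // start loading a bit before it's visible
  );
  observer.observe(placeholder);
}
```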

The place of icons and picture is invalid on RTL languages

User story

What did you do?
1. Go to an RTL version of the MDN site, e.g. https://developer.mozilla.org/fa/

2. Look at the small arrows beside Web Technologies; Learn web development; Developer Tools

3. Look at the cat that shows "Help us build better CSS debugging tools! Which CSS bugs are the worst? Take the Survey"

4. Look at the location of the arrow on the "Sign Up now" button

5. From the References and Guides menu, look at More docs

What happened?
2. The small arrows are on their related texts.
3. The cat is located on its related text.
4. The arrow of the "Sign Up now" button is located on the left side of the button, as for LTR text.
5. The three dots come before the More docs menu item, as in English, like: ...More docs.
As a result, localizers repeated the ... before the text in translations of More docs, just like in English.

What should have happened?
2. Arrows should be mirrored and located on the left side of their related texts in RTL languages. Look at these samples:
LTR: Text --->
Correct RTL: <--- متن
Invalid RTL (current status): م--تن->

3. The cat should be located on the left side of its related rectangle.

4. The small arrow on the "Sign Up now" button should go to the left side of the button, the text should go to the right side, and the arrow should be like <--- not --->.

5. ...More docs in English should be More docs..., and localizers should move the three dots after More docs, matching the fixed English phrase.

Is there anything else we should know?
Currently, the MDN main page is RTL, but its details, like buttons and the content of DIV tags, are not RTL.

Bugzilla - https://bugzilla.mozilla.org/show_bug.cgi?id=1539793

Acceptance criteria

  • Icons and images on the landing page display correctly for both LTR and RTL locales.

Tasks

    • Update CSS to use the bidi mixin
    • Open PR
    • Merged to master

Automatic reload when stumptown-content files change

User story:

  1. Check out stumptown-renderer and make install, make run-server etc.
  2. Start viewing it in the browser. E.g. http://localhost:3000/docs/Web/HTML/Element/video/
  3. Open up something like stumptown/content/html/reference/elements/video/docs.md in an editor like vim or vscode.
  4. Make some changes and hit Ctrl-S to save the file.
  5. See the web page automatically reload to show these new changes.

This daemon that monitors file changes would ideally be something that developers run in the foreground. That way they can see if, for example, a typo breaks the rendering update. So it would be nice to make it all pretty and colorful.
We could also do what create-react-app's webpack dev server does, which is to catch errors in the daemon, send them to the browser, and, instead of reloading the page, show the whole error fullscreen.

Google translate needs to work

E.g. https://translate.google.com/translate?sl=auto&tl=es&u=https%3A%2F%2Fstumptown.netlify.com%2Fdocs%2FWeb%2FHTML%2FElement%2Fvideo

Right now, it goes wrong because, I suspect, something's wrong with react-router or the way Netlify butchers the URL in static HTML.

Either way, it needs to work eventually. In Kuma we had to do this: https://github.com/mozilla/kuma/blob/ea382cd58cb51ef39f0378bfce175df0e5e712bc/kuma/javascript/src/index.jsx#L65-L67

But we need to try to make sure interactive (iframes) examples also work. E.g. https://bugzilla.mozilla.org/show_bug.cgi?id=1564963

cli needs to be able to read N directories

At the moment, the main argument to the cli is a list of files. E.g.

yarn run run ../stumptown/packaged/html/reference/elements/*.json

which is expanded by bash, so it basically corresponds to typing

yarn run run ../stumptown/packaged/html/reference/elements/abbr.json
yarn run run ../stumptown/packaged/html/reference/elements/address.json
...

What would be ideal is to be able to pass a list of files or directories; if an argument is a directory, it should be drilled down into recursively by default. E.g.

yarn run run ../stumptown/packaged/html/reference ../stumptown/packaged/css/ some/other/directory some/other/specific/file.json

Lets use a locale prefix

MDN uses the locale-prefix technique, and those URLs are not going away. It's not a necessary truth, but if Stumptown is ever to take over from Kuma it would be immensely useful to have all URLs intact and not force everyone and everything to redirect to new URLs.

Having the locale in the URL is also very useful when doing static file deployment. Even if you wanted to, it wouldn't be feasible to have a piece of server code that decides which content to display based on cookies and user agent headers.

Stumptown-content has not yet nailed down how it's going to do l10n, but it's most likely going to be a tree where the locale is the root, similar to MDN URLs.
I.e. not content/html/reference/elements/video/fr/prose.md but content/fr/html/reference/elements/video/prose.md

See also: mdn/stumptown-content#75

What font-display technique is best for us?

We're currently using font-display: swap (it's the default when using typeface-zilla-slab).

It means the default OS font (or whatever we configure as number 2 in font-family) is rendered until the webfont loads. If the webfont comes in after 1s, the headers will flash as they change style.

An attractive alternative might be to try fallback or optional, as they display nothing for the first 100ms, which hopefully avoids the flash of unstyled text.

This site explains the differences: https://font-display.glitch.me/

Note, we're using web fonts for headlines only, which might be an important factor as we evaluate this.
