w3c / echidna

New publication workflow at W3C — main component

Home Page: https://labs.w3.org/echidna/

License: MIT License

JavaScript 9.45% HTML 89.92% CSS 0.63% Shell 0.01%
w3c spec specification publication standard checker validator

echidna's Introduction


Echidna

Echidna is the central piece of software taking care of the new publication workflow at W3C. The plan is for Echidna and related sub-projects (see below) to automate the publication of new specs under http://www.w3.org/TR/.

Using Echidna as an editor

If you are a spec editor, you do not need to install Echidna, nor to run it locally.

Please see the wiki for how to use Echidna as a spec editor.

Hacking Echidna as a developer

Installation

To run Echidna, you need to install Node.js first. This will install npm at the same time, which is required as well.

Then run the following commands with your favorite terminal:

git clone https://github.com/w3c/echidna.git
cd echidna
cp config.js.example config.js
npm install

Running it locally

Note: local setup of the full system is currently not supported, due to dependencies on W3C's DB and IPP systems; providing mock services that emulate those pieces is our short-term goal.

In your terminal, run the following:

npm start [-- STAGING_PATH [HTTP_LOCATION [PORT [RESULT_PATH]]]]

You may use the optional arguments defined below:

  1. STAGING_PATH: path in the local filesystem where documents will be downloaded and staged. (Default /var/www/html/trstaging/.)
  2. HTTP_LOCATION: HTTP endpoint for Specberus. (Default http://localhost/trstaging/.)
  3. PORT: where Echidna will be listening for publication requests. (Default 3000.)
  4. RESULT_PATH: local path where Echidna will dump the results of publication requests in JSON format.

Alternatively, you can use the configuration file config.js.

Once the server is started, you can throw publication requests at it through a curl/POST request to its endpoint, http://localhost:3000/api, or using the web-based testbed (described below).

You can also use a simple web client to send and monitor those requests, at http://localhost:3000/ui.

For more information, please refer to DEVELOPMENT.md.

Testing Echidna

This section describes how to run Echidna's test suite to make sure that the project itself is working properly over time. Note that the test suite is not intended to test actual documents.

Running the unit test suite

You can run the test suite with the following command line:

npm test

Using test documents

For testing purposes, we are using a local web server. The test server simulates some of the W3C services, such as the CSS and HTML validators, or the token authorization checker. It also serves a set of sample drafts.

You can launch this test server separately by using:

npm run testserver

When the test server is running, the testbed with all drafts will be available at http://localhost:3001.

Feedback and contributions

Please refer to our contribution reference to learn how to contact us, give feedback, or actively contribute to this project.

echidna's People

Contributors

anssiko, astorije, darobin, deniak, dependabot-preview[bot], dependabot[bot], dontcallmedom, greenkeeper[bot], greenkeeperio-bot, jean-gui, jennyliang220, nschonni, plehegar, rrrene, snyk-bot, tripu


echidna's Issues

Extend the definition of registered URL to deal with "compatible URLs" that aren't always identical

As discussed in #31.

One possible use case (of many): the URL registered with the token was https://github.com/wg/, but the URLs intended for publication are going to be in different projects, branches, subdirectories, etc.; i.e., the URLs that will need to be published don't match the registered URL exactly.

One option would be to go for regexes, and store them in the DB, instead of plain URLs. So for example, one could register https:\/\/github\.com\/wg\/*, or http:\/\/galactic-json\.org\/docs\/v[\d\.\-]+\/spec\.html. All kinds of issues could potentially arise related to metacharacters and regex syntax, and URL encoding…
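To make the regex option concrete, here is a hypothetical sketch: instead of comparing the submitted URL to the registered URL for strict equality, the registered value stored in the DB would be treated as a pattern. The function name and patterns are illustrative only.

```javascript
// Hypothetical sketch of the regex-based check discussed above.
// Instead of strict equality, the registered value is treated as a pattern.
function urlIsRegistered(submittedUrl, registeredPattern) {
  // Anchor the pattern so an unrelated URL cannot match mid-string.
  return new RegExp('^' + registeredPattern).test(submittedUrl);
}

// Any URL under the registered prefix is accepted:
console.log(urlIsRegistered(
  'https://github.com/wg/spec/branch/Overview.html',
  'https:\\/\\/github\\.com\\/wg\\/.*'
)); // true

// A URL outside the registered prefix is rejected:
console.log(urlIsRegistered(
  'https://evil.example.org/wg/',
  'https:\\/\\/github\\.com\\/wg\\/.*'
)); // false
```

This also makes the concern above tangible: the stored pattern mixes regex metacharacters with URL syntax, so escaping and URL encoding would need careful handling.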

Another (ugly) solution would be to deal with special cases (we don't expect them to be many, after all) one by one, adding logic to Symfony that would tweak checks. So for example, if the URL is a GitHub one, we more or less know how to be flexible with the URLs to be published, and what patterns we might encounter.

See also #36.

Smash the pyramid of doom in app.js with a hammer

Using Promises always felt right, but adding pieces to this pyramid never did.

This quickly became very difficult to maintain and review, which makes it easy to introduce a bug; and since the test suite doesn't cover that piece, such a bug is easy to miss.

Once v1 is released, I'll work on cleaning this up, but I would be happy to discuss it with other people having experience with Promises to make sure we go in the right direction.
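For illustration, the shape of the problem and the usual fix (the step names below are made up; they do not mirror the real app.js):

```javascript
// Illustrative only: nested "pyramid" style versus a flat promise chain.
function step(name, value) {
  return Promise.resolve(value + 1); // stand-in for a real pipeline step
}

// Nested style: each step indents further; this is what became hard to review.
function pyramid(input) {
  return step('download', input).then(function (a) {
    return step('validate', a).then(function (b) {
      return step('publish', b);
    });
  });
}

// Flat style: returning each promise keeps the chain at one level.
function flat(input) {
  return step('download', input)
    .then(function (a) { return step('validate', a); })
    .then(function (b) { return step('publish', b); });
}

pyramid(0).then(function (r) { console.log('pyramid:', r); }); // pyramid: 3
flat(0).then(function (r) { console.log('flat:', r); });       // flat: 3
```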

Restrict “status” method to give info about particular publication jobs (instead of dumping all) and have the “request” method to return the ID of the new job

For privacy/security reasons, the HTTP GET method status of the API should not return all details about all publication jobs since the system was last started. Instead, it should accept an array of job IDs, and return info only about those jobs.

…which also means that the POST method request should always return the ID of the new publication job the user just submitted, instead of always returning a meaningless 200.
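The proposed behaviour can be sketched with plain functions (the job-store shape and ID scheme below are assumptions, not the real implementation):

```javascript
// Sketch of the proposal: status answers only for requested job IDs, and
// request always returns the new job's ID. Data shapes are assumptions.
const jobs = {
  'job-1': { status: 'success' },
  'job-2': { status: 'error' }
};

// status: return info only for the requested IDs, never the whole store.
function status(requestedIds) {
  const result = {};
  requestedIds.forEach(function (id) {
    if (jobs[id]) result[id] = jobs[id];
  });
  return result;
}

// request: register the job and answer with its ID instead of a bare 200.
function request(newJob) {
  const id = 'job-' + (Object.keys(jobs).length + 1);
  jobs[id] = newJob;
  return { id: id };
}

console.log(request({ status: 'started' })); // { id: 'job-3' }
console.log(status(['job-1']));              // info about job-1 only
```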

Deprecation messages when running the test suite

With latest dependencies (see #50 and #51), we now get the following:

$ npm test

> [email protected] test /home/jeremie/w3repos/echidna
> mocha

[...]
express deprecated req.param(name): Use req.params, req.body, or req.query instead test/lib/tokenchecker.js:13:17
express deprecated req.param(name): Use req.params, req.body, or req.query instead test/lib/tokenchecker.js:14:19
[...]
express deprecated req.param(name): Use req.params, req.body, or req.query instead test/lib/htmlvalidator.js:8:17
express deprecated req.param(name): Use req.params, req.body, or req.query instead test/lib/cssvalidator.js:8:17
express deprecated req.param(name): Use req.params, req.body, or req.query instead test/lib/cssvalidator.js:9:21
express deprecated req.param(name): Use req.params, req.body, or req.query instead test/lib/cssvalidator.js:16:21
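The deprecated calls above can be replaced mechanically. Sketched here with a stubbed request object rather than a live Express app; the field values are illustrative.

```javascript
// Stub of an Express request object, for illustration only.
const req = {
  params: { id: '42' },          // route parameters, e.g. /jobs/:id
  query: { token: 'abc' },       // query string, e.g. ?token=abc
  body: { decision: 'http://example.org/decision' } // parsed POST body
};

// Before (deprecated): req.param('token') searched params, body and query.
// After: read from the specific source you expect the value in.
const token = req.query.token;
const decision = req.body.decision;
const id = req.params.id;

console.log(token, decision, id); // abc http://example.org/decision 42
```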

The testbed web interface / console is pretty silent

I am currently working on #48 and learning how to use the test bed the hard way.

Since I am heavily butchering app.js, I would like to see what happens when I try to publish a document. I actually discovered the test bed because of this, after grumbling that I cannot use the web UI. It seems like a neat feature, but right now it's pretty silent.

Where can I see the history / the content of the jobs? Since I am breaking stuff, I am trying to see what happens, and right now I just get undefined for most of the things I click. Could you try the test bed on a clean clone to see what I mean? Maybe I am getting confused somewhere, but at the moment I really need a tour...

(PS: As you might guess, don't try to refactor or clean up app.js :-) )

Make Echidna run locally without failing

Although Echidna is supposed to be runnable locally, there are parts of the system that currently fail if you do not have a full W3C setup. This issue is meant to keep track of what needs to be done, and what has already been done, to fix that.

Here is a list of the steps and their status regarding the ability to run locally:

  • Document Downloader: does not interact with W3C systems
  • Specberus: works as a standalone project and as a Node module, but there is still a markup/CSS validator issue
  • Token Checker: a mock API is provided
  • Third Party Checker: works as a standalone project and as a Node module
  • W3C DB Publisher: a mock API is provided
  • TR Installer: shares interface with cp -r so it can be mocked locally
  • Shortlink Updater: shamefully uses # as a local command... we can probably do better, but at least it works...
  • Serve the staging documents through a webserver started with Node.js rather than expect the configuration to point to a server set up somewhere else

Implement a real format and parser for the manifests

At the moment, the manifests are quite clumsy, I think. They have to be text files, one line per file, and the first line is always and automatically read as the main file and renamed Overview.html, even if Overview.html is present somewhere else in the manifest. The format also has some serious security flaws that we'd like to avoid¹.

But it is and always was a temporary solution to release Echidna faster, and to poll opinions before coming up with a more definitive solution.

In this issue, I'd like to start a discussion with those who already use manifests. What would be your preferences for such a system?

  • Should it stay a file-based format, or move to JSON, YAML, etc.?
  • What would be an example of a manifest you'd like to have in your repos?
  • When submitting a manifest to Echidna instead of a plain document, should it be detected by its filename (like Travis CI does, for example), its content type, the first line inside the document, ...? Or simply not detected, with a boolean on the HTTP request specifying whether it's a plain document or a manifest?
  • How do you see integration with generators such as ReSpec (note that spec generators also trigger their whole world of problems)?

I think any calm discussion will help improve the system :-)

¹ And until this is fixed, we'd also like our super nice editors not to try to mess with our system :P

New FAQ entry: "Can I setup Echidna locally?"

I was unable to push to the wiki directly (or fork it), so I created this issue instead. I recommend adding the following FAQ entry, or something along these lines:

[[

Can I setup Echidna locally?

Unfortunately not, since some of the system is linked to W3C's DB and the IPP system. We tried to expose as much as possible in the source code provided on GitHub. At a minimum, you should be able to run Specberus locally, and probably the third-party-resources-checker.

]]

Make some of the functionality from .htaccess available

Echidna removes .htaccess files, and that is a good, healthy decision. However, some specs do make use of features exposed in .htaccess, and it would be useful to support reproducing them.

The idea is that the request to publish would include additional parameters, and Echidna would generate a (sane, controlled) .htaccess to include in TR.

It might be useful to look at all the .htaccess files in (recent) TR to see what is needed. The functionality that is missing for HTML is the ability to set up a 404 handler.

HTML has a lot of content and it is split into several files. Sometimes the split changes. What we do is we have a special 404 page that looks at the fragment identifier and knows where to redirect users (you can see it in action in the ED area). It would be great to be able to specify 404handler=404.html or something like that to get the behaviour back.

Allow to use pre-publication steps as a Travis job

I'd like to be able to use the pre-publication part of echidna (retrieval, generation, checks) as a Travis job.
That way, I could send stuff to the real publication system only when I know that my job won't fail.
Also, editors would know why their doc isn't updated automatically.

Making it Travis-compatible encompasses two things:

  • making it possible to run only a subset of the steps
  • making it exit with a status code ≠ 0 when it fails (which might already be the case)

npm start doesn't take args

README.md says one can run npm start [STAGING_PATH], but npm doesn't seem to make it possible to pass arguments to the command.

(actually, npm 2 might, but I don't think we should assume npm 2 unless we must)

Decide a strategy regarding the dependencies versions

This ticket is about taking care of our dependencies' versions, i.e. deciding what versions we want to allow/rely on rather than the defaults generated by npm install --save[-dev].
We might want to make the decisions before fully releasing v1, to be ready with related issues from the beginning.

devDependencies

According to David, our devDependencies (mocha and morgan so far) are up-to-date.

However, (1a) I don't see why we shouldn't use * for these: that will never break the production environment, and we might benefit from having them always up to date. Alternative: (1b) fixing the major version only (1.x.x, 2.x.x, ...).

dependencies

This is a bit trickier. According to David, some of them are outdated (format: version in package.json / current stable version):

  • body-parser: ~1.10.1 / 1.11.0
  • compression: ~1.3.0 / 1.4.0
  • ejs: 1.0.x / 2.2.4
  • express: ~4.10.7 / 4.11.2 (latest: 5.0.0-alpha.1)
  • immutable: ~3.5.0 / 3.6.2
  • promise: ~6.0.1 / 6.1.0

Regarding ejs, we are not using templates at this stage, so (2) I suggest we remove it. We'll see when we play with UI stuff.

For the other outdated dependencies, (3) I suggest that we manually review the commits that have changed (by clicking on </>) and then (4) use x-ranges so that only the major version is fixed (4.x.x). Reviewing should be unnecessary with semantic versioning, but it doesn't hurt to be sure.

Finally, if all the outdated dependencies go well and we have n.x.x for all of them, (5) I suggest we do the same for the up-to-date ones.
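What proposals (4) and (5) would mean for package.json can be sketched as follows; the versions are the ones listed in this ticket (ejs is omitted, per proposal (2)):

```javascript
// Sketch of the dependencies block under proposals (4)/(5): only the major
// version is pinned with an x-range. Shown as a JS object for illustration.
const proposedDependencies = {
  'body-parser': '1.x.x',
  compression: '1.x.x',
  express: '4.x.x',
  immutable: '3.x.x',
  promise: '6.x.x'
};

// With an x-range, npm may install any 4.x.x release of express
// (4.10.7, 4.11.2, ...) but never 5.0.0-alpha.1.
console.log(proposedDependencies.express); // 4.x.x
```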

@deniak, @tripu, @plehegar, do you guys 👍 or 👎 on any of proposals (1) through (5)?

(When I say "I suggest we review...", of course I mean "I suggest I review...", but I want your approval first :-) )

Running npm test throws an exception

npm even calls this a weird error :)

Here is the stack trace:

$ npm test

> [email protected] test [...]
> mocha

[...]/test/lib/testserver.js:47
  return "http://localhost:" + server.address().port;
                                               ^
TypeError: Cannot read property 'port' of null
    at Function.TestServer.location ([...]/test/lib/testserver.js:47:48)
    at Suite.<anonymous> ([...]/test/test.js:22:51)
    at context.describe.context.context ([...]/node_modules/mocha/lib/interfaces/bdd.js:74:10)
    at Suite.<anonymous> ([...]/test/test.js:17:3)
    at context.describe.context.context ([...]/node_modules/mocha/lib/interfaces/bdd.js:74:10)
    at Object.<anonymous> ([...]/test/test.js:15:1)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at [...]/node_modules/mocha/lib/mocha.js:185:27
    at Array.forEach (native)
    at Mocha.loadFiles ([...]/node_modules/mocha/lib/mocha.js:182:14)
    at Mocha.run ([...]/node_modules/mocha/lib/mocha.js:394:31)
    at Object.<anonymous> ([...]/node_modules/mocha/bin/_mocha:394:16)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Function.Module.runMain (module.js:497:10)
    at startup (node.js:119:16)
    at node.js:902:3
npm ERR! weird error 8
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian

npm ERR! not ok code 0

It seems that when you call the location() method (here, for example) on the TestServer, server.address() (here) is not ready yet, because the app.listen(port) call (here) can be asynchronous in some cases (it's a weird thing to say "potentially asynchronous"; that's where we need promises!).

Here is an explanation on Stack Overflow. Why this passes on Travis CI bugs me, though.

Automated testing?

In the readme, there is an Automated testing section, but that seems unhelpful to those looking at the project (i.e., why would a user want to run the tests?). Maybe move that stuff to the wiki.

The document downloader fails when the manifest lists sub-directories

If the manifest contains files in sub-directories, the document downloader doesn't create the sub-directory before downloading the resource. It ends up failing with the following error:

...
"jobs": {
    "retrieve-resources": 
{
    "status": "error",
    "errors": 
    [
        "Error: ENOENT, open '/u/echidna-staging/47aadb10-02fc-474a-9f1c-69b59f552b29//img/fingerprint.png'"
    ]
},
...

Overview.html should only be forced if not in the list

Currently, the first line of the manifest is always turned into Overview.html. In some cases Overview.html might be listed elsewhere in the manifest; when that happens, the first line should be left as is.

Of course it's possible for manifest generators to work around this, but if, for instance, they're doing things like listing all files in a directory, it'll require special code.

Tests do not pass anymore

Since w3c/specberus#153 got merged, the editorId related test fails with:

  1) SpecberusWrapper validate(url) should promise the proper metadata.editorIDs:
     AssertionError: expected { Object (0, 1, ...) } to deeply equal [ 44357, 69474, 45188 ]

I messed something up and I am currently working on it :-)

Add HTTPS support for the DocumentDownloader

Since the Node.js API splits support for http and https requests into two different modules, the latter is not supported at the moment.
I'll look into it, because it would be a shame to have to resolve the proper module to call depending on the URL ourselves.

Document the Echidna API

The 3 HTTP actions that Echidna understands (version, status, request), their parameters, and their expected results and formats should be well documented, with examples.

Token acquisition

It's not clear to me what happens after the token is acquired from W3C staff. Where does the token go?

Recursive validation makes the whole system hang

I set the actual Specberus check params in a recent commit.
However, after discussing with @deniak, I wanted to set the validation one to recursive, but when I run the system locally it hangs forever. I didn't investigate, but set it to simple-validation until I have time to dig into the issue.

Make it clear on the web interface what fields are compulsory

Right now, if you enter the URL of the document and the token, but leave the WG decision URL field blank, Echidna ignores you miserably when you click the request publication button.

Make it clear on the form that all 3 fields are mandatory, and disable the button unless all three have data.
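The suggested check can be sketched as a plain validation function that any UI code could call before enabling the button (field names are illustrative; they do not mirror the real form):

```javascript
// The publish button should stay disabled until all three fields have values.
function canSubmit(fields) {
  return ['url', 'decision', 'token'].every(function (name) {
    return typeof fields[name] === 'string' && fields[name].trim() !== '';
  });
}

console.log(canSubmit({
  url: 'https://example.org/spec', decision: '', token: 't'
})); // false: the WG decision URL is still blank

console.log(canSubmit({
  url: 'https://example.org/spec',
  decision: 'https://example.org/decision',
  token: 't'
})); // true: all three fields are filled in
```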

Remove support for Node 0.8

express 4.10 only supports Node 0.10, which has been stable for a long time anyway. I need to check what version of Node runs on the production server first, just to be sure.

Description of process for ReSpec users

Can you please add the following to the Wiki?

How to use Echidna with ReSpec and GitHub

YOU DON'T NEED TO ACTUALLY INSTALL ECHIDNA AT ALL!

Before you start - unfortunately, there are a few process things you need to do. These steps can take about 1-2 weeks to complete.

You will need the following:

  1. Working Group Approval to use the new process.
  2. A token from the W3C.
  3. The "editor ID" of each editor of the spec.

Working Group Approval

In order to publish your document using the new process, you need to get consensus to do so from your Working Group by emailing your group's mailing list. See, for example, how approval was requested for the WebApps WG.

The chair will generally put out a Call for Consensus (CFC), which can take about 1 week.

Once you get approval (via the CFC), keep the URL handy, because you will need it later to actually publish!

The Token

You will need to get a token for your spec from the W3C. You can request this while you are waiting for WG consensus through the CFC (see above)! Email either your team contact or [email protected].

You will get an email within a few days with your token.

The Editor IDs

Then you will need to get the IDs of the editors of your spec. You can find yours by going to your W3C profile.

You will need to add this ID to your ReSpec config using the w3cid property, like so:

editors: [{
    name: "Spec Editor",
    w3cid: 39125
}]

Actually publishing

Welcome back! Now that you have all the things above, you can finally proceed to publishing.

  1. Go to the root directory where your spec is and make a config file for your spec. Call it ECHIDNA.
touch ECHIDNA
  2. In ECHIDNA, list the main spec file and any dependent images or other files. For example:
# ECHIDNA configuration
index.html?specStatus=WD;shortName=appmanifest respec
images/manifest-src-directive.svg
  3. Save it, and push it back to your gh-pages branch on GitHub.
git checkout gh-pages
git add ECHIDNA
git commit -m "Echidna config" ECHIDNA
git push
  4. Run your spec through the new PubRules and fix all the errors. PubRules won't accept a raw ReSpec document, so you can modify the following to suit your document:

https://labs.w3.org/spec-generator/?type=respec&url=https://w3c.github.io/linkToYourSpec/?specStatus=WD;shortName=theShortName

  5. Ok! Now run the following using curl. You will need:

    • url=: the URL to your echidna config on GitHub, as served from GitHub pages (usually http://w3c.github.io/YourSpecName/ECHIDNA).
    • decision=: URL to the working group decision on a w3c mailing list.
    • token=: the token you got from the W3C.

    Got 'em? Good! Now replace all the bits below...

curl 'https://labs.w3.org/echidna/api/request' --data 'url=<echidnaConfigURL>&decision=<decisionUrlOnMailingList>&token=<W3Ctoken>'

Finally, once you've done that, you can check whether your document actually got published by going to the TR-Notification list. If something went wrong, it will tell you what happened (and hopefully what you need to fix!).

Otherwise, you should see success! If successful, your Working Draft should now be on /TR/.
