
BigPipe


BigPipe is a radical new web framework for Node.JS. The general idea is to decompose web pages into small re-usable chunks of functionality called Pagelets and pipeline them through several execution stages inside web servers and browsers. This allows progressive rendering at the front-end and results in exceptional front-end performance.

Most web frameworks are based on a request and response pattern: a request comes in, we process the data and output a template. But before we can output the template we have to wait until all data has been received in order for the template to be processed. This doesn't make any sense for Node.js applications where everything is done asynchronously. When receiving your first batch of data, why not send it directly to the browser so it can start downloading the required CSS and JavaScript and render it?

BigPipe is made up of over 20 modules whose current status is available at: HEALTH.md

Installation

BigPipe is distributed through the node package manager (npm) and is written against Node.js 0.10.x.

npm install --save bigpipe

Versioning

To keep track of cross-module compatibility, the imported components are synced on minor releases. For example, a given minor release of bigpipe will always be compatible with the matching minor releases of the components it bundles (such as pagelet).

Support

Got stuck? Can't wrap your head around a concept, or just want some feedback? We have a dedicated IRC channel for that on Freenode:

  • IRC Server: irc.freenode.net
  • IRC Room: #bigpipe

Still stuck? Create an issue. Every question you have is a bug in our documentation and that should be corrected. So please, don't hesitate to create issues, many of them.


Getting started

In all of these examples we assume that your file is set up as:

'use strict';

var BigPipe = require('bigpipe');

BigPipe.createServer()

public, returns BigPipe.

To create a BigPipe powered server you can simply call the createServer method. This creates an HTTP or HTTPS server based on the options provided.

var bigpipe = BigPipe.createServer(8080, {
  pagelets: __dirname +'/pagelets',
  dist:  __dirname +'/dist'
});

The first argument in the function call is the port number you want the server to listen on. The second argument is an object with the configuration/options of the BigPipe server. The following options are supported:

  • cache A cache which is used for storing URL lookups. This cache instance should have a .get(key) and .set(key, value) method. Defaults to false.
  • dist The location of the folder where the compiled CSS and JavaScript is stored on disk. If the path or folder does not exist it will be created automatically. Defaults to working dir/dist.
  • pagelets A directory that contains your Pagelet definitions or an array of Pagelet constructors. Defaults to working dir/pagelets. If you don't provide any pagelets, a 404 page is served for every request.
  • parser The message parser we should use for our real-time communication. See Primus for the available parsers. Defaults to JSON.
  • pathname The root path of the URL that is used for our real-time communication. This path should not be used by your pagelets. Defaults to /pagelet.
  • transformer The transformer or real-time framework we want to use for the real-time communication. We're bundling and using ws by default. See Primus for the supported transformers. Please note that you do need to add the transformer dependency to your package.json when you choose something other than ws.
  • redirect When creating an HTTPS server you can automatically start an HTTP server which redirects all traffic to the HTTPS equivalent. The value is the port number on which this server should be started. Defaults to false.
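
As an illustration, a fuller call combining several of these options might look like the following sketch (the in-memory cache object is a simplified stand-in, not a production cache):

var bigpipe = BigPipe.createServer(8080, {
  pagelets: __dirname + '/pagelets',
  dist: __dirname + '/dist',
  pathname: '/real-time',

  //
  // Simplified in-memory cache: any object exposing .get(key) and
  // .set(key, value) will do.
  //
  cache: {
    store: Object.create(null),
    get: function get(key) { return this.store[key]; },
    set: function set(key, value) { this.store[key] = value; }
  }
});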

In addition to the options above, all HTTPS server options are also supported. When you provide a server with cert and key files or set the port number to 443, it assumes you want to set up an HTTPS server instead.

var bigpipe = BigPipe.createServer(443, {
  key: fs.readFileSync(__dirname +'/ssl.key', 'utf-8'),
  cert: fs.readFileSync(__dirname +'/ssl.cert', 'utf-8')
});

When you're creating an HTTPS server you have the option to also set up a simple HTTP server which redirects all traffic to HTTPS instead. This is done by supplying the redirect property in the options. The value of this property should be the port number you want this HTTP server to listen on:

var bigpipe = BigPipe.createServer(443, {
  ..

  key: fs.readFileSync(__dirname +'/ssl.key', 'utf-8'),
  cert: fs.readFileSync(__dirname +'/ssl.cert', 'utf-8'),
  redirect: 80
});

new BigPipe()

public, returns BigPipe.

If you want more control over the server creation process you can manually create an HTTP or HTTPS server and supply it to the BigPipe constructor.

'use strict';

var server = require('http').createServer()
  , BigPipe = require('bigpipe');

var bigpipe = new BigPipe(server, { options });

If you are using this pattern to create a BigPipe server instance you need to use the bigpipe.listen method to make the server listen. When this is called, BigPipe starts compiling all assets, attaches the correct listeners to the supplied server, attaches event listeners and finally starts listening on the server. The first argument of this method is the port number you want to listen on; the second argument is an optional callback function that is called when the server starts listening for requests.

bigpipe.listen(8080, function listening() {
  console.log('hurray, we are listening on port 8080');
});

BigPipe.version

public, returns string.

bigpipe.version;

The current version of the BigPipe framework that is running.

BigPipe.define()

public, returns BigPipe.

bigpipe.define(pagelets, callback);

Merge pagelet(s) into the collection of existing pagelets. If given a string it will search that directory for the available Pagelet files. After all dependencies of the supplied pagelets have been compiled, the callback is called.

bigpipe.define('../pagelets', function done(err) {

});

bigpipe.define([Pagelet, Pagelet, Pagelet], function done(err) {

}).define('../more/pagelets', function done(err) {

});

BigPipe.before()

public, returns BigPipe.

bigpipe.before(name, fn, options);

BigPipe has two ways of extending its built-in functionality: plugins and middleware layers. The important difference between these is that middleware layers allow you to modify the incoming requests before they reach BigPipe.

There are two different kinds of middleware layers: async and sync. The main difference is that sync middleware doesn't require a callback. It's completely optional and ideal for simply introducing or modifying properties on a request or response object.

All middleware layers need to be named; this allows you to enable, disable or remove middleware layers. The supplied middleware function can either be a pre-configured function that is ready to modify the request and response:

bigpipe.before('foo', function (req, res) {
  req.foo = 'bar';
});

Or an unconfigured function. We assume that a function is unconfigured if the supplied function accepts fewer than 2 arguments. When we detect such a function we automatically call it with the context set to BigPipe and the supplied options object, and assume that it returns a configured middleware layer.

bigpipe.before('foo', function (configure) {
  return function (req, res) {
    res.foo = configure.foo;
  };
}, { foo: 'bar' });

If you're building async middleware layers, you simply need to make sure that your function accepts 3 arguments:

  • req The incoming HTTP request.
  • res The outgoing HTTP response.
  • next The continuation callback function. This function follows the error-first callback pattern.

bigpipe.before('foo', function (req, res, next) {
  asyncthings(function (err, data) {
    req.foo = data;
    next(err);
  });
});

BigPipe.remove()

public, returns BigPipe.

bigpipe.remove(name);

Removes a middleware layer from the stack based on the given name.

bigpipe.before('layer', function () {});
bigpipe.remove('layer');

BigPipe.disable()

public, returns BigPipe.

bigpipe.disable(name);

Temporarily disables a middleware layer. It's not removed from the stack but it's just skipped when we iterate over the middleware layers. A disabled middleware layer can be re-enabled.

bigpipe.before('layer', function () {});
bigpipe.disable('layer');

BigPipe.enable()

public, returns BigPipe.

bigpipe.enable(name);

Re-enables a previously disabled middleware layer.

bigpipe.disable('layer');
bigpipe.enable('layer');

BigPipe.use()

public, returns BigPipe.

bigpipe.use(name, plugin);

Plugins can be used to extend the functionality of BigPipe itself. You can control the client code as well as the server side code of BigPipe using the plugin interface.

bigpipe.use('ack', {
  //
  // Only run on the server.
  //
  server: function (bigpipe, options) {
     // do stuff
  },

  //
  // Runs on the client, it's automatically bundled.
  //
  client: function (bigpipe, options) {
     // do client stuff
  },

  //
  // Optional library that needs to be bundled on the client (should be a string)
  //
  library: '',

  //
  // Optional plugin specific options, will be merged with Bigpipe.options
  //
  options: {}
});

Pagelets

Pagelets are part of the bigpipe/pagelet module and more information is available at: https://github.com/bigpipe/pagelet

Events

Everything in BigPipe is built upon the EventEmitter interface. It's either a plain EventEmitter or a proper stream. This is a summary of the events we emit:

Event                Usage   Location  Description
log                  public  server    A new log message
transform::pagelet   public  server    Transform a Pagelet
listening            public  server    The server is listening
error                public  server    The HTTP server received an error
pagelet::configure   public  server    A new pagelet has been configured
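
As a rough sketch, subscribing to a couple of these events could look like this (the handler bodies are illustrative only):

bigpipe.on('listening', function listening() {
  console.log('BigPipe is now accepting requests');
});

bigpipe.on('error', function error(err) {
  console.error('HTTP server error:', err);
});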

Debugging

The library makes use of the diagnostics module and has all of its internals namespaced to bigpipe:. These debug messages can be triggered by starting your application with the DEBUG= environment variable. In order to filter out all messages except BigPipe's messages, run your server with the following command:

DEBUG=bigpipe:* node <server.js>

The following DEBUG namespaces are available:

  • bigpipe:server The part that handles the request dispatching, page / pagelet transformation and more.
  • bigpipe:pagelet Pagelet generation.
  • bigpipe:compiler Asset compilation.
  • bigpipe:primus BigPipe Primus setup.
  • pagelet:primus Pagelet and Primus interactions
  • pagelet Pagelet interactions

Testing

Tests are automatically run on Travis CI to ensure that everything is functioning as intended. For local development we automatically install a pre-commit hook that runs the npm test command every time you commit changes. This ensures that we don't push any broken code into this project.

Inspiration

Bigpipe is inspired by the concept behind Facebook's BigPipe. For more details read their blog post: Pipelining web pages for high performance.

License

BigPipe is released under MIT.


assume's Issues

`assume.wait`, but immediately calling `next` on failure

This might be better served by a try/catch wrapper, or something like tryit, but I wanted to get feedback.

The assume.wait function is supposed to make it much simpler to handle async tests. However, it's still a bit annoying to remember to capture errors to be able to pass them to next. For example:

 it('does async things', function (done) {
   var next = assume.wait(2, 4, done);

   asynctask(function (err, data) {
     assume(err).is.a('undefined');
     assume(data).equals('testing');

     next();
   });

   asynctaskfail(function (err, data) {
     assume(err).is.a('undefined');
     assume(data).equals('testing');

     next();
   });
 });

If asynctaskfail actually did return an err, one might expect done to be called with the assertion error. Instead nothing happens, and most harnesses will time out after 2 seconds, giving the user the harness's timeout error instead.

To handle this correctly, I must write something similar to the following:

 it('does async things', function (done) {
   var next = assume.wait(2, 4, done);

   asynctask(function (err, data) {
     try {
       assume(err).is.a('undefined');
       assume(data).equals('testing');

       next();
     } catch(e) {
       next(e);
     }
   });

   asynctaskfail(function (err, data) {
     try {
       assume(err).is.a('undefined');
       assume(data).equals('testing');

       next();
     } catch(e) {
       next(e);
     }
   });
 });

This isn't much different than the code I had to write before assume.wait, so I'm not sure what benefit it brings to the table in the current iteration.

While reading the documentation and glancing at the example, I expected the assertion functions to immediately call done on error, instead of throwing an exception.
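
For reference, one possible shape for such a wrapper is sketched below; the guard helper is hypothetical and not part of assume:

 //
 // Hypothetical helper: run the callback body in a try/catch and forward any
 // thrown assertion error to next().
 //
 function guard(next, fn) {
   return function () {
     try {
       fn.apply(this, arguments);
       next();
     } catch (e) {
       next(e);
     }
   };
 }

 asynctask(guard(next, function (err, data) {
   assume(err).is.a('undefined');
   assume(data).equals('testing');
 }));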

Update npm package

npm install assume

yields this error:

Error extracting /Users/XYZ/.npm/ansi-codes/0.0.0/package.tgz archive: ENOENT: no such file or directory, open '/Users/XYZ/.npm/ansi-codes/0.0.0/package.tgz'

Installing directly from the repo works.

`string(thing)` can cause unwanted output in terminals

We generate expectation messages for each assertion that is run so we can give some useful information about why something failed. The expectation message is generated before we check whether the assertion fails or not. (There is no way of knowing this beforehand due to the .not flag.) To generate the output we need to crawl through the objects that are provided and access their properties, so if getters are defined on properties they will always be executed. This can have unwanted side effects, as we might trigger errors that have nothing to do with the test at all:

var foo = {
  bar: 'bar',
  get latest () {
    throw new Error('fail')
  }
};

assume(foo).equals(foo);

The test above would fail because of this. Most cases might not be as extreme as this, but it can be annoying. For example, https://github.com/primus/primus-emit/blob/master/test.js#L219 will generate a

connections property is deprecated. Use getConnections() method

message in the console as the string() method crawls the supplied server object and checks the server.connections property.

A possible way around this is to have the expect messages generated by a function. We could wrap the expect function in a try / catch to ensure that property access does not throw, and we would only access the objects once we've actually failed.
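
A rough sketch of that lazy approach, with names that are illustrative rather than assume's actual internals:

//
// Illustrative only: build the expectation message through a function and
// evaluate it, inside a try/catch, only once the assertion has failed.
//
function lazyMessage(value) {
  return function message() {
    try {
      return string(value); // crawl the value only when we actually fail
    } catch (e) {
      return '<could not stringify value: ' + e.message + '>';
    }
  };
}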

Another way around this is to prevent the expect messages from being generated at all.

Found by @lpinca in https://github.com/primus/primus-emit/

Object thrown in NodeJS

The things thrown in browsers and NodeJS are different: in browsers an Error instance is thrown, whereas in NodeJS a plain object is thrown. Is there a particular reason an AssertionError couldn't be thrown in both cases?
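
A minimal sketch of what such a shared AssertionError could look like (assuming nothing like it is exported today):

//
// Illustrative Error subclass that could be thrown in both environments.
//
function AssertionError(message) {
  this.name = 'AssertionError';
  this.message = message;
  if (Error.captureStackTrace) Error.captureStackTrace(this, AssertionError);
}

AssertionError.prototype = Object.create(Error.prototype);
AssertionError.prototype.constructor = AssertionError;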

Is `Assert.add` part of the public interface?

I like what you've done so far with this project, and I've used it in a couple of my side projects. I'm interested in using it in our main project, but we have custom assertions in our other library. Is Assert.add part of the public interface? Put another way, how are you thinking about supporting custom assertions and/or plugins?

Add the ability to extend functionality of existing methods

We have a .contains method that is currently used for string operations and arrays using .indexOf. There might be use cases where plugin authors want to extend this method to allow for more kinds of checks. We should come up with a way to do this.

assume.contains.before(function () {
  return true / false;
});

Variadic assertions and custom messages

The standard bag of assertions tends to come in the following form: a function that accepts a pre-determined (or easily determinable at runtime) number of arguments (such as throw), with a custom message as the last parameter. It's also what's documented:

If you want the failed assertion to include a custom message or reason you can always add this as last argument of the assertion function.

I wanted to keep the feel of the standard assertions in my plugins: they should feel like a part of the ecosystem, rather than something foreign that happens to work. So for anything that could be variadic, I'm accepting an array as the first parameter.

There is one exception in the standard assertions that questions this assumption: either. It's the only truly variadic assertion in the standard library, and it's also the only assertion that doesn't accept a custom message.

(The other exception, a nit-pick, is eql, but it's not documented to accept the slice parameter, and it's correctable.)

I have a preference for not challenging user assumptions, and for having all methods accept the custom message parameter, even if it requires more arrays. But I wanted your input on how I should proceed.
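
As an illustration of that convention, a hypothetical plugin assertion could be called like this (oneOf does not exist in assume; it only demonstrates the argument order):

var statusCode = 204;

//
// Variadic values go in an array so the optional custom message can keep its
// usual last position.
//
assume(statusCode).is.oneOf([200, 201, 204], 'unexpected status code');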

[feature request] assume(new Promise(...)).throws();

It would be really neat if assume(p).throws() could assert that a promise is rejected/unfulfilled with an error.

Use case:

assume(asyncPromiseOperation(invalidInput)).throws();

It would also be cool if assume(f).throws() worked with functions that return promises.
Use case:

assume(async () => {
  while (true) {
    await asyncPromiseOperationThatSometimesFails(invalidInput);
  }
}).throws();

Add the ability to manually pass in source files

The pruddy-error project currently uses AJAX or fs to read the files that appear in the error stacktrace so it can pinpoint the location of the errors. There are environments where we cannot use these methods, so we should supply a way to pass in source files manually.

assume.source('filename', 'var balbalb = content of the filename');

increment `slice` when cloned?

Working on my plugin, I was wondering if it would make sense to increment the default value of slice to this.test when the assertion is cloned, as otherwise it reports the failure as coming from my plugin rather than from the user's entry point.
