karma-mocha-reporter's Issues

Failed test's error points to Chai

See e.g. https://travis-ci.org/ckeditor/ckeditor5-engine/builds/173847164#L7972:

Error: Uncaught AssertionError: expected 1 to equal 2 (/home/travis/build/ckeditor/ckeditor5-engine/node_modules/chai/chai.js:206)

This is pretty useless if a test has more assertions. In the screenshot I can see that it works fine with some other assertion library, so perhaps it's a problem with Chai. But what would help (and would in fact be pretty useful in general) is if a longer stack trace were logged when an assertion fails.

console.log doesn't work in terminal (karma 1.1.0)

I'm using Karma v1.1.0, and now I have a problem.
on v0.13.22 ... using console.log to show log output in the terminal works.
on v1.1.0 ... it doesn't.

I think the config below is causing this problem:
https://github.com/karma-runner/karma/blob/master/lib/config.js#L242-L246

The code below handles "browserConsoleLogOptions":
https://github.com/karma-runner/karma/blob/master/lib/reporters/base.js#L5

Sorry, I don't know how to write a reporter plugin, but I hope this helps.
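
For anyone hitting this: a commonly suggested workaround (hedged; verify against the karma documentation for your version) is to configure karma's browserConsoleLogOptions so console.log output reaches the terminal again:

// karma.conf.js (excerpt)
browserConsoleLogOptions: {
  level: 'log',  // include console.log messages, not only higher levels
  terminal: true // write captured browser logs to the terminal
}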

Issue using plugin

If I have the plugin defined

plugins: ['karma-mocha-reporter']

I receive the following error (if I remove this line, everything works just fine as expected). I'm curious what functionality the plugin supplies.

/home/merickson/code/angular/angular-sandbox/node_modules/di/lib/injector.js:9
      throw error('No provider for "' + name + '"!');
      ^

Error: No provider for "framework:mocha"! (Resolving: framework:mocha)
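
The likely cause (hedged, based on how karma resolves plugins): an explicit plugins array replaces karma's default of auto-loading every installed karma-* package, so karma-mocha, which provides framework:mocha, is no longer loaded. Listing it explicitly should resolve the error:

// karma.conf.js (excerpt)
plugins: [
  'karma-mocha',          // provides framework:mocha
  'karma-mocha-reporter'  // provides reporter:mocha
  // plus any launchers you use, e.g. 'karma-chrome-launcher'
]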

Manipulate the order of output?

This reporter rules, but one thing that's bugging me is that the output doesn't list the most recently modified files last; right now I have to scroll up to see the most recent tests. Is there an easy way to make this happen or to configure it?

One of the reasons this would be useful: when you have to log something to the console, you wouldn't have to scroll up until you find the console.log result.

If no tests pass, no summary is written

If all the tests fail, then no summary is written. You can easily reproduce this in the demo by commenting out all but one test and making that test fail.

Console output shown in previous test

When a test produces console output, that output is shown under the previous test. For example, here are two test files: SlovoAsListItemView, which has no console output, and SidePanelView, which has two console outputs. And here is the reporter output:

SlovoAsListItemView
✔ should $mount view
LOG: 'Zemla {5, "Россия"} subscribed to "api:model.Zemla.id5."'
LOG: 'Zemla {1, "Земля"} subscribed to "api:model.Zemla.id1."'
SidePanelView
✔ should $mount view SidePanelView
✔ should $mount view

As you can see, this is incorrect.

Error after errors in tests are printed out

Hey, I'm not really sure whether this applies to karma-mocha-reporter, karma, or gulp, but it only happens after the failed tests are printed (if all tests pass, I don't get the stack trace printed and it ends properly). Here is a screenshot of the error I encountered:

[screenshot]

[Request] Output test results into file

It would be handy, for staging/production deployment needs, to be able to save the test output into, let's say, an *.xml file.
Is there a way to do it now?
I haven't found an answer in the API documented in the README.md.
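
karma-mocha-reporter itself only writes to the console, but karma can run several reporters at once, so one possible approach (hedged; karma-junit-reporter is a separate plugin, and the option below comes from its own documentation) is to pair this reporter with an XML-producing one:

// karma.conf.js (excerpt)
reporters: ['mocha', 'junit'],
junitReporter: {
  outputDir: 'test-results' // hypothetical path; one XML file is written per browser
}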

TypeError occurs when I use function name of Object.prototype in suite description

Hi, this issue reproduces like this:

describe("spec", function() { 
  it("toString", function() { 
    var list = [1,2]; 
    expect(list.toString()).to.be("1,2"); 
  }); 
}); 

Error message:

➜  ./node_modules/karma/bin/karma start                                                                                                                                                                   [/Users/koba04/Desktop/karma] 
INFO [karma]: Karma v0.12.16 server started at http://localhost:9876/ 
INFO [launcher]: Starting browser Chrome 
INFO [Chrome 35.0.1916 (Mac OS X 10.7.5)]: Connected on socket khMgWNqe48h01O-MMQLr with id 51369303 
ERROR [karma]: [TypeError: Cannot assign to read only property 'name' of function toString() { [native code] }] 
TypeError: Cannot assign to read only property 'name' of function toString() { [native code] } 
    at /Users/koba04/Desktop/karma/node_modules/karma-mocha-reporter/index.js:182:23 
    at Array.reduce (native) 
    at specComplete (/Users/koba04/Desktop/karma/node_modules/karma-mocha-reporter/index.js:178:14) 
    at self.onSpecComplete (/Users/koba04/Desktop/karma/node_modules/karma-mocha-reporter/index.js:234:9) 
    at null.<anonymous> (/Users/koba04/Desktop/karma/node_modules/karma/lib/events.js:15:22) 
    at EventEmitter.emit (events.js:98:17) 
    at onResult (/Users/koba04/Desktop/karma/node_modules/karma/lib/browser.js:213:13) 
    at Socket.<anonymous> (/Users/koba04/Desktop/karma/node_modules/karma/lib/events.js:15:22) 
    at Socket.EventEmitter.emit [as $emit] (events.js:117:20) 
    at SocketNamespace.handlePacket (/Users/koba04/Desktop/karma/node_modules/karma/node_modules/socket.io/lib/namespace.js:335:22) 
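
A minimal sketch of the assumed cause (hypothetical, inferred from the stack trace pointing at a name assignment in index.js): looking up a description like 'toString' on a plain object finds the inherited built-in instead of nothing, and a function's name property is read-only:

'use strict';
var suites = {};                // plain object used as a lookup map
var item = suites['toString'];  // === Object.prototype.toString (inherited!)
item.name = 'toString';         // TypeError: Cannot assign to read only property 'name'
// A map created with Object.create(null), or a hasOwnProperty check
// before reusing an entry, avoids the collision.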

The report does not appear as shown in Readme.md

[screenshot]

As above, some of the karma startup info went into the START section (is that section owned by this plugin?).

It also doesn't output the full report.

I'm using the newest karma, the newest mocha, and the newest version of this plugin.

Test suites containing skipped tests are reported as failed

If some other test fails, the reporter lists the name of the skipped test suite in addition to the test that actually failed.

[screenshot: skipped-test-as-failed]

The code of the test suite that is mistakenly reported here:

describe('"describe" containing a skipped "it"', function () {
    var spy;

    xit('test 1 - SKIPPED', function () {
        expect(spy).to.have.been.calledOnce();
    });

    it('test 2', function () {
    });
});

Missing/wrong error messages for tests that throw in one browser only

Hello!
If a test fails only in PhantomJS, that error will not be listed by karma-mocha-reporter.

Here is a little test-suite to reproduce the error:

function goodFunction() {
  return true;
}

function badFunction() {
  throw new Error('Oh no!');
}

function badFunctionFor(agent) {
  if (navigator.userAgent.indexOf(agent) !== -1) {
    throw new Error('Oh no! What\'s wrong with ' + agent);
  }
}

describe('Reproduce mocha-reporter output', function() {
  it('works in all browsers', function() {
    goodFunction();
  });

  it('fails in all browsers', function() {
    badFunction();
  });

  it('fails in Chrome', function() {
    badFunctionFor('Chrome');
  });

  it('fails in PhantomJS', function() {
    badFunctionFor('PhantomJS');
  });
});

And that's the output:

[screenshot]

So you see, if all browsers fail, the report works, but if only one fails, things get weird:
fails in Chrome is incorrectly green, although it fails, and its error is listed below FAILED TESTS. fails in PhantomJS is correctly red, but its error is not listed below FAILED TESTS.

I tried to debug it by adding some console.logs to karma-mocha-reporter. Unfortunately I'm not familiar enough with the inner workings of Karma's base reporter to fix it, but these two outputs might give you a hint:

Output of console.log(result); in function specComplete:

...
{ description: 'fails in Chrome',
  id: 'spec7',
  log: [ 'Error: Oh no! What\'s wrong with Chrome in http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20 (line 40)\nbadFunctionFor@http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20:40:11\n@http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20:54:5' ],
  skipped: false,
  success: false,
  suite: [ 'Reproduce mocha-reporter output' ],
  time: 0,
  executedExpectationsCount: 1 },

{ description: 'fails in PhantomJS',
  id: 'spec8',
  log: [],
  skipped: false,
  success: true,
  suite: [ 'Reproduce mocha-reporter output' ],
  time: 1,
  executedExpectationsCount: 0 }
...

Output of console.log(suite); in function printFailures:

...
{ name: '\u001b[31m✖\u001b[39m fails in Chrome',
     isRoot: false,
     type: 'it',
     skipped: false,
     success: false,
     count: 2,
     failed: [ 'Chrome 49.0.2623 (Mac OS X 10.10.0)' ],
     visited:
      [ 'PhantomJS 2.1.1 (Mac OS X 0.0.0)',
        'Chrome 49.0.2623 (Mac OS X 10.10.0)' ],
     printed: true,
     log: [ 'Error: Oh no! What\'s wrong with Chrome in http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20 (line 40)\nbadFunctionFor@http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20:40:11\n@http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20:54:5' ],
     assertionErrors: undefined },

{ name: '\u001b[32m✔\u001b[39m fails in PhantomJS',
     isRoot: false,
     type: 'it',
     skipped: false,
     success: true,
     count: 2,
     failed: [ 'PhantomJS 2.1.1 (Mac OS X 0.0.0)' ],
     visited:
      [ 'PhantomJS 2.1.1 (Mac OS X 0.0.0)',
        'Chrome 49.0.2623 (Mac OS X 10.10.0)' ],
     log: [ 'Error: Oh no! What\'s wrong with PhantomJS in http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20 (line 40)\nbadFunctionFor@http://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20:40:58\nhttp://localhost:9877/base/demo/demo.spec.js?fb225905c65df1bed75ab7b02f98f89a0bf9be20:58:19' ],
     assertionErrors: undefined,
     printed: true } }
...

It seems like something happens to self.allResults between the events onSpecComplete (the moment of the first output) and onRunComplete (the moment of the second output)...

Running the test case stops after the first failed expectation

I believe it shouldn't.

This is my example test case:

    it('should show all failures for this silly test', function () {
        expect(1).toBe(0);
        expect(2).toBe(0);
        expect(3).toBe(0);
        expect(4).toBe(0);
    });

Result using karma-mocha-reporter:

(...)
SUMMARY:
✔ 42 tests completed
✖ 1 tests failed

FAILED TESTS:
  my test suite
    something
      ✖ should show all failures for this silly test
        PhantomJS 1.9.8 (Mac OS X)
      Expected 1 to be 0.
          at src/__tests__/schemaUtils-test.js:126:0

Compare this with the output from the default karma reporter:

PhantomJS 1.9.8 (Mac OS X) my test suite something should show all failures for this silly test FAILED
    Expected 1 to be 0.
        at src/__tests__/schemaUtils-test.js:126:0
    Expected 2 to be 0.
        at src/__tests__/schemaUtils-test.js:127:0
    Expected 3 to be 0.
        at src/__tests__/schemaUtils-test.js:128:0
    Expected 4 to be 0.
        at src/__tests__/schemaUtils-test.js:129:0
PhantomJS 1.9.8 (Mac OS X): Executed 43 of 43 (1 FAILED) (0.336 secs / 0.036 secs)

No colors

There are no colors in the output.
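
A first thing to check (hedged; this is karma's global setting rather than a reporter option): make sure colors are enabled in karma.conf.js, since reporters inherit the flag, and note that some terminals and CI logs strip ANSI colors anyway.

// karma.conf.js
module.exports = function (config) {
  config.set({
    colors: true,          // enable colored output for all reporters
    reporters: ['mocha']
  });
};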

Suppress Failed Tests Stack Traces

Is it possible to suppress the output of the last section, "FAILED TESTS:"?

My testing setup gives developers a couple of different ways to access the test pass/fail information, and it turns out we only need the Karma console output for a summary; we use other methods for looking at stack traces and fixing failing tests.

Would it be (or is it already) possible to leave out the extensive failure reporting? It feels like it could be configurable via the "output" option or a separate option entirely, maybe a "summaryOnly" value for the output option.
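
For reference, the README documents an output option on the reporter; if your installed version supports it, a reduced mode along these lines may already cover this (hedged; check which values your version accepts):

// karma.conf.js (excerpt)
mochaReporter: {
  output: 'minimal' // reduced output; see the README for the exact behavior of each value
}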

Better full page reload failures

If some tests accidentally do a full page refresh, karma-dots-reporter reports the error, but karma-mocha-reporter doesn't make the failure as clear.

With dots reporter:

..............................................................................
PhantomJS 1.9.8 (Windows 7 0.0.0) ERROR
  Some of your tests did a full page reload!
PhantomJS 1.9.8 (Windows 7 0.0.0): Executed 78 of 135 ERROR (0.2 secs / 0.2 secs)

With mocha reporter:

PhantomJS 1.9.8 (Windows 7 0.0.0) ERROR
  Some of your tests did a full page reload!
Finished in 0.222 secs / 0.217 secs
SUMMARY:
√ 78 tests completed

A better report would be:

PhantomJS 1.9.8 (Windows 7 0.0.0) ERROR
  Some of your tests did a full page reload!
Finished in 0.222 secs / 0.217 secs
SUMMARY:
√ 78 tests completed
× 57 tests failed

Better handle duplicate descriptions

Repro

describe('thing', function() {
    it('should do stuff', function() {
        expect(1+1).toEqual(2);
    });
    it('should do stuff', function() {
        expect(2+1).toEqual(3);
    });
});

Produces

thing
  √ should do stuff

SUMMARY:
  √ 2 tests completed

Expected

thing
  √ should do stuff
  √ should do stuff

SUMMARY:
  √ 2 tests completed

Not getting individual error reports if multiple browsers run tests and a test fails

I typically do most of my development/testing using a single browser and then run all of the browsers at the end using a different karma config file. When running with a single browser, I get full output for each failure (browser, stack trace, etc.). However, if I'm running multiple browsers and one fails, I only get a list of failed tests, not the stack trace for the failed browser (nor do I get any indication of which browser failed).

Prettified test failure for JSON objects

I am running the following test:

describe("objects", function () {
    it("should equal", function () {
        var a = {
            a: 1,
            b: 2,
            c: {
                a: 1,
                b: 2,
                c: {
                    a: 1,
                    b: 2,
                    x: 3
                }
            }
        };

        var b = {
            a: 1,
            b: 2,
            c: {
                a: 1,
                b: 2,
                c: {
                    a: 1,
                    b: 2,
                    x: 4
                }
            }
        };
        a.should.deep.equal(b);
    });
});

The two objects, a and b, differ slightly, and the test fails as expected. However, the error message is not meaningful:

AssertionError: expected { Object (a, b, ...) } to deeply equal { Object (a, b, ...) }

How can I get it to output a deep diff instead?

Libraries I am currently using:

  • karma 0.12.1
  • karma-mocha 0.1.6
  • karma-mocha-reporter 0.3.0
  • karma-chai 0.1.0

It looks like others are having similar issues: https://groups.google.com/forum/#!topic/chaijs/YjYpc8vnuuc

And using mocha and chai without karma seems to work just fine: http://stackoverflow.com/questions/13792885/nodejs-deep-equal-with-differences
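
One knob worth trying (hedged; this is a Chai setting, not a reporter option, and it depends on your Chai version): Chai truncates large objects in assertion messages, which is exactly what produces { Object (a, b, ...) }. Disabling the truncation makes the full objects appear in the message:

// in your test setup, before the assertions run
chai.config.truncateThreshold = 0; // 0 disables truncation of actual/expected values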

Error on some failed tests

Sometimes, when printing details about failed tests, the reporter throws the error below and causes the browser to permanently disconnect.
I say sometimes because for most failed tests the details are printed correctly, including the expected/actual diff.

Missing error handler on `socket`.
TypeError: Cannot read property 'expected' of undefined
    at printFailures (/opt/node_modules/karma-mocha-reporter/index.js:309:63)
    at printFailures (/opt/node_modules/karma-mocha-reporter/index.js:345:17)
    at printFailures (/opt/node_modules/karma-mocha-reporter/index.js:345:17)
    at printFailures (/opt/node_modules/karma-mocha-reporter/index.js:345:17)
    at [object Object].self.onRunComplete (/opt/node_modules/karma-mocha-reporter/index.js:543:21)
    at [object Object].<anonymous> (/opt/node_modules/karma/lib/events.js:13:22)
    at emitTwo (events.js:92:20)
    at [object Object].emit (events.js:172:7)
    at [object Object].onBrowserComplete (/opt/node_modules/karma/lib/executor.js:47:15)
    at [object Object].<anonymous> (/opt/node_modules/karma/lib/events.js:13:22)
    at emitTwo (events.js:92:20)
    at [object Object].emit (events.js:172:7)
    at [object Object].onComplete (/opt/node_modules/karma/lib/browser.js:142:13)
    at Socket.<anonymous> (/opt/node_modules/karma/lib/events.js:13:22)
    at emitTwo (events.js:92:20)

Reporter output is truncated when running multiple browsers

With multiple browsers, the mocha reporter output does not show all tests.

For reference, here is what the "dots" reporter looks like when the test suite is run for:

  1. Chrome only
  2. PhantomJS only
  3. Chrome + PhantomJS together

Dots reporter, Chrome only

$ karma start --reporters dots --browsers Chrome
17 02 2016 11:43:07.696:INFO [karma]: Karma v0.13.21 server started at http://localhost:9876/
17 02 2016 11:43:07.703:INFO [launcher]: Starting browser Chrome
17 02 2016 11:43:09.100:INFO [Chrome 50.0.2645 (Mac OS X 10.11.2)]: Connected on socket /#No-16XuiuhfcW7i0AAAA with id 84879202
................................................................................
................................................................................
................................................................................
................................................................................
...........................
Chrome 50.0.2645 (Mac OS X 10.11.2): Executed 347 of 347 SUCCESS (2.163 secs / 2.091 secs)

We can see here that 347 tests passed, and there are 347 dots displayed (4 rows of 80 dots, plus 27 more).

Dots reporter, PhantomJS only

$ karma start --reporters dots --browsers PhantomJS
17 02 2016 11:49:13.128:INFO [karma]: Karma v0.13.21 server started at http://localhost:9876/
17 02 2016 11:49:13.135:INFO [launcher]: Starting browser PhantomJS
17 02 2016 11:49:13.741:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket /#GbQWP9MwXAkl13auAAAA with id 3938306
................................................................................
................................................................................
................................................................................
................................................................................
...........................
PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 347 of 347 SUCCESS (1.837 secs / 1.801 secs)

For PhantomJS, we get the same results as Chrome.

Dots reporter, Chrome + PhantomJS

$ karma start --reporters dots --browsers Chrome,PhantomJS
17 02 2016 11:53:03.695:INFO [karma]: Karma v0.13.21 server started at http://localhost:9876/
17 02 2016 11:53:03.703:INFO [launcher]: Starting browser Chrome
17 02 2016 11:53:03.709:INFO [launcher]: Starting browser PhantomJS
17 02 2016 11:53:04.848:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket /#EMck2MzgZHk2xUcUAAAA with id 81578473
................................................................................
................................................................................
.........17 02 2016 11:53:05.746:INFO [Chrome 50.0.2645 (Mac OS X 10.11.2)]: Connected on socket /#P5qe0B0r6t7JB3sRAAAB with id 34275392
.......................................................................
................................................................................
................................................................................
................................................................................
.............
PhantomJS 2.1.1 (Mac OS X 0.0.0): Executed 347 of 347 SUCCESS (1.992 secs / 1.969 secs)
................................................................................
................................................................................
.........................................
Chrome 50.0.2645 (Mac OS X 10.11.2): Executed 347 of 347 SUCCESS (2.235 secs / 2.17 secs)
TOTAL: 694 SUCCESS

Here the Phantom tests start before Chrome is connected, and also finish earlier (so the dots aren't contiguous).

But we can still see that both browsers executed 347 tests successfully (694 total), and there are 694 dots (8 rows of 80 dots, plus 54 more).

Now let's compare how the "mocha" reporter looks for the same three scenarios.

Mocha reporter, Chrome only

(For brevity, I'm piping the output into wc -l, which gives us a line count.)

$ karma start --reporters mocha --browsers Chrome | wc -l
     381

We get 381 lines of output, which sounds about right for 347 tests.

Mocha reporter, PhantomJS only

$ karma start --reporters mocha --browsers PhantomJS | wc -l
     381

For Phantom, we get the same results.

Mocha reporter, Chrome + PhantomJS

$ karma start --reporters mocha --browsers Chrome,PhantomJS | wc -l
     223

Only 223 lines?

Looking at the full output, 694 tests were successful, but only a fraction of those tests appear in the output:

$ karma start --reporters mocha --browsers Chrome,PhantomJS

START:
17 02 2016 12:03:06.996:INFO [karma]: Karma v0.13.21 server started at http://localhost:9876/
17 02 2016 12:03:07.006:INFO [launcher]: Starting browser Chrome
17 02 2016 12:03:07.016:INFO [launcher]: Starting browser PhantomJS
17 02 2016 12:03:08.226:INFO [PhantomJS 2.1.1 (Mac OS X 0.0.0)]: Connected on socket /#JNCw1S90MEeUck_uAAAA with id 27325993
17 02 2016 12:03:08.818:INFO [Chrome 50.0.2645 (Mac OS X 10.11.2)]: Connected on socket /#mqXYVwRxwPOP5mjpAAAB with id 32929822
  progressBar
    ✔ object constructor
    ✔ render
    ✔ setTotal
    ✔ setSection
  toucheventproxy
    ✔ object constructor
    ✔ handleEvent - mousedown disabled
    ✔ handleEvent - mousedown enabled
    ✔ handleEvent - mousemove
    ✔ handleEvent - mouseup
    ✔ handleEvent - click
    ✔ handleEvent - touch event from proxy
    ✔ handleEvent - touch event from brower
  cache-controller
    ✔ object constructor - without application cache
    ✔ object constructor
    ✔ update - with application cache
    ✔ update - without application cache
  database-controller
    ✔ constructor - error
    ✔ constructor - without upgrade
    ✔ constructor - with upgrade fail
    ✔ constructor - with partial upgrade
    ✔ constructor - with full upgrade
  application-controller
    ✔ object constructor
    ✔ constructor - cache update
    ✔ constructor - cache update progress
    ✔ constructor - no singleton
    ✔ constructor - singleton
    ✔ start - dbConfig 304 Not Modified
    ✔ start - fail
    ✔ start - database upgrade
    ✔ start - success
    ✔ loadDependencies
    ✔ gotAppConfig - 304 Not Modified
    ✔ gotAppConfig - sync warning
    ✔ pushView - initial view
    ✔ pushView - subsequent view
    ✔ pushView - 304 Not Modified
    ✔ popView
    ✔ setScrollPosition
    ✔ contentShown - loading
    ✔ contentShown - loaded
    ✔ contentShown - unknown state
    ✔ setHeader - with header content
    ✔ setHeader - without header content
    ✔ clearHeader
    ✔ setFooter - with footer content
    ✔ setFooter - without footer content
    ✔ setFooter - without footer
    ✔ clearFooter
    ✔ setContentHeight
    ✔ showNotice
    ✔ showNotice - without button events
    ✔ showNotice - without buttons or id
    ✔ hideNotice
    ✔ noticeHidden
    ✔ noticesMoved
    ✔ showScrollHelper
    ✔ hideScrollHelper
    ✔ gotLastTyncTime - no value
  list
    ✔ object constructor
    ✔ refresh - 304 not modified
    ✔ refresh - without event handler
    ✔ refresh - without grouping
    ✔ refresh - with grouping

Finished in 4.185 secs / 4.1 secs

SUMMARY:
✔ 694 tests completed

We know that PhantomJS was starting and finishing earlier than Chrome... is it possible that, as soon as the first browser completes, the mocha reporter stops reporting?

Highlight complex object diff

It would be great if the reporter highlighted the diff in a complex object instead of just dumping both the expected and actual values to the console. I'm using karma-jasmine because karma-mocha doesn't show the stack trace on error, and I'm using chai 'assert' assertions. (See the note after the version list below.)

Using:

  • karma 0.13.9
  • karma-chai-sinon 0.1.5
  • karma-jasmine 0.3.6
  • karma-mocha-reporter 1.1.1
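
For what it's worth, later versions of karma-mocha-reporter document a showDiff option that renders a diff of actual vs. expected (hedged; it may not exist in 1.1.1):

// karma.conf.js (excerpt)
mochaReporter: {
  showDiff: true // the README also documents string values such as 'unified'
}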

Ticks and Crosses don't show up when using bg colors

The colors feature is fantastic! However, there is a problem when using chalk's bg colors: the icons (the ticks/crosses that appear to the left of tests) don't show up. Any idea why this might be?

Thanks again!

Colored icons when config.colors is false

The icons from log-symbols are colored even when the colors configuration is false.

[screenshot: wrong_color]

Just strip the color when no color is required, to remove the icons' color:

chalk.stripColor(logSymbols.error);

Diff not working when comparison is in a callback

Any chance this could be fixed? When doing an object comparison inside a callback, instead of displaying a nice diff, all you get is:

WARN: 'Unhandled rejection AssertionError: expected [ Array(1) ] to deeply equal [ Array(1) ]'

Example code:

describe('Some test', () => {
  let sandbox;

  beforeEach(() => {
    sandbox = sinon.sandbox.create();
    sandbox.useFakeServer();
    sandbox.server.autoRespond = true;
  });

  afterEach(() => {
    sandbox.restore();
  });

  it('should run some test with a callback', (done) => {
    const someResponse = [{ some: 'response' }];
    sandbox.server.respondWith('GET', '/a/url', (request) => {
      request.respond(
        200,
        { 'Content-Type': 'application/json' },
        JSON.stringify(someResponse)
      );
    });

    somePromise.then((someCollection) => {
      someCollection.toJSON().should.deep.equal(someResponse);

      done();

      return null;
    });
  });

});

I'm using chaijs for assertions, backbone+marionette, and bluebirdjs for promises.
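
A likely explanation (hedged): the failing assertion throws inside the .then callback, which rejects the promise instead of failing the test, so mocha only ever sees bluebird's "Unhandled rejection" warning and the reporter never receives the assertion error, let alone a diff. Forwarding the rejection to mocha restores the normal failure:

// inside the test, instead of calling done() unconditionally:
somePromise.then((someCollection) => {
  someCollection.toJSON().should.deep.equal(someResponse);
  done();
}).catch(done); // route assertion errors to mocha so the reporter sees them
// Alternatively, return the promise from the test and drop `done` entirely.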

No test result output when running tests after upgrading to 1.3.0 and newer

I've used karma-mocha-reporter 1.1.5 for a long time. During a system update I also upgraded to the latest version. Since then I don't see any test results while the tests are running.

I've played around with several reporter versions, and it seems the issue was introduced in 1.3.0: older versions work, newer ones don't. In 1.3.0 you changed the printing behaviour for when multiple browsers run the tests in parallel. I have only one browser, which I connect to karma manually to run the tests. It seems something is broken there. Or have you introduced a setting that might prevent the output?

String diffs JSON-encoded

Hi there,

It would appear that the recently released string-diffing support diffs JSON-encoded strings. This seems unintentional.

Here is a reproduction:

expect(`a
b`).to.eql(`b
a`);

The above produces this output:

[screenshot]

Since this appears to be JSON-encoded, I tried manually parsing the expected and actual values around line 369:

err.actual = JSON.parse(err.actual);
err.expected = JSON.parse(err.expected);

With this in place, the output is correct:

[screenshot]

Is it possible that somewhere within the chai -> mocha -> karma -> karma-mocha-reporter stack, something is stringifying the actual and expected properties?

Skipped tests don't show up

Example:

describe("Data service", function(){
    it("has a dataObject constructor", function(){
      expect(DataService.dataObject).toBeTypeOf('function');
    });
  xit("which accepts a url and an alt ", function () {
      var obj =  new DataService.dataObject('http://www.google.com', 'google');
      console.log(obj.url);
      expect(obj.url).toExist();
    });
})

logs:

DataService
 ✓ has a dataObject constructor
LOG: 'http://www.google.com'

Summary
✓ 1 tests completed

The summary doesn't show that a test was skipped...
But doing the opposite (singling out tests with iit) shows skipped tests:

describe("Data service has a dataObject", function(){
    iit("has a dataObject constructor", function(){
      expect(DataService.dataObject).toBeTypeOf('function');
    });
  it("which accepts a url and an alt ", function () {
      var obj =  new DataService.dataObject('http://www.google.com', 'google');
      console.log(obj.url);
      expect(obj.url).toExist();
    });
});

logs:

DataService
 ✓ has a dataObject constructor
 ✓ which accepts a url and an alt  (skipped)

Is there a way to show tests marked with xit / xdescribe as (skipped)?

Skipped tests listed in failed tests.

I run the same tests in multiple browsers (using BrowserStack). Some tests are skipped in certain browsers. The problem is, when one browser fails a test, the other browsers that skipped that test are also listed in the failed tests section. This prevents me from knowing which browser actually triggered the failure.

You can see below that 1 test is skipped and 1 test failed, but 2 browsers are listed in failed tests.

SUMMARY:
✔ 18 tests completed
ℹ 1 test skipped
✖ 1 test failed

FAILED TESTS:
  css-vendor
    .supportedValue()
      ✖ should prefix if needed
        IE 11.0.0 (Windows 8.1 0.0.0)
        Android 4.1.2 (Android 4.1.2)

Installation issues

Hey there,

Firstly thanks for the lib.

I was wondering if you could perhaps help me fix the following installation issue:

 > bufferutil@<version> install /home/dev/www/node_modules/bufferutil
> node-gyp rebuild

gyp ERR! build error                                                                                                                                                                                                                                                                             
gyp ERR! stack Error: not found: make
gyp ERR! stack     at F (/usr/local/lib/node_modules/npm/node_modules/which/which.js:78:19)
gyp ERR! stack     at E (/usr/local/lib/node_modules/npm/node_modules/which/which.js:82:29)
gyp ERR! stack     at /usr/local/lib/node_modules/npm/node_modules/which/which.js:93:16
gyp ERR! stack     at FSReqWrap.oncomplete (fs.js:83:15)
gyp ERR! System Linux 3.13.0-29-generic
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/dev/www/node_modules/bufferutil
gyp ERR! node -v v5.2.0
gyp ERR! node-gyp -v v3.0.3
gyp ERR! not ok 
npm WARN install:bufferutil@<version> bufferutil@<version> install: `node-gyp rebuild`
npm WARN install:bufferutil@<version> Exit status 1

> utf-8-validate@<version> install /home/dev/www/node_modules/utf-8-validate
> node-gyp rebuild

gyp ERR! build error                                                                                                                                                                                                                                                                             
gyp ERR! stack Error: not found: make
gyp ERR! stack     at F (/usr/local/lib/node_modules/npm/node_modules/which/which.js:78:19)
gyp ERR! stack     at E (/usr/local/lib/node_modules/npm/node_modules/which/which.js:82:29)
gyp ERR! stack     at /usr/local/lib/node_modules/npm/node_modules/which/which.js:93:16
gyp ERR! stack     at FSReqWrap.oncomplete (fs.js:83:15)
gyp ERR! System Linux 3.13.0-29-generic
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/dev/www/node_modules/utf-8-validate
gyp ERR! node -v v5.2.0
gyp ERR! node-gyp -v v3.0.3
gyp ERR! not ok 
npm WARN install:utf-8-validate@<version> utf-8-validate@<version> install: `node-gyp rebuild`
npm WARN install:utf-8-validate@<version> Exit status 1

Is there a way to customise colors?

Hi,

First of all, thanks for karma mocha reporter - love it!

My question is: is there a way to customise the colors used? Whilst the white is appropriately white on Windows, the green and red for pass and fail are really hard to read.

If there was a way to control this that would be fantastic.
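
The README for recent versions documents a colors block under mochaReporter that overrides the default colors with chalk color names (hedged; the accepted values depend on your installed version):

// karma.conf.js (excerpt)
mochaReporter: {
  colors: {
    success: 'blue',
    info: 'bgGreen',
    warning: 'cyan',
    error: 'bgRed'
  }
}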

Different output to Mocha CLI runner for BDD tests

Hi,

I've just started using this library in some spikes to assess and proof-of-concept our TDD & BDD testing stacks for a new project, and I'm liking it so far, including how similar the output is to Mocha's.

One thing I have noticed is that the output for BDD tests (using the Yadda BDD framework) differs from what running mocha directly on the CLI produces. See below:

Mocha
C:\DEV\GMR\sandbox\ui-architecture>mocha test_out/scenarios.js

Context Sharing

√ Given I am on the scenarios chart view
√ When I select a particular hierarchy level
√ Then The full hierarchy path should be displayed in breadcrumb style in the header bar

√ Given I am on the scenarios chart view
√ When I select a particular hierarchy node in the breadcrumb bar
√ Then The chart and data should update to represent that level

6 passing (31ms)

Karma
C:\DEV\GMR\sandbox\ui-architecture>gulp scenario-test
[15:14:05] Using gulpfile C:\DEV\GMR\sandbox\ui-architecture\gulpfile.js
[15:14:05] Starting 'bundle-scenarios'...
[15:14:06] Finished 'bundle-scenarios' after 1.14 s
[15:14:06] Starting 'scenario-test'...
[15:14:06] Finished 'scenario-test' after 53 ms
INFO [karma]: Karma v0.12.28 server started at http://localhost:9876/
INFO [launcher]: Starting browser Chrome
INFO [Chrome 38.0.2125 (Windows 7)]: Connected on socket d72LZdA0eOZwJgHZZXL_ wi
th id 84544935

Start:
√ Given I am on the scenarios chart view
√ When I select a particular hierarchy level
√ Then The full hierarchy path should be displayed in breadcrumb style in the header bar
√ When I select a particular hierarchy node in the breadcrumb bar
√ Then The chart and data should update to represent that level

Finished in 0.025 secs / 0.018 secs

SUMMARY:
√ 6 tests completed


It looks like it is missing the "Context Sharing" description, one of the "Given"s, and some spacing.

Is this by design, a bug, or am I doing something wrong?

Env
Windows 7 | node v0.10.28 | npm v1.4.9 | karma v0.12.28 | karma-mocha v0.1.10 | karma-mocha-reporter v0.3.1 | mocha v2.0.1 | yadda v0.11.4

Thanks
Justin

Special characters in tfs build output logs

I have set up a gulp task to run our unit tests each time someone commits to master. We have a PowerShell script that works for all of our gulp tasks except the one that runs the tests. I have narrowed it down to some regex that looks for the Starting and Finished strings in the gulp output. Specifically, it fails on the Finished string because of the special characters in the output. Here is an example of what the output in the TFS build logs looks like:

2016-05-03T21:44:03.6342587Z [�[90m16:44:03�[39m] Finished '�[36munit-tests-all�[39m' after �[35m17 s�[39m

Changing the reporter back to the default 'progress' reporter produces output like this:

2016-05-04T15:28:29.3935335Z [10:28:29] Finished 'unit-tests-all' after 16 s

Any idea what is causing the special characters in the output? Is it the colors? If so, is there any way to turn them off? We are dumping the results of the test runs into a Slack channel for visibility, and I didn't want to have to create a new custom script to strip these characters out if I can avoid it.
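
Those sequences are ANSI color escape codes. One thing to try (hedged; this is karma's global flag and only affects karma's own output, while the gulp "Starting"/"Finished" lines are colored by gulp itself and may need a separate switch such as --no-color):

// karma.conf.js
module.exports = function (config) {
  config.set({
    colors: false // emit plain text without ANSI escape sequences
  });
};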
