Comments (19)
Generators producing failing tests by default is not something we will do; this would create excess pain/noise with little reward.
I do see the value of red/green refactor cycles. I can think of two approaches.
- Use something like https://github.com/JamesMGreene/qunit-pending and generate a pending test; this would be a sort of yellow test, pending implementation.
- A community add-on that provides blueprints which generate failing tests, as an opt-in.
I am closing this, not because the discussion isn't good, but because this isn't quite the correct venue.
Typically these sorts of requests are to be submitted via https://github.com/ember-cli/rfcs. I noticed the CONTRIBUTING.MD doesn't say this clearly, so I have updated it accordingly: fbbb354
I welcome an issue (or fleshed-out RFC) on the RFC repo.
from rfcs.
Sure, testing is one of those "delayed gratification" things that is painful at the start, and sure, the ignorant may be vociferous, but: Ember's supposed to be an opinionated framework, so here I am offering an opinion: such behavior is for me a clear fit to the "convention over configuration" mantra that makes us all love Ember. Convince me (and yourselves) otherwise.
I am a firm believer in the red/green refactor flow, but am unfortunately always in the minority. I suspect the masses would revolt.
I believe my above recommendation of pending tests would be a helpful middle-ground. Thoughts?
"What's in a name? A rose by any other name would still smell as sweet." That is, I don't see much difference between alerting the developer with pending - This test is not implemented yet and alerting them with failed - this needs to be implemented (aside from perhaps color and aesthetics/wording). If the aim is just to draw attention to missing/not-implemented tests, then the failed mechanism does the job fine and would save us from having to package this extra plugin (I hope I understand correctly that this qunit-pending plugin would need to be referenced to make "pending" an option).
If, on the other hand, the aim is really to introduce additional semantics to differentiate "not implemented" tests from "truly" failing implemented tests, then the pending option sounds ok. That said, call me a test Nazi ("opinionated" again), but a not-implemented test is a failure in my book...it's big and red and angry and deserves my time and attention.
With respect to the revolting masses, again my gut doesn't really agree with this mindset. If an engineering concept/pattern has proven its worth in the industry over many years, why should a framework that's trying to be progressive ("ambitious") in that industry be afraid to follow it? The masses will always be resistant to any kind of change, but if we can help them to understand the reasons for that change then they "should" appreciate it in the long run. I'm no social scientist, but I suspect such moves also help to strengthen Ember's "culture" and "opinionatedness". Maybe I'm just being ueber-optimistic and utopian though.
It's clear to me that we would need more input/consensus from the community on these broad issues before fleshing out any details on this suggestion.
@samcic The reason I am thinking of introducing a new type (it's not really new in most test environments, but in the browser it is...) is that it allows us to ship code that works for all our users, while a user could easily opt in to pending tests being failures or not.
I also believe it would allow pending to be a soft reminder that work needs to be done, for those who do not opt in to the stricter requirements.
Ok, thanks for clarifying. I can of course understand your arguments for wanting to avoid a change that might display too much negative "red" stuff to users. Personally, a pending result by default would solve my problem in that it would alert me to work needing to be done, so if that's the happy middle-ground then I'm all for it.
Now someone needs to figure out what actually needs to happen to land pending support, and make it easy for people to opt into pending === failure.
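The opt-in idea above can be sketched as a tiny reporter hook. This is a minimal sketch only: `reportOutcome` and `strictPending` are hypothetical names chosen for illustration, not real ember-cli or QUnit APIs.

```javascript
// Hypothetical reporter hook: map a "pending" test result to either a
// soft reminder (default) or a failure (opt-in), as discussed above.
// `strictPending` is an invented option name, not a real ember-cli flag.
function reportOutcome(result, options = {}) {
  const { strictPending = false } = options;
  if (result.status === 'pending') {
    // Default: pending stays a gentle "yellow" reminder.
    // Opt-in strict mode: pending counts as red for red/green TDD users.
    return strictPending ? 'failed' : 'pending';
  }
  return result.status;
}

// A generated-but-unimplemented test stays a reminder by default...
console.log(reportOutcome({ status: 'pending' })); // → 'pending'
// ...but counts as a failure for those who opt in.
console.log(reportOutcome({ status: 'pending' }, { strictPending: true })); // → 'failed'
console.log(reportOutcome({ status: 'passed' })); // → 'passed'
```

The point of the sketch is that the same generated test output can serve both camps, with the strict behaviour living behind a single configuration switch.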
As much as I'd like to get this in, are we representative enough? I appreciate you're an owner, but are there no others whose additional views/consensus on this might be useful?
@samcic My opinion is that I don't really care if the first test fails or passes, because I start changing it immediately. But for our newer users, red in the console might give the impression the tool is doing something wrong. We just rolled back our CSP addition for the same reason: the red in the console was confusing and reduced faith in the tool.
Maybe if the freshest of users run their first generator, then run the tests for the first time and see a green test, that might be more good than bad?
Is there any advantage to using pending over skip? Skip is already supported by QUnit out of the box.
@kellyselden Most of the time I'm with you...I start changing the test immediately. That said, I really do come across cases where I'm "scaffolding" the "feel" for the skeleton of a feature with a bunch of resources and don't want to flesh out a unit test for each one as I'm going. In such cases there is a clear value of having tests "remind" me that they need to be completed. Maybe I get interrupted and don't get the chance to finish what I was doing for days. Can you appreciate this line of thought?
@stefanpenner's suggestion of a yellow "pending" might be a healthy compromise between TDD idealism and newbie-friendliness. Surely the console message could be appropriately worded in these cases to not frighten people ("Warning: this test has not yet been written" or similar).
If the freshest of users run the generator, perhaps code a bit, then run the tests and realize that they pass, I think we'd be holding their hand down Antipattern street...it's at such an early stage that they need to at least think about their testing methodology.
The CSP addition is also a slightly different situation. It's relevant to the actual application code, so even people who don't care about testing at all will be affected by it. For people that are just hacking around and don't care about tests, our suggested change here wouldn't affect them.
@csantero - The idea of pending is slightly different from skip. See qunitjs/qunit#787 for some additional rationale.
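That distinction can be modeled roughly as follows. This is an illustrative sketch with made-up names, not QUnit's actual implementation: a skipped test is deliberately excluded work, while a pending test remains on the outstanding-work list that a reporter could surface in yellow.

```javascript
// Illustrative model of the skip-vs-pending semantics discussed above
// (invented structure, not real QUnit internals):
// - skip:    the author chose not to run this test; nothing is owed.
// - pending: the test is a placeholder awaiting implementation.
const SEMANTICS = {
  skip:    { runs: false, outstandingWork: false },
  pending: { runs: false, outstandingWork: true },
  test:    { runs: true,  outstandingWork: false },
};

// A reporter could use the flag to print a soft "yellow" summary for
// pending tests without failing the build.
function pendingSummary(entries) {
  const todo = entries.filter((e) => SEMANTICS[e.kind].outstandingWork);
  return `${todo.length} test(s) pending implementation`;
}

console.log(pendingSummary([
  { kind: 'skip', name: 'flaky upload test' },
  { kind: 'pending', name: 'it exists' },
  { kind: 'pending', name: 'it renders' },
])); // → '2 test(s) pending implementation'
```

Under this model, skip and pending happen to behave identically at run time (neither executes), and the difference is purely in how the result is reported.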
> I think we'd be holding their hand down Antipattern street...

Confirm.
When you're using RSpec in Rails and generate a controller, you also get a test for it, much like with ember-cli. There, the test is pending and shows up yellow with a default message along the lines of: "Please add tests for the something controller or remove this file. The test file can be found at path."
We feel this is a good way to go about it. Sometimes you just want to generate the thing, but it's good to also get a pending test as a kind of to-do list.
@kimroen Yep, this kind of "to-do" generation is also seen when generating an implementation for an interface in OO languages like Java and C# (ReSharper, for example, will generate the method with a body of "throw new NotImplementedException();").
Another option for consideration: Out of the box, if no assertions are made in the test, the test fails with what I would consider to be an intuitive message:
Expected at least one assertion, but none were run - call expect(0) to accept zero assertions.
These failures still come up "red", but the message wording makes it clear that nothing useful has been done in the test, i.e. one could argue that it's a kind of "pending".
This option wouldn't require the use of any additional add-ons/packages. Concretely, it might simply involve commenting out (or removing...resulting in fewer lines of generated code!) the existing assertion made in the generated tests, something like:
test('it exists', function(assert) {
  // e.g.
  // var service = this.subject();
  // assert.ok(service);
});
This way we're simply exploiting QUnit's "out of the box" (native) handling of an empty test case, which it defines to be a failure, and it dumps an understandable and sensible message in this context.
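The behaviour being exploited here can be mimicked with a toy harness. `runTest` below is a deliberately simplified stand-in written for this sketch, not QUnit's real internals: a test that runs zero assertions fails unless it explicitly declared `expect(0)`.

```javascript
// Toy test harness (illustrative only) mirroring QUnit's rule that a
// test running zero assertions fails unless it opted in via expect(0).
function runTest(name, callback) {
  let assertionCount = 0;
  let expected = null; // set by assert.expect(n)
  const assert = {
    expect(n) { expected = n; },
    ok(value, message) {
      assertionCount += 1;
      if (!value) throw new Error(message || 'assertion failed');
    },
  };
  callback(assert);
  if (expected === null && assertionCount === 0) {
    return {
      name,
      status: 'failed',
      message: 'Expected at least one assertion, but none were run - ' +
        'call expect(0) to accept zero assertions.',
    };
  }
  if (expected !== null && assertionCount !== expected) {
    return {
      name,
      status: 'failed',
      message: `Expected ${expected} assertions, but ${assertionCount} were run.`,
    };
  }
  return { name, status: 'passed' };
}

// A freshly generated test with its assertion commented out fails loudly:
console.log(runTest('it exists', function (assert) {
  // var service = this.subject();
  // assert.ok(service);
}).status); // → 'failed'

// Explicitly opting in to zero assertions passes:
console.log(runTest('intentionally empty', (assert) => assert.expect(0)).status); // → 'passed'
```

This is why simply commenting out the generated assertion is enough to surface a "to-do" without any extra plugin: the zero-assertion failure path already exists.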
Given the latest state of this discussion, does anybody have any objection to me submitting a pull-request to ember-cli that simply comments out the assertions in all the test blueprints (model, service, component, mixin etc)?
Practically this would mean that newly-generated tests would not run assertions in their unedited state. This means that, based on QUnit's default behaviour, these tests would fail with the "Expected at least one assertion..." message (and hence nudge the project's recommended development mindset toward TDD, red-green refactor cycles, etc.).
That would be a very large undertaking, as our test suite actually uses these assertions to test that basic blueprints work.
As mentioned above, if someone wants to introduce the first-class concept of pending, I believe we would settle on that.
Pending is totally practical for all parties: those who prefer TDD and those who do not.
I think this will get more difficult now that some of the test blueprints have been moved to ember-data and the others are supposed to be moved to ember itself.
We are working on closing the ember-cli/rfcs repo in favor of using a single central RFCs repo for everything. This was laid out in https://emberjs.github.io/rfcs/0300-rfc-process-update.html.
Sorry for the troubles, but would you mind reviewing to see if this is still something we need, and if so migrating this over to emberjs/rfcs?