
Comments (2)

searls commented on July 22, 2024

tl;dr I'm sorry, but I think a feature like this would violate the spirit of the workflow that Suture is trying to encourage.

Longer explanation here:

I mention this in passing in my talk about Suture, but a big reason for the record/playback approach is that the recordings encourage their own deletion once the refactoring is complete.
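
For context, the record/playback loop being discussed looks roughly like the sketch below. The method names are placeholders, and the option keys and the SUTURE_RECORD_CALLS variable are recalled from the project's README rather than verified here, so treat the details as assumptions and check the docs before copying.

```ruby
require "suture"

# Placeholder legacy and new implementations; stand-ins for real code.
def legacy_deliver_report(data)
  "report: #{data.sort.join(", ")}"
end

def new_deliver_report(data)
  "report: #{data.sort.join(", ")}"
end

# 1. Cut a seam around the legacy call. With recording enabled
#    (SUTURE_RECORD_CALLS=true, if memory serves), each real invocation's
#    arguments and return value are saved to a local database.
def deliver_report(data)
  Suture.create(:deliver_report, {
    old: method(:legacy_deliver_report),
    args: [data]
  })
end

# 2. Once recordings exist, replay them against the new implementation
#    until every recorded call reproduces its recorded result.
Suture.verify(:deliver_report, {
  subject: method(:new_deliver_report)
})

# 3. When verification passes, delete the recordings and the seam and
#    write intentional unit tests around the new code instead.
```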

Poorly-understood code that gets locked in by characterization tests makes for poorly-understood tests, too. Good tests express the intent of the functionality, but if that intent were well understood, a tool like Suture wouldn't be necessary (this is why I defined legacy code as code we don't understand well enough to change confidently).

Where I've seen people run into trouble with Feathers' approach of writing out your own characterization tests is not merely the cost of writing them, but the aftermath: after the refactor is complete, the natural impulse is to hold onto those tests for all they're worth. Any improvement to test coverage is seen as a huge win, but in practice these are very bad tests to keep around forever, for several reasons:

  • Seams are often cut at arbitrary halfway points in a call stack, so characterization tests neither exercise the system the way a real user would (full-stack integration) nor offer the control and feedback of unit tests.
  • As mentioned, these tests are blind to the purpose of the code beneath them. If the goal is to lock in the current behavior, then whenever these tests fail (due to intended changes), a costly investigation is needed to figure out whether the breaking change was "good" or not. Because characterization tests can't encode any judgment about what the code "should" be doing, they dramatically increase the carrying cost of future change, and we don't want that either.
  • Keeping characterization tests around discourages us from writing fresh, thoughtful, intention-encoding unit tests around the newly refactored code, and that's no good, since failing to encode that kind of understanding is how the old code became legacy in the first place.


wadestuart commented on July 22, 2024

I totally see your perspective. Where I was coming from is that part of the method you're supporting really needs the developer to stand up tests that live on after the refactor happens, and when coming up against unknown code, those characterization data tests seem valuable both in the automated verify state and as material for standing up and then paring down a test group. I agree that once you refactor to a state that passes all of the verification tests, you probably have a good enough idea of the workings of the code to write your own new test code. But being able to read through those verification tests seems like it could expose test surface area that you re-implemented without even realizing it.

Consider oldcode -> newcode, where oldcode is order-sensitive but newcode just happens to retain the same ordering (without intent, as an unknown side effect). When writing the new hand-written tests that live on, you may not even realize that the stable order was required and fail to expand your tests to cover it. Whereas if you can easily inspect the verification tests visually (not just the failing ones), you may spot this requirement just from cues in the data (see the example sketched just below).
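
To make that ordering scenario concrete, here is a minimal illustration in plain Ruby/Minitest. This is not Suture's API; old_tags and new_tags are hypothetical stand-ins, and the second test only mimics what a recorded input/output pair effectively asserts.

```ruby
require "minitest/autorun"

# Hypothetical legacy method: returns tags in a specific order (an implicit contract).
def old_tags
  ["alpha", "beta", "gamma"]
end

# Hypothetical refactored method: happens to preserve the order today,
# but nothing in its implementation intends to.
def new_tags
  ["alpha", "beta", "gamma"].uniq
end

class TagsTest < Minitest::Test
  # A fresh hand-written test that only checks membership. It keeps passing
  # even if new_tags stops preserving order later.
  def test_contains_expected_tags
    assert_equal ["alpha", "beta", "gamma"].sort, new_tags.sort
  end

  # What a recorded characterization expectation effectively asserts: the
  # exact value, order included. Reading the recorded data makes the ordering
  # requirement visible so it can be promoted into an intentional test.
  def test_matches_recorded_output
    assert_equal ["alpha", "beta", "gamma"], new_tags
  end
end
```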

Maybe exposing usable tests is not the answer, since that could run counter to the proposed method (they'd be kept long-term). But what about some exportable view of the input and output data structures used by the characterization tests, so you don't have to fish through the sqlite database? (A rough sketch of doing it by hand follows.)
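
Absent a built-in export, one could fish the data out directly. The sketch below opens the SQLite file (the db/suture.sqlite3 path is an assumption; adjust it to wherever your project keeps the recordings) and dumps every table, deliberately without assuming anything about Suture's schema.

```ruby
require "sqlite3"

# Assumed location of the recordings database; change to match your project.
DB_PATH = "db/suture.sqlite3"

db = SQLite3::Database.new(DB_PATH, results_as_hash: true)

# Discover the tables rather than guessing the schema, then dump every row
# so recorded inputs/outputs can be inspected (or redirected to a file).
tables = db.execute(
  "SELECT name FROM sqlite_master WHERE type = 'table' AND name NOT LIKE 'sqlite_%'"
).map { |row| row["name"] }

tables.each do |table|
  puts "== #{table} =="
  db.execute("SELECT * FROM #{table}").each do |row|
    puts row.inspect
  end
end
```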

The thing I am reacting to is that the power of Feathers' characterization-harness method comes not only from the harness but also from the gradual exposure to the behavior you get by building those tests by hand. Black-boxing/automating that build, while saving time and effort, also reduces that exposure for the developer. With some way to view both the pass and fail states, it seems like some of that value could be recovered.

