Comments (25)
I've pushed the WIP branch with the `WaitForNextChange` (renamed it from `WaitForNextUpdate`). You can check it out here: https://github.com/egil/razor-components-testing-library/blob/blazor-tests/src/Rendering/RenderedFragmentBase.cs#L110-L123

That will also close the #29 issue.
`WaitForNextChange` is good, surely better than `WaitForNextUpdate`. However, it is not generic enough. It does not help much when `AsyncEventHandlerComponent` is changed like this:
```razor
@using System.Threading.Tasks

<div>
    <span id="state">@state</span>
    <button id="tick" @onclick="Tick">Tick</button>
    <button id="tock" @onclick="Tock" disabled="@(!tockEnabled)">Tock</button>
</div>

@code
{
    bool tockEnabled = false;
    TaskCompletionSource<object> _tcs;
    string state = "Stopped";

    Task Tick(MouseEventArgs e)
    {
        if (_tcs == null)
        {
            _tcs = new TaskCompletionSource<object>();
            state = "Started";
            tockEnabled = true;
            return _tcs.Task.ContinueWith((task) =>
            {
                state = "Stopped";
                _tcs = null;
            });
        }
        return Task.CompletedTask;
    }

    void Tock(MouseEventArgs e)
    {
        tockEnabled = false;
        _tcs.TrySetResult(null);
    }
}
```
In this case the Tock button is enabled only when started. And in this particular case, there are not only two renders, but also two changes.
from bunit.
Also, I am wondering if we should be using `.GetAwaiter().GetResult()` instead of `.Wait()`. I'll play a little with the code and see what makes the most sense.
I don't know what the advantage of `.GetAwaiter().GetResult()` is. `GetAwaiter` captures the current synchronization context, but it is not used afterwards. So I think there is no difference between `Wait` and `GetAwaiter`.
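For reference, the two calls behave the same with respect to scheduling when blocking, but they differ in how failures surface. A minimal sketch of that difference:

```csharp
using System;
using System.Threading.Tasks;

// A task that has already faulted, standing in for any failing renderer work.
Task faulted = Task.FromException(new InvalidOperationException("boom"));

// .Wait() wraps the failure in an AggregateException.
try { faulted.Wait(); }
catch (AggregateException ex)
{
    Console.WriteLine(ex.InnerException!.GetType().Name); // InvalidOperationException
}

// .GetAwaiter().GetResult() rethrows the original exception directly,
// which gives cleaner stack traces and simpler catch blocks in tests.
try { faulted.GetAwaiter().GetResult(); }
catch (InvalidOperationException ex)
{
    Console.WriteLine(ex.Message); // boom
}
```

So for a test library, `.GetAwaiter().GetResult()` is often preferred purely for the unwrapped exception, not for any scheduling difference.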
@duracellko just wanted you to know that I've started #5, and with that I'm experimenting with having the core renderers be async by default. Then the test contexts can choose whether to expose the async API or not.
My comments on Open question 1

Advantages of blocking operations:

- Tests do not need to be defined as asynchronous `public async Task MyComponentTest()` (simply `public void MyComponentTest()` is sufficient).

Disadvantages:

- With a `Task` object, the test developer has more options for how to await the operation; blocking takes those options away.
- The methods may deadlock when executed from the Razor `Dispatcher` context.
My comments on Open question 2

Advantages of running in a single Razor `Dispatcher` context:

- The test environment is simpler when using a single thread only. This may mitigate flaky tests or make tests easier to debug, especially for poorly designed tests, for example when tests directly access component properties or methods.
- Possibly higher performance, because there is no need to switch between threads/contexts.

Advantages of running tests in two synchronization contexts:

- Two synchronization contexts better simulate the real environment, where UI events or HTTP events come from a different browser thread or even a different process.
Thanks for this @duracellko.

A few points I would like us to consider in this discussion are:

- The goal should be to enable users to fall into a pit of success, meaning that the most obvious course of action should also be the one that we expect the users to pick.
- At this point, performance is secondary. I would like to make the library much faster, so a typical test runs as fast as possible, but usability has priority now.
- At present, I consider forcing async in all code that interacts with or invokes the renderer to be a worse user experience than not. When we talk about making `DispatchAndAssertNoSynchronousErrors` async, most of the interactions with the renderer will also become async by default.
As for your two pros/cons for each of the questions, I generally agree, but I also think we need to actually build an async version of the library to really be able to compare and contrast, have performance measurements, additional tests, etc. Otherwise it's an (informed) guessing game.
By the way, I would like to throw another discussion point into the mix. Currently, the "Event Dispatch Extensions" all call `GeneralEventDispatchExtensions.TriggerEventAsync`, which is indeed async all the way to the renderer, and doesn't use `DispatchAndAssertNoSynchronousErrors`. However, there are non-async variations of all async dispatch extensions, and I am not sure why there is no need to block in those. E.g. the `Click` dispatcher looks like this:

```csharp
public static void Click(this IElement element, MouseEventArgs eventArgs)
    => _ = ClickAsync(element, eventArgs);
```

I.e. the task is not being awaited, just discarded. Why does that work? Why does the renderer always complete rendering after an unawaited `Click` call? E.g. like in this example: https://github.com/egil/razor-components-testing-library/blob/master/sample/tests/Tests/Pages/CounterTest.cs#L34-L53
After having a better look at `RendererSynchronizationContext`, I see that dispatched tasks are executed synchronously when possible. Tasks are enqueued only when another task is running at that moment. That means that in most test scenarios everything would run on a single thread.
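To illustrate the scheduling idea described here, below is a deliberately simplified sketch, not the actual `RendererSynchronizationContext` source: posted work runs synchronously when nothing else is in flight, and is queued otherwise (the real implementation also handles async work items and a release/drain race that this sketch ignores).

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// In a single-threaded test there is never a competing worker,
// so every Post runs synchronously on the calling thread.
var dispatcher = new InlineFirstDispatcher();
bool ranInline = false;
dispatcher.Post(() => ranInline = true);
Console.WriteLine(ranInline); // True

// Illustrative type: run posted work inline when idle, queue it otherwise.
class InlineFirstDispatcher
{
    private int _busy;
    private readonly ConcurrentQueue<Action> _queue = new ConcurrentQueue<Action>();

    public void Post(Action work)
    {
        // Try to become the running worker; on success, execute inline.
        if (Interlocked.CompareExchange(ref _busy, 1, 0) == 0)
        {
            try
            {
                work();
                // Drain anything enqueued by competing threads meanwhile.
                while (_queue.TryDequeue(out var queued)) queued();
            }
            finally
            {
                Interlocked.Exchange(ref _busy, 0);
            }
        }
        else
        {
            // Another task is running: enqueue for it to drain.
            _queue.Enqueue(work);
        }
    }
}
```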
Interesting. Good find. So that is likely why we have `WaitForNextRender`: to handle the cases where the tasks are not executed synchronously.

Is my understanding correct then that when `Wait()` is called in `DispatchAndAssertNoSynchronousErrors`, it actually doesn't interrupt the thread, as it just receives a completed task right away and continues?
> Is my understanding correct then that when `Wait()` is called in `DispatchAndAssertNoSynchronousErrors`, it actually doesn't interrupt the thread, as it just receives a completed task right away and continues?

Yes, it is very likely.
So @duracellko, do you think it is worth the effort to investigate this further for now? Maybe build a prototype and see how it behaves?
> So @duracellko do you think it is worth the effort to investigate this further for now? Maybe build a prototype and see how it behaves?

Well, before investigating, I think it's important to define goals and principles. For example, we can invest in a prototype to verify performance, if that is the main goal. Although I don't think high performance should be the main goal of a testing framework.

Also, after finding out how `RendererSynchronizationContext` schedules work, there will not be much difference between having a single synchronization context or two separate ones.

However, the question is how much the framework should follow the Inversion of Control pattern. That's the pattern that async/await and the `Task` object actually implement. Before the `Task` object, it was usually the provider's decision how to wait for the end of an operation, and usually it was blocking. With the `Task` object, the provider gives control to the client, and the client can decide how to wait for the end of the operation: it can block, schedule another operation afterwards, or cancel the operation.

And of course, with more control comes more responsibility, so the client implementation can get more complicated. However, thanks to async/await in C#, this complexity can usually be very well hidden.

So the main question is how much control the test framework should give to the client. And this is not related only to task awaiting. For example, we also had a discussion about `ComponentTestFixture` as a base class for xUnit tests, and that it may not be flexible enough to adapt to other test frameworks. And there is an item in the backlog, #5, to extract the base functionality to a separate library. So I can imagine creating a base library that gives a lot of control to the client, and then test-framework-specific libraries, which would work as facades with a simpler API.
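The three client options mentioned above (block, continue, cancel) can be sketched in a few lines; `SaveAsync` is a hypothetical stand-in for any `Task`-returning operation a provider might expose:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical stand-in for any Task-returning operation a provider exposes.
static Task SaveAsync(CancellationToken ct = default) => Task.Delay(10, ct);

// Option 1: block. The caller decides to wait synchronously
// (the pre-Task style, but now it is the client's choice, not the provider's).
SaveAsync().GetAwaiter().GetResult();
Console.WriteLine("blocked until done");

// Option 2: schedule another operation afterwards instead of blocking.
await SaveAsync().ContinueWith(_ => Console.WriteLine("saved, then continued"));

// Option 3: cancel the operation.
using var cts = new CancellationTokenSource();
cts.Cancel();
try { await SaveAsync(cts.Token); }
catch (TaskCanceledException) { Console.WriteLine("cancelled"); }
```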
Also, I can create a prototype of using the test framework with some integration tests, for example not mocking `HttpClient` but doing real HTTP requests. I think `WaitForNextRender` is not the best API in such a case, because it may be unpredictable whether the next render has already happened or not.
> So @duracellko do you think it is worth the effort to investigate this further for now? Maybe build a prototype and see how it behaves?

> Well, before investigating, I think it's important to define goals and principles. For example, we can invest in a prototype to verify performance, if that is the main goal. Although I don't think high performance should be the main goal of a testing framework.

I agree. Let's take performance out of the equation for now.

> Also, after finding out how RendererSynchronizationContext schedules work, there will not be much difference between having a single synchronization context or two separate ones.

I agree. Although I like that we have two, since, as you said previously, that mimics the way production code works.

> However, the question is how much the framework should follow the Inversion of Control pattern. That's the pattern that async/await and the `Task` object actually implement. Before the `Task` object, it was usually the provider's decision how to wait for the end of an operation, and usually it was blocking. With the `Task` object, the provider gives control to the client, and the client can decide how to wait for the end of the operation. And of course, with more control comes more responsibility, so the client implementation can get more complicated. However, thanks to async/await in C#, this complexity can usually be very well hidden.

Great observations. In general, I do not mind giving responsibility to the users, if they get a benefit from it. Otherwise I prefer to keep things simple and require as little typing as possible from the user.

> So the main question is how much control the test framework should give to the client. And this is not related only to task awaiting. For example, we also had a discussion about `ComponentTestFixture` as a base class for xUnit tests, and that it may not be flexible enough to adapt to other test frameworks. And there is an item in the backlog, #5, to extract the base functionality to a separate library.

You are correct, the `ComponentTestFixture` class does indeed depend on how xUnit works. As long as #5 is not being worked on, I think that is just fine. The obvious solution would be to create an xUnit-specific project that xUnit fans can use, and one for each of the other test frameworks as well. The others would have their own variant of `ComponentTestFixture`.

> So I can imagine creating a base library that gives a lot of control to the client, and then test-framework-specific libraries, which would work as facades with a simpler API.

So that might be an actual good reason to go async. If we have a `testlib.core` version of the lib that e.g. `testlib.xunit` depends on, then `testlib.xunit` could use `.Wait()` where it makes sense. And e.g. a `testlib.nunit` could do what makes most sense in that environment.
> Also, I can create a prototype of using the test framework with some integration tests, for example not mocking `HttpClient` but doing real HTTP requests. I think `WaitForNextRender` is not the best API in such a case, because it may be unpredictable whether the next render has already happened or not.

That would be an interesting scenario to verify. If you do, the integration tests that perform real HTTP calls should probably be in their own testing project, and ideally, the HTTP calls they perform could go to the test host itself or some localhost API, so that there will never be an issue running the integration testing lib without an internet connection.
I created a prototype with integration testing:

https://github.com/duracellko/razor-components-testing-library/tree/feature/integration-testing

It introduces a `WaitForRenderAsync` method that takes a predicate and waits until the component is rendered such that the predicate is true.
Great. It will be interesting to see what use cases it solves.

I am currently working on incorporating the E2E tests from the aspnetcore repo, to get a lot more rendering cases covered. One of the first causes trouble with the current beta 5.1. It might be of interest for you to test against as well.

It's the last `Click()` that is the challenge. It results in two renders, where the last one has the changes we want to assert against. In my working branch I have added a `cut.WaitForNextUpdate(() => cut.Find("#tock").Click());` that solves it, but I wonder if you have another solution that also works (I'll push my branch later).
Test:

```csharp
[Fact]
public void CanTriggerAsyncEventHandlers()
{
    // Initial state is stopped
    var cut = RenderComponent<AsyncEventHandlerComponent>();
    var stateElement = cut.Find("#state");
    Assert.Equal("Stopped", stateElement.TextContent);

    // Clicking 'tick' changes the state, and starts a task
    cut.Find("#tick").Click();
    Assert.Equal("Started", cut.Find("#state").TextContent);

    // Clicking 'tock' completes the task, which updates the state.
    // This click causes two renders, thus something is needed to await here.
    cut.Find("#tock").Click();
    Assert.Equal("Stopped", cut.Find("#state").TextContent);
}
```
And the component under test:

```razor
@using System.Threading.Tasks

<div>
    <span id="state">@state</span>
    <button id="tick" @onclick="Tick">Tick</button>
    <button id="tock" @onclick="Tock">Tock</button>
</div>

@code
{
    TaskCompletionSource<object> _tcs;
    string state = "Stopped";

    Task Tick(MouseEventArgs e)
    {
        if (_tcs == null)
        {
            _tcs = new TaskCompletionSource<object>();
            state = "Started";
            return _tcs.Task.ContinueWith((task) =>
            {
                state = "Stopped";
                _tcs = null;
            });
        }
        return Task.CompletedTask;
    }

    void Tock(MouseEventArgs e)
    {
        _tcs.TrySetResult(null);
    }
}
```
Actually, this may be solved by `WaitForRenderAsync`.

In unit tests it is fine to assume when a render occurs; it may even be a goal of the test to verify that, because there should not be any side effects when the component is isolated. Therefore it is fine for a test to expect one or two renders and then verify the rendering result.

However, with integration tests it's not that simple. The component is not isolated, and tests should not test when rendering occurs, only whether the expected result is rendered after a specified time.

Selenium solves that in end-to-end tests by making query methods more resilient: if the requested element is not found, it tries again until a timeout elapses. `WaitForRenderAsync` does the same thing, and it is more flexible, because the query can be a function. It is also a bit more reliable, because it can connect directly to the rendering "events" of ASP.NET Core Components.
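The resilient-query idea can be sketched as a small helper. The names here (`WaitForAsync`, `nextRender`) are illustrative, not the prototype's actual API: after each render notification the predicate is re-evaluated until it passes or the timeout elapses.

```csharp
using System;
using System.Threading.Tasks;

// Sketch: wait until predicate() is true, re-checking after every render
// notification, but never past the deadline.
static async Task WaitForAsync(Func<bool> predicate, Func<Task> nextRender, TimeSpan timeout)
{
    var deadline = DateTime.UtcNow + timeout;
    while (!predicate())
    {
        var remaining = deadline - DateTime.UtcNow;
        if (remaining <= TimeSpan.Zero)
            throw new TimeoutException("Predicate did not pass before the timeout.");

        // Wake up on the next render "event", or when time runs out.
        await Task.WhenAny(nextRender(), Task.Delay(remaining));
    }
}

// Demo: each simulated "render" increments a counter after a short delay;
// the predicate passes once two renders have happened.
int renders = 0;
await WaitForAsync(
    predicate: () => renders >= 2,
    nextRender: async () => { await Task.Delay(5); renders++; },
    timeout: TimeSpan.FromSeconds(1));
Console.WriteLine(renders); // 2
```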
Btw. the integration tests in the example are inspired by https://github.com/duracellko/planningpoker4azure, which are in turn inspired by the end-to-end tests in ASP.NET Core Components you just mentioned.
Interesting. I'll have a closer look later.

My `WaitForNextUpdate` doesn't care about how many renders occur; it waits until a render produces new markup.
I've pushed the WIP branch with the `WaitForNextChange` (renamed it from `WaitForNextUpdate`). You can check it out here: https://github.com/egil/razor-components-testing-library/blob/blazor-tests/src/Rendering/RenderedFragmentBase.cs#L110-L123

That will also close the #29 issue.
> ... there are not only 2 renders, but also 2 changes.

Great test case @duracellko. We definitely need more of these out-of-the-ordinary test scenarios covered.

I like your idea about using a predicate; it is a more general and flexible way of blocking a test. But let me suggest an extension to that idea.

- Since a RenderedFragment/RenderedComponent (through `RenderedFragmentBase`) knows when it has been changed, we do not need to loop until the predicate passes. We can simply wait until the next change, check whether the new state of the rendered fragment matches the predicate, and if not, wait for the next change, and so on, until a timer hits.
- With that in mind, I think it would be better to simply expose a `public Task NextChange { get; }` in `IRenderedFragment`. That will allow us to create different extensions that utilize this knowledge. Do we also need a `public Task NextRender { get; }`, to allow us to assert that a render has not caused a change?
- Instead of `WaitForRenderAsync(predicate)` or `WaitForNextChange(renderTrigger)`, I propose a simple assertion helper. Naming is hard, but let's call it `ShouldPass` for now.
  - It should take an assertion/verification action as input, e.g. `cut => Assert.Equal("Stopped", cut.Find("#state").TextContent)`. When the assertion is invoked, it will either pass or throw an exception.
  - The assertion is invoked initially and after every `NextChange`, until the `timeout` runs out.
  - If the `timeout` runs out, the last exception produced by invoking the assertion is rethrown to the caller. That way, they do not get a timeout exception; they get an exception from their assertion, which I think might make more sense.

Here is some pseudo code for it (written in GitHub, not tested at all):
```csharp
public static T ShouldPass<T>(this T cut, Action<T> verificationAction, TimeSpan? timeout = null)
    where T : IRenderedFragment
{
    TimeSpan timeLeft = Debugger.IsAttached ? Timeout.InfiniteTimeSpan : timeout ?? TimeSpan.FromSeconds(1);
    Exception? verificationFailure = null;
    var stopwatch = new Stopwatch();

    TryVerification();
    while (verificationFailure is { } && timeLeft > TimeSpan.Zero)
    {
        // Record the time spent waiting for the next change.
        stopwatch.Restart();
        cut.NextChange.Wait(timeLeft);
        timeLeft -= stopwatch.Elapsed;

        if (timeLeft > TimeSpan.Zero)
        {
            TryVerification();
        }
    }

    if (verificationFailure is { })
    {
        // Rethrow the last assertion failure with its original stack trace.
        ExceptionDispatchInfo.Capture(verificationFailure).Throw();
    }

    return cut;

    void TryVerification()
    {
        try { verificationAction(cut); verificationFailure = null; }
        catch (Exception e) { verificationFailure = e; }
    }
}
```
I think that will lead to a very neat assert pattern that will complete as soon as possible, e.g.:

```csharp
[Fact]
public void CanTriggerAsyncEventHandlers()
{
    // Initial state is stopped
    var cut = RenderComponent<AsyncEventHandlerComponent>();
    var stateElement = cut.Find("#state");
    Assert.Equal("Stopped", stateElement.TextContent);

    // Clicking 'tick' changes the state, and starts a task
    cut.Find("#tick").Click();
    Assert.Equal("Started", cut.Find("#state").TextContent);

    // Clicking 'tock' completes the task, which updates the state.
    // This click causes two renders, thus something is needed to await here.
    cut.Find("#tock").Click();
    cut.ShouldPass(cut => Assert.Equal("Stopped", cut.Find("#state").TextContent));
    // or
    cut.ShouldPass(cut => Assert.Equal("Stopped", cut.Find("#state").TextContent), TimeSpan.FromMilliseconds(200));
}
```
I've never been a fan of the `WaitForNextRender`/`WaitForNextChange` pattern where we pass in a method that causes a render. It is not super clear to the users what's going on. Using something like `ShouldPass` is clearer, I think. We trigger whatever render action before the call, e.g. `cut.Find("#tock").Click();`, and then we check if the state is as we expect. It is a little similar to `Assert.Collection` or `ShouldAllBe` from Shouldly.

The trick will be to get the naming right, which I am not sure it is now :)

What do you think?

UPDATE: The name should probably have something in it that indicates it is meant for dealing with asynchronous rendering scenarios.
Good idea. I will try to create a PR for that. I will think about the naming.

Regarding point 2: I would be careful about exposing `NextChange` and/or `NextRender` directly. I will try to analyze the access flows, but from a very brief look today, I suspect that the backing fields for these properties should be `volatile`, and I would be careful about exposing volatile values directly. Let me have a better look at it.

I can imagine `NextRender` (or a related method) may still be useful. For example, a test waits for the next change instead of the next render, and there is a bug in the application so that no change is rendered. The tests fail and find the bug as expected; however, they fail on timeout. This makes the tests much slower. Also, in a real scenario this may be very annoying. First, a TimeoutException is always annoying, because it's hard to tell whether there is a bug in the application or in the testing infrastructure. Second, this may clog CI queues when tests are run in CI pipelines.

For this reason `NextRender` should be preferred in unit tests. This way a test includes less action in the "Act" part and more in the "Assert" part, which produces a better test report.

Actually, the `ShouldPass` method (or however we call it) has the same disadvantage. However, I don't think there is a much better way. All UI testing frameworks use a similar principle. I guess if there were a better way, someone smart would already have introduced it.
> I would be careful about exposing NextChange and/or NextRender directly.

I do not think it will be a problem, but please validate this. Just to be sure we are talking about the same thing, the code would be something like this:

```csharp
private TaskCompletionSource<object?> _nextRender;
private TaskCompletionSource<object?> _nextChange;

public Task NextRender => _nextRender.Task;
public Task NextChange => _nextChange.Task;
```

So the `TaskCompletionSource` is not exposed, only the `Task` used to communicate that it is done. I believe that is safe, and also one of the purposes of `TaskCompletionSource`.

> I can imagine NextRender (or a related method) may still be useful. For example, a test waits for the next change instead of the next render, and there is a bug in the application so that no change is rendered. The tests fail and find the bug as expected; however, they fail on timeout. This makes the tests much slower. Also, in a real scenario this may be very annoying. First, a TimeoutException is always annoying, because it's hard to tell whether there is a bug in the application or in the testing infrastructure. Second, this may clog CI queues when tests are run in CI pipelines.

I agree. I think `NextRender` is, compared to `NextChange`, much less useful, especially since the exception is a timeout, not an assert exception.

> For this reason NextRender should be preferred in unit tests. This way a test includes less action in the "Act" part and more in the "Assert" part, which produces a better test report.

Don't you mean "For this reason NextChange should be preferred in unit tests"? Or did I misunderstand you?

> All UI testing frameworks use a similar principle. I guess if there were a better way, someone smart would already have introduced it.

Indeed. And I think we have an advantage here, at least over Selenium, in that we know when a change has happened, so there is less potential wait time.

About NextRender/NextChange and ShouldPass, let's move that discussion to the issue tracking it: #29
> I would be careful about exposing NextChange and/or NextRender directly.
>
> I do not think it will be a problem, but please validate this. [...] So the `TaskCompletionSource` is not exposed, only the `Task` used to communicate that it is done. I believe that is safe, and also one of the purposes of `TaskCompletionSource`.

Yes, I understand. I am just not sure whether the code should rather be like this:

```csharp
private volatile TaskCompletionSource<object?> _nextRender;
private volatile TaskCompletionSource<object?> _nextChange;

public Task NextRender => _nextRender.Task;
public Task NextChange => _nextChange.Task;
```
> For this reason NextRender should be preferred in unit tests. This way a test includes less action in the "Act" part and more in the "Assert" part, which produces a better test report.
>
> Don't you mean "For this reason NextChange should be preferred in unit tests"? Or did I misunderstand you?

I meant NextRender. Example:
Component:

```razor
<!-- This is a bug. It should be bound to @counter -->
<p>@initialValue</p>
<button @onclick="Counter">Counter</button>

@code
{
    private readonly int initialValue = 1;
    private int counter;

    protected override void OnInitialized()
    {
        base.OnInitialized();
        counter = initialValue;
    }

    private void Counter()
    {
        counter++;
    }
}
```
Tests:

```csharp
public void Test01()
{
    var cut = RenderComponent<MyComponent>();
    cut.WaitForNextRender(() => cut.Find("button").Click());
    cut.Find("p").TextContent.ShouldBe("2");
}

public void Test02()
{
    var cut = RenderComponent<MyComponent>();
    cut.WaitForNextChange(() => cut.Find("button").Click());
    cut.Find("p").TextContent.ShouldBe("2");
}
```
I like Test01 more than Test02.

Test01 fails immediately (in a few ms) and with an explanatory error: "Expected value was '2', but actual value was '1'."

Test02 fails after the timeout (1 second) and with the error: "There was no change in time out." And that is not a very good starting point for an investigation.
I'll close this, as it seems it has served its purpose.