revery-ui / revery
:zap: Native, high-performance, cross-platform desktop apps - built with Reason!
Home Page: https://www.outrunlabs.com/revery/
License: MIT License
It seems I can't have multiple strings in a single <text> component like this:
<text style=textInputStyle> {state.comment} {"|"} </text>
This gives me the following runtime error:
Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")
Would it be possible to catch this with types?
A to-do list is the canonical example for UI frameworks, so we should have one here too.
There are a few things we need first:
This was reported by @jordwalke - I believe it was on a MacBook Pro. On initial run of the examples, the framebuffer isn't matching the window size:
The autocomplete should be taking up the entire space of the window (there should be no 'cornflower blue').
After resize, the rendering correctly fills the window. This suggests there is a mismatch between how we are determining the framebuffer size on initial creation and on resize.
We have a small Color API to parse / work with colors - it's very bare-bones at the moment, though!
One important feature we need for it is creating colors from hex-values. Some examples of cases we should be able to parse:
#FFF (3 element, rgb)
#FFFA (4 element, rgba)
#01010F (6 element, rgb)
#0101CC0F (8 element, rgba)
It'd be fine to use a library for this (I'm sure there is one existing), but the main thing is just that we can parse these hex colors and get the right output.
An API like Color.hex("#FFF") that returns a Color.t would be perfect. We should have tests covering this case, as well.
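A rough sketch of the parsing logic, in OCaml syntax (the record shape and the `hex` name here are assumptions for illustration, not the actual Revery API):

```ocaml
(* Hypothetical Color.t with normalized float channels *)
type t = { r: float; g: float; b: float; a: float }

let hex str =
  (* Drop a leading '#' if present *)
  let s =
    if String.length str > 0 && str.[0] = '#'
    then String.sub str 1 (String.length str - 1)
    else str
  in
  (* Expand shorthand: each digit becomes a pair, e.g. "FFA" -> "FFFFAA" *)
  let s =
    match String.length s with
    | 3 | 4 ->
      String.concat ""
        (List.init (String.length s) (fun i -> Printf.sprintf "%c%c" s.[i] s.[i]))
    | _ -> s
  in
  (* Read the i-th two-digit channel and normalize to [0, 1] *)
  let channel i =
    float_of_int (int_of_string ("0x" ^ String.sub s (i * 2) 2)) /. 255.0
  in
  match String.length s with
  | 6 -> { r = channel 0; g = channel 1; b = channel 2; a = 1.0 }
  | 8 -> { r = channel 0; g = channel 1; b = channel 2; a = channel 3 }
  | _ -> failwith "Color.hex: invalid hex color"
```

Because the shorthand digits are doubled before parsing, `#FFF` and `#FFFFFF` produce the same color.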
Rendering text is currently very expensive in Revery, because it involves lots of context-switches to jump between shaders (this is made even worse by the fact that we currently regenerate textures every frame - but that's a separate issue).
The text rendering could be significantly improved by having a texture atlas that contains all the rendered glyphs - then, we could render a line of text in a single pass (or at least, a more minimal set of passes), as opposed to the situation today - where we always render each quad / texture in a single pass.
There's an excellent TextureAtlas implementation by @cryza in Oni here: https://github.com/onivim/oni/blob/master/browser/src/Renderer/WebGLRenderer/TextRenderer/GlyphAtlas/GlyphAtlas.ts that could be useful here!
I can build, but I can't run the examples
jwalkes-MacBook:revery jwalke$ _build/install/default/bin/Autocomplete
- Loading font: Roboto-Regular.ttf
Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")
To have a useful UI, we should have parity with what the browser supports for the border styles: https://developer.mozilla.org/en-US/docs/Web/CSS/border
We should decide on our API surface. Perhaps something like this?
<view style={Style.make(~border=Border(Colors.red, 2), ())} />
<view style={Style.make(~borderHorizontal=Border(Colors.blue, 1), ())} />
For the initial implementation, only implementing the solid style seems reasonable. We can always add other styles down the road - supporting border-width and border-color is important, though!
There are a few things we'll have to do to support this:
Part 1: Style properties
Style.re - Add the border properties to our style type: https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/src/UI/Style.re#L25
Style.re - Add the properties to our make function: https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/src/UI/Style.re#L36
Style.re - Add the sizes to our toLayoutNode function, so that layout handles it correctly: https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/src/UI/Style.re#L103
Part 2: Rendering quads
For the core border rendering, we need to render a quad that spans the width (in the case of border-top/border-bottom) or the height (in the case of border-left or border-right), and positioned correctly in relationship to the node's layout.
This logic can live in ViewNode, and it's quite similar to the logic we have today for drawing the background - the only real difference is the color and dimensions of the quad.
Part 3: Rendering Junctions
The trickiest piece of this work that will need to be handled in the rendering is where borders meet up. For example, if I have a top-border and a left-border, we potentially need to render triangles where they meet. This is dependent on us having a triangle primitive in #120
For example, if I had a div like this:
<div style="border-left: 10px solid red; border-top: 20px solid yellow; width: 100px; height:100px;">
It would look something like this:
Once we get Part 3 completed, though, we'll have a pretty usable border story!
Once we have mouse support, we need a way to detect when the cursor is over an element, for the purposes of implementing the following events:
onMouseOver
onMouseOut
This is critical for the button in #37
Specifically, we should add a hitTest method to our Node hierarchy: https://github.com/bryphe/revery/blob/master/src/UI/Node.re
This would take a point (x, y) and return true if the point is inside the node, false otherwise.
An alternative implementation would be to have each node expose a bounding rectangle or bounding geometry that is then compared - but this is less flexible. With each node implementing a hitTest method, there can be arbitrary geometry support, and transforms are easily supported (ie, for a box that has a rotation applied - we can apply the inverse transformation to the hit test point, and do a simple check if the point is in the box).
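A minimal sketch of the idea in OCaml syntax (names are hypothetical): the basic hitTest reduces to a point-in-rect check, and a rotated node can inverse-rotate the point about its center before doing the same check:

```ocaml
type rect = { x: float; y: float; width: float; height: float }

(* Axis-aligned containment test *)
let hit_test_rect r (px, py) =
  px >= r.x && px <= r.x +. r.width
  && py >= r.y && py <= r.y +. r.height

(* For a box rotated by [angle] about its center: apply the inverse
   rotation to the point, then do the simple axis-aligned check. *)
let hit_test_rotated r ~angle (px, py) =
  let cx = r.x +. (r.width /. 2.0) and cy = r.y +. (r.height /. 2.0) in
  let dx = px -. cx and dy = py -. cy in
  let c = cos (-.angle) and s = sin (-.angle) in
  hit_test_rect r (cx +. (dx *. c) -. (dy *. s), cy +. (dx *. s) +. (dy *. c))
```

The same pattern extends to arbitrary transforms: invert the node's transform matrix, apply it to the point, and test against untransformed geometry.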
On Windows in WSL, I am getting these errors after running esy install then esy build:
https://gist.github.com/kyldvs/d25559a3f880cd2ad474bbbf0d1d875c
Managing focus for text input is critical for useful interactive applications. In general, clicking on a text input should grant it focus. In addition, other elements may be 'focusable' for accessibility, like buttons.
Revery should provide an intuitive, React-like interface for working with focus, that is familiar for web developers using React.
Focus is an inherently stateful concept - for a basic scenario, we can keep track of focus at the node level. Our 'focus manager' could essentially keep track of the focused node via a ref.
Internally, we need to:
- Add a Focus module that keeps track of the actively focused node. When that actively focused node changes, it should dispatch focus and blur events to the respective nodes.
For our Nodes, we need to:
- Add .focus() and .blur() methods. These would be available via the ref introduced in #139
On our JSX side, we need to:
- Add a tabindex field which, for now, is simply a proxy for whether or not an element is focusable. Later, when there is more focus on keyboard accessibility, we can extend this to behave as the browser does (ie, for tab-key flow)
- Add onFocus and onBlur events for nodes. These should be added to our NodeEvents module and dispatched at the proper time.
This lays the groundwork for a simple focus management story. Once we have this in place, we can start 'funneling' the keyboard input to the focused element. Key for implementing apps that need forms!
Test cases:
- Calling .focus() on a node without tabindex should not change focus
- Calling .blur() on a node with tabindex should cause no node to have focus
- Calling .focus() on a node with tabindex set should change focus (onBlur is triggered for the previous element, and onFocus is triggered for the new element)
We should make it easy to define Animations - https://facebook.github.io/react-native/docs/animations is a good starting point. I wonder if we can improve on the API in the Reason world, too. For example, we shouldn't need Animated-aware components - we should be able to apply these to any components.
Issue: If a border is specified, but no background, the border does not render.
For example, for this component:
module Test = (
val component((render, ~children, ()) =>
render(
() => {
let borderStyle =
Style.make(
~border=Style.Border.make(~width=10, ~color=Colors.white, ()),
(),
);
let innerStyle =
Style.make(
~backgroundColor=Colors.red,
~width=100,
~height=100,
(),
);
<view style=borderStyle>
<view style=innerStyle />
</view>
},
~children,
)
)
);
I actually hit this in #138 , which is why I added a pretty transparent background here:
https://github.com/bryphe/revery/blob/d05c343e67c90b81fdc19da74fbf59b5e6a87536/examples/Bin.re#L113
The issue seems to be this check: https://github.com/bryphe/revery/blob/d05c343e67c90b81fdc19da74fbf59b5e6a87536/src/UI/ViewNode.re#L241
Today, we don't have a simple way to create a Triangle primitive to use in rendering the UI.
We should create a wrapper similar to the Quad primitive: https://github.com/bryphe/revery/blob/master/src/Geometry/Quad.re
...but for creating Triangles. This is important for implementing border-styling in #119
Now that we have text rendering in WebGL (thanks @jchavarri !), I'm thinking it makes sense to publish our WebGL examples to a website - either GH pages or using netlify or something. It'd be a really easy way for people to try things out without needing to set up to start.
We currently support custom components (with hooks even!) - some examples in the code:
https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/examples/Bin.re#L50
https://github.com/bryphe/revery/blob/711a90a0b19af2f39dd984cb1570a1231e745365/examples/Bin.re#L6
However, our documentation just says TODO. We should put a simple example there and detail the 'anatomy' of a custom component, so that it is more accessible.
I'd like to publish an OPAM package for revery to make sure it's usable with the broader OCaml community (which would primarily consume this library via OPAM).
I've only tested it with esy currently, but it would be great to publish and verify this through OPAM.
Issue: Today, all our animation infrastructure uses a 'linear' easing function.
This happens in our getLocalTime method - we're implicitly using the linear easing function, which is just t => t. However, linear easing is not very visually appealing - usually you want an easing-style animation which more closely replicates a physical model.
The proposal is to add an additional parameter, easing, to our animation options:
https://github.com/bryphe/revery/blob/e024382f2631ac2026f129c52091e410c5e7a29a/src/UI/Animation.re#L27
This would simply be a function of type easingFunction: float => float.
The user could define their own easing function, or we could have some defaults:
- linear
- step(boundary) would be t => t < boundary ? 0.0 : 1.0
- quadratic would be t => t * t
- cubic would be t => t * t * t
- easeIn
- easeOut
- easeInOut
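A sketch of what some of these defaults could look like, in OCaml syntax - each is a float -> float over normalized time [0, 1] (the ease-in-out formula here is one common quadratic formulation, an assumption rather than a settled choice):

```ocaml
(* Identity easing: the current implicit behavior *)
let linear t = t

(* Accelerating from zero *)
let quadratic t = t *. t
let cubic t = t *. t *. t

(* Jump from 0 to 1 once t crosses the boundary *)
let step boundary t = if t < boundary then 0.0 else 1.0

(* Quadratic ease-in-out: accelerate to the midpoint, then decelerate *)
let ease_in_out t =
  if t < 0.5
  then 2.0 *. t *. t
  else 1.0 -. (((-2.0 *. t) +. 2.0) ** 2.0) /. 2.0
```

Since each of these is just a `float => float`, user-defined easing functions would plug into the same `~easing` parameter with no special casing.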
A 'wall clock' would be a great example for showcasing the transform functionality in #40 - the rotations could be used to implement the hour / minute / second hands. In addition, it could be packaged as a custom component, so that there'd be a clear cut example of a stateful custom component.
A calculator would be a great example for using the Button in #37 , and perhaps some simple custom components.
Issue: The font rendering is not as clear as it could be on low-dpi displays.
One contributing factor is that we are not appropriately handling gamma color space. When we render a glyph, freetype gives us back an alpha mask - each pixel is an 8bit value describing the coverage. If a pixel is 50% covered, it is made 50% black.
However, that 50% does not actually translate to 50% brightness - we treat '128' as the halfway point (which it is in linear space), however, in actuality, ~'186' is the halfway point for brightness.
This is described in more detail here:
One open question is - does this mean we can't render text with a transparent background? It might be that, for subpixel rendering and for this, we'd need to render text with a solid background (or a known color / map in the background).
Following from the above document, there might be a way to gamma-correct properly w/o knowing the background: https://bel.fi/alankila/lcd/
The goal of this work would be to implement proper gamma-correction - ideally preserving transparent backgrounds for the text, if possible!
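To make the 128-vs-~186 point concrete, here is the arithmetic under a simple power-law transfer with gamma ≈ 2.2 (an approximation - the real sRGB curve has a linear toe, but the result is close):

```ocaml
let gamma = 2.2

(* Map a physical (linear-light) coverage value in [0, 1] to the 8-bit
   value that produces that brightness on a gamma-encoded display. *)
let linear_to_gamma coverage = (coverage ** (1.0 /. gamma)) *. 255.0

(* 50% physical coverage lands near 186, not 128 *)
let halfway = linear_to_gamma 0.5
```

So a glyph pixel that freetype reports as 50% covered should end up near 186 when blended against a white background in gamma space, which is why treating 128 as "half" makes antialiased stems look too heavy or too light depending on polarity.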
Often, UI widgets / controls are relatively static - they don't require a re-render very frequently.
For those cases, it doesn't really make sense for us to re-render the entire widget every frame. This is costly and involves lots of transforms. If it doesn't get updated very often, it makes sense to render the widget to a render target (a texture). When it's cached - we'd just render a quad + that rendered texture.
Proposal: We'd add a <container>...</container> tag that does this caching (it would correspond to a ContainerNode that handles this).
The downside is that, if a widget is rendered frequently, it ends up being more expensive to render to the texture, plus use that texture to render a quad.
I thought about whether it would be possible to automate this, but I think the application developer has the right 'domain' knowledge about this pattern. It would never be functionally necessary to use this <container />.
In addition to the ContainerNode itself, we'd have to add some dirty-tracking - the container node would need to know how to check its children to see if invalidation of the cached texture is necessary.
It's similar to how, in CSS, developers would use translate3d(0, 0, 0) to force layer promotion (ie, https://aerotwist.com/blog/on-translate3d-and-layer-creation-hacks/) - this would just be more explicit.
/Users/Bret/.esy/3____________________________________________________________________/b/esy_freetype2-2.9.1001-4cd9f534
Using compiler: gcc
include...
.
..
freetype
ft2build.h
lib..
.
..
cmake
libfreetype.a
pkgconfig
ld: library not found for -lpng
clang: error: linker command failed with exit code 1 (use -v to see invocation)
./esy/test.sh: line 21: ./test: No such file or directory
error: command failed: './esy/test.sh' (exited with 127)
esy-build-package: exiting with errors above...
building [email protected]
esy: exiting due to errors above
I'm not entirely sure if this belongs here or on the esy issues, so any guidance is appreciated
REPRO:
esy x Bin.exe
Expected: App should launch.
Actual: Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")
The issue is that when we load fonts, or assets like images, we load them from the current working directory. This is problematic, because the user should be able to launch the executable from anywhere.
Because of this limitation, we have to include this awkward instruction in our README.md:
After you build, the executables will be available in the _build\install\default\bin folder.
NOTE: Currently the executables must be run from install\default\bin, since the assets are there.
However, the user should just be able to run esy x Bin.exe to try out the example app.
Streamlining this would make the first-run experience much smoother, and also unblock #136 .
Some things we need to do:
- Expose a way to get the executing directory in rench: revery-ui/rench#15 - but we could just port over the relevant code, too:
let getExecutingDirectory = () => {
  Filename.dirname(Sys.argv[0]);
};
- Update the TextNode and ImageNode classes, or handle it lower in the stack (ImageRenderer, FontCache), to append the executing directory to the requested asset path.
Considerations:
- getExecutingDirectory simply returns / (the root) in the JSOO environment.
In the future, we might want to make our asset loading more flexible - some scenarios we'll potentially need to address:
Like we have for the <view /> primitive, we should have these events available for the <image /> primitive.
Thinking about #145 - for some of these very visual cases, we have no current test coverage. It's important to be able to make changes safely and confidently - so I always think when there is a regression - how can we improve our 'safety net' to catch these?
We're getting to a level of features with background color, text rendering, borders, shadows that it becomes tough to validate all of these in a PR change!
What I'd like to add to our infrastructure is a set of image-based verification tests, that can validate some of these basic scenarios. These would render a simple scene or component, save it as an image, and then compare that image to a snapshot.
This isn't a new idea; tools like Telerik have supported it for a while, and there's even a RosettaCode task for an image-comparison algorithm like this.
The challenge with such tests is making sure they are reliable and easy-to-update. For reliability, it often helps to have a threshold (% of pixels with the same value), or use per-platform snapshots (there might be differences in anti-aliasing, for example). These test suites should be pretty limited and focused on the core set of rendering primitives we have, because they have a maintenance cost. But they can help protect us against regressions.
Open questions:
- The glReadPixels API mentioned in this StackOverflow post could help with capturing the rendered output.
Alternatives:
One alternative to image-based snapshot testing is OpenGL API snapshot testing - essentially, put a proxy in place for all the glXXX calls that records the inputs. This can ensure we end up with the same set of GL calls. This is more robust than the image-verification approach, but it also carries a much higher maintenance cost - any internal refactoring or performance improvement that would've passed the image verification test would flag as a failure for these tests. So I'd lean towards the image verification tests, for now.
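A sketch of the threshold comparison in OCaml, treating images as flat pixel arrays of equal dimensions (the `matches` name and shape are assumptions for illustration):

```ocaml
(* Pass when the fraction of differing pixels is at or under the
   threshold - this absorbs small anti-aliasing differences between
   platforms or driver versions. *)
let matches ~threshold expected actual =
  let total = Array.length expected in
  let differing = ref 0 in
  Array.iteri (fun i px -> if px <> actual.(i) then incr differing) expected;
  float_of_int !differing /. float_of_int total <= threshold
```

A per-platform snapshot directory could then share this one comparator, with the threshold tuned per scene (text-heavy scenes likely need more slack than solid quads).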
Today, we have a Hooks.animation hook here:
https://github.com/revery-ui/revery/blob/master/src/UI_Hooks/Revery_UI_Hooks.re and https://github.com/revery-ui/revery/blob/master/src/UI_Hooks/Animation.re
That is used as follows:
let (rotationY, pauseRotationY, restartRotationY, hooks) =
Hooks.animation(
Animated.floatValue(0.),
Animated.options(
~toValue=6.28,
~duration=Seconds(4.),
~delay=Seconds(0.5),
~repeat=true,
(),
),
hooks,
);
(from https://github.com/revery-ui/revery/blob/master/examples/Hello.re)
I think it'd be convenient to have a Hooks.transition hook, that would work as follows:
let currentValue = Hooks.transition(1.0, { duration: Seconds(1) });
The idea is that you could use this along with some other events, for example:
let (opacity, setOpacity) = Hooks.state(1.0);
let transitionedOpacity = Hooks.transition(opacity, { duration: Seconds(1) });
let onMouseDown = () => setOpacity(0.5);
let onMouseUp = () => setOpacity(1.0);
<view style={Style.make(~opacity=transitionedOpacity, ())} ... />
This would enable a smooth transition between the opacity values, as opposed to just directly switching from 0.5 <-> 1.0. The Hooks.transition hook could leverage Hooks.state under the hood to keep track of the last value. If the last value is different, it could start an animation and use the animated value. Otherwise, it could just return the current value.
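A sketch of that bookkeeping in plain OCaml - no actual hook machinery, the state is just held in a closure here to illustrate the restart-on-target-change logic (all names are hypothetical):

```ocaml
let clamp lo hi v = max lo (min hi v)

(* Returns a stateful function: call it each frame with the clock and the
   desired value; it eases toward new targets instead of jumping. *)
let make_transition ~duration initial =
  let target = ref initial and origin = ref initial and started = ref 0.0 in
  fun ~now value ->
    let progress = clamp 0.0 1.0 ((now -. !started) /. duration) in
    let current = !origin +. ((!target -. !origin) *. progress) in
    if value <> !target then begin
      (* Target changed: restart the animation from the current value,
         so an in-flight transition never snaps. *)
      origin := current;
      target := value;
      started := now
    end;
    !origin +. ((!target -. !origin) *. clamp 0.0 1.0 ((now -. !started) /. duration))
```

In the real hook, `origin`/`target`/`started` would live in `Hooks.state`, and `now` would come from the animation clock; interpolation is linear here but could compose with the easing work in the issue above.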
Issue: On Low DPI displays, the text rendering is not as clear as it could be.
One common technique for dealing with this is subpixel rendering, which exploits the fact that LCD displays tend to have subpixel strips in RGB order. It's a strategy for increasing the effective resolution of font rendering by taking advantage of this pixel geometry.
@cryza did some amazing work in Oni to set up a full-WebGL based subpixel rendering strategy: onivim/oni#2120
The idea, as I understand it, was to render the same glyph 4 times (for each subpixel offset case), and then pick the appropriate one based on the pixel offset of the actual glyph. It would be great to have a similar strategy here - in combination with #108, we'd have some really sharp font rendering!
Setting a background for text elements does not set the color in the actual character's cell to match the overall background color, leading to a patchy appearance. I'm not entirely sure how to go about looking into this, @bryphe, as the meat of how revery works (shaders, etc.) is very new to me, but I could have a look if you point me in the right direction.
Inspired by react-spring - springs are excellent tools for creating an interactive and animated UI. Another great description comes from the react-motion repo.
We could enable easy spring-based animations via a useSpring hook. There's been thinking about this already here: https://medium.com/@drcmda/hooks-in-react-spring-a-tutorial-c6c436ad7ee4
Our hook could look like:
let currentVal = useSpring({ currentValue, destinationValue, springConfiguration });
springConfiguration could have configurable properties:
- stiffness
- damping
And, like react-motion, it would be helpful to have a good set of presets for this.
Under the hood, we'd need to use setState to store the current position, velocity, and acceleration. We can use Hooke's Law to determine the acceleration based on the force (F = kd = ma).
We'll also need to update this every tick - we'll need to generalize our Animation framework a bit to allow for arbitrary tick functions (right now, we always call the tickAnimation function).
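A sketch of such a tick function in OCaml, using semi-implicit Euler integration (the record shape and `tick` signature are assumptions; the stiffness/damping values in the usage note are roughly in the ballpark of react-motion's presets):

```ocaml
type spring = { stiffness: float; damping: float; mass: float }

(* One integration step: Hooke's law with damping gives the force,
   F = -k * displacement - c * velocity, and a = F / m. Velocity is
   updated first (semi-implicit Euler), which is more stable than
   naive Euler for stiff springs. *)
let tick ~dt spring ~target (position, velocity) =
  let displacement = position -. target in
  let force = (-.spring.stiffness *. displacement) -. (spring.damping *. velocity) in
  let acceleration = force /. spring.mass in
  let velocity = velocity +. (acceleration *. dt) in
  (position +. (velocity *. dt), velocity)
```

With something like `{ stiffness = 170.0; damping = 26.0; mass = 1.0 }` the spring is close to critically damped and settles on the target without much oscillation; presets would just be named records like this.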
Placeholder to track adding a Button component. I believe we could have this today with the primitives we provide (a view / image), but we do need mouse support:
https://github.com/bryphe/reason-glfw/pull/28
Now that we have Clickable and <Button /> in #152, I was thinking it'd make sense to consolidate to a single example project. We could still keep each sample in its own Module - but have some sort of navigation story to go between them.
For example, we could have a <Button /> for each example in a pane, and then render the selected example in the remaining space.
It'd be nice to just be able to run esy x Examples.exe and quickly navigate between them!
We force the user at the moment to pick a font via fontFamily. If a fontFamily is not specified, you get a crash with this exception:
- Loading font:
Fatal error: exception Fontkit.FontKitLoadFaceException("[ERROR]: Unable to load font at FT_New_Face\n")
This is pretty rough - we should pick a default font family if none is specified. I think the Roboto-Regular.ttf that we already bundle, and use in several examples, would be an OK default.
Does anyone have any thoughts / preferences on this?
The only notes I see concerning web support are an unchecked box for supporting it as a platform, so I'm assuming this is a known issue, but I'm curious what the state of it is. In the build, I see the script 'build:js'. I ran it, then started an HTTP server in _build/default/examples and opened index.html. I got the following error:
Uncaught TypeError: runtime.caml_glfwDefaultWindowHints is not a function
I checked _build/default/examples/Bin.bc.js, and it seems to be defined.
I'm personally interested in using js_of_ocaml for full stack dev, so very curious about the root cause here and steps to debug and solve.
This is ported from ideas @jchavarri and @OhadRau mentioned in PR #153 - some really neat ideas around compile-time asset loading. (Not my idea, so I don't want to take credit for it!) I brought over some notes from that PR:
From @jchavarri :
I'd love to play around with ideas around handling compile-time-known asset paths at build time. I'm still unsure how that would look exactly, but my idea right now is to read these paths at compile time, maybe through a ppx, read the assets from the ppx binary, and convert the load expression into an assignment to a binding of the whole path's binary data (maybe as a string? or as binary data?). Something like https://github.com/johnwhitington/ppx_blob.
From @OhadRau :
@jchavarri If you're interested in compile-time asset loading, that's something I was actually going to try to make. I've thought of a few ways of doing it, and once I settle on one I'll go ahead and write a PPX for it:
There seems to be an issue where the events don't propagate to the layer that one would expect.
It might be related to the coordinates and pixelRatio conversion on retina screens.
In the gif below, I'm clicking on the Click Me button, which should increase the counter, but the events end up reaching the logo image:
@bryphe It might be an issue upstream in glfw - I wasn't sure, so for now I'm opening it here, where the Bin example is.
Like #124, but for the <text /> primitive.
Once https://github.com/bryphe/reason-glfw/issues/66 is implemented, we'll have the ability to change the cursor. This is important to give users the UX they expect - a way to show when an element is clickable when you hover over it, an I-beam to show the user that text input is available, etc.
When we have those APIs, we'll have to decide on a way to expose them in revery.
I think the most intuitive approach, for users coming from React in the browser, would be to have a cursor style property: https://developer.mozilla.org/en-US/docs/Web/CSS/cursor
We could leverage our mouse tracking + hit testing to figure out the 'cursor' style of the node the mouse is over, and decide how to call the glfwSetCursor API based on that.
Inspired by React Native - elements should be able to be passed an arbitrary transform: https://facebook.github.io/react-native/docs/transforms
In particular, we should support the following transform types:
rotateZ
rotateY
rotateX
rotate (proxy for rotateZ)
scale
scaleX
scaleY
scaleZ
translateX
translateY
Note that in Reason, we can have a more ergonomic API by using Variants, like:
<view style=(Style.make(~transform=[TranslateX(100), Scale(0.5)], ())) />
Where we define a transform type like:
type transform =
| RotateZ(..)
I need word wrap for a PoC I'm building. I talked a bit to @bryphe on discord about this and it should probably be implemented here:
Line 114 in f025d79
And we can probably use an algorithm from here: https://stackoverflow.com/questions/17586/best-word-wrap-algorithm
I will probably try to implement this myself, but if someone else wants to pick it up, feel free to do so.
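A sketch of the greedy approach from that thread, in OCaml, with the text-measurement function passed in so a real font measurer could be plugged in later (names are assumptions, not the eventual Revery API):

```ocaml
(* Greedy word wrap: keep appending words to the current line while the
   measured width stays within max_width; otherwise start a new line.
   [measure] maps a string to its width in whatever unit max_width uses. *)
let wrap ~measure ~max_width text =
  let words = String.split_on_char ' ' text in
  List.rev
    (List.fold_left
       (fun lines word ->
         match lines with
         | [] -> [word]
         | line :: rest ->
           let candidate = line ^ " " ^ word in
           if measure candidate <= max_width
           then candidate :: rest
           else word :: line :: rest)
       [] words)
```

Greedy wrapping is O(n) and matches what browsers do; the minimum-raggedness (Knuth) variant from the same thread could come later if line balance matters.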
We have a very small subset of colors defined in our Colors module - like Colors.red, Colors.blue, etc.
It'd be great to have parity with the default set in HTML - I think this is a comprehensive list here:
https://www.w3schools.com/colors/colors_names.asp
Might be easier with #102 tackled - but it would be awesome to have this full set of colors in revery, too!
Good place to start looking:
https://github.com/bryphe/revery/blob/master/src/Core/Colors.re
Whenever an element is rotated, it keeps its initial position for the remainder of the runtime of the program. I can't tell if this is something on my end or if transforms just haven't been finished for elements.
Really minimal example:
open Revery;
open Revery.Core;
open Revery.UI;
let init = app => {
let w = App.createWindow(app, "test");
let ui = UI.create(w);
let textHeaderStyle = Style.make(~backgroundColor=Colors.red, ~color=Colors.white, ~fontFamily="Roboto-Regular.ttf", ~fontSize=24, ~marginHorizontal=12, ());
let smallerTextStyle = Style.make(~backgroundColor=Colors.red, ~color=Colors.white, ~fontFamily="Roboto-Regular.ttf", ~fontSize=18, ~marginVertical=24, ());
Window.setShouldRenderCallback(w, () => true);
Window.setRenderCallback(w, () => {
UI.render(ui,
<view style=(Style.make(~position=LayoutTypes.Absolute, ~bottom=50, ~top=50, ~left=50, ~right=50, ~backgroundColor=Colors.blue, ()))>
<view style=(Style.make(~position=LayoutTypes.Absolute, ~bottom=0, ~width=10, ~height=10, ~backgroundColor=Colors.red, ())) />
<view style=(Style.make(~width=128, ~height=64, ~transform=[RotateX(Angle.from_radians(Time.getElapsedTime()))], ())) />
<text style=(textHeaderStyle)>"Hello World!"</text>
<text style=(smallerTextStyle)>"Welcome to revery"</text>
<view style=(Style.make(~width=25, ~height=25, ~backgroundColor=Colors.green, ())) />
</view>);
});
};
App.start(init);
Note that this is the exact same example as Bin.re, just with the tag changed.
(Btw thanks for this project, I've been waiting for something like this to come along for ages... would love to help out on some of the work for this library)
odoc is a great OCaml-community-supported project for generating documentation - and it even integrates with esy. It'd be awesome if we could use it to generate some initial documentation.
I imagine we'll need to do a better job of documenting in the files (using the proper doc-comment blocks to get good descriptions in the documentation) - it'd be helpful to know what we need to do there to get high-quality docs.
The overflow: hidden style is important, and will be useful as we start implementing scrollable widgets.
A couple things we'd need to do:
- Plumb an overflow style property through our Style type and layout (flex)
- When overflow: hidden is set, we'd want to use glScissor to clip the rendering region to the widget's bounds. We might have to take extra care in the transform case such that we only scissor the axis-aligned bounding box.
capture-like API for mouse events
Often, for UI elements, after the initial mousedown, the component needs to track the mouse movement and actions exclusively. Some examples of this:
- <Button /> - for a click event, you don't want to dispatch it immediately on the mousedown - most UIs will wait for the mouseup before dispatching. If the mouseup occurs elsewhere, the UI does not fire a click event. While in this limbo-state between mousedown and mouseup, hovering over other elements is a no-op.
- <Slider /> and <Scrollbar /> - once a mousedown has occurred, we want to track the mouse movement, even if the mouse cursor moves away from the slider or scrollbar. We can still update the value of the slider / scrollbar until the user releases via a mouseup event.
Add a Mouse.setCapture API that could be used as follows:
/* While capturing is active, events will _only_ be forwarded to these handlers */
Mouse.setCapture(~onMouseDown, ~onMouseUp, ~onMouseMove);
...
/* Release capture */
Mouse.releaseCapture();
For a button element, we could add an onMouseDown handler that looks like this:
let onMouseDown = (evt) => {
let noop = (_evt) => ();
let releaseCapture = ref(None);
let capturedMouseUp = (evt) => {
dispatchClickEvent(evt);
Mouse.releaseCapture();
};
Mouse.setCapture(~onMouseDown=noop, ~onMouseMove=noop, ~onMouseUp=capturedMouseUp);
};
The <Button /> could do extra validation - like verifying the onMouseUp actually occurred over the element, or that it was within a certain distance, etc.
For box-shadow, we'll want to add a Style property, something like:
Style.make(~boxShadow=BoxShadow(-5., -5., 10., 10., Color.rgba(0., 0., 0., 0.5), ...), ())
The box-shadow properties would mirror the properties from CSS (in order): https://developer.mozilla.org/en-US/docs/Web/CSS/box-shadow
Note that the box shadow won't impact Layout, so we don't need to worry about passing it to layout (like we did for flex properties!)
The way I would think about tackling this is splitting it up into a few parts:
Part 1: Set up the types
For part 1, I'd just look at adding the types in Style.re, and plumbing them through.
We'd need to:
- Add a BoxShadow type
- Add a boxShadow property to the Style record / make functions
Part 2: Render a solid shadow
For part 2, I'd skip the blur / spread to simplify things - we can just render a quad the same size as the view node, but offset based on the values of xOffset and yOffset.
The place we'd want to look at drawing this is in the ViewNode - this is the code that draws the background of a node:
https://github.com/bryphe/revery/blob/4a52028176aad32f81973a3a0bbc651299359dda/src/UI/ViewNode.re#L48
Prior to drawing that - we'd want to draw the 'shadow' quad. Right now, our rendering looks like this:
- Set uWorld for background quad - this is the transform matrix
- Set uColor for background quad - this sets the color in the shader
With our shadow, we'd want to do this in two passes:
- Set uWorld for the shadow - this is like the transform for the background quad, but with an extra translation to set the xOffset/yOffset.
- Set uColor for the shadow - this would be based off the shadow color
- Geometry.draw(_quad, solidShader)
- Set uWorld for background (same as today)
- Set uColor for background (same as today)
- Geometry.draw(_quad, solidShader)
So our shadow would add an extra draw call
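In ViewNode, the two passes could be sketched roughly like this. The uniform-setter names (`setUniformMatrix`, `setUniformColor`) and the `Mat4` helpers are assumptions about the renderer internals, not actual API:

```reason
/* Pass 1: shadow quad - same geometry, offset by the shadow's x/y offsets. */
let shadowWorld = Mat4.multiply(world, Mat4.translate(xOffset, yOffset, 0.));
setUniformMatrix(solidShader, "uWorld", shadowWorld);
setUniformColor(solidShader, "uColor", shadowColor);
Geometry.draw(_quad, solidShader);

/* Pass 2: background quad, exactly as today. */
setUniformMatrix(solidShader, "uWorld", world);
setUniformColor(solidShader, "uColor", backgroundColor);
Geometry.draw(_quad, solidShader);
```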
Part 3: Render the blurred edges
For part 3, we'd take into account the spread
and blurRadius
.
I think the easiest way to handle this (at least in a quick way) would be to add additional quads for the edges, and render those edges with a shader that handles the gradient. At the 'core' of our shadow, the opacity would be 1.0
x the shadow color. At the edge of the shadow, it'll fall off to 0
- becoming more transparent as it spreads out. We could use additional quads around the core shadow, along with a shader that models this fall-off to get a smooth gradient.
There might be better ways to handle this, too.
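As a first approximation of that fall-off, the edge quads' shader could ramp alpha linearly with distance from the core. Real CSS shadows use a Gaussian blur, so this hypothetical helper is only a rough model:

```reason
/* Alpha for a fragment `distance` pixels outside the core shadow:
   1.0 at the core edge, fading to 0.0 at `blurRadius` pixels out.
   Assumes blurRadius > 0. */
let shadowAlpha = (~blurRadius, ~distance) =>
  max(0., 1. -. distance /. blurRadius);
```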
Right now, for our examples like AutoComplete, we just bind directly to the raw GLFW / window events.
This isn't ideal, because for things like focus management to work, we need to control the 'bubbling' of events across the node 'hierarchy'.
We should implement a handleEvent
method on our Node hierarchy, and add the following event types:
KeyDown
KeyPress
KeyUp
MouseDown
MouseUp
These are events that we'd bubble through our node hierarchy, and then expose as handlers on the primitives / component tags.
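Those events could be modelled as a variant on the node side - a sketch, where the payload types are placeholders rather than real definitions:

```reason
/* Hypothetical event type bubbled through the node hierarchy;
   keyEvent / keyPressEvent / mouseButtonEvent are assumed payloads. */
type event =
  | KeyDown(keyEvent)
  | KeyPress(keyPressEvent)
  | KeyUp(keyEvent)
  | MouseDown(mouseButtonEvent)
  | MouseUp(mouseButtonEvent);

/* Each node would then implement: method handleEvent: event => unit */
```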
I had all the other dependencies, but I was unaware I needed ragel
until I saw that harfbuzz
failed to build. I'm not sure where the best place to document this would be - any guidance so I can submit a PR? Thanks!
At the current time, we have a very simple event model for handling mouse events.
That logic is in https://github.com/revery-ui/revery/blob/master/src/UI/Mouse.re , specifically here:
let isNodeImpacted = n => n#hitTest(pos);
let nodes: ref(list(Node.node('a))) = ref([]);
let collect = n =>
  if (isNodeImpacted(n)) {
    nodes := List.append(nodes^, [n]);
  };
Node.iter(collect, node);
List.iter(n => n#handleEvent(eventToSend), nodes^);
This is very simple - we check for all the nodes that pass the 'hit-test', and dispatch the event to all of them.
However, this is unexpected behavior and not intuitive if you're coming from web programming - as webdevs, we'd expect the event to dispatch to the top-most element, and bubble up from there!
We need to implement this event-bubbling behavior in revery. This will be useful not just for the initial mouse events, but for all sorts of other events - like keyboard input, etc.
We add a UiEvents module that has a method bubble = (node, event). The bubble would do a few things:
- Augment the event with stopPropagation or preventDefault - like we'd expect in Web events (these could be implemented with refs).
- Call handleEvent for each node in the hierarchy. If stopPropagation is called, it should discontinue the traversal up the hierarchy.
We should be able to craft unit tests that exercise this with some simple Node objects that have handlers that call stopPropagation, along with counters that validate whether the events were hit.
We can hook this up to the mouse event bubbling behavior today, by picking out the top-most node that passes the hit-test. We need to make sure our z-index tracking is working correctly for this.
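A minimal sketch of what bubble could look like - hedged: the `#getParent()` method returning `option(node)`, the wrapper record, and the `handleEvent` signature are assumptions about the Node API:

```reason
/* Hypothetical wrapper giving handlers a stopPropagation hook. */
type bubbledEvent('a) = {
  event: 'a,
  stopPropagation: unit => unit,
};

let bubble = (node, event) => {
  let stopped = ref(false);
  let wrapped = {event, stopPropagation: () => stopped := true};
  /* Walk up the hierarchy until a handler stops propagation. */
  let rec traverse = current =>
    switch (current) {
    | None => ()
    | Some(n) =>
      n#handleEvent(wrapped);
      if (! stopped^) {
        traverse(n#getParent());
      };
    };
  traverse(Some(node));
};
```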