kpreid / all-is-cubes

Yet another block/voxel game; in this one the blocks are made out of blocks. Runs in browsers on WebGL+WebAssembly.

Home Page: https://kpreid.dreamwidth.org/tag/all+is+cubes

License: Apache License 2.0

Rust 98.58% JavaScript 0.10% HTML 0.09% CSS 0.12% Handlebars 0.10% WGSL 1.02%
game-engine rust voxel-game

all-is-cubes's People

Contributors

dependabot[bot], kpreid


all-is-cubes's Issues

Bloom rendering fails / "Feedback loop formed between Framebuffer and active Texture"

On web, in Chrome, bloom fails to render (though everything else is fine) and the console gets repeated

[.WebGL-0x10402671500] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.

This, together with the lack of a wgpu validation error, suggests that in the configuration which bloom rendering uses (reading from a texture and writing to a different mip level of the same texture), wgpu is either mismanaging GL state or failing to enforce a downlevel limit.

  • This needs reduction to a test case.
  • As a workaround, we can use two textures for bloom instead of one, if my hypothesis is correct.

Handle transaction errors

The Transaction system is intended to allow various game rules and Behaviors to make changes in a consistent way (while also obeying Rust's exclusive borrow rules). However, most of the code that uses transactions (BehaviorSet, Universe::step()) is not actually prepared to handle transaction failures in robust ways. Fix that.

  • Transactions which make conflicting changes should both fail while everything else continues — no one transaction should take priority.
  • Failures should be reported appropriately within the UI and possibly be able to trigger alternative game logic.
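The first bullet can be sketched concretely. This is a hypothetical illustration (`CubeTxn` and `commit_batch` are made-up names, not the real all-is-cubes transaction API) of the intended conflict rule: any two transactions writing different values to the same cube both fail, and everything else commits.

```rust
use std::collections::HashMap;

#[derive(Clone)]
pub struct CubeTxn {
    /// cube position -> new block id
    pub writes: HashMap<(i32, i32, i32), u16>,
}

/// Returns (indices of committed transactions, indices of rejected ones).
pub fn commit_batch(txns: &[CubeTxn]) -> (Vec<usize>, Vec<usize>) {
    let mut rejected = vec![false; txns.len()];
    for i in 0..txns.len() {
        for j in (i + 1)..txns.len() {
            let conflict = txns[i].writes.iter().any(|(cube, value)| {
                // Writing the *same* value to the same cube is not a conflict.
                txns[j].writes.get(cube).map_or(false, |v| v != value)
            });
            if conflict {
                // Neither transaction takes priority; both fail.
                rejected[i] = true;
                rejected[j] = true;
            }
        }
    }
    let (mut committed, mut failed) = (Vec::new(), Vec::new());
    for (i, &r) in rejected.iter().enumerate() {
        if r { failed.push(i) } else { committed.push(i) }
    }
    (committed, failed)
}
```

With this rule, two transactions targeting the same cube with different blocks are both reported as failures (to be surfaced in the UI or handled by alternative game logic), while an unrelated third transaction still commits.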

Optimize evaluated voxel data layout (struct-of-arrays, palette…)

Right now, the storage of voxels in an EvaluatedBlock is a Vol<Arc<[Evoxel]>>. This is inefficient if all of the voxels share an identical value for some property (e.g. collision). Here are three things we could try doing instead:

  • Using separate storage, with separate resolution information, for each property (struct-of-arrays). This will hopefully permit:

    • Reduced memory usage (when properties are uniform)
    • Avoiding iteration over every voxel to calculate properties that can be calculated at lower resolution
    • Increased performance when such iteration is needed, since the relevant data is more densely packed
  • Continuing to use interleaved data, but with multiple choices of storage format — essentially extending Evoxels with more variants. For example, one could store only a color vector and uniform values for other properties. This would not support multiple resolutions (except for "1 and something else") and it might lead to machine code bloat handling all the variants, but it would avoid using multiple heap allocations.

  • Use a palette. This adds an indirection cost, and has the problem that block modifiers might cause a single block to contain more unique voxels than BlockIndex can represent, but it would likely save memory for typical complex blocks. It might be worthwhile for high-resolution blocks but not for low-resolution ones.
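As a sketch of the palette option (the `Voxel` struct here is an illustrative stand-in for the real `Evoxel`, not its actual definition): each distinct voxel value is stored once, plus a small index per cube, and reads pay one indirection.

```rust
use std::collections::HashMap;

/// Illustrative stand-in for `Evoxel` (not the real type).
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub struct Voxel {
    pub color: [u8; 4],
    pub collision: bool,
}

/// Palette storage: distinct voxel values stored once, one index per cube.
pub struct PaletteVolume {
    pub palette: Vec<Voxel>,
    pub indices: Vec<u16>,
}

impl PaletteVolume {
    pub fn from_dense(voxels: &[Voxel]) -> Self {
        let mut palette: Vec<Voxel> = Vec::new();
        let mut lookup: HashMap<Voxel, u16> = HashMap::new();
        let indices = voxels
            .iter()
            .map(|&v| {
                // Reuse an existing palette entry, or append a new one.
                *lookup.entry(v).or_insert_with(|| {
                    palette.push(v);
                    (palette.len() - 1) as u16
                })
            })
            .collect();
        PaletteVolume { palette, indices }
    }

    /// Read with one level of indirection.
    pub fn get(&self, i: usize) -> Voxel {
        self.palette[self.indices[i] as usize]
    }
}
```

For a resolution-32 block (32³ = 32768 cells) with few distinct voxels, a 2-byte index per cell is much smaller than a full per-cell struct; the caveat above applies, since an index type this small is exactly what modifiers could overflow.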

Implement saved games (a.k.a. `Universe` serialization)

Among the reasons this is nontrivial:

  • we want a schema that provides for future extensions or explicit versioning
  • Universe is a graph structure that may contain cycles
    • deserialization will require constructing URefs before their referents exist

In the long run I would like to have a save format where different parts of the universe are separate files (so that it's possible, if not necessarily wise, to e.g. change a block definition by swapping its file), but that is extremely hairy (filesystems are not transactional in the ways you'd like) and not necessary for a first iteration.
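The cycle problem above has a standard two-pass shape, sketched here with illustrative types (`Member` and the record format are made up, not the real schema; real URefs would also need weak references or an arena to avoid leaking cyclic `Rc`s):

```rust
use std::cell::RefCell;
use std::collections::HashMap;
use std::rc::Rc;

pub type MemberRef = Rc<RefCell<Option<Member>>>;

pub struct Member {
    pub label: String,
    pub links: Vec<MemberRef>,
}

/// Each record is (name, names of members it references).
pub fn deserialize(records: &[(&str, Vec<&str>)]) -> HashMap<String, MemberRef> {
    // Pass 1: construct a ref for every name before any referent exists.
    let refs: HashMap<String, MemberRef> = records
        .iter()
        .map(|(name, _)| (name.to_string(), Rc::new(RefCell::new(None))))
        .collect();
    // Pass 2: fill in each member; links resolve through the ref table,
    // so cycles are no problem.
    for (name, link_names) in records {
        let links = link_names.iter().map(|n| Rc::clone(&refs[*n])).collect();
        *refs[*name].borrow_mut() = Some(Member {
            label: name.to_string(),
            links,
        });
    }
    refs
}
```

A two-member cycle (`a` links `b`, `b` links `a`) deserializes without any ordering constraint, which is the property the real implementation needs.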

Block textures are stretched on some GPUs

The positive faces of all blocks are stretched along the Y or X axis. This is on Chrome OS 94.0.4606.104 (Lenovo Chromebook C330), which is only just barely able to run the app at all, so I haven't gotten around to digging into the possible cause. There are also inactive-uniform warnings for a_cube.

Screenshot 2021-11-01 17 23 01

Turning on DEBUG_TEXTURE_EDGE suggests that the texture edge clamp code thinks that the entire middle surface is at or beyond the positive edge clamp coordinates. (This screenshot is looking down while facing NZ.)

Screenshot 2021-11-01 17 36 46

Editing tools

Game content, i.e. a Universe, should be fully editable within the application itself. Gameplay of course involves placing and otherwise interacting with blocks within a Space, but complete editing will require further features.

  • Universe editing:
    • Browse the existing members; get a useful summary of how they currently relate to each other.
    • Delete named members.
    • Add new members (each type will need its own kind of creation form or action).
    • Edit global properties (when we have them) like simulation tick rate.
  • Block editing (which may be in a Space or in a BlockDef):
    • Attributes (name, collision, light emission, placement rules, ...)
    • Modifiers
    • Enter the Space defining its voxels to edit further.
    • Color picker for Atom blocks.
    • Duplicate an existing Block into a BlockDef.
  • Space editing:
    • Edit global properties (spawn, light, physics)
    • Inspect and delete behaviors
  • Character editing:
    • Ability to relocate it to an arbitrary location (this might be just controlling it with noclip).
    • Inspect and delete behaviors

Nice to have but not mandatory:

  • Space editing:
    • Region selection and bulk copy/fill
    • Resize (we cannot expect users to just get the size right the first time)
  • Space editing specifically for recursive blocks:
    • Scale a block space up or down with resampling.
    • Color picker and palette — don't require an inventory full of individually crafted atom blocks.

Choose a new logging backend

simplelog is not getting prompt releases (such as to fix the problem mentioned in f772717), and is also not ideal for some of our purposes (for example, the output of the log filter cannot be read and sent to a destination other than the log printing).

Find or write a new logging utility that does all the things we care about.

Improve behavior with arbitrary display refresh rates

The simulation steps are currently always 60 Hz, and the display refresh logic somewhat (depending on which windowing interface is being used) assumes that the display will also be interested in receiving frames at 60 Hz. One user has reported to me that it behaved oddly on their (I think) 120 Hz display.

The frame timing logic should be redesigned to explicitly accommodate whatever the refresh rate actually is.

A place to start on implementing this without needing new test hardware would be to change the simulation rate away from 60 Hz, which is something I want to support anyway.
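The redesign described above is commonly done with a fixed-timestep accumulator; here is a minimal sketch (illustrative names, not the engine's current timing code) in which the simulation runs at its own fixed rate while frames arrive at 60 Hz, 120 Hz, or anything else:

```rust
pub struct FrameClock {
    step_length: f64, // seconds per simulation step, e.g. 1.0 / 60.0
    accumulator: f64,
}

impl FrameClock {
    pub fn new(step_length: f64) -> Self {
        FrameClock { step_length, accumulator: 0.0 }
    }

    /// Call once per displayed frame with the real elapsed time; returns
    /// how many simulation steps to run before rendering this frame.
    pub fn advance(&mut self, frame_dt: f64) -> u32 {
        self.accumulator += frame_dt;
        let steps = (self.accumulator / self.step_length).floor() as u32;
        self.accumulator -= f64::from(steps) * self.step_length;
        steps
    }
}
```

On a 120 Hz display with a 60 Hz step this yields alternating 0- and 1-step frames, and changing the simulation rate is just a different `step_length`, which matches the suggestion above of decoupling it from 60 Hz first.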

Depth sorting doesn't work from some angles

My best guess is that the sorting is correct and the direction-to-sort-order conversion is not, but I haven't investigated yet. A possible next step would be to add debug drawing that indicates what ordering is being used for each nonempty chunk.

Gamepad support

Enable playing All is Cubes on systems that have a gamepad or other non-keyboard input device.

This will likely involve, or at least benefit from, implementing customizable keybindings (and user preferences support to store them). It will also require finding a library to receive such input events (winit does not yet provide them), except on web where we do all our own input anyway.

It will also require focus/cursor for navigating menus.

Texture allocation or usage glitches

At low framerates (or possibly only on web builds) the Animation demo glitches out and displays the wrong texture on a block. We're supposed to prevent this by keeping texture tile handles alive until the mesh is no longer in use, but evidently that isn't happening.


Screen.Recording.2023-07-10.at.20.01.18.mp4

Voxel light-emission rendering

Commit 761b4b1 revised the world model so that light emission is a property of individual Atoms rather than BlockAttributes, so recursive blocks can be partially lit. However, this is currently only interpreted by light propagation (and that in a non-directional way), not for renderers. Tasks:

  • Implement in raytracer. (Code is pushed in commit 39f8db0, but disabled for now.)
  • Implement in wgpu renderer. Preferably without allocating twice the texture space, but that would require being able to split meshes by material (which would be useful, but hairy).
  • Implement in glTF export; ditto regarding texturing.
  • Update example content as needed so it looks right under these new conditions.

Improve mouselook/pause UX and state transitions

Right now, mouselook is handled in a weird way where it's not the default and you have to push a button, and then it's kind of sticky. Most games that use mouselook instead arrange to have two states:

  • In game, unpaused, mouselook active.
  • In a menu, paused, mouselook inactive.

And if pointer lock (in web terms, or cursor grab in winit terms) is lost (externally imposed focus loss etc), then the game should enter the pause mode.

There are at least two reasons why I haven't just ended up there already:

  • I want the interface to be usable under conditions where pointer lock is unsupported.
  • I want launching the application to not demand focus + mouselook in order to be able to look at the world — this is mostly a developer workflow thing.

I think that a reasonable solution for the latter is to have a “pause” (may or may not actually pause) screen which frames the world rather than obscuring it with a central dialog-box style display. Then, clicking anywhere in the world would by default activate mouselook.

Texture-only block updates are incomplete

When a block is replaced with another block of the same shape, we try to keep the geometry and modify the texture. This only works as intended if the BlockMesh uses exclusively texture colors and no vertex colors, but it does not (even though the code looks like it is doing what it's supposed to); this is visible in the UI buttons having colors left over from the opposite toggle state.

This problem should be easy enough to fix if we can just add the right instrumentation to see what it's doing…

(Note that the UI buttons should also be setting their animation hint to indicate that they might be replaced; doing that would hide this bug.)

User interface completeness

All is Cubes should have a user interface which is sufficient for its purposes. Things that are needed for the game to be approachable without prior knowledge:

  • General-purpose text display as part of the widget system, for labels, instructions, text parts of the game state such as block names, and “about the game”.
  • A “paused” menu state which interacts with mouselook pointer lock/grab in the conventional fashion (releasing it and pausing when focus is lost or ESC is pressed, and such). #440
  • More hints and cues about how to interact with things, such as how mouse buttons work with the toolbar, and hover effects to signal clickability.

Things that are needed for a “version 1.0” level of completeness:

  • Inventory #305
  • Interactive editing of all parts of a Universe.
  • All of the modes of operation that are available via command-line options (e.g. import/export/record) should be available via menus.
  • “About the app” screen with credit and license information.
  • Editable keybindings.

Finish `GridArray` migration

GridArray is now a special case of Vol. Stop using the GridArray type alias, or come up with a rule for when we should still be using it.

Preferences system

We need a preferences system; an abstraction which persists, to config directory or browser storage, all the usual things:

  • graphics options
  • audio options when those exist
  • keybindings and other input options like look sensitivity
  • et cetera for all-is-cubes
  • platform-specific or library-user-defined extensions that work in the same style

Note that it is not sufficient to just serialize the GraphicsOptions and so on, because GraphicsOptions contains several enums which have numeric parameters (such as ExposureOption::Fixed) which should be remembered in a preferences system even if they're not currently active. The GraphicsOptions schema is optimized for unambiguously telling the renderer what to do, which is not aligned with editability. I expect that non-graphics options will have similar considerations.

(Despite this consideration, all-is-cubes-desktop currently does serialize GraphicsOptions. But it also doesn't persist edits yet — just creates a config file you can edit on disk. And web has no corresponding feature.)
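The distinction can be illustrated with hypothetical types (only `ExposureOption::Fixed` is from the real schema; the rest is made up for the sketch): the renderer-facing enum carries only the active choice, while the preferences record remembers the parameters of inactive variants too.

```rust
/// Renderer-facing option, as in the real schema: only the active choice.
#[derive(Clone, Copy, PartialEq, Debug)]
pub enum ExposureOption {
    Automatic,
    Fixed(f32),
}

/// Hypothetical preferences-side representation: the fixed exposure value
/// survives even while automatic exposure is selected, so toggling back
/// restores it.
pub struct ExposurePrefs {
    pub automatic: bool,
    pub fixed_value: f32,
}

impl ExposurePrefs {
    /// Project the editable preferences down to the renderer's schema.
    pub fn to_renderer_option(&self) -> ExposureOption {
        if self.automatic {
            ExposureOption::Automatic
        } else {
            ExposureOption::Fixed(self.fixed_value)
        }
    }
}
```

Serializing `ExposurePrefs` rather than `ExposureOption` is exactly the behavior the plain `GraphicsOptions` serialization cannot provide.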

Migrate content generation to be able to use transactions

Things like BlockBuilder and BlockProvider are organized around &mut Universe. It should be possible to use transactions instead for all the same functionality, so that they can be done asynchronously within an existing session.

WASM builds failing to produce wasm output

I am having a problem where updating wasm-bindgen causes there to be no all-is-cubes-wasm/dist/*.wasm output. Further investigation is needed, but I'm filing this issue to start making notes.

Presumably it is some kind of change in how wasm-bindgen interacts with wasm-pack and/or webpack. I've been thinking about getting rid of webpack since I'm not bundling other JS libraries, so that might be one solution.

My cargo installed versions of wasm-bindgen-cli and wasm-pack are up-to-date (0.2.84 and 0.10.3).

Save memory by discarding CPU-side mesh data

For interactive use, there is no actual benefit to SpaceMesh keeping a copy of the mesh data (of opaque triangles, at least), since it's always copied out to the GPU and used from there. In order to use less memory overall, we can and should keep it only on the GPU. There are two possible solutions I see:

  1. ChunkedSpaceMesh could discard the SpaceMesh and keep the render_data.
  2. SpaceMesh could become generic over the type of container used for vertices and indices. (We'd still need CPU-side data for depth sorting, but that applies only to transparent vertices.)

In either case, we'll want to arrange to hang on to the CPU-side allocations involved for reuse, but that could be done as an explicit context type rather than in-place update of the mesh itself.
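Option 2 can be sketched with illustrative names (not the real `SpaceMesh` API): make mesh building generic over the vertex container, so the same code can target a CPU-side `Vec` or a write-only GPU-upload type that retains no CPU copy.

```rust
pub trait VertexSink {
    fn push_vertex(&mut self, v: [f32; 3]);
    fn vertex_count(&self) -> usize;
}

/// Ordinary CPU-side storage (still wanted for transparent geometry,
/// which needs depth sorting).
impl VertexSink for Vec<[f32; 3]> {
    fn push_vertex(&mut self, v: [f32; 3]) { self.push(v); }
    fn vertex_count(&self) -> usize { self.len() }
}

/// Stand-in for a write-only GPU staging buffer: counts vertices but
/// keeps no data in CPU memory.
pub struct GpuOnly(pub usize);
impl VertexSink for GpuOnly {
    fn push_vertex(&mut self, _v: [f32; 3]) { self.0 += 1; }
    fn vertex_count(&self) -> usize { self.0 }
}

pub struct Mesh<S: VertexSink> {
    pub vertices: S,
}

impl<S: VertexSink> Mesh<S> {
    /// Build a mesh from quads into whichever storage is supplied.
    pub fn build(mut sink: S, quads: &[[[f32; 3]; 4]]) -> Self {
        for quad in quads {
            for &v in quad {
                sink.push_vertex(v);
            }
        }
        Mesh { vertices: sink }
    }
}
```

Passing the sink in by value also leaves room for the "explicit context type" mentioned above: a caller can keep and reuse the same container's allocation across rebuilds.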

Possible memory leak on Windows CI

Recent CI runs on Windows are failing, and the timing aligns with when I updated the wgpu dependency to 0.15. The symptom is that after some number of successful render tests, they start failing like

test no_update            ... panicked: Adapter::request_device() failed: RequestDeviceError

and before I pushed a change limiting test concurrency, I also saw errors like

wgpu error: Validation Error

Caused by:
    In Queue::write_texture
    not enough memory left

which suggests that something is not being deallocated properly (by wgpu most likely, but it could also be our code) when the renderer is dropped. Investigate this and construct a repro.

Multi-step content linking

Currently, BlockProvider::new() runs a function to create each block. However, some blocks (and other entities in the future) want to refer to each other, and there's no way to do that other than awkwardly jumping into explicit mutation. Additionally, in the future, other linking might want more complex interconnections.

My current thinking for how to solve this problem is to split the work into two steps:

  1. Run step-1 functions to create each universe member, with a possibly-placeholder value.
  2. For each member that needs it, run a step-2 function to execute a transaction on it to make connections, while giving it &URef access to other members.

Wasm workspace separation

Currently, all-is-cubes-wasm is a member of the main workspace. This is inconvenient, because it has dependencies that won't compile on desktop platforms, and other packages have dependencies that won't compile on Wasm. Currently, this is being addressed by using conditional compilation to stub out the -wasm code when compiling on other platforms.

If we instead create a separate workspace, this would have the following advantages:

  • Each workspace can be fully built for its supported platforms, without careful special case commands and cfgs
  • all-is-cubes-server can invoke wasm-pack build from its build script, solving the dependency currently handled messily by xtask (see #270)
  • The wasm build won't have any superfluous feature-unification with the other builds

However, the last time I tried to do this, upon running the produced wasm module, I got a runtime error about a memory access out of bounds. This indicates that something is seriously wrong in one of my wasm dependencies or the build process, and ideally it would be fixed whether or not we change the workspace layout. However, this will need effort to minimize the bug.

Light units and dimensions

Define and document the units and dimensions used for light, so that content authors have a proper standard to work from, and so that we can determine whether our algorithms are correct and consistent.

The two quantities that we expose are:

Space's per-cube light values

The cube light values are interpreted as "multiply a surface's reflectance by this, to produce the value fed to exposure and tone mapping". However, we haven't defined what reasonable exposure values are, and currently tend to assume 1.0 is the "normal" value, such that a cube light of 1.0 on a pure white surface will become white on the user's display.

If we follow “physically based rendering” standard practice, then we should define a physical quantity of light as the reference for what 1.0 cube light means, and then adjust exposure values to suit that. If we did that, then what is that quantity? (Should users get to choose it when creating their universes?) sRGB (whose primaries we use) specifies its white as having a luminance of 80 cd/m², but this is often dimmer than modern displays, so this isn't a particularly relevant number other than being one we could choose as an arbitrary 1.0 standard.

Independent of how the quantity is chosen, what are its dimensions? Should it have dimensions of luminance, such as cd/m²? I think so, because it should be divided by area (of emitting/reflecting surface), on the premise that if the cube light is held unchanged and more surface is made visible in the cube, proportionally more light should be radiated from that cube, or observed.

Note that this interpretation implies that when calculating an incoming light ray, the block as well as the source cube's light value needs to be taken into account — the light in the cube is not "pre-multiplied" by the amount of it that is radiated from that cube volume.

Block light_emission values

I'm currently pretty sure I want to change block light emission to be per-voxel rather than per-block. If we do this then the question should be framed as: how should the light output be affected when the emitting voxel is scaled down (which is not a physically realistic operation) from a full block to a voxel component of another block? I think I want the choice where combining a bunch of identical emitters into a block should produce the same output. That suggests that light emission should also be defined as luminance — (photometric) light output per area of light emitting surface.
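The invariance argued for above can be checked numerically (illustrative code, not the engine's light math): total light output from a face is luminance × emitting area, so a unit block face tiled by R × R voxel faces of area (1/R)² radiates exactly what a single full-block emitter of the same luminance would.

```rust
/// Total output of a unit block face fully tiled by emitting voxel faces,
/// assuming emission is defined as luminance (output per emitting area).
pub fn face_output(luminance: f64, resolution: u32) -> f64 {
    let voxel_face_area = (1.0 / f64::from(resolution)).powi(2);
    let voxel_faces_per_block_face = f64::from(resolution * resolution);
    luminance * voxel_face_area * voxel_faces_per_block_face
}
```

The resolution cancels out, which is precisely the "combining a bunch of identical emitters into a block produces the same output" property; defining emission as a total rather than a luminance would instead multiply the output by R².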

Make server → wasm dependency optional

It should be possible to build the workspace without necessarily running the wasm build. Use cases:

  • Build without requiring npm and wasm-pack tools.
    • People may wish to build and play or contribute to all-is-cubes without caring about the web target.
    • I haven't figured out how to get npm to work on Windows on a GitHub Actions runner; right now the CI steps have a special case for this.
  • Publish the all-is-cubes-server library without embedding a wasm binary; have it optionally supplied at run-time, or left out entirely, instead. This way we don't have large rarely-useful build artifacts in what we publish to crates.io.

Texture reallocation copy fails on WebGL

As of commit e4044db, the wgpu renderer supports reallocating the block texture atlas as needed. However, copying the old texture to the new texture silently fails under WebGL. This is plausibly a bug in wgpu, but we need to construct a standalone repro to report it usefully and confidently.

For now, I have added a workaround of copying from the CPU-side tiles instead of the old texture, but that wastes CPU time on the copies.

Mesh face colors

Commit 531ff47 added per-face colors to EvaluatedBlock. These should be used when generating meshes.

Implement “stair-climbing” collision

When the character collides with a small bump in the ground, it should be shifted upward instead of stopping, so that voxel “slopes” can be climbed without jumping. (Possibly we should also generalize this to all axes so that bumping a wall corner becomes going around it; I'm not sure whether that would feel good or not.)

Inventory management

To have a usable inventory we need:

  • An inventory size bigger than just the “hotbar” itself.
  • UI state where said entire inventory is visible.
  • Interactions to rearrange, re-stack, and discard inventory items.
    • UI, rendering, and input-handling support for drag-and-drop type gestures.
      • Click-and-drag inputs (as opposed to only instantaneous response to mousedowns)
      • Non-critical: Grabbed item rendered floating above the UI off-grid
    • Non-critical: support for items dropped in the world (#301).
  • For editing, a way to bootstrap into having various editing tools without necessarily having to keep the whole set in inventory. (I have been thinking that there should be a “free editing toolbox” that you can copy items out of, but that is even more mechanisms needed.)

VUI framework: missing features

Overview of UI framework stuff that is needed and not yet done:

  • Buttons with text labels.
  • Alignment to lines/planes that aren't the maximum available space, so, for example, HUD can set its position relative to the viewport in a cleaner way that allows for margin.
  • Mouse-pressed / dragging states (inventory manipulation, sliders, cancelling button presses).
  • Cursor/focus, for gamepad/keyboard navigation (#302) and for text entry.
  • Text entry.
  • Scrolling (panning the UI camera) for large content.
  • Widgets that can be replaced without rebuilding the entire page (this will probably be a Widget that contains a WidgetTree).

Change licensing to Apache & MIT

It is popular for Rust projects to be dual licensed under the Apache and MIT licenses. I would like to follow this, but have not yet reviewed the Apache license to determine if I like it.

Web build is re-claiming pointer lock when it should not

Chrome 115.0.5790.170

  1. Enter mouselook mode
  2. Press escape (browser handled cancellation of pointer lock)
  3. Note that the mouselook mode icon stays lit
  4. After a moment, pointer lock is reactivated

Mouselook mode should be deactivated when pointer lock is lost.

GPU renderer backend change

Currently, the “default” GPU renderer is based on luminance. There are two reasons it is currently unsatisfactory:

  • Under WebGL, there is a state corruption bug when existing buffers are updated. We want to do lots of updating existing vertex buffers (chunk modifications) and index buffers (depth sorting).
  • My desired targets include macOS; ARM-based Macs no longer support OpenGL, and luminance currently has no non-OpenGL backend for desktop environments.

Both of these might be fixed, but at this time the option I'm pursuing is switching to wgpu. This has its own list of problems I've encountered so far:

  • Despite having an OpenGL backend, wgpu fails to initialize on the Chromebook Linux environment that I successfully used luminance in. (Some further investigation might help, but it also might just be missing features — but the error didn't say. In any case, this isn't a requirement, just a nice-to-have.)
  • Compiling wgpu for web includes WebGPU support, which is an unstable API, so web-sys locks that behind a cfg flag (not even a feature) --cfg=web_sys_unstable_apis, which is giving me some trouble with integration into my current workspace build process. Consequently, I haven't actually tested it on web. (Option to investigate: can wgpu be made to compile with its webgl2 backend only?)
  • Possible shader miscompilation (or maybe I just wrote a bug when translating GLSL to WGSL).

Note that in this picture, neither option yet has a fully-functional WebGL mode.

So, that's why I currently have two, differently-incomplete, GPU rendering implementations in all-is-cubes-gpu. This issue will track the progress of sorting this out to a sensible condition of having only one, or at least one that works well in browsers. (Which might involve a bunch of waiting for libraries to improve.)


Todo list for features/bugs to take care of before the wgpu renderer is satisfactory:

  • Volumetric transparency
  • Debug visualization: chunk borders
  • Debug visualization: light rays
  • Fix the info text overlay drop shadow, which is missing the upper half
  • Error recovery (SurfaceError, at least; stretch goal is to deal with GPU out-of-memory)
  • Web glue (at least experimentally behind a feature if we don't get the web_sys_unstable_apis thing sorted out)
  • Investigate why debug lines have wrong colors (something in the green and blue channels).

Rendering for non-block objects (bodies, particles)

All is cubes, but there are reasons why some of those cubes should not necessarily be on the Space grid.

  • Particles for assorted visual effects.
  • Physics bodies that can move smoothly (without large update and rendering cost), such as multiple characters, dropped items, and projectiles.

For the simple case of fully opaque meshes, this will be a straightforward matter of adding separate instanced meshes, but:

  • The raytracer will need to be extended to be able to trace simultaneously through several potentially-intersecting shapes. This is architecturally feasible, but will come at a computation cost. (But the raytracer is also more of a “because we could” feature than something that necessarily needs to support everything well.)
  • The mesh-based GPU renderer will have trouble with depth sorting when a moving object is also transparent. (Perhaps we should recommend that all particles be either fully opaque or almost entirely transparent, to reduce the effect of sorting errors?)
  • Open question: should objects be able to rotate smoothly, or should we require only right angles?
  • Open question: What sort of inputs should be able to be used as the appearance of particles or bodies? Are they all Blocks, perhaps?

See also #301 for the simulation of these objects.

Multiblocks, take 2

The currently partially implemented scheme of multi-block structures is that each one is a Block which contains the entire structure, and when it is placed in the world, a Zoom modifier is used to create each individual block. This has several disadvantages.

  • Most fundamentally, it limits the (size × resolution) of the structure to the maximum Block resolution.
  • Evaluating the structure blocks potentially involves repeatedly evaluating the entire structure, if there is no intervening evaluation cache. This could be addressed in part by adding a region-of-interest to the EvalFilter so that the rest isn't needed.
  • If the multiblock has repeated elements, they have to be distinct Blocks even if their evaluations are identical.

Considering all these factors, I think we need a different system. It should still be possible for a single Block to stand in as a proxy for the whole, but the multiblock placement should be based on explicitly configured data. I'm not sure exactly how that should work yet, or even where the data lives — should it be an attribute? a modifier? a primitive? Should there be a MultiBlockDef universe member type?

We also need to actually design and implement multiblock placement rules; right now, everywhere it's done is a hand-coded operation in worldgen, not a game rule.

Text rendering

As discussed in 95a1578, I am adding Primitive::Text, text rendering in blocks without rendering to an intermediate space, for more efficient rendering and better interactive editability. The feature needs work:

  • Specify text positioning via a bounding box, so that we can tolerate font changes better and more reasonably specify location
  • Optional truncation/ellipsis of long text
  • Usage and demos:
    • Add a vui::widgets::Label that displays static text blocks from a string.
    • Add a vui::widgets::TextBox that displays dynamic and eventually editable text.
    • Add text labels to vui::widgets::*Button.
    • Replace all or most existing text rendering
  • Resolution and Z/depth issues:
    • Explicitly chosen resolution / text sizing within block
    • Add a means to get outlined text (as used in the existing tooltip and logo) efficiently — this will increase thickness
    • We may want characters to be able to have depth themselves (3D emoji)
    • Flag indicating intent to engrave the text, so all features should touch the front face and be legible as monochrome
  • Fonts (and possibly replacing embedded_graphics)
    • Create or find a good set of “system fonts”
    • Add user-defined font support
  • Specify, test, and demonstrate the layout of multiline text
  • Fix numeric overflows possible by large layout numbers (see b4ebb67)

Sound

There is only one critical requirement for sound:

  • Add a platform-independent mechanism for world and UI elements to “emit” sound (as a sequence of events), which the desktop and web code can then feed into some API.

But then there are further tasks and open questions to add sound to the game:

  • Decide how we are going to handle specific sounds, such as UI interactions and tool uses, in the all-is-cubes crate, minimizing bloat. Minimal set of recorded "click" noises? Pure synthesis? No concrete sound in the library, dependents must add their own?
  • Ambient sound synthesis (wind noise, rustles, hums, etc.) driven by character environment in the same way as camera exposure is.
  • Can we usefully synthesize sounds from block definitions? My predecessor project “Cubes” did that, though not well; I know more about synthesis principles and techniques now. It would probably still require material-property hints to avoid being too incongruous.
  • Perhaps a Universe should be able to contain recordings for custom SFX and background music.

Light model for transparency

If we want volumetric transparency to look consistent across block boundaries, then we need to compute the light within and across blocks in a way which will result in consistent rendering without seams between blocks. For example, if we have these two rays:

A   B
\   \
+\---\-+------+
| \   \|      |
|  \   \      |
|   \  |\     |
+----\-+-\----+
|######|######|
|######|######|
|######|######|
+------+------+

then ray A passes through a horizontal surface, then the transparent volume, and strikes a horizontal surface, but ray B passes through an additional vertical surface, and the two cube volumes that B passes through must combine to produce the same color (modulo intended local variations in lighting). The current light rendering model does not achieve this goal, because the smooth light calculation is completely different depending on the normal of the crossed/struck face — it interpolates light values in the two perpendicular (tangent) directions.

The obvious solution is to use a 3-dimensional interpolation analogous to our existing 2-dimensional interpolation, which can answer the question “what is the light at this 3D point”. This will cost more shader time; it may be worth revisiting the idea I previously had of doing the texture loads in the vertex shader rather than the fragment shader. (That would mean doing a cube of 27 loads per triangle, but maybe that's still a win compared to doing 8 in the fragment shader.)
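The 3-dimensional analogue of the existing interpolation is trilinear interpolation over the 2×2×2 light samples surrounding the query point (the “8 in the fragment shader” case). A CPU-side sketch, not the actual shader code:

```rust
/// Linear interpolation between two values.
fn lerp(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

/// Trilinear interpolation of the 8 light samples at the corners of a unit
/// cell, answering "what is the light at this 3D point" for a point at
/// fractional position (tx, ty, tz) within the cell.
/// `c[z][y][x]` holds the sample at corner (x, y, z).
fn trilerp(c: [[[f32; 2]; 2]; 2], tx: f32, ty: f32, tz: f32) -> f32 {
    // Interpolate along x on each of the four x-aligned edges...
    let x00 = lerp(c[0][0][0], c[0][0][1], tx);
    let x10 = lerp(c[0][1][0], c[0][1][1], tx);
    let x01 = lerp(c[1][0][0], c[1][0][1], tx);
    let x11 = lerp(c[1][1][0], c[1][1][1], tx);
    // ...then along y, then along z.
    let y0 = lerp(x00, x10, ty);
    let y1 = lerp(x01, x11, ty);
    lerp(y0, y1, tz)
}
```

Unlike the current face-normal-dependent tangent-plane interpolation, this formula is identical no matter which face a ray crosses, which is exactly the property needed to avoid seams between blocks.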

However, even given that interpolation we still need to work out what arithmetic on the light point samples will then produce the desired seamless volumetric light. We might have to integrate along the ray (which will not require more samples).

It might turn out that this is impractical. In that case, we may want to drop the idea of light-scattering transparent substances entirely and use only absorbing volumetric transparency, which is not affected by the illumination of the material. That could perhaps be combined with diffuse reflection at the surface of the material, which would require the renderer to perform index-of-refraction comparisons to decide how much reflection versus transmission occurs at a boundary.

A different benefit of the 3-dimensional interpolation, though, is that it will improve the lighting of voxel surfaces that are not at the surface of their containing cube; right now, there is a sharp transition depending on the depth at which the light is sampled.

`wgpu::TextureFormat::Rgba16Float` output is broken on `webgl` backend

Expected rendering (all-is-cubes-desktop):

Screen Shot 2022-10-23 at 10 30 49

Actual rendering (web):

web14

It appears that any channel with a value above 1.0 gets 0.0 (whereas clamping might be more expected).

This is as of commit c49b52810515bcdd362e3dd4120bb68d89cfc8a1. If I revert to when all-is-cubes was using wgpu 0.13, Rgba16Float doesn't appear in the list of supported formats.

Modifying a wgpu web example to use Rgba16Float and have some over-1 values does not reproduce the problem, so there is some kind of additional factor to discover (possibly something to do with my temporary frame buffer and optional tonemapping logic?).

I am going to temporarily work around this by avoiding the Rgba16Float format in choose_surface_format().
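The workaround could look roughly like this. `choose_surface_format()` is mentioned above as a real function, but its signature and surrounding logic here are assumptions, and a stand-in enum replaces `wgpu::TextureFormat` so the sketch is self-contained:

```rust
/// Stand-in for `wgpu::TextureFormat`, so this sketch compiles on its own.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TextureFormat {
    Rgba16Float,
    Bgra8UnormSrgb,
    Rgba8UnormSrgb,
}

/// Sketch of the proposed workaround: prefer any supported surface format
/// other than `Rgba16Float`, which the webgl backend renders incorrectly.
/// Assumes `available` is non-empty (as wgpu guarantees for a valid surface).
fn choose_surface_format(available: &[TextureFormat]) -> TextureFormat {
    available
        .iter()
        .copied()
        .find(|&format| format != TextureFormat::Rgba16Float)
        // If Rgba16Float is somehow the only option, fall back to it anyway.
        .unwrap_or(available[0])
}
```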

Handle missing features in render tests (fix CI)

Introducing MSAA created a problem: wgpu uses llvmpipe software rendering in CI, which doesn't seem to support MSAA, at least as we're invoking it. The output is not even correct non-antialiased, but besides that, this is the point at which we have to make the render tests capable of passing when the platform supports less than the tests want. (Or we could implement DIY supersampling, but I don't think that would be a reasonable option for this problem.)

My current plan is that headless renderers should output a set of Flaws listing the ways in which they did not implement the requested graphics options (or scene content). The render test comparison can then use that to select a different expected image.
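A minimal sketch of that plan, with hypothetical flag and file names (the real Flaws set would enumerate whatever shortfalls the renderers can actually report):

```rust
/// Sketch of the proposed `Flaws` set: ways in which a renderer did not
/// implement the requested graphics options. A bitflags-style `u32` wrapper.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Flaws(u32);

impl Flaws {
    pub const NONE: Flaws = Flaws(0);
    pub const NO_ANTIALIASING: Flaws = Flaws(1 << 0);
    pub const NO_BLOOM: Flaws = Flaws(1 << 1);

    /// True if `self` includes every flaw in `other`.
    pub fn contains(self, other: Flaws) -> bool {
        self.0 & other.0 == other.0
    }
}

impl std::ops::BitOr for Flaws {
    type Output = Flaws;
    fn bitor(self, rhs: Flaws) -> Flaws {
        Flaws(self.0 | rhs.0)
    }
}

/// The render-test comparison then selects a different expected image
/// depending on the reported flaws (file names are illustrative).
fn expected_image(flaws: Flaws) -> &'static str {
    if flaws.contains(Flaws::NO_ANTIALIASING) {
        "expected-no-msaa.png"
    } else {
        "expected.png"
    }
}
```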

Correctly render inner transparent voxels in meshes

Right now, when using mesh-based rendering, only the outer surfaces of transparent shapes are rendered. This is a compromise to avoid ludicrous numbers of triangles.

The way I intend to solve this is to run a raytracer in the fragment shader, using the block texture data as input; the current volumetric transparency is sort of a 1-step-only prototype of this.
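At its core, such an in-shader raytracer is front-to-back alpha compositing of voxel samples along the ray. A CPU-side sketch of that accumulation (illustrative, not the actual shader or its texture sampling):

```rust
/// Front-to-back composite a sequence of (rgb, alpha) voxel samples along a
/// ray, as the planned fragment-shader raytracer would do with block texture
/// data. Colors are non-premultiplied; returns the accumulated (rgb, alpha).
fn march(samples: &[([f32; 3], f32)]) -> ([f32; 3], f32) {
    let mut color = [0.0f32; 3];
    let mut alpha = 0.0f32;
    for &(rgb, a) in samples {
        // Remaining transmittance times this sample's opacity.
        let weight = (1.0 - alpha) * a;
        for i in 0..3 {
            color[i] += rgb[i] * weight;
        }
        alpha += weight;
        if alpha >= 0.999 {
            break; // Early exit once effectively opaque.
        }
    }
    (color, alpha)
}
```

The early exit is what keeps this cheaper than emitting triangles for every interior surface: opaque or nearly opaque blocks terminate the march after a few samples.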

Mousedown/mouseup/hover interactions

Right now, all game and UI effects happen on mousedown events. Instead:

  • Characters should keep track of what their Cursor is and convert this into a behavior attached to the affected cube in the affected Space, which can have an "active" (button pressed) boolean.
  • Tools should be able to have effects continuously while the mouse button is pressed, which is executed by the cursor-behavior.
  • UI widgets' click handling (currently expressed as ActivatableRegion behaviors) should respond to this cursor behavior or be invoked by it, and can thereby have hover and mouseup responses.
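A minimal sketch of the cube-attached cursor behavior described above; all names and the tick-based interface are assumptions for illustration, not the existing Behavior API:

```rust
/// Hypothetical behavior attached to the cube the Cursor targets.
struct CursorBehavior {
    /// Whether the pointer button is currently pressed on this cube.
    active: bool,
    /// How long the button has been held, in ticks, so that tools can
    /// have continuous effects while pressed.
    held_ticks: u32,
}

impl CursorBehavior {
    fn new() -> Self {
        CursorBehavior { active: false, held_ticks: 0 }
    }

    /// Called on mousedown/mouseup; widgets could hook hover and mouseup
    /// responses into the transitions here.
    fn set_active(&mut self, pressed: bool) {
        if !pressed {
            self.held_ticks = 0;
        }
        self.active = pressed;
    }

    /// Called once per simulation step; returns whether a tool effect
    /// should fire this tick.
    fn step(&mut self) -> bool {
        if self.active {
            self.held_ticks += 1;
            true
        } else {
            false
        }
    }
}
```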

Active blocks

All is Cubes’ predecessor took an approach where blocks contained (in their voxel space) “circuits” made of blocks that were functional programs defining their behavior. This was neat because it stuck to the fundamental premise that all is cubes. But, it had disadvantages: blocks needed to have some region inside them large enough to contain the circuit. This would be unfortunate for blocks such as, say, a seedling that should grow eventually.

I haven't decided what exactly I want to do instead, but I know something needs to fill this space. One possibility would be to keep the circuits idea, but store the circuits as an AST and only put them in a space (separate from the block voxels) for editing.

Implement physics bodies known to `Space`s

Right now, the relationship between Character and its Space is one-directional: the Character defines a camera position and, by its own efforts, is affected by gravity and collision with the Space.

Instead, the Space should have Behaviors that act as attached physics bodies, optionally linked to Characters. This will enable:

  • Space's block manipulations being affected by the presence of characters (as obstacles or otherwise).
  • Objects, like moving dropped items, which are purely subordinate to the Space.
  • Rendering characters that aren't the player (NPCs or multiplayer PCs).
  • Particle systems that collide with the world. (I'm thinking that dropped items and particle systems might be very closely related things.)

The reason that these should be space Behaviors and not their own kind of thing is because Behaviors should in general be able to “occupy” certain regions of space and influence what happens in them. (On the other hand, we could also have a world where a Space contains Bodys in the same way it contains Blocks, and the Bodys can have Behaviors themselves.)

Tear this idea down (please)

Hi Kevin,

Apologies in advance for hitting you with a massively naive question,

I came across your project while researching ML/DL/RL training models, data structures and voxels... in the hope that proven models could either be encoded or compressed/encrypted as an in-game primitive or expression.

I've recently re-skilled as a dev but would be interested to find a novel way to have cubes be an abstraction of a more valuable dataset.

Hope that makes sense, and if you understand and it's possible - I'll have an excuse to learn rust! :)
