Comments (11)
I'm going to retag this as a feature; the library currently knows nothing about .vrm or other third-party variants of .gltf/.glb. But I think I agree that this should be made to 'just work' for the .vrm file extension (aside from not understanding the VRM extensions embedded in the file).
I suspect it's mostly a change in the CLI; for script-based usage, io.writeBinary works.
Related:
from gltf-transform.
Understood, but I don't think I agree with the suggestion. It creates open-ended issues:
- what happens when two documents are combined?
- what happens when a different I/O instance is used to read than to write?
- what about .writeJSON/.readJSON, which support both .glb/.gltf but don't get to see the magic header?
- if buffers are added to the file, must they be merged, or do we convert from .glb to .gltf?
I'm OK with making changes to ensure .vrm is handled like .glb, and perhaps allowing registration of other extensions... but I do think that unknown extensions should trigger warnings rather than attempting auto-detection, which I don't think can be supported consistently.
OK, understood! I think I looked in the wrong place; what you meant by "adding custom filetypes" is probably that the ".vrm is to be treated as a glb file" check would have to happen somewhere around here (at the end of the pipeline, not at the beginning).
Also, the library should probably log a warning when it's asked to write a file extension whose format (JSON vs. binary) it doesn't know, perhaps with some way to register new extensions and their format. Like:
import { Format } from '@gltf-transform/core';
io.registerFileSuffix('vrm', Format.GLB);
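A minimal sketch of how such a registry might work. Note that registerFileSuffix and formatForPath are hypothetical names proposed in this thread, not the library's current API:

```typescript
// Hypothetical sketch of a suffix-to-format registry; not the real PlatformIO API.
enum Format {
  GLTF = 'GLTF', // JSON container (.gltf)
  GLB = 'GLB',   // binary container (.glb)
}

class SuffixRegistry {
  private formats = new Map<string, Format>([
    ['gltf', Format.GLTF],
    ['glb', Format.GLB],
  ]);

  /** Register an additional suffix, e.g. 'vrm' as a GLB-style container. */
  registerFileSuffix(suffix: string, format: Format): this {
    this.formats.set(suffix.toLowerCase(), format);
    return this;
  }

  /** Look up the format for a path; undefined means "warn, don't guess". */
  formatForPath(path: string): Format | undefined {
    const suffix = path.split('.').pop()?.toLowerCase() ?? '';
    return this.formats.get(suffix);
  }
}
```

With something like this in place, registering 'vrm' as Format.GLB would make .vrm behave like .glb, while a truly unknown suffix returns undefined and can trigger a warning rather than auto-detection.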
I think another option could be if the check "is this a binary or a text file" is based on the GLB magic header instead of relying on file extensions (which could possibly not exist at all for entirely different reasons).
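As a sketch, that binary check could look at the GLB magic (the ASCII bytes "glTF", i.e. the little-endian uint32 0x46546C67 defined by the glTF 2.0 spec) instead of the path:

```typescript
// Sketch: classify content by the GLB magic header rather than the file extension.
// Per the glTF 2.0 spec, a binary .glb file begins with the ASCII bytes "glTF".
function looksLikeGLB(bytes: Uint8Array): boolean {
  return (
    bytes.length >= 4 &&
    bytes[0] === 0x67 && // 'g'
    bytes[1] === 0x6c && // 'l'
    bytes[2] === 0x54 && // 'T'
    bytes[3] === 0x46    // 'F'
  );
}
```

A JSON .gltf file starts with whitespace or '{', never "glTF", so the two cases don't collide.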
True – that'd work for the CLI, though not in the script-based I/O case, which may not read from disk. I'm not sure whether something like ...
gltf-transform cp blob_no_extension other_blob
... is worth trying to guarantee the same binary vs. JSON output; I think I'd prefer to log a warning that the library doesn't know what you mean.
which may not read from disk
Maybe I'm misunderstanding what you mean – the magic header for GLB ("glTF") is there whether it's just a blob or read from disk.
In this scenario...
const document = await io.read('path/to/input.unknown');
...
await io.write('path/to/output.unknown', document);
... there is nothing in document to tell the I/O class whether the original source on disk was JSON or binary, so no guarantees can be made that it will be written in the original format without user/application input.
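A toy illustration of that point (a simplified model, not the real Document class or the real GLB chunk layout): a JSON source and a binary source parse to identical in-memory state, so the original format is unrecoverable afterwards.

```typescript
// Toy model only: both readers produce the same object, so nothing in the
// parsed result records whether the source was .gltf (JSON) or .glb (binary).
const json = '{"asset":{"version":"2.0"}}';

// A minimal fake "binary" container: 4-byte magic "glTF" followed by the JSON
// payload (the real GLB layout has a 12-byte header and chunked sections).
const binary = new Uint8Array([0x67, 0x6c, 0x54, 0x46, ...new TextEncoder().encode(json)]);

function readFromJSON(text: string): unknown {
  return JSON.parse(text);
}

function readFromBinary(bytes: Uint8Array): unknown {
  // Strip the 4-byte magic, then parse the embedded JSON.
  return JSON.parse(new TextDecoder().decode(bytes.subarray(4)));
}
```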
Is that what @hybridherbst means: take note of the original file header in the I/O class and, if using write and the format is unknown, use that information to write as either JSON or binary?
Do I understand correctly that the complexity here comes from the fact that it looks at the string path only, and that it's not easy to add a "look at the first 4 bytes to check what it actually is" step, since the actual file reads currently happen later, after that decision?
From my perspective the complexity comes from trying to keep a persistent knowledge of what the original source-on-disk of a particular Document might have been. It's 'easy' at an application level, if your pipeline looks exactly like this:
const document = await io.read('in.unknown');
// ... make some edits ...
await io.write('out.unknown', document);
But at the library level, I don't want to make an assumption that the pipeline above is what's happening. Documents can be merged or cloned, applications might use different I/O classes for reading and writing. I think it's too much magic, when the alternative is much clearer and not particularly complex:
import { writeFile } from 'node:fs/promises';
await writeFile('out.unknown', await io.writeBinary(document));