anchors's Issues

Should anchor creation be used in VR?

This topic was included in the explainer as an open question. It has also come up in the community group calls a couple of times, and I am not sure whether any conclusion was correctly captured, so I decided to move it to an issue instead. Please feel free to comment on it.

Rationale: Anchors could never update in VR scenarios, but they provide positive features to the overall WebXR spec. On the one hand, they may make developers aware of the importance of using anchors in general while developing for XR. They may also provide more functionality once sharing and persistence are proposed in the spec. Would it be interesting to use anchors in VR too? Should they be available in a VR session?

Does XRAnchor.detach() imply that the unattached anchor lives on?

Agreed on having an explicit method for destroying an anchor synchronously. However, I also agree with @blairmacintyre that the name detach() implies to me that this newly-detached object will live on in some meaningful unattached way, when in fact there's no longer anything meaningful you can do with that anchor.

I wonder if destroy(), drop(), forget() or simply stopTracking() (to align with trackedAnchors) makes it clearer when you'd call this method: when you're no longer interested in that anchor at all.

I'll make a specific issue for bikeshedding detach() so those with more web API experience than me can call out similar cases in other web APIs.

Originally posted by @thetuvix in #10 (comment)

Small issue on `README.md` and `explainer.md`?

Hi, just some questions related to docs.

  1. README.md
    Currently, the link for the Explainer in README.md points to the master branch's copy. Maybe update it to point to the main branch's copy, or use a relative path?

  2. explainer.md
    In the last part of the Explainer, in the sample code for updating anchors, I think there's a typo: shouldn't xrFrame be frame instead?

  ...
  
  for(const anchor of trackedAnchors) {
    // Query most recent pose of the anchor relative to some reference space:
    const pose = xrFrame.getPose(anchor.anchorSpace, referenceSpace);
  }

  ...

No way to detect if a feature is enabled for a session.

Currently, there is no way to detect whether the anchors feature has been enabled for the session.

Some other APIs provide a way to detect a feature; e.g. plane-detection throws an exception when trying to access the detectedPlanes property, which I've argued is not good API design here: immersive-web/real-world-geometry#30

Regardless of whether the anchors feature has been enabled on the session, XRFrame has a property trackedAnchors that always returns a Set-like object.

For consistency across APIs, it would be best not to return a Set, but either to throw an exception as plane-detection does, or, better, to return null if the feature is not available on the session.

That way, the developer has a graceful way to detect whether the anchors feature is enabled.
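
A minimal sketch of what graceful detection could look like under the null-returning proposal above (hypothetical behavior, not current spec text):

function onXRFrame(timestamp, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  // Hypothetical: trackedAnchors would be null when 'anchors' was not
  // enabled for the session, instead of an always-present empty set.
  if (frame.trackedAnchors === null) {
    return; // feature unavailable - fall back to unanchored content
  }

  for (const anchor of frame.trackedAnchors) {
    // ...update anchored content as usual...
  }
}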

Create an anchor from hit test result should not use native origins

https://immersive-web.github.io/anchors/#create-an-anchor-from-hit-test-result

Let anchor native origin be a new native origin returned from the device’s call to create a new anchor using pose, interpreted as if expressed relative to hitTestResult’s native origin and attached to nativeEntity, at the frame’s time.

Create new anchor object anchor using anchor native origin and session.

Native origins are static transforms, so this code essentially freezes the anchor at the hit test result. I imagine the intent is to track a native entity.

What should be done here is that:

  • An anchor should be able to hold either a native origin or a native entity
  • anchorSpace is just defined as an XRSpace with an "anchor" slot set to the anchor. If you want, you can define this as XRAnchorSpace instead for ease of spec use.
  • Define the native origin of XRSpaces that have "anchor" set to be:
    • the native origin, if set
    • the native origin of the native entity otherwise

This is mostly spec pedantry and will not affect how the API is used.

Though if you're going the route of adding XRAnchorSpace, it's worth considering whether XRAnchorSpace and XRAnchor can be merged (filed #47).

Deleted Anchors still present in frame.trackedAnchors for that frame

This is in Samsung Internet Beta (Chromium M90). I'm not sure whether this is a spec issue or an implementation issue, but it is something I am encountering.

i.e.

anchor.delete()

trackedAnchors.forEach(function (anchor) {
  // Uncaught DOMException: Failed to read the 'anchorSpace' property from 'XRAnchor': Unable to access anchor properties, the anchor was already deleted.
  const anchorPose = frame.getPose(anchor.anchorSpace, refSpace);
});
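
Until the spec clarifies whether deleted anchors may linger in trackedAnchors for the rest of the frame, an app-side guard works around it; a sketch (refSpace is an assumed, already-acquired reference space):

const deletedAnchors = new Set();

function deleteAnchor(anchor) {
  deletedAnchors.add(anchor); // remember locally before deleting
  anchor.delete();
}

function onXRFrame(timestamp, frame) {
  frame.session.requestAnimationFrame(onXRFrame);
  frame.trackedAnchors.forEach(function (anchor) {
    if (deletedAnchors.has(anchor)) return; // skip stale entries this frame
    const anchorPose = frame.getPose(anchor.anchorSpace, refSpace);
  });
}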

Widen the scope to cover explainer’s concept number 2

One of the concepts of anchors listed in the Scope section is "An object with a pose relative to a specific real world object the system has been able to identify and track." Should we extend the scope of the explainer to cover that concept as well, now that we have started discussions around real world geometry (e.g. the plane detection API)?

Discussion for issue #4 is already touching a bit upon that by mentioning RWG-related concepts.

Anchor reference space ancestry

While reading the explainer and documentation I didn't see any indication of ancestry for the anchor parent reference space.

If a future goal is to facilitate anchors between AR and VR, then they would have to exist within the same reference space. However, if they are to facilitate localization into many maps, then you need either a reference with an offset to that map, or you only allow anchors to be shared between reference spaces with the same id.

If prior reference spaces mutate due to merges, or there is a common reference offset due to successfully localizing into many spaces, wouldn't you need some form of ancestry to facilitate the sharing of anchors?

You may not currently be localized into a shared space, but you may eventually localize into some edge node map that allows for an eventual shared offset.

No way to get XRPose for a just-created Anchor.

The current spec defines the createAnchor method on XRFrame, which returns a promise.
When a new anchor has been created, there is no way at that moment (in the callback) to get the pose of the anchor, as the XRFrame is outdated.

Taking into account that the spec states that anchors might persist between sessions, I would assume that trackedAnchors is subject to change during a session, similar to what the plane-detection spec does with the detectedPlanes property.

It is useful to know when an anchor is created or fails to be created (as well as the reasons for a failure). But it might be confusing to a developer that the anchor is not "usable" in the callback, but only after the next XRFrame is available.
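
A sketch of the workaround this currently forces on developers: resolve the promise, stash the anchor, and defer the first getPose() to a later frame (refSpace is assumed to be already acquired):

const readyAnchors = [];

frame.createAnchor(anchorPose, refSpace)
  .then((anchor) => {
    // The anchor exists now, but the XRFrame that created it is already
    // outdated, so its pose can only be queried from the next frame on.
    readyAnchors.push(anchor);
  })
  .catch((error) => console.error('Could not create anchor:', error));

function onXRFrame(timestamp, frame) {
  frame.session.requestAnimationFrame(onXRFrame);
  for (const anchor of readyAnchors) {
    // May still be null while tracking is being established.
    const pose = frame.getPose(anchor.anchorSpace, refSpace);
  }
}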

addAnchor() may need XRFrame

In the explainer, XRSession.addAnchor() takes two arguments: a pose and the reference space the pose is relative to.

I think .addAnchor() may internally need to convert the pose to another space for the underlying AR system. In the newest WebXR API spec, pose handling is bound to a frame (XRFrame.getViewerPose()/getPose()). So I speculate that .addAnchor() may need an XRFrame.

If that's true, there may be two options.

  1. Add a third frame (XRFrame) argument: XRSession.addAnchor(pose, referenceSpace, frame).
  2. Move .addAnchor() to XRFrame: XRFrame.addAnchor(pose, referenceSpace).

Plus, with an XRFrame these sentences in the explainer can be a bit simpler, because we may omit the case where .addAnchor() is called outside of an animation callback.

In the case of anchors, the creation of the anchor should happen in the next frame and before the request animation frame of the session is called, if called during an animation frame callback. If called outside of an animation callback, the promise might resolve before the next request animation frame, but may not.
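
The two options as call shapes (both hypothetical as written; for what it's worth, the spec eventually settled on a frame-bound method, XRFrame.createAnchor(pose, space)):

// Option 1: keep the method on the session, pass the frame explicitly.
session.addAnchor(pose, referenceSpace, frame);

// Option 2: move the method onto the frame, binding it implicitly.
frame.addAnchor(pose, referenceSpace);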

Could anchors represent poses that change their relationship with the understanding of the world over time?

This issue was originally part of the Open Questions section in the explainer, but I decided to move it to an issue so some specific discussion can happen around it. I think this is related to the concept of trackables, which has also been captured in a different issue, so if we all feel this topic is correctly covered there we should close this one. Let me know what you think.

Rationale: Consider a helicopter landing on a platform; it may not need to be represented by an anchor, as the platform could already be represented by one. But when the helicopter starts to fly, it is no longer influenced by the platform, so it could need its own anchor to correctly reflect its pose changes. Then the helicopter could land on a different platform, be attached to it, and no longer require an anchor to represent its pose.
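
Sketched against the current API, the hand-off would be managed by the app: while landed, the helicopter's pose hangs off the platform's anchor; at takeoff it gets its own anchor (helicopter and its fields are illustrative):

async function onTakeoff(frame, helicopter, refSpace) {
  // No longer influenced by the platform: give the helicopter its own
  // free-space anchor at its current pose.
  helicopter.anchor = await frame.createAnchor(helicopter.pose, refSpace);
}

function onLanding(helicopter) {
  // Attached to a platform again: its pose follows the platform's anchor,
  // so the dedicated anchor is no longer needed.
  helicopter.anchor.delete();
  helicopter.anchor = null;
}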

What is the expectation from the app when anchor tracking is lost and regained later?

for(const anchor of previousFrameAnchors) {
  if(!trackedAnchors.has(anchor)) {
    // Handle anchor tracking loss - anchor was present
    // in the previous frame but is no longer tracked.
  }
}

When the underlying system loses tracking of an anchor, should the app continue to preserve the XRAnchor object, so it can re-associate it with the scene node it was associated with once the system regains tracking of the anchor?

Can we add a scenario to explainer.md to provide guidance in such cases?
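
One possible pattern, sketched: the app keeps the XRAnchor-to-scene-node association alive across the loss, hides the node while untracked, and resumes updates when the anchor reappears in trackedAnchors (the scene node API and refSpace are illustrative):

const nodeByAnchor = new Map(); // survives tracking loss on purpose

function onXRFrame(timestamp, frame) {
  frame.session.requestAnimationFrame(onXRFrame);
  for (const [anchor, node] of nodeByAnchor) {
    if (!frame.trackedAnchors.has(anchor)) {
      node.visible = false; // lost: hide, but keep the association
      continue;
    }
    const pose = frame.getPose(anchor.anchorSpace, refSpace);
    if (pose) {
      node.visible = true; // regained: same XRAnchor, same scene node
      node.setTransform(pose.transform);
    }
  }
}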

Consider merging XRAnchor and its space into an XRAnchorSpace

This isn't something I feel strongly about, but overall it might be better to have an interface of

interface XRAnchorSpace: XRSpace {
    void delete();
}

since spaces really are just representations of tracked objects. We do something similar in the hand input API, where the joints are just XRJointSpaces.

Apologies for posting this rather late in the process; this occurred to me after I noticed #46.
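
Under the proposed merge, usage would collapse to something like this sketch (hypothetical; XRAnchorSpace does not exist in the current draft):

// createAnchor() would resolve directly to an XRAnchorSpace.
const anchorSpace = await frame.createAnchor(pose, refSpace);

// No .anchorSpace hop - the space itself is the anchor handle.
const anchorPose = frame.getPose(anchorSpace, refSpace);

// Cleanup lives on the space too.
anchorSpace.delete();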

Should move discussion and API for "creating anchor from hitTestResult" to hit-test API

Because the hit-test API was considered before the anchors API, the overlapping detail ("create anchor from hit test result") lives in the anchors explainer.

It should be in the hit-test API explainer, and the hit-test API should depend on the anchors API - the reverse of what we have now.

The reason for this is one of future clarity: as more APIs (image recognition, persistence, plane/geometry recognition and tracking) possibly use Anchors as a way of associating "stuff" with the physical world, the particular details of how anchors are created and used should be in those API discussions.

The Anchor explainer should describe the generic behavior of anchors, and keep the "create an Anchor from a pose relative to a space."

IDL Proposal: Events for anchor creation and removal.

Should anchor creation and removal be events too? The rationale is that anchors could appear or disappear without an explicit user request. Imagine anchors attached to trackables, or anchors that are detected passively (while the system is doing world understanding), like what ARCore calls a cloud anchor. I think this requires some other API ergonomics (even cloud anchors need to be created explicitly), but I think it is worth discussing separately, as anchors attached to trackables could be a problem if the world understanding (the mesh, the plane) changes or disappears.
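
A sketch of what that surface might look like; the event names and fields here are invented for illustration only:

// Hypothetical events - nothing like this exists in the current draft.
session.addEventListener('anchoradd', (event) => {
  // e.g. a cloud anchor or trackable-attached anchor appeared passively.
  showContentFor(event.anchor); // assumed app-side helper
});

session.addEventListener('anchorremove', (event) => {
  // The backing world understanding (mesh, plane) changed or vanished.
  hideContentFor(event.anchor); // assumed app-side helper
});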

UAs must retain expando XRAnchor attributes in future XRFrame.trackedAnchors sets

Given this approach of XRAnchor only representing explicit app-created freespace anchors, having a list of tracked anchors seems harmless to me.

XRFrame.trackedAnchors does feel lower value than is implied by the code in the anchors explainer, as there is no per-anchor ID or other data beyond the anchorSpace to tie a given XRAnchor back to any set of app content. The app itself will need to remember for each anchored scene object it creates which anchor that object is attached to... at which point the app could just loop over its own list of those anchors and poll for isLost or such.

Note that the for loop over trackedAnchors at the bottom stops after getting each XRAnchor's pose - it's not clear what the next line of code would be for the app to make productive use of that pose, unless it was already maintaining an equivalent trackedAnchors map from scene node to anchor itself:

let previousFrameAnchors = new Set();

function onXRFrame(timestamp, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  const trackedAnchors = frame.trackedAnchors;

  for(const anchor of previousFrameAnchors) {
    if(!trackedAnchors.has(anchor)) {
      // Handle anchor tracking loss - `anchor` was present
      // in the previous frame but is no longer tracked.
    }
  }

  for(const anchor of trackedAnchors) {
    // Query most recent pose of the anchor relative to some reference space:
    const pose = frame.getPose(anchor.anchorSpace, referenceSpace);
  }

  previousFrameAnchors = trackedAnchors;
}

The primary way I could see an app productively using trackedAnchors is if it set its own additional attribute on each XRAnchor to store the list of its root scene objects whose poses need to be updated each frame:

  for(const anchor of trackedAnchors) {
    // Query most recent pose of the anchor relative to some reference space:
    const pose = frame.getPose(anchor.anchorSpace, referenceSpace);

    for(const sceneNode of anchor.attachedSceneNodes) {
      // Adjust the pose of each scene node attached to this anchor.
      sceneNode.setTransform(pose.transform);
    }
  }

This seems like a reasonable pattern, although it would rely on a guarantee that the UA will return the same XRAnchor instance on subsequent frames when XRFrame.trackedAnchors is enumerated, rather than a new but equivalent XRAnchor instance that has lost that extra data.

I'll file a new issue around specifying that the UA must retain any extra data on an XRAnchor when it's enumerated moving forward.

Originally posted by @thetuvix in #11 (comment)
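
For completeness, a sketch of the expando pattern the quoted code relies on, set at creation time; attachedSceneNodes is app-defined, and it is exactly the state the UA would need to preserve across enumerations:

frame.createAnchor(pose, refSpace).then((anchor) => {
  // App-defined expando property. Only useful if trackedAnchors hands
  // back this exact XRAnchor instance on later frames.
  anchor.attachedSceneNodes = [sceneNode];
});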

Terminology: Anchors, Trackables, etc

I'm creating this issue to document the differences in terminology/semantics of things like Anchors & Trackables between various platforms, and to start a discussion on what terminology/semantics we ultimately want to adopt here. The main platforms I am looking at are ARCore, ARKit, Windows Mixed Reality (WMR), and Vuforia (which attempts to provide a cross-platform API that supports the previous three platforms):

ARCore

  • Trackables represent objects that can be individually tracked and which Anchors can be attached to
  • Anchors represent fixed locations relative to a particular Trackable
  • Both Anchors and Trackables have a separate TrackingState (paused, stopped, or tracking)
  • Plane/Point tracking is represented as subclasses of Trackable
  • Each hitTest() result includes any Trackable that was hit
  • Anchors can be created directly from hits, allowing, for example, an Anchor that is attached to a Trackable at the point of contact (with the Anchor independently being updated by the system)

ARKit

  • Anchors can be any location relative to the session coordinate system
  • Anchor subclasses may (or may not) conform to a Trackable protocol, which exposes:
    • "isTracked": if true, the pose is valid, meaning the Anchor is actively being tracked
  • Presumably, if an Anchor subclass does not implement the Trackable protocol, one can assume that the Anchor represents a fixed location in the world
  • Plane/Image/Face recognition/tracking are represented as subclasses of Anchor, though not all of these are necessarily Trackable:
    • Plane/Image Anchors are not Trackable
    • Face Anchors are Trackable
    • Image Anchors could easily become Trackable in future ARKit updates, simply by adding an “isTracked” property
  • Each hitTest() result includes any Anchor that was hit (whether or not it is Trackable)
  • There is no way to create an Anchor that is attached to another Anchor

WMR

  • Anchors are fixed locations in the world which can be persisted between sessions, even after the device has been shut down. Anchors can also be shared with other devices.
  • Tracked objects, surface geometry, and hit tests are handled completely separately from any Anchor concept
    • Anchor and surface mesh types are unified through a "CoordinateSystem" property

Vuforia

  • Anchors can be created from a pose (world coordinates) or from a hit test result
  • Anchors are a subclass of Trackable
  • Image/Object/Model recognition/tracking are handled as subclasses of Trackable (not as subclasses of Anchor)

It seems that there are basically three ways of defining anchors:

  1. fixed location in the world, period (WMR)
  2. fixed location relative to something else in the world (which might not be fixed) (ARCore, Vuforia)
  3. any location in the world that is maintained by the system, fixed or not (ARKit)

So given these differences, here are some things to discuss:

  • Do we want to consider object recognition/tracking and surface geometry as being potential anchors?
  • If anchors are fixed locations, do we want a separate XRTrackable? Should XRAnchors be a special type of XRTrackable?
  • How do we distinguish between trackable and non-trackable things (things that can be recognized but not continually tracked)? Does it make sense for an XRTrackable to represent something that can only be recognized once but isn't actually trackable (e.g., setting its "trackingState" to "paused" or "unknown")?
  • Do we want to support the use-case of having anchors be relative to movable things (e.g., other anchors/trackables)? This is useful for placing anchors at hit test intersections on tracked/movable objects.
  • Do we want to consider an API for persisting/restoring anchors? If so, it may be confusing if only some types of anchors can be persisted while others cannot.

Align anchors and RWG (e.g. planes)

We should consider aligning real-world geometry APIs with anchors APIs. Whatever we think is the right model to communicate anchor updates / tracking loss to the application is likely going to be the right model to communicate geometry updates / tracking loss to the application. The discussion about it is already happening in issue #12, and is also touched upon in the RWG repository in issues immersive-web/real-world-geometry#4 and immersive-web/real-world-geometry#5. For example, a document about possible ways to inform apps about plane removals is available here.

Top view map of SLAM

When running SLAM in WebXR, is it possible to obtain a top-view plane map of the objects in the environment from the SLAM result?

Take into account performance implications of creating anchors

Creating anchors can have performance implications from the perspective of the underlying AR framework (ARCore hints at this, for example, here and here). Do we have any data related to anchor performance that we could use when coming up with the anchor design? @cabanier, @thetuvix - do you have any numbers that you could share?

If the performance hit of creating an anchor is significant, we might want to consider imposing some limit on the number of anchors that the app could create.
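
Even without a spec-imposed limit, an app can budget its own anchor count; a sketch (MAX_ANCHORS is an arbitrary app-chosen value, not anything from the spec):

const MAX_ANCHORS = 16; // app-chosen budget, not a spec constant
const liveAnchors = new Set();

async function tryCreateAnchor(frame, pose, refSpace) {
  if (liveAnchors.size >= MAX_ANCHORS) {
    return null; // reuse or delete an existing anchor before adding more
  }
  const anchor = await frame.createAnchor(pose, refSpace);
  liveAnchors.add(anchor);
  return anchor;
}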

Update the explainer to reflect changes in WebXR spec

The IDL proposal section seems to be using outdated concepts from WebXR. We should probably use XRPose to represent the position and orientation of the anchor, and the actual pose should not be stored on the anchor - it should be obtained by calling XRFrame.getPose() with an XRSpace. The same applies to the XRCoordinateSystem passed in for anchor creation - it should probably be an XRReferenceSpace. The code examples should probably be updated as well.
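
A sketch of the updated shape being asked for, using the current spec concepts:

// The pose is not stored on the anchor; it is queried every frame
// through the anchor's XRSpace.
const pose = frame.getPose(anchor.anchorSpace, referenceSpace);
if (pose) {
  placeContentAt(pose.transform); // assumed app-side render hook
}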

Experimental implementation questions about integrating hit-testing and anchors

I'm working to update Mozilla's WebXR Viewer to the latest draft of the WebXR Device API. That works well, thanks in large part to @jsantell!

Since we use the WebXR Viewer mostly as a test platform for AR experiments, I also need to implement hit testing and anchors, backed by ARKit.

Question 1:

ARKit accepts only normalized x,y screen coordinates for hit-testing, not arbitrary origins and directions, so we need some pattern for requesting a hit test with that data. In our previous API we had two methods on the XRSession: one for requesting with an arbitrary origin and direction (which always failed) and one for requesting using normalized screen coordinates.

What are others thinking about to solve this problem?

Question 2:

For this version, I'm just implementing the simplest possible anchor, which is a pose that is updated over time using ARKit data. Right now I have the anchor class extend XRCoordinateSystem and it is returned as an attribute on XRHitResult.

Is that in line with others' thinking? Is XRFrameOfReference more appropriate?

Ensure algorithms return something

I noticed a few times that algorithms were supposed to return something, e.g., a promise, but didn't.

It might help to re-read the algorithms and check whether you could write an implementation from scratch given what's written down.

Encourage developers to pass in an XRHitResult rather than a raw matrix

Ideally, developers always create anchors based on an XRHitResult object if they got the location using that API, since the hit result may have a connection to a trackable object. We can create much higher-quality anchors if we have access to the underlying trackable object, and we want to set developers up for success so that the best and easiest ways of using the API are the same.

This is tricky given that we want to support arbitrary poses - how do we ensure that developers use the right API when there are two options?
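
For reference, a sketch of the two creation paths as they exist in the current draft; the hit-test path is the one to encourage, since it lets the UA attach the anchor to the hit trackable:

// Preferred: let the hit test result create the anchor, so the UA can
// attach it to the underlying trackable (plane, mesh, ...).
const anchorFromHit = await hitTestResult.createAnchor();

// Free-space fallback: an arbitrary pose, with no trackable available.
const anchorFromPose = await frame.createAnchor(pose, referenceSpace);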

Explainer seems to under-explain API

I'm coming here from the TAG review thread.

The "API details" section in the Explainer is a little hazy on the arguments for the two createAnchor() methods - arguments are alluded to but not listed in the method signatures, and the argument types aren't spelled out.

Also, this seems to be the first mention of "anchor space". What is an anchor space? What is its type?

Possibly relatedly, how would I use an anchor to position objects relative to the anchor?

Finally, why is createAnchor() asynchronous? When might it reject?
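
For the positioning question, a sketch against the shipped API shapes (model is an assumed matrix-driven scene node): the anchor exposes an anchorSpace, and getPose() against it yields a fresh transform each frame.

const anchor = await frame.createAnchor(initialPose, referenceSpace);

function onXRFrame(timestamp, frame) {
  frame.session.requestAnimationFrame(onXRFrame);
  const pose = frame.getPose(anchor.anchorSpace, referenceSpace);
  if (pose) {
    // Render content at (or offset from) the anchor's current pose.
    model.matrix.fromArray(pose.transform.matrix); // assumed scene node
  }
}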

Question: will the spec support other APIs creating Anchors and handing them back?

The current spec, as written, is very explicit about not addressing the need to have "things" (that can be sensed and tracked by the underlying platform) associated with Anchors. It focuses on providing an API to create Anchors from hit tests and absolute poses.

But eventually we will want to have "things" tracked in the world, and Anchors would be the right way to do that.

My reading of the proposed spec doesn't rule this out, it simply makes it beyond the current scope. Which is good, and fine.

I am asking here: am I reading this correctly? Is there something in the spec that I'm not catching that would prevent Anchors from being created by some future API (or extension to this API)? I just want to make sure.

Allow a limit on the number of active and persistent anchors

Currently the spec does not mention any limit on the number of anchors a site can create.
On Quest, there is significant additional load if a lot of anchors are active, so we'd like to limit how many can be in a scene at a time.
For persistence, the browser can only create a limited set of persistent anchors across all origins. We'd like the spec to be updated so the browser is allowed to delete older anchors to make space for new ones.

/agenda Allow a limit of the number of active and persistent anchors

Scope

In the explainer you say

The main difference is that while ARKit uses the concept of an anchor to represent the identified real world object (that happens to have a pose) while this explainer uses the term anchor to only represent poses. Basic arbitrary ARAnchor (1) in ARKit would be equivalent of the concept of an anchor in this explainer. The representation of the real world objects is out of the scope of this explainer. This differentiation between the concept of an anchor in ARKit and in the scope of this explainer is subtle but important.

I agree we aren't defining more than the basic anchor, but this seems too strong. Would it be possible to say something along the lines that these other anchors are subclasses of the basic anchor in ARKit, and that we anticipate other APIs will use and extend anchors in the future? Because that's what I expect.

In fact, in the very next section you say just that. So these two sections should be massaged for consistency.

Attach plane models to surfaces for occlusion and collision detection

Is it possible to attach virtual plane models to surfaces for occlusion and collision detection against other virtual models?
If so, how can this goal be achieved?
An example:

  • I place a virtual kitty in AR space near a real table, and then I move somewhere that causes my view of the kitty to be blocked by the real table.
  • What I expect to see is only part of that kitty, but what I actually see is the whole virtual object.
    Thanks for your time and help!

Fixed anchors independent of world understanding

From the explainer: "Anchors update as real-world understanding improves". We might want to leave the option for such updates to the developer. If the experience is about placing objects relative to each other, having one object's pose update while the second one doesn't might mess up the scene's representation (think of assembling a puzzle from different parts that each have a pose).
This problem might be solved by setting an initial pose and then adding objects relative to it, but there might be use cases where that wouldn't work.

Anchors should have a unique UID

In my experiments with implementing anchors, I've added a unique string id to each anchor, which makes it easy to use JavaScript Maps to keep track of the anchors I have.

I would like a property anchor.uid on the anchor to retrieve a unique string for each object.

(In my implementation on iOS, I either use ARKit's UIDs for its anchors, or generate my own for the synthetic anchors I create on top of ARKit's.)
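
A counterpoint sketch: if the UA guarantees stable XRAnchor identity across frames (see the expando discussion above), the anchor object itself can key a Map, with no uid needed:

const contentByAnchor = new Map(); // keyed by XRAnchor object identity

const anchor = await frame.createAnchor(pose, refSpace);
contentByAnchor.set(anchor, kittyNode); // kittyNode: assumed scene object

// Later, while enumerating:
for (const tracked of frame.trackedAnchors) {
  const node = contentByAnchor.get(tracked); // works only if identity is stable
}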

Persisting anchors

Hi,

I was trying to see if it would be possible to serialize an anchor to JSON so that we could later resume the session, but I always get an empty object.

Any thoughts?

thanks.
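
An XRAnchor carries no serializable state, which is why JSON.stringify() yields an empty object. A later persistence extension (it postdates this issue and ships only in some browsers, so treat this as a hedged sketch) instead trades anchors for opaque UUID handles:

// Persist: exchange the live anchor for a UUID string the page can store.
const handle = await anchor.requestPersistentHandle();
localStorage.setItem('myAnchor', handle);

// In a later session: exchange the stored UUID back for a live XRAnchor.
const restored = await session.restorePersistentAnchor(
    localStorage.getItem('myAnchor'));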

Can a model freely move in AR space?

  1. Can a model freely move in AR space?
  2. Is it possible to control a virtual model moving in AR space by just changing the pose of the anchor originally attached to the model?
  3. If a moving virtual model goes out of frame for a while, can we regain the model as we come close to it, or as it comes close to us?
    Thanks for your help!
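
On question 2: anchor poses are system-controlled and read-only, so the usual sketch keeps the anchor fixed and composes the model's own motion on top of it (matrix math shown three.js-style, purely as an assumption about the app's scene graph):

const pose = frame.getPose(anchor.anchorSpace, refSpace);
if (pose) {
  // The anchor pose comes from the system; the model's motion from the app.
  model.matrix.fromArray(pose.transform.matrix);
  model.matrix.multiply(localAnimationMatrix); // assumed THREE.Matrix4
}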
