hubs-foundation / hubs
Duck-themed multi-user virtual spaces in WebVR. Built with A-Frame.
License: Mozilla Public License 2.0
The current audio check should be improved:
The 2D HUD work was implemented before we had a React component tree for the main UI flows; it should be updated to use React. The user's name should be read from the store, and the mic level from the micLevel state at the root of the component hierarchy.
A design pass is also needed to unify the look and feel of the 2D HUD with the entry flows.
We also seem to be connecting to Janus twice somehow, which may be the cause of the audio output issue.
There is currently no way to mute yourself on Daydream.
I'm not sure, but I believe hand gestures still aren't fully implemented on Vive. Please close if that's not the case!
TBD what exactly this means, but we should have a sane UX for screen sharing that doesn't require a hotkey. I think the model where:
is a good start. Not needed for MVP launch.
If all your recording devices are disabled or unplugged, the client just stalls. We should be able to detect this condition and continue after notifying the user.
We should probably get this fixed before the MVP launch, given the surface area of the platform, etc.
Placeholder for discussion around new environments for MVP
When users leave the room, their avatars sometimes remain. Likely call stack: https://mozilla.slack.com/files/U723XJLDP/F835ECFCY/image.png
Still TBD based on marketing etc.
We noticed significant frame drops when users joined during the demo.
Need to be able to pick an avatar in the UI.
We should ensure that if you don't have a GearVR controller, you can still use the touchpad/gaze cursor. (However, this should be timeboxed to a day or two's effort for now if it ends up being particularly difficult to support.)
Users should be able to block others based upon IP address
We have a few options for this:
Create a component that uses THREE.Sky to build a skybox at runtime and render it to a cubemap. This could be an awesome component for us and the A-Frame community.
Find a suitable skybox and use aframe-cubemap-component.
Zombie avatars remain in the room.
Pending #85 being completed, we need to add interactables as a "layer" to the environments, since they need to be laid out properly. The main thing to decide here is how to deal with template tags for the bundle; this will also depend on the glTF component work, since we will need to attach those components.
see: microsoft/WSL#2448 for issue and workaround
We should limit room capacity to some reasonable amount (proposal: 100).
It's unclear whether this should be enforced on the server or the client.
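A minimal sketch of the check, assuming server-side enforcement and the proposed cap of 100 (canJoin and MAX_OCCUPANCY are illustrative names, not an existing Hubs API):

```javascript
// Sketch of a room-capacity check, assuming the proposed cap of 100.
// canJoin is an illustrative name, not an existing Hubs API.
const MAX_OCCUPANCY = 100;

function canJoin(roomOccupantCount, max = MAX_OCCUPANCY) {
  // Reject joins once the room is at capacity; the client should
  // surface a "room is full" message instead of stalling.
  return roomOccupantCount < max;
}
```

The same predicate could run client-side as a pre-check, with the server as the authoritative enforcer.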
No errors are emitted. However, it looks like it's getting hung up in naf-janus-adapter's createPublisher() method.
We need to land on the proper mechanism(s) for device pairing on standalone devices like the Oculus Go, and for linking your desktop to your mobile device for cases like screen sharing. Some relevant links:
https://developers.google.com/nearby/messages/overview
https://youtu.be/Acdu2ZdBaZE
https://youtu.be/1a0wII96cpE
http://smus.com/ultrasonic-networking/
Reproduce by just doing the GearVR flow.
Currently, your own hand models, as well as every other avatar's hand models, are visible even when no tracked controllers are present. These models should be invisible until the tracked controllers are active.
For initial OKR metrics tracking and error handling, we should transition to using Google Analytics (which is sanctioned by Mozilla).
The current "dual axis" controls on mobile work but are not ideal. We should implement a control scheme where a single-finger drag orbits the camera and a two-finger pinch moves you forward and backward.
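A sketch of the pinch half of that scheme, mapping the change in distance between the two touches to forward/backward movement (the function names and the 0.01 scale factor are assumptions, not existing code):

```javascript
// Illustrative sketch of the proposed two-finger pinch control: the
// change in distance between the two touches maps to forward/backward
// movement. Names and the 0.01 scale factor are assumptions.
function touchDistance(a, b) {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

function pinchDelta(prevTouches, currTouches, scale = 0.01) {
  // Positive = fingers spreading apart = move forward;
  // negative = pinching together = move backward.
  const prev = touchDistance(prevTouches[0], prevTouches[1]);
  const curr = touchDistance(currTouches[0], currTouches[1]);
  return (curr - prev) * scale;
}
```

The single-finger orbit case would feed drag deltas into the existing look controls instead.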
Make them lighter or add a backing plane
All avatars currently spawn at origin
The actual centers of the virtual joysticks are nearer to the corners of the screen than the visual circles suggest.
We should simulate load to determine our per-box capacity for the Janus SFU across a few EC2 instance types (at least if the one we have now tips over easily). The goal should be 1000 simulated CCU across 83 rooms of a dozen people each. We will probably need to gather metrics on the box first: message-delivery latency, bandwidth usage, etc.
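Back-of-the-envelope for that target, just restating the numbers above:

```javascript
// Load-test target: 1000 simulated CCU split into rooms of a dozen each.
const targetCCU = 1000;
const roomSize = 12;
const rooms = Math.floor(targetCCU / roomSize); // 83 rooms
const simulatedUsers = rooms * roomSize;        // 996 bots, ~1000 CCU
```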
Currently on mobile, the microphone is left enabled when you navigate away from the app. This is good behavior for apps where you're on a conference call while multitasking, but I don't think it's right for our app.
We should disable the microphone, or disconnect the user from Networked A-Frame, when navigating away from the application on mobile. We can probably use window.onblur on mobile only.
This is maybe desirable on mobile, but definitely not on desktop when screen sharing.
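A sketch of that policy, gating the window.onblur handler on a mobile check and on screen-sharing state (shouldMuteOnBlur, state.screenSharing, and mediaStream are illustrative names, not existing Hubs APIs; AFRAME.utils.device.isMobile is A-Frame's own check):

```javascript
// Sketch: mute on blur only on mobile, and only when not screen sharing.
// shouldMuteOnBlur is an illustrative name, not an existing Hubs API.
function shouldMuteOnBlur(isMobile, isScreenSharing) {
  return isMobile && !isScreenSharing;
}

// Browser-only wiring; guarded so the logic above stays portable.
// state.screenSharing and mediaStream are assumed app-level objects.
if (typeof window !== "undefined" && typeof AFRAME !== "undefined") {
  window.onblur = () => {
    if (shouldMuteOnBlur(AFRAME.utils.device.isMobile(), state.screenSharing)) {
      mediaStream.getAudioTracks().forEach(track => (track.enabled = false));
    }
  };
}
```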
It looks like we have a regression with lerping in the client. Interactables and avatars are moving in a choppy, non-interpolated way.
In order to sidestep a variety of issues, we should change the screen sharing implementation to P2P. This will require negotiating an RTC peer connection with everyone in the room (as well as handling joins and leaves).
Error seen on Chrome for Android 52:
Failed to set remote video description send parameters..
This occurs on this line: https://github.com/mozilla/naf-janus-adapter/blob/master/src/index.js#L245
SDP sent by Janus:
v=0
o=- 1511245988964004 1 IN IP4 54.67.121.231
s=Janus session
t=0 0
a=group:BUNDLE audio video
a=msid-semantic: WMS janus
m=audio 9 RTP/SAVPF 111
c=IN IP4 54.67.121.231
a=sendonly
a=mid:audio
a=rtcp-mux
a=ice-ufrag:r58T
a=ice-pwd:w0+1h6bBhREj4nDqKLrtKB
a=ice-options:trickle
a=fingerprint:sha-256 40:F7:B1:3D:F9:1E:04:B8:FF:5F:FF:5F:25:75:EB:C0:ED:F3:25:06:49:1F:AE:B4:E7:7A:BE:69:2F:DA:CA:B2
a=setup:actpass
a=rtpmap:111 opus/48000/2
a=ssrc:3516440168 cname:janusaudio
a=ssrc:3516440168 msid:janus janusa0
a=ssrc:3516440168 mslabel:janus
a=ssrc:3516440168 label:janusa0
a=candidate:1 1 udp 2013266431 54.67.121.231 47417 typ host
a=end-of-candidates
m=video 9 RTP/SAVPF 107
c=IN IP4 54.67.121.231
a=sendonly
a=mid:video
a=rtcp-mux
a=ice-ufrag:r58T
a=ice-pwd:w0+1h6bBhREj4nDqKLrtKB
a=ice-options:trickle
a=fingerprint:sha-256 40:F7:B1:3D:F9:1E:04:B8:FF:5F:FF:5F:25:75:EB:C0:ED:F3:25:06:49:1F:AE:B4:E7:7A:BE:69:2F:DA:CA:B2
a=setup:actpass
a=rtpmap:107 H264/90000
a=fmtp:107 profile-level-id=42e01f;packetization-mode=1
a=rtcp-fb:107 ccm fir
a=rtcp-fb:107 nack
a=rtcp-fb:107 nack pli
a=rtcp-fb:107 goog-remb
a=ssrc:1447368931 cname:janusvideo
a=ssrc:1447368931 msid:janus janusv0
a=ssrc:1447368931 mslabel:janus
a=ssrc:1447368931 label:janusv0
a=candidate:1 1 udp 2013266431 54.67.121.231 47417 typ host
a=end-of-candidates
It's having an issue with the video section, so it doesn't like something at or after this line: m=video 9 RTP/SAVPF 107.
This SDP is created by Janus here: https://github.com/mozilla/janus-plugin-sfu/blob/master/src/lib.rs#L437
The .glb assets can use compressed .jpg textures at 80% quality to significantly reduce file sizes and initial scene load times.
Also includes Reticulum: implement the React flow for creating a room and choosing the environment.
The smoke test bots are no longer working.
Right now, super-hands seems to work only with touch controllers.
Moving the glTF cliff environment to glTF bundles (via #85) revealed that a few A-Frame components are attached to the a-gltf-entity elements involved, and there is no way to represent them in the bundle format (nor likely should there be). These include the layer: reflection component on the cliff vista, the visible: false on the nav mesh, and the disabling of AR in the XR components. These will need to be rolled into the glTF files and whitelisted as inflatable components.
https://github.com/mozilla/mr-social-client/blob/master/src/room.html#L240
Assets currently use a high-quality and a low-quality URL. There also isn't a great way to version them.
@gfodor @netpro2k @j-conrad @brianpeiris and I talked a bit about this in this slack thread
Here are some highlights:
Greg proposed an asset manifest file format that includes both versioning and LOD information:
http://duck.mozilla.com/assets/avatars/robot.XXX
http://duck.mozilla.com/assets/avatars/robot-0.1.0.XXX
{
  "name": "robot",
  "version": "0.1.0",
  "layers": [
    {
      "name": "base",
      "bundles": [
        { "id": "robot-1", "url": "https://duck.mozilla.com/assets/robot-desktop-0.3.4.glb" },
        { "id": "robot-2", "url": "https://duck.mozilla.com/assets/robot-mobile-0.3.4.glb" }
      ]
    }
  ],
  "hints": {
    "mobile-gen-1": ["base.robot-1"]
  }
}
http://duck.mozilla.com/assets/scenes/conference-room.XXX
http://duck.mozilla.com/assets/scenes/conference-room-0.2.0.XXX
{
  "name": "conference-room",
  "version": "0.2.0",
  "layers": [
    {
      "name": "environment",
      "bundles": [
        { "id": "room-1", "url": "https://duck.mozilla.com/assets/room-desktop-0.3.4.glb" },
        { "id": "room-2", "url": "https://duck.mozilla.com/assets/room-mobile-0.3.4.glb" }
      ]
    },
    {
      "name": "spawn-points",
      "bundles": [
        { "id": "room-spawn", "url": "https://duck.mozilla.com/assets/room-spawn-0.1.4.glb" }
      ]
    },
    {
      "name": "duck-interactables",
      "bundles": [
        { "id": "room-ducks-1", "url": "https://duck.mozilla.com/assets/room-ducks-desktop-0.0.4.glb" },
        { "id": "room-ducks-2", "url": "https://duck.mozilla.com/assets/room-ducks-mobile-0.0.4.glb" }
      ]
    }
  ],
  "hints": {
    "mobile-gen-1": ["environment.room-2", "duck-interactables.room-ducks-2"],
    "far-draw-distance": ["environment.room-2"]
  }
}
Dom proposed using Microsoft's LOD glTF extension
Brian and I liked the idea of using .gltf instead of .glb for situations where you have shared assets, e.g. our avatars, which only swap one texture between variants. Everything that is shared between the assets will be fetched from the cache.
We decided to delay working on this until after we have a few finished environments and avatars.
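As a sketch of how a client might consume Greg's proposed format, here is a hint resolver; hint entries take the "layerName.bundleId" form shown above, and unhinted layers fall back to their first bundle (resolveBundles and that fallback rule are assumptions, not part of the proposal):

```javascript
// Illustrative resolver for the proposed manifest format. Hint entries
// look like "layerName.bundleId"; layers with no matching hint fall
// back to their first bundle. resolveBundles is an assumed name.
function resolveBundles(manifest, hint) {
  const overrides = {};
  for (const entry of (manifest.hints && manifest.hints[hint]) || []) {
    const [layerName, bundleId] = entry.split(".");
    overrides[layerName] = bundleId;
  }
  // Return one bundle URL per layer.
  return manifest.layers.map(layer => {
    const wanted = layer.bundles.find(b => b.id === overrides[layer.name]);
    return (wanted || layer.bundles[0]).url;
  });
}
```

With the conference-room manifest above and the "mobile-gen-1" hint, this selects the mobile environment and mobile duck-interactables bundles while leaving spawn-points on its only bundle.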
If the client's internet connection dies, it should reconnect to Janus and respawn the user, etc.
We should tune up the nametags to look better and be easier to read.
We need to make sure collision is set up properly for the 2-4 environments we launch with.