capnmidnight / calla
Virtual Meetups through Jitsi
Home Page: https://www.calla.chat
License: MIT License
Right next to the tweet button.
Could do whiteboarding and collaborative music synthesis with the things mentioned here: https://stackoverflow.com/a/39302994
Hearing range options and font size clutter up the tool bar. Make a new window for them to hide in.
This is a really fun project :)
I would like to party with hundreds of people on one map.
To make this party scale up, users should only receive the video streams of nearby avatars.
I don't know how hard this is to implement, but it would be a killer feature in my opinion.
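A sketch of the positional culling this would need, assuming each user object carries grid coordinates. All of the names here (usersInVideoRange, maxVideoDistance, the x/y fields) are hypothetical illustrations, not Calla's actual API:

```javascript
// Hypothetical sketch: given the local user's tile position and a list of
// remote users, return only those close enough to receive video from.
function usersInVideoRange(localUser, remoteUsers, maxVideoDistance) {
  return remoteUsers.filter(u => {
    const dx = u.x - localUser.x,
          dy = u.y - localUser.y;
    return Math.sqrt(dx * dx + dy * dy) <= maxVideoDistance;
  });
}
```

On each movement event, the client could subscribe to streams for users in range and unsubscribe from everyone else, so bandwidth stays proportional to nearby users rather than the whole room.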
When I visit the example installation, I get a couple of errors in the JS console and it won't load the game.
Loading module from “https://meet.primrosevr.com/scripts/game.js” was blocked because of a disallowed MIME type (“text/plain”).
$ curl -I https://meet.primrosevr.com/scripts/game.js | grep content-type
content-type: text/plain; charset=utf-8
Line 57 in 8b81646
Should it be this.sy?
It drops the UI back to the main screen, but the jitsi remains connected in the background.
Once users can select which devices they want to stream, we won't need the "show meeting UI" button anymore.
After a user logs in, they lose access to the instructions. Add a help button to the toolbar so they can view the instructions again.
It should be possible to set how loud you're speaking. The most quiet non-mute setting would make you audible only to users in squares adjacent to yours. The loudest setting would make you audible to all users within a given max radius which could be set by room administrators.
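One way to sketch that mapping from a loudness setting to an audible radius, with level 1 reaching only adjacent tiles and the top level reaching the admin-set maximum. The function and parameter names are assumptions for illustration:

```javascript
// Hypothetical sketch: map a user's loudness level to an audible radius in
// tiles. Level 0 is mute, level 1 reaches only adjacent tiles, and the
// highest level reaches the room's admin-configured max radius.
function audibleRadius(loudnessLevel, maxLevels, roomMaxRadius) {
  if (loudnessLevel <= 0) return 0;    // muted
  if (loudnessLevel === 1) return 1;   // adjacent tiles only
  // interpolate the remaining levels up to the admin-set maximum
  const t = (loudnessLevel - 1) / (maxLevels - 1);
  return Math.round(1 + t * (roomMaxRadius - 1));
}
```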
A user should be able to select any other user and warp to their location.
There is no collision detection between users, so users can occupy the same grid tile.
However, there is a problem with a strict "one user per tile" rule: it can turn doorways into bottlenecks. You have to wait for people to clear a doorway before anyone else can get through. If the user blocking the doorway is inattentive or lagged out, it could be a long time before anyone can get through.
I think it would probably be better if everyone could still occupy the same tile, but their avatars got offset by a few pixels, so you could see that there is a stack of people. Perhaps the avatars could even be shrunk a little. Differently shaped avatars with transparent backgrounds would also help.
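The pixel offsets for a stack could be computed like this. The 4px step and 8-positions-per-ring layout are arbitrary assumptions to tune:

```javascript
// Hypothetical sketch: when several users share a tile, fan their avatars
// out by a few pixels so the stack is visible. The first occupant stays
// centered; later arrivals are placed on rings around it.
function stackOffset(indexInTile) {
  if (indexInTile === 0) return { dx: 0, dy: 0 };
  const step = 4,
        angle = indexInTile * (Math.PI * 2 / 8),   // 8 positions per ring
        ring = 1 + Math.floor((indexInTile - 1) / 8);
  return {
    dx: Math.round(Math.cos(angle) * step * ring),
    dy: Math.round(Math.sin(angle) * step * ring)
  };
}
```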
Hi,
I have gathered some suitable tilesets from itch.io:
There are also some avatars available:
these are enemies, but they seem usable too:
There is definitely a lot of stuff to take from itch.io.
I think I can volunteer to add code which will make these live, but I am not sure how to test it without my own Jitsi Meet instance. Is it possible to test it somehow from localhost?
README says
If you need to change your microphone settings, click the < arrow in the menu in the upper right corner of the window to hide the map view and give you full access to the Jitsi Meet interface.
and IIRC it was there. But now I just have a menu bar on top, and I found no way to hide the Calla interface and get into the "raw" Jitsi.
using chromium Version 79.0.3945.130 (Developer Build) built on Debian bullseye/sid, running on Debian bullseye/sid (64-bit)
It's come to my attention that it is hard for people to say. So I'm thinking of renaming the project.
I generally like to name my projects after plants. The name "Lozya" was just a joke about "Jitsi" being the Bulgarian word for "wires" and looking up the Bulgarian word for "vines".
Would like to hear feedback or alternative suggestions. Doesn't have to be a vine name, but I would like to keep it to a plant name. Try to be aware of pronunciation difficulty and potentially offensive connotations.
https://github.com/jitsi/lib-jitsi-meet/
This will enable more advanced integrations and remove the need to modify the base server
The HTML5 audio volume setting seems to provide a linear response. I couldn't tell at first, but after testing for a while it seems pretty clear, given that there is very little volume drop-off until you reach the edge of the hearing range.
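Since the volume property itself is linear, the client could apply a perceptual curve before setting it, rather than feeding distance in directly. A minimal sketch, where the min/max distances and the exponent are assumptions to tune:

```javascript
// Hypothetical sketch: instead of setting volume linearly with distance,
// apply a power curve so loudness falls off smoothly toward the edge of
// the hearing range instead of dropping suddenly at the boundary.
function volumeForDistance(dist, minDist, maxDist, rolloff = 2) {
  if (dist <= minDist) return 1;
  if (dist >= maxDist) return 0;
  const t = (dist - minDist) / (maxDist - minDist);  // 0..1 across the range
  return Math.pow(1 - t, rolloff);
}
```

The result can then be assigned to the audio element's volume; a rolloff of 1 recovers the current linear behavior.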
As the project grows, managing state gets harder. Need to encapsulate the various parts into classes, then use JS modules to import them to help enforce encapsulation.
Requiring the user to click their own avatar is not accessible. There should be two buttons on the toolbar, one to spawn the most recent emote, and one to select an emote. If there is no most recent emote, then the spawn button should open the selection window.
Love this project!
Seems there may be ways to tweak the noise cancellation features of Jitsi Meet that are causing issues: https://community.jitsi.org/t/higher-audio-quality/31441/6
Have any of these been attempted yet?
Users should be able to choose from a selection of avatars, or maybe build an avatar from parts. With no selection, we can default to using the avatarURL that Jitsi Meet provides per-user.
The interface between Calla and Jitsi (BaseJitsiClient extended into LibJitsiMeetClient) is very ad hoc, relying on a lot of implicit assumptions about what objects exist. This is a maintainability nightmare, as there is no documentation of the interfaces the methods expect.
This requires two things:
While we're at it, write documentation about it (Doxygen/GhostDoc/etc).
Let the room admin select an existing map for the room. There should be some default, and then the first user in the room can change it, and then subsequent users get the latest room from the admin.
This was something that the External API did on its own, but I need to reimplement with lib-jitsi-meet.
The icons on the emoji selector are very hard to read. Just need to increase the font size of the container.
If you enable, then later disable video, it doesn't seem that you can then enable video again. Maybe instead of using mute/unmute, I should be removing/adding the video track.
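A sketch of the remove/re-add approach as a small state machine. The track creation and teardown callbacks are injected so the same logic could wrap lib-jitsi-meet's track lifecycle; every name here is an assumption, not Calla's actual code:

```javascript
// Hypothetical sketch: toggle video by creating and destroying the track,
// rather than muting/unmuting it. createTrack and destroyTrack are
// caller-supplied async functions standing in for the real track API.
function makeVideoToggle(createTrack, destroyTrack) {
  let track = null;
  return {
    async toggle() {
      if (track) {
        await destroyTrack(track);   // fully tear down instead of muting
        track = null;
      } else {
        track = await createTrack(); // recreate a fresh track on re-enable
      }
      return track !== null;         // true when video is now on
    }
  };
}
```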
There should be some sort of highlighting of avatars when people speak.
Investigate how one can make an RPG map accessible to screen readers.
It's currently running on the bare primrosevr.com domain. The frontend is on meet.primrosevr.com. Need to move the backend to jitsi.primrosevr.com.
The username labels are really hard to see.
If we can detect voice activity (I think the Jitsi iframe API might have an event for it), then there should be a jabber animation on the avatars of users who are speaking.
Users should also have a list of emotion animations from which they can choose, probably presented through a radial context menu on their avatar.
Have different paths a person can follow to enter different portals that warp to different rooms.
Right now, all rooms are password-less.
I am afraid I am missing something. I can see my friend's and my avatar on the map; but the volume is always the same. What are we supposed to do?
After Issue #33, implement video feeds as user avatars
So you don't have to go into the Jitsi interface for a commonly used feature
Users in conversation with each other should be able to zoom in to see more detail on each other's avatars.
They should also be able to zoom out to see much larger areas of the map.
This project is really cool.
It's not entirely unlike Networked Aframe, where you can walk around a 3D space and communicate over WebRTC audio/video connections (or configure an SFU, which is what Mozilla Hubs does). If you tapped into that ecosystem, you'd get all the benefits of full 2D and 3D sprite rendering, avatar templates, all kinds of stuff.
However, nobody's done NAF with Jitsi, or really intelligently thought of how to do SFUs in a way that could benefit people in a video-conference setting or, using positional culling, allow hundreds of people to occupy the space of a football field and organize as they see fit.
Maybe there's a way that we could work together in a way that benefits everyone? We're building a free, open source virtual chat (github.com/xrchat) and just getting to work on integrating a Jitsi adapter for NAF that looks, at least on paper, exactly like a 3D version of what you're doing right here... a collaboration would be great :)
Once we have user avatars, I don't think we necessarily need the webcam view, but Jitsi Meet also supports screensharing, which would be useful to have in the "game". Whiteboards, slideshows, pair programming, etc.
Unfortunately, I think making the system support the video stream might require reimplementing the frontend using the full lib-jitsi-meet API https://github.com/jitsi/lib-jitsi-meet
That would require a LOT more effort than I'm willing to put into this.
Maybe just scaling the opacity of the game view down while the user is actively watching a video stream would be enough. Maybe maps could have transparent areas designed into them to let the video feed through, if the user is standing in a "presentation room" type area.
All of the client code is in index.html. That makes it easy to read for people new to the project, but it makes managing the project hard, increasing the likelihood of merge conflicts.
Also, there should be a distinction between the base integration with Jitsi (jitsihax.js which gets installed on the Jitsi Meet server, the sendJitsiHax function, and the tx/rxGameData functions) and the front-end code (everything else). Making a clear distinction between the two will make it easier for people to fork the project and implement their own front-ends in whatever language/framework they want.
Jitsi Meet's concept of rooms is ephemeral. They exist only as long as there are users in them. But a Lozya front-end server could have data (assets, tilemaps, tilesets) specific to pre-configured room names.
Right now, the map is hardcoded into index.html, and the only "tileset" is manually drawing light-grey or dark-grey tiles.
Room files should have the following data:
Maybe everything out of range gets rendered with a dark, semi-transparent overlay. Or maybe it's just an outline.
When a user first joins a room, they will receive names of everyone in the room from Jitsi. The client code then requests the current position of each user, but it fails (with an error) because Jitsi has yet to establish the data channel between the local user and the remote user.
The local user will not see the correct location of each remote user until those individual remote users move to a new location.
We need some means of determining when the data channel has connected. Perhaps that's pinging the remote user until the remote acknowledges, and then requesting the remote user's position.
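The ping-until-acknowledged idea could look roughly like this. sendTo and onAck stand in for whatever transport and event hooks Calla actually uses; all names are assumptions:

```javascript
// Hypothetical sketch: repeatedly ping a remote user over the data channel
// until they acknowledge, then request their position once and stop pinging.
function requestPositionWhenReady(userId, sendTo, onAck, intervalMs = 500) {
  const timer = setInterval(() => sendTo(userId, { type: 'ping' }), intervalMs);
  onAck(userId, () => {
    clearInterval(timer);                       // channel is up; stop pinging
    sendTo(userId, { type: 'requestPosition' }); // now the request can't be lost
  });
}
```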
The user should be able to scroll around the map view without moving their avatar, using touch drags or pushing their mouse cursor to the edge of the map.