Comments (5)
* a long-term solution that will work just as well with 100+ avatars in the future
Have you checked that the current implementation can't handle 100+ avatars? I imagine the troubles the current implementation would run into would also happen with a networked request.
* no changes in current process of adding new contributors, everything stays in place
Something not changing is not a pro.
* no rebuild needed for updates, previous chatterino versions will also get up-to-date list
IMO not a pro, but I can see people feeling this way.
* current binary files will be ~400kb smaller (6%)
IMO not relevant
* probably minor build time improvement
Not relevant to users
* possibility to include other dynamic data, like current count of commits
IMO not relevant, but I can see people feeling this way.
My biggest con for this is the increase in runtime complexity. It means more things can go wrong for a feature that we already have a fool-proof solution for. That alone should be enough to discourage this, but happy to hear any counter-arguments.
no rebuild needed for updates, previous chatterino versions will also get up-to-date list
I see this as a con; to me the contributor list means who made the code currently running on my computer, not what's in the remote git repository.
Have you checked that the current implementation can't handle 100+ avatars? I imagine the troubles the current implementation would run into would also happen with a networked request.
I'm coming from the standpoint that including avatar resources is wasteful overhead: today it's 6% of the binary size with just 15 avatars, but when it goes up to the said 100, at the same ratio that would be ~2.5mb (or above 40%) of content never seen in normal use. With the current approach there is no ceiling for that growth, so at some point it would need to be addressed. I called fetching them over the network long-term because the number of images won't affect the binary size or any of the steps producing that binary.
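The growth argument above can be checked with a quick back-of-the-envelope calculation (a sketch assuming roughly linear scaling of avatar payload and the ~400 kB / 6% / 15-avatar figures quoted earlier; all constants here come from those figures, not from measurements):

```python
# Back-of-the-envelope estimate of avatar payload growth,
# assuming linear scaling from the quoted "~400 kB for 15 avatars" figure.
AVATARS_NOW = 15
AVATAR_KB_NOW = 400               # quoted as ~6% of the current binary
BINARY_KB = AVATAR_KB_NOW / 0.06  # implied total binary size, ~6667 kB

def avatar_share(n_avatars: int) -> tuple[float, float]:
    """Return (avatar payload in kB, its share of the resulting binary)."""
    avatar_kb = AVATAR_KB_NOW / AVATARS_NOW * n_avatars
    total_kb = (BINARY_KB - AVATAR_KB_NOW) + avatar_kb
    return avatar_kb, avatar_kb / total_kb

kb, share = avatar_share(100)
print(f"{kb:.0f} kB, {share:.0%} of the grown binary")  # ~2667 kB
```

This reproduces the ~2.5 MB estimate; note the "above 40%" figure corresponds to measuring the payload against today's binary size (2667/6667), while against the grown binary it comes out closer to 30%.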
Something not changing is not a pro.
I considered it a pro because it doesn't change the git history of added contributors, which can have value for some people. Files would be kept in their current location and form, just not included in the resources autogen.
If we aren't interested in all the benefits coming from the API due to the extra complexity, I see your point here and agree.
What would all of you say about including just contributors.txt in c2 and fetching avatars from raw.github? It would be a lightweight solution.
(I know this question belongs in the other repo, but we can probably come to a conclusion here.)
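The lightweight approach could be sketched roughly as below (the contributors.txt format, repo layout `resources/avatars/<username>.png`, and branch name are assumptions for illustration; the raw.githubusercontent.com URL scheme is GitHub's standard raw-file endpoint):

```python
# Sketch: build raw.githubusercontent.com URLs for avatars kept in the
# repo, given a hypothetical contributors.txt with one username per line.
RAW_BASE = "https://raw.githubusercontent.com/Chatterino/chatterino2/master"

def parse_contributors(text: str) -> list[str]:
    """One username per line; blank lines and '#' comments are ignored."""
    names = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            names.append(line)
    return names

def avatar_url(username: str) -> str:
    # Assumed repo layout: resources/avatars/<username>.png
    return f"{RAW_BASE}/resources/avatars/{username}.png"

users = parse_contributors("# core\npajlada\nfourtf\n")
urls = [avatar_url(u) for u in users]
```

The key property is that the client only ever needs the text file and a fixed URL template, so adding a contributor never requires a rebuild.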
Have you checked that the current implementation can't handle 100+ avatars? I imagine the troubles the current implementation would run into would also happen with a networked request.
I'm coming from the standpoint that including avatar resources is wasteful overhead: today it's 6% of the binary size with just 15 avatars, but when it goes up to the said 100, at the same ratio that would be ~2.5mb (or above 40%) of content never seen in normal use. With the current approach there is no ceiling for that growth, so at some point it would need to be addressed. I called fetching them over the network long-term because the number of images won't affect the binary size or any of the steps producing that binary.
I think it will need to be changed at some point regardless; once we reach a certain number of contributors we'll need to paginate/reduce the size of the images. Keeping all data in the repo/build itself lets us make that decision when it's time, without having to think about future-proofing an API.
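The "paginate when it's time" option could stay entirely client-side, without any API versioning. A minimal sketch (the page size of 25 is an arbitrary assumption):

```python
# Sketch: client-side pagination of a contributor list, so the About
# page only renders (and lazily loads avatars for) one page at a time.
def paginate(items: list, page_size: int = 25) -> list[list]:
    """Split items into consecutive pages of at most page_size entries."""
    return [items[i:i + page_size] for i in range(0, len(items), page_size)]

# 103 contributors -> 5 pages: 25, 25, 25, 25, 3
pages = paginate(list(range(103)), page_size=25)
```

Because the full list ships with the build, the pagination policy can change in any release without coordinating with a remote endpoint.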
... What would all of you say about including just contributors.txt in c2 and fetching avatars from raw.github? It would be a lightweight solution. (I know this question belongs in the other repo, but we can probably come to a conclusion here.)
I'm not as strongly opposed to this for complexity reasons, since lazy-loading of images is something we need to get right either way, but we come back to a point that was discussed in an issue in the c2 repo before: if we link directly to the users' GitHub avatars, we remove our right to decide what pictures do and don't fit in Chatterino.
If we re-upload the images (e.g. by hosting them in the repo), that wouldn't be an issue.
Then the question would just be: why? The binary size alone is not a big enough argument to even think about, imo. I've considered including an entire emoji set, which would double the binary size, to reduce network requests; having 30+ network requests fire when the user opens the About page just seems "meh".
If we link directly to the users' GitHub avatars, we remove our right to decide what pictures do and don't fit in Chatterino.
The change I'm suggesting is to fetch them from our repo (resources/avatars/), not from user profiles.
I've considered including an entire emoji set, which would double the binary size, to reduce network requests; having 30+ network requests fire when the user opens the About page just seems "meh".
It would be cached after the first open, just like we cache the 200 emotes/emojis from the Emote picker :D
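The cache-after-first-open behaviour can be sketched as a tiny disk cache keyed by URL (the fetcher is injected so the sketch stays offline; the cache directory layout and SHA-256 keying are assumptions, not how Chatterino's cache actually works):

```python
import hashlib
from pathlib import Path

def cached_fetch(url: str, cache_dir: Path, fetch) -> bytes:
    """Return the cached body for url, calling fetch(url) only on a miss."""
    cache_dir.mkdir(parents=True, exist_ok=True)
    key = hashlib.sha256(url.encode()).hexdigest()
    path = cache_dir / key
    if path.exists():
        return path.read_bytes()
    data = fetch(url)  # e.g. a real HTTP GET in the application
    path.write_bytes(data)
    return data
```

With something like this, the 30+ avatar requests fire once; later opens of the About page are served from disk.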