bobbywibowo / lolisafe

This project forked from chibisafe/chibisafe


Blazing fast file uploader and awesome bunker written in node! 🚀

License: MIT License


lolisafe's Introduction

lolisafe, a small safe worth protecting

safe.fiery.me


Features

  • Powered by uWebSockets.js & HyperExpress for a much more performant web server; uWebSockets.js is a Node.js binding of uWebSockets, which is written in C & C++.
  • Powered by better-sqlite3 for a performant SQLite3 database (using Knex.js for abstraction, so support for other database engines may also come in the future).
  • Faster file hashing for duplicates detection by using BLAKE3 hash function.
  • ClamAV virus scanning support for Linux/OS X servers (read more).
  • Front-end pages templating with Nunjucks.
  • More integrated Cloudflare support (automatically purge files' remote cache upon deletion, and more).
  • Chunked uploads to support 100MB+ files when hosted behind Cloudflare, or any other proxies with file upload size limits.
  • Upload remote URLs (have lolisafe download those remote files for you).
  • Performant & configurable rate limits powered by rate-limiter-flexible.
  • Albums with shareable pretty public pages.
  • User dashboard to manage own uploads and albums.
  • Admin dashboard to manage all uploads, albums, and users.
  • Robust files search/filters and sorting in the dashboard.
  • Usergroups-based permissions.
  • Configurable file retention periods per usergroup.
  • Strip images' EXIF tags if required (can be forced or user-toggleable, with experimental support for stripping videos' tags as well).
  • Various options configurable via HTTP headers upon file upload (selected file retention period, whether to strip EXIF tags, and more).
  • ShareX support with a config file builder on the homepage.
  • Token-based authentication on all APIs, allowing you to easily integrate lolisafe with anything.
  • ... and more!

Differences with Upstream/Chibisafe

This fork is the one being used at https://safe.fiery.me.

It was originally based on WeebDev/lolisafe v3, but has since been so heavily rewritten that it is now simply its own thing.

Chibisafe is an upstream rewrite & rebrand, and technically is lolisafe v4.

If you want to use an existing lolisafe v3 database with this fork, copy over the database/db file from your previous installation, then run yarn migrate at least once to create the new database columns introduced in this fork (don't forget to make a backup first).

Caution

The migration script is NOT COMPATIBLE with Chibisafe's database.

The configuration file of lolisafe v3 (config.js) is also NOT fully compatible with this fork. Some options have been renamed and/or restructured.

Please make sure your config matches the sample in config.sample.js before starting and/or migrating your previous database (hint: this fork's default config assumes your database file is named db.sqlite3 instead of db).

Running in production mode

  1. Ensure you have Node.js v18.x or newer, and Yarn v1.x, installed (incompatible with Yarn v2.x).

Note

Compatible up to Node.js v20.x.
I recommend using Volta to ensure you always have & use the correct Node.js and Yarn versions for lolisafe, even if the requirements change in future updates.

If you want to use this on Docker, please check out the docker directory instead.

  1. Clone this repo.
  2. Copy config.sample.js as config.js.
  3. Modify port, domain, and other options if desired.
  4. Copy views/_globals.sample.njk as views/_globals.njk.
  5. Modify front-end strings and options if desired.
  6. Run yarn install --production to install all production dependencies.
  7. Run yarn start to start lolisafe. Alternatively, you can also start lolisafe with yarn pm2 if you have PM2 installed.

Note

If you see errors related to sharp engines upon starting lolisafe, try to run yarn install --production --ignore-engines first.

Important

Default admin/root account:
Username: root
Password: changeme

When running in production mode, lolisafe will use pre-built client-side CSS/JS files from the dist directory, while the actual source code is in the src directory.

The pre-built files are processed with postcss-preset-env, cssnano, bublé, and terser; this is done automatically with GitHub Actions.

Running in development mode

This fork has a separate development mode, with which client-side CSS/JS files in src directory will be automatically rebuilt using Gulp tasks.

  1. Follow steps 1 to 6 from the production instructions above.
  2. Run yarn install to install all dependencies (by omitting --production option, Yarn will also install development dependencies).
  3. Run yarn dev to start lolisafe in development mode (or yarn dev:reload to also watch file changes).

You can further modify the Gulp tasks through gulpfile.js file.

During development, the rebuilt files will be saved in dist-dev directory instead of dist directory. Lolisafe will also automatically serve the files from dist-dev directory instead.
This is to ensure that your IDE's Git extension will not unnecessarily rebuild diffs of the modified files.

Once you feel like your modifications are ready for production usage, you can then run yarn build to build production-ready files that will actually go to dist directory.

Tip

If you are submitting a Pull Request, please do not stage any changes to files in dist directory.
GitHub Actions will automatically rebuild those assets if and when required.

Updating when you have modified some files

Try to use git stash.

Basically you'll be doing this:

  1. git stash to stash away your changes.
  2. git pull to pull updates.
  3. yarn install (or yarn install --production) to install dependencies matching the updated yarn.lock file.
  4. git stash pop (or git stash apply) to restore your changes.

Be warned that some files may have been changed so heavily that they will require manual merging.

If you only make small modifications, such as editing .njk files and not much else, it's generally safe to do this even in a live production environment. But it's still best practice to at least review what has been updated, and whether you will need to do some manual merging beforehand.

Still, I strongly recommend simply forking this repository and manually merging upstream changes whenever you feel like doing so. Read more about syncing a fork. This especially applies if you intend to modify client-side CSS/JS files in the src directory, since you will then need to rebuild assets that go into the dist directory, which are guaranteed to conflict with every update from this fork that modifies them.

Afterwards, you can instead clone your fork onto your production server and pull updates from there. You can then choose to only install production dependencies with yarn install --production to save some disk space (hint: this is the workflow I use for https://safe.fiery.me).

ClamAV support

This fork has optional virus scanning support using ClamAV, via the clamscan library (Linux and OS X only).

It will scan new files right after they are uploaded, then alert the uploader with the detected virus name from ClamAV's database if a file is dirty.

Unfortunately, this will slow down upload processing, as it has to wait for the scans before responding to the uploaders. However, it's still highly recommended for public usage, or if, like me, you find the constant buzzing from Google Safe Search too annoying.

To enable this, make sure you have ClamAV installed, and ideally the ClamAV daemon running (using the daemon is considerably faster). Afterwards, configure the uploads.scan options, and more importantly its sub-option clamOptions. Read more about them in config.sample.js.

Additionally, you can also configure usergroups bypass, extensions whitelist, and max file size, to lessen the burden on your server.
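As an illustration only (the option names and shape below are assumptions; check config.sample.js for the authoritative layout and values), the scan-related part of config.js might look like:

```javascript
// Hypothetical shape of the uploads.scan config fragment; every key here is
// an assumption to be verified against config.sample.js.
uploads: {
  scan: {
    enabled: true,
    // Prefer the clamd daemon over one-shot clamscan: considerably faster.
    clamOptions: {
      clamdscan: {
        host: '127.0.0.1',
        port: 3310
      }
    },
    // Optional mitigations to lessen the burden on the server:
    groupBypass: 'moderator',    // usergroups at or above this skip scanning
    whitelistExtensions: null,   // e.g. ['.mp4', '.mkv'] to skip those
    maxSize: null                // e.g. '50MB' to skip files larger than this
  }
}
```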

lolisafe's People

Contributors

azatej, bigbrainafk, bobbywibowo, camjac251, dependabot-preview[bot], dependabot[bot], disre, dragonfire535, foxt, giantcow, hyperdefined, icrawl, imurx, jasonetco, korewakiyo, nadyanayme, natnat-mc, null31, osu, pascaltemel, pitu, pitu-deploy, renovate-bot, renovate[bot], sanchezsihaya, scruffyrules, shinji257, shumatsu, snyk-bot, vvlnt


lolisafe's Issues

[FEATURE REQUEST] Duplicate Checking

Is your feature request related to a problem? Please describe.

A lot of duplicates are often uploaded, and they take up unnecessary space.

Describe the solution you'd like

Check for duplicates, ideally before the upload.

Additional context

This might be a completely wrong approach, but how I would implement something like this is: hash the file in the browser before the upload starts, check that against the server's records; if it matches, return the server's already-stored file, and if not, continue with the upload.

A possible limitation with this approach is how large a file browsers can hash. AFAIK Chrome couldn't do over 512 MiB a couple of years ago, although this may have changed.

Ideally, all processing would be done in the browser, both to save bandwidth and CPU time on the server.

This is something I have very poorly implemented on my own instance (fully server-sided), but I plan to experiment with doing it in the browser, so I may be able to contribute something once I have more time.

Regarding switching hosts

Hi, I am planning to switch my host. Is there any guide to switching domains and hosts without losing any data?

Script to rebuild hashes of all existing files.

Back in May, I switched from using md5 to blake3 for file hashing (62a9775), but I didn't make any tool to convert hashes of existing files on DB from the old md5 to the new blake3.

Then perhaps also implement is:duplicate to filter for files with matching hashes from the dashboard (using a GROUP BY SQL query or something, idk, I'll look that up later).

[FEATURE REQUEST] Embedded Video/Audio playback

Would it be possible for this app to have a visual preview page for audio and video files? HTML supports embedding WAV files, although directly opening WAV files in Firefox forces them to download, with no direct playing.

[BUG] 'loading' element does not hide after album load.

Describe the bug
Tested as root on the latest commit.
After clicking on an album, once the page loads, the 'loading' element (to the right of the album name) does not hide.

To Reproduce

  1. As root, create an album.
  2. Upload some content into album.
  3. Go to dashboard and click on album link.

Expected behavior
The 'loading' element should hide after the page loads. See the 'Statistics' link in the dashboard for an example of the correct behavior.

Screenshots
Sorry, can't provide.

Additional context
None

.tar.zst being replaced as .zst

Describe the bug

When I upload a .tar.zst file, lolisafe changes the extension to .zst, and the file needs to be renamed back to .tar.zst to be opened.

To Reproduce

  1. Upload a .tar.zst file
  2. See that the generated filename has the wrong extension, .zst

Expected behavior

The file should keep its full .tar.zst extension, just like .tar.gz does.

SQLite error: tables are not created

Hello,
I am trying to install lolisafe in production mode, and for some reason it doesn't generate the database tables at all.
I've followed the six steps specified in the readme.md, but as soon as I try to start it, the following error is shown:

~/lolisafe# yarn start
yarn run v1.19.1
$ node ./lolisafe.js
[2019-10-21 11:42:38] { [Error: SQLITE_ERROR: no such table: files] errno: 1, code: 'SQLITE_ERROR' }
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

I've tried both with an unprivileged user and with root; both fail, so this is not a permissions issue. An ls of the database directory shows the db file as 0 bytes.
This is on a Debian Buster installation within an LXC container with a recent kernel. I am using the Node.js version available in the Buster repositories (v10.15.2)

Show image URLs in Discord chat messages

Is your feature request related to a problem? Please describe.
No.

Describe the solution you'd like
Option to enable "Show Image URL" for the image uploaded as shown in the screenshot below:

Additional context
No.

  • Screenshot:
    image

Suggestion: Add some verbose output

After updating, I got an error:

[17:43:57] ESLintError in plugin "gulp-eslint"
Message:
    Failed with 8 errors
Details:
    domainEmitter: [object Object]
    domain: [object Object]
    domainThrown: false

[17:43:57] 'lint' errored after 4.11 s
[17:43:57] 'default' errored after 4.12 s
[17:43:57] The following tasks did not complete: lint:css
[17:43:57] Did you forget to signal async completion?
error Command failed with exit code 1.

It would be much easier to find the problem if verbose output for the jobs was enabled.

Something like this.
gulpfile.js

.pipe(eslint())

Add below: .pipe(eslint.format('junit', process.stdout))

.pipe(stylelint())

Change to: .pipe(stylelint({reporters:[{formatter:'verbose', console: true}]}))

[FEATURE REQUEST] User creation from dashboard

Describe the solution you'd like
Since you can disable user registration by setting the instance to private mode, it would be nice to be able to add accounts in the dashboard as an admin user.

Something like this:
Example Image

[BUG] Minimum password tooltip registration

I noticed something that might be a bug. When I try logging in with a password under the 5-character minimum, a tooltip pops up with the alert, but when trying to sign up with the same password, nothing appears.

Issues to consider

A list of issues from upstream, for myself to consider to implement to this fork:

Normal priority:

Low priority:

  • Mime type blacklist: chibisafe#192
  • Tiered accounts: chibisafe#39
    While we already have basic support for "user groups", mainly to allow adding moderators & admins, we still do not have any upload quotas system, or per-user max file size limit.
    However, we already have uploads expiration date support, so riding on that support to enforce upload age on certain groups shouldn't take too much work.
    This is low priority as I don't really have any plan to make any paid accounts system on safe.fiery.me in the near future, yet the idea is still rather interesting to work on.

Implemented:

  • Strip EXIF tags from images: chibisafe#110
    Personally, I'm not too fond of the idea because I prefer not to tamper with the uploaded files.
    However, now that I've put this into things to consider, I won't be satisfied unless I add another entry to our homepage's config tab to toggle the feature.
    Maybe add a server's config to choose whether to enable/disable the feature by default, whether to enforce server's setting, and whether to allow users to configure the behavior through the homepage's config tab?
    Implemented in d9ddfe8.

[FEATURE REQUEST] serveFilesWithNode mimetypes

Could it be possible to force mimetypes when express serves certain files? In the current version it's possible to view html and shtml files with the option enabled.
I had these mimetypes set in nginx before the switch

  types {
    text/plain asm c coffee conf cs d desktop fish go hs htm html hy ini java js rb less log ls lua md moon php pl py rs scss sh shtml svg tcl tk txt vim;
  }

I think these are the main ones that are security risks.
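A hypothetical way to achieve the same effect when serveFilesWithNode is enabled (not lolisafe's actual implementation; the function name and extension list below are made up for illustration):

```javascript
// Force text/plain for extensions that browsers might otherwise render,
// mirroring the nginx types block. Illustrative subset only.
const FORCE_PLAIN = new Set([
  'htm', 'html', 'shtml', 'svg', 'js', 'php', 'sh'
])

// Given a filename and the normally detected MIME type, return the type
// the server should actually send.
function contentTypeFor (filename, detectedType) {
  const ext = filename.split('.').pop().toLowerCase()
  return FORCE_PLAIN.has(ext) ? 'text/plain' : detectedType
}
```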

[FEATURE REQUEST] Docker compose

Is your feature request related to a problem? Please describe.

Describe the solution you'd like

Add docker-compose file, with Nginx integrated. Possibly letsencrypt integration?

Additional context
If there is any interest in this I am happy to make one.

Improve identifiers cache

Add an additional mode for identifiers cache to only cache identifiers of images and videos (use whitelisted extension names from utilsController.js).
Meaning, identifiers of other files (e.g. zips, txt, etc.) can be reused by other files, including other images or videos (of course only once).
I'm also thinking of intentionally not checking against cache if the new files also aren't images and videos.

Make this an additional mode, meaning people can toggle whether to use the current behavior OR this new one.

[BUG] mkdir on yarn run doesn't create parent folders

I believe this might be an easy fix: use mkdir -p instead of mkdir. It is trying to create the chunks folder inside a nonexistent uploads folder; -p ensures that all parent folders are created if they do not exist.

yarn run v1.22.4
$ node ./lolisafe.js
[2020-05-28 10:52:31] Starting lolisafe…
[2020-05-28 10:52:31] [Error: ENOENT: no such file or directory, mkdir '/root/lolisafe/uploads/chunks'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'mkdir',
  path: '/root/lolisafe/uploads/chunks'
}
error Command failed with exit code 1.

[FEATURE REQUEST] S3 support

Describe the solution you'd like
Allow uploading to S3-type storage buckets. My VPS has very limited space, but I have a Wasabi bucket set up and I want to utilize my space there.

Why remove the upload from URL feature

Now I know it's because DDG changed their proxy

So that leads me to this question:

Why not just download images directly? In the end, that's what DDG does and it works for them... that's actually what every good website does (except Google Images), so it's really common; I don't see why you wouldn't be okay with just downloading the images directly.

It would be more reliable, more ethical (using DDG's own proxy is not nice at all), and it would use the same amount of bandwidth.

If you are scared of people using it to connect the server to illegal websites, remember you're already allowing any arbitrary file to be uploaded, so you're already taking a big risk; may as well go through with it xD

I don't think any country would ever hold you liable for this, as there is no way to prevent it and you're not the one who initiated the action anyway.

So yeah, consider that please, I liked this feature :x

[BUG] Something went wrong installing the "sharp" module

Describe the bug
After updating to 3a787ad service not starting.

To Reproduce
OS: Centos 7 | Node: 13.11.0 | Yarn: 1.22.4

Expected behavior
Service start after update.

Additional context
Log:

Mar 12 18:59:54 host lolisafe: [2020-03-12 18:59:54] Uncaught Exception: Error:
Mar 12 18:59:54 host lolisafe: Something went wrong installing the "sharp" module
Mar 12 18:59:54 host lolisafe: /lib64/libstdc++.so.6: version "CXXABI_1.3.8" not found (required by /var/lolisafe/node_modules/sharp/build/Release/sharp.node)
Mar 12 18:59:54 host lolisafe: - Remove the "node_modules/sharp" directory then run
Mar 12 18:59:54 host lolisafe: "npm install --ignore-scripts=false --verbose" and look for errors
Mar 12 18:59:54 host lolisafe: - Consult the installation documentation at https://sharp.pixelplumbing.com/install
Mar 12 18:59:54 host lolisafe: - Search for this error at https://github.com/lovell/sharp/issues
Mar 12 18:59:54 host lolisafe: at Object. (/var/lolisafe/node_modules/sharp/lib/constructor.js:34:9)
Mar 12 18:59:54 host lolisafe: at Module._compile (internal/modules/cjs/loader.js:1147:30)
Mar 12 18:59:54 host lolisafe: at Object.Module._extensions..js (internal/modules/cjs/loader.js:1167:10)
Mar 12 18:59:54 host lolisafe: at Module.load (internal/modules/cjs/loader.js:996:32)
Mar 12 18:59:54 host lolisafe: at Function.Module._load (internal/modules/cjs/loader.js:896:14)
Mar 12 18:59:54 host lolisafe: at Module.require (internal/modules/cjs/loader.js:1036:19)
Mar 12 18:59:54 host lolisafe: at require (internal/modules/cjs/helpers.js:72:18)
Mar 12 18:59:54 host lolisafe: at Object. (/var/lolisafe/node_modules/sharp/lib/index.js:3:15)
Mar 12 18:59:54 host lolisafe: at Module._compile (internal/modules/cjs/loader.js:1147:30)
Mar 12 18:59:54 host lolisafe: at Object.Module._extensions..js (internal/modules/cjs/loader.js:1167:10)

Currently I'm looking for a solution, maybe you could help faster. ^^

[FEATURE REQUEST] Registration keys

The ability to have registration on, but only accessible if you have a registration key (Which would be given out by admins) would be something nice to have.
Admins would be able to give all/certain users a one time code that could be given to someone else, allowing them to register an account.

Disabling file cache identifiers should disable stricter collision check altogether

Even when caching is disabled, at the moment it's still programmed to read the file list of the uploads directory on EVERY new upload. In the end it'll still require more memory, albeit temporarily.
But I was thinking: if multiple new uploads happen in a short period of time, memory usage during that time will likely be even worse than when caching is enabled, as each attempt to read the file list is its own task. As far as I know, they will not be removed from memory the instant the session ends, but only after a period of time by the garbage collector (I may be wrong; I haven't read too deeply into how Node's memory management works).

So yeah, I think it's better to just disable stricter collision check altogether when caching is disabled.
Basically it will fall back to its old behavior before commit 6543a87.

Of course, when it's disabled, collisions such as "abcd.jpg" and "abcd.png" existing at the same time will be able to occur again, causing both files to share a single thumbnail (remember, thumbnails are only ever saved as PNG files regardless of the original file's extension, so both files will use the thumbnail of whichever file was uploaded earliest). But people should enable caching anyway when they don't want that to occur, while those who don't even use thumbnails can instead choose to disable caching to reduce unnecessary memory usage.

On a side note, this is this fork's first issue. All this time I forgot to enable Issues, kek.

[FEATURE REQUEST] Add ability to set chunk folder location

Describe the solution you'd like
I would like to be able to set the location where chunks are saved

Additional context
I use a Minio bucket for storage by mounting it with s3fs and putting the uploads folder into it, then serving and caching files with Nginx on a subdomain. However, the uploads folder includes the chunks, which are temporary files that introduce unnecessary traffic and slow down uploads.

[FEATURE REQUEST] Manage Albums Admin Page

Describe the solution you'd like
Similar to the manage users and uploads, it would be nice to have a page that displays the albums that have been created

Additional context
Format similar to the 'manage your albums' page but the admin page would include usernames + buttons/popup to modify it within the page

[FEATURE REQUEST] MiB as default unit

Hello,

I would like to know if it is possible to set the default size unit as MiB (mebibyte) instead of MB (megabyte) as it is what computers really use.

Having MB as the default could lead Windows users (thanks, Microsoft) to think that they can upload e.g. 512 "MB" (really MiB) when they can only upload 488 "MB" (MiB). Uploading a 500 "MB" file as shown in Windows Explorer would fail, as it is bigger than 488 MiB.

Could I set config.js maxSize field to 512MiB? Or make the binary notation the default one for new users?
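The arithmetic behind the request, as a quick sketch:

```javascript
// Windows file managers label MiB (1024^2 bytes) as "MB",
// while a decimal megabyte is 1000^2 bytes.
const MiB = 1024 * 1024
const MB = 1000 * 1000

// Convert a limit expressed in decimal megabytes to mebibytes.
function mbToMiB (mb) {
  return (mb * MB) / MiB
}
```

So a 512 MB server limit is only about 488 MiB, and a file Explorer shows as "500 MB" (really 500 MiB) exceeds it.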

[FEATURE REQUEST] Use PostgreSQL as database

Is your feature request related to a problem? Please describe.
This application currently uses sqlite for its database which is not very scalable and is limiting in many regards.

Describe the solution you'd like
Switch (or allow an option) to use PostgreSQL as database as it is a better database in every way.

Additional context
There are cases where servers running lolisafe already have Postgres for one reason or another. Being able to use the same database for lolisafe would be a welcome improvement.

[FEATURE REQUEST] Permalinks

In the current version /dashboard is the main permalink for accessing all pages and sorting. Could it be possible to support different permalinks for all pages such as /admin/stats /admin/uploads for admin related pages and /uploads /albums /mytoken /changepass for user pages? If also possible could URL parameters (query strings) also be supported for sorting/filtering, like /uploads?page=35 or /uploads?page=35&names=original&filter=thatonefile.ext&view=grid for sorting/filtering/viewing.

[FEATURE REQUEST] Multiple domain support

A nice feature would be to support multiple domains. An option under the dashboard that would allow users to set which domain they want their files to link under.

Could possibly do something where you can access a file with any domain. So let's say I upload a file and it's called domain.tld/pnrTMp.png; using domain2.tld/pnrTMp.png would also work. The file wouldn't be limited to just the one domain.

Now for connecting multiple domains, I think the best option would be to set a "main" domain that everyone defaults to. You would add each domain in the config and set up a server block for each domain with the lolisafe nginx config.

I have no clue how hard this would be to implement.

Extension names exclusion for virus scanning

I've wasted so many resources on unnecessarily scanning huge video files, for example.
I'll just not scan them in the future. Probably images too.
Of course bad people can then rename the extension name of their infected files, but I'll just rely on Google Safe Search for that, if they ever get detected.
Y'all bad people gotta stop sharing those infected files on public sites, lmao.
Anyway, ClamAV ain't perfect either. I'd still occasionally receive alerts from Google Safe Search about infected files.

Chibi Image Source

I often use your site for sharing images with my friends. It is quite sad to say goodbye, but I can tell that managing such a site surely causes financial issues, and moreover everyone has a life.
I will not bother you anymore (though I am not doing so now).

I opened this issue to ask if you can share the .png images of the characters on the website,
like https://safe.fiery.me/render/miku/044.png

I like them a lot, and I often refresh so that I can download the images.

Explore more about parallel chunked uploads

Back in June, I rewrote the logic for chunked uploads so that instead of rebuilding their chunks later, all chunked uploads would instead be streamed to a single temporary file as they were being uploaded, one by one (b4c8b1d). That had another advantage of being able to gradually hash the chunks as they were being uploaded, skipping the need to re-read the finished file merely for hashing (only then was it befitting of the whole "blazing fast" train of thought).
But that was only feasible since chunked uploads were forcefully made to be uploaded in series. You could have multiple chunked uploads active all at once, but only when all of them were of different files.

Split upload age option for registered and guests.

It would be nice if admins could configure the upload age option (automatically delete uploads after a certain amount of time) for registered users and guests separately.
Example: guests can't configure upload age, while registered users can choose from the available options.

[BUG] Wrong IPs reported?

Hello,
it seems that lolisafe doesn't report the correct IPs on the "Manage uploads" tab of the admin interface. All the uploads have the same IP: ::ffff:127.0.0.1

I am not sure if this has something to do with nginx not passing the correct IP to the lolisafe daemon or if it is an error of lolisafe itself.

I am using the nginx config sample provided by lolisafe and nginx logs show the real visitor's address. Here is a sample of the access.log of nginx.

37.14.X.X - - [22/Oct/2019:13:15:51 +0200] "GET /js/dashboard.js HTTP/2.0" 304 0 "https://[REDACTED]/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"
37.14.X.X - - [22/Oct/2019:13:15:51 +0200] "GET /images/logo.png HTTP/2.0" 304 0 "https://[REDACTED]/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"
37.14.X.X - - [22/Oct/2019:13:15:51 +0200] "GET /js/misc/utils.js HTTP/2.0" 304 0 "https://[REDACTED]/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"
37.14.X.X - - [22/Oct/2019:13:15:52 +0200] "POST /api/tokens/verify HTTP/2.0" 200 110 "https://[REDACTED]/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"
37.14.X.X - - [22/Oct/2019:13:15:52 +0200] "GET /api/albums/sidebar HTTP/2.0" 304 0 "https://[REDACTED]/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"
37.14.X.X - - [22/Oct/2019:13:15:55 +0200] "GET /api/uploads/0 HTTP/2.0" 200 1070 "https://[REDACTED]/dashboard" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:69.0) Gecko/20100101 Firefox/69.0"

Any help is appreciated.
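The usual cause is the app reading the socket address instead of the forwarding headers nginx sets. A sketch of the header-based lookup (illustrative; the real fix may simply be a proxy-trust setting in the app plus proxy_set_header X-Real-IP / X-Forwarded-For in the nginx server block):

```javascript
// Derive the client IP behind a reverse proxy. If the app only reads the
// socket address, every upload appears to come from ::ffff:127.0.0.1.
function clientIp (headers, socketAddress) {
  const forwarded = headers['x-forwarded-for']
  // Left-most entry is the original client; later ones are proxies.
  if (forwarded) return forwarded.split(',')[0].trim()
  return headers['x-real-ip'] || socketAddress
}
```

Note this should only be trusted when the headers are known to come from your own proxy, since clients can forge them otherwise.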

[FEATURE REQUEST] captcha

Implement a captcha for the signup page. Because Google is starting to charge for reCAPTCHA (and reCAPTCHA is just hell for anyone who doesn't use Chrome), something like hCaptcha or a self-made one makes sense.

This solves the issue of automatic account creation easily bypassing limits.

Cluster mode (multi-core support)

PM2 has a built-in cluster mode, so I think I can focus on developing the cluster support with it in mind, instead of having to make a cluster master script.
Or maybe I can consider making a separate cluster master script anyway, but the main file lolisafe.js should still be a script that can be forked by PM2 (?).

The current state of the safe is still not suitable for clustering due to its many implementations of in-memory state (such as file identifiers caching, statistics caching, album zip generation states, etc.).
There are 2 solutions that I have in mind for this.
The first is to make my own cluster master script and put all in-memory state there. Cluster workers will then communicate with the master to get/set state. But we won't be able to use PM2's cluster mode with this, as PM2 would have to run the cluster master script instead.
The second is to use external in-memory storage, such as Redis. This method may eliminate the need for a cluster master script, as each worker will communicate directly with Redis. This solution may be preferable if I want to make use of PM2's built-in cluster mode.

I need to look into how to handle the clamdjs scanner as well.
For instance, will it be fine to create a new clamdjs scanner instance for each cluster worker, or should I create only one in a cluster master script?
I think the former should be fine, because in the end all instances will communicate with a single clamd server anyway.

I'll also need to look into race conditions when handling file names.
It's not an issue when file identifiers are cached, since I've coded it to immediately lock a name once generated (so the same name will not be used by future requests, even if the name hasn't yet been used to write a file to disk), but it's still an issue when that feature is turned off.
The solution is rather simple: do the same thing, i.e. lock the names by temporarily keeping them in memory as well (though as opposed to the identifier cache, where all file names are cached, this would only temporarily hold names about to be used for new files).
But I'll also need to review other parts of the code with race conditions in mind.
There may be some things that I currently can't remember.
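The temporary in-process lock described above could be as small as this sketch (function names are made up; within a single process a `Set` is enough, since Node's event loop makes the check-and-add sequence atomic):

```javascript
// Sketch of a temporary name lock: only names about to be written to disk
// are held, and they are released once the write finishes (unlike the full
// identifier cache, which keeps every existing name).
const pendingNames = new Set()

function tryLockName (name) {
  if (pendingNames.has(name)) return false // someone else grabbed it first
  pendingNames.add(name)
  return true
}

function unlockName (name) {
  pendingNames.delete(name)
}
```

A name generator would retry with a new random identifier whenever `tryLockName()` returns `false`.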

Anyway, this is a long-term objective.
There is no immediate need for it yet, as safe.fiery.me currently isn't demanding enough to need its load spread across all CPU cores.
This is just a note to keep myself reminded of what I had in mind for this particular objective (and hopefully to receive some insights from other users of this fork).

[FEATURE REQUEST] Rewriting Content-Disposition filename header on direct nginx serving block

I hope this makes sense, but one thing I noticed about the script is that in both the file manager and files served by nginx, the original filename is not used.

I was thinking it might be possible to have nginx rewrite the Content-Disposition filename to the original filename, so that the URL https://i.fiery.me/ykJfU.jpg stays accessible as-is, but when you save it in a browser it is written as the original filename grandcanyon-image142.jpg instead of ykJfU.jpg.
Kipp has this working nicely.

My idea was that nginx could possibly parse a local text/CSV/JSON/XML file, written to by lolisafe, with entries like ykJfU.jpg|image142.jpg; nginx would read the first field, find its match, then set the Content-Disposition filename to the second field, so the URL keeps the original filename on saving.

So far my findings have not been great.
It might be possible with Lua in nginx:
https://stackoverflow.com/questions/57612920/nginx-lua-write-original-filename-to-file
or possibly https://www.gakhov.com/articles/implementing-api-based-fileserver-with-nginx-and-lua.html
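Whichever layer ends up emitting the header (nginx/Lua or the Node app itself), the value it needs to produce is an RFC 6266-style Content-Disposition with an ASCII fallback plus a UTF-8 `filename*` parameter. A sketch of building that value from the stored original name (the function name is hypothetical):

```javascript
// Sketch: build a Content-Disposition value carrying the original filename.
// ASCII fallback for legacy clients, filename* (RFC 5987 encoding) for the rest.
function contentDisposition (originalName) {
  const fallback = originalName
    .replace(/[^\x20-\x7e]/g, '_') // replace non-printable-ASCII characters
    .replace(/"/g, "'")            // avoid breaking the quoted-string
  const encoded = encodeURIComponent(originalName)
  return `inline; filename="${fallback}"; filename*=UTF-8''${encoded}`
}
```

So for https://i.fiery.me/ykJfU.jpg, the mapping file would resolve `ykJfU.jpg` to `grandcanyon-image142.jpg`, and the response would carry `contentDisposition('grandcanyon-image142.jpg')`.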

[Help] Deploying SSL

Hello,
I am a complete noob when it comes to deploying things like this. I was able to deploy it to a server but couldn't figure out how to configure SSL. I have valid certs but don't know how to use them. Could anyone help me?
Thanks.
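For reference, the usual setup is to terminate TLS in a reverse proxy in front of lolisafe. A minimal nginx sketch, assuming lolisafe listens on port 9999 (the domain and certificate paths below are placeholders to replace with your own):

```nginx
# Hypothetical example: terminate TLS in nginx and proxy to lolisafe.
server {
  listen 443 ssl http2;
  server_name safe.example.com;

  ssl_certificate     /etc/ssl/safe.example.com/fullchain.pem;
  ssl_certificate_key /etc/ssl/safe.example.com/privkey.pem;

  client_max_body_size 100M;  # must cover your chunk/upload size limits

  location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://127.0.0.1:9999;
  }
}
```

With this, lolisafe itself keeps serving plain HTTP locally and nginx handles the certificates.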

[BUG] ShareX upload fails for files over 100MB


To Reproduce
Steps to reproduce the behavior:

  1. Open ShareX
  2. Upload a file larger than 100MB
  3. Wait until the upload fails
  4. Error message:

Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host.

Request URL:
https://safe.0x0.la/api/upload

Stack trace:
at System.Net.Sockets.NetworkStream.Write(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.Security._SslStream.StartWriting(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.Security._SslStream.ProcessWrite(Byte[] buffer, Int32 offset, Int32 count, AsyncProtocolRequest asyncRequest)
at System.Net.TlsStream.Write(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.PooledStream.Write(Byte[] buffer, Int32 offset, Int32 size)
at System.Net.ConnectStream.InternalWrite(Boolean async, Byte[] buffer, Int32 offset, Int32 size, AsyncCallback callback, Object state)
at System.Net.ConnectStream.Write(Byte[] buffer, Int32 offset, Int32 size)
at ShareX.UploadersLib.Uploader.TransferData(Stream dataStream, Stream requestStream, Int64 dataPosition, Int64 dataLength)
at ShareX.UploadersLib.Uploader.SendRequestFile(String url, Stream data, String fileName, String fileFormName, Dictionary`2 args, NameValueCollection headers, CookieCollection cookies, HttpMethod method, String contentType, String relatedData)

Expected behavior

The file should be uploaded to 0x0.la.

Additional context

I believe this issue has something to do with Cloudflare. I have reported this in the past for other lolisafe hosts, with the same result.

Building sqlite3 v5.0.1 now requires Python 2 again

Since commit 4991b92, the sqlite3 npm package has gone back to a node-gyp version that only works with Python 2 (the previous version only worked with Python 3 instead).
This is irrelevant if they already have prebuilt binaries for your machine, but otherwise having Python 2 is a requirement for the automated local build/compile.
You may also have to make sure python actually points to your Python 2, or set a preferred Python binary by following these instructions: https://github.com/nodejs/node-gyp#configuring-python-dependency.
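Per the node-gyp instructions linked above, the Python binary can be pinned either globally via npm config or per-install via an environment variable (the paths below are examples; adjust to wherever your Python 2 lives):

```shell
# Point node-gyp at a Python 2 binary globally
npm config set python /usr/bin/python2.7

# ...or only for a single install, via node-gyp's environment variable
NODE_GYP_FORCE_PYTHON=/usr/bin/python2.7 npm install sqlite3
```
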

A script to purge cache of HTML pages

Hmm, now that we're actively caching HTML pages with Cloudflare (in our use case pretty much only the homepage itself, the dashboard, and the FAQ), we need a script that calls Cloudflare's API to purge the cache of those pages on every update (perhaps even have it read config.js to find extra enabled pages and include those in the request).

I also still need to make sure the current Cache-Control for HTML pages can deliver the latest changes ASAP. I think it already does, but this is a self-reminder.

Now, for those who aren't aware, the last few commits were dedicated to making a safe.fiery.me-exclusive Cache-Control, mainly to cache HTML pages (the frontend).
This ensures the frontend itself loads as fast as possible anywhere on planet Earth.
API calls are not cached, so those will still lag behind in faraway countries, but having the pages load fast should improve the overall user experience.

At the moment the frontend is cached for about 12 hours. After making this script, I presume it should be fine to raise that to 30 days right away? Not sure though; I still need to see whether browsers truly respect must-revalidate in Cache-Control, and how often they revalidate.
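The purge script could be a small Node program along these lines. The endpoint path and the `{ "files": [...] }` payload follow Cloudflare's documented purge_cache API; the page list and how the zone ID and API token are read from config are assumptions (Node 18+ assumed for global `fetch`):

```javascript
// Sketch of a Cloudflare cache-purge script for the cached HTML pages.
// Zone ID, token, and page list would come from config.js in practice.
function buildPurgePayload (baseUrl, pages) {
  // pages like ['', 'dashboard', 'faq'] -> absolute URLs for the purge request
  return { files: pages.map(page => page ? `${baseUrl}/${page}` : baseUrl) }
}

async function purgePages (zoneId, token, baseUrl, pages) {
  const res = await fetch(`https://api.cloudflare.com/client/v4/zones/${zoneId}/purge_cache`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(buildPurgePayload(baseUrl, pages))
  })
  const data = await res.json()
  if (!data.success) throw new Error(JSON.stringify(data.errors))
}
```

Running it as a post-deploy step would let the 12-hour (or eventually 30-day) cache window coexist with instant updates.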

Album list API should return all albums in certain situations

self.list = async (req, res, next) => {

e.g. when presenting the option to choose an album when moving an upload into an album, or the album selector in the homepage uploader.
The payload can potentially be simplified to only IDs and names, to minimize overhead for people with a nasty amount of albums.

Afterwards, I may consider listing all albums in the sidebar anyway, but with an expand/collapse toggle.

Additionally, consider allowing an album's file list to be fetched without authorization, as long as its public page is enabled.
This is a preparation stage for allowing Magane to use lolisafe-based albums as custom packs (the lolisafe upstream dev branch already has a non-auth API for this).
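The trimmed payload could simply be a projection of the full album rows; a sketch (field names assumed to match the albums table, and the function name is made up):

```javascript
// Sketch: reduce full album rows to the minimum a select menu needs.
function simplifyAlbums (albums) {
  return albums.map(album => ({ id: album.id, name: album.name }))
}
```

The select-menu endpoint would then return `simplifyAlbums(rows)` instead of the full rows, keeping the response small even for users with hundreds of albums.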

[BUG] User creation from dashboard

Describe the bug
Recently you added the possibility to create users from the dashboard on private instances.

I've updated mine to the latest version, but somehow I can't add users: the button doesn't appear at all for me, neither as root nor as a newly created admin account.

Make large files great again

I was in the process of stress testing this app before making the switch, but I seem to have run into some problems with large files.

I don't expect to upload files this large, but I tried to upload a 23GB MKV file through my admin ShareX config. This was directly to my publicip:9999, with no nginx in between. Monitoring RAM and CPU: CPU seemed to spike a few times throughout the upload, but RAM held steady at around 230MB total usage and wouldn't go up, which is great. I watched the file grow in size in /uploads and started counting how long it would take from the upload finishing to it giving me the URL. It was stuck for about a minute and 30 seconds after the upload was done but before returning a URL. After that, the upload started over from the beginning, although the old file remained in my uploads folder while a new file was being written for the new transfer.

I cancelled it at that point, since it would probably just loop over and over. I had tested beforehand with a gigabyte-sized file and it worked all right (there was some delay between the upload finishing and the URL being generated). In my config I have maxSize: '150000MB' and noJsMaxSize: '100GB'.
The other main options are pretty much default.

Is there a way I could give you helpful debug logs for these transfers? I'm worried I might be hitting a bottleneck with Node's architecture.

[FEATURE REQUEST] Allow to change headers

I would like to be able to change the response headers to my own. I have my own custom nginx config, and it conflicts with the currently set headers.

For example, making Strict-Transport-Security longer and adding preload.
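One possible shape for this feature (all names below are hypothetical, not existing lolisafe config keys): keep the current values as defaults and let a config object override them per header before responses are sent:

```javascript
// Sketch of config-driven header overrides: defaults stay unless the
// instance config supplies replacements.
const defaultHeaders = {
  'Strict-Transport-Security': 'max-age=15552000' // current default, ~180 days
}

function buildResponseHeaders (overrides = {}) {
  return { ...defaultHeaders, ...overrides }
}
```

A config entry like `{ 'Strict-Transport-Security': 'max-age=63072000; includeSubDomains; preload' }` would then cover the use case in this request without touching code.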

[BUG] EXDEV: cross-device link not permitted

Describe the bug
When the chunks and uploads folders are set to different drives, chunked uploads fail with the error EXDEV: cross-device link not permitted.

I'm sorry for causing trouble with my previous issue😅

[FEATURE REQUEST] Option to choose field for filename in frontend

I would find this feature very useful, since I was hoping to use the randomly generated filename only as a random URL, but it's shown in the frontend as well.

Could there be an option in the config to choose between originalfilename.ext and filename.ext?

It would also make searching easier, I'd imagine.
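The toggle itself would be tiny; a sketch, assuming the upload records keep both the stored name and the original name (the flag and function names below are hypothetical):

```javascript
// Sketch: choose which field the frontend displays for a file.
function displayName (file, preferOriginal = false) {
  return preferOriginal && file.original ? file.original : file.name
}
```

The config flag would flow through to the dashboard and file manager templates wherever a filename is rendered.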
