mtlynch / picoshare
A minimalist, easy-to-host service for sharing images and other files
Home Page: https://demo.pico.rocks
License: Other
Firefox seems to block inline <style> elements even though we're using the nonce values. We use the same mechanism for <script> elements, and Firefox doesn't have a problem there. Everything works as expected in Chrome.
Observed on Firefox v100 on Win10.
Content Security Policy: The page’s settings blocked the loading of a resource at inline (“style-src”).
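For context, the nonce mechanism works by sending a fresh per-response nonce in the CSP header and stamping the same value on each inline element. A stdlib-only sketch of that construction (function and template names are illustrative, not PicoShare's actual code):

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// makeNonce returns a fresh base64 nonce for a single response.
func makeNonce() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		panic(err)
	}
	return base64.StdEncoding.EncodeToString(b)
}

// buildCSP builds a Content-Security-Policy value that allows inline
// <script> and <style> elements carrying the given nonce.
func buildCSP(nonce string) string {
	return fmt.Sprintf(
		"default-src 'self'; script-src 'self' 'nonce-%s'; style-src 'self' 'nonce-%s'",
		nonce, nonce)
}

func main() {
	nonce := makeNonce()
	fmt.Println(buildCSP(nonce))
	// The template then stamps the same value on each element:
	//   <style nonce="{{ .CSPNonce }}">...</style>
}
```

If this same construction is accepted for script-src but rejected for style-src in Firefox only, that points at a browser-side difference rather than a header bug.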
It would be nice to have a convenience suffix on the route that the backend doesn't even have to process; it would just make links nicer to share, like:
someserver.com/!ABC123/dummyfile.txt
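A sketch of routing that tolerates such a suffix, mirroring the path scheme in the example above (this is not PicoShare's actual router code):

```go
package main

import (
	"fmt"
	"strings"
)

// entryIDFromPath extracts the entry ID from a short-link path, ignoring
// an optional vanity filename suffix, so /!ABC123 and
// /!ABC123/dummyfile.txt both resolve to the same entry.
func entryIDFromPath(path string) (string, bool) {
	if !strings.HasPrefix(path, "/!") {
		return "", false
	}
	rest := strings.TrimPrefix(path, "/!")
	id, _, _ := strings.Cut(rest, "/") // drop everything after the ID
	if id == "" {
		return "", false
	}
	return id, true
}

func main() {
	for _, p := range []string{"/!ABC123", "/!ABC123/dummyfile.txt"} {
		id, ok := entryIDFromPath(p)
		fmt.Println(p, "->", id, ok)
	}
}
```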
Hi,
First, thanks for all the work and for creating this project! I like the minimalist approach that still includes all the functionality I'd wish for in this kind of service.
I gave the Docker container a try, and the only thing keeping me from deploying on my low-spec machine is that memory consumption increases by roughly the upload size of each file. I tried with three test files, 100M each. Memory consumption had gone up by 300M after the last upload. I don't know if and when the memory gets freed again, but it wasn't within 10 minutes or so. Even if it were freed within a reasonable timeframe, uploads could easily consume all the memory on my machine and lead to OOM, or at least intense swapping, which shouldn't be the case.
If you use many special characters in a generated password, you end up in a login loop.
Picoshare started crashing over and over today. With an error that appears to be related to litestream.
I rolled back to 1.16 to check if it was something with the 1.17 release but the issue continued.
I had to comment out the litestream elements in the Docker Compose file to get it to start and remain stable in non-litestream mode.
Nothing has changed with regard to the Amazon S3 setup for PicoShare.
Any ideas?
Log below:
2022-05-27T20:25:59.488279190Z ++ echo true
2022-05-27T20:25:59.488301790Z + export readonly DB_PATH=/data/store.db
2022-05-27T20:25:59.488305390Z + DB_PATH=/data/store.db
2022-05-27T20:25:59.488307950Z + PS_LAUNCH_CMD='/app/picoshare -db /data/store.db'
2022-05-27T20:25:59.488310550Z + [[ true == \t\r\u\e ]]
2022-05-27T20:25:59.488313070Z + /app/litestream version
2022-05-27T20:25:59.614022173Z v0.3.8
2022-05-27T20:25:59.615379976Z LITESTREAM_BUCKET=p74litestream
2022-05-27T20:25:59.615397336Z LITESTREAM_ENDPOINT=s3.eu-west-2.amazonaws.com
2022-05-27T20:25:59.615415176Z + echo LITESTREAM_BUCKET=p74litestream
2022-05-27T20:25:59.615429616Z + echo LITESTREAM_ENDPOINT=s3.eu-west-2.amazonaws.com
2022-05-27T20:25:59.615433057Z + [[ -f /data/store.db ]]
2022-05-27T20:25:59.616128978Z ++ stat -c %s /data/store.db
2022-05-27T20:25:59.617270501Z Existing database is 785551360 bytes
2022-05-27T20:25:59.617292501Z + echo 'Existing database is 785551360 bytes'
2022-05-27T20:25:59.617307061Z + exec /app/litestream replicate -exec '/app/picoshare -db /data/store.db'
2022-05-27T20:25:59.627661206Z litestream v0.3.8
2022-05-27T20:25:59.628155647Z initialized db: /data/store.db
2022-05-27T20:25:59.628193167Z replicating to: name="s3" type="s3" bucket="p74litestream" path="db" region="eu-west-2" endpoint="s3.eu-west-2.amazonaws.com" sync-interval=1s
2022-05-27T20:25:59.674667519Z 2022/05/27 20:25:59 initializing random seed
2022-05-27T20:25:59.681445255Z 2022/05/27 20:25:59 main.go:22: Starting picoshare server
2022-05-27T20:25:59.681832176Z 2022/05/27 20:25:59 sqlite.go:51: reading DB from /data/store.db
2022-05-27T20:25:59.690743238Z 2022/05/27 20:25:59 sqlite.go:74: Migration counter: 5/5
2022-05-27T20:25:59.691986681Z 2022/05/27 20:25:59 main.go:49: Listening on 3001
2022-05-27T20:26:03.244189195Z panic: runtime error: index out of range [4194304] with length 4194304
2022-05-27T20:26:03.244226356Z
2022-05-27T20:26:03.244230676Z goroutine 41 [running]:
2022-05-27T20:26:03.244927477Z github.com/pierrec/lz4/v4/internal/lz4block.(*Compressor).CompressBlock(0x40031c2000, 0x400118e000, 0x400000, 0x400000, 0x4000d8e000, 0x400000, 0x400000, 0x4d2218, 0x40000b4000, 0x7ffff800001)
2022-05-27T20:26:03.244936437Z /home/runner/go/pkg/mod/github.com/pierrec/lz4/[email protected]/internal/lz4block/block.go:204 +0x90c
2022-05-27T20:26:03.244939837Z github.com/pierrec/lz4/v4/internal/lz4block.CompressBlock(0x400118e000, 0x400000, 0x400000, 0x4000d8e000, 0x400000, 0x400000, 0xd, 0x40000b4018, 0x40000b4000)
2022-05-27T20:26:03.244942797Z /home/runner/go/pkg/mod/github.com/pierrec/lz4/[email protected]/internal/lz4block/block.go:91 +0x80
2022-05-27T20:26:03.244945437Z github.com/pierrec/lz4/v4/internal/lz4stream.(*FrameDataBlock).Compress(0x40001be000, 0x40000bc090, 0x400118e000, 0x400000, 0x400000, 0x0, 0x40001139d8)
2022-05-27T20:26:03.244948397Z /home/runner/go/pkg/mod/github.com/pierrec/lz4/[email protected]/internal/lz4stream/block.go:221 +0x7c
2022-05-27T20:26:03.244977477Z github.com/pierrec/lz4/v4.(*Writer).write(0x40000bc120, 0x400118e000, 0x400000, 0x400000, 0x1, 0xffffa8fda108, 0x4000113a08)
2022-05-27T20:26:03.244981557Z /home/runner/go/pkg/mod/github.com/pierrec/lz4/[email protected]/writer.go:128 +0x64
2022-05-27T20:26:03.244984397Z github.com/pierrec/lz4/v4.(*Writer).Write(0x40000bc120, 0x40015fc978, 0xca0, 0xca0, 0x378, 0x0, 0x0)
2022-05-27T20:26:03.244987117Z /home/runner/go/pkg/mod/github.com/pierrec/lz4/[email protected]/writer.go:113 +0x2c4
2022-05-27T20:26:03.244989757Z github.com/benbjohnson/litestream.(*Replica).syncWAL(0x40001283c0, 0x12368e8, 0x4000371840, 0x0, 0x0)
2022-05-27T20:26:03.245053477Z /home/runner/work/litestream/litestream/replica.go:266 +0x594
2022-05-27T20:26:03.245072318Z github.com/benbjohnson/litestream.(*Replica).Sync(0x40001283c0, 0x12368e8, 0x4000371840, 0x0, 0x0)
2022-05-27T20:26:03.245076198Z /home/runner/work/litestream/litestream/replica.go:190 +0x5e8
2022-05-27T20:26:03.245078958Z github.com/benbjohnson/litestream.(*Replica).monitor(0x40001283c0, 0x12368e8, 0x4000371840)
2022-05-27T20:26:03.245081758Z /home/runner/work/litestream/litestream/replica.go:629 +0x130
2022-05-27T20:26:03.245084198Z github.com/benbjohnson/litestream.(*Replica).Start.func1(0x40001283c0, 0x12368e8, 0x4000371840)
2022-05-27T20:26:03.245087438Z /home/runner/work/litestream/litestream/replica.go:113 +0x64
2022-05-27T20:26:03.245090078Z created by github.com/benbjohnson/litestream.(*Replica).Start
2022-05-27T20:26:03.245092638Z /home/runner/work/litestream/litestream/replica.go:113 +0xb8
When you paste text on the upload screen, it uploads with a Content-Type of application/octet-stream. The better behavior would be text/plain so that it renders in the browser when the user views the upload entry.
Sorry, this is more of a question than an issue.
Is there a way to integrate PicoShare with Google Cloud Storage and upload the files directly to the bucket? Could this be ported to a Cloud Function for easy deployment?
Thanks.
The current max expiration is 1 year, but it would be good to have a "never" option for files that are meant to stay permanently.
I was wondering if it's possible to add embed tags so this application can host content for Discord etc. Currently, files show as links, not rich embedded content. Great project, though; I appreciate the work.
If I try to upload a file (.png), I get this error back.
I think the problem comes from trying to format an already-formatted time 🤔?
2022/03/25 18:52:13 upload.go:158: invalid expiration URL parameter: [2022-04-24T18:52:11 02:00] -> parsing time "2022-04-24T18:52:11 02:00" as "2006-01-02T15:04:05Z07:00": cannot parse "" as "Z07:00"
2022/03/25 18:52:13 upload.go:43: invalid expiration URL parameter: invalid expiration URL parameter
::1 - - [25/Mar/2022:18:52:13 +0100] "POST /api/entry?expiration=2022-04-24T18:52:11+02:00 HTTP/1.1" 400 67
The included link shortener is pretty handy, but there are many other FOSS projects that do exactly this. That's why I'd like to ask whether you could consider connecting this to Shlink, or whether that's not an option.
Thanks in advance!
main.go should be at cmd/main.go.
Maybe types/ should be at the root level?
I've accidentally clicked it so many times thinking that it will bring up the file picker because it's the main CTA on the page.
Add CSRF protection with gorilla/csrf.
See What Got Done for an example implementation.
First of all, good work on this project and on the latest update with ARM images.
Currently, picoshare allows the user to upload a file with a filename under 100 characters (including the . and the extension). Anything over 100 characters results in the following error: can't read request body: filename too long
It works with a .txt file with a filename of 97 characters, e.g.: pQmvFuKknUjALPLSIuIuqdgKDVN8xAo2MYEKAxjfURW5JQ858qrQtvxxAmdizK5ILlRKm8McQwGTHAWb180P0ttVscosNPJc4.txt
It should be possible to put a CDN in front of PicoShare and then adjust the link generation so that it shows the CDN's URL rather than the direct PicoShare URL.
On Chrome, if you copy text from a website and paste it into the upload box, PicoShare takes the plaintext version of the text.
On Firefox, if you copy text from a website and paste it into the upload box, PicoShare pastes HTML tags along with the text, which is unexpected.
Observed on Firefox v100 on Win10.
Currently, PicoShare authenticates the password through simple string matching. This is vulnerable to timing attacks and brute-force attacks.
We could use something like PBKDF2 to make it harder for an attacker to crack the password.
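A stdlib-only sketch of the timing-attack half of the fix, using a constant-time comparison. PBKDF2 itself lives in golang.org/x/crypto, so this illustration substitutes a single SHA-256 hash, which removes the timing side channel but does not add brute-force resistance:

```go
package main

import (
	"crypto/sha256"
	"crypto/subtle"
	"fmt"
)

// checkSecret compares a submitted secret against the expected one in
// constant time. Hashing both sides first makes the compared values
// fixed-length; a slow KDF such as PBKDF2 should replace the plain
// SHA-256 here to also resist brute-force attacks.
func checkSecret(submitted, expected string) bool {
	a := sha256.Sum256([]byte(submitted))
	b := sha256.Sum256([]byte(expected))
	return subtle.ConstantTimeCompare(a[:], b[:]) == 1
}

func main() {
	fmt.Println(checkSecret("hunter2", "hunter2")) // true
	fmt.Println(checkSecret("guess", "hunter2"))   // false
}
```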
I'd like to upload directly with the shared secret using something like curl.
It seems that auth is currently done as a cookie. Is it also possible to use an Authorization header directly with the shared secret?
It would be helpful to expose disk space (used, free) in the web UI so that the user understands how much space they have left.
While testing to see the max file size supported, I was able to upload files larger than 1GB, but when uploading a file of approximately 6GB, I received this error message in the browser:
Failed to communicate with server: Failed to fetch
When looking through Docker logs, I found this:
2022/03/23 13:32:44 upload.go:51: error reading body: multipart: NextPart: http: request body too large
PicoShare seems to treat STEP files (eDrawings CAD files) as plaintext files instead of application files.
Hi @mtlynch,
the git hook script isn't working :S (from the ln: illegal option error below, it looks like my ln is the BSD version, which doesn't support GNU-style long options like --symbolic)
./dev-scripts/enable-git-hooks
+ set -e
+ set -u
+++ dirname ./dev-scripts/enable-git-hooks
++ cd ./dev-scripts
++ pwd
+ readonly SCRIPT_DIR=/mypath/GitHub/picoshare/dev-scripts
+ SCRIPT_DIR=/mypath/GitHub/picoshare/dev-scripts
+ cd /mypath/GitHub/picoshare/dev-scripts/..
+ [[ -L .git/hooks ]]
+ [[ -d .git/hooks ]]
+ ln --symbolic --force ../dev-scripts/git-hooks .git/hooks
ln: illegal option -- -
usage: ln [-s [-F] | -L | -P] [-f | -i] [-hnv] source_file [target_file]
ln [-s [-F] | -L | -P] [-f | -i] [-hnv] source_file ... target_dir
link source_file target_file
I'd love to run picoshare with stronger access control than just a shared password for admin functions. But rather than ask picoshare to add things like IP restrictions or 2FA, I think it would make more sense to do this with a reverse proxy like nginx in front of picoshare.
To make this easier, it would be helpful if picoshare had all of its admin functionality under a URL prefix like /admin. This would make it relatively straightforward to, for example, restrict admin functions to specific IPs while allowing all IPs to fetch uploaded files. Or possibly even put admin URLs behind something like oauth2-proxy or Pomerium.
I think this could technically be done today by having a reverse proxy allow public access to all locations starting with /! or /g/ (plus JS, CSS, etc.), and then restricting other URLs. But having a separate /admin prefix would make for a much cleaner and less fragile config.
Hi, I love how easy picoshare is to deploy and use. I'm wondering, is it possible to add a maximum view limit, so that after a certain number of views the link auto-expires? Thanks.
I just noticed that the types.NeverExpire expire option is defined as 3000-01-01T00:00:00Z, but when it's rendered on the Upload file page on a system whose timezone is America/New_York, the value is rendered like this:
<option value="2999-12-31T00:00:00Z">Never</option>
The browser submits the value to the server as 2999-12-31T00:00:00Z, and the server interprets it as types.NeverExpire, so it doesn't seem to be causing a problem, but I'd like to dig into this and figure out why it's not rendering as the year 3000.
Hello! I recently deployed through Docker and then assigned a public domain through Nginx Proxy Manager, but when generating the link it shows the following error message:
But through the browser, via the internal IP, it creates the link and I can then copy it from "Files".
And this message is from a mobile browser:
I hope you can help me!
Thanks!
I would love to be able to make some private notes on the uploaded files.
In my use case, I would love to use picoshare to host some images, avatars,
and other things you can use in forums and other self-hosted services.
And I'd like to be able to make a note on each file that reminds me where I have used the direct link to the file
(in case I want to change the file some day).
I would like to see a progress percentage and speed so I can see how my file upload is doing.
Originally posted by @bendini20 in #109 (comment)
Currently, when a file is deleted from the web GUI, the space taken by that file still remains in the DB.
So, if possible, running VACUUM somewhere in the code when a file is deleted would free up that space.
mega.nz has a feature called Megadrop, where you can give a guest a private URL that provides write-only access, so they can share a file with you, but they can't see any of your other files.
It would be useful to have a similar feature for PicoShare when you want other users to be able to send you large files.
Love the simplicity of PicoShare. If possible, can you add a favicon so the shortcut on my bookmark line is visible?
Once SQLite expands the size of the DB file, it never shrinks it unless you run VACUUM or PRAGMA auto_vacuum = FULL.
We should add an option that regularly VACUUMs the database to reclaim free space.
Thanks to AaronJudgesToothGap for reporting this.
This would probably fit in easily as part of garbagecollect.
SQLite's default page size is 4096 bytes.
There's a perf bottleneck in entries_data because PicoShare has to read/write many rows per file, especially if the file is large. We could optimize this by tuning the chunk size so that the total size of each row is a multiple of 4096. That would let SQLite read exactly the data it needs from disk without having to read an extra page.
Credit: Nicolas Grilly
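Under an assumed per-row overhead (the figure below is made up for illustration; the real overhead depends on the schema and SQLite's record encoding), the chunk size could be derived like this:

```go
package main

import "fmt"

// SQLite's default page size.
const pageSize = 4096

// chunkSizeFor returns the largest chunk size not exceeding target such
// that the chunk plus per-row overhead lands exactly on a page-size
// multiple, so each row fills whole pages with no spillover.
func chunkSizeFor(target, rowOverhead int) int {
	pages := (target + rowOverhead) / pageSize
	return pages*pageSize - rowOverhead
}

func main() {
	const assumedOverhead = 64 // hypothetical per-row overhead in bytes
	size := chunkSizeFor(512*1024, assumedOverhead)
	fmt.Println(size, (size+assumedOverhead)%pageSize) // 524224 0
}
```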
It takes up a lot of memory when uploading and downloading files, and it always takes a long time to respond when downloading.
Why not consider storing files in folders? I always feel that SQLite will have performance problems.
It would be useful if users could upload to PicoShare directly from the command line.
LogPaste supports this functionality, and it's not too difficult.
My first thought is that PicoShare would have to generate some sort of keyed, unpredictable URL so that only the admin can upload files, like https://pico.example.com/upload/?key=1234abc&expires=2022-03-25T000000Z.
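A sketch of how such a key could be derived without storing per-link state, using an HMAC over the expiration. The URL shape and parameter names are taken from the example above but are otherwise invented, not PicoShare's actual API:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// signKey derives an unguessable key for a one-off upload link by MACing
// the expiration with a server-side secret, so the server can verify the
// link without storing it.
func signKey(secret, expires string) string {
	mac := hmac.New(sha256.New, []byte(secret))
	mac.Write([]byte(expires))
	return hex.EncodeToString(mac.Sum(nil))[:16]
}

// verifyKey recomputes the MAC and compares it in constant time.
func verifyKey(secret, expires, key string) bool {
	return hmac.Equal([]byte(signKey(secret, expires)), []byte(key))
}

func main() {
	expires := "2022-03-25T000000Z"
	key := signKey("server-secret", expires)
	fmt.Printf("https://pico.example.com/upload/?key=%s&expires=%s\n", key, expires)
}
```

The expiration rides in the URL itself, so the server would also need to reject keys whose expiration has passed.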
Hi, thanks for this superb piece of software (and the arm64 Docker container), especially for the new feature of anonymous guest upload. This could replace my installation of Nextcloud if it also allowed uploading folders. Would that be possible, or is that out of scope for the project?
I have picoshare in a Docker container on my Raspberry Pi with nginx, and I use the Cloudflare proxy.
From what I've researched, Cloudflare usually places a 100MB restriction on file uploads, but whenever I upload a file above 1MB, it shows me this error.
If I upload the same file using my local address, it doesn't cause any problems.
Is there a solution for this?
Being able to customize or completely disable the homepage would be a nice addition.
If you try to change the port to 3200 instead of 3001 (e.g. -p 3200:3001) on the first installation, picoshare will never start. It also doesn't work if you change the environment variable with -e PORT=3200.
I may be missing something, but I would like to specify where exactly the files are stored that I upload to picoshare. If this is not already a feature, I would love for it to be added.
Hi,
I wanna say great app, keep it up.
Can we also have the ability to delete files after a couple of hours? Options like 30 min, 1 hour, and 5 hours?
Thanks.
Do we need to remove the row after deleting the file?
Currently, the link/file is deleted, but the row is still displayed. Only after a reload of the page is the row gone.
Currently, the user has to decide the expiration time before uploading a file. It would be better if the user could change the expiration time at any time.