
l3uddz / cloudplow


Automatic rclone remote uploader, with support for multiple remote/folder pairings. UnionFS cleaner functionality: deletion of UnionFS whiteout files and their corresponding files on rclone remotes. Automatic remote syncer: sync between different remotes via a Scaleway server instance that is created and destroyed for every sync.

License: GNU General Public License v3.0

Python 98.68% Shell 1.32%

cloudplow's People

Contributors

aaronlake, daghaian, desimaniac, l3uddz, matchai, mitchellklijs, sabrsorensen, saltydk, web-flow


cloudplow's Issues

delay uploading files

It would be great to have the ability to delay files for a specific time before uploading them to Google Drive.
I personally let Plex analyze my local upload folder for 2 days to create thumbnails etc.

thanks

Rclone not sleeping when it hits the limit

It doesn't sleep; it just sits there and keeps trying to upload even though nothing is going through. The logs just show the files it is trying to upload, all sitting at 0%, with no mention of the limit, the sleep, or the error:
"Failed to copy: googleapi: Error 403: User rate limit exceeded

Sabnzbd integration

Describe the problem
I run my media through an mp4 remux workflow of my own using my own scripts. I don't exclude mkvs from upload, because I don't want the remux job to create a backlog if it can't convert quickly enough, but I would like to be able to control how quickly SAB downloads to give the remux job more time to finish.

I'm currently doing this from my script, but it would be cool if cloudplow could throttle SAB in a similar way to how it throttles itself when Plex has active streams.

This happens occasionally, when a lot of media is requested/downloaded at once.

Describe any solutions you think might work

An API request to SAB to limit download speed while cloudplow uploads, possibly depending on the size of the data to upload (see the sketch below).

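A hedged sketch of the kind of call this would involve, using SABnzbd's config API (mode=config&name=speedlimit); the host, port, and API key are placeholders, and the surrounding cloudplow wiring is an assumption:

    import requests

    def set_sab_speedlimit(value, host="http://localhost:8080", apikey="YOUR_API_KEY"):
        # 'value' is either a percentage ("50") or an absolute rate ("2M"),
        # matching SABnzbd's speed limit field.
        params = {"mode": "config", "name": "speedlimit", "value": value,
                  "apikey": apikey, "output": "json"}
        return requests.get(host + "/sabnzbd/api", params=params, timeout=10)

    # e.g. throttle while cloudplow uploads, then restore afterwards:
    # set_sab_speedlimit("25")  ...  set_sab_speedlimit("100")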

Config File

Can we get a sample config file posted for testing?
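While waiting on an official sample, here is a hedged, minimal sketch assembled only from options mentioned elsewhere on this page (rclone_config_path, rclone_excludes, rclone_extras, rclone_sleeps, the uploader schedule); treat the exact nesting and values as assumptions rather than a canonical config.json:

    {
        "core": {
            "rclone_config_path": "/home/user/.config/rclone/rclone.conf"
        },
        "remotes": {
            "google": {
                "rclone_excludes": ["**partial~", "**_HIDDEN~"],
                "rclone_extras": {"--transfers": 8, "--checkers": 16},
                "rclone_sleeps": {}
            }
        },
        "uploader": {
            "google": {
                "check_interval": 30,
                "max_size_gb": 200,
                "exclude_open_files": true,
                "schedule": {"enabled": false, "allowed_from": "00:05", "allowed_until": "12:00"}
            }
        }
    }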

Error on upload

Describe the bug
Error when trying to upload.

To Reproduce
Steps to reproduce the behavior:

  1. Remove the excludes lines from the "uploader" section
  2. Remote folders have spaces in their names

Expected behavior
Upload the content from the folder specified in config.json

Logs

Traceback (most recent call last):
  File "/opt/cloudplow/utils/rclone.py", line 156, in upload
    excludes = self.__excludes2string()
  File "/opt/cloudplow/utils/rclone.py", line 187, in __excludes2string
    self.config['rclone_excludes']).replace('=None', '').strip()
KeyError: 'rclone_excludes'
2021-02-04 00:47:39,448 - ERROR - cloudplow - do_upload - Exception occurred while uploading: Traceback (most recent call last):
  File "/opt/cloudplow/cloudplow.py", line 377, in do_upload
    resp_delay, resp_trigger = uploader.upload()
  File "/opt/cloudplow/utils/uploader.py", line 53, in upload
    upload_status, return_code = rclone.upload(self.__logic)

System Information

  • Cloudplow Version: Develop (latest)
  • Operating System: alpine (Docker)

Additional context

https://hub.docker.com/r/sabrsorensen/alpine-cloudplow/dockerfile
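A hedged sketch of the kind of defensive lookup that would avoid this KeyError when a remote has no rclone_excludes entry; the method name and config layout come from the traceback above, while the join format is an illustrative assumption:

    # Sketch only: tolerate a missing 'rclone_excludes' key instead of raising KeyError.
    def excludes_to_string(remote_config):
        excludes = remote_config.get('rclone_excludes', [])
        if not excludes:
            return ''
        return ' '.join("--exclude=%s" % pattern for pattern in excludes).strip()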

Upload crash with UTF characters in filename

2018-07-25 08:28:32,347 - INFO       - cloudplow            - scheduled_uploader             - Uploader: cap. Local folder size is currently 68 GB over the maximum limit of 2 GB
2018-07-25 08:28:32,348 - INFO       - cloudplow            - do_hidden                      - Starting hidden cleaning
2018-07-25 08:28:32,348 - INFO       - unionfs              - __files                        - Found 0 hidden files in '/mnt/local/.unionfs-fuse'
2018-07-25 08:28:32,348 - INFO       - unionfs              - __folders                      - Found 0 hidden folders in '/mnt/local/.unionfs-fuse'
2018-07-25 08:28:32,350 - INFO       - unionfs              - remove_empty_dirs              - Removed empty directories from '/mnt/local/.unionfs-fuse'
2018-07-25 08:28:32,351 - INFO       - cloudplow            - do_hidden                      - Finished hidden cleaning
2018-07-25 08:28:32,351 - INFO       - decorators           - timer                          - 'do_hidden' from 'cloudplow.py' finished in 0 seconds
2018-07-25 08:28:32,351 - INFO       - cloudplow            - do_upload                      - Starting upload
2018-07-25 08:28:32,373 - INFO       - cloudplow            - do_plex_monitor                - Plex server url + token were validated, sleeping 15 seconds before checking Rclone rc url...
2018-07-25 08:28:32,557 - INFO       - uploader             - upload                         - Uploading '/mnt/local/Media' to remote: cap
2018-07-25 08:28:32,570 - INFO       - process              - execute                        - 2018/07/25 08:28:32 Failed to load filters: bad glob pattern "\\/Movies\\/Aliens\\ \\(1986\\)\\/Aliens\\ \\-\\ Die\\ R\\ückkehr\\ \\(1986\\)\\ Bluray\\-1080p\\.mkv\\.partial\\~ " (regexp "(^|/)\\/Movies\\/Aliens\\ \\(1986\\)\\/Aliens\\ \\-\\ Die\\ R\\ückkehr\\ \\(1986\\)\\ Bluray\\-1080p\\.mkv\\.partial\\~$"): error parsing regexp: invalid escape sequence: `\ü`
2018-07-25 08:28:32,571 - INFO       - uploader             - upload                         - Finished uploading to remote: cap
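The error above suggests the exclude pattern is escaped character by character, which turns "ü" into the invalid regexp escape \ü. A hedged sketch of an escaping helper that only escapes glob metacharacters and leaves non-ASCII letters untouched (the set of special characters is an assumption, not taken from cloudplow's or rclone's source):

    import re

    # Assumed set of characters that are special in rclone's glob filters.
    GLOB_SPECIALS = '*?[]{}\\'

    def escape_for_rclone_filter(path):
        # Backslash-escape only the glob metacharacters; "ü", "é", etc. pass through.
        return re.sub('([%s])' % re.escape(GLOB_SPECIALS), r'\\\1', path)

    # escape_for_rclone_filter("Aliens - Die Rückkehr (1986) Bluray-1080p.mkv.partial~")
    # keeps the "ü" intact while still escaping any brackets or wildcards.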

Remote Duplicate Directory

Background: I'm running Cloudplow with Cloudbox on a remote server to manage my plex library. I have a Gdrive linked to this remote server where Cloudplow uploads all the media.

Problem: My Gdrive Media/Movies folder already had movies in it prior to me connecting it to the remote server. I also downloaded higher-res versions (4K) of the same movies on my remote server. I want to keep both versions for multiple playback options on plex (1080p and 4K). The names of the movie folders are the same on both the remote server and the Gdrive. So now when Cloudplow tries to upload to Gdrive, it gives a Duplicate Directory error and ignores the upload for these movies with multiple versions.

Question: How can I disable this feature in Cloudplow such that the duplicate folders are not ignored and Cloudplow uploads and merges the files into the same duplicate folder on Gdrive?

Request: cloudplow and plex_autoscan without Radarr/Sonarr

Describe the problem
I would love to use both of these scripts in combination,
but I download from share hosters with JDownloader and rename with Filebot into a "Move2Drive" folder. I then just need a script that uploads these afterwards (cloudplow) and then tells Plex to scan those specific locations (plex_autoscan).

Describe any solutions you think might work
Send the location paths of just-added items from cloudplow to plex_autoscan.

Additional context
pretty much what this script does, but with way less functionality
https://github.com/masonr/PLEXiDRIVE

Thank you

Manage service accounts better by rotating

If you upload a lot of data to multiple team drives using a pool of service accounts, it would be great if the app remembered how much it has recently uploaded on each service account and picked the one it has uploaded the least on, instead of seemingly choosing the first service account and working its way through them until it finds one that works, starting from the beginning each time.

It would also be good if, for the rclone sleeps, you could specify moving to the next key as well.

Cleanup of empty folders errors but is reported as successful

If the find command errors because -empty is not supported, CloudPlow reports the folders as cleaned successfully rather than reporting the error. I have wasted quite a bit of time trying to work out why my folders did not get deleted.

I have raised an issue on the docker image as well (docker image issue) for them to fix the find command they install, but really CloudPlow should report that the command errored. This is using the latest CloudPlow as far as I know.

2019-12-01 06:24:50,569 - DEBUG      - path                 - remove_empty_dirs              - Removing empty directories from '/data/main' with mindepth 1
2019-12-01 06:24:50,570 - DEBUG      - path                 - remove_empty_dirs              - Using: find /data/main -mindepth 1 -type d -empty -delete
2019-12-01 06:24:50,585 - INFO       - uploader             - remove_empty_dirs              - Removed empty directories from '/data/main' with mindepth: 1

I logged into the container, ran find /data/main -mindepth 1 -type d -empty -delete manually, and got the error below:

find /data/main -mindepth 1 -type d -empty -delete
find: unrecognized: -empty
BusyBox v1.30.1 (2019-06-12 17:51:55 UTC) multi-call binary.

Usage: find [-HL] [PATH]... [OPTIONS] [ACTIONS]

Search for files and perform actions on them.
First failed action stops processing of current file.
Defaults: PATH is current directory, action is '-print'

        -L,-follow      Follow symlinks
        -H              ...on command line only
        -xdev           Don't descend directories on other filesystems
        -maxdepth N     Descend at most N levels. -maxdepth 0 applies
                        actions to command line arguments only
        -mindepth N     Don't act on first N levels
        -depth          Act on directory *after* traversing it

Actions:
        ( ACTIONS )     Group actions for -o / -a
        ! ACT           Invert ACT's success/failure
        ACT1 [-a] ACT2  If ACT1 fails, stop, else do ACT2
        ACT1 -o ACT2    If ACT1 succeeds, stop, else do ACT2
                        Note: -a has higher priority than -o
        -name PATTERN   Match file name (w/o directory name) to PATTERN
        -iname PATTERN  Case insensitive -name
        -path PATTERN   Match path to PATTERN
        -ipath PATTERN  Case insensitive -path
        -regex PATTERN  Match path to regex PATTERN
        -type X         File type is X (one of: f,d,l,b,c,s,p)
        -executable     File is executable
        -perm MASK      At least one mask bit (+MASK), all bits (-MASK),
                        or exactly MASK bits are set in file's mode
        -mtime DAYS     mtime is greater than (+N), less than (-N),
                        or exactly N days in the past
        -mmin MINS      mtime is greater than (+N), less than (-N),
                        or exactly N minutes in the past
        -newer FILE     mtime is more recent than FILE's
        -inum N         File has inode number N
        -user NAME/ID   File is owned by given user
        -group NAME/ID  File is owned by given group
        -size N[bck]    File size is N (c:bytes,k:kbytes,b:512 bytes(def.))
                        +/-N: file size is bigger/smaller than N
        -links N        Number of links is greater than (+N), less than (-N),
                        or exactly N
        -prune          If current file is directory, don't descend into it
If none of the following actions is specified, -print is assumed
        -print          Print file name
        -print0         Print file name, NUL terminated
        -exec CMD ARG ; Run CMD with all instances of {} replaced by
                        file name. Fails if CMD exits with nonzero
        -exec CMD ARG + Run CMD with {} replaced by list of file names
        -delete         Delete current file/directory. Turns on -depth option
        -quit           Exit
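A hedged sketch of how the empty-directory cleanup could surface this failure instead of logging success; the function and logger names mirror the remove_empty_dirs log entries above, but the implementation itself is an assumption:

    import logging
    import subprocess

    log = logging.getLogger("path")

    def remove_empty_dirs(path, min_depth=1):
        cmd = ["find", path, "-mindepth", str(min_depth), "-type", "d", "-empty", "-delete"]
        proc = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                              universal_newlines=True)
        if proc.returncode != 0:
            # e.g. BusyBox: "find: unrecognized: -empty"
            log.error("Failed removing empty directories from %r: %s",
                      path, proc.stderr.strip())
            return False
        log.info("Removed empty directories from %r with mindepth: %d", path, min_depth)
        return True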

Add Dockerfile

This could also be offered as a Docker container, which would then allow Windows users to use it.

Use rclone_config_path

Currently, rclone_config_path is only used by the syncer, to copy the rclone configuration to the new Scaleway instance.

As we have this configuration option available, it may as well be used by all rclone operations (delete hiddens / upload content).
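A minimal sketch of threading rclone_config_path into every rclone invocation via rclone's --config flag; the helper name and the assumption that rclone_config_path lives under a "core" section are illustrative, not taken from cloudplow's source:

    import subprocess

    def build_rclone_cmd(args, config):
        """Prefix every rclone call with --config when rclone_config_path is set."""
        cmd = ["rclone"]
        rclone_config_path = config.get("core", {}).get("rclone_config_path")
        if rclone_config_path:
            cmd += ["--config", rclone_config_path]
        return cmd + list(args)

    # e.g. for the hidden cleaner:
    # subprocess.call(build_rclone_cmd(["delete", "gdrive:/Media/old.mkv"], config))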

Failed to send discord notifications


To Reproduce
Steps to reproduce the behavior:

  • Enable discord notifications via apprise in the config
  • Wait for a scheduled upload to begin
  • Look at the logs and notice that the Discord notification hasn't been sent; this is the error message:
    end - Failed to send Discord notification: Bad Request - Unsupported Parameters., error=400.

Expected behavior
Discord notifications should be sent via apprise successfully


Logs
end - Failed to send Discord notification: Bad Request - Unsupported Parameters., error=400.


System Information

  • Cloudplow Version: Master (latest)
  • Operating System: Ubuntu 19.04


Max Transfer Reached timeout is hard coded

Max Transfer Reached (Error Code 7) from rclone triggers a mandatory 25 hour cooldown that cannot be configured through the config.json.

Similar to how rclone_sleeps are defined in config.json, an Error Code 7 behavior parameter could also be added.

I would prefer that, if rclone runs out of quota for the day, it tries again in 4 hours, since the quota is on a rolling timeline and I do small batches of uploads throughout the day. When cloudplow detects a log line defined in rclone_sleeps it works well, but on my new seedbox, which uses supervisorctl, it captures Error Code 7 when the quota limit is reached, and I have no way of controlling that parameter. I was able to modify uploader.py and change it to 4 hours to fix my problem, but a parameter should be added to config.json to make this user-friendly and update-friendly.
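A hedged sketch of what a configurable cooldown could look like; the config key name (max_transfer_reached_sleep_hours) is an illustrative assumption, and only the "return code 7 means max transfer reached, currently 25 hours" behaviour comes from the issue itself:

    # Sketch: choose the cooldown after rclone exits with return code 7
    # ("max transfer reached"), falling back to the current hard-coded 25 hours.
    DEFAULT_MAX_TRANSFER_SLEEP_HOURS = 25

    def cooldown_hours(uploader_config):
        # Hypothetical key; would need to be added to config.json and validated on load.
        return int(uploader_config.get("max_transfer_reached_sleep_hours",
                                       DEFAULT_MAX_TRANSFER_SLEEP_HOURS))

    # if return_code == 7:
    #     delay = cooldown_hours(uploader_config)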

Upload only when there are files; otherwise don't log/run

Describe the problem
I would like to use CloudPlow to upload files within a minute of them being created. While you can specify a max file/folder size trigger, I need to set that to 0 to capture small text files. It would be great if there was an option to specify that there must also be at least one file. If you are running every minute to check whether there are any files, the logs currently end up quite large, full of junk runs just saying it found no files.

Describe any solutions you think might work
Add a config setting for the uploader that requires at least one file in the folder before uploading, ignoring folders when checking. Ideally also have an option to not log the run at all if no file was found, so the logs and console only contain useful information, like when actual uploads happen.

Another option may be to use a different trigger for a run, perhaps the two triggers below rather than just a timer:

  1. A setting to trigger an upload on startup (i.e. as soon as you start, without waiting for the timer to trigger).
  2. A setting to trigger based on a file watcher. This could trigger when a file is written to or created, or when a lock is removed, with a short delay (say 10-30 seconds) to give the action time to complete.

min_size_gb?

As of now, if I set max_size_gb to, for example, 100 and the folder size is 120 GB, the script uploads all 120 GB.
But what about an added feature that only uploads 20 GB instead of 120 GB? min_size_gb maybe?

Special characters are not handled when present in an opened file

Describe the bug
When a file is open and an attempt is made to exclude it from an rclone operation, and that file's name contains certain special characters, an error is produced and the move fails.
To Reproduce
Steps to reproduce the behavior:

  1. Open Videos/Movies/Déjà Vu (2006)/Déjà Vu (2006) Bluray-1080p.mkv or some other video with special characters in the title
  2. Start cloudplow

Expected behavior
Cloudplow should exclude the open file and continue; instead it crashes with an error message.

Logs
2021-01-08 18:43:03,873 - INFO - uploader - upload - Excluding these files from being uploaded because they were open: ['Videos/Movies/Blow (2001)/Blow (2001) Remux-1080p.mkv', 'Videos/Movies/Deep Impact (1998)/Deep Impact (1998) Bluray-1080p.mkv', 'Videos/Movies/Déjà Vu (2006)/Déjà Vu (2006) Bluray-1080p.mkv', 'Videos/Movies/A Walk Among the Tombstones (2014)/A Walk Among the Tombstones (2014) Remux-1080p.mkv']
2021-01-08 18:43:03,873 - INFO - uploader - upload - Uploading '/mnt/shared/local/' to remote: Media_vfs_plow
2021-01-08 18:43:03,899 - INFO - process - execute - 2021/01/08 18:43:03 Failed to load filters: bad glob pattern "Videos\/Movies\/D\éj\à\ Vu\ \(2006\)\/D\éj\à\ Vu\ \(2006\)\ Bluray\-1080p\.mkv" (regexp "(^|/)Videos\/Movies\/D\éj\à\ Vu\ \(2006\)\/D\éj\à\ Vu\ \(2006\)\ Bluray\-1080p\.mkv$"): error parsing regexp: invalid escape sequence:

System Information

  • Cloudplow Version: Develop (latest)
  • Operating System: Ubuntu Server 16.04 LTS
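An alternative, hedged approach for the open-file excludes: write the literal relative paths to a temporary filter file and hand it to rclone via --exclude-from (a real rclone flag), so no per-character escaping of accented letters is needed. The surrounding wiring is an assumption:

    import subprocess
    import tempfile

    def upload_with_open_file_excludes(src, dest, open_files):
        # open_files are paths relative to src, e.g.
        # "Videos/Movies/Déjà Vu (2006)/Déjà Vu (2006) Bluray-1080p.mkv".
        # Note: entries are still rclone glob patterns, so '*', '?', '[' etc.
        # would still need escaping, but plain accented letters are safe.
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False,
                                         encoding="utf-8") as f:
            f.write("\n".join(open_files) + "\n")
            exclude_file = f.name
        cmd = ["rclone", "move", src, dest, "--exclude-from", exclude_file]
        return subprocess.call(cmd)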

'ascii' codec can't encode character '\u2026'

Describe the bug
When running cloudplow upload, the following message appears:

UnicodeEncodeError: 'ascii' codec can't encode character '\u2026' in position 146: ordinal not in range(128)
Call stack:
  File "/usr/local/bin/cloudplow", line 815, in <module>
    do_upload()
  File "/opt/cloudplow/utils/decorators.py", line 13, in timer
    result = method(*args, **kw)
  File "/usr/local/bin/cloudplow", line 377, in do_upload
    resp_delay, resp_trigger = uploader.upload()
  File "/opt/cloudplow/utils/uploader.py", line 53, in upload
    upload_status, return_code = rclone.upload(self.__logic)
  File "/opt/cloudplow/utils/rclone.py", line 168, in upload
    return_code = process.execute(cmd, callback)
  File "/opt/cloudplow/utils/process.py", line 18, in execute
    log.info(output)

Expected behavior
The \u2026 character should not be appearing in the file path; it corresponds to the ellipsis (…) used by the system when the file path is too long.

System Information

  • Cloudplow Version: Master(latest)
  • Operating System:Ubuntu Server 16.04 LTS
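One hedged way to make the logging tolerant of such characters is to force a UTF-8 log handler instead of relying on the locale; the file name and formatter below are inferred from the log lines shown in these issues and are assumptions, not cloudplow's actual logging setup:

    import logging

    # Ensure the handler encodes as UTF-8 so characters such as '\u2026' (…)
    # are written as-is instead of raising UnicodeEncodeError under an ASCII locale.
    handler = logging.FileHandler("activity.log", encoding="utf-8")  # assumed log file name
    handler.setFormatter(logging.Formatter(
        "%(asctime)s - %(levelname)-10s - %(name)-20s - %(funcName)-30s - %(message)s"))
    logging.getLogger().addHandler(handler)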

verbose max level

Hello there. About the --verbose flag: normally I'd use --verbose=x, where x is the verbose level; however, nowhere in the whole GitHub repo or site could I find the maximum verbose level. So my question is: what is the maximum level of verbose?

Scheduler for pause or speed limits

Good Day,

Can you please assist/point me in the correct direction.

Is it possible to pause or limit the sync speeds?

If not, would it be possible to add this as a feature? I would like to limit upload speeds during the day and pause when I am home.

Regards

Sonarr + cloudplow

Hey. I'm using cloudplow for my uploads and loving it. Thank you very much for this.

The problem here is that I also want Sonarr to rename finished downloads, because Filebot has huge limitations without knowing the download origin.

So the question is: how can I split renaming with Sonarr (and moving to the upload location instead of Drive) from uploading with cloudplow?

thank you

Plex streams not being counted for throttling

Describe the bug
Cloudplow always logs that there are "0 playing stream(s)" despite there being actively playing streams.

This occurs despite the logs saying that the "Plex Media Server URL + Token were validated":

2020-06-08 12:56:08,950 - INFO       - cloudplow            - do_plex_monitor                - Plex Media Server URL + Token were validated. Sleeping for 15 seconds before checking Rclone RC URL.
2020-06-08 12:56:23,973 - INFO       - cloudplow            - do_plex_monitor                - Rclone RC URL was validated. Stream monitoring for Plex Media Server will now begin.

To Reproduce
Steps to reproduce the behavior:

  1. Configure Cloudplow to run automatically
  2. Set plex.max_streams_before_throttle to 1
  3. Restart Cloudplow service and watch logs for the following message:
    2020-06-09 12:50:16,130 - INFO       - cloudplow            - do_plex_monitor                - There was 0 playing stream(s) on Plex Media Server it was already throttled to 1000M. Throttling will continue.
    
  4. Start playing a stream on Plex
  5. See the same messages continue to be logged

Expected behavior

Throttling begins to kick in at the increased thresholds, matching the number of Plex streams playing.

Logs

Despite there being active "playing" streams, they aren't being accounted for by Cloudplow:

2020-06-09 23:38:02,183 - DEBUG      - plex                 - get_streams                    - Server responded with status_code=200, content: b'{"MediaContainer":{"size":3,"Metadata":[{"addedAt":"1591490585","art":"/library/metadata/1205/art/1591664610","chapterSource":"media","contentRating":"TV-MA","duration":"1802600","grandparentArt":"/library/metadata/1205/art/1591664610","grandparentGuid":"com.plexapp.agents.thetvdb://76203?lang=en","grandparentRatingKey":"1205","grandparentTheme":"/library/metadata/1205/theme/1591664610","grandparentThumb":"/library/metadata/1205/thumb/1591664610","grandparentTitle":"Curb Your Enthusiasm","guid":"com.plexapp.agents.thetvdb://76203/2/3?lang=en","index":"3","key":"/library/metadata/1226","librarySectionID":"2","originallyAvailableAt":"2001-10-07","parentGuid":"com.plexapp.agents.thetvdb://76203/2?lang=en","parentIndex":"2","parentRatingKey":"1218","parentThumb":"/library/metadata/1218/thumb/1591664610","parentTitle":"Season 2","rating":"8.0","ratingKey":"1226","sessionKey":"47","summary":"When Larry and Cheryl attend the premiere of a movie written by a handicapped friend, Cliff Cobb, Larry offends a Jewish neighbor, alienates Cliff (as well as the memory of his salad-inventing grandfather) and sends misguided romantic overtures to Cliff\\'s wife, Shelly. Later, the Davids experience the \\"            "trick\\"            " side of Halloween when Larry refuses to give candy to two uncostumed teenagers. Having failed to get a bracelet for his wife on her birthday, Larry makes amends with the perfect gift on a romantic morning, but then Jeff arrives to play a round of golf. In the end, Larry gets revenge on a jewish neighbor and his teenage daughters, by arriving at his home with the orchestra that he used for Cheryl\\'s birthday, as they played the tune that offended this neighbor.","thumb":"/library/metadata/1226/thumb/1591664608","title":"Trick or Treat","type":"episode","updatedAt":"1591664608","viewOffset":"241000","year":"2001","Media":[{"aspectRatio":"1.78","audioChannels":"2","audioCodec":"eac3","bitrate":"13876","container":"mkv","duration":"1802600","height":"1080","id":"1231","videoCodec":"h264","videoFrameRate":"NTSC","videoProfile":"high","videoResolution":"1080","width":"1920","selected":true,"Part":[{"container":"mkv","duration":"1802600","file":"/data/TV/Curb Your Enthusiasm (1999)/Season 02/Curb Your Enthusiasm - S02E03 [WEBDL-1080p H264 EAC3 2.0]-AJP69.mkv","id":"1924","key":"/library/parts/1924/1504006642/file.mkv","size":"3126534806","videoProfile":"high","decision":"directplay","selected":true,"Stream":[{"bitDepth":"8","bitrate":"13876","chromaLocation":"left","chromaSubsampling":"4:2:0","codec":"h264","codedHeight":"1088","codedWidth":"1920","colorPrimaries":"bt709","colorRange":"tv","colorSpace":"bt709","colorTrc":"bt709","default":"1","displayTitle":"1080p (H.264)","frameRate":"29.970","hasScalingMatrix":"0","height":"1080","id":"7530","index":"0","level":"40","profile":"high","refFrames":"4","scanType":"progressive","streamType":"1","width":"1920","location":"direct"},{"audioChannelLayout":"stereo","bitrate":"224","channels":"2","codec":"eac3","default":"1","displayTitle":"English (EAC3 Stereo)","id":"7531","index":"1","language":"English","languageCode":"eng","samplingRate":"48000","selected":"1","streamType":"2","location":"direct"}]}]}],"Writer":[{"id":"1853","tag":"Larry 
David"}],"User":{"id":"","thumb":"","title":""},"Player":{"address":"","device":"","machineIdentifier":"","model":"standalone","platform":"windows","platformVersion":"10.0.18363","product":"Plex for Windows","remotePublicAddress":"","state":"paused","title":"","vendor":"","version":"1.12.1.1253-c29aa096","local":true,"relayed":false,"secure":false,"userID":0},"Session":{"id":"49w8z92rds8j8ptpugu3jcr6","bandwidth":29140,"location":"lan"}},{"addedAt":"1590690937","art":"/library/metadata/895/art/1590762184","chapterSource":"media","contentRating":"TV-14","duration":"1435031","grandparentArt":"/library/metadata/895/art/1590762184","grandparentGuid":"com.plexapp.agents.thetvdb://267440?lang=en","grandparentKey":"/library/metadata/895","grandparentRatingKey":"895","grandparentTheme":"/library/metadata/895/theme/1590762184","grandparentThumb":"/library/metadata/895/thumb/1590762184","grandparentTitle":"Attack on Titan","guid":"com.plexapp.agents.thetvdb://267440/3/3?lang=en","index":"3","key":"/library/metadata/899","lastViewedAt":"1591760087","librarySectionID":"2","librarySectionKey":"/library/sections/2","librarySectionTitle":"TV Shows","originallyAvailableAt":"2018-08-06","parentGuid":"com.plexapp.agents.thetvdb://267440/3?lang=en","parentIndex":"3","parentKey":"/library/metadata/896","parentRatingKey":"896","parentThumb":"/library/metadata/896/thumb/1590762184","parentTitle":"Season 3","ratingKey":"899","sessionKey":"53","summary":"Historia and Erwin look back on their past, revealing a shady government with a hidden agenda and inspiration for an uprising.","thumb":"/library/metadata/899/thumb/1590762183","title":"Old Story","type":"episode","updatedAt":"1590762183","viewOffset":"90000","year":"2018","Media":[{"id":"861","videoProfile":"high 10","audioChannels":"2","audioCodec":"aac","bitrate":"38890","container":"mp4","duration":"1435031","height":"1080","optimizedForStreaming":"1","protocol":"dash","videoCodec":"h264","videoFrameRate":"24p","videoResolution":"1080p","width":"1920","selected":true,"Part":[{"id":"1533","videoProfile":"high 10","bitrate":"38890","container":"mp4","duration":"1435031","height":"1080","optimizedForStreaming":"1","protocol":"dash","width":"1920","decision":"transcode","selected":true,"Stream":[{"bitrate":"38634","chromaLocation":"left","codec":"h264","codedHeight":"1088","codedWidth":"1920","colorPrimaries":"bt709","colorTrc":"bt709","default":"1","displayTitle":"1080p (H.264 High 10)","frameRate":"23.975999999999999","height":"1080","id":"6227","language":"\\xe6\\x97\\xa5\\xe6\\x9c\\xac\\xe8\\xaa\\x9e","languageCode":"jpn","streamType":"1","width":"1920","decision":"transcode","location":"segments-video"},{"bitrate":"256","bitrateMode":"cbr","channels":"2","codec":"aac","default":"1","displayTitle":"\\xe6\\x97\\xa5\\xe6\\x9c\\xac\\xe8\\xaa\\x9e (FLAC Stereo)","id":"6228","language":"\\xe6\\x97\\xa5\\xe6\\x9c\\xac\\xe8\\xaa\\x9e","languageCode":"jpn","selected":"1","streamType":"2","decision":"transcode","location":"segments-audio"},{"burn":"1","default":"1","displayTitle":"English (ASS)","id":"6229","language":"English","languageCode":"eng","selected":"1","streamType":"3","decision":"burn","location":"segments-video"}]}]}],"Writer":[{"filter":"writer=1200","id":"1200","tag":"Hiroyuki Morita"},{"filter":"writer=1198","id":"1198","tag":"Yasuko Kobayashi"}],"User":{"id":"","thumb":"","title":""},"Player":{"address":"","device":"OSX","machineIdentifier":"","model":"bundled","platform":"Chrome","platformVersion":"81.0","product":"Plex 
Web","profile":"Web","remotePublicAddress":"135.19.62.102","state":"playing","title":"Chrome","vendor":"","version":"4.30.2","local":true,"relayed":false,"secure":false,"userID":0},"Session":{"id":"ddky85qs12tcs7c3qvf5afva","bandwidth":40835,"location":"lan"}},{"addedAt":"1590690937","art":"/library/metadata/895/art/1590762184","chapterSource":"media","contentRating":"TV-14","duration":"1435031","grandparentArt":"/library/metadata/895/art/1590762184","grandparentGuid":"com.plexapp.agents.thetvdb://267440?lang=en","grandparentKey":"/library/metadata/895","grandparentRatingKey":"895","grandparentTheme":"/library/metadata/895/theme/1590762184","grandparentThumb":"/library/metadata/895/thumb/1590762184","grandparentTitle":"Attack on Titan","guid":"com.plexapp.agents.thetvdb://267440/3/3?lang=en","index":"3","key":"/library/metadata/899","lastViewedAt":"1591760113","librarySectionID":"2","librarySectionKey":"/library/sections/2","librarySectionTitle":"TV Shows","originallyAvailableAt":"2018-08-06","parentGuid":"com.plexapp.agents.thetvdb://267440/3?lang=en","parentIndex":"3","parentKey":"/library/metadata/896","parentRatingKey":"896","parentThumb":"/library/metadata/896/thumb/1590762184","parentTitle":"Season 3","ratingKey":"899","sessionKey":"55","summary":"Historia and Erwin look back on their past, revealing a shady government with a hidden agenda and inspiration for an uprising.","thumb":"/library/metadata/899/thumb/1590762183","title":"Old Story","type":"episode","updatedAt":"1590762183","viewOffset":"90000","year":"2018","Media":[{"id":"861","videoProfile":"high 10","audioChannels":"2","audioCodec":"aac","bitrate":"9429","container":"mp4","duration":"1435031","height":"1080","optimizedForStreaming":"1","protocol":"dash","videoCodec":"h264","videoFrameRate":"24p","videoResolution":"1080p","width":"1920","selected":true,"Part":[{"id":"1533","videoProfile":"high 10","bitrate":"9429","container":"mp4","duration":"1435031","height":"1080","optimizedForStreaming":"1","protocol":"dash","width":"1920","decision":"transcode","selected":true,"Stream":[{"bitrate":"9299","chromaLocation":"left","codec":"h264","codedHeight":"1088","codedWidth":"1920","colorPrimaries":"bt709","colorTrc":"bt709","default":"1","displayTitle":"1080p (H.264 High 10)","frameRate":"23.975999999999999","height":"1080","id":"6227","language":"\\xe6\\x97\\xa5\\xe6\\x9c\\xac\\xe8\\xaa\\x9e","languageCode":"jpn","streamType":"1","width":"1920","decision":"transcode","location":"segments-video"},{"bitrate":"130","bitrateMode":"cbr","channels":"2","codec":"aac","default":"1","displayTitle":"\\xe6\\x97\\xa5\\xe6\\x9c\\xac\\xe8\\xaa\\x9e (FLAC Stereo)","id":"6228","language":"\\xe6\\x97\\xa5\\xe6\\x9c\\xac\\xe8\\xaa\\x9e","languageCode":"jpn","selected":"1","streamType":"2","decision":"transcode","location":"segments-audio"},{"burn":"1","default":"1","displayTitle":"English (ASS)","id":"6229","language":"English","languageCode":"eng","selected":"1","streamType":"3","decision":"burn","location":"segments-video"}]}]}],"Writer":[{"filter":"writer=1200","id":"1200","tag":"Hiroyuki Morita"},{"filter":"writer=1198","id":"1198","tag":"Yasuko Kobayashi"}],"User":{"id":"","thumb":"","title":""},"Player":{"address":"","device":"OSX","machineIdentifier":"","model":"bundled","platform":"Chrome","platformVersion":"81.0","product":"Plex 
Web","profile":"Web","remotePublicAddress":"","state":"paused","title":"Chrome","vendor":"","version":"4.30.2","local":true,"relayed":false,"secure":false,"userID":0},"Session":{"id":"pc7odepci5c2nl3h55fcle1c","bandwidth":9901,"location":"lan"},"TranscodeSession":{"key":"/transcode/sessions/gxypyuhjz9lw8evewm207opj","throttled":false,"complete":false,"progress":"38.299999237060547","size":-22,"speed":"3.9000000953674316","duration":1435031,"remaining":226,"context":"streaming","sourceVideoCodec":"h264","sourceAudioCodec":"flac","videoDecision":"transcode","audioDecision":"transcode","subtitleDecision":"burn","protocol":"dash","container":"mp4","videoCodec":"h264","audioCodec":"aac","audioChannels":2,"transcodeHwRequested":false,"timeStamp":"1591760115.6037188","maxOffsetAvailable":"549.00734067400708","minOffsetAvailable":"95.970970153808594"}}]}}'
2020-06-09 23:38:02,186 - INFO       - cloudplow            - do_plex_monitor                - There was 0 playing stream(s) on Plex Media Server it was already throttled to 1000M. Throttling will continue.

System Information

  • Cloudplow Version: master
  • Operating System: Ubuntu 18.04.4 LTS

Files deleted despite "Not deleting source as copy failed" reported

Describe the bug
I noticed that some of the movies went missing. Upon inspecting the logs I noticed the following:

2020-03-12 22:45:50,142 - INFO       - process              - execute                        - 2020/03/12 22:45:50 INFO  :
2020-03-12 22:45:50,143 - INFO       - process              - execute                        - Transferred:        32.345G / 32.338 GBytes, 100%, 2.209 MBytes/s, ETA -
2020-03-12 22:45:50,143 - INFO       - process              - execute                        - Checks:                 4 / 5, 80%
2020-03-12 22:45:50,143 - INFO       - process              - execute                        - Deleted:                2
2020-03-12 22:45:50,143 - INFO       - process              - execute                        - Transferred:            2 / 3, 67%
2020-03-12 22:45:50,143 - INFO       - process              - execute                        - Elapsed time:    4h9m56.0s
2020-03-12 22:45:50,144 - INFO       - process              - execute                        - Checking:
2020-03-12 22:45:50,144 - INFO       - process              - execute                        - Transferring:
2020-03-12 22:45:50,144 - INFO       - process              - execute                        - * Movies/movie/movie.mkv:100% /29.365G, 352/s, -
2020-03-12 22:46:08,566 - INFO       - process              - execute                        - 2020/03/12 22:46:08 NOTICE: xxxx/yyyy/zzzz: Failed to cancel multipart upload: itemNotFound: The resource could not be found.
2020-03-12 22:46:08,676 - INFO       - process              - execute                        - 2020/03/12 22:46:08 ERROR : Movies/movie/movie.mkv: Failed to copy: unexpected end of JSON input
2020-03-12 22:46:08,697 - INFO       - process              - execute                        - 2020/03/12 22:46:08 ERROR : Movies/movie/movie.mkv: Not deleting source as copy failed: unexpected end
of JSON input
2020-03-12 22:46:08,736 - INFO       - process              - execute                        - 2020/03/12 22:46:08 ERROR : Attempt 1/3 failed with 1 errors and: unexpected end of JSON input
2020-03-12 22:46:50,159 - INFO       - process              - execute                        - 2020/03/12 22:46:50 INFO  :
2020-03-12 22:46:50,159 - INFO       - process              - execute                        - Transferred:        32.345G / 32.345 GBytes, 100%, 2.206 MBytes/s, ETA 0s
2020-03-12 22:46:50,159 - INFO       - process              - execute                        - Checks:                 5 / 5, 100%
2020-03-12 22:46:50,159 - INFO       - process              - execute                        - Deleted:                2
2020-03-12 22:46:50,160 - INFO       - process              - execute                        - Transferred:            2 / 2, 100%
2020-03-12 22:46:50,160 - INFO       - process              - execute                        - Elapsed time:   4h10m14.6s
2020-03-12 22:47:18,654 - INFO       - process              - execute                        - 2020/03/12 22:47:18 INFO  : Movies/movie/movie.mkv: Deleted

Expected behavior
After "Not deleting source as copy failed" was reported, the file should not have been deleted.

System Information

  • Cloudplow Version: cloudbox's most recent, rclone v1.51.0-076-g38a4d50e-beta
  • Operating System: Ubuntu 18.04.4 LTS

Exception is thrown during service account rotation

Describe the bug
Checking for the next available service account is failing. An sqlite exception is thrown and cloudplow fails to switch to the next available (unbanned) account.

Logs
2019-10-12 06:09:22,680 - ERROR - cloudplow - check_suspended_sa - Exception checking suspended service accounts:
Traceback (most recent call last):
  File "/opt/cloudplow/cloudplow.py", line 149, in check_suspended_sa
    if sa_delay[uploader_to_check] is not None:
  File "/usr/local/lib/python3.6/dist-packages/sqlitedict.py", line 243, in __getitem__
    item = self.conn.select_one(GET_ITEM, (key,))
  File "/usr/local/lib/python3.6/dist-packages/sqlitedict.py", line 515, in select_one
    return next(iter(self.select(req, arg)))
  File "/usr/local/lib/python3.6/dist-packages/sqlitedict.py", line 507, in select
    self.check_raise_error()
  File "/usr/local/lib/python3.6/dist-packages/sqlitedict.py", line 475, in check_raise_error
    reraise(e_type, e_value, e_tb)
  File "/usr/local/lib/python3.6/dist-packages/sqlitedict.py", line 71, in reraise
    raise value
  File "/usr/local/lib/python3.6/dist-packages/sqlitedict.py", line 409, in run
    cursor.execute(req, arg)
sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.

System Information

  • Cloudplow Version: Master (latest)
  • Operating System: Ubuntu Server 16.04 LTS

Setup FlareSolverr with Jackett

Describe the problem
The most popular torrent sites are protected by DDoS protection. To get around this issue, Jackett has added support for FlareSolverr.

Describe any solutions you think might work

  • Add a new role configuring FlareSolverr in the base provisioning or as an extra role
  • Add a configuration step in the doc to configure Jackett with FlareSolverr

I would love to add it myself, but I don't have a lot of free time on hand and I'm not well-versed enough with Ansible to create a PR for it efficiently.

How to disable SYNCER?

Is there a way to disable the SYNCER function?

2018-04-25 07:48:16,400 - ERROR - syncer - load - Exception while loading service, kwargs={'service': 'scaleway', 'sync_to': 'amzn', 'sync_from': 'google', 'rclone_extras': {'--transfers': 16, '--bwlimit': '80M', '--verbose': 1, '--checkers': 32, '--drive-chunk-size': '64M', '--stats': '60s'}, 'use_copy': True, 'tool_path': '/home/seed/go/bin/scw', 'instance_destroy': True}:
Traceback (most recent call last):
  File "/opt/cloudplow/utils/syncer/__init__.py", line 37, in load
    sync_from_config = self.config['remotes'][kwargs['sync_from']]
KeyError: 'google'
2018-04-25 07:48:16,401 - INFO - cloudplow - <module> - Added google2amzn syncer to schedule, syncing every 24 hours

Transferred and Elapsed Time controls

Describe the problem
Feature: monitor rclone output for "Transferred" and "Elapsed time", so that when 750 GB has been transferred (or a specified time has elapsed) cloudplow moves on to the next sequential uploader or sync job.

Describe any solutions you think might work
In the same way that the "rclone sleeps" section works, you could monitor for "Transferred:" and "Elapsed time:".
There could be settings in config.json called "transferred-limit" and "elapsed-time-limit".
Then, during uploads, when either "transferred-limit" or "elapsed-time-limit" is reached, cloudplow would stop that uploader or syncer job and move on to the next job (see the sketch below).

Additional context
This is to allow users to have several remotes configured and avoid hitting data caps for a team drive, for example, yet still be able to move on to a different team drive and upload content to the next one.
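A hedged sketch of how the rclone stats lines shown elsewhere in these issues could be parsed to enforce such limits; the thresholds and the parsing itself are illustrative assumptions:

    import re

    # Matches rclone stats lines such as:
    #   "Transferred:   32.345G / 32.345 GBytes, 100%, 2.206 MBytes/s, ETA 0s"
    #   "Elapsed time:  4h10m14.6s"
    TRANSFERRED_RE = re.compile(r"Transferred:\s+([\d.]+)\s*G")
    ELAPSED_RE = re.compile(r"Elapsed time:\s+(?:(\d+)h)?(?:(\d+)m)?([\d.]+)s")

    def over_limits(line, transferred_limit_gb=750, elapsed_limit_s=6 * 3600):
        m = TRANSFERRED_RE.search(line)
        if m and float(m.group(1)) >= transferred_limit_gb:
            return True
        m = ELAPSED_RE.search(line)
        if m:
            hours, minutes, seconds = (float(g) if g else 0 for g in m.groups())
            if hours * 3600 + minutes * 60 + seconds >= elapsed_limit_s:
                return True
        return False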

Ability to run commands after cloudplow upload process finishes

Currently I use a cron job to run a script that uploads to the cloud, and once it finishes it runs a curl to resume SABnzbd, as the disk space will be free again and SABnzbd doesn't have an auto-resume function.

Is there any way to add extra commands to the end of the cloudplow upload process?
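A hedged sketch of a post-upload command hook; the config key (post_upload_commands) is an illustrative assumption, and the SABnzbd resume call is just the example from the issue:

    import logging
    import subprocess

    log = logging.getLogger("uploader")

    def run_post_upload_commands(uploader_config):
        # Hypothetical key: a list of shell commands to run after an upload finishes.
        for command in uploader_config.get("post_upload_commands", []):
            log.info("Running post-upload command: %s", command)
            result = subprocess.run(command, shell=True)
            if result.returncode != 0:
                log.warning("Post-upload command exited with %d", result.returncode)

    # e.g. "post_upload_commands":
    #   ["curl 'http://localhost:8080/sabnzbd/api?mode=resume&apikey=YOUR_API_KEY'"]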

.unionfs-fuse files

.unionfs-fuse files don't get removed, so it tries to remove the same hidden files again on the next run and fails. I have a couple hundred in there now, and the script takes around 10 minutes to go through all of the ones that fail because the _HIDDEN~ file still exists but the file it's trying to delete doesn't.

Plex throttling doesn’t stop when streams stop

Describe the bug
When doing a cloudplow upload operation, the uploads throttle while streams are playing; however, if a stream (or all of the streams) stops, the throttling doesn't end.

To Reproduce
Steps to reproduce the behavior:

  1. Start cloudplow upload
  2. Plex stream starts playing
  3. Throttling begins
  4. Plex stream stops playing, or pauses
  5. Throttling does not end and remains

Expected behavior
The throttle should be lifted

Screenshots
N/a


System Information

  • Cloudplow Version: Master (latest)
  • Operating System: Ubuntu Server 18.04 LTS

Additional context
I only have evidence of the throttling failing to lift with one stream; I was never game enough to try an upload operation with two streams playing. I have noted that it happens whether the upload operation was manually or automatically initiated.

error parsing regexp: invalid escape sequence: `\ `

Describe the bug
An error occurs while trying to upload a file:
2020-02-06 11:40:43,240 - INFO - process - execute - 2020/02/06 11:40:43 Failed to load filters: bad glob pattern "\/movies\/Cars\ \(2006\)\/Plex\ Versions\/Optimized\ for\ Mobile\/\.inProgress\/Cars\\u00a0_\ Quatre\ roues\ \(2006\)\.mp4\.33" (regexp "(^|/)\/movies\/Cars\ \(2006\)\/Plex\ Versions\/Optimized\ for\ Mobile\/\.inProgress\/Cars\\u00a0_\ Quatre\ roues\ \(2006\)\.mp4\.33$"): error parsing regexp: invalid escape sequence:

To Reproduce
Steps to reproduce the behavior:

  1. Have a filename containing \u00a0
  2. Try to upload it
  3. See error

Expected behavior
No error and upload the file

Logs
2020-02-06 11:25:42,002 - INFO - cloudplow - do_upload - Starting upload
2020-02-06 11:25:42,011 - INFO - cloudplow - do_upload - There is XXX available service accounts
2020-02-06 11:25:42,012 - INFO - uploader - set_service_account - Using service account: '/home/XXX/docker/sa/1.json'
2020-02-06 11:25:42,104 - INFO - uploader - upload - Uploading '/home/XXX/mnt/move' to remote: XXX
2020-02-06 11:25:42,134 - INFO - process - execute - 2020/02/06 11:25:42 Failed to load filters: bad glob pattern "\/movies\/Cars\ \(2006\)\/Plex\ Versions\/Optimized\ for\ Mobile\/\.inProgress\/Cars\\u00a0_\ Quatre\ roues\ \(2006\)\.mp4\.33" (regexp "(^|/)\/movies\/Cars\ \(2006\)\/Plex\ Versions\/Optimized\ for\ Mobile\/\.inProgress\/Cars\\u00a0_\ Quatre\ roues\ \(2006\)\.mp4\.33$"): error parsing regexp: invalid escape sequence:
2020-02-06 11:25:42,135 - INFO - uploader - upload - Finished uploading to remote: XXX

System Information

  • Cloudplow Version: Develop (latest)
  • Operating System: Ubuntu 18.04.4 LTS

Run Clean as Cronjob?

How would I run the clean function as a cron job? When I invoke the clean command manually, everything works. I tried doing this as a cron job and get an error with each file that it tries to delete on the remote. It looks like this:

2018-07-23 00:00:39,450 - ERROR - rclone - delete_file - Exception deleting file 'gdrive:/Media/Movies/The Last Samurai (2003)/The Last Samurai (2003) Bluray-1080p.mkv' from remote gdrive:
Traceback (most recent call last):
  File "/opt/cloudplow/utils/rclone.py", line 37, in delete_file
    resp = process.execute(cmd, logs=False)
  File "/opt/cloudplow/utils/process.py", line 11, in execute
    stderr=subprocess.STDOUT)
  File "/usr/lib/python3.5/subprocess.py", line 947, in __init__
    restore_signals, start_new_session)
  File "/usr/lib/python3.5/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'rclone'

Emby integration / speed throttle

Cloudplow can throttle Rclone uploads during active, playing Plex streams. Would it be possible to create an Emby integration as well? I believe many users, myself included, would find it very useful.

Thank you !

Clean Multiple Remotes

Is there any way to have cloudplow clean multiple remotes? I.e. if it fails on remote 1, it tries remote 2?
The reason is that I started my collection encrypted (it would take me too long to move it to unencrypted),
so I started putting anything new in an unencrypted remote. But sometimes Radarr/Sonarr downloads an upgrade for something on the old encrypted mount, which is really just a read-only space, and cloudplow will try to clean it but can't, because it is on the old encrypted mount rather than the new unencrypted one. Hopefully that made sense.

Add local check before throttling

I host locally, and it was bothering me that when I was watching locally, it was throttling my uploads unnecessarily.

I made changes to the main cloudplow.py and utils/plex.py so that it doesn't count them. I think it would be a good option for those of us who aren't 100% cloud.

In the PlexStream class from plex.py:
if 'Player' in stream:
    self.player = stream['Player']['product']
    self.ip = stream['Player']['remotePublicAddress']
    self.local = stream['Player']['local']

And at the bottom of plex.py:
return u"{user} is playing {media} using {player}. "
"Stream state: {state}, type: {type}, local: {local}.".format(user=self.user,
media=self.title,
player=self.player,
state=self.state,
type=stream_type,
local=self.local)

In the do_plex_monitor logic from cloudplow.py

stream_count = 0
for stream in streams:
    if stream.state == 'playing' or stream.state == 'buffering':
        stream_count += 1
        if stream.local == 1:
            stream_count -= 1

Notification success when upload failed

Describe the bug
Hello,

I made a mistake changing the permissions on my rclone.conf. The user that launched cloudplow could not read this file, so I got an error, but the script reported that everything went well:

2019-11-14 08:46:30,061 - INFO       - process              - execute                        - 2019/11/14 08:46:30 Failed to load config file "/home/seed/.config/rclone/rclone
.conf": open /home/seed/.config/rclone/rclone.conf: permission denied                                                                                                          
2019-11-14 08:46:30,064 - INFO       - uploader             - upload                         - Finished uploading to remote: google                                            
2019-11-14 08:46:30,064 - INFO       - cloudplow            - do_upload                      - Upload completed successfully for uploader: google                              
2019-11-14 08:46:30,219 - INFO       - notifications        - send                           - Sent notification with Slack                                                    

To Reproduce
Steps to reproduce the behavior:
Change the ownership of the rclone.conf file so that the user that launched the service cannot read it.

Expected behavior
As cloudplow cannot read the rclone config file, it should raise an error and not send a "success" notification.

image

Throttle notifications failing to discord notifier

Describe the bug
When discord notifications are used and throttling is enabled, throttle notifications fail. Other notifications, e.g. cloudplow start/stop, are delivered to Discord successfully. I tried both formats for the url value, 'https://' and 'discord://'.

To Reproduce
Steps to reproduce the behavior:

  1. Configure both throttling and discord notification
  2. Launch cloudplow
  3. Errors are logged in cloudplow log and throttle notifications are not sent

Logs

2021-01-29 16:05:39,687 - ERROR      - apprise              - send                           - Error sending notification to 'https://discord.com/api/webhooks/xxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx
Traceback (most recent call last):
  File "/opt/cloudplow/utils/notifications/apprise.py", line 26, in send
    body=kwargs['message'],
  File "/usr/local/lib/python3.6/dist-packages/apprise/Apprise.py", line 434, in notify
    if not py3compat.asyncio.notify(coroutines, debug=self.debug):
  File "/usr/local/lib/python3.6/dist-packages/apprise/py3compat/asyncio.py", line 77, in notify
    loop = asyncio.get_event_loop()
  File "/usr/lib/python3.6/asyncio/events.py", line 694, in get_event_loop
    return get_event_loop_policy().get_event_loop()
  File "/usr/lib/python3.6/asyncio/events.py", line 602, in get_event_loop
    % threading.current_thread().name)
RuntimeError: There is no current event loop in thread 'plex-monitor'.

System Information

  • Cloudplow Version: Master (latest)
  • Operating System: Ubuntu Server 18.04.5 LTS
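The traceback points at asyncio.get_event_loop() being called from the 'plex-monitor' worker thread, which has no event loop on Python 3.6. A hedged sketch of the usual workaround, creating a loop for the current thread before calling Apprise (where exactly this belongs in cloudplow's apprise.py is an assumption):

    import asyncio

    def ensure_event_loop():
        # Worker threads (e.g. 'plex-monitor') have no default event loop on
        # Python 3.6, so create and register one before Apprise.notify() runs.
        try:
            asyncio.get_event_loop()
        except RuntimeError:
            asyncio.set_event_loop(asyncio.new_event_loop())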

Enhancement to new Schedule Feature

I noticed this new feature was added recently, and I think it's an awesome addition. However, I was wondering whether a condition could be added that allows the upload to kick off outside of those hours if a space threshold is met; i.e. if my normal upload trigger is 200 GB and it reaches 250 GB outside of the allowed window, it will kick off anyway to prevent filling up. Just a thought.

"schedule": {
"allowed_from": "02:00",
"allowed_until": "10:00",

Does cloudplow need direct access to service account credentials?

Is there a reason the upload tasks have a service_account_path setting, as opposed to just letting rclone handle the setup of the remotes? It seems that if you let rclone handle everything, you should be able to run sync tasks with service accounts without any changes in the cloudplow code.

Special Characters Problem

Describe the bug
During the upload process, when the file name has a special character, an error occurs.

Expected behavior
The special character could be changed:
if "Ç", use "C";
if "É", use "E";
and so on.

just a suggestion.

Logs
2021-10-22 18:20:05,997 - INFO - cloudplow - do_hidden - Finished hidden cleaning
2021-10-22 18:20:05,997 - INFO - decorators - timer - 'do_hidden' from 'cloudplow.py' finished in 0 seconds
2021-10-22 18:20:05,997 - INFO - cloudplow - do_upload - Starting upload
2021-10-22 18:20:06,000 - INFO - cloudplow - do_upload - There is 200 available service accounts
2021-10-22 18:20:06,000 - INFO - uploader - set_service_account - Using service account: '/opt/sa-nexus/1.json'
2021-10-22 18:20:06,138 - INFO - uploader - upload - Excluding these files from being uploaded because they were open: ['/Decoding Annie Parker (2014)/Decoding Annie Parker (2014) [WEBDL-1080p] {imdb-tt1464191}.mkv']
2021-10-22 18:20:06,138 - INFO - uploader - upload - Uploading '/mnt/local/FILMES' to remote: nexus2
2021-10-22 18:20:06,175 - INFO - process - execute - Failed to load filters: bad glob pattern "\/Ningu\ém\ Ama\ Ningu\ém\ por\ Mais\ de\ Dois\ Anos\ \(2015\)\/Ningu\ém\ Ama\ Ningu\ém\ por\ Mais\ de\ Dois\ Anos\ \(2015\)\ \[WEBDL\-1080p\]\ \{imdb\-tt5532160\}\.mkv" (regexp "(^|/)\/Ningu\ém\ Ama\ Ningu\ém\ por\ Mais\ de\ Dois\ Anos\ \(2015\)\/Ningu\ém\ Ama\ Ningu\ém\ por\ Mais\ de\ Dois\ Anos\ \(2015\)\ \[WEBDL\-1080p\]\ \{imdb\-tt5532160\}\.mkv$"): error parsing regexp: invalid escape sequence:
2021-10-22 18:20:06,176 - INFO - uploader - upload - Finished uploading to remote: nexus2
2021-10-22 18:20:06,179 - INFO - uploader - remove_empty_dirs - Removed empty directories from '/mnt/local/FILMES' with mindepth: 1
2021-10-22 18:20:06,179 - INFO - cloudplow - do_upload - Finished upload
2021-10-22 18:20:06,179 - INFO - decorators - timer - 'do_upload' from 'cloudplow.py' finished in 0 seconds

System Information

  • Cloudplow Version: Master (latest)
  • Operating System: Ubuntu Server 18.04

error when running cloudplow upload

What am I doing wrong?

Traceback (most recent call last):
  File "/usr/local/bin/cloudplow", line 64, in <module>
    conf.load()
  File "/opt/cloudplow/utils/config.py", line 256, in load
    cfg, upgraded = self.upgrade_settings(json.load(fp))
  File "/usr/lib/python3.7/json/__init__.py", line 296, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib/python3.7/json/__init__.py", line 348, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.7/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/usr/lib/python3.7/json/decoder.py", line 353, in raw_decode
    obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Expecting ',' delimiter: line 42 column 17 (char 1109)
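The traceback is just a JSON syntax error in config.json (a missing comma near line 42, column 17). A quick way to pinpoint it with nothing but the standard library; the config path is an assumption, use wherever your config.json actually lives:

    import json

    with open("/opt/cloudplow/config.json") as fp:  # assumed config location
        json.load(fp)  # raises JSONDecodeError with the exact line and column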

Use with Cache Mounted Remotes?

Is it possible to use this script with an rclone cache remote that is already mounted? I have had issues with multiple rclone instances using the same cache remote at the same time.

Size of upload is incorrect in Notification

Describe the bug
I have set a min-age on files to be uploaded with Cloudplow, via an rclone_extras setting. The setting seems to filter the files for upload correctly; however, the notification I receive tells me it is uploading the size of the source directory, not the actual upload size.

To Reproduce
Steps to reproduce the behavior:

  1. In rclone_extras, set "--min-age": "365d"
  2. Configure dry_run so nothing is actually uploaded
  3. perform a manual upload and observe the upload size in the notification (in my case, slack)
  4. See the incorrect upload size.

Expected behavior
I would expect that the notification I receive should be for the size of the actual upload, not just the size of my source directory.

Screenshots
Actual size of media greater than 365d:
image

Notification from Cloudplow:
image

System Information

  • Cloudplow Version: Master (latest)
  • Operating System: Debian Stretch

Add healthchecks.io as notification option

Healthchecks.io is an open-source project that helps with monitoring and alerting. It's very useful for connecting multiple notification providers such as Slack, Telegram, email, etc.

Integrating CloudPlow with healthchecks.io would make it possible to send a notification when a task completes successfully or when it fails to do so. This idea is based on Docker-Rclone (https://github.com/bcardiff/docker-rclone).

mover and rclone 1.48

I noticed the rclone 1.48 changelog included:

Drive

  • Fix move and copy from TeamDrive to GDrive (Fionera)
  • Add notes that cleanup works in the background on drive (Nick Craig-Wood)
  • Add --drive-server-side-across-configs to default back to old server side copy semantics by default (Nick Craig-Wood)
  • Add --drive-size-as-quota to show storage quota usage for file size (Garry McNulty)

Would this alleviate the need to use rclone 1.47-019 for the mover feature that migrates from TD to MD?

Thanks!

"Schedule" is not working yet? or Did I wrong...

Hello Sir,

Cloudplow is a great program and I've used it for over 3 months without issue.
To make my internet connection more comfortable, I would like to run it on a schedule; however, it is not working as I planned.

Below is my config.json. Sorry for the messy layout, as I don't know how to format the code with spaces/tabs.

    "uploader": {
        "google": {
            "check_interval": 30,
            "exclude_open_files": false,
            "max_size_gb": 2,
            "opened_excludes": [
                "/downloads/"
            ],
            "schedule": {
                "allowed_from": "00:05",
                "allowed_until": "12:00",
                "enabled": true
            },
            "size_excludes": [
                "downloads/*"
            ]
        }
    }

And below is a recent log taken yesterday. The settings above were used for these runs after a reboot.

 2019-05-11 19:45:14,036 - INFO       - git                  - check_version                  - You are using the latest master: 5f6a4c35c2999e8661ddc957b3fa34fff6f1367a
 2019-05-11 19:45:14,036 - INFO       - cloudplow            - <module>                       - Started in run mode
 2019-05-11 19:45:14,037 - INFO       - cloudplow            - <module>                       - Added google uploader to schedule, checking available disk space every 30 minutes
 2019-05-11 20:15:14,792 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 340 GB over the maximum limit of 30 GB
 2019-05-11 20:15:14,793 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 20:15 is not within the allowed upload time periods 00:05 -> 16:00
 2019-05-11 20:45:15,602 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 340 GB over the maximum limit of 30 GB
 2019-05-11 20:45:15,602 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 20:45 is not within the allowed upload time periods 00:05 -> 16:00
 2019-05-11 21:15:16,361 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 340 GB over the maximum limit of 30 GB
 2019-05-11 21:15:16,361 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 21:15 is not within the allowed upload time periods 00:05 -> 16:00
 2019-05-11 21:45:17,143 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 369 GB over the maximum limit of 30 GB
 2019-05-11 21:45:17,144 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 21:45 is not within the allowed upload time periods 00:05 -> 16:00
 2019-05-11 21:57:08,141 - INFO       - lock                 - ensure_lock_folder             - Created lock folder: '/opt/cloudplow/locks'
 2019-05-11 21:57:09,652 - INFO       - git                  - check_version                  - You are using the latest master: 5f6a4c35c2999e8661ddc957b3fa34fff6f1367a
 2019-05-11 21:57:09,652 - INFO       - cloudplow            - <module>                       - Started in run mode
 2019-05-11 21:57:09,662 - INFO       - cloudplow            - <module>                       - Added google uploader to schedule, checking available disk space every 30 minutes
 2019-05-11 22:27:09,706 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 403 GB over the maximum limit of 2 GB
 2019-05-11 22:27:09,707 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 22:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-11 22:57:10,496 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 415 GB over the maximum limit of 2 GB
 2019-05-11 22:57:10,496 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 22:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-11 23:27:11,353 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 442 GB over the maximum limit of 2 GB
 2019-05-11 23:27:11,355 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 23:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-11 23:57:12,326 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 442 GB over the maximum limit of 2 GB
 2019-05-11 23:57:12,326 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 23:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 00:27:13,131 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 442 GB over the maximum limit of 2 GB
 2019-05-12 00:27:13,131 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 00:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 00:57:13,843 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 442 GB over the maximum limit of 2 GB
 2019-05-12 00:57:13,844 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 00:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 01:27:14,546 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 443 GB over the maximum limit of 2 GB
 2019-05-12 01:27:14,546 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 01:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 01:57:15,442 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 444 GB over the maximum limit of 2 GB
 2019-05-12 01:57:15,442 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 01:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 02:27:16,375 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 444 GB over the maximum limit of 2 GB
 2019-05-12 02:27:16,376 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 02:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 02:57:17,315 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 444 GB over the maximum limit of 2 GB
 2019-05-12 02:57:17,316 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 02:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 03:27:18,286 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 444 GB over the maximum limit of 2 GB
 2019-05-12 03:27:18,286 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 03:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 03:57:19,203 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 444 GB over the maximum limit of 2 GB
 2019-05-12 03:57:19,204 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 03:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 04:27:20,127 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 446 GB over the maximum limit of 2 GB
 2019-05-12 04:27:20,127 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 04:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 04:57:21,082 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 446 GB over the maximum limit of 2 GB
 2019-05-12 04:57:21,082 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 04:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 05:27:22,030 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 446 GB over the maximum limit of 2 GB
 2019-05-12 05:27:22,031 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 05:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 05:57:22,979 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 446 GB over the maximum limit of 2 GB
 2019-05-12 05:57:22,980 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 05:57 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 06:27:23,876 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 449 GB over the maximum limit of 2 GB
 2019-05-12 06:27:23,876 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 06:27 is not within the allowed upload time periods 00:05 -> 12:00
 2019-05-12 06:57:24,760 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. Local folder size is currently 449 GB over the maximum limit of 2 GB
 2019-05-12 06:57:24,760 - INFO       - cloudplow            - scheduled_uploader             - Uploader: google. The current time 06:57 is not within the allowed upload time periods 00:05 -> 12:00

It seems the settings are being read correctly, but the allowed-time calculation does not appear to be working: 06:57 is clearly between 00:05 and 12:00, yet the uploader still reports it as outside the allowed period.
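
For comparison, this is the kind of window check I was expecting (just my own minimal sketch using the allowed_from / allowed_until values from the config above, not CloudPlow's actual code):

    # Minimal sketch of an "is the current time inside the allowed window" check.
    # allowed_from / allowed_until are HH:MM strings like the config values above.
    from datetime import datetime

    def within_allowed_period(allowed_from, allowed_until):
        now = datetime.now().time()
        start = datetime.strptime(allowed_from, "%H:%M").time()
        end = datetime.strptime(allowed_until, "%H:%M").time()
        if start <= end:
            return start <= now <= end
        # Window wraps past midnight, e.g. 22:00 -> 06:00
        return now >= start or now <= end

    # With the config above, 06:57 should be inside the window:
    # within_allowed_period("00:05", "12:00")  -> True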

Thanks in advance.
