xerbo / furaffinity-dl
FurAffinity Downloader, now with 100% more Python
License: BSD 3-Clause "New" or "Revised" License
I used "EditThisCookie" to export the cookies in Netscape format and I get this error after trying to download from a gallery which is restricted to only logged in users:
ERROR: You have provided a cookies file, but it does not contain valid cookies.
I thought I used the wrong path but that couldn't be... Any help would be appreciated.
Is the utility completely broken? Just trying to execute
./furaffinity-dl -c ./cookies.txt -o ./user -r gallery/user
And all it does is create a blank file named "https:" in the folder.
Hi, thanks for this great script, I found it a few days ago and it's really useful :) This isn't really an issue, per se, but something I thought might be nice to amend when using the separate metadata files.
I had a look at the page source and it looks like the descriptions were being taken from <meta property="og:description" content=", unfortunately that gives incomplete descriptions limited to ~121 characters without spaces.
I'd like to propose the following edit to copy the full description. There was also an `&quot;` issue in some of the titles which is hopefully sorted as well. It might not be the cleanest way of doing it, but it seems to work okay. It copies it verbatim, so if they've put a lot of line breaks in the description, they'll appear in the metadata file. I'm interested to hear what you think :)
From line 147 'Get metadata'
description="$(cat "$tempfile" | tr '\n' ' ' | sed 's/\(<div class="submission-description">\)/\n\1/gI' | sed 's/\(<\/div>\)/\1\n/gI' | grep -o '<div class="submission-description".*</div>' | sed 's/<div class="submission-description"> //g' | sed 's@<br />@\n@g' | sed 's/<a href="//g' | sed 's@" class=".*</a>@@g' | sed 's@ </div>@@g' | sed 's/&quot;/"/g')"
if [ $classic = true ]; then
title="$(grep -Eo '<h2>.*</h2>' "$tempfile" | awk -F "<h2>" '{print $2}' | awk -F "</h2>" '{print $1}' | sed 's/&quot;/"/g')"
else
title="$(grep -Eo '<h2><p>.*</p></h2>' "$tempfile" | awk -F "<p>" '{print $2}' | awk -F "</p>" '{print $1}' | sed 's/&quot;/"/g')"
fi
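For the Python branch, the same idea can be sketched with just the standard library. This is a rough sketch, not the script's actual code; the class name comes from the snippet above, and like the sed chain it assumes no nested <div>s inside the description:

```python
import html
import re

def full_description(page_html):
    # Pull the complete <div class="submission-description"> body instead
    # of the truncated og:description meta tag. Like the sed chain above,
    # this assumes no nested <div>s inside the description.
    m = re.search(r'<div class="submission-description">(.*?)</div>',
                  page_html, re.S | re.I)
    if m is None:
        return ""
    text = re.sub(r'<br\s*/?>', '\n', m.group(1))  # keep line breaks
    text = re.sub(r'<[^>]+>', '', text)            # strip remaining tags
    return html.unescape(text).strip()             # fixes &quot; etc.
```

As with the sed version, line breaks from the original description survive into the metadata file.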
An earlier version of this script would note "file exists, skipping" when iterating through a user's updated gallery. This function seems to have stopped working.
FA's new "ReCaptcha" authentication is preventing the program from logging in properly to download files.
The following piece of code does not work on the BSD version of mktemp:
mktemp --tmpdir=$runtime_dir
Fixed locally like this:
mktemp $runtime_dir/fdl.XXXXXX
Man page can be viewed here.
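For what it's worth, if this part ever moves to the Python branch, the stdlib tempfile module avoids the GNU/BSD mktemp split entirely. A sketch; the fdl. prefix and runtime_dir here are just illustrative:

```python
import os
import tempfile

# Portable equivalent of `mktemp --tmpdir=$runtime_dir`: tempfile behaves
# the same on GNU, BSD, and macOS. runtime_dir here is illustrative.
runtime_dir = tempfile.gettempdir()
fd, path = tempfile.mkstemp(prefix="fdl.", dir=runtime_dir)
os.close(fd)
os.remove(path)  # clean up the demo file
```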
I'm not sure what the problem is, but when I try to use my cookies.txt file, it attempts to create a bunch of blank files. They all overwrite each other when using -w, and none of them are actual files from a user's gallery or favs. It might be something to do with the formatting of cookies.txt, but I'm using the exact extension that was recommended for Chrome. Any thoughts?
Whenever a flash file is encountered, an error is thrown and a .https file is downloaded without data. I think it's because the directory these files are linked from was changed. Every time this happens, a batch stops and has to be restarted.
ftp://https/
=> ‘.listing’
Resolving https (https)... failed: Name or service not known.
wget: unable to resolve host address ‘https’
Looked through the closed issues and saw that this would happen with the Beta Skin, but I don't currently use it and have cleared site Data already to try and solve it.
Hey!
I'm not finding any functionality in the documentation for skipping files that have already been downloaded (on the Python branch). Could this be a feature that could be added?
Not logged in, NSFW content is inaccessible
/data/data/com.termux/files/home/furaffinity-dl.py:209: DeprecationWarning: The 'text' argument to find()-type methods is deprecated. Use 'string' instead.
next_button = s.find('button', class_='button standard', text="Next")
Unable to find next button
Finished downloading
It happens even if the user has SFW content.
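The warning points at the fix itself: in BeautifulSoup 4.4+ the text= keyword was renamed to string=. A minimal sketch of the change (requires the third-party beautifulsoup4 package; the HTML here is a stand-in for the real gallery page):

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Stand-in for the downloaded gallery page.
page = '<button class="button standard" type="submit">Next</button>'
s = BeautifulSoup(page, 'html.parser')

# `text="Next"` is deprecated; `string="Next"` is the current spelling.
next_button = s.find('button', class_='button standard', string="Next")
```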
The script uses the domain d.facdn.net in line 153 to download the image, but they seem to use the domain d.furaffinity.net by now. Changing the line worked for me! :D
https://github.com/Xerbo/furaffinity-dl/blob/master/furaffinity-dl#L153
Thank you for your work <3
So it works and it downloads, but it seems to only download the first page of an artist I tried it on, and I'm not sure why. Any pointers?
Okay, so I'm new to this whole running-scripts thing. I've installed wget and coreutils in Ubuntu and updated my binaries, but I can't seem to download anything; all it says is "command not found".
While I appreciate someone putting in effort to try and make this tool better, this seems to have simply broken everything.
~/furaffinity-dl [master] » ./furaffinity-dl -p -c ~/facookies.txt gallery/yuurikin
INFO: eyed3 is not installed, no metadata will be injected into music files.
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
2019-07-23 17:40:22 URL:https://www.furaffinity.net/gallery/yuurikin [59304] -> "/home/katt/.cache/furaffinity-dl/fa-dl.gNmqXvwS7r" [1]
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
2019-07-23 17:40:23 URL:https://www.furaffinity.net/view/31925136/ [34063] -> "/home/katt/.cache/furaffinity-dl/fa-dl.gNmqXvwS7r" [1]
--2019-07-23 17:40:23-- https://d.facdn.net/art/yuurikin/1560634746/1560634746.yuurikin_gsg.png
Loaded CA certificate '/etc/ssl/certs/ca-certificates.crt'
Resolving d.facdn.net (d.facdn.net)... 104.25.211.99, 104.25.212.99
Connecting to d.facdn.net (d.facdn.net)|104.25.211.99|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 98477 (96K) [image/png]
Saving to: ‘./SD: PastelGenet.png’
./SD: PastelGenet.png 100%[====================================================================================================================================================================================================>] 96.17K --.-KB/s in 0.005s
2019-07-23 17:40:23 (17.5 MB/s) - ‘./SD: PastelGenet.png’ saved [98477/98477]
Error: Not a valid PNG (looks more like a JPEG) - ./SD: PastelGenet.png
0 image files updated
1 files weren't updated due to errors
~/furaffinity-dl [master●] »
This is all I get since updating. Why does it care about the PNG being valid? I even told it to get "(P)lain file without any additional metadata", yet it renames the file and crashes?
Please revert these silly changes, or at least provide a way to get good ol' plain files, not renamed and not modified.
When downloading a folder you have to make sure to put everything after /folder/ from the URL, for example 406765/May-the-Best-Man-Win instead of just 406765; then the downloader will get the correct URL for the 2nd page.
Someone should probably add an example of this to the README.md and to the help text that shows when you run the program.
Is it possible to create, as metadata, a description taken from class="submission-description user-submitted-links"?
I am unsure why I can't use my cookies, if I don't use it I can download a library without issue, but when I include it I seem to get this
--2018-06-24 19:02:35-- ftp://http/
=> ‘.listing’
Resolving http (http)... failed: Name or service not known.
wget: unable to resolve host address ‘http’
Hi there,
Just wondering, is there any way to queue the download commands, rather than waiting to enter a new command after the previous one is done?
Thanks!
Hey, thanks for continuing to keep this working, I really like having this around to keep the never-ending rotation of my FA favorites on my desktop current with whatever goofy smut I'm currently into. <3
I just moved from the bash version to the Python version and I figured I'd say thanks.
Pretty much what it says on the tin. It didn't crop up until many files into an update I was running, but eventually it got to a file where it tried to update the EXIF data; the EXIF section always uses $file, so it causes an error when using the -r option.
I like the idea of names based on titles by the way, it's a nice touch! I just don't want to redownload everything cause it's a lot. On that note, when I first ran it with the option it made duplicates ending in numbers instead of skipping existing files if they're up to date. I manually added the -N flag to the wget lines and that seemed to fix it though.
Thank you for the work you've done on this script it's been very useful!
Everything else seems to work fine up until this point in the program, and I'm left with nothing.
I don't know much, but apparently whatever directory and file is supposed to be created isn't being created...
I'll download a distro or something and try it on that in the meantime.
What the title says, I keep on getting the error Error: Not a valid PNG (looks more like a JPEG) and the download stops. Even if I use the -p flag, the download stops in the same place, without printing the error message.
Just a note that because of the dumb way that it handles files, FA has tons of images with the wrong extension.
torify ./grab gallery/kacey ./cookies.txt
--2018-03-23 15:03:24-- http://www.furaffinity.net/gallery/kacey
Resolving www.furaffinity.net (www.furaffinity.net)... 104.20.69.59
Connecting to www.furaffinity.net (www.furaffinity.net)|104.20.69.59|:80... connected.
HTTP request sent, awaiting response... 403 Forbidden
2018-03-23 15:03:24 ERROR 403: Forbidden.
same with https.
Xerbo Allow downloading of specific folders, ref #38
Running it in a terminal window on Ubuntu or Bash on Windows just returns me to the command line without doing anything.
If I don't enter any parameters it shows some help text, so I know I got the path right.
When I run it in macOS Monterey, I get the following:
mktemp: illegal option -- -
usage: mktemp [-d] [-q] [-t prefix] [-u] template ...
mktemp [-d] [-q] [-u] -t prefix
It appears there is no --suffix argument for mktemp in macOS.
The man page for macOS' version of mktemp is here: https://manpagez.com/man/1/mktemp/
For example: https://www.furaffinity.net/user/-tja-/
furaffinity-dl.py: error: unrecognized arguments: -tja-
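This looks like argparse treating the leading dash as an option flag. If the script uses argparse, a bare -- before the positional argument should work without any code change. A sketch of the behaviour; the "category" argument name is made up for illustration:

```python
import argparse

# A lone "--" tells argparse to stop option parsing, so values that begin
# with a dash (like the username -tja-) parse as positionals.
parser = argparse.ArgumentParser()
parser.add_argument("category")  # hypothetical positional, e.g. gallery/-tja-
args = parser.parse_args(["--", "gallery/-tja-"])
```

So something like `./furaffinity-dl.py -- gallery/-tja-` may work as-is, assuming the CLI is argparse-based.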
So I tried to use it to download a gallery; the cookie file doesn't produce any errors, but it stops when it says:
ERROR: The certificate of ‘www.furaffinity.net’ is not trusted.
ERROR: The certificate of ‘www.furaffinity.net’ hasn't got a known issuer.
Which stops the download. How can I whitelist the certificate? Thanks!
Sometimes half of the information for a submission is in the description, e.g. for stories. The description should be downloaded too.
Mostly noticed this with audio posts, but it seems to throw an "ftp" before the "http" in the URL and then the entire process comes to a stop.
Also had trouble saving stuff from fav galleries; it just seems to go in a loop without downloading anything other than the first page.
great script btw, it's been really helpful outside of these couple of issues!
I tried to download the 5razor gallery but it stops in the middle of the first page; the only workaround for me is to download the next page, gallery/5razor/2. Also, I use the classic skin with cookies on.
Hey, I use the tool on Windows.
I installed a Linux version from the Microsoft Store.
I typed this in the command line:
cd /mnt/c/Users/Ich/Desktop/fu
chmod +x furaffinity-dl
./furaffinity-dl -o user -c /cookies/cookies.txt -p gallery/user
But it only works on some users. :(
Also it doesn't download adult content.
Should be using //d2.facdn.net/art/ now:
image_url="$prefix$(grep --extended-regexp --only-matching --max-count=1 ' href="//d2.facdn.net/art/.+">Download' "$tempfile" | cut -d '"' -f 2)"
If I have https://www.furaffinity.net/view/45842186/ (NSFW, cw: hyper titties, clothed) in my favorites, the Python downloader will crash with the following error when it comes to this image:
Traceback (most recent call last):
File "/Users/egypt/Documents/scripts/fa/peggy faves/../furaffinity-dl.py", line 202, in <module>
download(img.find('a').attrs.get('href'))
File "/Users/egypt/Documents/scripts/fa/peggy faves/../furaffinity-dl.py", line 135, in download
for tag in s.find(class_='tags-row').findAll(class_='tags'):
AttributeError: 'NoneType' object has no attribute 'findAll'
If I unfav this one particular image, the downloader will happily go past the point it occupied in my favorites; favorite it again and the downloader once again crashes when it comes to it. The downloader also has no problems grabbing other images by this artist.
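A defensive sketch of the pattern that would avoid the crash: treat a missing tags row as "no tags" rather than chaining .findAll onto None. This is a regex-based stand-in for the BeautifulSoup lookup; the markup is assumed from the traceback's class names:

```python
import re

def extract_tags(page_html):
    # Return keyword tags, or [] when the tags row is absent from the
    # submission page, instead of crashing on None.
    row = re.search(r'<section class="tags-row">(.*?)</section>',
                    page_html, re.S)
    if row is None:
        return []
    return re.findall(r'>([^<>]+)</a>', row.group(1))
```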
When downloading it just prints "description...."; also, the metadata doesn't include tags and mentioned users.
Ubuntu 19.10
I used this tool once about a month ago. I came back to use it today and it's not working. Am I doing something wrong, or is the tool broken? Here's the output:
~/furaffinity-dl-master$ ./furaffinity-dl -c ./cookie.txt -o /path/to/download/to gallery/artist
INFO: eyed3 is not installed, no metadata will be injected into music files.
INFO: exiftool is not installed, no metadata will be injected into pictures.
2020-03-09 16:57:01 URL:https://www.furaffinity.net/gallery/artist [123456] -> "/home/user/.cache/furaffinity-dl/fa-dl.etcetcetc" [1]
2020-03-09 16:57:02 URL:https://www.furaffinity.net/view/12345678/ [123456] -> "/home/user/.cache/furaffinity-dl/fa-dl.etcetcetc" [1]
File already exists, skipping. Use -w to skip this check
2020-03-09 16:57:03 URL:https://www.furaffinity.net/view/12345678/ [12345] -> "/home/user/.cache/furaffinity-dl/fa-dl.etcetcetc" [1]
--2020-03-09 16:57:03-- ftp://https/
=> ‘.listing’
Resolving https (https)... failed: No address associated with hostname.
wget: unable to resolve host address ‘https’
Did something change on FA's server and so the script's address is mispointed?
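The ftp://https/ lines suggest wget is being handed a malformed URL: FA serves protocol-relative links (//d.facdn.net/...), and if the https: prefix gets glued on with a stray space or newline, wget parses "https" as a hostname and falls back to ftp. A sketch of the normalization; the root-cause guess is mine, not confirmed:

```python
def absolutize(link, scheme="https"):
    # FA's download links are protocol-relative ("//d.facdn.net/...");
    # trim whitespace and prepend the scheme before handing to wget.
    link = link.strip()
    if link.startswith("//"):
        return scheme + ":" + link
    return link
```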
Been using the bash script for quite some time already, and yesterday when I tried to download a gallery it began throwing this "https:: Invalid argument" error at me. Tried executing the script with -i and had no luck either; it just returns "http:: Invalid argument" instead.
Running the script on Windows 7 32bit through mingw (the one that comes with git by default). Have all the dependencies listed in the readme except for eyed3 and exif ones.
It's finally happening: when finished, all current code will be moved to a new branch and the Python version will occupy the master branch.
All existing functionality will be migrated; this will also allow native Windows support and easier development.
I'm using Bash for Windows and when trying to use my cookies exported from Firefox using Ganbo (Yes, I was on the right tab, I checked), it still doesn't like them.
The script appears to have an issue with downloading text files, such as .pdf, .txt, and .docx. While images download with their exact filename (i.e. <file_id>.filename.png), text files appear to download as the page ID with no extension.
Text documents don't support the new URL
https://d.facdn.net/download/art/user/stories/id/id.file.ext
they use the old-style URL
https://d.facdn.net/art/user/stories/id/id.file.ext
Here's an example:
https://www.furaffinity.net/view/34436711/
Using the download button we get a text file; using the download submission button we get an "igame not found" error.
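Until FA fixes it, a workaround sketch: fall back to the old-style path for text submissions by dropping the /download segment. The URL patterns are copied from above; the extension list is a guess at which types are affected:

```python
TEXT_EXTENSIONS = (".txt", ".pdf", ".docx", ".doc", ".rtf")  # assumed list

def fix_story_url(url):
    # Text submissions 404 on the new "/download/art/" path; strip the
    # "/download" prefix to get the old-style URL that still works.
    if url.lower().endswith(TEXT_EXTENSIONS):
        return url.replace("/download/art/", "/art/", 1)
    return url
```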
If the -c option is used with a valid cookie but the associated account uses the "beta" layout (as opposed to the "classic" one), attempting to download content will stop after the first page.
This is because next_page_url is generated by grepping for the class "button-link right"; in the "beta" layout, the Next Buttons are class "button mobile-button right" instead. Because the grep fails to find a valid URL, the download stops after the first page.
Going into account settings and switching back to the "classic" layout fixes the problem, but it's rather inconvenient.
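A sketch of a layout-agnostic lookup that accepts either class. The class strings are taken from this issue; it assumes the class attribute appears before href in the anchor tag, which may not hold on the real pages:

```python
import re

def next_page_url(page_html):
    # Match the next-page link in both layouts: "button-link right"
    # (classic) and "button mobile-button right" (beta).
    m = re.search(
        r'class="(?:button-link right|button mobile-button right)"'
        r'[^>]*\bhref="([^"]+)"',
        page_html)
    return m.group(1) if m else None
```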
When specifying a cookie file, this occurs:
--2017-03-30 20:32:14-- ftp://http/button%20download-logged-in
=> ‘.listing’
Resolving http (http)... failed: Name or service not known.
wget: unable to resolve host address ‘http’
Most recently receiving the following error:
Traceback (most recent call last):
File "furaffinity-dl.py", line 200, in <module>
next_button = s.find('button', class_='button standard', text="Next").parent
AttributeError: 'NoneType' object has no attribute 'parent'
All gallery pages were downloaded, and I assume there's supposed to be a catch to see if this is the last page in a gallery or not?