yiff.party-image-scraper's People

Contributors

ruthalas, scrapert, viktor02, xealeph

yiff.party-image-scraper's Issues

Kemono Party

It looks like people are starting to move over to Kemono Party now that Yiff Party is dead.

Would you consider making a similar tool for Kemono Party?

Download post as text

Many Patreons have links in their posts leading to third-party hosting sites. For example, an artist might link to a Mega.nz file, a Vimeo video, or even a gfycat.com file, and so on.

I assume adding a downloader for all cases would be too much. So perhaps a link detector instead? If a link is detected in a post, it could be put into a text file in the same Patreon download folder. That way a user could download the links themselves without having to scroll and check every single post.

One issue though:
Many links (like Vimeo videos) are password protected. Either a "password detector" would be needed or maybe we would need the option to download all the posts as text files?
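
A minimal sketch of the link-detector idea, assuming the scraper already has each post's HTML as a BeautifulSoup object and a per-creator download folder (both names below are placeholders):

    import os
    from urllib.parse import urlparse
    from bs4 import BeautifulSoup  # the scraper already parses pages with bs4

    def save_external_links(soup, downloadFolder):
        # Collect hrefs that point away from yiff.party itself.
        links = []
        for a in soup.find_all('a', href=True):
            host = urlparse(a['href']).netloc
            if host and 'yiff.party' not in host:
                links.append(a['href'])
        if links:
            # Append to a text file next to the downloaded images.
            path = os.path.join(downloadFolder, 'external_links.txt')
            with open(path, 'a', encoding='utf-8') as f:
                f.write('\n'.join(links) + '\n')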

Folder Naming Suggestion

Right now, the created folders are simply named with their user number. It can get very confusing to find content when everything is just numbers. I suggest adding the Patreon/Fantia username to the folder name.

One potential issue with this is that previously downloaded folders would have to be renamed. Otherwise the scraper would just make another folder and start downloading everything again.
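
One way to avoid the re-download problem: if a legacy numeric folder already exists, rename it instead of creating a new one. A rough sketch (userId and username are placeholder names):

    import os

    def creator_folder(base, userId, username):
        old = os.path.join(base, str(userId))                    # legacy numeric name
        new = os.path.join(base, str(userId) + '_' + username)   # number plus readable name
        if os.path.isdir(old) and not os.path.isdir(new):
            os.rename(old, new)  # migrate a previously downloaded folder
        os.makedirs(new, exist_ok=True)
        return new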

Missing image from scraping

Sometimes images that are present on a page are not downloaded by the script.
If I try to scrape this profile (https://yiff.party/patreon/3221105), the image at this link (https://data.yiff.party/patreon_inline/15766578/f8a6acddcc7c6e06e63ca590ff46e90d5ff57e1c.jpg) is not downloaded. I noticed the same thing on other profiles, in particular when there is more than one image in a gallery rather than as an attachment.
I know a bit of Python and tried to fix your code, but I was not able to. However, I can see that the missing image link is present in "containersPart2", but for some reason, when you do container.find_all('a') and linkList.append(subCont['href']), that link remains excluded. I hope this information can help you.
Sorry for my bad English, and thanks in advance for your time and help.
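
One guess: the patreon_inline images may be embedded as plain <img> tags with no surrounding <a>, so container.find_all('a') never sees them. Collecting <img> sources as well might catch them; a sketch, untested against the real page structure:

    def collect_links(container, linkList):
        # Anchor hrefs, as the script already collects:
        for subCont in container.find_all('a', href=True):
            linkList.append(subCont['href'])
        # Guess: inline gallery images may sit in <img> tags
        # that find_all('a') never visits.
        for img in container.find_all('img', src=True):
            if img['src'] not in linkList:
                linkList.append(img['src'])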

Makes duplicates

The script creates duplicates; they are usually located near each other, with the same names (except for the number). Perhaps this is due to downloading both the attached files and the cover art.

Identically named files erased

When downloading, files with the same name overwrite each other, and only the last one seems to be kept. Is there a way to fix this? A lot of content can easily disappear.

Do you have any idea how to fix this?
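
A common workaround is to make each target filename unique before writing instead of overwriting; a minimal sketch (names are placeholders):

    import os

    def unique_path(folder, filename):
        # If "name.jpg" exists, fall back to "name (1).jpg", "name (2).jpg", ...
        base, ext = os.path.splitext(filename)
        candidate = os.path.join(folder, filename)
        counter = 1
        while os.path.exists(candidate):
            candidate = os.path.join(folder, '%s (%d)%s' % (base, counter, ext))
            counter += 1
        return candidate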

Download folder messed up on Linux

The folder creation for .\Images\ is broken for Linux, and instead creates hidden files with the path as the filename.

$ ls -a
.\Images\
.\Images\17697790\
.\Images\17697790\0 0084A251-4A39-4E2E-9AB4-2C5EE2C84688.jpg
.\Images\17697790\001A931F-808A-4B63-9E57-C2AD695FA38F.JPEG
.\Images\17697790\0084A251-4A39-4E2E-9AB4-2C5EE2C84688.jpg
...

I'm running Linux (Ubuntu 16.04), Python 3.7.3

>>> os.path.abspath(".\\Images\\")
'/home1/bpo5r5/scraper/Yiff.party-Image-Scraper/.\\Images\\'

Edit: I believe os.getcwd() can solve this issue.
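
The portable fix is to build paths with os.path.join rather than hard-coding the Windows separator, since '.\Images\17697790' is just a literal filename on Linux. A sketch:

    import os

    # os.path.join picks the right separator for the platform.
    folder = os.path.join(os.getcwd(), 'Images', '17697790')
    os.makedirs(folder, exist_ok=True)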

Second page bug

If the author has a second page, the scraper does not download it. Can you fix this bug?
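
I don't know how the script walks a creator's page, but pagination is usually handled by following the next-page link until none is left; a rough sketch (the selector is a placeholder, not the real page structure):

    import requests
    from bs4 import BeautifulSoup

    url = 'https://yiff.party/patreon/3221105'  # example profile from another issue
    while url:
        soup = BeautifulSoup(requests.get(url).text, 'html.parser')
        # ... scrape this page's posts here ...
        nxt = soup.find('a', {'rel': 'next'})  # placeholder selector for the next-page link
        url = nxt['href'] if nxt else None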

adding youtube-dl?

I see you are unable to download the embedded Vimeo videos, and I assume other external videos as well. If you include youtube-dl (you might also need ffmpeg for some stuff) as the video downloader, you can easily have it download YouTube videos, or videos from other sites as long as they are on youtube-dl's supported list. It can also handle the embedded Vimeo videos, though I'm not sure how you would go about getting the "link". Here is a Reddit post that shows how youtube-dl can download these embedded videos. I just tested it and it did work. https://www.reddit.com/r/youtubedl/comments/fzv58p/getting_some_embedded_vimeo_videos_from_a_webpage/
Something to note: when they tell you to filter URLs that contain .json?base64_init=1, then copy it and replace the text they just quoted with .mpd, they mean replace .json?base64_init=1 at the end of the link you just copied. I don't know how you would go about grabbing that link in the first place, sadly. Hope this idea helps! It would also be nice if we could set a download location.
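
For reference, youtube-dl can be driven directly from Python, so the scraper could hand it any detected video link; a minimal sketch (the output template is just an example):

    import youtube_dl  # pip install youtube-dl

    ydl_opts = {'outtmpl': 'Images/%(title)s.%(ext)s'}  # example output template
    with youtube_dl.YoutubeDL(ydl_opts) as ydl:
        ydl.download(['https://vimeo.com/SOME_VIDEO_ID'])  # placeholder URL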

Skip downloaded files?

First of all, thank you for making this! It works really well!

Is it possible that you could make it detect if you already downloaded a file and only get the newest/missing posts?

Thanks!
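
A simple version of this, assuming the scraper knows the target path before downloading, would just be an existence check; a sketch (names are placeholders):

    import os
    import requests

    def download_if_new(url, targetPath):
        if os.path.exists(targetPath):
            return  # already downloaded on an earlier run; skip it
        with open(targetPath, 'wb') as f:
            f.write(requests.get(url).content)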

Error after downloading

Traceback (most recent call last):
  File "yiff_image_scraper.py", line 100, in <module>
    downloadImages(url, urlCounter)
  File "yiff_image_scraper.py", line 94, in downloadImages
    print("\nSuccessfully downloaded " + str(len(imageCounter)) + " Images!\n")
TypeError: object of type 'int' has no len()
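
The traceback suggests imageCounter is a plain int, and len() only works on sequences; if so, printing the counter directly should fix it:

    imageCounter = 42  # example value; in the script this is the running download count
    # len(imageCounter) raises TypeError on an int; str() is enough here.
    print("\nSuccessfully downloaded " + str(imageCounter) + " Images!\n")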

Feature Request

Could you add the ability to set a folder location for the files instead of it just downloading to the script's folder?
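
A sketch of what that could look like with argparse; the --output flag is hypothetical, not an existing option:

    import argparse
    import os

    parser = argparse.ArgumentParser()
    parser.add_argument('--output', default='Images',
                        help='folder to download into (hypothetical flag)')
    args = parser.parse_args()
    os.makedirs(args.output, exist_ok=True)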

'list' referenced before assignment (due to Fantia?)

I tried running the current version on a Patreon yiff.party page, and it failed at line 324, where it references the variable list, which I think (?) has not been set because the link used is not on the Fantia platform.
Line 324:

    if platform != 'fantia':
        linkList = list(dict.fromkeys(linkList))

If the link is Fantia, that variable gets set at line 251:

    if platform == 'fantia':
        list = []
        containersFantia = soup.find_all('div', {'class': 'col s12 m6'})
        for cont in containersFantia:
            list.append("https://yiff.party" + cont.a['href'].strip())
            linkList += fantiaSubroutine(list)
        continue

But if the platform is not Fantia, the variable is never initialized, I think.

This is the output:

======Starting Scraper========
Traceback (most recent call last):
  File "yiff_image_scraper.py", line 437, in <module>
    downloadImages(url, urlCounter, useFolders)
  File "yiff_image_scraper.py", line 324, in downloadImages
    linkList = list(dict.fromkeys(linkList))
UnboundLocalError: local variable 'list' referenced before assignment
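
For what it's worth, the root cause looks like name shadowing rather than just a missing initialization: because "list = []" is assigned somewhere inside the function, Python treats list as a local name for the whole function body, so the call list(dict.fromkeys(linkList)) can no longer reach the builtin when the Fantia branch did not run. A minimal reproduction, and the obvious fix of renaming the variable:

    def broken(platform):
        linkList = ['a', 'a', 'b']
        if platform == 'fantia':
            list = []  # this assignment makes 'list' local to the whole function
        return list(dict.fromkeys(linkList))  # UnboundLocalError when platform != 'fantia'

    def fixed(platform):
        linkList = ['a', 'a', 'b']
        if platform == 'fantia':
            fantiaList = []  # renamed, so the builtin 'list' stays visible
        return list(dict.fromkeys(linkList))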

NoneType object is not subscriptable

Found another bug!

Tried to get this URL: https://yiff.party/fantia/4284

And got:
Traceback (most recent call last):
  File "yiff_image_scraper.py", line 490, in <module>
    downloadImages(url, urlCounter, useFolders)
  File "yiff_image_scraper.py", line 292, in downloadImages
    linkList += fantiaSubroutine(fantiaList)
  File "yiff_image_scraper.py", line 197, in fantiaSubroutine
    linklist.append(var.a['href'])
TypeError: 'NoneType' object is not subscriptable
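
The error means var.a was None, i.e. that element contains no <a> tag, and indexing into None fails; guarding the loop should skip those cases. A sketch using the names from the traceback (the surrounding iterable is a placeholder):

    linklist = []
    for var in containers:  # 'containers' stands in for the real iterable of elements
        if var.a is not None:  # an element without an <a> tag would crash on ['href']
            linklist.append(var.a['href'])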

Store which files have already been downloaded

Downloading entire Patreons can end up being quite large in file size. It would be nice to be able to put my saved Patreons onto offline storage.

The problem with that is if I want to download only the new posts later on, Yiff Image Scraper has no idea that I previously downloaded the older posts. It would just download everything all over again.

Could we get some sort of database that remembers everything we downloaded so it never downloads it again? Unless manually overridden I suppose...
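
A minimal version of such a database could be just a text file of completed URLs, loaded into a set at startup; a sketch (the filename is a placeholder):

    import os

    DB_PATH = 'downloaded.txt'  # one completed URL per line

    def load_seen():
        if not os.path.exists(DB_PATH):
            return set()
        with open(DB_PATH, encoding='utf-8') as f:
            return set(line.strip() for line in f)

    def mark_seen(url):
        with open(DB_PATH, 'a', encoding='utf-8') as f:
            f.write(url + '\n')

    # Before downloading: skip if url in load_seen(); after success: mark_seen(url).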

UnicodeEncodeError

Thanks so much for the database! Looks like it's working perfectly!

In the process of using the new update, it seems I've found a bug.

Tried to get this: https://yiff.party/fantia/21000
(NSFW)

And got this error:

Traceback (most recent call last):
  File "yiff_image_scraper.py", line 490, in <module>
    downloadImages(url, urlCounter, useFolders)
  File "yiff_image_scraper.py", line 392, in downloadImages
    f.writelines(galleryAuthor + '\n;')
  File "C:\Users\User\AppData\Local\Programs\Python\Python38\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\u3072' in position 0: character maps to <undefined>
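
The traceback shows the file being written with Windows' default cp1252 codec, which cannot represent the Japanese character in the author name; passing encoding='utf-8' to open() is the usual fix. A sketch of the write at line 392 (the filename is a placeholder):

    galleryAuthor = '\u3072'  # example: the hiragana character from the error
    # cp1252 is the Windows default; force UTF-8 so any author name encodes.
    with open('authors.txt', 'w', encoding='utf-8') as f:
        f.writelines(galleryAuthor + '\n;')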
