
blogspot-downloader's Introduction


blogspot-downloader's People

Contributors

limkokhole


blogspot-downloader's Issues

When I run the .py file to download, there is an error log

Try to scrape rss feed url automatically ... https://xxx.blogspot.com/
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)>
Request webpage failed, please check your network OR authorized to access that url.

How can I solve this? I'm sure I can browse blogspot.com in Chrome, and I have a proxy configured in the iTerm session where blogspot_downloader.py was run.
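CERTIFICATE_VERIFY_FAILED usually means Python cannot find a usable CA bundle (common on macOS installs, or behind a TLS-intercepting proxy), not that the site is unreachable. A minimal sketch of two common workarounds, assuming the script's urlopen calls can be handed an SSL context:

```python
import ssl
import urllib.request

# Preferred: build a context from an up-to-date CA bundle.
# (On macOS, running "Install Certificates.command" from the Python
# folder achieves the same thing for the default context.)
try:
    import certifi  # pip install certifi
    ctx = ssl.create_default_context(cafile=certifi.where())
except ImportError:
    ctx = ssl.create_default_context()

# Last resort, insecure: skip certificate verification entirely.
insecure_ctx = ssl._create_unverified_context()

# Either context is then passed to each request, e.g.:
# urllib.request.urlopen("https://xxx.blogspot.com/", context=ctx)
```

The insecure context disables the very check that is failing, so it should only be used to confirm the diagnosis, not as a permanent fix.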

Error either passing URL or feed URL

This is what I get passing feed URL:

python blogspot_downloader.py -p -f https://foo.blogspot.com/feeds/posts/default
Download in rss feed mode
Scraping rss feed... https://foo.blogspot.com/feeds/posts/default?start-index=1&max-results=25
Traceback (most recent call last):
  File "blogspot_downloader.py", line 636, in <module>
    main()
  File "blogspot_downloader.py", line 610, in main
    url = download(url, url, d_name, ext)
  File "blogspot_downloader.py", line 348, in download
    print('\ntitle: ' + title_raw)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf4' in position 25: ordinal not in range(128)

Exception -2
Traceback (most recent call last):
  File "blogspot_downloader.py", line 636, in <module>
    main()
  File "blogspot_downloader.py", line 610, in main
    url = download(url, url, d_name, ext)
  File "blogspot_downloader.py", line 348, in download
    print('\ntitle: ' + title_raw)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xf4' in position 25: ordinal not in range(128)

Exception -1
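The UnicodeEncodeError above is the classic Python 2 failure mode: print writes a unicode string (here containing u'\xf4') to an ASCII-configured stdout. A minimal sketch of the usual workaround; the title string is hypothetical:

```python
import sys

title_raw = u'Exp\xf4s\xe9'  # hypothetical title containing non-ASCII chars

if sys.version_info[0] == 2:
    # Python 2: sys.stdout may use the ascii codec, so encode explicitly.
    line = ('title: ' + title_raw).encode('utf-8')
else:
    # Python 3: str is already unicode and stdout is normally UTF-8.
    line = 'title: ' + title_raw

print(line)
```

Running the script under Python 3, or setting PYTHONIOENCODING=utf-8 for Python 2, avoids the error without touching the code.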

and this is what I get passing only the simple URL:

python blogspot_downloader.py -p https://foo.blogspot.com
Download in rss feed mode
Scraping rss feed... https://foo.blogspot.com?start-index=1&max-results=25
Try to scrape rss feed url automatically ... https://foo.blogspot.com
Traceback (most recent call last):
  File "blogspot_downloader.py", line 636, in <module>
    main()
  File "blogspot_downloader.py", line 610, in main
    url = download(url, url, d_name, ext)
  File "blogspot_downloader.py", line 213, in download
    soup = BeautifulSoup(r, "lxml")
  File "/usr/local/lib/python2.7/dist-packages/BeautifulSoup.py", line 1522, in __init__
    BeautifulStoneSoup.__init__(self, *args, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/BeautifulSoup.py", line 1147, in __init__
    self._feed(isHTML=isHTML)
  File "/usr/local/lib/python2.7/dist-packages/BeautifulSoup.py", line 1189, in _feed
    SGMLParser.feed(self, markup)
  File "/usr/lib/python2.7/sgmllib.py", line 104, in feed
    self.goahead(0)
  File "/usr/lib/python2.7/sgmllib.py", line 174, in goahead
    k = self.parse_declaration(i)
  File "/usr/local/lib/python2.7/dist-packages/BeautifulSoup.py", line 1463, in parse_declaration
    j = SGMLParser.parse_declaration(self, i)
  File "/usr/lib/python2.7/markupbase.py", line 109, in parse_declaration
    self.handle_decl(data)
  File "/usr/local/lib/python2.7/dist-packages/BeautifulSoup.py", line 1448, in handle_decl
    self._toStringSubclass(data, Declaration)
  File "/usr/local/lib/python2.7/dist-packages/BeautifulSoup.py", line 1381, in _toStringSubclass
    self.endData(subclass)
  File "/usr/local/lib/python2.7/dist-packages/BeautifulSoup.py", line 1251, in endData
    (not self.parseOnlyThese.text or \
AttributeError: 'str' object has no attribute 'text'
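The second traceback is telling: the frames run through /usr/local/lib/python2.7/dist-packages/BeautifulSoup.py, which is the legacy BeautifulSoup 3 module, and BS3 does not understand the "lxml" parser argument the script passes (it treats it as a parseOnlyThese filter, hence the AttributeError). The script presumably expects beautifulsoup4, imported as bs4. A minimal check under that assumption, using the built-in html.parser ("lxml" works the same way once the lxml package is installed):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = "<html><body><p>hello</p></body></html>"
soup = BeautifulSoup(html, "html.parser")  # or "lxml" with lxml installed
print(soup.p.text)
```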

[SSL: CERTIFICATE_VERIFY_FAILED]

Trying to download a blogspot gives me the following error:

>> python3 blogspot_downloader.py https://google.blogspot.com/ -a -p
Download all in website mode
<urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)>
Please check your network OR url.

Someone else had the same issue, but even after updating everything it still doesn't work.
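If updating packages alone doesn't help, a different angle on the same CERTIFICATE_VERIFY_FAILED error is to point Python's ssl module at a known-good CA bundle via an environment variable, without modifying the script. This assumes certifi can be installed; `python3 -m certifi` prints the path to its bundle:

```shell
python3 -m pip install certifi
# OpenSSL (and hence Python's ssl module) honors SSL_CERT_FILE
export SSL_CERT_FILE="$(python3 -m certifi)"
# then re-run:
# python3 blogspot_downloader.py https://google.blogspot.com/ -a -p
```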

Various RSS PDF download issues

Thanks for this useful script!

For some reason I seem to be able to download the specific blog I'm after only through the RSS feed (with the -p option); however, every time I run the command the scraping and downloading stop at a particular post.

I've tried using the -a, -s, and -p flags to download a specific year (or month) starting after the post that seems to be causing the problem, but I get the following error:

title: Baru Samarinda
link: https://blog-name.blogspot.com//2015/07/baru-samarinda-terima.html
Download html as PDF, please be patient...18/71
file path: /home/xthursdayx/blogspot-downloader/blog name.blogspot.com /Baru Samarinda Terima Penganugerahan P....pdf
pdfkit IOError

I also tried the command python3 blogspot_downloader.py -lo http://blog-name.blogspot.com/, exported the results to urls.list, and then ran python3 blogspot_downloader.py -p -1 <urls.list, which gave the following error:

URL: Create single pdf: /home/vidrir/blogspot-downloader/flores borneo.blogspot.com.pdf
IOError --one:  wkhtmltopdf reported an error:
Loading page (1/2)
Error: Failed to load https://blog-name.blogspot.com/2015/07/baru-samarinda-terima.html.html?action=backlinks&widgetId=Blog1&widgetType=Blog&responseType=js&postID=6408130842748688230&xssi_token=AOuZoY7PJVRw0EwDhHe-xNsCx9cPbEV4gQ400A1620183076462, with network status code 302 and http status code 400 - Error transferring https://blog-name.blogspot.com/2015/007/baru-samarinda-terima.html?action=backlinks&widgetId=Blog1&widgetType=Blog&responseType=js&postID=6408130842748688230&xssi_token=AOuZoY7PJVRw0EwDhHe-xNsCx9cPbEV4gQ%3A1620183076462 - server replied:
Printing pages (2/2)
Done
Exit with code 1 due to network error: ProtocolInvalidOperationError

Any idea what my problem is? Thanks for the help!

** Blog name and post changed for the blog owner's privacy.
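On the wkhtmltopdf failures above: the log shows wkhtmltopdf aborting because a single sub-resource (the backlinks request) returned HTTP 400, not because the post itself failed to load. wkhtmltopdf has flags to treat such load errors as non-fatal; a hedged sketch of passing them through pdfkit's options dict (flag support depends on the wkhtmltopdf version, and pdfkit plus wkhtmltopdf must be installed):

```python
# wkhtmltopdf command-line flags, expressed as a pdfkit options dict.
options = {
    "load-error-handling": "ignore",        # don't fail the page on an HTTP error
    "load-media-error-handling": "ignore",  # same for media sub-resources
}

# Hypothetical usage (requires: pip install pdfkit, plus the wkhtmltopdf binary):
# import pdfkit
# pdfkit.from_url("https://blog-name.blogspot.com/2015/07/some-post.html",
#                 "some-post.pdf", options=options)
```

If the script calls pdfkit internally, the equivalent fix is to add these keys wherever it builds its options.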
