stewartmckee / cobweb

Web crawler with very flexible crawling options. Can be used standalone, or with Resque to perform clustered crawls.

License: MIT License


cobweb's Issues

Falling into Crawl Traps

Hi,

To start, thank you for an excellent piece of work. Appreciated.

I'm trying to use this to crawl the site http://www.udemy.com/. I added it to my Gemfile, ran bundle install, and started it up, and everything looked great. What I found was that it fell victim to crawl traps, generating URLs like this:

https://www.udemy.com/courses/photography/mobile-photography/all-courses/?p=324

The actual number of pages on the base URL:

https://www.udemy.com/courses/photography/mobile-photography/all-courses/

is only 3, so it's spidering far, far deeper than needed.

Any suggestions for how to go about addressing this?

What I'm trying to do is build a page_archiver, and my core loop looks like this (it's being executed from a Rake task):

require 'cobweb'

page_ctr   = 0          # pages archived so far
start_time = Time.now   # baseline for the timing output

statistics = CobwebCrawler.new(:cache => 600, :thread_count => 10, :valid_mime_types => ["text/html"]).crawl("http://www.udemy.com") do |page|
  puts "Just crawled #{page[:url]} and got a status of #{page[:status_code]}."
  if page[:mime_type] == "text/html"
    page_ctr += 1
    page_archive = PageArchive.find_or_create(page[:body], page[:url])  # my own model, not part of cobweb
    total_time = Time.now - start_time
    puts "  Total time: #{total_time}"
    puts "  Total pages: #{page_ctr}"
    puts "  Time per page: #{total_time.to_f / page_ctr}"
  else
    puts "  Not text/html for: #{page[:url]}"
  end
end

After running it for about 20 minutes, it got 10,000 "pages" deep, almost all of which were just pseudo-pages like the ?p=324 URL.

I didn't see any configuration option that would limit this, so it feels like something internal to the guts of the crawler; but if I've missed something, my bad.

Thanks
Scott
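
One possible mitigation, assuming cobweb's :crawl_limit option caps the total number of pages fetched as the README suggests, is to bound the crawl and skip the trap pattern by hand:

require 'cobweb'

# :crawl_limit is assumed to cap total pages fetched (per the README);
# the ?p= regexp is specific to this site and purely illustrative.
statistics = CobwebCrawler.new(:cache => 600, :crawl_limit => 500).crawl("http://www.udemy.com") do |page|
  next if page[:url].to_s =~ /\?p=\d+/  # skip paginated pseudo-pages
  puts "Just crawled #{page[:url]} and got a status of #{page[:status_code]}."
end

Note the regexp only suppresses processing inside the block; the crawler may still fetch those URLs, so the hard cap is what actually stops the trap.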

Redirect Limit causing crawl to stop

When the redirect limit is hit, it kills the crawl. The RedirectError is raised but doesn't seem to be trapped; it appears to be raised again for each subsequent call into the get method, which it shouldn't be, because the method should check whether the redirect_limit has counted down to 0.
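
A self-contained illustration of the countdown being described (using Net::HTTP; this is not cobweb's actual internals, and relative Location headers are not handled):

require 'net/http'
require 'uri'

class RedirectError < StandardError; end

# Decrement the redirect budget on each hop and raise only once it is
# exhausted, rather than re-raising on every subsequent call.
def get(url, redirect_limit = 10)
  raise RedirectError, "redirect limit reached at #{url}" if redirect_limit <= 0
  response = Net::HTTP.get_response(URI(url))
  if response.is_a?(Net::HTTPRedirection)
    get(response["location"], redirect_limit - 1)
  else
    response
  end
end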

Inbound links are not normalized when stored

If I call Stats.inbound_links_for(my_url) during parse, I sometimes don't see the correct results. This is because the URI being processed during parse was normalized before the page data was fetched, but links are not normalized before their digests are calculated as redis keys.
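
A sketch of the fix: run each link through the same normalization before hashing it into a redis key. URI#normalize and the digest algorithm here are stand-ins for whatever cobweb actually applies before fetching.

require 'digest'
require 'uri'

# Normalize the link the same way URIs are normalized before fetching,
# so both sides of the lookup produce the same redis key.
def inbound_link_key(link)
  normalized = URI.parse(link.to_s).normalize.to_s
  Digest::MD5.hexdigest(normalized)  # digest choice is illustrative
end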

Feature request: Stop crawl at time

Hello -- this looks like a great crawler, but when crawling I need a way to cap crawl time on a per-URL basis.

To that end, I recommend two features:

Actually raise exceptions

This would allow me to decide any arbitrary conditions upon which to stop crawling.

require 'cobweb'
require 'securerandom'

class MyCustomError < StandardError; end  # defined so the example is self-contained

def condition
  true if SecureRandom.hex(10).include?("a") # or whatever condition I deem relevant
end

CobwebCrawler.new(:raise_arbitrary_exceptions => true).crawl("http://pepsico.com") do |page|
  puts "Just crawled #{page[:url]} and got a status of #{page[:status_code]}."
  raise MyCustomError, "message" if condition
end
Just crawled http://www.pepsico.com/ and got a status of 200.
# ... eventually condition is met ...
MyCustomError: message
        from (somewhere):3
# ...

Encode crawl stop options

This would be a higher level way of enshrining these as features, and would be a lot cleaner overall.

require 'cobweb'

pages = 0
puts Time.now #=> 2017-04-19 13:33:11 +0100 

CobwebCrawler.new(:max_pages => 1000, :max_time => 360).crawl("http://pepsico.com") do |page|
  pages += 1
end
puts "Stopped after #{pages} pages at #{Time.now}"
#=> Stopped after 1000 pages at 2017-04-19 13:36:25 +0100
# (... or some other time that is not more than 360 seconds from start time)

Ideally :max_time would accept DateTime, Time or Integer objects, where the integer would represent seconds.
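
A coercion along these lines would cover all three (sketch, under the assumption that an Integer :max_time is measured in seconds from the crawl's start):

require 'date'

# Turn a DateTime, Time or Integer :max_time into an absolute deadline.
def deadline_for(max_time, start_time = Time.now)
  case max_time
  when Integer  then start_time + max_time  # seconds from start
  when Time     then max_time
  when DateTime then max_time.to_time
  else raise ArgumentError, "unsupported :max_time: #{max_time.inspect}"
  end
end

deadline_for(360)              # 360 seconds from now
deadline_for(Time.now + 3600)  # an absolute Time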

I'm totally new to this project, so feel free to let me know if these are crazy requests. I'm happy to help build this, too, if you can give me a pointer as to where to start.

Error on first run

Running cobweb, I get this:

/Library/Ruby/Gems/2.0.0/gems/cobweb-1.1.0/bin/cobweb:13:in `block in <top (required)>': undefined method `banner' for main:Object (NoMethodError)
    from /Library/Ruby/Gems/2.0.0/gems/slop-4.2.1/lib/slop/options.rb:33:in `initialize'
    from /Library/Ruby/Gems/2.0.0/gems/slop-4.2.1/lib/slop.rb:23:in `new'
    from /Library/Ruby/Gems/2.0.0/gems/slop-4.2.1/lib/slop.rb:23:in `parse'
    from /Library/Ruby/Gems/2.0.0/gems/cobweb-1.1.0/bin/cobweb:12:in `<top (required)>'
    from /usr/local/bin/cobweb:23:in `load'
    from /usr/local/bin/cobweb:23:in `<main>'

I don't know what to do.
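
This looks like the slop 3-to-4 API break: cobweb's bin/cobweb configures its options with the slop 3 DSL (where banner is instance_eval'd), but slop 4 yields an options object to the block instead, so the bare banner call lands on main:Object. If that diagnosis is right, pinning slop to the 3.x series should work around it (untested suggestion):

# Gemfile workaround -- assumption: cobweb's CLI was written against the slop 3 DSL
gem 'slop', '~> 3.6'
gem 'cobweb'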

LoadError with version 1.0.26

When using 1.0.26 I get the message below. However, if I pin the gem to the prior version (gem 'cobweb', '1.0.25'), I have no issues starting the Rails server.

$ rails s
/Users/xxx/.rvm/gems/ruby-2.2.0/gems/cobweb-1.0.26/lib/cobweb_crawler.rb:3:in `require': cannot load such file -- ap (LoadError)
from /Users/xxx/.rvm/gems/ruby-2.2.0/gems/cobweb-1.0.26/lib/cobweb_crawler.rb:3:in `<top (required)>'
from /Users/xxx/.rvm/gems/ruby-2.2.0/gems/cobweb-1.0.26/lib/cobweb.rb:8:in `require'
from /Users/xxx/.rvm/gems/ruby-2.2.0/gems/cobweb-1.0.26/lib/cobweb.rb:8:in `block in <top (required)>'
from /Users/xxx/.rvm/gems/ruby-2.2.0/gems/cobweb-1.0.26/lib/cobweb.rb:7:in `each'
from /Users/xxx/.rvm/gems/ruby-2.2.0/gems/cobweb-1.0.26/lib/cobweb.rb:7:in `<top (required)>'
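
'ap' is the entry point of the awesome_print gem, so this looks like a runtime dependency that 1.0.26 requires but does not declare. If so, adding it yourself should unblock the server until the gemspec is fixed (untested workaround):

# Gemfile workaround -- assumption: cobweb 1.0.26 requires awesome_print's
# 'ap' file without declaring awesome_print as a dependency
gem 'awesome_print'
gem 'cobweb', '1.0.26'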

undefined method `banner' for main:Object (NoMethodError) on calling from command line

Hi,

I'm getting the following error when I try to use cobweb from command line. Here is the full stack trace:

#<Gem::Specification name=sidekiq version=4.1.4>
/Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/gems/cobweb-1.1.0/bin/cobweb:13:in `block in <top (required)>': undefined method `banner' for main:Object (NoMethodError)
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/gems/slop-4.3.0/lib/slop/options.rb:33:in `initialize'
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/gems/slop-4.3.0/lib/slop.rb:23:in `new'
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/gems/slop-4.3.0/lib/slop.rb:23:in `parse'
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/gems/cobweb-1.1.0/bin/cobweb:12:in `<top (required)>'
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/bin/cobweb:23:in `load'
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/bin/cobweb:23:in `<main>'
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/bin/ruby_executable_hooks:15:in `eval'
    from /Users/gustavo/.rvm/gems/ruby-2.3.1@site-shift/bin/ruby_executable_hooks:15:in `<main>'

The error occurs when I simply call cobweb --help.

I'm using:

  1. Sidekiq 4.1.4
  2. Redis Server 2.8.19

Is there anything I'm missing?

Thanks in advance,
Gustavo

Error raised when there's a valid <base> tag in <head>

After several years of happy operation our Cobweb-dependent crawler ran into a page at https://sso.cas.org/ where the <head> contains this <base> tag:

<base href="https://sso.cas.org/"/>

Our log file was reporting

Error loading http://our.example.com/url: undefined method `present?' for "https://sso.cas.org/":String

and I believe I've traced the problem to a bug in Cobweb's lib/content_link_parser.rb. In the code

14    if @doc.at("base[href]")
15      @base_url = @doc.at("base[href]").attr("href").to_s if @doc.at("base[href]").attr("href").to_s.present?
16    end

I believe the second line is intended to be:

15      @base_url = @doc.at("base[href]").attr("href").to_s if @doc.at("base[href]").attr("href").present?

though I haven't been under the hood in Cobweb before and may be misunderstanding what you're trying to do.
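
For what it's worth, present? comes from ActiveSupport rather than core Ruby, so a version of those lines that avoids it entirely would also stop the crash for non-Rails users (sketch of the same logic in plain Ruby):

if @doc.at("base[href]")
  href = @doc.at("base[href]").attr("href").to_s
  @base_url = href unless href.strip.empty?  # no ActiveSupport needed
end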

external_urls not treated as external

External URLs are not treated as external if they match the cache. A check should be performed when retrieving from the cache to make sure all criteria are re-evaluated, as they may have changed since the last crawl.
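
Something along these lines on the cache-read path might cover it (hypothetical sketch; the names and the fnmatch-style pattern matching are stand-ins for cobweb's actual classification logic):

# Re-classify a cached entry against the *current* crawl options, since
# the external_urls patterns may have changed since it was stored.
def external_url?(url, options)
  Array(options[:external_urls]).any? { |pattern| File.fnmatch(pattern, url) }
end

def cached_content_for(cache, key, options)
  entry = cache[key]
  return nil if entry.nil?
  return nil if external_url?(entry[:url], options)  # now external: don't reuse
  entry
end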

error while installing cobweb-1.0.28.gem: Invalid argument @ rb_sysopen

[OS: Win 7 x64]

ruby -v
ruby 2.2.1p85 (2015-02-26 revision 49769) [x64-mingw32]

gem install cobweb

Fetching: redis-3.2.1.gem (100%)
Successfully installed redis-3.2.1
Fetching: redis-namespace-1.5.2.gem (100%)
Successfully installed redis-namespace-1.5.2
Fetching: tilt-2.0.1.gem (100%)
Successfully installed tilt-2.0.1
Fetching: haml-4.0.6.gem (100%)
Successfully installed haml-4.0.6
Fetching: rack-protection-1.5.3.gem (100%)
Successfully installed rack-protection-1.5.3
Fetching: sinatra-1.4.6.gem (100%)
Successfully installed sinatra-1.4.6
Fetching: cobweb-1.0.28.gem (100%)
ERROR:  While executing gem ... (Errno::EINVAL)
    Invalid argument @ rb_sysopen - C:/Ruby22-x64/lib/ruby/gems/2.2.0/gems/cobweb-1.0.28/spec/samples/sample_site/boxgrid>withsillyname.html

(The > in that packaged sample filename is not a legal character in Windows filenames, which is presumably why gem extraction fails here.)

Encoding problems

Regardless of whether I use sidekiq or resque, I always get this error:

crawl_id: fdc9cd1655a54b3d303e2f38a916cc114c9be2c7
url: https://github.com/stewartmckee/cobweb/blob/master/.ruby-version
processing_queue: CrawlerResqueJob
crawl_finished_queue: CrawlerFinishedJob
internal_urls:
- https://github.com/stewartmckee/cobweb/blob/master/*
debug: true
raise_exceptions: true
redis_options:
  host: localhost
  port: '6379'
use_encoding_safe_process_job: false
follow_redirects: true
redirect_limit: 10
queue_system: resque
quiet: true
cache: 300
cache_type: crawl_based
timeout: 10
external_urls: []
seed_urls: []
first_page_redirect_internal: true
text_mime_types:
- text/*
- application/xhtml+xml
obey_robots: false
user_agent: cobweb/1.0.18 (ruby/1.9.3 nokogiri/1.6.0)
valid_mime_types:
- ! '*/*'
store_inbound_links: false
crawl_limit_by_page: false
parent: https://github.com/stewartmckee/cobweb/blob/master/
Exception
Encoding::UndefinedConversionError
Error
"\xC2" from ASCII-8BIT to UTF-8

The only workaround I've found to make this crawler work is to run it from inside Rails... which is a pity, since I planned to build a service - without Rails - integrating this crawler into my project.

Sidekiq doesn't work from inside Rails either...

On the other hand, this error does not occur (with Resque) when the encoding-safe flag (use_encoding_safe_process_job) is enabled, but then the process job is not executed.
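
Outside Rails nothing pulls in ActiveSupport's encoding helpers, so forcing the body into valid UTF-8 before it is queued may be workaround enough (sketch; String#scrub needs Ruby >= 2.1, and the raw string here stands in for the page body cobweb yields):

# Stand-in for a raw ASCII-8BIT body containing a stray \xC2 byte.
raw = "caf\xC3\xA9 \xC2 plain text".b
# Re-tag as UTF-8, then replace any invalid byte sequences instead of
# letting serialization raise Encoding::UndefinedConversionError.
utf8 = raw.force_encoding(Encoding::UTF_8).scrub("?")
puts utf8  #=> "café ? plain text"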

`require': cannot load such file -- resque (LoadError)

Even though I don't use Resque in my project (I use Sidekiq), I get the following when trying to start the Rails console after adding Cobweb to my Gemfile:

/Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/activesupport-4.0.5/lib/active_support/dependencies.rb:229:in `require': cannot load such file -- resque (LoadError)
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/activesupport-4.0.5/lib/active_support/dependencies.rb:229:in `block in require'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/activesupport-4.0.5/lib/active_support/dependencies.rb:214:in `load_dependency'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/activesupport-4.0.5/lib/active_support/dependencies.rb:229:in `require'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/cobweb-1.0.19/lib/cobweb.rb:3:in `<top (required)>'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/bundler-1.6.2/lib/bundler/runtime.rb:76:in `require'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/bundler-1.6.2/lib/bundler/runtime.rb:76:in `block (2 levels) in require'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/bundler-1.6.2/lib/bundler/runtime.rb:72:in `each'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/bundler-1.6.2/lib/bundler/runtime.rb:72:in `block in require'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/bundler-1.6.2/lib/bundler/runtime.rb:61:in `each'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/bundler-1.6.2/lib/bundler/runtime.rb:61:in `require'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/bundler-1.6.2/lib/bundler.rb:132:in `require'
  from /Users/NK/Programmering/au/config/application.rb:4:in `<top (required)>'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/railties-4.0.5/lib/rails/commands.rb:60:in `require'
  from /Users/NK/.rvm/gems/ruby-2.0.0-p353@au_rails4/gems/railties-4.0.5/lib/rails/commands.rb:60:in `<top (required)>'
  from script/rails:6:in `require'
  from script/rails:6:in `<main>'
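
Until that require is made conditional, the only way through appears to be making resque available even though Sidekiq does the real work (untested workaround):

# Gemfile workaround -- cobweb 1.0.19 requires resque unconditionally at
# load time, so the gem must be present even if it is never used.
gem 'resque'
gem 'cobweb', '1.0.19'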

Should it be possible to add "depth" to the data hash?

Hello,

As far as I can see, the generated hash for each page doesn't include "depth" information, that is, how many clicks away from the homepage each page is.
Do you think it would be possible to add this to the hash?
By the way, I really appreciate your gem. Good work, Stewart!

Thanks.
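
In the meantime, depth can be approximated in the caller by propagating it through the discovered links. A sketch, under the assumption that the yielded hash exposes extracted links under page[:links][:links] (the key layout may differ between versions):

require 'cobweb'

depths = { "http://example.com" => 0 }  # seed url is depth 0
CobwebCrawler.new(:cache => 600).crawl("http://example.com") do |page|
  depth = depths[page[:url].to_s] || 0
  links = (page[:links] && page[:links][:links]) || []  # assumed structure
  links.each { |link| depths[link.to_s] ||= depth + 1 } # first sighting wins
  puts "#{page[:url]} is ~#{depth} clicks from the start page"
end

This only approximates true click distance: each page is labelled with the depth at which it was first seen, which matches shortest-path depth only if the crawl proceeds roughly breadth-first.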

Code organization

Hi, first I want to say thank you for sharing this crawler and for the work you put in it.

Here is our experience with it, and some thoughts on improvements. I would be happy to know whether you agree and whether you would like to get this implemented (we can contribute, of course).

We have a repository of code that we use for doing lots of data processing with resque. We tried to use cobweb within that repository, and here are our issues:

  1. Name conflicts: classes are declared at the global level. Classes declared in cobweb should be namespaced in a module. Example: Cobweb::Stats
  2. Sinatra is loaded by default. We run our code on multiple machines with multiple processes. As I understand it, Sinatra's purpose is to provide a UI for stats. We don't need/want it loaded every time on all boxes, consuming memory and slowing down the boot time of our app, so it should be optional (example: require 'cobweb-web', or a separate gem).
  3. The files directive in the gemspec. Everything you put in the files directive can be loaded automatically. This again exposes naming conflicts. For example, we use Fozzie, which declares a Stats module; when you do require 'stats', you don't know which one will be loaded.
  4. Sidekiq vs resque could be an explicit programmer decision; I would avoid auto-detection.
  5. Logging should be configurable, and puts statements should not be used, e.g. Cobweb.logger = Logger.new(...), as in the conclusion below.

In conclusion, this is what I have in mind:

require 'cobweb-resque'
# OR
require 'cobweb-sidekick'
require 'cobweb-web' # optional
Cobweb.logger = Logger.new("crawler.log")

Standalone Crawler gives error for redis

When trying to use it as a standalone crawler without Redis, it raises an error for the Redis connection.

If you run this:

crawler = CobwebCrawler.new(:cache => 600)
statistics = crawler.crawl("http://www.pepsico.com")

You will get the error Redis::CannotConnectError: Error connecting to Redis on 127.0.0.1:6379 (ECONNREFUSED)

According to the documentation, though, it should run without requiring Redis in this case. Thanks
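
If a Redis server genuinely is required even for standalone crawls, pointing cobweb at an explicit instance via :redis_options (the key that appears in the crawl option dumps elsewhere in this tracker) would look like this, assuming that option is passed through to the Redis client and a redis-server is actually running at the given address:

require 'cobweb'

# Assumption: :redis_options is forwarded to the Redis connection.
crawler = CobwebCrawler.new(
  :cache => 600,
  :redis_options => { :host => "127.0.0.1", :port => 6379 }
)
statistics = crawler.crawl("http://www.pepsico.com")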

Improve connection handling

We seem to have issues with connections to Redis sometimes under load; need to add the ability to specify your own Redis connection, and to check the handling of dropped connections.

License missing from gemspec

RubyGems.org doesn't report a license for your gem. This is because it is not specified in the gemspec of your last release.

via e.g.

spec.license = 'MIT'
# or
spec.licenses = ['MIT', 'GPL-2']

Including a license in your gemspec is an easy way for rubygems.org and other tools to check how your gem is licensed. As you can imagine, scanning your repository for a LICENSE file or parsing the README, and then attempting to identify the license or licenses, is much more difficult and more error prone. So, even for projects that already specify a license, including a license in your gemspec is a good practice. See, for example, how rubygems.org uses the gemspec to display the rails gem license.

There is even a License Finder gem to help companies/individuals ensure all gems they use meet their licensing needs. This tool depends on license information being available in the gemspec. This is an important enough issue that even Bundler now generates gems with a default 'MIT' license.

I hope you'll consider specifying a license in your gemspec. If not, please just close the issue with a nice message. In either case, I'll follow up. Thanks for your time!

Appendix:

If you need help choosing a license (sorry, I haven't checked your readme or looked for a license file), GitHub has created a license picker tool. Code without a license specified defaults to 'All rights reserved', denying others all rights to use the code.
Here's a list of the license names I've found and their frequencies

p.s. In case you're wondering how I found you and why I made this issue, it's because I'm collecting stats on gems (I was originally looking for download data) and decided to collect license metadata, too, and make issues for gemspecs not specifying a license as a public service :). See the previous link or my blog post about this project for more information.

Cobweb gem causes Rails app to run 10x slower

Hi,

I have a very bare-bones Rails App, and when I add

gem 'cobweb'

to the Gemfile, run 'bundle', and restart, all web requests take around 15s to execute instead of 1s. The weird thing is that I haven't even started calling Cobweb code at all.

There are no errors, and MiniProfiler tells me that no time is being spent in SQL. It's just much slower.

Any idea on where to start looking? Is the Gem initialising somehow, even if I'm not explicitly calling it?

Using Webrick and PostGres. Ruby 2.2.3 and Rails 4.2.4.

Thanks!

Simon
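
One way to narrow this down is to stop Bundler from requiring the gem at boot and require it manually where it is used; if requests speed back up, the cost is in cobweb's load-time requires (Sinatra, queue adapters, etc.). This is standard Bundler behaviour, though whether it cures this particular slowdown is untested:

# Gemfile: defer loading cobweb until it is explicitly required
gem 'cobweb', require: false

# ...then, in the code that actually crawls:
require 'cobweb'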
