
crwlrsoft / crawler

Library for Rapid (Web) Crawler and Scraper Development

Home Page: https://www.crwlr.software/packages/crawler

License: MIT License

Languages: PHP 98.77%, HTML 0.37%, Hack 0.86%
Topics: crawling, php, scraper, scraping, scraping-websites, web-crawler, web-crawling, web-scraping, hacktoberfest, crawler, web-scraper

crawler's Introduction


Library for Rapid (Web) Crawler and Scraper Development

This library provides a sort of framework along with many ready-to-use, so-called steps that you can use as building blocks for your own crawlers and scrapers.

For an overview of everything it helps you with, have a look at the documentation.

Documentation

You can find the documentation at crwlr.software.

Contributing

If you're considering contributing to this package, please read the contribution guide (CONTRIBUTING.md).

crawler's People

Contributors

github-actions[bot], otsch, szepeviktor


crawler's Issues

Limit Pagination Crawler

Hello and thanks for the great library.

My question: is it possible to limit the number of pages fetched with the pagination method?

For example, if a listing of articles has 1000 pages, I only need the first 100.

Do you have example code?
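
A minimal sketch: the two-argument form of paginate() used in the "Fatal error" issue further down this page suggests the second argument caps the number of pages, so assuming that is the page limit:

use Crwlr\Crawler\Steps\Loading\Http;

// Assumption: the second paginate() argument is the maximum number of pages.
$crawler->input('https://www.example.com/articles')
    ->addStep(
        Http::get()->paginate('.pagination a', 100) // stop after 100 pages
    );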

Question about submitting form data

Hello!
Nice tool you've made, but I have a question I'd like to clarify. In the first question here you mentioned something about performing a login, but I can't find an example in the documentation of how to do that: how do I fill input fields and submit a specific button? Can you give some examples, or is that still a work in progress?
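
For reference, a hedged sketch of submitting form data, assuming Http::post() accepts request headers and a raw body; the login URL and form field names are made up for illustration:

use Crwlr\Crawler\Steps\Loading\Http;

// Assumption: Http::post() takes request headers and a body string.
// The URL and form field names below are hypothetical.
$crawler->input('https://www.example.com/login')
    ->addStep(
        Http::post(
            ['Content-Type' => 'application/x-www-form-urlencoded'],
            http_build_query(['username' => 'me', 'password' => 'secret'])
        )
    );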

Use proxy

Thanks for the great library. I didn't find out how to use a proxy; is it possible?
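
A minimal sketch, assuming your installed version of the HttpLoader offers a useProxy() method (check the loader documentation for your version); this would go inside your crawler's loader() method, as in the "Microseconds" issue further down:

use Crwlr\Crawler\Loader\Http\HttpLoader;

// Assumption: HttpLoader::useProxy() exists in your installed version.
$loader = new HttpLoader($userAgent, logger: $logger);
$loader->useProxy('http://127.0.0.1:8080');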

Extracting <script> tags

Hi, what would be the best way to extract all <script> tags with their src attribute?
I've been trying the code below; it seems to work, but I'm getting empty strings as src and deprecation warnings:
Deprecated: trim(): Passing null to parameter #1 ($string) of type string is deprecated in /Users/XXXX/Projects/php-scraper/vendor/crwlr/crawler/src/Steps/Html/DomQuery.php on line 255

use Crwlr\Crawler\Exceptions\UnknownLoaderKeyException;
use Crwlr\Crawler\HttpCrawler;
use Crwlr\Crawler\Steps\Dom;
use Crwlr\Crawler\Steps\Html;
use Crwlr\Crawler\Steps\Loading\Http;

try {
    $domain = 'https://XXXX.com/';

    $crawler = HttpCrawler::make()
        ->withBotUserAgent('Agent')
        ->input($domain)
        ->addStep(Http::get())
        ->addStep(
            Html::root()
                ->extract([
                    'scripts' => Dom::cssSelector('script')->attribute('src'),
                ])
        );

    // Run inside the try block so $crawler is always defined here.
    foreach ($crawler->run() as $result) {
        var_dump($result->toArray());
    }
} catch (UnknownLoaderKeyException $e) {
    // Don't swallow the exception silently.
    echo $e->getMessage();
}

Improve variable names

Currently there are ~300 lines with names like key, value, and data.
Please consider using more informative variable names.
source

// For example, instead of:
protected function setDomain(string $value, bool $viaAttribute = false): void {}
// a more descriptive name would be:
protected function setDomain(string $domainName, bool $viaAttribute = false): void {}

Paginating links with JavaScript href

Hi Christian,

Thanks for making this useful library available to mere mortals like me.

I have read the docs and found the paginate() method, but it does not work in my situation. The links to the next page are not ordinary links with an ordinary url in the href, but some JavaScript that triggers a page refresh with the new data. The website is not quite a SPA, but this particular page behaves somewhat like one. Is there any way to make this work?
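
One avenue worth trying, assuming your version of the HttpLoader supports useHeadlessBrowser(), so pages are rendered by a headless Chrome instance and their JavaScript actually runs; whether the JavaScript-triggered pagination then works still depends on the site:

use Crwlr\Crawler\Loader\Http\HttpLoader;

// Assumption: useHeadlessBrowser() is available in your version and loads
// pages via headless Chrome, executing the page's JavaScript.
$loader = new HttpLoader($userAgent, logger: $logger);
$loader->useHeadlessBrowser();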

How to process extracted values?

Great library, love the simplicity. However, I have a question about processing the extracted data that isn't obvious from reading the docs.

Example:

$crawler = new Crawler();
$crawler
    ->setStore(new JsonFileStore(base_path(), "output.json"))
    ->input('http://books.toscrape.com/')
    ->addStep(Http::get())
    ->addStep(Html::getLinks('.product_pod .image_container a'))
    ->addStep(Http::get())
    ->addStep(
        Html::first("article")->extract([
            'title' => 'h1',
            'price' => Dom::cssSelector('.price_color')->first(),
            'stock' => Dom::cssSelector('.availability')->first(),
        ])->addToResult()
    );

foreach ($crawler->run() as $result) {
    dump($result->toArray());
    
    // [
    //      "title" => "Rip it Up and Start Again"
    //      "price" => "ยฃ35.02"
    //      "stock" => "In stock (19 available)" <--- How would i go about extracting only the number as part of the extraction?
    // ]
}

Where and how is the appropriate place to "parse" the value of the "stock" field to extract the number itself? The value cannot be grabbed by a selector alone, as the number is not wrapped in its own tag.

It might "belong" in the custom Store, where one would "parse" the extracted data further, but for things like this it would be convenient to be able to do something like this (Str in the example is the Laravel Stringable class):

Html::first("article")->extract([
    'title' => 'h1',
    'price' => Dom::cssSelector('.price_color')->first(),
    'stock' => Dom::cssSelector('.availability')->first()->value(
        callback: fn($value) => Str::of($value)->after("(")->before(" ")->toString()
    ),
])->addToResult()

Is there currently a way to do something similar?
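
Until something like that exists, one safe place to post-process is after run(), in plain PHP. A sketch that pulls the number out of the stock string with a regular expression:

foreach ($crawler->run() as $result) {
    $data = $result->toArray();

    // Turn "In stock (19 available)" into the integer 19.
    if (preg_match('/\((\d+) available\)/', $data['stock'], $matches)) {
        $data['stock'] = (int) $matches[1];
    }

    dump($data);
}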

Question

Hi @otsch

Does this package support Chrome-like session storage, so crawling can be performed as a logged-in user? I'm thinking about giving this package a try, but I'm not sure it fits all the requirements for the crawler I need. At least I don't see any references to WebDriver or PhantomJS in the docs. If this is currently unsupported, do you plan to support it in the future?

sub steps

Is there a way to create sub-steps for outputs?

I've crawled a list of book series and got this output array:

[
   {
      "title":"a Book series",
      "author":"book series author",
      "volumes":[
         "..list of urls.."
      ]
   },
   {
      "title":"Just another series",
      "author":"best author",
      "volumes":[
         "..list of urls.."
      ]
   }
]

Now I want to make sub-requests to those URLs to get an output array like this:

[
   {
      "title":"a Book series",
      "author":"book series author",
      "volumes":[
         {
            "title":"A book series - part 1",
            "volumeNumber":1,
            "price":2499
         },
         {
            "title":"A book series - part 2",
            "volumeNumber":2,
            "price":2599
         }
      ]
   },
   {
      "title":"Just another series",
      "author":"best author",
      "volumes":[
         {
            "title":"Just another series - the good ones",
            "volumeNumber":1,
            "price":1999
         },
         {
            "title":"Just another series - the bad ones",
            "volumeNumber":2,
            "price":2699
         }
      ]
   }
]

The most practical solution I've found is to use a transformer and invoke a second crawler, but that doesn't seem very practical to me.
Is there perhaps already a better way to accomplish this? (A sketch of the two-crawler workaround follows below.)
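
For reference, a rough sketch of the two-crawler workaround. Here buildVolumeCrawler() is a hypothetical factory for a second crawler that extracts title, volumeNumber, and price from a volume page, and the sketch assumes a fresh crawler instance per series:

foreach ($seriesCrawler->run() as $result) {
    $series = $result->toArray();

    // Hypothetical factory building the second crawler for volume pages.
    $volumeCrawler = buildVolumeCrawler();

    foreach ($series['volumes'] as $volumeUrl) {
        $volumeCrawler->input($volumeUrl);
    }

    // Replace the list of URLs with the data scraped from each page.
    $volumes = [];
    foreach ($volumeCrawler->run() as $volumeResult) {
        $volumes[] = $volumeResult->toArray();
    }

    $series['volumes'] = $volumes;
}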

"Microseconds" not found

Hello @otsch,
I receive this error:

PHP Fatal error: Uncaught Error: Class "Microseconds" not found in .../crawler.php:65

for this code, taken from the documentation:

// Fix: the fatal error means this import is missing at the top of the file
// (the Microseconds class is provided by the crwlr/utils package).
use Crwlr\Utils\Microseconds;

public function loader(UserAgentInterface $userAgent, LoggerInterface $logger): LoaderInterface
{
    $cache = new FileCache(__DIR__ . '/cachedir');
    $cache->ttl(new DateInterval('P2D'));
    $cache->useCompression();

    $loader = new HttpLoader($userAgent, logger: $logger);

    $loader->throttle()
        ->waitBetween(
            Microseconds::fromSeconds(1.0),
            Microseconds::fromSeconds(2.0)
        );

    $loader->setCache($cache);

    return $loader;
}

Documentation request

I want to add Response Data to the Result as documented here.

use Crwlr\Crawler\Steps\Loading\Http;

$crawler
    ->input('https://www.example.com')
    ->addStep(
        Http::get()
            ->addToResult(['url', 'status', 'headers', 'body'])
    );

From the documentation I cannot figure out how to add this step to my working code, which looks like this:

$crawler->input('https://www.example.com/sitemap.xml')
    ->addStep(
        Http::crawl()
            ->inputIsSitemap()
            ->maxOutputs(5)
    )
    ->addStep(
        Crawler::group()
            ->addStep(
              Html::root()
                  ->extract([
                      'title' => 'h1',
                      'date' => '#date',
                  ])
            )
            ->addToResult(['page'])
            ->addStep(
              Html::metaData()
                  ->only(['keywords', 'publisher'])
            )
            ->addToResult()
    );

Is there a way to add an Http::get() step to this approach, or is there another solution?
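
One possible workaround, composed only of steps that appear elsewhere on this page: load the sitemap explicitly, extract its URLs with the Sitemap step, and then use Http::get(), which supports addToResult(); this assumes maxOutputs() also works on the Sitemap step:

$crawler->input('https://www.example.com/sitemap.xml')
    ->addStep(Http::get())
    ->addStep(Sitemap::getUrlsFromSitemap()->maxOutputs(5)) // assumed to work here
    ->addStep(
        Http::get()->addToResult(['url', 'status', 'headers', 'body'])
    )
    ->addStep(
        Crawler::group()
            // ...the existing group with Html::root() and Html::metaData()...
    );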

PHP 8.0 support

Hello

Cool project.

Would it be possible to lower the minimum PHP version to PHP 8?

Process URLs from sitemap in chunks

Is there a way to process the URLs from a sitemap in chunks of 500 URLs? With a large sitemap and a lot of HTML to extract, the script runs out of memory.

I was expecting that runAndTraverse() would store the results after fetching each URL, but the script writes all the results only after fetching all URLs. (A chunking workaround is sketched after the code below.)

$crawler->setStore(new MyStore());

$crawler->input('https://www.example.com/sitemap.xml')
    ->addStep(Http::get())
    ->addStep(Sitemap::getUrlsFromSitemap())
    ->addStep(Http::get())
    ->addStep(
        [...]
    );

$crawler->runAndTraverse();
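
A possible workaround: gather the sitemap URLs with a first, lightweight crawler, then run the extraction crawler once per chunk of 500. Here buildExtractionCrawler() is a hypothetical factory for a crawler with the extraction steps, and the 'url' result key is an assumption:

$urlCrawler->input('https://www.example.com/sitemap.xml')
    ->addStep(Http::get())
    ->addStep(Sitemap::getUrlsFromSitemap());

$urls = [];
foreach ($urlCrawler->run() as $result) {
    $urls[] = $result->toArray()['url']; // assumed result key
}

// Process 500 URLs at a time to keep memory usage bounded.
foreach (array_chunk($urls, 500) as $chunk) {
    $extractionCrawler = buildExtractionCrawler(); // hypothetical factory
    $extractionCrawler->setStore(new MyStore());

    foreach ($chunk as $url) {
        $extractionCrawler->input($url);
    }

    $extractionCrawler->runAndTraverse();
}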

Fatal error: is not a valid URL

Hello again. I'm trying to get articles from a website, but I receive this error:

PHP Fatal error:  Uncaught Crwlr\Url\Exceptions\InvalidUrlException: 2023-06-24T19:29:00+02:00 is not a valid URL. in /composer/vendor/crwlr/url/src/Url.php:771
Stack trace:
#0 /composer/vendor/crwlr/url/src/Url.php(80): Crwlr\Url\Url->validate()
#1 //composer/vendor/crwlr/url/src/Url.php(93): Crwlr\Url\Url->__construct()
#2 /composer/vendor/crwlr/url/src/Url.php(103): Crwlr\Url\Url::parse()
#3 /composer/vendor/crwlr/crawler/src/Steps/Step.php(180): Crwlr\Url\Url::parsePsr7()
#4 /composer/vendor/crwlr/crawler/src/Steps/Loading/Http.php(237): Crwlr\Crawler\Steps\Step->validateAndSanitizeToUriInterface()
#5 /composer/vendor/crwlr/crawler/src/Steps/Step.php(45): Crwlr\Crawler\Steps\Loading\Http->validateAndSanitizeInput()
#6 /composer/vendor/crwlr/crawler/src/Crawler.php(230): Crwlr\Crawler\Steps\Step->invokeStep()
#7 /composer/vendor/crwlr/crawler/src/Crawler.php(240): Crwlr\Crawler\Crawler->invokeStepsRecursive()
#8 /composer/vendor/crwlr/crawler/src/Crawler.php(240): Crwlr\Crawler\Crawler->invokeStepsRecursive()
#9 /composer/vendor/crwlr/crawler/src/Crawler.php(240): Crwlr\Crawler\Crawler->invokeStepsRecursive()
#10 /composer/vendor/crwlr/crawler/src/Crawler.php(240): Crwlr\Crawler\Crawler->invokeStepsRecursive()
#11 /composer/vendor/crwlr/crawler/src/Crawler.php(277): Crwlr\Crawler\Crawler->invokeStepsRecursive()
#12 /composer/vendor/crwlr/crawler/src/Crawler.php(263): Crwlr\Crawler\Crawler->storeAndReturnDefinedResults()
#13 /composer/vendor/crwlr/crawler/src/Crawler.php(187): Crwlr\Crawler\Crawler->storeAndReturnResults()
#14 /script/test/qdg.php(696): Crwlr\Crawler\Crawler->run()
#15 {main}
  thrown in /composer/vendor/crwlr/url/src/Url.php on line 771

I know that 2023-06-24T19:29:00+02:00 is not a valid URL, but I don't know where to catch this.

Any idea how to check whether a string is a valid URL before it gets loaded? (A validation sketch follows after my code below.)

This is my code:

$crawler->input('https://mywebsite.com')
    ->addStep(
        Http::get()->paginate('[class="pagination"] a', 50)
    )
    ->addStep(
        Html::each('[class="thematic__row"] article header a')->extract([
            'url' => Dom::cssSelector('a')->attribute('href'),
        ])
    )
    ->addStep(
        Http::get()->useInputKeyAsUrl('url')
    )
    ->addStep(
        Crawler::group()
            ->addStep(
                Html::root()->extract([
                    'title' => 'h1',
                    'pubdate' => Dom::cssSelector('[pubdate="pubdate"]')->text(),
                    'datetime' => Dom::cssSelector('[pubdate="pubdate"][itemprop="datePublished"]')->attribute('datetime'),
                    'summary' => Dom::cssSelector('.summa')->text(),
                    'content' => Dom::cssSelector('[class="the-article__content"] > div[class^="formatted-text"] > p')->text(),
                    'people' => Dom::cssSelector('a[href^="/persone/"]')->text(),
                ])
            )
            ->addStep(
                Html::metaData()->only(['og:url', 'og:image', 'article:section'])
            )
            ->addToResult(['title', 'pubdate', 'datetime', 'summary', 'content', 'people', 'article:section', 'og:url', 'og:image'])
    );
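
As a sketch: PHP's built-in filter_var() can check URL validity before a string reaches the next Http step. Where exactly to hook this in depends on your setup (a custom step or your own pre-processing), so the helper below is purely illustrative:

// Returns true only for strings PHP considers valid absolute URLs.
function isValidUrl(string $candidate): bool
{
    return filter_var($candidate, FILTER_VALIDATE_URL) !== false;
}

var_dump(isValidUrl('https://mywebsite.com/article')); // true
var_dump(isValidUrl('2023-06-24T19:29:00+02:00'));     // false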
