amitupreti / email-crawler-lead-generator
This email crawler will visit all pages of a provided website and save the emails it finds to a CSV file.

License: MIT

Language: Python (100%)

Topics: python3, email-parsing, webscraping, requests, lead-generation

Email Crawler and Lead Generator in Python

This crawler takes a web address as input and extracts all emails from that website by sequentially visiting every URL in that domain.
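At its core, pulling emails out of a fetched page is a pattern-matching step. Below is a minimal, illustrative sketch of that step in Python; the regex and function name are assumptions, not the project's exact code.

import re

# Illustrative email pattern -- the project's actual regex may differ.
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def extract_emails(html):
    """Return the set of email-like strings found in a page's HTML."""
    return set(EMAIL_RE.findall(html))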


Report Bug · Request Feature

Table of Contents

  • About The Project
  • Built With
  • Getting Started
  • Usage
  • Roadmap
  • Contributing
  • License
  • Contact

About The Project

Crawler Demo

The email crawler makes sure that it only visits URLs in the same domain and does not save duplicate emails. It also keeps a log of the URLs it has visited and dumps them at the end of the crawl.
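As a rough illustration of that policy, here is a hedged sketch of a same-domain, duplicate-free crawl loop, assuming requests for fetching; all names here are illustrative rather than the project's actual code.

import csv
import re
from urllib.parse import urljoin, urlparse

import requests

EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
LINK_RE = re.compile(r'href="(.*?)"')

def crawl(start_url):
    """Visit every same-domain URL once, collecting unique emails."""
    domain = urlparse(start_url).netloc
    to_visit = [start_url]   # URLs still to crawl
    visited = set()          # log of URLs already crawled
    emails = set()           # deduplicated emails
    while to_visit:
        url = to_visit.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue         # skip pages that fail to load
        emails.update(EMAIL_RE.findall(html))
        for link in LINK_RE.findall(html):
            link = urljoin(url, link)
            # Only follow links that stay on the same domain.
            if urlparse(link).netloc == domain and link not in visited:
                to_visit.append(link)
    # Dump the results: a CSV of emails and a log of visited URLs.
    with open("emails.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email"])
        for email in sorted(emails):
            writer.writerow([email])
    with open("urls_visited.txt", "w") as f:
        f.write("\n".join(sorted(visited)))
    return emails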

Built With

  • Python 3
  • Requests

Getting Started

To get a local copy up and running, follow these simple steps.

Installation

  1. Clone the Email-Crawler-Lead-Generator repository:

git clone https://github.com/nOOBIE-nOOBIE/Email-Crawler-Lead-Generator.git

  2. Install the dependencies:

pip install -r requirements.txt

If you have both Python 2 and Python 3 installed, you may need to use:

pip3 install -r requirements.txt

Usage

Simply pass the URL as an argument:

python email_crawler.py https://medium.com/

If you have both Python 2 and Python 3 installed, you may need to use:

python3 email_crawler.py https://medium.com/
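For reference, the script's entry point presumably does little more than read the URL from the command line; a minimal sketch, with illustrative names:

import sys

if __name__ == "__main__":
    # The first command-line argument is the start URL, e.g. https://medium.com/
    start_url = sys.argv[1]
    crawl(start_url)  # crawl() as sketched in the About section above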

Output

➜  email_crawler python3 email_crawler.py https://medium.com/
WELCOME TO EMAIL CRAWLER
CRAWL : https://medium.com/
1 Email found [email protected]
2 Email found [email protected]
CRAWL : https://medium.com/creators
3 Email found [email protected]
4 Email found [email protected]
5 Email found [email protected]
6 Email found [email protected]
7 Email found [email protected]
CRAWL : https://medium.com/@mshannabrooks
CRAWL : https://medium.com/m/signin?operation=register&redirect=https%3A%2F%2Fmedium.com%2F%40mshannabrooks&source=listing-----5f0204823a1e---------------------bookmark_sidebar-
CRAWL : https://medium.com/m/signin?operation=register&redirect=https%3A%2F%2Fmedium.com%2F%40mshannabrooks&source=-----e5d9a7ef4033----6------------------

Roadmap

See the open issues for a list of proposed features (and known issues).

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

Contact

Amit Upreti - @amitupreti

Project Link: https://github.com/nOOBIE-nOOBIE/Email-Crawler-Lead-Generator


email-crawler-lead-generator's Issues

Adding URL Found at

Hello, is it possible to add another column to the CSV that shows which URL each email was found on?
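One hedged way to implement this: track, for each new email, the URL of the page where it was first seen, and write both columns when dumping the CSV. A sketch with illustrative names:

import csv

def save_emails_with_source(found, path="emails.csv"):
    """Write one row per email, with the URL it was first found on.

    `found` is assumed to be a dict mapping email -> source URL,
    populated during the crawl when each email is first seen.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["email", "found_at_url"])
        for email, source_url in sorted(found.items()):
            writer.writerow([email, source_url])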

Feature bulk crawl

Hello,

I would like to contribute and add a bulk crawl function.

It would use multiprocessing to reduce the total crawling time. The user can add the desired domains to a list in the file and execute it from the command line.

If needed, I will also support a text file where the user lists all the domains. A sketch of the idea follows.
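A minimal sketch of the proposed fan-out, assuming a single-domain crawl(url) entry point like the one sketched earlier on this page; names are illustrative:

from multiprocessing import Pool

# Hypothetical import of the existing single-domain crawler.
from email_crawler import crawl

DOMAINS = [
    "https://example.com/",
    "https://example.org/",
]

def bulk_crawl(domains, workers=4):
    # Each worker process crawls one domain at a time.
    with Pool(processes=workers) as pool:
        pool.map(crawl, domains)

if __name__ == "__main__":
    bulk_crawl(DOMAINS)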

@amitupreti: Please provide feedback

No Requirements.txt

C:\Email-Crawler-Lead-Generator>pip install -r requirements.txt

ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
WARNING: You are using pip version 19.1.1, however version 19.3.1 is available.
You should consider upgrading via the 'python -m pip install --upgrade pip' command.
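For what it's worth, judging from the traceback in the next issue, the only third-party dependency appears to be Requests, so a minimal requirements.txt would presumably contain just:

requests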

It gives a "RecursionError: maximum recursion depth exceeded" error

For sites with many links, it gives a "RecursionError: maximum recursion depth exceeded" error. I think it uses tail recursion, which is why it gets stuck at Python's default recursion limit. The logic needs to be reworked so it does not hit this limit. Can you help?

Traceback (most recent call last):
  File "email_crawler.py", line 146, in <module>
    crawl.crawl()
  File "email_crawler.py", line 40, in crawl
    self.crawl()
  File "email_crawler.py", line 40, in crawl
    self.crawl()
  File "email_crawler.py", line 40, in crawl
    self.crawl()
  [Previous line repeated 958 more times]
  File "email_crawler.py", line 36, in crawl
    self.parse_url(url)
  File "email_crawler.py", line 62, in parse_url
    response = requests.get(current_url, headers=self.headers)
  File "\lib\site-packages\requests\api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "\lib\site-packages\requests\api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "\lib\site-packages\requests\sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "\lib\site-packages\requests\sessions.py", line 665, in send
    history = [resp for resp in gen]
  File "\lib\site-packages\requests\sessions.py", line 665, in <listcomp>
    history = [resp for resp in gen]
  File "\lib\site-packages\requests\sessions.py", line 245, in resolve_redirects
    **adapter_kwargs
  File "\lib\site-packages\requests\sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "\lib\site-packages\requests\adapters.py", line 533, in send
    return self.build_response(request, resp)
  File "\lib\site-packages\requests\adapters.py", line 265, in build_response
    response = Response()
  File "\lib\site-packages\requests\models.py", line 608, in __init__
    self.headers = CaseInsensitiveDict()
  File "\lib\site-packages\requests\structures.py", line 46, in __init__
    self.update(data, **kwargs)
  File "\lib\_collections_abc.py", line 839, in update
    if isinstance(other, Mapping):
  File "\lib\abc.py", line 182, in __instancecheck__
    if subclass in cls._abc_cache:
RecursionError: maximum recursion depth exceeded
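
One hedged sketch of a fix: replace the self-recursive crawl() with an explicit work queue, so the call stack stays flat no matter how many pages the site has. The method names mirror the traceback above, but the bodies are illustrative:

from collections import deque

class EmailCrawler:
    def __init__(self, start_url):
        self.to_visit = deque([start_url])  # explicit queue replaces the call stack
        self.visited = set()

    def crawl(self):
        # Iterative loop: stack depth stays constant no matter how many
        # pages the site has, so the recursion limit is never reached.
        while self.to_visit:
            url = self.to_visit.popleft()
            if url in self.visited:
                continue
            self.visited.add(url)
            self.parse_url(url)

    def parse_url(self, url):
        # Fetch the page, extract emails, and append newly discovered
        # same-domain links to self.to_visit (see the crawl sketch above).
        ...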
