It lets you extract a list of working proxies so you can avoid being blocked while web scraping, or simply when you need to make many requests to the same website.
Install the dependencies:
$ pip install -r requirements.txt
Go to the src directory and run the scrap_now.py file:
$ cd src
$ python scrap_now.py
Done! You now have a list of valid proxies in your terminal; use the list however you wish.
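As a minimal sketch of how one entry from that list could be used, the snippet below builds a requests-style proxies mapping from an "IP:PORT" string. The address shown is a placeholder, not real output from scrap_now.py:

```python
def build_proxies(proxy: str) -> dict:
    """Turn an "IP:PORT" string into a proxies mapping for the requests library."""
    return {"http": f"http://{proxy}", "https": f"http://{proxy}"}

# Placeholder address; swap in one of the proxies printed by scrap_now.py.
proxies = build_proxies("203.0.113.10:8080")
print(proxies["http"])

# With a live proxy it could be used like this:
#   import requests
#   r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=5)
```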
Go to the src directory and run the start_proxy_pool.py file:
$ cd src
$ python start_proxy_pool.py
Great! Now you are making requests without the risk of being blocked.
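The idea behind a proxy pool can be sketched as rotating through the scraped proxies so that consecutive requests leave from different addresses. This is an illustrative sketch, not the actual start_proxy_pool.py implementation, and the addresses are placeholders:

```python
from itertools import cycle

# Placeholder proxies; in practice these come from the scraped list.
proxy_list = ["203.0.113.10:8080", "198.51.100.7:3128"]
rotation = cycle(proxy_list)

def next_proxies() -> dict:
    """Return a proxies mapping for the next proxy in the rotation."""
    proxy = next(rotation)
    return {"http": f"http://{proxy}", "https": f"http://{proxy}"}

# Each call hands out the next proxy in the cycle:
print(next_proxies()["http"])  # http://203.0.113.10:8080
print(next_proxies()["http"])  # http://198.51.100.7:3128
```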