- A web crawler to fetch the data: A web crawler (also called a spider or search engine bot) downloads and indexes content from across the Internet so that information about each page can be retrieved when it is needed. Since websites are hosted on devices running web services, the crawler gathers information by connecting to those services; the details collected for each service are stored in an object called a banner. The crawler builds an interactive sitemap of the target site by performing a recursive crawl and repeatedly sorting the results against the input parameters. The final report is cross-checked against those parameters and presented with services ordered from active to inert.
- Username search
- Phone number/email validation
- Reverse image search
- Sentiment analysis of available data
- A frontend using which user can access all the implemented features
- Extraction of location and other info from image data
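The recursive crawl described in the first feature can be sketched as a breadth-first traversal that collects same-site links into a sitemap. This is a minimal stdlib-only illustration, not the project's actual crawler; the `fetch` callable (e.g. a wrapper around `urllib.request`) is injected so the logic can be exercised without network access:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=50):
    """Breadth-first recursive crawl returning a sitemap dict:
    url -> list of same-site links found on that page.
    `fetch` is any callable mapping a URL to an HTML string."""
    site = urlparse(start_url).netloc
    sitemap, queue, seen = {}, deque([start_url]), {start_url}
    while queue and len(sitemap) < max_pages:
        url = queue.popleft()
        parser = LinkExtractor()
        parser.feed(fetch(url))
        # Resolve relative links and keep only same-site targets
        links = [urljoin(url, l) for l in parser.links]
        links = [l for l in links if urlparse(l).netloc == site]
        sitemap[url] = links
        for link in links:
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return sitemap
```

A real crawler would add request throttling, error handling, and robots.txt checks on top of this traversal skeleton.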
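The phone number/email validation feature can be sketched with simple regular expressions. The helper names and patterns below are illustrative only, deliberately much looser than full RFC 5322 (email) or E.164 (phone) validation:

```python
import re

# Illustrative patterns, not full RFC 5322 / E.164 validators
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")
PHONE_RE = re.compile(r"^\+?\d{7,15}$")

def is_valid_email(candidate: str) -> bool:
    return bool(EMAIL_RE.match(candidate))

def is_valid_phone(candidate: str) -> bool:
    # Strip common formatting characters before matching digits
    digits = re.sub(r"[\s()\-.]", "", candidate)
    return bool(PHONE_RE.match(digits))
```

In practice a library such as `phonenumbers` gives far more reliable phone parsing than a regex.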
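The sentiment analysis feature can be illustrated with a minimal lexicon-based scorer. This is a stand-in for a real model (e.g. VADER from `nltk`); the word lists here are assumptions for the sketch, not the project's actual lexicon:

```python
# Tiny illustrative sentiment lexicons
POSITIVE = {"good", "great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "terrible", "awful", "sad", "hate"}

def sentiment(text: str) -> str:
    """Label text by counting positive vs. negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```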
pip install -r requirements.txt