This repository contains a LinkedIn jobs scraper written in Python that extracts job data and saves it in JSON files.
- Clone the repository to your local machine:

      git clone https://github.com/Ruy-Araujo/Linkedin-Jobs-Scraper

- Install the dependencies:

      cd Linkedin-Jobs-Scraper
      pip install -r requirements.txt
- Configure the exemple.env file:
  - Fill in the LINKEDIN_COOKIES and CSRF_TOKEN parameters with values taken from the platform (see the steps for generating cookies and csrf-token below).
  - The KEYWORDS field is a string with the keywords used to filter job listings.
  - The LOCATION field is a string with the location in which job listings will be searched.
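The fields above follow the usual KEY=VALUE .env convention. As a minimal sketch of how such a file can be loaded (the variable names come from this README; the parsing logic is illustrative only, and the project itself may use a library such as python-dotenv instead):

```python
import os

def load_env(path="exemple.env"):
    """Minimal .env loader: read KEY=VALUE lines into os.environ.

    Illustrative only -- it skips blank lines and comments, and does not
    handle quoting or multi-line values.
    """
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Hypothetical exemple.env contents:
#   LINKEDIN_COOKIES=li_at=...; JSESSIONID=...
#   CSRF_TOKEN=ajax:1234567890
#   KEYWORDS=python developer
#   LOCATION=Brazil
```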
- Run the main.py script:

      python main.py
The scraper will extract job data from LinkedIn Jobs and save it in a JSON file in the project directory.
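The JSON output step can be sketched as follows. The function name, filename pattern, and field names here are assumptions for illustration, not the project's actual schema:

```python
import json
from datetime import datetime

def save_jobs(jobs, directory="."):
    """Write a list of scraped job dicts to a timestamped JSON file
    in the given directory and return the filename.

    Illustrative sketch -- the real scraper's output format may differ.
    """
    filename = f"{directory}/jobs_{datetime.now():%Y%m%d_%H%M%S}.json"
    with open(filename, "w", encoding="utf-8") as fh:
        json.dump(jobs, fh, ensure_ascii=False, indent=2)
    return filename

# Example usage with a hypothetical scraped record:
jobs = [{"title": "Data Engineer", "company": "Acme", "location": "Remote"}]
save_jobs(jobs)
```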
To generate the cookies and csrf-token:

- Access the LinkedIn Jobs website.
- Open the browser developer tools (F12) and go to the Network tab.
- In the Network tab, press CTRL+F and search for "csrf-token".
- Select any matching request; the "cookie" and "csrf-token" fields appear in its request headers.
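Those two values are what the scraper sends back to LinkedIn on each request. A minimal sketch of assembling the headers (assuming LINKEDIN_COOKIES and CSRF_TOKEN were loaded from exemple.env; the user-agent value is an illustrative placeholder):

```python
import os

def build_headers():
    """Build the request headers from the configured credentials.

    Assumes LINKEDIN_COOKIES and CSRF_TOKEN are present in the
    environment; the header names mirror what the browser's Network
    tab shows for LinkedIn requests.
    """
    return {
        "cookie": os.environ["LINKEDIN_COOKIES"],
        "csrf-token": os.environ["CSRF_TOKEN"],
        # A browser-like user agent; placeholder value for illustration.
        "user-agent": "Mozilla/5.0",
    }
```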
The scraper uses the Scrapy framework to parse the HTML of the LinkedIn jobs page and extract information such as job title, company name, location, job description, and date of publication.
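In the real project that extraction is done with Scrapy selectors; as a rough stdlib-only illustration of the kind of per-job-card extraction involved (the markup and class names below are hypothetical, not LinkedIn's actual HTML):

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified job-card markup for illustration only.
SAMPLE = """
<li class="job-card">
  <h3 class="title">Backend Developer</h3>
  <span class="company">Acme Corp</span>
  <span class="location">Sao Paulo, Brazil</span>
  <time datetime="2023-01-15">2 weeks ago</time>
</li>
"""

def parse_job_card(markup):
    """Extract the fields the scraper collects from one job card.

    Uses ElementTree on well-formed sample markup; real HTML would
    need Scrapy's selectors or an HTML parser.
    """
    root = ET.fromstring(markup)
    return {
        "title": root.find(".//h3[@class='title']").text,
        "company": root.find(".//span[@class='company']").text,
        "location": root.find(".//span[@class='location']").text,
        "published": root.find(".//time").get("datetime"),
    }
```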
The raw data is available here
If you want to contribute to this project, feel free to open an issue or send a pull request.