This personal project serves as a Data Engineering technical test for Servier.
Here is a high-level overview of the steps followed by the 'servier_pipeline' Airflow DAG:
- Ingestion and Preliminary Preprocessing: Data is ingested from MinIO (a local S3-compatible store), cleaned in .csv format, and then saved back to MinIO (see the sketch after this list).
- Graph Calculation: The cleaned datasets are merged, producing the drug_graph_final.json file, which can be found in the MinIO bucket (impor/minio/airflow).
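For illustration only, here is a minimal sketch of what the ingestion/preprocessing step could look like, assuming boto3 and pandas talking to the local MinIO over its S3 API; the endpoint, credentials, bucket, file key, and cleaning rules are placeholders, not the project's actual values:

```python
# Hypothetical sketch: bucket, key, and cleaning rules are assumptions.
import io

import boto3
import pandas as pd

# MinIO speaks the S3 API, so boto3 can target it via endpoint_url.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:9000",
    aws_access_key_id="admin",
    aws_secret_access_key="minio-password",
)


def clean_csv(bucket: str, key: str) -> None:
    """Read a raw CSV from MinIO, normalize text columns, and write it back."""
    raw = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    df = pd.read_csv(io.BytesIO(raw))

    # Example cleaning: trim whitespace and lower-case every text column.
    for col in df.select_dtypes(include="object").columns:
        df[col] = df[col].str.strip().str.lower()

    out = io.BytesIO()
    df.to_csv(out, index=False)
    s3.put_object(Bucket=bucket, Key=f"clean/{key}", Body=out.getvalue())


clean_csv("airflow", "drugs.csv")  # file name is illustrative
```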
Ensure Docker is installed on your machine:
$ brew install --cask docker
Clone the repository:
$ git clone https://github.com/SmadjaPaul/servier-de
Navigate to the project directory and run:
$ docker-compose up
To run in the background:
$ docker-compose up -d
| Application | URL | Credentials |
|---|---|---|
| Airflow | http://localhost:8080 | User: airflow / Pass: airflow |
| MinIO | http://localhost:9000 | User: admin / Pass: minio-password |
Review the Makefile for additional commands. Note: Commands were tested on Linux.
echo "Setting Airflow settings"
echo "AIRFLOW_UID=$(id -u)" > .env
echo "Please make sure the user id is correct in .env"
nano .env
echo "Starting everything"
echo "Please wait a while or run 'make ps' to see if things are 'ready' and the init containers have exited (finished)"
echo "Airflow: http://127.0.0.1:8080 (Username: 'airflow', Password: 'airflow')"
echo "Minio: http://127.0.0.1:9002 (Username: 'admin', Password: 'minio-password')"
echo "Ctrl+c to exit when ready"
make up
echo "Cleaning up / removing everything"
make stop
If you want to run the tests, set up a virtual environment with Poetry and install all dependencies, then invoke pytest through Poetry. The tests live in test_func.py, including those related to the Bonus part (a sketch of such a test is shown below).
$ poetry run pytest
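As a rough illustration, a test in the spirit of test_func.py might look like the following; `clean_dataframe` is a hypothetical helper standing in for whatever functions the real tests exercise:

```python
# Hypothetical test sketch: clean_dataframe is a stand-in, not the project's API.
import pandas as pd


def clean_dataframe(df: pd.DataFrame) -> pd.DataFrame:
    """Toy cleaning step: drop duplicate rows and strip text columns."""
    df = df.drop_duplicates()
    for col in df.select_dtypes(include="object").columns:
        df[col] = df[col].str.strip()
    return df


def test_clean_dataframe_removes_duplicates_and_whitespace():
    df = pd.DataFrame({"drug": ["aspirin ", "aspirin "], "id": [1, 1]})
    cleaned = clean_dataframe(df)
    assert len(cleaned) == 1
    assert cleaned["drug"].iloc[0] == "aspirin"
```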
- Creates a `minio/minio` (MinIO) container.
- Uses `mc` inside a `minio/mc` container (`minio-init`) to create a bucket and upload the csv files.
- The `airflow-provision` step creates a connection in Airflow that allows Airflow to access MinIO.
- A DAG reads the files from MinIO, performs the data cleaning, and runs the final graph calculation (see the sketch after this list).
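To make the last point concrete, here is a minimal, hypothetical DAG sketch using the Airflow 2 TaskFlow API; the connection id ("minio_s3"), bucket name, prefix, and task bodies are assumptions about what airflow-provision and the real servier_pipeline DAG set up, not a copy of them:

```python
# Hypothetical DAG sketch: connection id, bucket, and task logic are assumptions.
from datetime import datetime

from airflow.decorators import dag, task
from airflow.providers.amazon.aws.hooks.s3 import S3Hook


@dag(schedule=None, start_date=datetime(2024, 1, 1), catchup=False)
def servier_pipeline_sketch():
    @task
    def list_raw_files() -> list[str]:
        # Uses the MinIO connection assumed to be created by airflow-provision.
        hook = S3Hook(aws_conn_id="minio_s3")
        return hook.list_keys(bucket_name="airflow", prefix="raw/")

    @task
    def clean_and_build_graph(keys: list[str]) -> None:
        # Placeholder for the cleaning and graph-calculation logic.
        for key in keys:
            print(f"would process {key}")

    clean_and_build_graph(list_raw_files())


servier_pipeline_sketch()
```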
- Adding functionality: The final JSON file could be loaded into a Neo4j database, allowing for beautiful visual representations and enhanced analytics capabilities from tools like Jupyter Notebooks (see the sketch after this list).
- DevOps: Implement robust CI/CD and Infrastructure as Code (IaC) practices for a production environment.
- Alternative Workflow: Consider using tools beyond pandas in a production context. Loading data directly into a SQL table and managing transformations with dbt (ELT) or using PySpark (ETL) are alternatives.
- Dynamic variables: Transform all constants into environment variables and set up secrets for better credential management.
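As a hedged illustration of the Neo4j idea above, the sketch below assumes drug_graph_final.json maps each drug to a list of journals that mention it; the real file layout, Neo4j URI, and credentials may differ:

```python
# Hypothetical loader: the JSON layout and Neo4j credentials are assumptions.
import json

from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with open("drug_graph_final.json") as f:
    graph = json.load(f)  # assumed shape: {"drug name": ["journal", ...], ...}

with driver.session() as session:
    for drug, journals in graph.items():
        for journal in journals:
            # MERGE keeps the load idempotent: nodes and edges are created once.
            session.run(
                "MERGE (d:Drug {name: $drug}) "
                "MERGE (j:Journal {name: $journal}) "
                "MERGE (d)-[:MENTIONED_IN]->(j)",
                drug=drug,
                journal=journal,
            )

driver.close()
```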