The end-to-end (e2e) tests added are BDD tests written using Ginkgo.
It("should create airflow-base mysql and nfs components")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: verify
for mysql, the root password secret should be created
mysql and nfs StatefulSets should have 1 pod each
It("should create airflow-base sqlproxy and nfs components")
Step 1: create AirflowBase object with storage and sqlproxy enabled
Step 2: wait for sqlproxy StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: verify
for cloudsql (sqlproxy), the root password secret should be created
sqlproxy and nfs StatefulSets should have 1 pod each
It("should create airflow-cluster components using mysql base and celery executor")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with redis, ui, scheduler and workers enabled and celery executor
Step 4: wait for redis, ui, scheduler and worker StatefulSets to become ready
all pods have to be available
Step 5: verify
All StatefulSets have 1 pod each except workers, which should have 2 pods
Scheduler is configured correctly to connect to mysql, and the celery connection string points to the redis instance
UI is configured correctly to connect to mysql
Workers' celery and mysql configs are correct
Scheduler, ui and workers all synced the DAGs from git repo
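The config checks in step 5 amount to asserting that the rendered connection strings point at the right services. A hedged sketch of the expected formats — the service names, credentials, and ports below are made up; the real tests read the actual values from the pods' environment:

```go
package main

import "fmt"

// sqlAlchemyConn builds the SQLAlchemy-style connection string the scheduler
// and UI are expected to carry. Host/user/db values here are placeholders.
func sqlAlchemyConn(user, password, host, db string) string {
	return fmt.Sprintf("mysql://%s:%s@%s:3306/%s", user, password, host, db)
}

// celeryBrokerURL builds the redis broker URL the celery workers and the
// scheduler's celery config should point at.
func celeryBrokerURL(redisHost string) string {
	return fmt.Sprintf("redis://%s:6379/0", redisHost)
}

func main() {
	fmt.Println(sqlAlchemyConn("airflow", "pw", "mc-mysql", "airflow")) // mysql://airflow:pw@mc-mysql:3306/airflow
	fmt.Println(celeryBrokerURL("mc-redis"))                           // redis://mc-redis:6379/0
}
```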
It("should create airflow-cluster components using cloudsql base and celery executor")
Step 1: create AirflowBase object with storage and sqlproxy enabled
Step 2: wait for sqlproxy StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with redis, ui, scheduler and workers enabled and celery executor
Step 4: wait for redis, ui, scheduler and worker StatefulSets to become ready
all pods have to be available
Step 5: verify
All StatefulSets have 1 pod each except workers, which should have 2 pods
Scheduler is configured correctly to connect to cloudsql, and the celery connection string points to the redis instance
UI is configured correctly to connect to cloudsql
Workers' celery and cloudsql configs are correct
Scheduler, ui and workers all synced the DAGs from git repo
It("should create airflow-cluster components using mysql base and celery executor with DAGs in GCS bucket")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with redis, ui, scheduler and workers enabled and DAGs pointing to GCS bucket and celery executor
Step 4: wait for redis, ui, scheduler and worker StatefulSets to become ready
all pods have to be available
Step 5: verify
All StatefulSets have 1 pod each except workers, which should have 2 pods
Scheduler is configured correctly to connect to mysql, and the celery connection string points to the redis instance
UI is configured correctly to connect to mysql
Workers' celery and mysql configs are correct
Scheduler, ui and workers all synced the DAGs from GCS bucket
It("should support monitoring by scraping output of INFO command")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with redis, ui, scheduler and workers enabled and celery executor
Step 4: wait for redis, ui, scheduler and worker StatefulSets to become ready
all pods have to be available
Step 5: verify
use curl to scrape the scheduler's monitoring endpoint for Prometheus-style INFO details
check that the scraped INFO details contain the necessary airflow metrics
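The metrics check can be sketched as: scrape the endpoint, extract metric names from the Prometheus exposition text, and assert the expected airflow metrics are present. The sample body and metric names below are assumptions for illustration, not the scheduler's actual output:

```go
package main

import (
	"fmt"
	"strings"
)

// metricNames extracts metric names from Prometheus-style exposition text,
// skipping blank lines and # HELP/# TYPE comments. A metric name ends at the
// first '{' (label block) or space (value).
func metricNames(body string) map[string]bool {
	names := map[string]bool{}
	for _, line := range strings.Split(body, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		end := strings.IndexAny(line, "{ ")
		if end < 0 {
			continue
		}
		names[line[:end]] = true
	}
	return names
}

func main() {
	// Stand-in for the scraped body; metric names are hypothetical.
	sample := `# HELP airflow_dag_count Number of DAGs
airflow_dag_count 3
airflow_task_failures{dag="example"} 0`
	names := metricNames(sample)
	fmt.Println(names["airflow_dag_count"], names["airflow_task_failures"]) // true true
}
```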
It("should support scaling for workers")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with redis, ui, scheduler and workers enabled and celery executor
Step 4: wait for redis, ui, scheduler and worker StatefulSets to become ready
all pods have to be available
Step 5: scale up workers by one (via modifying .spec.worker.replicas)
Step 6: verify the new worker pod is created
Step 7: scale down workers by one (via modifying .spec.worker.replicas)
Step 8: verify the latest worker pod is deleted
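The scale verification leans on StatefulSet naming: pods are named `<name>-0` through `<name>-<replicas-1>`, and scale-down removes the highest ordinal first, so the pod that appears on scale-up is the same one that disappears on scale-down. A small helper computing the expected pod set (the worker StatefulSet name is a placeholder):

```go
package main

import "fmt"

// statefulSetPods returns the pod names a StatefulSet with the given replica
// count is expected to own: <name>-0 ... <name>-<replicas-1>.
func statefulSetPods(name string, replicas int) []string {
	pods := make([]string, 0, replicas)
	for i := 0; i < replicas; i++ {
		pods = append(pods, fmt.Sprintf("%s-%d", name, i))
	}
	return pods
}

func main() {
	before := statefulSetPods("mc-worker", 2)
	after := statefulSetPods("mc-worker", 3)
	// The pod added by scale-up is the highest ordinal; scaling back down
	// deletes that same pod.
	fmt.Println(after[len(after)-1]) // mc-worker-2
	fmt.Println(before)             // [mc-worker-0 mc-worker-1]
}
```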
It("should respect topology constraints when scheduling airflow cluster pods")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with redis, ui, scheduler and workers enabled and celery executor and .spec.affinity set to "failure-domain.beta.kubernetes.io/zone"
Step 4: wait for redis, ui, scheduler and worker StatefulSets to become ready
all pods have to be available
Step 5: check all airflow cluster pods are scheduled with respect to the topology constraint
Step 6: repeat steps 1 to 5 with .spec.affinity specified as kubernetes.io/hostname
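The topology check reduces to: among the pods of one component, no two may share the same value of the topology key (zone or hostname) on their nodes. A self-contained sketch — pod names and zone values below are made up:

```go
package main

import "fmt"

// violatesAntiAffinity reports whether any two pods of the same component
// landed on the same topology value. podTopology maps pod name -> the value
// of the topology key (e.g. zone or hostname) on the pod's node.
func violatesAntiAffinity(podTopology map[string]string) bool {
	seen := map[string]bool{}
	for _, topo := range podTopology {
		if seen[topo] {
			return true
		}
		seen[topo] = true
	}
	return false
}

func main() {
	workers := map[string]string{
		"mc-worker-0": "us-central1-a",
		"mc-worker-1": "us-central1-b",
	}
	fmt.Println(violatesAntiAffinity(workers)) // false
	workers["mc-worker-2"] = "us-central1-a"   // two workers in one zone
	fmt.Println(violatesAntiAffinity(workers)) // true
}
```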
It("should create airflow-cluster components using mysql base and local executor")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with ui, scheduler enabled and local executor
Step 4: wait for ui, scheduler StatefulSets to become ready
all pods have to be available
Step 5: verify
All StatefulSets have 1 pod each
Scheduler is configured correctly to connect to mysql
UI is configured correctly to connect to mysql
Scheduler and ui both synced the DAGs from git repo
It("should create airflow-cluster components using cloudsql base and local executor")
Step 1: create AirflowBase object with storage and sqlproxy enabled
Step 2: wait for sqlproxy StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with ui, scheduler enabled and local executor
Step 4: wait for ui, scheduler StatefulSets to become ready
all pods have to be available
Step 5: verify
All StatefulSets have 1 pod each
Scheduler is configured correctly to connect to sqlproxy
UI is configured correctly to connect to sqlproxy
Scheduler and ui both synced the DAGs from git repo
It("should create airflow-cluster components using mysql base and k8s executor")
Step 1: create AirflowBase object with storage and mysql enabled
Step 2: wait for mysql StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with ui, scheduler enabled and k8s executor
Step 4: wait for ui, scheduler StatefulSets to become ready
all pods have to be available
Step 5: verify
All StatefulSets have 1 pod each
Scheduler is configured correctly to connect to mysql
UI is configured correctly to connect to mysql
Scheduler and ui both synced the DAGs from git repo
It("should create airflow-cluster components using cloudsql base and k8s executor")
Step 1: create AirflowBase object with storage and sqlproxy enabled
Step 2: wait for sqlproxy StatefulSet and nfs StatefulSet to become ready
all pods have to be available
Step 3: create AirflowCluster object with ui, scheduler enabled and k8s executor
Step 4: wait for ui, scheduler StatefulSets to become ready
all pods have to be available
Step 5: verify
All StatefulSets have 1 pod each
Scheduler is configured correctly to connect to sqlproxy
UI is configured correctly to connect to sqlproxy
Scheduler and ui both synced the DAGs from git repo