
astronomer / astronomer

454 stars · 43 watchers · 83 forks · 9.63 MB

Helm Charts for the Astronomer Platform, Apache Airflow as a Service on Kubernetes

Home Page: https://www.astronomer.io

License: Other

Python 80.07% · Mustache 8.28% · Shell 5.24% · Jinja 3.34% · Smarty 2.32% · Makefile 0.75%
Topics: apache-airflow, kubernetes, docker, astronomer-platform, astronomer-software

astronomer's Issues

Clickstream: Streaming data mapper

@schnie commented on Tue Oct 03 2017

We've discussed this internally several times, and Alooma has a similar feature. We'd like to be able to optionally slot a mapper function in the middle of our streaming pipelines. This could be useful for clickstream as well as our standard pipelines built on Kafka Connect. Potential use cases would be light data transformations or enrichment. We're getting this request from customers and potential clients a lot lately.

Some open questions:

  1. Where do we slot this into the clickstream product, if at all?
  2. What languages should we support?
  3. Should the function be coded in the browser, pushed as code (like Airflow), or pulled in via git hooks?

@willastronomer commented on Wed Nov 29 2017

We could create consume/produce pairs for different languages, create an interface for them that the customer implements, and then pull that code in between the event-router and integration-worker.
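
A rough sketch of what that interface could look like in Python, using kafka-python (the Mapper class, topic names, and broker address are illustrative assumptions, not actual pipeline code):

import json

from kafka import KafkaConsumer, KafkaProducer  # kafka-python

class Mapper:
    """The interface a customer would implement for light transforms/enrichment."""
    def map(self, event):
        raise NotImplementedError

class ExampleEnricher(Mapper):
    def map(self, event):
        event["enriched_by"] = "example-mapper"  # light enrichment
        return event

def run(mapper):
    # Sit between the event-router output and the integration-worker input.
    consumer = KafkaConsumer(
        "event-router-out",  # assumed topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda d: json.dumps(d).encode("utf-8"),
    )
    for message in consumer:
        producer.send("integration-worker-in", mapper.map(message.value))  # assumed topic

if __name__ == "__main__":
    run(ExampleEnricher())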

GDPR CS Compliance: delete data

  • Turn off the archiver
  • Delete all the archiver data (S3)
  • Add a new step to the Airflow CS DAG to delete S3 data for that run (addressed by bucket lifecycle policy)
  • Delete all redshift loader historical data (S3) (addressed by bucket lifecycle policy)
  • Send customer notice: turn on S3 integration if you want your events saved to a data lake

Create a lite version of Airflow in examples

A user may prefer to develop and work on only Airflow. We need to create an example that loads a bare-bones install of the Airflow scheduler and webserver along with a Postgres database.

Postgres is required if we want users to be able to demo Airflow DAG concurrency; SQLite would limit them to the SequentialExecutor, which runs one task at a time.

Open documentation needs updating

This section of the documentation is hard to follow:

https://open.astronomer.io/airflow/index.html

I noticed this commit 31f95f4#diff-364439c8141492fe1670af4f51c33125 removed the start/stop scripts.

What is the recommended way to start the services now? The commit message mentions the CLI but I could not find any documentation on using the CLI with Open.

I tried docker-compose up, but it does not seem to point to the onbuild image that reads requirements.txt and packages.txt, so dependencies are not installed.

The document also mentions the .astro directory, which I don't believe is created by this version of the platform.

Support multiple virtualenvs

I've been using your Open setup with moderate success, though I've spent a lot of time reconciling dependencies. It would be great if, instead of a single requirements.txt, you could support multiple virtual environments. I don't believe this is possible out of the box now, but if it is, please let me know.

To add more context, this is to be able to run Singer taps and targets, which are designed to be run from the command line (as opposed to as a Python module). Another package I had trouble with was dbt.

I realize there is a Python operator that has support for virtual environments but given the fact these tools are run from the command line, I am not sure that would serve my use case.

As a workaround, I created a new Dockerfile based on astronomerinc/ap-airflow:latest-onbuild and installed target-stitch and dbt in their separate venvs. I ran a quick test and that seems to be working OK.
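
For anyone trying the same thing, here is a minimal sketch of how a DAG can then call each tool out of its own venv via BashOperator (the venv paths, tap name, and task details are assumptions from my setup, not something the image provides):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

# Assumes the custom Dockerfile installed each CLI tool into its own venv,
# e.g. /usr/local/venvs/target-stitch and /usr/local/venvs/dbt.
dag = DAG(
    "multi_venv_example",
    start_date=datetime(2018, 1, 1),
    schedule_interval="@daily",
)

tap_to_stitch = BashOperator(
    task_id="tap_to_stitch",
    bash_command=(
        "/usr/local/venvs/tap-example/bin/tap-example "
        "| /usr/local/venvs/target-stitch/bin/target-stitch"
    ),
    dag=dag,
)

dbt_run = BashOperator(
    task_id="dbt_run",
    bash_command="/usr/local/venvs/dbt/bin/dbt run",
    dag=dag,
)

tap_to_stitch >> dbt_run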

Build 'How to Use Airflow' Video

We have good content around installing Astronomer Airflow on Kubernetes, but we don't have many videos or much material on actually using Airflow in the wild.

Airflow: Can we get memory usage over time for a task?

@tedmiston commented on Fri Dec 15 2017

It might be possible to get this info through Flower or the Airflow statsd integration?


@tedmiston commented on Mon Dec 18 2017

@schnie Can you add anything you've learned here recently? I think you were working with statsd in Airflow. Is there overlap between that and how we use Prometheus?


@schnie commented on Wed Dec 20 2017

@tedmiston I don't have anything currently that will track memory usage at a task level over time, but I think we could get live monitoring of the celery workers into prometheus/grafana with cadvisor. I'll tinker around some more and report back.
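
In the meantime, one possible stopgap sketch: report a task's peak RSS to statsd from inside the task callable itself (the decorator, metric names, and statsd host are assumptions, not something Airflow provides out of the box):

import resource
from functools import wraps

from statsd import StatsClient  # pip install statsd

statsd_client = StatsClient(host="localhost", port=8125, prefix="airflow.tasks")

def report_peak_memory(task_name):
    """Wrap a PythonOperator callable and gauge its peak RSS when it finishes."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            finally:
                # ru_maxrss is reported in kilobytes on Linux.
                peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
                statsd_client.gauge("%s.peak_rss_kb" % task_name, peak_kb)
        return wrapper
    return decorator

@report_peak_memory("my_heavy_task")
def my_heavy_task(**context):
    data = [0] * 10000000  # stand-in for real work
    return len(data)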


Broken links in README.md

From README.md:

Full documentation for using the images can be found here.

The project is licensed under the Apache 2 license. For more information on the licenses for each of the individual Astronomer Platform components packaged in the images, please refer to the respective Astronomer Platform documentation for each component.

This URL returns 404: https://astronomerio.github.io/astronomer/

Distinguish Open From EE

After a conversation with a customer, we need to do a better job of distinguishing Open from EE. Some ideas:

  • Add a warning to the docs highlighting that Open is meant for testing and viewing EE internals, not for dev or production workflows. For those, users should be pointed to the Astronomer EE CLI.

  • Change the airflow-enterprise example to airflow-ee-bundle in order to remove the impression that the example is anywhere near feature parity with an enterprise install.

This is somewhat in line with #39.

Redshift Loader v4

We are looking to replace the Airflow-based Redshift loader with some combination of Flink, Kafka, and Spark.

Fix Airflow stats

Our Grafana dashboard for the scheduler/workers shows some inaccurate numbers. See: https://issues.apache.org/jira/browse/AIRFLOW-774.

We should fix those bugs on our fork, release it, and submit a PR to apache.

This is kind of important, but since people aren't actually using it in production, there are no complaints. We should get ahead of it.

Consider pinning Celery to 4.0.2

Currently we accept any version of Celery >=4.0.2; however, there is significant discussion on the Airflow dev list about stability issues with the latest release of Celery (4.1.0), e.g. the recent thread "Airflow looses track of Dag Tasks" and the issue apache/airflow#2806, where a core contributor recommended pinning to 4.0.

Our dependency is https://github.com/astronomerio/incubator-airflow/blob/148c8f4a26bcc1be745534e0a5981982202db66e/setup.py#L100-L103 via https://github.com/astronomerio/astronomer/blob/f03223f773c93cd09417044742722679dd8b97a8/docker/platform/airflow/Dockerfile#L26

The puckel/docker-airflow repo that we forked docker-airflow-saas and docker-airflow-clickstream from pins to ==4.0.2 as well.

https://github.com/puckel/docker-airflow/blob/ef712bccc2d68994c16f6851a065bb835a5d4f78/Dockerfile#L60

@schnie @andscoop @cwurtz @ryw I think we should pin to 4.0.2 for now, like everyone else, for similar reasons. Anyone opposed to that?
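
For concreteness, the change would presumably look something like this in the celery extra of our setup.py (the exact neighboring lines are an assumption based on the links above):

# Hypothetical change to the celery extra in setup.py:
celery = [
    'celery==4.0.2',  # was 'celery>=4.0.2'; pin per apache/airflow#2806
    'flower>=0.7.3',  # assumed unchanged neighbor in the extra
]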

Clickstream: Implement message deduping system

https://segment.com/blog/exactly-once-delivery

The big takeaway here is that we should partition by messageId so each message gets processed by the same worker every time; then we can build a log of which messages we have already seen. If we haven't seen a message, we pass it along. If we have, we drop it.

This is more about de-duping messages that were sent multiple times (a true dupe). Kafka can give us exactly-once processing, but if there are duplicate messages, that would just guarantee we process each of the duplicates once.
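
A minimal sketch of the consumer-side check, assuming partitioning by messageId already routes every copy of a message to the same worker (the bounded in-memory window here is a stand-in for the RocksDB-backed log in Segment's post):

from collections import OrderedDict

class DedupeWindow:
    """Bounded set of recently seen messageIds."""
    def __init__(self, max_size=1000000):
        self.max_size = max_size
        self.seen = OrderedDict()

    def is_duplicate(self, message_id):
        if message_id in self.seen:
            return True
        self.seen[message_id] = True
        if len(self.seen) > self.max_size:
            self.seen.popitem(last=False)  # evict the oldest id
        return False

def process(messages, window, forward):
    # Partitioning by messageId means a local window per worker is enough
    # to catch true dupes: pass unseen messages along, drop the rest.
    for msg in messages:
        if not window.is_duplicate(msg["messageId"]):
            forward(msg)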

Clickstream Open: Cross-Domain Analytics

@schnie commented on Wed Nov 08 2017

Several customers/prospects have been asking about cross-domain analytics and how they can track user actions across multiple domains.

An example use case is a company that uses a micro-site or blog to drive traffic into their main site to purchase products. This company could pump events from both sites into schemas within a single data warehouse or cluster and run queries that JOIN the two datasets to see how events/pageviews on one site drive sales on the other.

A solution would be to append, somewhere in our event processing pipeline, a crossDomainId to all events originating from domains the customer has set up in our app.
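
A rough sketch of that enrichment step, assuming a per-customer mapping from domains to a shared crossDomainId (the lookup table and event field paths are illustrative assumptions):

# Hypothetical lookup configured in our app: domains the customer has linked.
CROSS_DOMAIN_IDS = {
    "blog.example.com": "xd-customer-42",
    "www.example.com": "xd-customer-42",
}

def enrich(event):
    """Append a crossDomainId based on the event's originating domain."""
    domain = event.get("context", {}).get("page", {}).get("host")
    cross_domain_id = CROSS_DOMAIN_IDS.get(domain)
    if cross_domain_id:
        event.setdefault("context", {}).setdefault("traits", {})["crossDomainId"] = cross_domain_id
    return event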

A simple example query could then look like:

SELECT count(*)
FROM my_blog.pages
INNER JOIN my.bid_on_item
ON my_blog.pages.context_traits_crossDomainId = 
    my.bid_on_item.context_traits_crossDomainId
WHERE my_blog.pages.name = 'Introducing Collectibles' AND
    my.bid_on_item.product_category = 'Collectible'
