Awesome Open-Source Data Engineering

This Awesome List aims to provide an overview of open-source projects related to data engineering. This is a community effort: please contribute and send pull requests to grow this list! For a list that also includes non-OSS tools, see this amazing Awesome List.

Analytics

  • Apache Spark - A unified analytics engine for large-scale data processing. Includes APIs in Scala, Java, Python (known as PySpark), and R (SparkR); see the PySpark sketch after this list.

  • Apache Beam - An open-source implementation of the Google Dataflow model. Provides batch and streaming data processing pipelines that run on any supported execution engine, including Spark, Flink, or its own DirectRunner. Supports multiple APIs in Java, Python, and Go.

  • Apache Flink - Stateful computations over data streams.
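
As a taste of the APIs mentioned above, here is a minimal PySpark sketch of a local aggregation job; the application name, data, and column names are illustrative and assume only that the pyspark package is installed.

    # Minimal PySpark sketch: a local aggregation job.
    # Assumes the `pyspark` package is installed (e.g. `pip install pyspark`).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("awesome-example").master("local[*]").getOrCreate()

    events = spark.createDataFrame(
        [("clicks", 3), ("views", 10), ("clicks", 7)],
        ["event_type", "count"],
    )

    # Group and sum, then print the result on the driver.
    totals = events.groupBy("event_type").agg(F.sum("count").alias("total"))
    totals.show()

    spark.stop()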

Business Intelligence

  • Apache Superset - A modern, enterprise-ready business intelligence web application.

  • HUE - Hadoop User Experience, a web UI for the Hadoop ecosystem. Similar to Superset, but provides interfaces to RDBMSs, Hive, Impala, HBase, Spark, HDFS & S3, Oozie, Pig, the YARN Job Explorer, and more. Offers an extensible Django environment for custom app integration.

  • Metabase - An easy way for everyone in your company to ask questions and learn from data.

  • Redash - Query, visualize, and share data from any supported data source.

Change Data Capture

  • Debezium - Change data capture for MySQL, Postgres, MongoDB, SQL Server, and others; see the consumer sketch after this list.

  • Maxwell - Maxwell’s daemon, a MySQL-to-JSON Kafka producer.
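
A hedged sketch of consuming Debezium-style change events from Kafka with the kafka-python client; the broker address and topic name are assumptions, and the envelope fields (op, before, after) follow Debezium's documented JSON format.

    # Hedged sketch: read Debezium-style change events from a Kafka topic.
    # Assumes `kafka-python` is installed and a broker at localhost:9092;
    # the topic name "dbserver1.inventory.customers" is illustrative.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "dbserver1.inventory.customers",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v) if v else None,
        auto_offset_reset="earliest",
    )

    for message in consumer:
        event = message.value
        if event is None:          # tombstone record
            continue
        payload = event.get("payload", event)
        # Debezium envelopes carry the operation ("c", "u", "d") plus before/after row images.
        print(payload.get("op"), payload.get("before"), payload.get("after"))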

Datastores

  • Apache Calcite - SQL parser, building blocks for datastores.

  • Apache Cassandra - Open Source distributed wide column store, NoSQL database.

  • Apache Druid - A high performance real-time analytics database.

  • Apache HBase - Open Source non-relational distributed database.

  • Apache Pinot - A realtime distributed OLAP datastore.

  • ClickHouse - Open Source distributed column-oriented DBMS.

  • InfluxDB - Purpose-Built Open Source Time Series Database.

  • MinIO - A high-performance, distributed object storage system compatible with the AWS S3 API; see the sketch after this list.

  • Postgres - The World’s Most Advanced Open Source Relational Database.
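
Because MinIO exposes the S3 API, stock S3 clients work against it. A minimal sketch with boto3, assuming a local MinIO server on port 9000 and its common development credentials; the bucket and object names are illustrative.

    # Hedged sketch: talk to a local MinIO server through the S3 API with boto3.
    # The endpoint and credentials are assumptions (MinIO's common dev defaults).
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="http://localhost:9000",
        aws_access_key_id="minioadmin",
        aws_secret_access_key="minioadmin",
    )

    s3.create_bucket(Bucket="raw-events")
    s3.put_object(Bucket="raw-events", Key="example.json", Body=b'{"hello": "world"}')
    print([b["Name"] for b in s3.list_buckets()["Buckets"]])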

Data Governance and Registries

  • Amundsen - A data discovery and metadata catalogue.

  • Apache Atlas - Data governance and metadata framework for Hadoop.

  • DataHub - A Generalized Metadata Search & Discovery Tool.

  • Metacat - Unified metadata exploration API service.

Data Virtualization

  • Apache Drill - Schema-free SQL Query Engine for Hadoop, NoSQL and Cloud Storage.

  • Dremio - A data lake engine. Provides an Apache Arrow-based query and acceleration engine together with the ability to create an IT-governed self-service layer for data scientists and analysts.

  • Teiid - A relational abstraction of different information sources.

  • Presto - Distributed SQL Query Engine for Big Data.

Data Orchestration

  • Alluxio - Scalable, multi-tiered distributed caching for HDFS, S3, Ceph, NFS, and related filestores. Provides catalog integrations for SQL queries from Spark, Hive, and Presto.

Formats

  • Apache Avro - A data serialization system.

  • Apache Parquet - A columnar storage format; see the round-trip sketch after this list.

  • Apache ORC - Another columnar storage format.

  • Apache Thrift - Data type and service interface definitions and code generator.

  • Apache Arrow - A cross-language development platform for in-memory data. It specifies a standardized, language-independent, columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware. It also provides computational libraries and zero-copy IPC and streaming messaging.

  • Cap’n Proto - A data interchange format and capability-based RPC system.

  • FlatBuffers - An efficient cross platform serialization library for C++, C#, C, Go, Java, JavaScript, Lobster, Lua, TypeScript, PHP, Python, and Rust.

  • MessagePack - An efficient binary serialization format. It lets you exchange data among multiple languages like JSON.

  • Protocol Buffers - Google’s language-neutral, platform-neutral, extensible mechanism for serializing structured data.
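
To make the columnar formats above concrete, a minimal sketch that builds an Arrow table in memory and round-trips it through Parquet using the pyarrow library; the column names and file path are illustrative.

    # Minimal sketch: build an in-memory Arrow table and round-trip it through Parquet.
    # Assumes `pyarrow` is installed; the column names and file path are illustrative.
    import pyarrow as pa
    import pyarrow.parquet as pq

    table = pa.table({
        "event_type": ["clicks", "views", "clicks"],
        "count": [3, 10, 7],
    })

    pq.write_table(table, "events.parquet")          # columnar on-disk format
    roundtrip = pq.read_table("events.parquet")      # back into Arrow memory
    print(roundtrip.to_pydict())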

Integration

  • Apache Camel - Easily integrate various systems consuming or producing data.

  • Kafka Connect - A reusable framework for moving data into and out of Apache Kafka.

  • Logstash - Open Source server-side data processing pipeline.

  • Telegraf - A plugin-driven server agent written in Go (deployed as a single binary with no external dependencies) for collecting and sending metrics and events from databases, systems, and IoT sensors. Offers hundreds of existing plugins.

Messaging Infrastructure

  • Apache ActiveMQ - Flexible & Powerful Multi-Protocol Messaging.

  • Apache Kafka - A distributed commit log with messaging capabilities; see the producer sketch after this list.

  • Apache Pulsar - A distributed pub-sub messaging system.

  • Liiklus - An event gateway that provides reactive gRPC/RSocket access to Kafka-like systems.

  • Nakadi - A distributed event bus that implements a RESTful API abstraction on top of Kafka-like queues.

  • NATS - A simple, secure and high performance messaging system.

  • RabbitMQ - A message broker.

  • Waltz - A quorum-based distributed write-ahead log for replicating transactions.

  • ZeroMQ - An open-source universal, high-performance messaging library.
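
A hedged sketch of publishing JSON messages to a Kafka topic with the kafka-python client; the broker address and topic name are assumptions.

    # Hedged sketch: publish a few JSON messages to a Kafka topic with kafka-python.
    # Assumes a broker at localhost:9092; the topic name "events" is illustrative.
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    for i in range(3):
        producer.send("events", {"event_id": i, "type": "click"})

    producer.flush()   # block until all buffered records are sent
    producer.close()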

Specifications and Standards

  • CloudEvents - A specification for describing event data in a common way.
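
A minimal sketch of what a CloudEvents 1.0 event looks like in its JSON representation; the source URI, type name, and payload below are illustrative values, while specversion, id, source, and type are the attributes the specification requires.

    # Hedged sketch of a CloudEvents 1.0 event in its JSON representation.
    # The source URI, type name, and payload are illustrative values.
    import json
    import uuid
    from datetime import datetime, timezone

    event = {
        "specversion": "1.0",                       # required context attributes
        "id": str(uuid.uuid4()),
        "source": "/demo/orders",
        "type": "com.example.order.created",
        "time": datetime.now(timezone.utc).isoformat(),
        "datacontenttype": "application/json",
        "data": {"order_id": 42, "amount": 19.99},  # event payload
    }

    print(json.dumps(event, indent=2))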

Stream Processing

  • Apache Heron - The "direct successor of Apache Storm", built to be backwards compatible with Storm’s topology API but with a wide array of architectural improvements.

  • Apache Kafka Streams - A client library for building applications and microservices, where the input and output data are stored in Kafka.

  • Apache Samza - A distributed stream processing framework.

  • Apache Spark Structured Streaming - A scalable and fault-tolerant stream processing engine built on the Spark SQL engine; see the sketch after this list.

  • Apache Storm - A distributed realtime computation system.
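
A minimal Structured Streaming sketch using Spark's built-in rate source, so no external system is needed; the window size, row rate, and run duration are illustrative.

    # Minimal sketch: a Structured Streaming job on the built-in "rate" source.
    # Runs locally; the rate, window size, and run duration are illustrative.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("streaming-example").master("local[*]").getOrCreate()

    # The rate source emits (timestamp, value) rows at a fixed rate.
    stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

    counts = stream.groupBy(F.window("timestamp", "10 seconds")).count()

    query = (
        counts.writeStream
        .outputMode("complete")
        .format("console")
        .start()
    )
    query.awaitTermination(30)   # run for ~30 seconds, then shut down
    spark.stop()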

Testing

Workflow Management

  • Awesome Workflow Engines - A curated list of awesome open source workflow engines.

  • Apache Airflow - A platform created by the community to programmatically author, schedule, and monitor workflows; see the DAG sketch after this list.

  • Apache NiFi - Supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.

  • KNIME - KNIME Analytics Platform offers a WYSIWYG editor for Spark-based workflows with more than 2,000 integrations. Offers visualization and flow analytics in place. KNIME Server is a commercially licensed component that adds additional features.

  • Prefect - A workflow management system designed for modern infrastructure.
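
A hedged sketch of a minimal Airflow 2.x DAG; the DAG id, schedule, and bash commands are illustrative.

    # Hedged sketch of a minimal Airflow DAG (Airflow 2.x style);
    # the dag_id, schedule, and bash commands are illustrative.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="example_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",          # older 2.x releases call this argument `schedule_interval`
        catchup=False,
    ) as dag:
        extract = BashOperator(task_id="extract", bash_command="echo extracting")
        load = BashOperator(task_id="load", bash_command="echo loading")

        extract >> load             # set task ordering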

The following sections collect overview material rather than specific tools:

Slide Decks, Recordings and Podcasts

Blog Posts and Articles

Collections

License

The contents of this repository are licensed under the "Creative Commons Attribution-ShareAlike 4.0 International License".
