
Anomaly Detection Pipeline on Azure Databricks

The following is an anomaly detection data pipeline on Azure Databricks. This solution was built to demonstrate how to build advanced analytics pipelines on Azure Databricks, with a particular focus on the Spark MLlib library. This solution includes:

  1. Initial ETL data loading process into Spark SQL tables
  2. Model training and scoring
    • Explanation of Pipelines, Transformers and Estimators
    • Sample custom Estimator (PCAAnomaly); a minimal sketch of these ideas appears after this list
  3. Persisting trained models
  4. Productionizing models through
    • Batch inference
    • Streaming
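
The sketch below is illustrative only: the repository's actual implementation uses a custom PCAAnomaly estimator, whereas this PySpark sketch uses the built-in PCA stage, and the table name, column names and model path are placeholder assumptions rather than the repo's real values.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline, PipelineModel
from pyspark.ml.feature import VectorAssembler, StandardScaler, PCA

spark = SparkSession.builder.getOrCreate()

# 1. Read from a Spark SQL table produced by the initial ETL step (table name is a placeholder).
train_df = spark.table("kdd")
feature_cols = ["duration", "src_bytes", "dst_bytes"]  # placeholder numeric columns

# 2. A Pipeline chains Transformers and Estimators; fit() turns the Pipeline (an
#    Estimator) into a PipelineModel (a Transformer). A custom estimator such as
#    PCAAnomaly would slot in as an extra stage that turns the PCA projection
#    into an anomaly score.
pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=feature_cols, outputCol="features_raw"),
    StandardScaler(inputCol="features_raw", outputCol="features"),
    PCA(k=2, inputCol="features", outputCol="pca_features"),
])
model = pipeline.fit(train_df)

# 3. Persist the trained model (path is a placeholder).
model.write().overwrite().save("dbfs:/mnt/models/anomaly_pipeline")

# 4a. Batch inference: reload the model and score a static table.
reloaded = PipelineModel.load("dbfs:/mnt/models/anomaly_pipeline")
reloaded.transform(spark.table("kdd")).select("pca_features").show(5, truncate=False)

# 4b. Streaming: the same transform() call also works on a streaming DataFrame,
#     e.g. one created with spark.readStream from Event Hubs or Kafka.
```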

Architecture


Data

KDD Cup 1999 Data
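
As a rough illustration of the initial ETL step, the sketch below loads the KDD Cup 1999 file into a Spark SQL table with PySpark. The file path, mount point and table name are assumptions for illustration, not the repo's actual values.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the raw KDD Cup 1999 CSV (path/mount point is a placeholder assumption).
raw = (spark.read
       .option("header", "false")
       .option("inferSchema", "true")
       .csv("dbfs:/mnt/data/kddcup.data.gz"))

# KDD Cup 1999 records have 41 features plus a trailing label column.
raw = raw.withColumnRenamed(raw.columns[-1], "label")

# Register the data as a Spark SQL table for downstream training and scoring
# (table name is a placeholder).
raw.write.mode("overwrite").saveAsTable("kdd")
```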

Deployment

  • Ensure you are in the root of the repository
  • To deploy the solution, use one of the following commands:
    1. (Easiest) Using pre-built docker container: docker run -it devlace/azdatabricksanomaly
    2. Build and run the container locally: make deploy_w_docker
    3. Deploy using local environment (see requirements below): make deploy
  • Follow the prompts to log in to Azure and to provide a resource group name, deployment location, etc.
  • When prompted for a Databricks Host, enter the full URL of your Databricks workspace host, e.g. https://southeastasia.azuredatabricks.net
  • When prompted for a token, generate a new personal access token in your Databricks workspace (see the optional check after this list).
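
Optional sanity check, not part of the deployment script: the snippet below, which assumes the standard Databricks REST API and the requests library, verifies that the host and token you entered are valid. The host and token values are placeholders.

```python
import requests

host = "https://southeastasia.azuredatabricks.net"  # your Databricks workspace URL
token = "<personal-access-token>"                   # placeholder; do not hard-code real tokens

# List clusters as a lightweight check that the host/token pair works.
resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
print(resp.json())
```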

To view additional make commands, run make.

For local deployment

Requirements

Development environment

  • The following works with Windows Subsystem for Linux
  • Clone this repository
  • cd azure-databricks-anomaly
  • Create a Python environment (virtualenv or Conda). The following uses virtualenv.
    • virtualenv . This creates a Python virtual environment in the current directory.
    • source bin/activate This activates the virtual environment.
  • make requirements. This installs the Python dependencies in the virtual environment.

Project Organization


├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── deploy             <- Deployment artifacts
│   ├── databricks     <- Deployment artifacts in relation to the Databricks workspace
│   ├── deploy.sh      <- Deployment script to deploy all Azure Resources
│   ├── azuredeploy.json <- Azure ARM template w/ .parameters file
│   └── Dockerfile     <- Dockerfile for deployment
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│                         the creator's initials, and a short `-` delimited description, e.g.
│                         `1.0-jqp-initial-data-exploration`.
│
├── references         <- Contains the powerpoint presentation, and other reference materials.
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
└── setup.py           <- makes project pip installable (pip install -e .) so src can be imported

Project based on the cookiecutter data science project template. #cookiecutterdatascience
