Train and Deploy your ML and AI Models in the Following Environments:
- Slack:
- Email: [email protected]
- Web: https://support.pipeline.ai
- Troubleshooting: Guide
- PipelineAI Deep Learning Workshops (TensorFlow + Spark + GPUs)
- Advanced Spark and TensorFlow Meetup (Global)
PipelineAI Features
Each model is built into a separate Docker image with the appropriate Python, C++, and Java/Scala runtime libraries for training or prediction.
Use the same Docker image from your local laptop to production to avoid dependency surprises.
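The packaging idea above can be sketched as a minimal Dockerfile. This is illustrative only: the base image, file paths, and entrypoint script are assumptions, not PipelineAI's actual build recipe, which is generated for you.

```dockerfile
# Hypothetical base image; PipelineAI bundles the Python/C++/Java
# runtime libraries appropriate to each model type
FROM python:3.10-slim

# Pin and install the model's dependencies inside the image
COPY requirements.txt /model/requirements.txt
RUN pip install --no-cache-dir -r /model/requirements.txt

# Copy the trained model artifacts and prediction code
COPY model/ /model/

# The same image serves predictions locally and in production
CMD ["python", "/model/predict.py"]
```

Building this once (`docker build -t my-model .`) and running the identical image tag on a laptop and in production is what avoids the dependency surprises mentioned above.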
Click HERE to view model samples for the following:
- Scikit-Learn
- TensorFlow
- Keras
- Spark ML (formerly called Spark MLlib)
- XGBoost
- PMML/PFA
- Custom Java
- Custom Python
- Model Ensembles
- Python
- Java
- Scala
- C++
- Caffe2
- Theano
- TensorFlow Serving (TensorFlow)
- NVIDIA TensorRT (TensorFlow, Caffe2)
- MXNet
- CNTK
- ONNX
- Click HERE to compare PipelineAI Products.