
amumu96 / inference

This project forked from xorbitsai/inference


Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you need. With Xinference, you're empowered to run inference with any open-source language models, speech recognition models, and multimodal models, whether in the cloud, on-premises, or even on your laptop.

Home Page: https://inference.readthedocs.io

License: Apache License 2.0

JavaScript 12.50% Python 86.96% CSS 0.16% HTML 0.13% Dockerfile 0.24%

inference's Introduction


Xorbits Inference: Model Serving Made Easy 🤖


English | 中文介绍 (Chinese) | 日本語 (Japanese)


Xorbits Inference (Xinference) is a powerful and versatile library designed to serve language, speech recognition, and multimodal models. With Xorbits Inference, you can effortlessly deploy and serve your own models or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.

🔥 Hot Topics

Framework Enhancements

  • Support specifying worker and GPU indexes for launching models: #1195
  • Support SGLang backend: #1161
  • Support LoRA for LLM and image models: #1080
  • Support speech recognition model: #929
  • Metrics support: #906
  • Docker image: #855
  • Support multimodal: #829

New Models

Integrations

  • Dify: an LLMOps platform that enables developers (and even non-developers) to quickly build useful applications based on large language models, ensuring they are visual, operable, and improvable.
  • FastGPT: a knowledge-based platform built on LLMs that offers out-of-the-box data processing and model invocation capabilities, and allows workflow orchestration through Flow visualization.
  • Chatbox: a desktop client for multiple cutting-edge LLM models, available on Windows, Mac and Linux.
  • RAGFlow: an open-source RAG engine based on deep document understanding.

Key Features

🌟 Model Serving Made Easy: Simplify the process of serving large language, speech recognition, and multimodal models. You can set up and deploy your models for experimentation and production with a single command.

⚡️ State-of-the-Art Models: Experiment with cutting-edge built-in models using a single command. Inference provides access to state-of-the-art open-source models!

🖥 Heterogeneous Hardware Utilization: Make the most of your hardware resources with ggml. Xorbits Inference intelligently utilizes heterogeneous hardware, including GPUs and CPUs, to accelerate your model inference tasks.

βš™οΈ Flexible API and Interfaces: Offer multiple interfaces for interacting with your models, supporting OpenAI compatible RESTful API (including Function Calling API), RPC, CLI and WebUI for seamless model management and interaction.

🌐 Distributed Deployment: Excel in distributed deployment scenarios, allowing the seamless distribution of model inference across multiple devices or machines.

🔌 Built-in Integration with Third-Party Libraries: Xorbits Inference seamlessly integrates with popular third-party libraries including LangChain, LlamaIndex, Dify, and Chatbox.
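
As one illustration of these integrations, the sketch below points LangChain's community Xinference wrapper at a locally running server. The server URL and the model UID "my-llm" are assumptions for your own deployment, and the import path may differ slightly across LangChain versions.

```python
# A minimal sketch of using a served Xinference model through LangChain.
# Assumes a server at localhost:9997 and an already-launched model "my-llm".
from langchain_community.llms import Xinference

llm = Xinference(
    server_url="http://localhost:9997",
    model_uid="my-llm",  # hypothetical model UID
)
print(llm.invoke("Name three use cases for local LLM serving."))
```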

Why Xinference

| Feature | Xinference | FastChat | OpenLLM | RayLLM |
| --- | --- | --- | --- | --- |
| OpenAI-Compatible RESTful API | ✅ | ✅ | ✅ | ✅ |
| vLLM Integrations | ✅ | ✅ | ✅ | ✅ |
| More Inference Engines (GGML, TensorRT) | ✅ | ❌ | ✅ | ✅ |
| More Platforms (CPU, Metal) | ✅ | ✅ | ❌ | ❌ |
| Multi-node Cluster Deployment | ✅ | ❌ | ❌ | ✅ |
| Image Models (Text-to-Image) | ✅ | ✅ | ❌ | ❌ |
| Text Embedding Models | ✅ | ❌ | ❌ | ❌ |
| Multimodal Models | ✅ | ❌ | ❌ | ❌ |
| Audio Models | ✅ | ❌ | ❌ | ❌ |
| More OpenAI Functionalities (Function Calling) | ✅ | ❌ | ❌ | ❌ |

Getting Started

Please give us a star before you begin, and you'll receive instant notifications for every new release on GitHub!

Jupyter Notebook

The lightest way to experience Xinference is to try our Jupyter Notebook on Google Colab.

Docker

Nvidia GPU users can start an Xinference server using the Xinference Docker image. Before executing the command below, ensure that both Docker and CUDA are set up on your system.

docker run --name xinference -d -p 9997:9997 -e XINFERENCE_HOME=/data -v </on/your/host>:/data --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0

Quick Start

Install Xinference with pip as follows. (For more options, see the Installation page.)

pip install "xinference[all]"

To start a local instance of Xinference, run the following command:

$ xinference-local

Once Xinference is running, there are multiple ways you can try it: via the web UI, via cURL, via the command line, or via Xinference's Python client. Check out our docs for the guide.
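
As an example, the Python client can launch a built-in model and chat with it in a few lines. The sketch below follows the documented client interface; the model name and generation settings are illustrative and may need to be adapted to your installed version.

```python
# A minimal sketch of Xinference's Python client: launch a built-in chat
# model on the local server started above and send it a single message.
from xinference.client import Client

client = Client("http://localhost:9997")

# Model name is an example; pick any chat model listed in the web UI or docs.
model_uid = client.launch_model(model_name="chatglm2")
model = client.get_model(model_uid)

reply = model.chat(
    "What is the largest animal?",
    chat_history=[],
    generate_config={"max_tokens": 512},
)
print(reply)
```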

(Screenshot: the Xinference web UI)

Getting involved

| Platform | Purpose |
| --- | --- |
| GitHub Issues | Reporting bugs and filing feature requests. |
| Slack | Collaborating with other Xorbits users. |
| Twitter | Staying up-to-date on new features. |

Contributors

uranusseven, chengjieli28, aresnow1, codingl2k1, qinxuye, pangyoki, bojun-feng, onesuper, jiayini1119, rayji01, minamiyama, hainaweiben, amumu96, mikeshi80, yiboyasss, ago327, mujin2, frostyplanet, notsyncing, xiaodouzi666, wertycn, richzw, liunux4odoo, fengsxy, zhangtianrong, yukun-cui, eltociear, zhanghx0905, waltcow, utopia2077
