bentoml / onediffusion

OneDiffusion: Run any Stable Diffusion models and fine-tuned weights with ease

Home Page: https://bentoml.com

License: Apache License 2.0

Python 100.00%
ai diffusion-models fine-tuning kubernetes lora model-serving stable-diffusion

onediffusion's Introduction

πŸ–ΌοΈ OneDiffusion


OneDiffusion is an open-source, one-stop platform for deploying any diffusion model in production. It is built specifically for diffusion models, supporting both pretrained models and fine-tuned weights with LoRA adapters.

Key features include:

  • 🌐 Broad compatibility: Support both pretrained and LoRA-adapted diffusion models, providing flexibility in choosing and deploying the appropriate model for various image generation tasks.
  • 💪 Optimized performance and scalability: Automatically select the best optimizations, such as half-precision weights or xFormers, to achieve the best inference speed out of the box.
  • βŒ›οΈ Dynamic LoRA adapter loading: Dynamically load and unload LoRA adapters on every request, providing greater adaptability and ensuring the models remain responsive to changing inputs and conditions.
  • 🍱 First-class support for BentoML: Seamless integration with the BentoML ecosystem, allowing you to build Bentos and push them to BentoCloud.

OneDiffusion is designed for AI application developers who require a robust and flexible platform for deploying diffusion models in production. The platform offers tools and features to fine-tune, serve, deploy, and monitor these models effectively, streamlining the end-to-end workflow for diffusion model deployment.

Supported models

Currently, OneDiffusion supports the following models:

  • Stable Diffusion v1.4, v1.5 and v2.0
  • Stable Diffusion XL v1.0
  • Stable Diffusion XL Turbo

More models (for example, ControlNet and DeepFloyd IF) will be added soon.

Note

If you want to deploy Stable Video Diffusion, see the project BentoSVD.

Get started

To quickly get started with OneDiffusion, follow the instructions below or try this tutorial in Google Colab: Serving Stable Diffusion with OneDiffusion.

Prerequisites

You have installed Python 3.8 (or later) and pip.

Install OneDiffusion

Install OneDiffusion by using pip as follows:

pip install onediffusion

To verify the installation, run:

$ onediffusion -h

Usage: onediffusion [OPTIONS] COMMAND [ARGS]...

       ██████╗ ███╗   ██╗███████╗██████╗ ██╗███████╗███████╗██╗   ██╗███████╗██╗ ██████╗ ███╗   ██╗
      ██╔═══██╗████╗  ██║██╔════╝██╔══██╗██║██╔════╝██╔════╝██║   ██║██╔════╝██║██╔═══██╗████╗  ██║
      ██║   ██║██╔██╗ ██║█████╗  ██║  ██║██║█████╗  █████╗  ██║   ██║███████╗██║██║   ██║██╔██╗ ██║
      ██║   ██║██║╚██╗██║██╔══╝  ██║  ██║██║██╔══╝  ██╔══╝  ██║   ██║╚════██║██║██║   ██║██║╚██╗██║
      ╚██████╔╝██║ ╚████║███████╗██████╔╝██║██║     ██║     ╚██████╔╝███████║██║╚██████╔╝██║ ╚████║
       ╚═════╝ ╚═╝  ╚═══╝╚══════╝╚═════╝ ╚═╝╚═╝     ╚═╝      ╚═════╝ ╚══════╝╚═╝ ╚═════╝ ╚═╝  ╚═══╝
          
          An open platform for operating diffusion models in production.
          Fine-tune, serve, deploy, and monitor any diffusion models with ease.
          

Options:
  -v, --version  Show the version and exit.
  -h, --help     Show this message and exit.

Commands:
  build     Package a given model into a Bento.
  download  Setup diffusion models interactively.
  start     Start any diffusion models as a REST server.

Start a diffusion server

OneDiffusion allows you to quickly spin up any diffusion models. To start a server, run:

onediffusion start stable-diffusion

This starts a server at http://0.0.0.0:3000/. You can interact with it by visiting the web UI or sending a request via curl.

curl -X 'POST' \
  'http://0.0.0.0:3000/text2img' \
  -H 'accept: image/jpeg' \
  -H 'Content-Type: application/json' \
  --output output.jpg \
  -d '{
  "prompt": "a bento box",
  "negative_prompt": null,
  "height": 768,
  "width": 768,
  "num_inference_steps": 50,
  "guidance_scale": 7.5,
  "eta": 0
}'
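The same call can be made from Python using only the standard library. This is a minimal sketch; the build_payload and text2img helper names are illustrative, not part of OneDiffusion:

```python
import json
from urllib import request

def build_payload(prompt: str, height: int = 768, width: int = 768) -> dict:
    """Assemble the JSON body expected by the /text2img endpoint."""
    return {
        "prompt": prompt,
        "negative_prompt": None,
        "height": height,
        "width": width,
        "num_inference_steps": 50,
        "guidance_scale": 7.5,
        "eta": 0,
    }

def text2img(prompt: str, url: str = "http://0.0.0.0:3000/text2img") -> bytes:
    """POST the payload to a running OneDiffusion server and return the JPEG bytes."""
    body = json.dumps(build_payload(prompt)).encode("utf-8")
    req = request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json", "accept": "image/jpeg"},
    )
    with request.urlopen(req) as resp:
        return resp.read()
```

With a server running, `open("output.jpg", "wb").write(text2img("a bento box"))` mirrors the curl example above.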

By default, OneDiffusion uses stabilityai/stable-diffusion-2 to start the server. To use a specific model version, add the --model-id option as below:

onediffusion start stable-diffusion --model-id runwayml/stable-diffusion-v1-5

To specify another pipeline, use the --pipeline option as below. The img2img pipeline allows you to modify images based on a given prompt and image.

onediffusion start stable-diffusion --pipeline "img2img"

OneDiffusion downloads the models to the BentoML local Model Store if they have not been registered before. To view your models, install BentoML first with pip install bentoml and then run:

$ bentoml models list

Tag                                                                                         Module                              Size        Creation Time
pt-sd-stabilityai--stable-diffusion-2:1e128c8891e52218b74cde8f26dbfc701cb99d79              bentoml.diffusers                   4.81 GiB    2023-08-16 17:52:33
pt-sdxl-stabilityai--stable-diffusion-xl-base-1.0:bf714989e22c57ddc1c453bf74dab4521acb81d8  bentoml.diffusers                   13.24 GiB   2023-08-16 16:09:01

Start a Stable Diffusion XL server

OneDiffusion also supports running Stable Diffusion XL 1.0, Stability AI's most advanced text-to-image model in the Stable Diffusion suite. To start an XL server, simply run:

onediffusion start stable-diffusion-xl

It downloads the model automatically if it does not exist locally. Options such as --model-id are also supported. For more information, run onediffusion start stable-diffusion-xl --help.

Similarly, visit http://0.0.0.0:3000/ or send a request via curl to interact with the XL server. Example prompt:

{
  "prompt": "the scene is a picturesque environment with beautiful flowers and trees. In the center, there is a small cat. The cat is shown with its chin being scratched. It is crouched down peacefully. The cat's eyes are filled with excitement and satisfaction as it uses its small paws to hold onto the food, emitting a content purring sound.",
  "negative_prompt": null,
  "height": 1024,
  "width": 1024,
  "num_inference_steps": 50,
  "guidance_scale": 7.5,
  "eta": 0
}

Example output:

sdxl-cat

Start a Stable Diffusion XL Turbo server

SDXL Turbo is a distilled version of SDXL 1.0 and is capable of creating images in a single step, with improved real-time text-to-image output quality and sampling fidelity.

To serve SDXL Turbo locally, run:

onediffusion start stable-diffusion-xl --model-id stabilityai/sdxl-turbo

Visit http://0.0.0.0:3000/ or send a request via curl to interact with the server. Example prompt:

{
  "prompt": "Create a serene landscape at sunset, with a tranquil lake reflecting the vibrant colors of the sky. Surrounding the lake are lush, green forests and distant mountains.",
  "height": 512,
  "width": 512,
  "num_inference_steps": 1,
  "guidance_scale": 0.0
}

Note

SDXL Turbo can run inference in a single step, so setting num_inference_steps to 1 is enough to generate high-quality images. However, increasing the number of steps to 2, 3, or 4 may further improve image quality. In addition, make sure you set guidance_scale to 0.0 to disable it, as the model was trained without guidance. See the official release notes to learn more.

Example output:

sdxl-turbo-output

Add LoRA weights

Low-Rank Adaptation (LoRA) is a training method to fine-tune models without the need to retrain all parameters. You can add LoRA weights to your diffusion models for specific data needs.

Add the --lora-weights option as below:

onediffusion start stable-diffusion-xl --lora-weights "/path/to/lora-weights.safetensors"

Alternatively, dynamically load LoRA weights by adding the lora_weights field:

{
  "prompt": "the scene is a picturesque environment with beautiful flowers and trees. In the center, there is a small cat. The cat is shown with its chin being scratched. It is crouched down peacefully. The cat's eyes are filled with excitement and satisfaction as it uses its small paws to hold onto the food, emitting a content purring sound.",
  "negative_prompt": null,
  "height": 1024,
  "width": 1024,
  "num_inference_steps": 50,
  "guidance_scale": 7.5,
  "eta": 0,
  "lora_weights": "/path/to/lora-weights.safetensors"
}

By specifying the path of LoRA weights at runtime, you can influence model outputs dynamically. Even with identical prompts, the application of different LoRA weights can yield vastly different results. Example output (oil painting vs. pixel):

dynamic loading
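From a client's perspective, per-request LoRA selection is just an extra field in the request body. A hedged sketch using only the standard library; the attach_lora and generate helpers are illustrative, not part of OneDiffusion:

```python
import json
from typing import Optional
from urllib import request

def attach_lora(payload: dict, lora_path: Optional[str]) -> dict:
    """Return a copy of the request body with lora_weights set, so the
    server loads those weights dynamically for this request only."""
    body = dict(payload)
    if lora_path is not None:
        body["lora_weights"] = lora_path
    return body

def generate(payload: dict, lora_path: Optional[str] = None,
             url: str = "http://0.0.0.0:3000/text2img") -> bytes:
    """POST one /text2img request to a running OneDiffusion server."""
    req = request.Request(
        url,
        data=json.dumps(attach_lora(payload, lora_path)).encode("utf-8"),
        headers={"Content-Type": "application/json", "accept": "image/jpeg"},
    )
    with request.urlopen(req) as resp:
        return resp.read()
```

With the same prompt, calling generate with different LoRA paths yields markedly different styles.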

Download a model

If you want to download a diffusion model without starting a server, use the onediffusion download command. For example:

onediffusion download stable-diffusion --model-id "CompVis/stable-diffusion-v1-4"

Create a BentoML Runner

You can create a BentoML Runner with diffusers_simple.stable_diffusion.create_runner(), which automatically downloads the specified model if it does not exist locally.

import bentoml

# Create a Runner for a Stable Diffusion model
runner = bentoml.diffusers_simple.stable_diffusion.create_runner("CompVis/stable-diffusion-v1-4")

# Create a Runner for a Stable Diffusion XL model
runner_xl = bentoml.diffusers_simple.stable_diffusion_xl.create_runner("stabilityai/stable-diffusion-xl-base-1.0")

You can then wrap the Runner into a BentoML Service. See the BentoML documentation for more details.
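The snippet below sketches how a Runner might be wrapped into a Service. Note that the service name, the endpoint name, and the Runner's run() signature and return shape are all assumptions here; check the BentoML documentation for the exact API in your version.

```python
import bentoml
from bentoml.io import JSON, Image

# Create the Runner; the model is downloaded if it is not in the local store.
runner = bentoml.diffusers_simple.stable_diffusion.create_runner(
    "CompVis/stable-diffusion-v1-4"
)

# Wrap the Runner in a Service (the name "sd_service" is illustrative).
svc = bentoml.Service("sd_service", runners=[runner])

@svc.api(input=JSON(), output=Image())
def txt2img(input_data: dict):
    # Forward the request body (prompt, height, width, ...) to the Runner.
    # The exact run() signature and return shape may differ across versions.
    res = runner.run(**input_data)
    return res[0][0]  # assumed: first image of the first output field
```

Serving this file with `bentoml serve service:svc` would expose a /txt2img endpoint similar to the one started by onediffusion start.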

Build a Bento

A Bento in BentoML is a deployable artifact with all the source code, models, data files, and dependency configurations. You can build a Bento for a supported diffusion model directly by running onediffusion build.

# Build a Bento with a Stable Diffusion model 
onediffusion build stable-diffusion

# Build a Bento with a Stable Diffusion XL model 
onediffusion build stable-diffusion-xl

To specify the model to be packaged into the Bento, use --model-id. Otherwise, OneDiffusion packages the default model into the Bento. If the model does not exist locally, OneDiffusion downloads the model automatically. In addition, the pipeline to use can also be specified through --pipeline. By default, OneDiffusion uses the text2image pipeline.

To package LoRA weights into the Bento, use the --lora-dir option to specify the directory where LoRA files are stored. These files can be dynamically loaded to the model when deployed with Docker or BentoCloud to create task-specific images.

onediffusion build stable-diffusion-xl --lora-dir "/path/to/lorafiles/dir/"

If you only have a single LoRA file to use, run the following instead:

onediffusion build stable-diffusion-xl --lora-weights "/path/to/lorafile"

Each Bento has a BENTO_TAG containing both the Bento name and the version. To customize it, specify --name and --version options.

onediffusion build stable-diffusion-xl --name sdxl --version v1

Once your Bento is ready, log in to BentoCloud and run the following command to push the Bento.

bentoml push BENTO_TAG

Alternatively, create a Docker image by containerizing the Bento with the following command. You can retrieve the BENTO_TAG by running bentoml list.

bentoml containerize BENTO_TAG

You can then deploy the image to any Docker-compatible environments.

Roadmap

We are working to improve OneDiffusion in the following ways and invite anyone who is interested in the project to participate 🤝.

  • Support more models, such as ControlNet and DeepFloyd IF
  • Support more pipelines, such as inpainting
  • Add a Python API client to interact with diffusion models
  • Implement advanced optimization like AITemplate
  • Offer a unified fine-tuning training API

Contribution

We welcome contributions of all kinds to the OneDiffusion project! Stay tuned for more announcements about OneDiffusion and BentoML.

onediffusion's People

Contributors

aarnphm, larme, sherlock113, ssheng, xianml


onediffusion's Issues

Support raw prompt embeddings input

Hi.

Instead of passing a string prompt to the generator, I would like to pass in the raw text embeddings.

The HF Diffusers library allows this using the prompt_embeds (and pooled_prompt_embeds for SDXL) parameters on the StableDiffusionPipeline pipeline objects.

Would it be possible to add support for this to OneDiffusion? Thank you.

Using SDXL Refiner

Is there a method for enabling/using the SDXL refiner? Ideally by passing over the latent rather than using img2img.

Server is busy

2023-10-14T19:53:47+0000 [ERROR] [api_server:474] Exception on /text2img [POST] (trace=14018d3cdfaed397ec9c7385ef3aa282,span=fd90260af03a0807,sampled=0,service.name=onediffusion-sdxl-service)
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/bentoml/_internal/server/http_app.py", line 343, in api_func
output = await run_in_threadpool(api.func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/starlette/concurrency.py", line 35, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 2106, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 833, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/bentoml/bento/src/generated_stable_diffusion_xl_service.py", line 74, in text2img
res = model_runner.run(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/bentoml/_internal/runner/runner.py", line 52, in run
return self.runner._runner_handle.run_method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/bentoml/_internal/runner/runner_handle/remote.py", line 355, in run_method
anyio.from_thread.run(
File "/usr/local/lib/python3.11/dist-packages/anyio/from_thread.py", line 45, in run
return async_backend.run_async_from_thread(func, args, token=token)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/anyio/_backends/_asyncio.py", line 2121, in run_async_from_thread
return f.result()
^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/lib/python3.11/dist-packages/bentoml/_internal/runner/runner_handle/remote.py", line 248, in async_run_method
raise ServiceUnavailable(body.decode()) from None
bentoml.exceptions.ServiceUnavailable: Service Busy

Do you know why on certain requests there is this error with SDXL?

Issues running OneDiffusion locally

Apple Macbook Pro M1
Sonoma 14.0
pip: 24.0
python: 3.8.6

When running onediffusion -h after installing, I'm getting the following error:


onediffusion onediffusion -h         
Traceback (most recent call last):
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/utils/lazy.py", line 120, in _get_module
    return importlib.import_module("." + module_name, self.__name__)
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/models/stable_diffusion/configuration_stable_diffusion.py", line 23, in <module>
    class StableDiffusionConfig(onediffusion.SDConfig):
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/_configuration.py", line 592, in __init_subclass__
    _make_init(
TypeError: _make_init() missing 1 required positional argument: 'cls_on_setattr'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/sam/.pyenv/versions/3.8.6/bin/onediffusion", line 5, in <module>
    from onediffusion.cli import cli
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/cli.py", line 745, in <module>
    _cached_http = {key: start_model_command(key, _context_settings=_CONTEXT_SETTINGS) for key in onediffusion.CONFIG_MAPPING}
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/cli.py", line 745, in <dictcomp>
    _cached_http = {key: start_model_command(key, _context_settings=_CONTEXT_SETTINGS) for key in onediffusion.CONFIG_MAPPING}
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/cli.py", line 522, in start_model_command
    sd_config = onediffusion.AutoConfig.for_model(model_name)
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/models/auto/configuration_auto.py", line 113, in for_model
    return CONFIG_MAPPING[model_name].model_construct_env(**attrs)
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/models/auto/configuration_auto.py", line 64, in __getitem__
    if hasattr(self._modules[module_name], value):
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/bentoml/_internal/utils/lazy_loader.py", line 70, in __getattr__
    return getattr(self._module, item)
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/utils/lazy.py", line 110, in __getattr__
    module = self._get_module(self._class_to_module.__getitem__(name))
  File "/Users/sam/.pyenv/versions/3.8.6/lib/python3.8/site-packages/onediffusion/utils/lazy.py", line 122, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import onediffusion.models.stable_diffusion.configuration_stable_diffusion because of the following error (look up to see its traceback):
_make_init() missing 1 required positional argument: 'cls_on_setattr'


Inpainting

Any chance of getting in-painting working here?

Return Multiple Images

First off outstanding repo!
Thanks so much, really takes the headache out of working with Oneflow.

Just wondering, is there a way to return two or more images per call, or to increase the batch size, with this API?
My use case would be returning 2+ images from the same prompt.

All the best!

Change the sampler

Hey! Just wondering if it's possible to change the sampler (Euler, DDIM, etc.).

Loading Local Models

Is it possible to load local models?
When specifying a folder containing the model using the model_id parameter, the client still attempts to pull the model from HF.
