
manifest's Issues

Performance issue in caching for remote huggingface models

Description of the bug

Currently, the HuggingFaceClient makes an HTTP request in get_model_params for each generation. This seriously slows down execution when the result is already cached (and the underlying HTTP connection is slow), especially in cases where one has to make a large number of calls.

Expected behavior

If I understand correctly, the result of get_model_params is static, e.g.:

{'model_name': 'togethercomputer/RedPajama-INCITE-Instruct-3B-v1',
 'model_path': 'togethercomputer/RedPajama-INCITE-Instruct-3B-v1',
 'client_name': 'huggingface'}

Therefore, my proposal would be to simply cache these params after one initial request and thereby get rid of the overhead for later requests. I hacked this into my code and it seems to work well:

from manifest import Manifest 
# Serving https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1 via SSH portforwarding 
redpajama_client = Manifest(client_name = "huggingface", client_connection = "http://127.0.0.1:5550",
                            cache_name='sqlite', cache_connection="demo_data/rp3b-cache.sqlite")

# Hack to cache model params
from manifest.clients.huggingface import HuggingFaceClient
import types

client = redpajama_client.client_pool.get_current_client()
redpajama_model_params = client.get_model_params()

def cached_params(self):
    return redpajama_model_params

client.get_model_params = types.MethodType(cached_params, client)

import errors?

Description of the bug

Whenever I run the app and navigate to the root URL, line 262 in manifest/manifest/api/app.py calls a metaseq resource. metaseq isn't installed by setup.py, which forces you to install metaseq from PyPI; that package, however, is a Python 2 library for genetic modeling, not the one this code expects.

To Reproduce

Steps to reproduce the behavior:
Build manifest and run a model, then go to the website at /.

Expected behavior

App actually loads?

Error Logs/Screenshots

Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/pkg_resources/init.py", line 349, in get_provider
module = sys.modules[moduleOrReq]
KeyError: 'metaseq'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "//manifest/manifest/api/app.py", line 262, in index
fn = pkg_resources.resource_filename("metaseq", "service/index.html")
File "/usr/local/lib/python3.9/dist-packages/pkg_resources/init.py", line 1135, in resource_filename
return get_provider(package_or_requirement).get_resource_filename(
File "/usr/local/lib/python3.9/dist-packages/pkg_resources/init.py", line 351, in get_provider
import(moduleOrReq)
ModuleNotFoundError: No module named 'metaseq'

Environment (please complete the following information)

  • WSL2, using the nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04 Docker image


Unable to load local model : id must be in the form 'repo_name' or 'namespace/repo_name'

Description of the bug

When loading a local model using this command :

python3 -m manifest.api.app \
    --model_type huggingface \
    --model_name_or_path /workspace/models/minotaur-15b \
    --device 0 \
    --model_generation_type text-generation

I get the following error message :
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/fsx/bigcode/experiments/pretraining/conversions/starcoderplus/large-model'. Use repo_type argument if needed.

The path is indeed a valid one, containing the model pulled from https://huggingface.co/openaccess-ai-collective/minotaur-15b.
I also tried quoting "/workspace/models/minotaur-15b" and using a relative path "./models/minotaur-15b".

To Reproduce

Clone the above repository and execute the specified command.

Expected behavior

It should load the model.

Environment (please complete the following information)

  • Runpod machine, 4090 GPU

The following `model_kwargs` are not used by the model: ['token_type_ids']

Description of the bug

Hi, I'm trying to use a Hugging Face model (NumbersStation/nsql-llama-2-7B) on my local machine.
I ran the model using the following command:

python3 -m manifest.api.app \
    --model_type huggingface \
    --model_generation_type text-generation \
    --model_name_or_path nsql-llama-2-7B \
    --device 0

and executed a simple Postman call (I previously tried LangChain to interact with the model but got the same error):

curl --location 'http://127.0.0.1:5002/completions' \
--header 'Content-Type: application/json' \
--data '{
    "prompt": "Hello World",
    "max_tokens": 1024,
    "temperature": 0.0,
    "repetition_penalty": 1,
    "top_k": 50,
    "top_p": 10,
    "do_sample": "True",
    "n": 1,
    "max_new_tokens": 1024
}'

But I'm getting the following error each time:

The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
127.0.0.1 - - [03/Aug/2023 10:49:30] "POST /completions HTTP/1.1" 400 -

any ideas?

Packaging for PyPi

Description of the feature request

Related to #29, it would be very nice if this were a lightweight package that was available to pip install.

Description of the solution you'd like

Proposed solution:

  • The core library could be shipped with just requests and tqdm as dependencies. This would mean removing the dependence on the cohere and openai packages and replacing their clients' get_request methods with requests HTTP calls, similar to ai21_client.

  • The caches could either be included or shipped separately (as manifest-caches or something) to avoid depending on sqlitedict and redis, as well as allowing for additional backends.

  • The API package could be released separately for those who want to use it (as I feel the most common production use case by far will be using HTTP), which removes the dependence on flask, torch, transformers, etc.


I am happy to work on this if you think this is a good/useful architecture. Perhaps it's worth a fork of the library at that point since I'm sure a good amount of research code now depends on this package as-is.

Add logprobs to request.

Add ability to get logprobs from clients.

  • Add to HF client
  • Pass in appropriate kwargs for run/batch_run

pip install manifest-ml[api] fails?

Description of the bug

The deepspeed package has a dependency on py-cpuinfo.

Installing py-cpuinfo directly first allows manifest to install successfully.

To Reproduce

pip install manifest-ml[api]

Expected behavior

I expected pip to complete the installation successfully.


Environment (please complete the following information)

  • OS: OSX 14.2, Apple M1 Pro
  • Python 3.11.5

Please do not hard code the api-version in the AzureChatClient

    def get_generation_url(self) -> str:
        """Get generation URL."""
        engine = getattr(self, "engine")
        deployment_name = AZURE_DEPLOYMENT_NAME_MAPPING.get(engine, engine)
        return (
            self.host
            + "/openai/deployments/"
            + deployment_name
            + "/chat/completions?api-version=2023-05-15"
        )

The api-version is hard-coded as 2023-05-15. Please make it configurable as a parameter or an OS environment variable.

ChatGPT support

(while I'm adding requests)

The ChatGPT API requires different options (Chat vs. Completions). I was going to add this to my own lib, but am going to just move to manifest.

Specify multiple gpus for deepspeed, accelerate, etc.

I was wondering if there was a good way to specify which GPUs deepspeed, accelerate, etc. should utilize. Right now I've been doing something like:

CUDA_VISIBLE_DEVICES=3,4,5 python -m manifest.api.app --model_type huggingface --model_name_or_path EleutherAI/gpt-j-6B --device 0 --use_accelerate_multigpu

Can the devices be specified as an argument?

Adding ChatGPT API

Hi, could we have the ChatGPT API in manifest? Since there have been a lot of wrappers for it on GitHub these days, I think many fans (of course including me) would like to use it inside manifest, for a unified interface to run more interesting demos and research experiments.
I can help with it if you would like. 😉

Allow for re-running prompts without using run_batch

Description of the feature request

Hello, thank you for creating this library. As far as I can tell, due to caching it is not possible to run the same prompt/params multiple times and get a non-cached result.

For the purpose of running prompts using parallel processing, or just generating results with the same input multiple times, it would be nice to be able to provide a flag which tells the client to re-run a prompt, and cache the result with a new key.

Perhaps have a re-run flag, which appends a unique string or param to the cache key, like a uuid or something? I'm happy to add this and PR it.

Output tokens in addition to token_logprobs

Description of the feature request

I'm working on a use case where I want to assign scores to different parts of the LLM output. The PR adding the token_logprobs (#59) goes a long way, but I think I need the corresponding tokens as well if I'm going to get the scores for a specific substring of the output.

Description of the solution you'd like

Probably just an additional field in the output containing the tokenised output? Not the token ids, the word pieces.

Description of the alternatives you've considered

I could load the right tokenizer locally to work out the token/logprob alignment. That slightly defeats the promise of the Manifest approach, though.


How to load a model with half precision, such as float16, since I only have limited GPU memory

Description of the bug

I cannot load the model with half precision, and I haven't figured out how to move the model between CPU and GPU.

To Reproduce

Run the gpt-j-6B model as in the demo.
Use the local huggingface method.

Expected behavior

Return a response.

Error Logs/Screenshots

requests.exceptions.HTTPError: {'message': '"LayerNormKernelImpl" not implemented for 'Half''}

Environment (please complete the following information)

  • OS: [e.g. Ubuntu 20.04]

Thanks in advance.

pull request #85 broke locally hosted Hugging Face models

Locally hosted Hugging Face models return tokens as a list of ints rather than a list of strings. Pull request #85 changed the LMModelChoice class to only accept tokens as a list of strings so that the OpenAI API would work properly.

Suggested fix: Change

class LMModelChoice(BaseModel):
    """Model single completion."""

    text: str
    token_logprobs: Optional[List[float]] = None
    tokens: Optional[List[str]] = None

to

class LMModelChoice(BaseModel):
    """Model single completion."""

    text: str
    token_logprobs: Optional[List[float]] = None
    tokens: Optional[List[Union[str, int]]] = None

Streaming API

Super lo-pri but the OpenAI streaming API is really cool. Would be fun to add that somehow. (I'm moving minichain to just use Manifest for everything.)

Unable to load HF T0pp in 8bit

Description of the bug

I have a working example of T0pp running on my hardware, directly using transformers' AutoModelForSeq2SeqLM.from_pretrained(), and I'm trying to load the same model with manifest.

It seems impossible to pass parameters to the from_pretrained method (in /api/models/huggingface.py). I need to pass load_in_8bit=True there, and I believe this would fully solve my issue (as this is what I'm doing in my script).

To Reproduce

Steps to reproduce the behavior:

  1. There's no way to pass load_in_8bit=True into from_pretrained() when running python3 -m manifest.api.app --model_type huggingface --model_name_or_path bigscience/T0pp --device 0.

Expected behavior

Add the possibility to pass arguments through to Hugging Face's from_pretrained().

HuggingFace Hub

It seems like Hugging Face is only supported for local models. It would be nice to support calling models on the Hub with the Inference API. I think this is relatively straightforward to do: you just give the model name and the type of inference (text-generation or feature-extraction).

Accelerate memory too restrictive

Hook into accelerate to pass the max_mem keyword arg with around 85% of the current max memory. They have a helper function you can call to get the memory of each available GPU.

Unable to initialize non-standard models from commandline

Description of the bug

I'm aware of --model_generation_type, but currently it only allows specifying "causal" model types. There are plenty of HF models not defined in MODEL_REGISTRY which cannot be initialized without adding more generation types, or without reworking the mechanism to be more dynamic. For example, Manifest is not aware of the BLOOMZ model, which uses BloomForCausalLM.

Expected behavior

Ability to define (at least) all models imported from transformers in huggingface.py.
