hazyresearch / manifest
Prompt programming with FMs.
License: Apache License 2.0
Currently, the HuggingFaceClient makes an HTTP request in get_model_params for each generation. This seriously slows down execution when the result is already cached (and the underlying HTTP connection is slow), especially when one has to make a large number of calls.
If I understand correctly, the result of get_model_params is static, e.g.:
{'model_name': 'togethercomputer/RedPajama-INCITE-Instruct-3B-v1',
'model_path': 'togethercomputer/RedPajama-INCITE-Instruct-3B-v1',
'client_name': 'huggingface'}
Therefore, my proposal would be to simply cache these params after one initial request and thereby get rid of the overhead for later requests. I hacked this into my code and it seems to work well:
from manifest import Manifest
# Serving https://huggingface.co/togethercomputer/RedPajama-INCITE-Instruct-3B-v1 via SSH portforwarding
redpajama_client = Manifest(client_name="huggingface", client_connection="http://127.0.0.1:5550",
                            cache_name="sqlite", cache_connection="demo_data/rp3b-cache.sqlite")
# Hack to cache model params
from manifest.clients.huggingface import HuggingFaceClient
import types
client = redpajama_client.client_pool.get_current_client()
redpajama_model_params = client.get_model_params()
def cached_params(self):
return redpajama_model_params
client.get_model_params = types.MethodType(cached_params, client)
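A cleaner variant of the same idea would be to memoize inside the client itself. Here is a minimal sketch (a hypothetical subclass, not manifest's actual code), assuming get_model_params keeps returning a static dict:

class CachedParamsHuggingFaceClient(HuggingFaceClient):
    """Hypothetical subclass that fetches model params over HTTP only once."""

    _cached_model_params = None

    def get_model_params(self):
        if self._cached_model_params is None:
            # First call makes the HTTP request; later calls reuse the result.
            self._cached_model_params = super().get_model_params()
        return self._cached_model_params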
Whenever I run the app and navigate to the root URL, line 262 in manifest/manifest/api/app.py calls a metaseq resource. metaseq isn't installed by setup.py, which forces you to install metaseq from PyPI, and the PyPI metaseq is a Python 2 library for genetic modeling.
Steps to reproduce the behavior:
Build manifest, run the module, then go to the website at /.
Expected behavior: the app actually loads.
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/pkg_resources/init.py", line 349, in get_provider
module = sys.modules[moduleOrReq]
KeyError: 'metaseq'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 2525, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1822, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1820, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.9/dist-packages/flask/app.py", line 1796, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "//manifest/manifest/api/app.py", line 262, in index
fn = pkg_resources.resource_filename("metaseq", "service/index.html")
File "/usr/local/lib/python3.9/dist-packages/pkg_resources/init.py", line 1135, in resource_filename
return get_provider(package_or_requirement).get_resource_filename(
File "/usr/local/lib/python3.9/dist-packages/pkg_resources/init.py", line 351, in get_provider
import(moduleOrReq)
ModuleNotFoundError: No module named 'metaseq'
Instead of run_chat, use the client's IS_CHAT flag. Then, if the prompt is a list of dicts, switch to the chat path.
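As a rough sketch of that routing (the helper name and the flag handling are assumptions, not manifest's actual internals):

def choose_path(client, prompt):
    # Hypothetical helper: prefer the client's IS_CHAT flag, and fall back to the
    # prompt shape (a list of message dicts means the chat path).
    if getattr(client, "IS_CHAT", False):
        return "chat"
    if isinstance(prompt, list) and all(isinstance(m, dict) for m in prompt):
        return "chat"
    return "completion"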
When loading a local model using this command:
python3 -m manifest.api.app \
--model_type huggingface \
--model_name_or_path /workspace/models/minotaur-15b \
--device 0 \
--model_generation_type text-generation
I get the following error message:
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/fsx/bigcode/experiments/pretraining/conversions/starcoderplus/large-model'. Use repo_type argument if needed.
The path is indeed a valid one, pulled from https://huggingface.co/openaccess-ai-collective/minotaur-15b.
I tried quoting "/workspace/models/minotaur-15b" and using a local path "./models/minotaur-15b".
Clone the above repository and execute the specified command.
It should load the model.
Currently we match AI21's format (e.g., maxTokens versus max_tokens). We should standardize input params across models to make scripting easier.
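For illustration, a per-client rename table could translate a standard parameter set into each provider's naming (the mapping below is a sketch, not manifest's actual code):

AI21_PARAM_MAP = {"max_tokens": "maxTokens", "top_p": "topP", "stop": "stopSequences"}

def to_ai21_params(params: dict) -> dict:
    """Rename standardized snake_case keys to AI21's camelCase keys."""
    return {AI21_PARAM_MAP.get(key, key): value for key, value in params.items()}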
Hi, I'm trying to use a HuggingFace model (NumbersStation/nsql-llama-2-7B) on my local machine.
I ran the model using the following command:
python3 -m manifest.api.app \
--model_type huggingface \
--model_generation_type text-generation \
--model_name_or_path nsql-llama-2-7B \
--device 0
and executed a simple Postman call (I previously tried LangChain to interact with the model but got the same error):
curl --location 'http://127.0.0.1:5002/completions' \
--header 'Content-Type: application/json' \
--data '{
"prompt": "Hello World",
"max_tokens": 1024,
"temperature": 0.0,
"repetition_penalty": 1,
"top_k": 50,
"top_p": 10,
"do_sample": "True",
"n": 1,
"max_new_tokens": 1024
}'
But I'm getting the following error each time:
The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
127.0.0.1 - - [03/Aug/2023 10:49:30] "POST /completions HTTP/1.1" 400 -
any ideas?
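For reference, this error usually means the tokenizer emits token_type_ids that the model's generate() does not accept. Outside manifest, a common workaround (just a sketch, assuming tokenizer and model are already loaded) is to drop that key before generating:

inputs = tokenizer("Hello World", return_tensors="pt").to(model.device)
# LLaMA-style models reject token_type_ids as a generate() kwarg.
inputs.pop("token_type_ids", None)
output_ids = model.generate(**inputs, max_new_tokens=1024, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))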
Related to #29, it would be very nice if this were a lightweight package available to pip install.
Proposed solution:
The core library could be shipped with just requests and tqdm as dependencies. This would mean removing dependence on the cohere and openai packages, and replacing their clients' get_request methods with requests HTTP calls, similar to ai21_client.
The caches could either be included, or shipped separately (as manifest-caches or something) to avoid dependence on sqlitedict and redis, as well as to allow for additional backends.
The API package could be released separately for those who want to use it (as I feel the most common production use case by far will be using HTTP); this removes dependence on flask, torch, transformers, etc.
I am happy to work on this if you think this is a good/useful architecture. Perhaps it's worth a fork of the library at that point since I'm sure a good amount of research code now depends on this package as-is.
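One way to express this split in packaging terms (a sketch only; the extra names are hypothetical, not the package's current layout):

from setuptools import setup, find_packages

setup(
    name="manifest-ml",
    packages=find_packages(),
    install_requires=["requests", "tqdm"],          # lightweight core
    extras_require={
        "cache": ["sqlitedict", "redis"],           # optional cache backends
        "api": ["flask", "torch", "transformers"],  # optional local-model server
    },
)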
Add ability to get logprobs from clients.
The deepspeed package has a dependency on py-cpuinfo.
Installing py-cpuinfo directly first allows manifest to install successfully.
To reproduce:
pip install manifest-ml[api]
I expected pip to complete the installation.
Please support gpt-3.5-turbo-16k
Normalize the scores from HF models in score function.
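As one concrete interpretation (an assumption about what normalization should mean, not current behavior), a length-normalized score would average the token log-probabilities instead of summing them:

from typing import List

def normalized_score(token_logprobs: List[float]) -> float:
    """Average log-probability per token, so long and short outputs are comparable."""
    return sum(token_logprobs) / max(len(token_logprobs), 1)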
manifest/manifest/api/models/huggingface.py, line 591 in c8bf001:
def get_generation_url(self) -> str:
    """Get generation URL."""
    engine = getattr(self, "engine")
    deployment_name = AZURE_DEPLOYMENT_NAME_MAPPING.get(engine, engine)
    return (
        self.host
        + "/openai/deployments/"
        + deployment_name
        + "/chat/completions?api-version=2023-05-15"
    )
The api-version is hard-coded as 2023-05-15. Please make it configurable as a parameter or an OS environment variable.
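A minimal sketch of the requested change, reading the api-version from an environment variable with the current value as the fallback (the variable name AZURE_OPENAI_API_VERSION is an assumption):

import os

def get_generation_url(self) -> str:
    """Get generation URL with a configurable Azure api-version."""
    engine = getattr(self, "engine")
    deployment_name = AZURE_DEPLOYMENT_NAME_MAPPING.get(engine, engine)
    api_version = os.environ.get("AZURE_OPENAI_API_VERSION", "2023-05-15")
    return (
        f"{self.host}/openai/deployments/{deployment_name}"
        f"/chat/completions?api-version={api_version}"
    )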
(While I'm adding requests:) The ChatGPT API call requires different options for Chat vs. Completions. I was going to add this to my own lib, but am going to just move to manifest.
I was wondering if there was a good way to specify which GPUs deepspeed, accelerate, etc. should utilize. Right now I've been doing something like:
CUDA_VISIBLE_DEVICES=3,4,5 python -m manifest.api.app --model_type huggingface --model_name_or_path EleutherAI/gpt-j-6B --device 0 --use_accelerate_multigpu
Can the devices be specified as an argument?
Hi, could we have the ChatGPT API in manifest? There have been a lot of projects calling it on GitHub these days, and I think many fans (of course including me) would want to use it inside manifest, with its unified interface, to run more interesting demos and research experiments.
I can help with it if you would like.
Hello, thank you for creating this library. As far as I can understand, due to caching it is not possible to run the same prompt/params multiple times and get a non-cached result.
For running prompts with parallel processing, or just generating results with the same input multiple times, it would be nice to provide a flag that tells the client to re-run a prompt and cache the result under a new key.
Perhaps a re-run flag that appends a unique string or param to the cache key, like a uuid or something? I'm happy to add this and PR it.
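A rough sketch of that flag (the run_id field and the request-dict shape are hypothetical, not manifest's actual cache internals):

import uuid

def make_cache_key(request: dict, rerun: bool = False) -> dict:
    """Optionally salt the cache key so identical prompts produce fresh generations."""
    key = dict(request)
    if rerun:
        # A unique salt guarantees a cache miss while still caching the new result.
        key["run_id"] = uuid.uuid4().hex
    return key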
I'm working on a use case where I want to assign scores to different parts of the LLM output. The PR adding the token_logprobs (#59) goes a long way, but I think I need the corresponding tokens as well if I'm going to get the scores for a specific substring of the output.
Probably just an additional field in the output containing the tokenised output? Not the token ids, the word pieces.
I could have the right tokenizer loaded locally to work out the token/logprob alignment, but that slightly defeats the promise of the Manifest approach.
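To make the use case concrete, with both tokens and token_logprobs in the response one could score a substring roughly like this (a sketch; it assumes the word pieces concatenate back to the output text):

def substring_logprob(text, tokens, token_logprobs, sub):
    """Sum the log-probs of the tokens that overlap a given substring of the output."""
    start = text.index(sub)
    end = start + len(sub)
    total, pos = 0.0, 0
    for token, logprob in zip(tokens, token_logprobs):
        token_start, token_end = pos, pos + len(token)
        if token_start < end and token_end > start:  # token overlaps the substring
            total += logprob
        pos = token_end
    return total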
Is there a way to embed text with OpenAI for Manifest? I couldn't find docs.
Cannot load the model with half precision, and I haven't figured out how to move the model to CPU or GPU.
Run the model gpt-j-6B as in the demo.
Use the local huggingface method.
It should return a response.
requests.exceptions.HTTPError: {'message': '"LayerNormKernelImpl" not implemented for 'Half''}
Thanks in advance.
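For context, this particular error usually appears when a float16 model is run on the CPU, which has no half-precision LayerNorm kernel. Outside manifest, the two usual loading paths look like this (a sketch using transformers directly):

import torch
from transformers import AutoModelForCausalLM

# On a GPU, half precision works fine.
model_gpu = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float16
).to("cuda")

# On the CPU, keep float32: "LayerNormKernelImpl" has no Half implementation there.
model_cpu = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", torch_dtype=torch.float32
)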
Add an "if log probs is not None" condition.
Why not just allow manifest.run to accept a callable directly?
You are leaking your API key here: ...
Found via: https://github.com/screedcode/openai-key-checker
Let people have caching turned off by default.
We will need to add a NoOp cache that does no caching, and set that as the default.
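A minimal sketch of such a cache (the get/set method names are assumptions about the cache interface):

class NoopCache:
    """Cache backend that never stores anything, so every request is recomputed."""

    def get(self, key):
        # Always report a cache miss.
        return None

    def set(self, key, value):
        # Drop the value; nothing is ever cached.
        pass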
Use the accelerate package (dispatch_model) to run inference for HuggingFace models.
Locally hosted Hugging Face models return tokens as a list of ints rather than a list of strings. Pull request #85 changed the LMModelChoice class to only accept tokens as a list of strings so that the OpenAI API would work properly.
Suggested fix: change

class LMModelChoice(BaseModel):
    """Model single completion."""
    text: str
    token_logprobs: Optional[List[float]] = None
    tokens: Optional[List[str]] = None

to

class LMModelChoice(BaseModel):
    """Model single completion."""
    text: str
    token_logprobs: Optional[List[float]] = None
    tokens: Optional[List[Union[str, int]]] = None
Super low-priority, but the OpenAI streaming API is really cool. Would be fun to add that somehow. (I'm moving minichain to just use Manifest for everything.)
I have a working example of T0pp running on my hardware, directly using transformers' AutoModelForSeq2SeqLM.from_pretrained(), and I'm trying to load the model with manifest.
It seems impossible to pass parameters to the from_pretrained method (in /api/models/huggingface.py). I need to pass load_in_8bit=True here, and I believe this would fully solve my issue (as this is what I'm doing in my script).
Steps to reproduce the behavior:
Run python3 -m manifest.api.app --model_type huggingface --model_name_or_path bigscience/T0pp --device 0; there is no way to pass load_in_8bit=True into from_pretrained() from that command.
Add the possibility to pass arguments to HuggingFace's from_pretrained().
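A sketch of what the pass-through could look like (the --model_loading_kwargs flag and its JSON handling are hypothetical; the kwargs shown are examples):

import json
from transformers import AutoModelForSeq2SeqLM

# e.g. --model_loading_kwargs '{"load_in_8bit": true, "device_map": "auto"}'
loading_kwargs = json.loads('{"load_in_8bit": true, "device_map": "auto"}')
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", **loading_kwargs)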
It seems like HuggingFace is only supported for local models. It would be nice to support calling models on the Hub with the Inference API. I think this is relatively straightforward to do: you just give the model name and the type of inference (text-generation or feature-extraction).
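For reference, a minimal text-generation call against the hosted Inference API only needs requests (the model name and token below are placeholders):

import requests

API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-xl"
headers = {"Authorization": "Bearer <HF_API_TOKEN>"}

response = requests.post(API_URL, headers=headers, json={"inputs": "Hello world"})
print(response.json())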
The AI21 client is not in the OpenAI request dict format. Need to standardize.
I was curious if you were planning on adding asynchronous calls? I've been using async_openai for https://github.com/srush/minichain. It would be easier to switch to manifest for the backend; however, the async support is kind of nice.
Hook into accelerate to pass the max_memory keyword arg with around 85% of the current max memory. They have a helper function you can call to get the memory of each available GPU.
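Roughly like this (a sketch assuming accelerate's get_max_memory helper, which reports per-device capacity):

from accelerate.utils import get_max_memory

# Scale each device's reported capacity down to ~85% to leave headroom.
max_memory = {device: int(capacity * 0.85) for device, capacity in get_max_memory().items()}
# max_memory can then be passed to accelerate's device-map / dispatch utilities.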
I'm aware of --model_generation_type, but currently it only allows specifying "causal" model types. There are plenty of HF models not defined in MODEL_REGISTRY which cannot be initialized without adding more generation types, or without reworking the mechanism to be more dynamic. For example, manifest is not aware of the BLOOMZ model, which uses BloomForCausalLM.
Requested: the ability to define (at least) all models importable from transformers in huggingface.py.
I'd like to be able to pass the logprobs parameter to the OpenAI request endpoint. See API reference: https://platform.openai.com/docs/api-reference/completions/create.
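For illustration, with the legacy Completions endpoint the parameter and the returned per-token fields look like this (a sketch using the pre-1.0 openai package interface):

import openai

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Hello world",
    max_tokens=16,
    logprobs=5,  # ask for per-token log-probabilities
)
choice = response["choices"][0]
print(choice["logprobs"]["tokens"], choice["logprobs"]["token_logprobs"])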