Comments (8)
Try setting key_filter = False when you run query.
from paper-qa.
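For concreteness, the suggestion above would look something like the sketch below. Whether your paper-qa version's Docs.query accepts a key_filter keyword is an assumption here; check your installed version's signature.

```python
def query_without_key_filter(docs, question):
    """Sketch of the suggestion above: skip paper-qa's LLM-based
    key-filtering pass by passing key_filter=False to Docs.query.

    `docs` is assumed to be a paperqa.Docs instance; the key_filter
    parameter name comes from the comment above and is not verified
    against any particular paper-qa release.
    """
    return docs.query(question, key_filter=False)
```

Disabling the key filter removes one extra round of LLM calls, which also matters later in this thread when slow models start timing out.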
Thanks: I tried it but got the same error. Is this (specifying a HuggingFaceHub()
model) expected to work, or am I trying something that's not officially supported? I'm not yet familiar enough with langchain to know whether what I'm trying to do makes sense!
The "error" you're showing is just a warning. Are you sure there isn't another exception?
Ah, good point: I may have jumped the gun! I had assumed something had gone fatally wrong because the process hangs after emitting the warning. I've started it again and left it running; I'll see whether it has completed, errored out, or is still hanging in the morning...
OK, here's the final exception (actually two exceptions?):
NotImplementedError: Async generation not implemented for this LLM.
ValueError: Error raised by inference API: Model mosaicml/mpt-7b-chat time out
/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py:110: RuntimeWarning: coroutine 'AsyncRunManager.on_text' was never awaited
run_manager.on_text(_text, end="\n", verbose=self.verbose)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
Traceback (most recent call last):
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/paperqa/qaprompts.py", line 91, in agenerate
return await super().agenerate(input_list, run_manager=run_manager)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 90, in agenerate
return await self.llm.agenerate_prompt(
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 136, in agenerate_prompt
return await self.agenerate(prompt_strings, stop=stop, callbacks=callbacks)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 266, in agenerate
raise e
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 258, in agenerate
await self._agenerate(
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 395, in _agenerate
await self._acall(prompt, stop=stop, run_manager=run_manager)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 363, in _acall
raise NotImplementedError("Async generation not implemented for this LLM.")
NotImplementedError: Async generation not implemented for this LLM.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/mike/src/chatbot/./chatbot", line 3187, in <module>
answer = bot.query(args.query)
File "/Users/mike/src/chatbot/./chatbot", line 1388, in query
response = index.query(input_text, k = k, max_sources = max_sources)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/paperqa/docs.py", line 378, in query
return loop.run_until_complete(
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
return future.result()
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/paperqa/docs.py", line 412, in aquery
answer = await self.aget_evidence(
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/paperqa/docs.py", line 311, in aget_evidence
results = await asyncio.gather(*[process(doc) for doc in docs])
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/paperqa/docs.py", line 298, in process
context=await summary_chain.arun(
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 262, in arun
return (await self.acall(kwargs, callbacks=callbacks))[self.output_keys[0]]
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 180, in acall
raise e
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 174, in acall
await self._acall(inputs, run_manager=run_manager)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 195, in _acall
response = await self.agenerate([inputs], run_manager=run_manager)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/paperqa/qaprompts.py", line 93, in agenerate
return self.generate(input_list, run_manager=run_manager)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/chains/llm.py", line 79, in generate
return self.llm.generate_prompt(
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 127, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 199, in generate
raise e
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 193, in generate
self._generate(missing_prompts, stop=stop, run_manager=run_manager)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/base.py", line 377, in _generate
self._call(prompt, stop=stop, run_manager=run_manager)
File "/Users/mike/.pyenv/versions/3.10.7/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/langchain/llms/huggingface_hub.py", line 111, in _call
raise ValueError(f"Error raised by inference API: {response['error']}")
ValueError: Error raised by inference API: Model mosaicml/mpt-7b-chat time out
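Reading the two tracebacks together: paper-qa first tries the async path, the HuggingFaceHub wrapper raises NotImplementedError because it has no _acall, and paper-qa's fallback to the synchronous path then hits the inference-API timeout. A common workaround for the first problem (a sketch I'm adding, not something from this thread) is to shim the missing async method by running the blocking call in a thread:

```python
import asyncio


class SyncOnlyLLM:
    """Stand-in for an LLM wrapper that only implements a blocking call
    (hypothetical class for illustration; a real HuggingFaceHub wrapper
    would make an HTTP request here)."""

    def _call(self, prompt: str) -> str:
        return f"echo: {prompt}"


class AsyncShim(SyncOnlyLLM):
    """Add an async entry point by pushing the blocking _call onto a
    worker thread, so async callers don't raise NotImplementedError."""

    async def _acall(self, prompt: str) -> str:
        return await asyncio.to_thread(self._call, prompt)


result = asyncio.run(AsyncShim()._acall("hi"))
print(result)  # echo: hi
```

Note this only removes the NotImplementedError; the second, fatal error (the model timing out) is a separate problem on the inference-API side.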
Looks like the model timed out @maspotts - the open source models are a lot slower and paper-qa requires a lot of calls. I would recommend Claude or HuggingFace with a GPU instance if you're looking to replace the OpenAI models. Llama can work too, but it's extremely slow.
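Swapping in one of the suggested backends would look roughly like this. This is a sketch only: the ChatAnthropic import path, the "claude-2" model name, and Docs accepting an llm argument are all assumptions about the langchain and paper-qa versions in use here, not verified.

```python
def make_docs_with_anthropic():
    """Hypothetical sketch: build a paper-qa Docs backed by Claude,
    per the recommendation above. Requires langchain, paperqa, and an
    Anthropic API key in the environment; none of the names below are
    verified against a specific release.
    """
    from langchain.chat_models import ChatAnthropic  # import path is an assumption
    from paperqa import Docs

    return Docs(llm=ChatAnthropic(model="claude-2"))
```

A hosted model with a GPU behind it avoids the per-call timeouts seen above, since paper-qa issues many LLM calls per query.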
Thanks: (off-topic, but) do I need to upgrade to a paid HuggingFace tier to get sufficient performance?
Not sure; I have a paid HuggingFace plan and I don't know what's available on the free tier.