Comments (13)
Here is another approach to trigger onCompletion, with no wrapper:

```ts
import { StreamingTextResponse, LangChainStream, Message } from 'ai'
import { CallbackManager } from 'langchain/callbacks'
import { ChatOpenAI } from 'langchain/chat_models/openai'
import { AIChatMessage, HumanChatMessage } from 'langchain/schema'

export const runtime = 'edge'

export async function POST(req: Request) {
  const { messages } = await req.json()

  const { stream, handlers } = LangChainStream({
    onStart: async () => console.log('start'),
    // onToken: async token => console.log(token),
    onCompletion: async () => console.log('end')
  })

  const llm = new ChatOpenAI({
    streaming: true,
    callbackManager: CallbackManager.fromHandlers(handlers)
  })

  llm
    .call(
      (messages as Message[]).map(m =>
        m.role === 'user'
          ? new HumanChatMessage(m.content)
          : new AIChatMessage(m.content)
      )
    )
    .catch(console.error)
    .finally(() => {
      // Manually signal the end of the stream once the call settles
      handlers.handleChainEnd()
    })

  return new StreamingTextResponse(stream)
}
```
from ai.
@aranlucas The issue is actually that you're passing handlers for chain events to the constructor of a chat model, so they will never be called. You should pass the handlers either to the constructor of the chain you're using or to the `.call()` method.
I can confirm LangChain does call `handleLLMEnd` (see the screenshot attached), so it must be an issue with something in this Vercel library.
from ai.
I've had the same issue, which breaks not only onCompletion but also the loading state in the UI.
Thanks e-roy.

```ts
.finally(() => {
  // Manually signal the end of the stream once the call settles
  handlers.handleChainEnd()
})
```

is a good workaround 🙂
@e-roy thanks for the snippet, also solved my problem 👏
This also causes isLoading from useChat() to never go back to false when using LangChainStream.
However, e-roy's fix seems to work.
I had the same issue, and the workaround I used was to just overwrite the callback after `llm.call`.
@Itsnotaka Could I trouble you to share an example?
I noticed the same issue with the example. One workaround to get it working is to add a `wrappedCall`:

```ts
const { messages } = await req.json()

const { stream, handlers } = LangChainStream({
  onStart: async () => console.log('start'),
  onToken: async token => console.log(token),
  onCompletion: async () => console.log('end')
})

const llm = new ChatOpenAI({
  streaming: true,
  callbackManager: CallbackManager.fromHandlers(handlers)
})

async function wrappedCall(messages: Message[], onCompletion: () => void) {
  try {
    await llm.call(
      messages.map(m =>
        m.role === 'user'
          ? new HumanChatMessage(m.content)
          : new AIChatMessage(m.content)
      )
    )
  } catch (error) {
    console.error(error)
  } finally {
    // Runs whether the call resolved or threw
    onCompletion()
  }
}

wrappedCall(messages as Message[], () => {
  console.log('end')
})

return new StreamingTextResponse(stream)
```

The underlying issue might be that the models in the LangChain library need to expose a completion event or callback.
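The guarantee `wrappedCall` relies on can be shown in a minimal, self-contained sketch (no LangChain; `withCompletion` is an illustrative name, not a library API): a `finally` block runs the completion callback on both the success and the failure path.

```ts
// Illustrative only: finally fires the callback whether the task resolves or rejects.
async function withCompletion(task: () => Promise<void>, onCompletion: () => void) {
  try {
    await task()
  } catch (error) {
    console.error(error)
  } finally {
    onCompletion() // fires on both the success and the failure path
  }
}

let completions = 0

const done = (async () => {
  await withCompletion(async () => {}, () => completions++)
  await withCompletion(async () => { throw new Error('stream failed') }, () => completions++)
  console.log(completions) // 2
})()
```

The same property is what makes the `.catch(...).finally(...)` variant earlier in the thread work: the stream gets closed no matter how the call ends.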
Another solution:

```ts
new ChatOpenAI({
  temperature: 0,
  modelName: 'gpt-3.5-turbo',
  maxTokens: 512, // Choose the max allowed tokens in the completion
  streaming: true,
  callbackManager: CallbackManager.fromHandlers({
    async handleLLMNewToken(token: string) {
      // console.log({ token })
      handlers.handleLLMNewToken(token)
    },
    async handleLLMEnd(output) {
      console.log('End of stream...', output)
      handlers.handleChainEnd(output)
    },
    async handleLLMError(e) {
      console.log('Error in stream...', e)
      handlers.handleLLMError(e)
    }
  })
})
```
I think the issue is caused by be90740.

When you do what @nfcampos suggested, overwriting the returned handlers:

```ts
const { stream, handlers } = LangChainStream();

const model = new ChatOpenAI({
  temperature: 0,
  streaming: true,
  callbacks: CallbackManager.fromHandlers({
    handleLLMNewToken: handlers.handleLLMNewToken,
    handleChainEnd: async () => {
      console.log("handleChainEnd");
      await handlers.handleChainEnd();
    },
    handleLLMEnd: async () => {
      console.log("handleLLMEnd");
      await handlers.handleChainEnd();
    },
    handleLLMError: handlers.handleLLMError,
  }),
});
```

`handleChainEnd` never ends up showing in the logs (for my use case, using `ConversationalRetrievalQAChain`).

It seems that, depending on which chain/model you use, `handleChainEnd` is not guaranteed to be called.

It looks like adding `handleLLMEnd` (while keeping `handleChainEnd`) should fix this issue, but I'm not sure what issue @jaredpalmer might have been fixing when switching `handleLLMEnd` -> `handleChainEnd`.
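The defensive pattern described here can be sketched without LangChain: route both end events to one idempotent close function, so the stream terminates whichever event the chain or model actually emits. `makeCloser` is an illustrative helper, not part of the `ai` or `langchain` packages.

```ts
// Illustrative only: both end handlers funnel into a close() that runs once.
function makeCloser(close: () => void) {
  let closed = false
  const closeOnce = () => {
    if (!closed) {
      closed = true
      close()
    }
  }
  return {
    handleLLMEnd: closeOnce,   // covers model-only flows
    handleChainEnd: closeOnce  // covers chain flows
  }
}

let closes = 0
const h = makeCloser(() => { closes++ })

h.handleLLMEnd()   // first event ends the stream
h.handleChainEnd() // second event is a no-op

console.log(closes) // 1
```

Making the close idempotent matters because some chains emit both events, and closing an already-closed stream can throw.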
Yeah, it took me a while to get to that conclusion in #205 (comment).