spellcraftai / nextjs-openai
Hooks and components for working with OpenAI streams.
Home Page: https://nextjs-openai.vercel.app
License: MIT License
Currently, useTextBuffer makes the API call on page load, and I can't seem to prevent that behaviour with an option. I would like to make the API call only on a button click, once the user has entered a prompt. Is something like an enabled: false option possible? The idea comes from react-query's useQuery options.
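Until something like that exists in the library, one workaround sketch (not a library API — streamOnClick below is a hypothetical helper doing roughly what useTextBuffer does internally): trigger the fetch yourself from the button's onClick and feed chunks into state, so nothing fires on page load.

```typescript
// Hypothetical stand-in for useTextBuffer's fetch. Call this from the
// button's onClick handler so no request is made on page load.
async function streamOnClick(
  url: string,
  prompt: string,
  onChunk: (text: string) => void
): Promise<void> {
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.body) throw new Error("No response body");
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    // Append each decoded chunk as it arrives, e.g. via a state setter.
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```

In a component you would wire onChunk to something like setBuffer((b) => b + text); once the library grows an enabled option this helper becomes unnecessary.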
I am using authentication similar to the following file:
https://github.com/nextauthjs/next-auth-example/blob/main/pages/api/examples/protected.ts
The issue I am having is that this uses NextApiRequest instead of NextRequest.
When I send the value through res instead of using return, I cannot get it to be a live stream. Any ideas how to return the stream properly in my code?
Thank you!
import { NextApiHandler, NextApiRequest, NextApiResponse } from "next";
import { getServerSession } from "next-auth/next";
import { authOptions } from "./auth/[...nextauth]";
import { OpenAI } from "openai-streams";
export const demoApi: NextApiHandler = async (
req: NextApiRequest,
res: NextApiResponse
) => {
const session = await getServerSession(req, res, authOptions);
if (!session) {
res.status(401).json({ message: "Unauthorized" });
return;
}
const { name } = JSON.parse(req.body);
if (!name) {
return res
.status(400)
.json({ message: "Did not include `name` parameter" });
}
const completionsStream = await OpenAI(
// "edits",
"completions",
{
// model: "text-davinci-edit-001",
// input: "What day of the wek is it?",
// instruction: "Fix the spelling mistakes",
model: "text-davinci-003",
prompt: `Write a nice two-sentence paragraph about a person named ${name}.\n\n`,
temperature: 1,
max_tokens: 100,
}
);
return new Response(completionsStream);
};
export default demoApi;
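One possible approach (a sketch, not tested against this exact setup): keep the Node-style handler and pump the web ReadableStream returned by OpenAI(...) into res chunk by chunk, instead of returning a Response. The NodeRes type below is a minimal stand-in for NextApiResponse so the snippet stays self-contained.

```typescript
// Minimal shape of the Node response object we need; NextApiResponse
// satisfies it. Keeping it local makes the sketch self-contained.
type NodeRes = {
  writeHead: (status: number, headers: Record<string, string>) => void;
  write: (chunk: Uint8Array | string) => void;
  end: () => void;
};

// Forward a web ReadableStream through a Node-style response so the
// client receives a live stream rather than a buffered body.
async function pipeStreamToRes(
  stream: ReadableStream<Uint8Array>,
  res: NodeRes
): Promise<void> {
  res.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" });
  const reader = stream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    res.write(value); // flush each chunk as it arrives
  }
  res.end();
}
```

With this helper, the handler above would end with await pipeStreamToRes(completionsStream, res); instead of return new Response(completionsStream); which Node-style API routes do not understand.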
Hey folks,
Great library! Thank you very much 🙏
I'm using useTextBuffer
along with <StreamingText />
in an app. One thing that surprised me is that on mobile devices the stream seems to flow much faster than on desktop. Desktop performance is not problematic, but on mobile it is extremely fast.
Has anyone else experienced this? Any ideas why the performance might differ?
I suspected it could even be CSS-related, but I haven't found any obvious cause...
Here's the relevant code — not much to it:
const {
buffer,
done: isBufferComplete,
error,
} = useTextBuffer({
url: NEXT_API_CHAT_STREAM,
throttle: 50,
options: {
method: 'POST',
},
data: messagesForServer,
});
// ...
<StreamingText buffer={buffer} />
Any insight would be much appreciated!
Not sure whether this relates more to openai-streams, but the done
flag is never true when using a chat
endpoint, even after output stops.
As an aside, it would be nice for StreamingText[Url] to have a built-in onDone callback.
export const ChatGPTCompleter: React.FC<Props> = ({url, data, onDone}: Props) => {
const {
buffer,
done,
} = useTextBuffer({
url,
data,
});
const parsedBuffer = buffer.map(parseChatGPTMessage);
const [doneTriggered, setDoneTriggered] = useState(false);
useEffect(() => {
console.log(`done: ${done}, doneTriggered: ${doneTriggered}, parsedBuffer: ${parsedBuffer.join('')}`)
if (done && !doneTriggered) {
setDoneTriggered(true);
onDone(parsedBuffer.join(''));
}
}, [done, doneTriggered, buffer, onDone]);
return <StreamingText buffer={parsedBuffer} />
}
I am trying to save the buffer to state after done becomes true, as below:
useEffect(() => {
if (done) {
setResult(buffer);
}
}, [done]);
But done seems to be stuck on true even after reset is triggered.
Lovely library. Would be happy to help if needed!
On Next.js with Node 18, I'm getting
tons of errors when I simply add a data parameter to the hook.
OK
const { buffer, refresh, cancel, done } = useTextBuffer({
url: "/ai/completions/prompt",
throttle: 100,
options: {
method: "POST",
},
});
THROWS
const { buffer, refresh, cancel, done } = useTextBuffer({
url: "/ai/completions/prompt",
throttle: 100,
data: {
genre: "genre",
keywords: "keywords",
playlist: "playlist",
},
options: {
method: "POST",
},
});
Any ideas?
ai:dev: ⨯ TypeError: Response body object should not be disturbed or locked
ai:dev: at extractBody (node:internal/deps/undici/undici:4323:17)
ai:dev: at new _Request (node:internal/deps/undici/undici:5272:48)
ai:dev: at new NextRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/web/spec-extension/request.js:33:14)
ai:dev: at NextRequestAdapter.fromNodeNextRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/web/spec-extension/adapters/next-request.js:94:16)
ai:dev: at NextRequestAdapter.fromBaseNextRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/web/spec-extension/adapters/next-request.js:70:35)
ai:dev: at doRender (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1329:73)
ai:dev: at cacheEntry.responseCache.get.routeKind (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1552:34)
ai:dev: at ResponseCache.get (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/response-cache/index.js:49:26)
ai:dev: at DevServer.renderToResponseWithComponentsImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1460:53)
ai:dev: at /Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:990:121
ai:dev: at NextTracerImpl.trace (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/lib/trace/tracer.js:104:20)
ai:dev: at DevServer.renderToResponseWithComponents (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:990:41)
ai:dev: at DevServer.renderPageComponent (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1843:35)
ai:dev: at async DevServer.renderToResponseImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1881:32)
ai:dev: at async DevServer.pipeImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:909:25)
ai:dev: at async NextNodeServer.handleCatchallRenderRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/next-server.js:266:17)
ai:dev: at async DevServer.handleRequestImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:805:17)
Server side
import { chat_completions, chat_completions_stream } from "@/lib/openai";
import { NextRequest, NextResponse } from "next/server";
export async function POST(req: NextRequest) {
const data = await req.json();
console.log("POST", data);
const stream = await chat_completions_stream();
return new NextResponse(stream);
}
// In the App Router, the runtime is selected with a named export
// rather than the pages-style `config` object:
export const runtime = "edge";
Lib
import { OpenAI as OpenAiStreams } from "openai-streams";
export async function chat_completions_stream() {
return await OpenAiStreams("chat", {
model: "gpt-4",
messages: [{ role: "user", content: "Say this is a test" }],
max_tokens: 25,
});
}
I know it's possible to put a query in the URL, and maybe that is more RESTful, but it's also a more complex solution, which, in my opinion, makes it not as good as simply being able to add a POST body when making the API call.
After installing the library, I tried to use it but found that I could not. I cannot import anything from the library because of this error:
Cannot find module 'nextjs-openai' or its corresponding type declarations.
I did pretty much everything I could think of to make this error go away, including the obvious things like restarting VS Code and deleting and reinstalling node_modules. I also double-checked this lib's package.json
to try to find the error there, but saw nothing.
This is a showstopper, as the library cannot be used at all until this error is fixed.
When the text streaming generation is done, isn't the done boolean supposed to change to true? It doesn't seem to update correctly.
I found that when using an OpenAI stream with the "chat" endpoint, we get JSON objects like this:
['{"role":"assistant"}{"content":"This"}', '{"content":" is"}{"content":" an"}', '{"content":" example"}']
So when we use useTextBuffer or StreamingTextURL, we end up with these strings instead of plain text.
I came up with a helper function in my project that parses the content out of these objects. Happy to submit a PR for this. Maybe another parameter could be added to useTextBuffer/StreamingTextURL, like this:
useTextBuffer({
url: "api/chat",
type: "chat" // defaults to "text" or "plaintext"
});
Let me know what you think.
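For reference, a sketch of such a helper (parseChatChunk is a hypothetical name, and the }{-boundary split assumes a delta's content never itself contains that character sequence):

```typescript
// Extract the `content` fields from concatenated chat-endpoint JSON
// fragments like '{"role":"assistant"}{"content":"This"}'.
function parseChatChunk(chunk: string): string {
  let text = "";
  // Split the concatenated objects apart on `}{` boundaries,
  // then JSON.parse each fragment individually.
  for (const part of chunk.split(/(?<=\})(?=\{)/)) {
    try {
      const obj = JSON.parse(part);
      if (typeof obj.content === "string") text += obj.content;
    } catch {
      // Ignore fragments that are not complete JSON objects.
    }
  }
  return text;
}
```

Applied to the sample buffer above, buffer.map(parseChatChunk).join("") yields the plain text "This is an example".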
I was using the useTextBuffer
hook in my Next.js page, and I noticed a bug that seemed important. Not only was the component rerendering in an infinite loop (verified via logging), but the endpoint was also getting hit on every rerender.
I debugged by trying the following:
- the useState usage in the hook
- removing the useTextBuffer hook, to see if the page still infinitely rerendered (it did not)
The documentation mentioned a Node <18 option, but I got this error message during installation:
[email protected]: The engine "node" is incompatible with this module. Expected version ">=18". Got "16.19.0"
I'm on Digital Ocean App Platform, which does not support Node 18. What are the options for installing this module with Node 16.x?