spellcraftai / nextjs-openai

Hooks and components for working with OpenAI streams.

Home Page: https://nextjs-openai.vercel.app

License: MIT License

Languages: TypeScript 93.47%, JavaScript 4.66%, CSS 1.86%
Topics: nextjs, openai

nextjs-openai's People

Contributors: ctjlewis, elliotsayes

nextjs-openai's Issues

How to prevent API call on page load?

Currently useTextBuffer makes the API call on page load, and I can't seem to prevent that behaviour with an option. I would like to make the API call only on a button click, once the prompt has been entered by the user. Would something like an `enabled: false` option be possible? The idea comes from react-query's useQuery options.
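
For anyone needing an interim workaround, one option (a sketch only; the component and endpoint names here are placeholders, not part of the library) is to mount the component that calls useTextBuffer only after the user submits the prompt, so the request fires on the button click rather than on page load:

  import { useState } from "react";
  import { useTextBuffer, StreamingText } from "nextjs-openai";

  // Only mounts after the user clicks Generate, so the fetch starts then.
  function PromptStream({ prompt }: { prompt: string }) {
    const { buffer } = useTextBuffer({
      url: "/api/demo", // placeholder endpoint
      options: { method: "POST" },
      data: { prompt },
    });
    return <StreamingText buffer={buffer} />;
  }

  export function PromptForm() {
    const [prompt, setPrompt] = useState("");
    const [submitted, setSubmitted] = useState<string | null>(null);
    return (
      <div>
        <input value={prompt} onChange={(e) => setPrompt(e.target.value)} />
        <button onClick={() => setSubmitted(prompt)}>Generate</button>
        {submitted !== null && <PromptStream prompt={submitted} />}
      </div>
    );
  }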

Return Types

I am using authentication similar to the following file.

https://github.com/nextauthjs/next-auth-example/blob/main/pages/api/examples/protected.ts

The issue I am having is that this uses NextApiRequest instead of NextRequest.
When I return the value using `res` instead of `return`, I cannot get it to be a live stream. Any ideas on how to return it properly in my code?

Thank you!

import { NextApiHandler, NextApiRequest, NextApiResponse } from "next";
import { getServerSession } from "next-auth/next";
import { authOptions } from "./auth/[...nextauth]";
import { OpenAI } from "openai-streams";

export const demoApi: NextApiHandler = async (
  req: NextApiRequest,
  res: NextApiResponse
) => {
  const session = await getServerSession(req, res, authOptions);

  if (!session) {
    res.status(401).json({ message: "Unauthorized" });
    return;
  }
  // console
  const { name } = await JSON.parse(req.body);
  if (!name) {
    return res
      .status(400)
      .json({ message: "Did not include `name` parameter" });
  }
  const completionsStream = await OpenAI(
    // "edits",
    "completions",
    {
      // model: "text-davinci-edit-001",
      // input: "What day of the wek is it?",
      // instruction: "Fix the spelling mistakes",
      model: "text-davinci-003",
      prompt: `Write a nice two-sentence paragraph about a person named ${name}.\n\n`,
      temperature: 1,
      max_tokens: 100,
    }
  );

  return new Response(completionsStream);
};

export default demoApi;
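
In case it helps, one possible workaround (a sketch only, not verified against this exact setup) is to forward the web ReadableStream returned by openai-streams through the Node-style `res` instead of constructing a `Response`, since the return value of a Node-style API handler is not used as the HTTP response:

  // Sketch: stream the completion through `res` in a Node-style API route.
  // Replaces `return new Response(completionsStream);` above.
  res.writeHead(200, { "Content-Type": "text/plain; charset=utf-8" });
  const reader = completionsStream.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    res.write(Buffer.from(value));
  }
  res.end();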

Stream performance is different on mobile vs desktop

Hey folks,

Great library! Thank you very much 🙏

I'm using useTextBuffer along with <StreamingText /> in an app. One thing that has surprised me is that on mobile devices the stream seems to flow much faster than on desktop. Desktop performance is not problematic, but the mobile performance is extremely fast.

Has anyone else experienced this? Any ideas about why the performance might differ?

I suspected it could even be CSS related, but haven't been able to find any obvious reasons...

Here's the relevant code — not much to it:

const {
    buffer,
    done: isBufferComplete,
    error,
  } = useTextBuffer({
    url: NEXT_API_CHAT_STREAM,
    throttle: 50,
    options: {
      method: 'POST',
    },
    data: messagesForServer,
  });
  
  ....
  
  <StreamingText buffer={buffer} />

Any insight would be much appreciated!

Chat endpoint never sets `done` to true

Not sure whether this relates more to openai-streams, but the `done` flag is never set to true when using a chat endpoint, even after the output stops.

As an aside, it would be nice for StreamingText[Url] to have a built-in onDone callback.

export const ChatGPTCompleter: React.FC<Props> = ({url, data, onDone}: Props) => {  
  const { 
    buffer,
    done,
  } = useTextBuffer({
    url,
    data,
  });

  const parsedBuffer = buffer.map(parseChatGPTMessage);

  const [doneTriggered, setDoneTriggered] = useState(false);
  useEffect(() => {
    console.log(`done: ${done}, doneTriggered: ${doneTriggered}, parsedBuffer: ${parsedBuffer.join('')}`)
    if (done && !doneTriggered) {
      setDoneTriggered(true);
      onDone(parsedBuffer.join(''));
    }
  }, [done, doneTriggered, buffer, onDone]);

  return <StreamingText buffer={parsedBuffer} />
}

`done` doesn't seem to change from true after reset call

I am trying to save the buffer to state after `done` is true, as below:

  useEffect(() => {
    if (done) {
      setResult(buffer);
    }
  }, [done]);

But `done` seems to be stuck on true even after reset is triggered.
Lovely library. Would be happy to help if needed!
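
For what it's worth, a pattern like this might help isolate things (a sketch only; it assumes the refresh function returned by the hook is what triggers the reset here, and the endpoint is a placeholder):

  import { useEffect, useState } from "react";
  import { useTextBuffer } from "nextjs-openai";

  function SavedResult() {
    const { buffer, done, refresh } = useTextBuffer({ url: "/api/demo" });
    const [result, setResult] = useState<string[] | null>(null);

    // Capture the finished buffer once the hook reports completion.
    useEffect(() => {
      if (done) {
        setResult(buffer);
      }
    }, [done, buffer]);

    const handleRefresh = () => {
      setResult(null); // clear the local copy before requesting a new stream
      refresh();
    };

    return <button onClick={handleRefresh}>Refresh</button>;
  }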

TypeError: Response body object should not be disturbed or locked

On NextJS 18, I'm getting tons of errors as soon as I add the data parameter.

OK

  const { buffer, refresh, cancel, done } = useTextBuffer({
    url: "/ai/completions/prompt",
    throttle: 100,
    options: {
      method: "POST",
    },
  });

THROWS

  const { buffer, refresh, cancel, done } = useTextBuffer({
    url: "/ai/completions/prompt",
    throttle: 100,
    data: {
      genre: "genre",
      keywords: "keywords",
      playlist: "playlist",
    },
    options: {
      method: "POST",
    },
  });

Any Ideas?

ai:dev:  ⨯ TypeError: Response body object should not be disturbed or locked
ai:dev:     at extractBody (node:internal/deps/undici/undici:4323:17)
ai:dev:     at new _Request (node:internal/deps/undici/undici:5272:48)
ai:dev:     at new NextRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/web/spec-extension/request.js:33:14)
ai:dev:     at NextRequestAdapter.fromNodeNextRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/web/spec-extension/adapters/next-request.js:94:16)
ai:dev:     at NextRequestAdapter.fromBaseNextRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/web/spec-extension/adapters/next-request.js:70:35)
ai:dev:     at doRender (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1329:73)
ai:dev:     at cacheEntry.responseCache.get.routeKind (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1552:34)
ai:dev:     at ResponseCache.get (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/response-cache/index.js:49:26)
ai:dev:     at DevServer.renderToResponseWithComponentsImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1460:53)
ai:dev:     at /Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:990:121
ai:dev:     at NextTracerImpl.trace (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/lib/trace/tracer.js:104:20)
ai:dev:     at DevServer.renderToResponseWithComponents (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:990:41)
ai:dev:     at DevServer.renderPageComponent (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1843:35)
ai:dev:     at async DevServer.renderToResponseImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:1881:32)
ai:dev:     at async DevServer.pipeImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:909:25)
ai:dev:     at async NextNodeServer.handleCatchallRenderRequest (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/next-server.js:266:17)
ai:dev:     at async DevServer.handleRequestImpl (/Users/studio/Documents/grida-enterprise/project-lemon/node_modules/.pnpm/[email protected][email protected][email protected]/node_modules/next/dist/server/base-server.js:805:17)

Server side

import { chat_completions, chat_completions_stream } from "@/lib/openai";
import { NextRequest, NextResponse } from "next/server";

export async function POST(req: NextRequest) {
  const data = await req.json();
  console.log("POST", data);
  const stream = await chat_completions_stream();
  return new NextResponse(stream);
}

export const config = {
  runtime: "edge",
};

Lib

import { OpenAI as OpenAiStreams } from "openai-streams";

export async function chat_completions_stream() {
  return await OpenAiStreams("chat", {
    model: "gpt-4",
    messages: [{ role: "user", content: "Say this is a test" }],
    max_tokens: 25,
  });
}

no way to "POST" a body using <StreamingText>

I know it's possible to put a query in the URL, and maybe that is more RESTful, but it's also a more complex solution, which, in my opinion, makes it not as good as simply being able to add a POST body when making the API call.
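
In the meantime, a possible workaround (a sketch based on the useTextBuffer examples in the other issues here; the endpoint and payload are placeholders) is to use the hook, which does accept a data payload with a POST method, and pass its buffer to <StreamingText>:

  import { useTextBuffer, StreamingText } from "nextjs-openai";

  export function PostedStream() {
    const { buffer } = useTextBuffer({
      url: "/api/demo",          // placeholder endpoint
      options: { method: "POST" },
      data: { prompt: "Hello" }, // placeholder body
    });
    return <StreamingText buffer={buffer} />;
  }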

Critical error: Cannot find module...

After installing the library, I went to use it but found that I could not. I cannot import anything from the library because of this error:

Cannot find module 'nextjs-openai' or its corresponding type declarations.

I did pretty much everything I could think of to make this error go away, including the obvious things like restarting VS Code, deleting and reinstalling node_modules, etc. I also double-checked this lib's package.json to try to find the error there, but saw nothing.

This is a showstopper, as the library cannot be used at all until this error is fixed.

`useTextBuffer` Issue with Chat Endpoint

I found that when using an OpenAI stream with the "chat" endpoint, we get JSON objects like this:

['{"role":"assistant"}{"content":"This"}', '{"content":" is"}{"content":" an"}', '{"content":" example"}']

So when we use useTextBuffer or StreamingTextURL, we end up with these strings instead of plain text.

I came up with a helper function in my project that parses the content out of these objects (a rough sketch is included below). Happy to submit a PR for this. Maybe one could add another parameter to useTextBuffer/StreamingTextURL, like this:

useTextBuffer({ 
  url: "api/chat",
  type: "chat" // defaults to "text" or "plaintext"
});
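
A rough sketch of that helper (the names here are mine, not part of the library; it naively splits the concatenated JSON objects and joins their content fields):

  // Splits chunks like {"role":"assistant"}{"content":"This"} and keeps
  // only the `content` fields. Naive: assumes no braces inside the text.
  function parseChatChunk(chunk: string): string {
    const objects = chunk.match(/\{[^}]*\}/g) ?? [];
    return objects
      .map((raw) => {
        try {
          const parsed = JSON.parse(raw) as { content?: string };
          return parsed.content ?? "";
        } catch {
          return "";
        }
      })
      .join("");
  }

  // Usage with the hook's buffer:
  // const text = buffer.map(parseChatChunk).join("");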

Let me know what you think.

the `useTextBuffer` hook is causing an infinite loop

I was using useTextBuffer in my Next.js page, and I noticed a bug that seemed important. Not only was the component re-rendering in an infinite loop (tested via logging), but the endpoint was also getting hit every time it re-rendered.

I debugged by trying the following:

  • removed all references to any variables created via useState in the hook
  • tested with the most basic endpoint possible, and by copying the endpoint from the example
  • commented out the useTextBuffer hook to see if it still re-rendered infinitely (it did not)
  • removed all references to the buffer in the code
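
One thing worth checking (an assumption on my part, not a confirmed cause): if the data payload is passed as an inline object literal, it gets a new identity on every render, which can retrigger the hook's fetch and the render loop. Keeping the payload referentially stable rules that out:

  import { useMemo } from "react";
  import { useTextBuffer } from "nextjs-openai";

  // Created once at module scope, so its identity never changes.
  const options = { method: "POST" };

  function Example({ prompt }: { prompt: string }) {
    // Memoized so the object is only recreated when `prompt` changes.
    const data = useMemo(() => ({ prompt }), [prompt]);

    const { buffer } = useTextBuffer({
      url: "/api/demo", // placeholder endpoint
      options,
      data,
    });

    return <>{buffer.join("")}</>;
  }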

Support Node <18

The documentation mentions a Node <18 option, but I got this error message during installation:

[email protected]: The engine "node" is incompatible with this module. Expected version ">=18". Got "16.19.0"

I'm on the DigitalOcean App Platform, and it does not support Node 18. What are the options for installing this module with Node 16.x?
